Emre Çetin (emrectn)
#!/bin/bash
# Timestamp for output file names, plus a human-readable date.
TODAY=$(date +"%Y-%m-%d_%H_%M_%S")
DATE=$(date)

display_help() {
    echo
    echo "  -h, --help            Help"
    echo "  -o, --output_dir      Set the directory to write output files to"
    echo "  -c, --container_name  Watch a specific container by name; if omitted, all containers are tracked"
}
#!/bin/bash
# Download a file from Google Drive, passing along the confirmation token
# that Drive stores in a cookie for files too large to virus-scan.
# Usage: ./script.sh <fileid> <filename>
fileid="$1"
filename="$2"
curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}" > /dev/null
curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=$(awk '/download/ {print $NF}' ./cookie)&id=${fileid}" -o "${filename}"
rm ./cookie
best, best_loss = run_style_transfer(content_path,
                                     style_path,
                                     verbose=True,
                                     show_intermediates=True)
def get_feature_representations(model, content_path, style_path):
  """Helper function to compute our content and style feature representations.

  This function will simply load and preprocess both the content and style
  images from their path. Then it will feed them through the network to obtain
  the outputs of the intermediate layers.

  Arguments:
    model: The model that we are using.
    content_path: The path to the content image.
    style_path: The path to the style image.
  """
  content_image = load_and_process_img(content_path)
  style_image = load_and_process_img(style_path)
  style_features = [layer[0] for layer in model(style_image)[:num_style_layers]]
  content_features = [layer[0] for layer in model(content_image)[num_style_layers:]]
  return style_features, content_features
def get_content_loss(base_content, target):
  return tf.reduce_mean(tf.square(base_content - target))
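As a quick sanity check, the content loss above is just the mean squared error between two feature maps. Here is the same computation sketched in plain NumPy (an illustration only; the actual script uses `tf.reduce_mean` and `tf.square` on tensors):

```python
import numpy as np

# Content loss = mean squared error between the generated image's features
# and the content image's features at the chosen layer.
base_content = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.0, 2.0], [2.0, 2.0]])
content_loss = np.mean(np.square(base_content - target))
print(content_loss)  # (0 + 0 + 1 + 4) / 4 = 1.25
```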
def run_style_transfer(content_path,
                       style_path,
                       num_iterations=1000,
                       content_weight=1e3,
                       style_weight=1e-2):
  display_num = 100
  # We don't need to (or want to) train any layers of our model, so we set
  # their trainability to false.
  model = get_model()
  for layer in model.layers:
    layer.trainable = False
def compute_grads(cfg):
  with tf.GradientTape() as tape:
    all_loss = compute_loss(**cfg)
  # Compute gradients wrt input image
  total_loss = all_loss[0]
  return tape.gradient(total_loss, cfg['init_image']), all_loss
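The key idea in `compute_grads` is that the gradient is taken with respect to the *image itself*, not any model weights. The optimization loop then nudges the pixels downhill. A toy NumPy sketch of that loop (with a hand-derived gradient of a simple quadratic loss standing in for the tape):

```python
import numpy as np

# Toy version of the optimization: gradient descent directly on "pixels"
# toward a target, mirroring how the real loop applies d(loss)/d(init_image).
target = np.array([0.2, 0.8])
image = np.zeros(2)
lr = 0.1
for _ in range(200):
    grad = 2.0 * (image - target)  # gradient of sum((image - target)^2)
    image -= lr * grad
print(np.round(image, 3))  # converges to the target values
```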
def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):
  """This function will compute the total loss.

  Arguments:
    model: The model that will give us access to the intermediate layers.
    loss_weights: The weights of each contribution of each loss function
      (style weight and content weight).
    init_image: Our initial base image. This image is what we are updating with
      our optimization process. We apply the gradients wrt the loss we are
      calculating to this image.
    gram_style_features: Precomputed gram matrices of the target style layers.
    content_features: Precomputed target outputs of the content layers.
  """
  style_weight, content_weight = loss_weights
  outputs = model(init_image)
  style_score = sum(get_style_loss(out[0], target) for out, target
                    in zip(outputs[:num_style_layers], gram_style_features))
  content_score = sum(get_content_loss(out[0], target) for out, target
                      in zip(outputs[num_style_layers:], content_features))
  loss = style_weight * style_score + content_weight * content_score
  return loss, style_score, content_score
def gram_matrix(input_tensor):
  # Flatten the spatial dimensions so each row is one position's channel vector
  channels = int(input_tensor.shape[-1])
  a = tf.reshape(input_tensor, [-1, channels])
  n = tf.shape(a)[0]
  gram = tf.matmul(a, a, transpose_a=True)
  return gram / tf.cast(n, tf.float32)
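For intuition, the gram matrix is the channel-by-channel inner product of the flattened feature map, normalized by the number of spatial positions. The same computation in NumPy (purely illustrative; shapes are made up for the example):

```python
import numpy as np

# A fake (h=2, w=3, c=2) feature map.
feat = np.arange(12, dtype=np.float32).reshape(2, 3, 2)
a = feat.reshape(-1, 2)        # (h*w, c) = (6, 2): one row per spatial position
gram = a.T @ a / a.shape[0]    # (c, c) channel-correlation matrix
print(gram.shape)              # (2, 2)
```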
def get_style_loss(base_style, gram_target):
  """Expects two images of dimension h, w, c"""
  gram_style = gram_matrix(base_style)
  return tf.reduce_mean(tf.square(gram_style - gram_target))
def get_model():
  """Creates our model with access to intermediate layers.

  This function will load the VGG19 model and access the intermediate layers.
  These layers will then be used to create a new model that will take an input
  image and return the outputs from these intermediate layers of the VGG model.

  Returns:
    A keras model that takes image inputs and outputs the style and
    content intermediate layers.
  """
  vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')
  vgg.trainable = False
  style_outputs = [vgg.get_layer(name).output for name in style_layers]
  content_outputs = [vgg.get_layer(name).output for name in content_layers]
  return tf.keras.Model(vgg.input, style_outputs + content_outputs)
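The snippets above reference `style_layers`, `content_layers`, and `num_style_layers` globals whose definitions were cut off from this excerpt. The TensorFlow neural style transfer tutorial these functions mirror defines them as follows (an assumption here, since the originals aren't shown):

```python
# Content layer where we will pull our feature maps
content_layers = ['block5_conv2']

# Style layers we are interested in
style_layers = ['block1_conv1',
                'block2_conv1',
                'block3_conv1',
                'block4_conv1',
                'block5_conv1']

num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
```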