
Make a Love-Confession Tool in Ten Seconds with Python! 520 May Be Over, but Qixi Is Still Ahead!


On 520 this editor got fed a big wave of "dog food" (couples showing off everywhere): the rich went all-out romantic, and the broke still found clever ways to play along! So today I've decided to teach everyone how to build a confession tool; even if you couldn't use it this time, you'll certainly get to use it next time!


Today I'll show you how to use Python to make a special gift for your significant other.


Of course, if you're still single, you can also use it as a confession tool and declare your love to the person you like.

Those who can already program in Python won't need me to explain what to do. But what about those who can't code and still want to give it a try?


Let's start with a beginner version. This one is fairly simple: drawing a heart with Python's turtle module.

Here is the code:

import turtle
import time

# Draw the rounded top of one heart lobe
def LittleHeart():
    for i in range(200):
        turtle.right(1)
        turtle.forward(2)

# Ask for the message to write; defaults to "I Love you"
love = input('Please enter a sentence of love, otherwise the default is "I Love you":\n')
# Ask for a pen name; if left empty, no signature is written
me = input('Please enter a pen name, or leave it empty to skip the signature:\n')
if love == '':
    love = 'I Love you'

# Window size
turtle.setup(width=900, height=500)
# Pen color and fill color
turtle.color('red', 'pink')
# Pen thickness
turtle.pensize(3)
# Drawing speed
turtle.speed(1)
# Lift the pen
turtle.up()
# Hide the turtle cursor
turtle.hideturtle()
# Move to the starting point; (0, 0) is the center of the window
turtle.goto(0, -180)
turtle.showturtle()

# Draw the outline, filling as we go
turtle.down()
turtle.begin_fill()
turtle.left(140)
turtle.forward(224)
# Draw the top of the left lobe
LittleHeart()
# Draw the top of the right lobe
turtle.left(120)
LittleHeart()
# Draw the lower edge back down
turtle.forward(224)
turtle.end_fill()

turtle.pensize(5)
turtle.up()
turtle.hideturtle()

# Write the message inside the heart, first pass
turtle.goto(0, 0)
turtle.showturtle()
turtle.color('#CD5C5C', 'pink')
# font sets the typeface (any font installed on your machine works); align sets where the text starts
turtle.write(love, font=('gungsuh', 30), align="center")
turtle.up()
turtle.hideturtle()
time.sleep(2)

# Write the message a second time, in red
turtle.goto(0, 0)
turtle.showturtle()
turtle.color('red', 'pink')
turtle.write(love, font=('gungsuh', 30), align="center")
turtle.up()
turtle.hideturtle()

# Write the signature
if me != '':
    turtle.color('black', 'pink')
    time.sleep(2)
    turtle.goto(180, -180)
    turtle.showturtle()
    turtle.write(me, font=('gungsuh', 20), align="center", move=True)

# Close the window with a click
window = turtle.Screen()
window.exitonclick()

The final result of this code is shown below: a fairly basic, simple heart, nothing difficult. You can also extend the code to make it more impressive; the next version, for example, grows a whole tree that blossoms with little hearts.


import turtle
import random

def love(x, y):  # Draw a heart at (x, y)
    lv = turtle.Turtle()
    lv.hideturtle()
    lv.up()
    lv.goto(x, y)  # Move to (x, y)

    def curvemove():  # Draw an arc
        for i in range(20):
            lv.right(10)
            lv.forward(2)

    lv.color('red', 'pink')
    lv.speed(10000000)
    lv.pensize(1)
    # Start drawing the heart
    lv.down()
    lv.begin_fill()
    lv.left(140)
    lv.forward(22)
    curvemove()
    lv.left(120)
    curvemove()
    lv.forward(22)
    lv.write("WM", font=("Arial", 12, "normal"), align="center")  # Write your loved one's initials
    lv.left(140)  # Restore the heading after drawing
    lv.end_fill()

def tree(branchLen, t):
    if branchLen > 5:  # End the recursion when too little branch is left
        if branchLen < 20:  # Short remaining branches turn green and get a heart at the tip
            t.color("green")
            t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
            t.down()
            t.forward(branchLen)
            love(t.xcor(), t.ycor())  # Pass the turtle's current coordinates
            t.up()
            t.backward(branchLen)
            t.color("brown")
            return
        t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
        t.down()
        t.forward(branchLen)
        # Recurse into two sub-branches
        ang = random.uniform(15, 45)
        t.right(ang)
        tree(branchLen - random.uniform(12, 16), t)  # Randomly shorten the branch
        t.left(2 * ang)
        tree(branchLen - random.uniform(12, 16), t)  # Randomly shorten the branch
        t.right(ang)
        t.up()
        t.backward(branchLen)

myWin = turtle.Screen()
t = turtle.Turtle()
t.hideturtle()
t.speed(1000)
t.left(90)
t.up()
t.backward(200)
t.down()
t.color("brown")
t.pensize(32)
t.forward(60)
tree(100, t)
myWin.exitonclick()


Now for the first advanced gift: overlaying two pictures.

First pick two pictures. You could use a portrait of your beloved for one and a landscape (or anything else) for the other; that part comes down to your taste. Here I chose two landscapes.
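The post jumps straight to the sizing step, so here is a minimal loading sketch first, assuming Pillow is installed; the file names img1.jpg and img2.jpg and the 50/50 weights are placeholders, but img1, img2, percent1 and percent2 are exactly the names the snippets below rely on:

from PIL import Image

# Placeholder file names -- swap in your own pictures
img1 = Image.open('img1.jpg').convert('RGB')
img2 = Image.open('img2.jpg').convert('RGB')

# Blend weights used by the rendering loop below (assumed values; they should sum to 1)
percent1 = 0.5
percent2 = 0.5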


Next, take the width and height of the pictures:

# Use the smaller width and height of the two pictures
width = min(img1.size[0], img2.size[0])
height = min(img1.size[1], img2.size[1])
img_new = Image.new('RGB', (width, height))

Then render the new picture:

# Blend the two pictures pixel by pixel
for x in range(width):
    for y in range(height):
        r1, g1, b1 = img1.getpixel((x, y))
        r2, g2, b2 = img2.getpixel((x, y))
        r = int(percent1 * r1 + percent2 * r2)
        g = int(percent1 * g1 + percent2 * g2)
        b = int(percent1 * b1 + percent2 * b2)
        img_new.putpixel((x, y), (r, g, b))

Finally, just save it!

# Save the blended picture
img_new.save('new.jpg')
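As a quick sanity check on the arithmetic: each output channel is just a weighted average, so with percent1 = 0.5 and percent2 = 0.5 a pixel that is (200, 0, 0) in one picture and (0, 100, 0) in the other comes out as (100, 50, 0). Push percent1 toward 1.0 and the result leans toward the first picture.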


The second advanced gift is image style rendering:

Use Python's deep-learning packages to train the computer to imitate the style of a world-famous painting, then apply that style to another picture!

There's no one-click mini-program for this one, because it drags in several hundred dependency packages.

It's a bit more demanding technically. First, install the required modules; pip handles each in one line:

pip3 install keras

pip3 install h5py

pip3 install tensorflow

TensorFlow may download slowly if you can't get around the firewall; installing from source is also an option, your call. (The original post notes that at the time TensorFlow could only be installed on Python 3.5, so download a 3.5 interpreter first.)

Then download the VGG16 model weights. Save the code below as a .py file and put it in the same folder as the pictures you want to render.


Here is the code (adapted from the code of the Zhihu expert 楊航鋒):

from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
from scipy.misc import imsave  # requires an older SciPy (imsave was removed in SciPy 1.2)
import numpy as np
import time
import argparse
from keras.applications import vgg16
from keras import backend as K
from scipy.optimize import fmin_l_bfgs_b

parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str, help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str, help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str, help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=15, required=False, help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False, help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False, help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False, help='Total Variation weight.')

args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter

# Weights of the different loss components
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight

# Dimensions of the generated picture
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)

# Util function to open, resize and format a picture into the appropriate tensor
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_nrows, img_ncols))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img

# Util function to convert a tensor back into a valid image
def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, img_nrows, img_ncols))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_nrows, img_ncols, 3))
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR' -> 'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

# Get tensor representations of our images
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))

# This placeholder will contain our generated image
if K.image_data_format() == 'channels_first':
    combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
    combination_image = K.placeholder((1, img_nrows, img_ncols, 3))

# Combine the 3 images into a single Keras tensor
input_tensor = K.concatenate([base_image,
                              style_reference_image,
                              combination_image], axis=0)

# Build the VGG16 network with our 3 images as input;
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=input_tensor,
                    weights='imagenet', include_top=False)
print('Model loaded.')

# Get the symbolic outputs of each "key" layer (they have unique names)
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])

# To compute the neural style loss, we first need to define 4 util functions

# The gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram

# The "style loss" is designed to maintain the style of the reference image
# in the generated image. It is based on the gram matrices (which capture style)
# of feature maps from the style reference image and from the generated image.
def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))

# An auxiliary loss function designed to maintain
# the "content" of the base image in the generated image
def content_loss(base, combination):
    return K.sum(K.square(combination - base))

# The 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent
def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, 1:, :img_ncols - 1])
        b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, :img_nrows - 1, 1:])
    else:
        a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
        b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

# Combine these loss functions into a single scalar
loss = K.variable(0.)
layer_features = outputs_dict['block4_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(base_image_features,
                                      combination_features)

feature_layers = ['block1_conv1', 'block2_conv1',
                  'block3_conv1', 'block4_conv1',
                  'block5_conv1']
for layer_name in feature_layers:
    layer_features = outputs_dict[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features)
    loss += (style_weight / len(feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)

# Get the gradients of the generated image w.r.t. the loss
grads = K.gradients(loss, combination_image)

outputs = [loss]
if isinstance(grads, (list, tuple)):
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([combination_image], outputs)

def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, img_nrows, img_ncols))
    else:
        x = x.reshape((1, img_nrows, img_ncols, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

"""
This Evaluator class makes it possible to compute loss and gradients
in one pass while retrieving them via two separate functions,
"loss" and "grads". This is done because scipy.optimize
requires separate functions for loss and gradients,
but computing them separately would be inefficient.
"""
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grads_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# Run scipy-based optimization (L-BFGS) over the pixels of the
# generated image so as to minimize the neural style loss
if K.image_data_format() == 'channels_first':
    x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
    x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.

for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # Save the current generated image
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))
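Assuming you saved the script as neural_style.py (the file name is your choice, and base.jpg/style.jpg here are placeholders for your own pictures), a typical run passes the content picture, the style picture, and an output prefix, matching the argparse arguments defined at the top of the script:

python3 neural_style.py base.jpg style.jpg result --iter 15

Each iteration then writes result_at_iteration_0.png, result_at_iteration_1.png, and so on into the current folder.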


You'll watch the output being rendered progressively, iteration by iteration:


Although I have a wife, and my wife is especially beautiful, to avoid hurting anyone's feelings I'll demonstrate by rendering Wanmen's Xinqidian Jiayuan building in the style of a famous Monet painting, so you can see the result concretely.


Honestly, as long as it's a gift you made with your own heart, the person you like is bound to be touched.


May everyone longing for love find the one their heart belongs to on 520.


Reposted from: Wanmen (萬門)

Feel free to follow my blog or public account: https://home.cnblogs.com/u/Python1234/ (Python學習交流, a Python learning exchange)

You're also welcome to join my thousand-member discussion and Q&A group: 125240963
