
Deep Learning Notes — The TensorFlow Deep Learning Framework (Part 1)

I. Learning Resources:

1. Introduction: https://www.tensorflow.org/versions/r0.12/get_started/index.html
2. Tutorials: https://www.tensorflow.org/versions/r0.12/tutorials/index.html
3. API: https://www.tensorflow.org/versions/r0.12/api_docs/python/index.html

II. Introduction:

Let’s get you up and running with TensorFlow!

But before we even get started, let’s peek at what TensorFlow code looks like in the Python API, so you have a sense of where we’re headed.

Here’s a little Python program that makes up some data in two dimensions, and then fits a line to it.

import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know W should be 0.1 and b 0.3, but TensorFlow will figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables. We will run this first.
init = tf.global_variables_initializer()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the line.
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

# Learns best fit is W: [0.1], b: [0.3]

The first part of this code builds the data flow graph. TensorFlow does not actually run any computation until the session is created and the run function is called.
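You can see this deferred execution directly. Below is a minimal sketch (not part of the original tutorial) that prints a node before and after the session runs it:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b  # adds a node to the graph; nothing is computed yet

print(c)            # prints a symbolic Tensor description, not 6.0

sess = tf.Session()
print(sess.run(c))  # ==> 6.0; the computation happens only here
sess.close()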

To whet your appetite further, we suggest you check out what a classical machine learning problem looks like in TensorFlow. In the land of neural networks the most “classic” classical problem is the MNIST handwritten digit classification. We offer two introductions here, one for machine learning newbies, and one for pros. If you’ve already trained dozens of MNIST models in other software packages, please take the red pill. If you’ve never even heard of MNIST, definitely take the blue pill. If you’re somewhere in between, we suggest skimming blue, then red.
Red pill link: https://www.tensorflow.org/versions/r0.12/tutorials/mnist/pros/index.html
Blue pill link: https://www.tensorflow.org/versions/r0.12/tutorials/mnist/beginners/index.html

If you’re already sure you want to learn and install TensorFlow you can skip these and charge ahead. Don’t worry, you’ll still get to see MNIST – we’ll also use MNIST as an example in our technical tutorial where we elaborate on TensorFlow features.

III. Basic Usage:

To use TensorFlow, you need to understand how TensorFlow:
1. Represents computations as graphs.
2. Executes graphs in the context of Sessions.
3. Represents data as tensors.
4. Maintains state with Variables.
5. Uses feeds and fetches to get data into and out of arbitrary operations.
Overview:
TensorFlow is a programming system in which you represent computations as graphs. Nodes in the graph are called ops (short for operations). An op takes zero or more Tensors, performs some computation, and produces zero or more Tensors. In TensorFlow terminology, a Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels].
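For instance, a mini-batch of ten 28x28 RGB images could be built as follows (an illustrative sketch with made-up dimensions, not from the original text):

import numpy as np
import tensorflow as tf

# Dimensions are [batch, height, width, channels]:
# 10 images, each 28 pixels high, 28 wide, with 3 color channels.
images = np.zeros([10, 28, 28, 3], dtype=np.float32)
batch = tf.constant(images)
print(batch.get_shape())  # ==> (10, 28, 28, 3)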

A TensorFlow graph is a description of computations. To compute anything, a graph must be launched in a Session. A Session places the graph ops onto Devices, such as CPUs or GPUs, and provides methods to execute them. These methods return tensors produced by ops as numpy ndarray objects in Python, and as tensorflow::Tensor instances in C and C++.

The computation graph:
TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.

For example, it is common to create a graph to represent and train a neural network in the construction phase, and then repeatedly execute a set of training ops in the graph in the execution phase.

TensorFlow can be used from C, C++, and Python programs. It is presently much easier to use the Python library to assemble graphs, as it provides a large set of helper functions not available in the C and C++ libraries.

The session libraries have equivalent functionalities for the three languages.

Building the graph:
To build a graph start with ops that do not need any input (source ops), such as Constant, and pass their output to other ops that do computation.

The ops constructors in the Python library return objects that stand for the output of the constructed ops. You can pass these to other ops constructors to use as inputs.

The TensorFlow Python library has a default graph to which ops constructors add nodes. The default graph is sufficient for many applications. See the Graph class documentation for how to explicitly manage multiple graphs.
Graph class documentation: https://www.tensorflow.org/versions/r0.12/api_docs/python/framework.html#Graph
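As a brief sketch of explicit graph management (using the tf.Graph class from the documentation linked above; this example is not in the original text):

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # Ops created inside this block are added to 'g' rather than to the default graph.
    c = tf.constant(30.0)

assert c.graph is g

with tf.Session(graph=g) as sess:
    print(sess.run(c))  # ==> 30.0

The example below, by contrast, simply adds its nodes to the default graph: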

import tensorflow as tf

# Create a Constant op that produces a 1x2 matrix. The op is added as a node
# to the default graph.
# The value returned by the constructor represents the output of the Constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another Constant op that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix multiplication.
product = tf.matmul(matrix1, matrix2)

The default graph now has three nodes: two constant() ops and one matmul() op. To actually multiply the matrices, and get the result of the multiplication, you must launch the graph in a session.

Launching the graph in a session:
Launching follows construction. To launch a graph, create a Session object. Without arguments the session constructor launches the default graph.

# Launch the default graph.
sess = tf.Session()

# To run the matmul op we call the session 'run()' method, passing 'product',
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.

# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.

# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.

result = sess.run(product)
print(result)
# ==> [[ 12.]]

# Close the Session when we're done.
sess.close()

Sessions should be closed to release resources. You can also enter a Session with a “with” block. The Session closes automatically at the end of the with block.

with tf.Session() as sess:
    result = sess.run([product])
    print(result)

The TensorFlow implementation translates the graph definition into executable operations distributed across available compute resources, such as the CPU or one of your computer’s GPU cards. In general you do not have to specify CPUs or GPUs explicitly. TensorFlow uses your first GPU, if you have one, for as many operations as possible.

If you have more than one GPU available on your machine, to use a GPU beyond the first you must assign ops to it explicitly. Use with…Device statements to specify which CPU or GPU to use for operations:

with tf.Session() as sess:
    with tf.device("/gpu:1"):
        matrix1 = tf.constant([[3., 3.]])
        matrix2 = tf.constant([[2.], [2.]])
        product = tf.matmul(matrix1, matrix2)

Devices are specified with strings. The currently supported devices are:

"/cpu:0": The CPU of your machine.
"/gpu:0": The first GPU of your machine, if you have one.
"/gpu:1": The second GPU of your machine, etc.

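If you want to verify where ops were actually placed, one option is the session's log_device_placement configuration flag (a sketch; 'product' is assumed to be the matmul op from the earlier example):

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # The device assignment of each op is logged when the graph runs.
    print(sess.run(product))
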
Launching the graph in a distributed session:
To create a TensorFlow cluster, launch a TensorFlow server on each of the machines in the cluster. When you instantiate a Session in your client, you pass it the network location of one of the machines in the cluster:

with tf.Session("grpc://example.org:2222") as sess:
    # Calls to sess.run(...) will be executed on the cluster.
    ...

This machine becomes the master for the session. The master distributes the graph across other machines in the cluster (workers), much as the local implementation distributes the graph across available compute resources within a machine.

You can use “with tf.device():” statements to directly specify workers for particular parts of the graph:

with tf.device("/job:ps/task:0"):
    weights = tf.Variable(...)
    biases = tf.Variable(...)

See the Distributed TensorFlow How To for more information about distributed sessions and clusters.
Distributed TensorFlow How To: https://www.tensorflow.org/versions/r0.12/how_tos/distributed/index.html

Interactive Usage:
The Python examples in the documentation launch the graph with a Session and use the Session.run() method to execute operations.

For ease of use in interactive Python environments, such as IPython you can instead use the InteractiveSession class, and the Tensor.eval() and Operation.run() methods. This avoids having to keep a variable holding the session.

# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()

# Add an op to subtract 'a' from 'x'. Run it and print the result.
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]

# Close the Session when we're done.
sess.close()

Tensors:
TensorFlow programs use a tensor data structure to represent all data – only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn more about how TensorFlow handles these concepts, see the Rank, Shape, and Type reference.
Rank, Shape, and Type reference: https://www.tensorflow.org/versions/r0.12/resources/dims_types.html
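For example (a small sketch, not from the original text), rank, shape, and type can be inspected directly on the Python objects:

import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2, shape (2, 2)

print(matrix.get_shape())  # ==> (2, 2)
print(matrix.dtype)        # ==> <dtype: 'float32'>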

Variables:

# Create a Variable that will be initialized to the scalar value 0.
state = tf.Variable(0, name="counter")

# Create an Op to add one to 'state'.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# Variables must be initialized by running an 'init' Op after having launched
# the graph. We first have to add the 'init' Op to the graph.
init_op = tf.global_variables_initializer()

# Launch the graph and run the ops.
with tf.Session() as sess:
    # Run the 'init' op.
    sess.run(init_op)
    # Print the initial value of 'state'.
    print(sess.run(state))
    # Run the op that updates 'state' and print 'state'.
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))

# output:
# 0
# 1
# 2
# 3

The assign() operation in this code is a part of the expression graph just like the add() operation, so it does not actually perform the assignment until run() executes the expression.

You typically represent the parameters of a statistical model as a set of Variables. For example, you would store the weights for a neural network as a tensor in a Variable. During training you update this tensor by running a training graph repeatedly.
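As a minimal sketch (with hypothetical layer sizes, not from the original text), the weights of a fully connected layer might be stored like this:

# Weights and biases for a layer mapping 784 inputs to 10 outputs.
weights = tf.Variable(tf.random_uniform([784, 10], -1.0, 1.0))
biases = tf.Variable(tf.zeros([10]))
# Each sess.run(train) on a training op, as in the very first example,
# rewrites the tensors held by these Variables in place.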

Fetches:
To fetch the outputs of operations, execute the graph with a run() call on the Session object and pass in the tensors to retrieve. In the previous example we fetched the single node state, but you can also fetch multiple tensors:

input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])
intermed = tf.add(input2, input3)
mul = tf.multiply(input1, intermed)

with tf.Session() as sess:
    result = sess.run([mul, intermed])
    print(result)

# output:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]

All the ops needed to produce the values of the requested tensors are run once (not once per requested tensor).

Feeds:
The examples above introduce tensors into the computation graph by storing them in Constants and Variables. TensorFlow also provides a feed mechanism for patching a tensor directly into any operation in the graph.

A feed temporarily replaces the output of an operation with a tensor value. You supply feed data as an argument to a run() call. The feed is only used for the run call to which it is passed. The most common use case involves designating specific operations to be “feed” operations by using tf.placeholder() to create them:

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))

# output:
# [array([ 14.], dtype=float32)]
