TensorFlow Study Notes [1]: Getting Started With TensorFlow

Tutorial page: https://www.tensorflow.org/get_started/get_started

This guide gets you started programming in TensorFlow. Before using this guide, install TensorFlow. To get the most out of this guide, you should know the following:

  • How to program in Python.
  • At least a little bit about arrays.
  • Ideally, something about machine learning. However, if you know little or nothing about machine learning, then this is still the first guide you should read.
Prerequisites:
  • Python programming.
  • Knowledge of arrays (and the related linear algebra).
  • Some machine learning background.
Recommended machine learning courses: Andrew Ng's course on Coursera and Hinton's course.

Tensors

The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions. Here are some examples of tensors:

3                                  # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.]                       # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]]       # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]]   # a rank 3 tensor with shape [2, 1, 3]

How should "tensor" be translated? The usual rendering is 張量: a value of any number of dimensions. I will use that for now, although it may be better to leave the term untranslated, since something always feels off in translation.

A first taste of TensorFlow programming:
I recommend IPython, which has excellent auto-completion. In a terminal, enter: ipython



Overall, this just shows how constants are used; nothing here really needs explaining, so let's continue.
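The session screenshot has not survived; as a stand-in, here is a minimal sketch of the constants example following the official guide (TensorFlow 1.x API):

import tensorflow as tf

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)            # also tf.float32, implicitly
print(node1, node2)                 # prints the Tensor objects, not 3.0 and 4.0

sess = tf.Session()
print(sess.run([node1, node2]))     # [3.0, 4.0]

node3 = tf.add(node1, node2)
print(sess.run(node3))              # 7.0

Note that printing a node shows only the graph object; concrete values appear only when the graph is run inside a Session.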


Here placeholders appear; they seem to be the interface for feeding in variable inputs (my tentative understanding). Let's keep reading.
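Again the screenshot is missing; a sketch of the placeholder example from the guide (TF 1.x, reusing the sess created above):

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b   # '+' is shorthand for tf.add(a, b)

# concrete values are supplied at run time through a feed dict
print(sess.run(adder_node, {a: 3, b: 4.5}))          # 7.5
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))  # [ 3.  7.]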


Now machine learning starts to show: we have the input variable x, parameters W and b, the output y, and the loss function loss; essentially all the basic ingredients are present. Since the target y is also a placeholder, placeholders evidently cover both input and output interfaces. How to translate "placeholder"? Some render it as 佔位符, which works quite well.
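A sketch of the model-and-loss step from the guide (TF 1.x API):

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)                      # the desired output is a placeholder too
loss = tf.reduce_sum(tf.square(linear_model - y))   # sum of squared errors

init = tf.global_variables_initializer()            # variables must be initialized explicitly
sess.run(init)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # 23.66 with the initial W, b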


This is how a model is trained: here we minimize the loss function, run 1000 training iterations in total, and finally obtain the parameters W and b we want.
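The corresponding training loop from the guide (gradient descent with learning rate 0.01):

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

sess.run(init)  # reset W and b to their (incorrect) initial values
for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))  # close to [-1.0, 1.0]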


The final loss is on the order of 1e-11. That is excellent, mainly because the data here fits a perfectly linear model, so no surprise there.
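Printing the final state, as the guide does; the values in the comment are approximately what the guide reports:

curr_W, curr_b, curr_loss = sess.run([W, b, loss],
                                     {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
# W: [-0.9999969]  b: [ 0.99999082]  loss: 5.69997e-11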


The training network is shown in the TensorBoard graph above. Above the rank node I can no longer follow it, and every module seems to be connected to the gradients module, which is a bit baffling. But all of those values are needed when the gradients are computed, which is presumably the reason.
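The graph image itself is gone; for the record, one way to generate it is to dump the graph and open TensorBoard (a sketch; the log directory name is arbitrary):

writer = tf.summary.FileWriter('/tmp/getting_started', sess.graph)
writer.close()
# then, from a terminal:
#   tensorboard --logdir /tmp/getting_started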


Next, tf.contrib.learn, a higher-level abstraction; let's see how it is used.

import tensorflow as tf
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np

# Declare list of features. We only have one real-valued feature. There are many
# other types of columns that are more complicated and useful.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# logistic regression, linear classification, logistic classification, and
# many neural network classifiers and regressors. The following code
# provides an estimator that does linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# TensorFlow provides many helper methods to read and set up data sets.
# Here we use `numpy_input_fn`. We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
                                              num_epochs=1000)

# We can invoke 1000 training steps by invoking the `fit` method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)

# Here we evaluate how well our model did. In a real example, we would want
# to use a separate validation and testing data set to avoid overfitting.
estimator.evaluate(input_fn=input_fn)


features = [tf.contrib.layers.real_valued_column("x", dimension=1)]
The input is a single one-dimensional feature named "x".
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)
A linear regression model.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
                                              num_epochs=1000)
The inputs and target outputs.
batch_size is a very useful concept in machine learning: when the number of samples is very large and somewhat redundant, minibatch training works very well. Of course, with only 4 samples here, this is effectively full-batch training.
num_epochs is not the size of each batch but the number of passes over the data set; since one epoch here is a single batch of 4 samples, num_epochs=1000 supplies enough batches for 1000 training steps.
estimator.fit(input_fn=input_fn, steps=1000) 
There are two input_fn here. Of course: the first is the keyword-argument name in fit's signature, and the second is our variable defined above. A look at the source code makes this obvious: 1000 training steps.
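The screenshot of the source is also missing; from memory, the fit signature in tf.contrib.learn's BaseEstimator at the time looked roughly like the following (treat the exact parameter list as an assumption):

def fit(self, x=None, y=None, input_fn=None, steps=None, batch_size=None,
        monitors=None, max_steps=None):
    ...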
estimator.evaluate(input_fn=input_fn)
Evaluation works the same way here.
What each parameter means exactly can be learned gradually; for now the goal is an overall impression. Learning means going over the material again and again.
Let's set this example aside for now and move to the guide's last example: a custom model defined through tf.contrib.learn.Estimator.
import numpy as np
import tensorflow as tf

# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
  # Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W * features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # ModelFnOps connects subgraphs we built to the
  # appropriate functionality.
  return tf.contrib.learn.ModelFnOps(
      mode=mode, predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.contrib.learn.Estimator(model_fn=model)
# define our data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)
# train
estimator.fit(input_fn=input_fn, steps=1000)
# evaluate our model
print(estimator.evaluate(input_fn=input_fn, steps=10))
Write this example into a file and run it with python to see the result; I won't go into more detail here.
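For reference, evaluate returns a dict of metrics; with this perfectly linear data the printed result should look roughly like the following, with the exact loss varying slightly from run to run:

{'global_step': 1000, 'loss': <a value on the order of 1e-11>}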

With that, a first feel for TensorFlow is established.