
TensorFlow Illustrated, Stage Two (Part 5): Saving and Restoring

This document explains how to save and restore variables and models.

1. Saving and restoring variables

A TensorFlow variable provides the best way to represent shared, persistent state manipulated by your program. (See Variables for details.) This section explains how to save and restore variables. Note that Estimators automatically save and restore variables (in the model_dir).

The tf.train.Saver class provides methods for saving and restoring models. The tf.train.Saver constructor adds save and restore ops to the graph for all, or a specified list, of the variables in the graph. The Saver object provides methods to run these ops, specifying paths for the checkpoint files to write to or read from.

The saver will restore all variables already defined in your model. If you’re loading a model without knowing how to build its graph (for example, if you’re writing a generic program to load models), then read the Overview of saving and restoring models section later in this document.

TensorFlow saves variables in binary checkpoint files that, roughly speaking, map variable names to tensor values.

1.1 Saving variables

Create a Saver with tf.train.Saver() to manage all variables in the model. For example, the following snippet demonstrates how to call the tf.train.Saver.save method to save variables to a checkpoint file:

import tensorflow as tf

# Create some variables.
v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
v2 = tf.get_variable("v2", shape=[5], initializer=tf.zeros_initializer)

inc_v1 = v1.assign(v1 + 1)
dec_v2 = v2.assign(v2 - 1)

# Add an op to initialize all variables.
init_op = tf.global_variables_initializer()

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, run the ops, and save the variables to disk.
with tf.Session() as sess:
  sess.run(init_op)
  # Do some work with the model.
  inc_v1.op.run()
  dec_v2.op.run()
  # Save the variables to disk.
  save_path = saver.save(sess, "/tmp/model.ckpt")
  print("Model saved in file: %s" % save_path)

1.2 Restoring variables

The tf.train.Saver object not only saves variables to checkpoint files, it also restores variables. Note that when you restore variables from a file you do not have to initialize them beforehand. For example, the following snippet demonstrates how to call the tf.train.Saver.restore method to restore variables from a checkpoint file:

tf.reset_default_graph()

# Create some variables.
v1 = tf.get_variable("v1", shape=[3])
v2 = tf.get_variable("v2", shape=[5])

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model and use the saver to restore variables from disk.
with tf.Session() as sess:
  # Restore variables from disk.
  saver.restore(sess, "/tmp/model.ckpt")
  print("Model restored.")
  # Check the values of the variables.
  print("v1 : %s" % v1.eval())
  print("v2 : %s" % v2.eval())

1.3 Choosing which variables to save and restore

If you do not pass any arguments to tf.train.Saver(), the saver handles all variables in the graph. Each variable is saved under the name that was passed when the variable was created.

It is sometimes useful to explicitly specify names for variables in the checkpoint files. For example, you may have trained a model with a variable named "weights" whose value you want to restore into a variable named "params".

It is also sometimes useful to only save or restore a subset of the variables used by a model. For example, you may have trained a neural net with five layers, and you now want to train a new model with six layers that reuses the existing weights of the five trained layers. You can use the saver to restore the weights of just the first five layers.
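
For instance, here is a minimal sketch of that pattern (the scope names and checkpoint path are hypothetical, not from the original):

import tensorflow as tf

# Hypothetical scopes for the five pretrained layers; a new "layer6" is
# assumed to exist in the current graph but is not restored.
reused_scopes = ("layer1", "layer2", "layer3", "layer4", "layer5")
pretrained_vars = [v for v in tf.global_variables()
                   if v.name.split("/")[0] in reused_scopes]

restore_saver = tf.train.Saver(var_list=pretrained_vars)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())                # initialize everything first,
  restore_saver.restore(sess, "/tmp/five_layer_model.ckpt")  # then overwrite the reused layers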

You can easily specify the names and variables to save or load by passing to the tf.train.Saver() constructor either of the following:

  1. A list of variables (which will be stored under their own names).
  2. A Python dictionary in which keys are the names to use and the
    values are the variables to manage.

Continuing from the save/restore examples shown earlier:

tf.reset_default_graph()
# Create some variables.
v1 = tf.get_variable("v1", [3], initializer=tf.zeros_initializer)
v2 = tf.get_variable("v2", [5], initializer=tf.zeros_initializer)

# Add ops to save and restore only `v2`, using the name "v2".
saver = tf.train.Saver({"v2": v2})

# Use the saver object normally afterwards.
with tf.Session() as sess:
  # Initialize v1 since the saver will not restore it.
  v1.initializer.run()
  saver.restore(sess, "/tmp/model.ckpt")

  print("v1 : %s" % v1.eval())
  print("v2 : %s" % v2.eval())

Notes:

  1. You can create as many Saver objects as you want if you need to save
    and restore different subsets of the model variables. The same
    variable can be listed in multiple saver objects; its value is only
    changed when the Saver.restore() method is run.

  2. If you only restore a subset of the model variables at the start of
    a session, you have to run an initialize op for the other variables.
    See tf.variables_initializer for more information.

  3. To inspect the variables in a checkpoint, you can use the
    inspect_checkpoint library, particularly the
    print_tensors_in_checkpoint_file function (see the sketch after this
    list).

  4. By default, Saver uses the value of the tf.Variable.name property
    for each variable. However, when you create a Saver object, you may
    optionally choose names for the variables in the checkpoint files.
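
For example, a minimal sketch of inspecting the checkpoint written earlier (the path and variable names match the snippets above; inspect_checkpoint lives under tensorflow.python.tools):

from tensorflow.python.tools import inspect_checkpoint as chkp

# Print all tensors stored in the checkpoint file.
chkp.print_tensors_in_checkpoint_file("/tmp/model.ckpt", tensor_name='',
                                      all_tensors=True)

# Print only the tensor saved under the name "v2".
chkp.print_tensors_in_checkpoint_file("/tmp/model.ckpt", tensor_name='v2',
                                      all_tensors=False)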

2. Overview of saving and restoring models

When you want to save and load variables, the graph, and the graph's metadata--basically, when you want to save or restore your model--we recommend using SavedModel. SavedModel is a language-neutral, recoverable, hermetic serialization format. SavedModel enables higher-level systems and tools to produce, consume, and transform TensorFlow models. TensorFlow provides several mechanisms for interacting with SavedModel, including tf.saved_model APIs, Estimator APIs and a CLI.

3. APIs to build and load a SavedModel

This section focuses on the APIs for building and loading a SavedModel, particularly when using lower-level TensorFlow APIs.

3.1 Building a SavedModel

We provide a Python implementation of the SavedModel builder. The SavedModelBuilder class provides functionality to save multiple MetaGraphDefs. A MetaGraph is a dataflow graph, plus its associated variables, assets, and signatures. A MetaGraphDef is the protocol buffer representation of a MetaGraph. A signature is the set of inputs to and outputs from a graph.

If assets need to be saved and written or copied to disk, they can be provided when the first MetaGraphDef is added. If multiple MetaGraphDefs are associated with an asset of the same name, only the first version is retained.

Each MetaGraphDef added to the SavedModel must be annotated with user-specified tags. The tags provide a means to identify the specific MetaGraphDef to load and restore, along with the shared set of variables and assets. These tags typically annotate a MetaGraphDef with its functionality (for example, serving or training), and optionally with hardware-specific aspects (for example, GPU).

For example, the following code suggests a typical way to use SavedModelBuilder to build a SavedModel:

export_dir = ...
...
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph_and_variables(sess,
                                       [tag_constants.TRAINING],
                                       signature_def_map=foo_signatures,
                                       assets_collection=foo_assets)
...
# Add a second MetaGraphDef for inference.
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph([tag_constants.SERVING])
...
builder.save()

3.2 Loading a SavedModel in Python

The Python version of the SavedModel loader provides load and restore capability for a SavedModel. The load operation requires the following information:

  • The session in which to restore the graph definition and variables.
  • The tags used to identify the MetaGraphDef to load.
  • The location (directory) of the SavedModel.

Upon a load, the subset of variables, assets, and signatures supplied as part of the specific MetaGraphDef will be restored into the supplied session.

export_dir = ...
...
with tf.Session(graph=tf.Graph()) as sess:
  tf.saved_model.loader.load(sess, [tag_constants.TRAINING], export_dir)
  ...

3.3 Loading a SavedModel in C++

The C++ version of the SavedModel loader provides an API to load a SavedModel from a path, while allowing SessionOptions and RunOptions. You have to specify the tags associated with the graph to be loaded. The loaded version of SavedModel is referred to as SavedModelBundle and contains the MetaGraphDef and the session within which it is loaded.

const string export_dir = ...
SavedModelBundle bundle;
...
LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},
               &bundle);

3.4 Standard constants

SavedModel offers the flexibility to build and load TensorFlow graphs for a variety of use-cases. For the most common use-cases, SavedModel's APIs provide a set of constants in Python and C++ that are easy to reuse and share across tools consistently.

Standard MetaGraphDef tags

You may use sets of tags to uniquely identify a MetaGraphDef saved in a SavedModel. A subset of commonly used tags is specified in:

  • Python
  • C++

Standard SignatureDef constants

A SignatureDef is a protocol buffer that defines the signature of a computation supported by a graph. Commonly used input keys, output keys, and method names are defined in:

  • Python
  • C++
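
As a quick sketch, the Python constants can be read directly; the values shown in the comments are what I expect them to be, so treat them as illustrative:

from tensorflow.python.saved_model import signature_constants, tag_constants

print(tag_constants.SERVING)     # expected: "serve"
print(tag_constants.TRAINING)    # expected: "train"
print(signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY)  # expected: "serving_default"
print(signature_constants.PREDICT_METHOD_NAME)                # expected: "tensorflow/serving/predict"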

4. Using SavedModel with Estimators

After training an Estimator model, you may want to create a service from that model that takes requests and returns a result. You can run such a service locally on your machine or deploy it scalably in the cloud.

To prepare a trained Estimator for serving, you must export it in the standard SavedModel format. This section explains how to:

  • Specify the output nodes and the corresponding APIs that can be
    served (Classify, Regress, or Predict).
  • Export your model to the SavedModel format.
  • Serve the model from a local server and request predictions.

4.1 Preparing serving inputs

During training, an input_fn() ingests data and prepares it for use by the model. At serving time, similarly, a serving_input_receiver_fn() accepts inference requests and prepares them for the model. This function has the following purposes:

  1. To add placeholders to the graph that the serving system will feed
    with inference requests.

  2. To add any additional ops needed to convert data from the input
    format into the feature Tensors expected by the model.

The function returns a tf.estimator.export.ServingInputReceiver object, which packages the placeholders and the resulting feature Tensors together.

A typical pattern is that inference requests arrive in the form of serialized tf.Examples, so the serving_input_receiver_fn() creates a single string placeholder to receive them. The serving_input_receiver_fn() is then also responsible for parsing the tf.Examples by adding a tf.parse_example op to the graph.

When writing such a serving_input_receiver_fn(), you must pass a parsing specification to tf.parse_example to tell the parser what feature names to expect and how to map them to Tensors. A parsing specification takes the form of a dict from feature names to tf.FixedLenFeature, tf.VarLenFeature, and tf.SparseFeature. Note this parsing specification should not include any label or weight columns, since those will not be available at serving time, in contrast to a parsing specification used in the input_fn() at training time.

In combination, then:

feature_spec = {'foo': tf.FixedLenFeature(...),
                'bar': tf.VarLenFeature(...)}

def serving_input_receiver_fn():
  """一個輸入接收器是一個序列化的 tf.Example."""
  serialized_tf_example = tf.placeholder(dtype=tf.string,
                                         shape=[default_batch_size],
                                         name='input_example_tensor')
  receiver_tensors = {'examples': serialized_tf_example}
  features = tf.parse_example(serialized_tf_example, feature_spec)
  return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

The tf.estimator.export.build_parsing_serving_input_receiver_fn utility function provides that input receiver for the common case.
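
For instance, a one-line sketch that reuses the feature_spec dict defined in the snippet above:

# A minimal sketch, assuming the feature_spec dict shown earlier.
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)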

  • Note: when training a model to be served using the Predict API with a
    local server, the parsing step is not needed because the model will
    receive raw feature data.

Even if you require no parsing or other input processing (that is, if the serving system will feed feature Tensors directly), you must still provide a serving_input_receiver_fn() that creates placeholders for the feature Tensors and passes them through. The tf.estimator.export.build_raw_serving_input_receiver_fn utility provides for this.
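
A small sketch of that case (the feature name, dtype, and shape here are hypothetical):

# A minimal sketch: the serving system feeds a float feature tensor directly.
raw_features = {"x": tf.placeholder(dtype=tf.float32, shape=[None, 1], name="x")}
serving_input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    raw_features)
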
If these utilities do not meet your needs, you are free to write your own serving_input_receiver_fn(). One case where this may be needed is if your training input_fn() incorporates some preprocessing logic that must be recapitulated at serving time. To reduce the risk of training-serving skew, we recommend encapsulating such processing in a function which is then called from both input_fn() and serving_input_receiver_fn().

Note that the serving_input_receiver_fn() also determines the input portion of the signature. That is, when writing a serving_input_receiver_fn(), you must tell the parser what signatures to expect and how to map them to your model's expected inputs. By contrast, the output portion of the signature is determined by the model.

4.2 Performing the export

To export your trained Estimator, call tf.estimator.Estimator.export_savedmodel with the export base path and the serving_input_receiver_fn.

estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn)

This method builds a new graph by first calling the serving_input_receiver_fn() to obtain feature Tensors, and then calling this Estimator's model_fn() to generate the model graph based on those features. It starts a fresh Session, and, by default, restores the most recent checkpoint into it. (A different checkpoint may be passed, if needed.) Finally it creates a time-stamped export directory (that is, a timestamped subdirectory below the given export_dir_base), and writes a SavedModel into it containing a single MetaGraphDef saved from this Session.

  • Note: It is your responsibility to garbage-collect old exports.
    Otherwise, successive exports will accumulate under export_dir_base.

4.3 Specifying the outputs of a custom model

When writing a custom model_fn, you must populate the export_outputs element of the tf.estimator.EstimatorSpec return value. This is a dict of {name: output} describing the output signatures to be exported and used during serving.

In the usual case of making a single prediction, this dict contains one element, and the name is immaterial. In a multi-headed model, each head is represented by an entry in this dict. In this case the name is a string of your choice that can be used to request a specific head at serving time.

Each output value must be an ExportOutput object such as tf.estimator.export.ClassificationOutput, tf.estimator.export.RegressionOutput, or tf.estimator.export.PredictOutput.

These output types map straightforwardly to the TensorFlow Serving APIs, and so determine which request types will be honored.

Note: In the multi-headed case, a SignatureDef will be generated for each element of the export_outputs dict returned from the model_fn, named using the same keys. These SignatureDefs differ only in their outputs, as provided by the corresponding ExportOutput entry. The inputs are always those provided by the serving_input_receiver_fn. An inference request may specify the head by name. One head must be named using signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY indicating which SignatureDef will be served when an inference request does not specify one.
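
As an illustration, here is a minimal sketch of a custom model_fn that populates export_outputs; the feature name "x", the layer size, and the optimizer are hypothetical choices, not from the original:

import tensorflow as tf

def model_fn(features, labels, mode):
  # Hypothetical single-head model over a float feature named "x".
  logits = tf.layers.dense(features["x"], units=3)
  predictions = {"probabilities": tf.nn.softmax(logits)}

  export_outputs = {
      # The dict key names the SignatureDef; DEFAULT_SERVING_SIGNATURE_DEF_KEY
      # marks the signature used when a request does not name one.
      tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
          tf.estimator.export.PredictOutput(predictions)
  }

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions=predictions,
                                      export_outputs=export_outputs)

  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
      loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op,
                                    export_outputs=export_outputs)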

4.4 Serving the exported model locally

For local deployment, you can serve your model using TensorFlow Serving, an open-source project that loads a SavedModel and exposes it as a gRPC service.

First, install TensorFlow Serving.

Then build and run the local model server, substituting $export_dir_base with the path to the SavedModel you exported above:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_base_path=$export_dir_base

Now you have a server listening for inference requests via gRPC on port 9000!

4.5 Requesting predictions from a local server

The server responds to gRPC requests according to the PredictionService gRPC API service definition. (The nested protocol buffers are defined in various neighboring files.)

From the API service definition, the gRPC framework generates client libraries in various languages providing remote access to the API. In a project using the Bazel build tool, these libraries are built automatically and provided via dependencies like these (using Python for example):

  deps = [
    "//tensorflow_serving/apis:classification_proto_py_pb2",
    "//tensorflow_serving/apis:regression_proto_py_pb2",
    "//tensorflow_serving/apis:predict_proto_py_pb2",
    "//tensorflow_serving/apis:prediction_service_proto_py_pb2"
  ]

Python client code can then import the libraries thus:

from tensorflow_serving.apis import classification_pb2
from tensorflow_serving.apis import regression_pb2
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
  • Note: prediction_service_pb2 defines the service as a whole and so is
    always required. However a typical client will need only one of
    classification_pb2, regression_pb2, and predict_pb2, depending on the
    type of requests being made.

Sending a gRPC request is then accomplished by assembling a protocol buffer containing the request data and passing it to the service stub. Note how the request protocol buffer is created empty and then populated via the generated protocol buffer API.

from grpc.beta import implementations

channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = classification_pb2.ClassificationRequest()
example = request.input.example_list.examples.add()
example.features.feature['x'].float_list.value.extend(image[0].astype(float))

result = stub.Classify(request, 10.0)  # 10 secs timeout

The returned result in this example is a ClassificationResponse protocol buffer.

This is a skeletal example; please see the Tensorflow Serving documentation and examples for more details.

  • Note: ClassificationRequest and RegressionRequest contain a
    tensorflow.serving.Input protocol buffer, which in turn contains a
    list of tensorflow.Example protocol buffers. PredictRequest, by
    contrast, contains a mapping from feature names to values encoded via
    TensorProto. Correspondingly: When using the Classify and Regress
    APIs, TensorFlow Serving feeds serialized tf.Examples to the graph,
    so your serving_input_receiver_fn() should include a
    tf.parse_example() Op. When using the generic Predict API, however,
    TensorFlow Serving feeds raw feature data to the graph, so a
    pass-through serving_input_receiver_fn() should be used.

5. CLI to inspect and execute SavedModel

You can use the SavedModel Command Line Interface (CLI) to inspect and execute a SavedModel. For example, you can use the CLI to inspect the model’s SignatureDefs. The CLI enables you to quickly confirm that the input Tensor dtype and shape match the model. Moreover, if you want to test your model, you can use the CLI to do a sanity check by passing in sample inputs in various formats (for example, Python expressions) and then fetching the output.

5.1 Installing the SavedModel CLI

Broadly speaking, you can install TensorFlow in either of the following two ways:

  • By installing a pre-built TensorFlow binary.
  • By building TensorFlow from source code.

If you installed TensorFlow through a pre-built TensorFlow binary, then the SavedModel CLI is already installed on your system at pathname bin\saved_model_cli.

If you built TensorFlow from source code, you must run the following additional command to build saved_model_cli:

$ bazel build tensorflow/python/tools:saved_model_cli

5.2 Overview of commands

The SavedModel CLI supports the following two commands on a MetaGraphDef in a SavedModel:

show, which shows a computation on a MetaGraphDef in a SavedModel.
run, which runs a computation on a MetaGraphDef.

5.3 show command

A SavedModel contains one or more MetaGraphDefs, identified by their tag-sets. To serve a model, you might wonder what kind of SignatureDefs are in each model, and what are their inputs and outputs. The show command lets you examine the contents of the SavedModel in hierarchical order. Here's the syntax:

usage: saved_model_cli show [-h] --dir DIR [--all]
[--tag_set TAG_SET] [--signature_def SIGNATURE_DEF_KEY]

For example, the following command shows all available MetaGraphDef tag-sets in the SavedModel:

$ saved_model_cli show --dir /tmp/saved_model_dir
The given SavedModel contains the following tag-sets:
serve
serve, gpu

The following command shows all available SignatureDef keys in a MetaGraphDef:

$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve
The given SavedModel `MetaGraphDef` contains `SignatureDefs` with the
following keys:
SignatureDef key: "classify_x2_to_y3"
SignatureDef key: "classify_x_to_y"
SignatureDef key: "regress_x2_to_y3"
SignatureDef key: "regress_x_to_y"
SignatureDef key: "regress_x_to_y2"
SignatureDef key: "serving_default"

If a MetaGraphDef has multiple tags in the tag-set, you must specify all tags, each tag separated by a comma. For example:

$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu

To show all inputs and outputs TensorInfo for a specific SignatureDef, pass in the SignatureDef key to signature_def option. This is very useful when you want to know the tensor key value, dtype and shape of the input tensors for executing the computation graph later. For example:

$ saved_model_cli show --dir \
/tmp/saved_model_dir --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: y:0

Method name is: tensorflow/serving/predict

To show all available information in the SavedModel, use the --all option. For example:

$ saved_model_cli show --dir /tmp/saved_model_dir --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['classify_x2_to_y3']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: x2:0
The given SavedModel SignatureDef contains the following output(s):
outputs['scores'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: y3:0
Method name is: tensorflow/serving/classify
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 1)
    name: y:0
Method name is: tensorflow/serving/predict

5.4 run command

Invoke the run command to run a graph computation, passing inputs and then displaying (and optionally saving) the outputs. Here’s the syntax:

usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
                           SIGNATURE_DEF_KEY [--inputs INPUTS]
                           [--input_exprs INPUT_EXPRS] [--outdir OUTDIR]
                           [--overwrite] [--tf_debug]

The run command provides the following two ways to pass inputs to the model:

--inputs option enables you to pass numpy ndarray in files.
--input_exprs option enables you to pass Python expressions.

--inputs

To pass input data in files, specify the --inputs option, which takes the following general format:

--inputs INPUTS

where INPUTS is either of the following formats:

<input_key>=<filename>
<input_key>=<filename>[<variable_name>]

You may pass multiple INPUTS. If you do pass multiple inputs, use a semicolon to separate each of the INPUTS.

saved_model_cli uses numpy.load to load the filename. The filename may be in any of the following formats:

.npy
.npz
pickle format

A .npy file always contains a numpy ndarray. Therefore, when loading from a .npy file, the content will be directly assigned to the specified input tensor. If you specify a variable_name with that .npy file, the variable_name will be ignored and a warning will be issued.

When loading from a .npz (zip) file, you may optionally specify a variable_name to identify the variable within the zip file to load for the input tensor key. If you don’t specify a variable_name, the SavedModel CLI will check that only one file is included in the zip file and load it for the specified input tensor key.

When loading from a pickle file, if no variable_name is specified in the square brackets, whatever that is inside the pickle file will be passed to the specified input tensor key. Otherwise, the SavedModel CLI will assume a dictionary is stored in the pickle file and the value corresponding to the variable_name will be used.
--input_exprs

To pass inputs through Python expressions, specify the --input_exprs option. This can be useful when you don't have data files lying around, but still want to sanity check the model with some simple inputs that match the dtype and shape of the model's SignatureDefs. For example:

input_key=[[1], [2], [3]]

In addition to Python expressions, you may also pass numpy functions. For example:

input_key=np.ones((32, 32, 3))

(Note that the numpy module is already available to you as np.)

Save Output

By default, the SavedModel CLI writes output to stdout. If a directory is passed to the --outdir option, the outputs will be saved as npy files named after output tensor keys under the given directory.

Use --overwrite to overwrite existing output files.

TensorFlow Debugger (tfdbg) Integration

If the --tf_debug option is set, the SavedModel CLI will use the TensorFlow Debugger (tfdbg) to watch the intermediate Tensors and runtime graphs or subgraphs while running the SavedModel.

Full examples of run

Given:

  • Your model simply adds x1 and x2 to get output y.
  • All tensors in the model have shape (-1, 1).
  • You have two npy files:
    /tmp/my_data1.npy, which contains a numpy ndarray [[1], [2], [3]].
    /tmp/my_data2.npy, which contains another numpy ndarray [[0.5], [0.5], [0.5]].

To run these two npy files through the model to get output y, issue the following command:

$ saved_model_cli run --dir /tmp/saved_model_dir --tag_set serve \
--signature_def x1_x2_to_y --inputs x1=/tmp/my_data1.npy;x2=/tmp/my_data2.npy \
--outdir /tmp/out
Result for output key y:
[[ 1.5]
 [ 2.5]
 [ 3.5]]

Let’s change the preceding example slightly. This time, instead of two .npy files, you now have an .npz file and a pickle file. Furthermore, you want to overwrite any existing output file. Here’s the command:

$ saved_model_cli run --dir /tmp/saved_model_dir --tag_set serve \
--signature_def x1_x2_to_y \
--inputs x1=/tmp/my_data1.npz[x];x2=/tmp/my_data2.pkl --outdir /tmp/out \
--overwrite
Result for output key y:
[[ 1.5]
 [ 2.5]
 [ 3.5]]

You may specify python expression instead of an input file. For example, the following command replaces input x2 with a Python expression:

$ saved_model_cli run --dir /tmp/saved_model_dir --tag_set serve \
--signature_def x1_x2_to_y --inputs x1=/tmp/my_data1.npz[x] \
--input_exprs 'x2=np.ones((3,1))'
Result for output key y:
[[ 2]
 [ 3]
 [ 4]]

To run the model with the TensorFlow Debugger on, issue the following command:

$ saved_model_cli run --dir /tmp/saved_model_dir --tag_set serve \
--signature_def serving_default --inputs x=/tmp/data.npz[x] --tf_debug

6. Structure of a SavedModel directory

When you save a model in SavedModel format, TensorFlow creates a SavedModel directory consisting of the following subdirectories and files:

assets/
assets.extra/
variables/
    variables.data-?????-of-?????
    variables.index
saved_model.pb|saved_model.pbtxt

where:

  1. assets is a subfolder containing auxiliary (external) files, such as
    vocabularies. Assets are copied to the SavedModel location and can
    be read when loading a specific MetaGraphDef.

  2. assets.extra is a subfolder where higher-level libraries and users
    can add their own assets that co-exist with the model, but are not
    loaded by the graph. This subfolder is not managed by the SavedModel
    libraries.

  3. variables is a subfolder that includes output from tf.train.Saver.

  4. saved_model.pb or saved_model.pbtxt is the SavedModel protocol
    buffer. It includes the graph definitions as MetaGraphDef protocol
    buffers.

A single SavedModel can represent multiple graphs. In this case, all the graphs in the SavedModel share a single set of checkpoints (variables) and assets. For example, the following diagram shows one SavedModel containing three MetaGraphDefs, all three of which share the same set of checkpoints and assets:

[Figure: SavedModel represents checkpoints, assets, and one or more MetaGraphDefs.]

Each graph is associated with a specific set of tags, which enables identification during a load or restore operation.
