
Summary of Common Regularization Methods

Common regularization methods:

Run the following code to list what `tf.contrib.layers` provides:

import tensorflow as tf 
print(help(tf.contrib.layers))

This produces (the trailing `None` is just the return value of `help()`, which we printed):

/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.7
  return f(*args, **kwds)
Help on package tensorflow.contrib.layers in tensorflow.contrib:

NAME
    tensorflow.contrib.layers - Ops for building neural network layers, regularizers, summaries, etc.

DESCRIPTION
    See the @{$python/contrib.layers} guide.
    
    @@avg_pool2d
    @@avg_pool3d
    @@batch_norm
    @@convolution2d
    @@convolution3d
    @@conv2d_in_plane
    @@convolution2d_in_plane
    @@conv2d_transpose
    @@convolution2d_transpose
    @@conv3d_transpose
    @@convolution3d_transpose
    @@dense_to_sparse
    @@dropout
    @@elu
    @@embedding_lookup_unique
    @@flatten
    @@fully_connected
    @@GDN
    @@gdn
    @@layer_norm
    @@linear
    @@max_pool2d
    @@max_pool3d
    @@one_hot_encoding
    @@relu
    @@relu6
    @@repeat
    @@recompute_grad
    @@RevBlock
    @@rev_block
    @@safe_embedding_lookup_sparse
    @@scale_gradient
    @@separable_conv2d
    @@separable_convolution2d
    @@softmax
    @@spatial_softmax
    @@stack
    @@unit_norm
    @@bow_encoder
    @@embed_sequence
    @@maxout
    
    @@apply_regularization
    @@l1_l2_regularizer
    @@l1_regularizer
    @@l2_regularizer
    @@sum_regularizer
    
    @@xavier_initializer
    @@xavier_initializer_conv2d
    @@variance_scaling_initializer
    
    @@optimize_loss
    
    @@summarize_activation
    @@summarize_tensor
    @@summarize_tensors
    @@summarize_collection
    
    @@summarize_activations
    
    @@bucketized_column
    @@check_feature_columns
    @@create_feature_spec_for_parsing
    @@crossed_column
    @@embedding_column
    @@scattered_embedding_column
    @@input_from_feature_columns
    @@transform_features
    @@joint_weighted_sum_from_feature_columns
    @@make_place_holder_tensors_for_base_features
    @@multi_class_target
    @@one_hot_column
    @@parse_feature_columns_from_examples
    @@parse_feature_columns_from_sequence_examples
    @@real_valued_column
    @@shared_embedding_columns
    @@sparse_column_with_hash_bucket
    @@sparse_column_with_integerized_feature
    @@sparse_column_with_keys
    @@sparse_column_with_vocabulary_file
    @@weighted_sparse_column
    @@weighted_sum_from_feature_columns
    @@infer_real_valued_columns
    @@sequence_input_from_feature_columns
    
    @@instance_norm

PACKAGE CONTENTS
    ops (package)
    python (package)

SUBMODULES
    feature_column
    summaries

DATA
    OPTIMIZER_CLS_NAMES = {'Adagrad': <class 'tensorflow.python.training.a...
    OPTIMIZER_SUMMARIES = ['learning_rate', 'loss', 'gradients', 'gradient...
    SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY = 956888297470
    elu = functools.partial(<function add_arg_scope.<local...58>, activati...
    legacy_linear = functools.partial(<function legacy_fully_connected at ...
    legacy_relu = functools.partial(<function legacy_fully_connect...0>, a...
    linear = functools.partial(<function add_arg_scope.<local...nc_with_ar...
    relu = functools.partial(<function add_arg_scope.<local...8>, activati...
    relu6 = functools.partial(<function add_arg_scope.<local...>, activati...
    scale_gradient = <tensorflow.python.framework.function._OverloadedFunc...

FILE
    /usr/local/lib/python3.7/site-packages/tensorflow/contrib/layers/__init__.py


None
[Finished in 5.2s]

The most commonly used ones are:

    @@l1_regularizer
    @@l2_regularizer

Let's look at what L2 regularization means:

import tensorflow as tf 
print(help(tf.contrib.layers.l2_regularizer))

/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.7
  return f(*args, **kwds)
Help on function l2_regularizer in module tensorflow.contrib.layers.python.layers.regularizers:

l2_regularizer(scale, scope=None)

    Returns a function that can be used to apply L2 regularization to weights.
    Small values of L2 can help prevent overfitting the training data.
    (note: this is the knob used to curb overfitting)
    
    Args:
      scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
      scope: An optional scope name.
    
    Returns:
      A function with signature `l2(weights)` that applies L2 regularization.
    
    Raises:
      ValueError: If scale is negative or if scale is not a float.

None
[Finished in 2.3s]
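To make the semantics above concrete, here is a minimal NumPy sketch of what `l2_regularizer(scale)` computes. In TF 1.x the returned function evaluates `scale * tf.nn.l2_loss(weights)`, and `l2_loss` is half the sum of squares; the helper name `l2_penalty` below is my own, not TF API.

```python
import numpy as np

def l2_penalty(weights, scale):
    # Mirrors tf.contrib.layers.l2_regularizer(scale)(weights):
    # scale * l2_loss(weights), where l2_loss is sum(w**2) / 2.
    if scale < 0:
        raise ValueError("scale must be non-negative")
    if scale == 0.0:
        return 0.0  # "0.0 disables the regularizer", per the docstring
    return scale * 0.5 * float(np.sum(np.square(weights)))

w = np.array([3.0, 4.0])
print(l2_penalty(w, 0.1))  # 0.1 * 0.5 * (9 + 16) = 1.25
```

In training, this penalty term is simply added to the data loss, so large weights are discouraged smoothly without being forced to zero.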

Now let's look at L1 regularization:

import tensorflow as tf 
print(help(tf.contrib.layers.l1_regularizer))

/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.7
  return f(*args, **kwds)
Help on function l1_regularizer in module tensorflow.contrib.layers.python.layers.regularizers:

l1_regularizer(scale, scope=None)
    Returns a function that can be used to apply L1 regularization to weights.
    
    L1 regularization encourages sparsity. (note: it makes the matrix sparser
    by driving some weights exactly to 0, producing a sparse weight matrix)
    Args:
      scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
      scope: An optional scope name.
    
    Returns:
      A function with signature `l1(weights)` that apply L1 regularization.
    
    Raises:
      ValueError: If scale is negative or if scale is not a float.

None
[Finished in 2.4s]
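A matching NumPy sketch of `l1_regularizer(scale)` (penalty = `scale * sum(|w|)`), plus a soft-thresholding step that illustrates why the L1 penalty produces the sparse weight matrices noted above. The helper names are mine for illustration, not TF API.

```python
import numpy as np

def l1_penalty(weights, scale):
    # Mirrors tf.contrib.layers.l1_regularizer(scale)(weights):
    # scale * sum(|w|).
    if scale < 0:
        raise ValueError("scale must be non-negative")
    return scale * float(np.sum(np.abs(weights)))

def soft_threshold(w, t):
    # Proximal step for the L1 penalty: shrink every weight toward 0 by t,
    # and clamp anything with magnitude below t to exactly 0 -- this is
    # the mechanism that zeroes out individual weights.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([0.5, -0.05, 1.0])
print(l1_penalty(w, 0.1))      # 0.1 * (0.5 + 0.05 + 1.0) = 0.155
print(soft_threshold(w, 0.1))  # the -0.05 weight becomes exactly 0
```

Because the L1 gradient has constant magnitude `scale` regardless of a weight's size, small weights get pushed all the way to zero, whereas the L2 gradient shrinks proportionally and merely makes weights small.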
