A Summary of Pooling Methods and Convolution (the third part is explained very well)

Image size and number of parameters:

The previous chapters all dealt with small image patches; this chapter deals with large images. The difference is quite noticeable: small images (e.g. 8*8, or MNIST's 28*28) can use a fully connected layout (i.e. the input layer is wired directly to the hidden layer). For large images this becomes very expensive: a 96*96 image needs 96*96 input units under a fully connected layout, and training just 100 features already requires 96*96*100 parameters (W, b) for this single layer, so training would take hundreds or even tens of thousands of times longer than before. This is why a locally connected network is used here: for images, each hidden unit connects only to a small neighbouring region of the input image.
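
A quick back-of-the-envelope check of those numbers (a sketch with assumed sizes, not part of the exercise code):

% Rough parameter counts for one layer of 100 learned features.
largeInput  = 96 * 96;      % input units when a 96*96 image is fully connected
numFeatures = 100;
patchDim    = 8;            % receptive-field size in the locally connected case

fullyConnected   = largeInput * numFeatures + numFeatures;    % W plus b: 921,700
locallyConnected = patchDim^2 * numFeatures + numFeatures;    % 100 features, each on an 8*8 patch: 6,500

fprintf('fully connected  : %d parameters\n', fullyConnected);
fprintf('8*8 local fields : %d parameters\n', locallyConnected);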

This leads to the idea of convolution:

convolution:

Natural images have an inherent property: the statistics of one part of an image are the same as those of any other part. This means that features learned on one part can also be used on another part, so the same learned features can be applied at every position of the image.

Concretely, if you randomly pick a small block from a large image, say 8x8, as a sample and learn some features from it, you can then use the features learned from this 8x8 sample as detectors and apply them anywhere in the image. In particular, you can convolve the features learned from the 8x8 sample with the original large image, obtaining, for every position in the large image, an activation value for each feature.

The lecture notes give a concrete example, which makes it easier to understand:

Suppose you have learned the features of 8x8 samples drawn from a 96x96 image, say with a sparse autoencoder that has 100 hidden units. To obtain the convolved features, you run the convolution over every 8x8 block of the 96x96 image: extract the 8x8 blocks whose starting coordinates are (1,1), (1,2), ..., up to (89,89), and run the trained sparse autoencoder on each extracted block to get its feature activations. In this example you clearly end up with 100 sets of convolved features, each set containing 89x89 values. The gif in the lecture notes shows this more vividly; I don't know how to embed it here...

Finally, a summary of the convolution procedure:

Suppose you are given a large r * c image, call it xlarge. First train a sparse autoencoder on small a * b samples xsmall drawn from the large image, obtaining k features (k is the number of hidden units). Then, for every a*b block of xlarge, compute the feature activations fs; sliding over all blocks is exactly the convolution, and you end up with (r-a+1)*(c-b+1)*k convolved feature values, i.e. k feature maps of size (r-a+1)*(c-b+1).
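
A rough MATLAB sketch of that procedure (single channel, linear features, made-up variable names; the actual exercise below additionally handles RGB channels, ZCA whitening, mean subtraction and the sigmoid):

r = 96; c = 96;            % large image size
a = 8;  b = 8;             % patch size the autoencoder was trained on
k = 100;                   % number of learned features (hidden units)

xlarge = rand(r, c);       % stand-in for the large image
W = rand(k, a * b);        % stand-in for the learned feature weights

convolved = zeros(r - a + 1, c - b + 1, k);
for f = 1:k
    feature = reshape(W(f, :), a, b);
    % flip the kernel so conv2 computes a correlation over each a x b patch
    convolved(:, :, f) = conv2(xlarge, rot90(feature, 2), 'valid');
end
size(convolved)            % (r-a+1) x (c-b+1) x k = 89 x 89 x 100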

pooling:

After features have been obtained through convolution, the next step is to use them for classification. In principle you could feed all of the extracted features into a classifier such as softmax, but the amount of computation would be enormous. For example: for a 96x96-pixel image, suppose 400 features have been learned over 8x8 inputs. Each convolution yields a result of size (96 − 8 + 1) * (96 − 8 + 1) = 7921, and with 400 features the result for each example grows to 89² * 400 = 3,168,400 features. Training a classifier on inputs with more than 3 million features is quite impractical and very prone to over-fitting.

Hence the pooling method, rendered in Chinese as "池化"; the English word pooling is actually rather vivid, the Chinese translation less so. The idea is simply to take the mean or the maximum over a sub-region of the feature map and use that single value to represent the region: taking the mean gives mean pooling, taking the maximum gives max pooling. The gif in the lecture notes illustrates this nicely as well, but again I don't know how to insert a gif here....
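
A small sketch of the two variants on one convolved feature map (made-up sizes, assuming the pooling region divides the map evenly):

convolvedDim = 20;  poolDim = 5;
featureMap = rand(convolvedDim, convolvedDim);

numBlocks  = convolvedDim / poolDim;
meanPooled = zeros(numBlocks);
maxPooled  = zeros(numBlocks);
for row = 1:numBlocks
    for col = 1:numBlocks
        block = featureMap((row-1)*poolDim+1 : row*poolDim, ...
                           (col-1)*poolDim+1 : col*poolDim);
        meanPooled(row, col) = mean(block(:));   % mean pooling
        maxPooled(row, col)  = max(block(:));    % max pooling
    end
end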

As for why pooling is legitimate: the reason we can use convolved features in the first place is that images have a kind of "stationarity" property, which means that a feature that is useful in one region of the image is very likely to be useful in another region as well. So, to describe a large image, a natural idea is to aggregate statistics of the features at different locations, and the mean or the maximum is exactly such an aggregate statistic.

Moreover, if the pooling regions are chosen as contiguous areas of the image, and we only pool features produced by the same (replicated) hidden unit, then the pooled units are translation invariant. This means that even after the image undergoes a small translation, the same (pooled) features are produced (a small question here: if so, is the invariance only guaranteed within an area the size of the pooling region?). In many tasks (e.g. object detection, audio recognition) we prefer translation-invariant features, because the label of an example (image) stays the same even when the image is translated. For instance, if you take an MNIST digit and shift it to the left or right, you expect the classifier to still label it as the same digit regardless of its final position.

Exercise:

Below is the exercise from the lecture notes. It reuses the setup from the previous chapter's exercise (i.e. the first step of the convolution pipeline: training a sparse autoencoder on xsmall to obtain k features).

The main programs are as follows:

Main script: cnnExercise.m

%% CS294A/CS294W Convolutional Neural Networks Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  convolutional neural networks exercise. In this exercise, you will only
%  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
%  this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 8;          % patch dimension
numPatches = 50000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units 

epsilon = 0.1;           % epsilon for ZCA whitening

poolDim = 19;          % dimension of pooling region
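% note: imageDim - patchDim + 1 = 64 - 8 + 1 = 57, and 57 / 19 = 3,
% so each feature map is pooled down to a 3 x 3 grid of pooled values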

%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn 
%  features from color patches. If you have completed the linear decoder
%  execise, use the features that you have obtained from that exercise, 
%  loading them into optTheta. Recall that we have to keep around the 
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with 
% the optimal parameters:

%optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
%ZCAWhite =  zeros(visibleSize, visibleSize);
%meanPatch = zeros(visibleSize, 1);
load STL10Features.mat;


% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork( (W*ZCAWhite)');

%%======================================================================
%% STEP 2: Implement and test convolution and pooling
%  In this step, you will implement convolution and pooling, and test them
%  on a small part of the data set to ensure that you have implemented
%  these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m

% Note that we have to preprocess the images in the exact same way 
% we preprocessed the patches before we can obtain the feature activations.

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8); 

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000    
    featureNum = randi([1, hiddenSize]);
    imageNum = randi([1, 8]);
    imageRow = randi([1, imageDim - patchDim + 1]);
    imageCol = randi([1, imageDim - patchDim + 1]);    
   
    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);            
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;
    
    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch); 

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf('Convolved feature does not match activation from autoencoder\n');
        fprintf('Feature Number    : %d\n', featureNum);
        fprintf('Image Number      : %d\n', imageNum);
        fprintf('Image Row         : %d\n', imageRow);
        fprintf('Image Column      : %d\n', imageCol);
        fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));       
        error('Convolved feature does not match activation from autoencoder');
    end 
end

disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8);
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
            
testMatrix = reshape(testMatrix, 1, 1, 8, 8);
        
pooledFeatures = squeeze(cnnPool(4, testMatrix));

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end

%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages,  testImages,  testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );

tic();

for convPart = 1:(hiddenSize / stepSize)
    
    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd = convPart * stepSize;
    
    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);  
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);    
    
    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;
    
    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();

    clear convolvedFeaturesThis pooledFeaturesThis;

end


% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();

%%======================================================================
%% STEP 4: Use pooled features for classification
%  Now, you will use your pooled features to train a softmax classifier,
%  using softmaxTrain from the softmax exercise.
%  Training the softmax classifer for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4;
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
    numTrainImages);
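% note: permute moves the image index to the last dimension, so the reshape
% stacks all pooled values of one image into a single column; each column of
% softmaxX is the (hiddenSize * poolRows * poolCols) x 1 vector of one image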
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
    numClasses, softmaxLambda, softmaxX, softmaxY, options);

%%======================================================================
%% STEP 5: Test classifer
%  Now you will test your trained classifer against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100);

% You should expect to get an accuracy of around 80% on the test images.

cnnConvolve.m

function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
patchSize = patchDim*patchDim;
numImages = size(images, 4);
imageDim = size(images, 1);
imageChannels = size(images, 3);

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);

% Instructions:
%   Convolve every feature with every large image here to produce the 
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1) 
%   matrix convolvedFeatures, such that 
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times: 
%   Convolving with 100 images should take less than 3 minutes 
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps
WT = W*ZCAWhite;
bT = b-WT*meanPatch;
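% note (not part of the original starter code): for a raw patch x the
% autoencoder computes sigmoid(W * ZCAWhite * (x - meanPatch) + b); expanding
% gives sigmoid((W*ZCAWhite)*x + (b - W*ZCAWhite*meanPatch)), so whitening and
% mean subtraction fold into WT and bT, and each patch only needs WT*x + bT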
% --------------------------------------------------------

for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:3

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      %feature = zeros(8,8); % You should replace this
      offset = (channel-1)*patchSize;
      feature = reshape(WT(featureNum,(offset+1):(offset+patchSize)),patchDim,patchDim);

      % ------------------------

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = flipud(fliplr(squeeze(feature)));
      
      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----
       convolveThisChannel = conv2(im,feature,'valid');
       convolvedImage = convolvedImage + convolveThisChannel;            % sum over the three channels; all three channels jointly feed this feature's response
    
      % ------------------------

    end
    
    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage + bT(featureNum));

    % ------------------------
    
    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

function sigm = sigmoid(x)

    sigm = 1 ./ (1 + exp(-x));
end

end

cnnPool.m

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%     

numImages = size(convolvedFeatures, 2);
numFeatures = size(convolvedFeatures, 1);
convolvedDim = size(convolvedFeatures, 3);

pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the 
%   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim) 
%   matrix pooledFeatures, such that
%   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the 
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region 
%   (see http://ufldl/wiki/index.php/Pooling )
%   
%   Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------
numBlocks = floor(convolvedDim/poolDim);             % number of pooling blocks per dimension (57/19 = 3 here); thanks to the floor, poolDim need not divide convolvedDim exactly, any leftover rows/columns are simply discarded
for featureNum = 1:numFeatures
    for imageNum=1:numImages
        for poolRow = 1:numBlocks
            for poolCol = 1:numBlocks
                features = convolvedFeatures(featureNum,imageNum,(poolRow-1)*poolDim+1:poolRow*poolDim,(poolCol-1)*poolDim+1:poolCol*poolDim);
                pooledFeatures(featureNum,imageNum,poolRow,poolCol) = mean(features(:));
            end
        end
    end
end
end

Result:

Accuracy: 78.938%

Roughly in line with the ~80% mentioned in the lecture notes.

PS: lecture notes:

http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

http://deeplearning.stanford.edu/wiki/index.php/Pooling

http://deeplearning.stanford.edu/wiki/index.php/Exercise:Convolution_and_Pooling
