
Paper walkthrough | [DenseNet] Densely Connected Convolutional Networks (with a PyTorch code walkthrough)




1 Introduction


Paper: Densely Connected Convolutional Networks
Affiliations: Cornell University, Tsinghua University, Facebook AI Research
Published: CVPR 2017 (latest arXiv revision: January 2018)
Official code: https://github.com/liuzhuang13/DenseNet
PyTorch code: the implementation walked through below follows the same design.

1.1 Background

1. Convolutional neural networks (CNNs) have a clear advantage in object recognition for computer vision; typical models include LeNet5, VGG, Highway Networks, and Residual Networks (ResNet).

2. Deeper CNNs generally perform better, but they face the vanishing-gradient problem: the more layers the signal passes through, the more the information from earlier layers weakens and dissipates.

3. Several measures have already been proposed to address this:
(1) Highway Networks and ResNet add residual (shortcut) connections between earlier and later layers so that information is not lost.
(2) Stochastic depth shortens a ResNet by randomly dropping some of its layers during training.
(3) FractalNets repeatedly combine parallel sequences of layers, preserving depth while easing the problem.
These measures all share one trait: each creates a short connection from an earlier layer to a later layer, like this:
[Figure: a shortcut connection between an earlier layer and a later layer]

1.2 Overview of the paper

1.2.1 A first look at the model structure

The DenseNet proposed in this paper goes further: to guarantee maximum information flow through the network, every layer is connected to all layers before it, i.e. each layer's input is the concatenation of the outputs of all preceding layers (ResNet uses summation instead). The overall structure looks like this:
[Figure: DenseNet overall structure with densely connected layers inside each block]

1.2.2 Advantages

1. Fewer parameters are needed.

2. Information (in the forward pass) and gradients (in the backward pass) are preserved better throughout the network, which makes it possible to train deeper models.

3. Dense connections have a regularizing effect and reduce overfitting on smaller training sets.

1.2.3 Experimental results

Evaluated on four benchmark datasets (CIFAR-10, CIFAR-100, SVHN, and ImageNet).
It outperforms the state of the art on most of these tasks.

2 Model structure

2.1 Overall structure

[Figure: overall architecture, consisting of a feature block, alternating dense blocks and transition blocks, and a classification block]

1. Input: an image.
2. A feature block (the first convolution layer in the figure; a pooling layer can follow it, not drawn here).
3. The first dense block, which contains n dense layers (the grey circles). The layers are densely connected: each layer's input is the concatenation of the outputs of all preceding layers.
4. The first transition block, consisting of a convolution and a pooling layer.
5. The second dense block.
6. The second transition block.
7. The third dense block.
8. The classification block, consisting of pooling and linear layers, which outputs the scores fed to softmax.
9. The prediction layer: softmax classification.
10. Output: class probabilities.

The authors evaluate on four datasets. For CIFAR-10, CIFAR-100, and SVHN the network is built from the 3 dense blocks + 2 transition blocks shown above; for ImageNet it uses 4 dense blocks + 3 transition blocks. The parameter settings differ slightly between the two. The rest of this post uses the ImageNet DenseNet as the running example.

2.2 Feature Block

The feature block is the part between the input and the first dense block. The structure figure above only shows a convolution; in the ImageNet DenseNet it is actually followed by a pooling layer. The computation is:

Input: image (224 * 224 * 3)
1. Convolution: in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3, output (112 * 112 * 64)
2. Batch normalization, shape unchanged (112 * 112 * 64)
3. ReLU activation, shape unchanged (112 * 112 * 64)
4. Max pooling: kernel_size=3, stride=2, padding=1, output (56 * 56 * 64)

from torch.nn import Sequential, Conv2d, BatchNorm2d, ReLU, MaxPool2d

from ..utils import RichRepr


class FeatureBlock(RichRepr, Sequential):
    def __init__(self, in_channels, out_channels):
        super(FeatureBlock, self).__init__()

        self.in_channels = in_channels
        self.out_channels = out_channels

        # add_module: attach a named child module to the current module
        self.add_module('conv', Conv2d(in_channels, out_channels, kernel_size=7, stride=2, padding=3, bias=False))
        self.add_module('norm', BatchNorm2d(out_channels))
        self.add_module('relu', ReLU(inplace=True))
        self.add_module('pool', MaxPool2d(kernel_size=3, stride=2, padding=1))
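
A quick way to sanity-check the shapes above is to build the same stem as a plain Sequential (avoiding the package-relative RichRepr import) and push a dummy 224 x 224 image through it; this is only an illustrative check, not part of the original code:

import torch
from torch.nn import Sequential, Conv2d, BatchNorm2d, ReLU, MaxPool2d

# standalone equivalent of FeatureBlock(3, 64)
stem = Sequential(
    Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    BatchNorm2d(64),
    ReLU(inplace=True),
    MaxPool2d(kernel_size=3, stride=2, padding=1),
)
print(stem(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 56, 56])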

2.3 Dense Block and Dense Layer

2.3.1 Dense Layer

A dense block consists of L dense layers joined by dense connectivity. The formula below captures what dense connectivity means; the output of the l-th layer is:

x_l = H_l([x_0, x_1, ..., x_{l-1}])

H_l is the layer's transformation, and its input is the concatenation of x_0 through x_{l-1}, i.e. the block's original input (x_0) together with the output of every preceding layer. The concatenation happens along the channel dimension: a (56 * 56 * 64) tensor concatenated with a (56 * 56 * 32) tensor gives a (56 * 56 * 96) tensor.
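
As a quick illustration of this channel-wise concatenation (a batch dimension is added, so the tensors are NCHW):

import torch

x_prev = torch.randn(1, 64, 56, 56)  # e.g. the feature-block output
x_new = torch.randn(1, 32, 56, 56)   # the output of one dense layer (k = 32)
merged = torch.cat([x_prev, x_new], dim=1)
print(merged.shape)                  # torch.Size([1, 96, 56, 56])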

ResNet is different: it simply adds the previous layer's output to the current layer's output:

x_l = H_l(x_{l-1}) + x_{l-1}
The computation H(·) inside a dense layer proceeds as follows (the shapes in parentheses use the first dense layer of the first dense block as an example; the growth rate k is fixed for the whole model, here k = 32):

Input: the output of the feature block (56 * 56 * 64), or of the previous dense layer
1. Batch normalization, output (56 * 56 * 64)
2. ReLU, output (56 * 56 * 64)
3. Bottleneck (optional, used to reduce the number of feature maps), in 3 steps:
 - 1x1 convolution, kernel_size=1, 4k channels, output (56 * 56 * 128)
 - Batch normalization (56 * 56 * 128)
 - ReLU (56 * 56 * 128)
4. Convolution, kernel_size=3, k channels (56 * 56 * 32)
5. Dropout (optional, to prevent overfitting) (56 * 56 * 32)

from typing import Optional

from torch.nn import Sequential, BatchNorm2d, ReLU, Conv2d, Dropout2d

from .bottleneck import Bottleneck
from ..utils import RichRepr

class DenseLayer(RichRepr, Sequential):
    r"""
    Dense Layer as described in [DenseNet](https://arxiv.org/abs/1608.06993)
    and implemented in https://github.com/liuzhuang13/DenseNet

    Consists of:

    - Batch Normalization
    - ReLU
    - (Bottleneck)
    - 3x3 Convolution
    - (Dropout)
    """

    def __init__(self, in_channels: int, out_channels: int,
                 bottleneck_ratio: Optional[int] = None, dropout: float = 0.0):
        super(DenseLayer, self).__init__()

        self.in_channels = in_channels
        self.out_channels = out_channels

        self.add_module('norm', BatchNorm2d(num_features=in_channels))
        self.add_module('relu', ReLU(inplace=True))

        if bottleneck_ratio is not None:
            self.add_module('bottleneck', Bottleneck(in_channels, bottleneck_ratio * out_channels))
            in_channels = bottleneck_ratio * out_channels

        self.add_module('conv', Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False))

        if dropout > 0:
            self.add_module('drop', Dropout2d(dropout, inplace=True))

The Bottleneck code is as follows:

from torch.nn import Sequential, Conv2d, BatchNorm2d, ReLU

from ..utils import RichRepr


class Bottleneck(RichRepr, Sequential):
    r"""
    A 1x1 convolutional layer, followed by Batch Normalization and ReLU
    """

    def __init__(self, in_channels: int, out_channels: int):
        super(Bottleneck, self).__init__()

        self.in_channels = in_channels
        self.out_channels = out_channels

        self.add_module('conv', Conv2d(in_channels, out_channels, kernel_size=1, bias=False))
        self.add_module('norm', BatchNorm2d(num_features=out_channels))
        self.add_module('relu', ReLU(inplace=True))
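
For reference, here is a standalone sketch of H(·) for that first dense layer (k = 32, bottleneck ratio 4), wired up as a plain Sequential so it runs without the package-relative imports; it reproduces the shapes listed above:

import torch
from torch.nn import Sequential, BatchNorm2d, ReLU, Conv2d

h = Sequential(
    BatchNorm2d(64), ReLU(inplace=True),
    Conv2d(64, 128, kernel_size=1, bias=False),              # bottleneck: 1x1 conv to 4k = 128 channels
    BatchNorm2d(128), ReLU(inplace=True),
    Conv2d(128, 32, kernel_size=3, padding=1, bias=False),   # 3x3 conv down to k = 32 channels
)
print(h(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])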

2.3.2 Dense Block

A dense block consists of L dense layers:
layer 0: input (56 * 56 * 64) -> output (56 * 56 * 32)
layer 1: input (56 * 56 * (64 + 32 * 1)) -> output (56 * 56 * 32)
layer 2: input (56 * 56 * (64 + 32 * 2)) -> output (56 * 56 * 32)
...
layer i: input (56 * 56 * (64 + 32 * i)) -> output (56 * 56 * 32)

Note that every dense layer produces the same number of output channels, while the number of input channels grows from layer to layer because, as described above, each layer's input is the concatenation of all preceding outputs.

from typing import Optional

import torch
from torch.nn import Module

from .dense_layer import DenseLayer
from ..utils import RichRepr

class DenseBlock(RichRepr, Module):
    r"""
    Dense Block as described in [DenseNet](https://arxiv.org/abs/1608.06993)
    and implemented in https://github.com/liuzhuang13/DenseNet

    - Consists of several DenseLayer (possibly using a Bottleneck and Dropout) with the same output shape
    - The first DenseLayer is fed with the block input
    - Each subsequent DenseLayer is fed with a tensor obtained by concatenating the input and the output
      of the previous DenseLayer on the channel axis
    - The block output is the concatenation of the output of every DenseLayer, and optionally the block input,
      so it will have a channel depth of (growth_rate * num_layers) or (growth_rate * num_layers + in_channels)
    """

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int,
                 concat_input: bool = False, dense_layer_params: Optional[dict] = None):
        super(DenseBlock, self).__init__()

        self.concat_input = concat_input
        self.in_channels = in_channels
        self.growth_rate = growth_rate
        self.num_layers = num_layers
        self.out_channels = growth_rate * num_layers
        if self.concat_input:
            self.out_channels += self.in_channels

        if dense_layer_params is None:
            dense_layer_params = {}

        for i in range(num_layers):
            # add a dense layer: norm -> relu -> (bottleneck) -> conv -> (dropout)
            self.add_module(
                f'layer_{i}',
                DenseLayer(in_channels=in_channels + i * growth_rate, out_channels=growth_rate, **dense_layer_params)
            )

    def forward(self, block_input):
        layer_input = block_input
        # 1-D empty tensor (shape (0,)); torch.cat skips it on the first iteration
        layer_output = block_input.new_empty(0)

        all_outputs = [block_input] if self.concat_input else []
        for layer in self._modules.values():
            layer_input = torch.cat([layer_input, layer_output], dim=1)
            layer_output = layer(layer_input)
            all_outputs.append(layer_output)

        return torch.cat(all_outputs, dim=1)
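
A minimal standalone sketch of the same forward loop (plain 3x3 convolutions standing in for DenseLayer) makes the channel growth explicit; it only illustrates the concatenation logic, it is not the class above:

import torch
from torch.nn import Conv2d

k, num_layers, in_channels = 32, 6, 64
# stand-ins for DenseLayer: layer i maps (in_channels + i*k) channels to k channels
layers = [Conv2d(in_channels + i * k, k, kernel_size=3, padding=1) for i in range(num_layers)]

layer_input, outputs = torch.randn(1, in_channels, 56, 56), []
for layer in layers:
    out = layer(layer_input)
    outputs.append(out)
    layer_input = torch.cat([layer_input, out], dim=1)  # the next input grows by k channels
print(torch.cat(outputs, dim=1).shape)  # torch.Size([1, 192, 56, 56]), i.e. k * num_layers channels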

2.4 Transition Block

The transition block sits between two dense blocks and consists of a convolution plus a pooling layer (the shapes below use the first transition block as an example; in DenseNet-121 the first dense block, including the feature maps it carries over from its input, outputs 256 channels):

Input: the output of the preceding dense block (56 * 56 * 256)
1. Batch normalization, output (56 * 56 * 256)
2. ReLU, output (56 * 56 * 256)
3. 1x1 convolution, kernel_size=1; a preset compression factor (between 0 and 1) shrinks the number of channels here to save parameters, output (56 * 56 * (256 * compression))
4. 2x2 average pooling, output (28 * 28 * (256 * compression))

from math import ceil

from torch.nn import Sequential, BatchNorm2d, ReLU, Conv2d, AvgPool2d

from ..utils import RichRepr


class Transition(RichRepr, Sequential):
    r"""
    Transition Block as described in [DenseNet](https://arxiv.org/abs/1608.06993)
    and implemented in https://github.com/liuzhuang13/DenseNet

    Consists of:
    - Batch Normalization
    - ReLU
    - 1x1 Convolution (with optional compression of the number of channels)
    - 2x2 Average Pooling
    """

    def __init__(self, in_channels, compression: float = 1.0):
        super(Transition, self).__init__()
        if not 0.0 < compression <= 1.0:
            raise ValueError(f'Compression must be in (0, 1] range, got {compression}')

        self.in_channels = in_channels
        # the compression factor shrinks the number of output channels of the transition
        self.out_channels = int(ceil(compression * in_channels))

        self.add_module('norm', BatchNorm2d(num_features=self.in_channels))
        self.add_module('relu', ReLU(inplace=True))
        self.add_module('conv', Conv2d(self.in_channels, self.out_channels, kernel_size=1, bias=False))
        self.add_module('pool', AvgPool2d(kernel_size=2, stride=2))
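
Again as a standalone sanity check (assuming the first dense block feeds 256 channels into the transition, as in DenseNet-121, and compression = 0.5):

import torch
from math import ceil
from torch.nn import Sequential, BatchNorm2d, ReLU, Conv2d, AvgPool2d

in_channels, compression = 256, 0.5
out_channels = int(ceil(compression * in_channels))  # 128
trans = Sequential(
    BatchNorm2d(in_channels), ReLU(inplace=True),
    Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
    AvgPool2d(kernel_size=2, stride=2),
)
print(trans(torch.randn(1, in_channels, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])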

2.5 Alternating Dense Blocks and Transitions

In the paper, the DenseNet built for ImageNet consists of 4 dense blocks and 3 transitions. Following the process described above (using DenseNet-121 as the example: k = 32, compression = 0.5, and each block's output keeps the feature maps of its input), the data flow evolves as:

Dense Block 1: input (56 * 56 * 64), output (56 * 56 * 256)
Transition 1: input (56 * 56 * 256), output (28 * 28 * 128)
Dense Block 2: input (28 * 28 * 128), output (28 * 28 * 512)
Transition 2: input (28 * 28 * 512), output (14 * 14 * 256)
Dense Block 3: input (14 * 14 * 256), output (14 * 14 * 1024)
Transition 3: input (14 * 14 * 1024), output (7 * 7 * 512)
Dense Block 4: input (7 * 7 * 512), output (7 * 7 * 1024)
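
If torchvision happens to be installed, its densenet121 (which follows the same design) can be used to double-check this trace block by block; this relies on torchvision's model, not on the code in this post:

import torch
from torchvision.models import densenet121

model = densenet121()  # random weights are fine for a shape check
x = torch.randn(1, 3, 224, 224)
for name, module in model.features.named_children():
    x = module(x)
    print(name, tuple(x.shape))
# denseblock1 (1, 256, 56, 56), transition1 (1, 128, 28, 28), ...,
# denseblock4 (1, 1024, 7, 7)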

2.6 ClassificationBlock

The final step is the ClassificationBlock, which flattens the 3-D feature map into a 1-D vector and applies a fully connected layer in preparation for softmax. The computation is:

Input: the output of the last dense block (7 * 7 * 1024 for DenseNet-121)
1. Batch normalization, output (7 * 7 * 1024)
2. ReLU, output (7 * 7 * 1024)
3. Average pooling, kernel_size=7, stride=1, output (1 * 1 * 1024)
4. Flatten: (1 * 1 * 1024) becomes a 1024-dimensional vector
5. Linear (fully connected) layer, output (1 * classes_num), where classes_num is the number of target classes

from torch.nn import Sequential, BatchNorm2d, ReLU, AvgPool2d, Linear

from ..shared import Flatten
from ..utils import RichRepr

class ClassificationBlock(RichRepr, Sequential):
    r"""
    Classification block for [DenseNet](https://arxiv.org/abs/1608.06993),
    takes in a 7x7 feature map and outputs logit scores for classification
    """

    def __init__(self, in_channels, output_classes):
        super(ClassificationBlock, self).__init__()

        self.in_channels = in_channels
        self.out_classes = output_classes

        self.add_module('norm', BatchNorm2d(num_features=in_channels))
        self.add_module('relu', ReLU(inplace=True))
        self.add_module('pool', AvgPool2d(kernel_size=7, stride=1))
        self.add_module('flatten', Flatten())
        self.add_module('linear', Linear(in_channels, output_classes))

The Flatten code is as follows:

from torch.nn import Module

class Flatten(Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

Finally, the output above is passed through a softmax to predict the probability of each class.

logits = self(x)  # logits are the output of the final linear layer
return F.softmax(logits, dim=1)

2.7 Putting it all together

Combining all of the pieces above, we can build the complete DenseNet model. The code is as follows:

from itertools import zip_longest
from typing import Sequence, Union, Optional
from torch.nn import Sequential, Conv2d, BatchNorm2d, Linear, init
from torch.nn import functional as F
from .classification_block import ClassificationBlock
from .feature_block import FeatureBlock
from .transition import Transition
from ..shared import DenseBlock


# 繼承Sequential類
class DenseNet(Sequential):
    def __init__(self,
                 in_channels: int = 3,
                 output_classes: int = 1000,
                 initial_num_features: int = 64,
                 dropout: float = 0.0,

                 dense_blocks_growth_rates: Union[int, Sequence[int]] = 32,
                 dense_blocks_bottleneck_ratios: Union[Optional[int], Sequence[Optional[int]]] = 4,
                 dense_blocks_num_layers: Union[int, Sequence[int]] = (6, 12, 24, 16),
                 transition_blocks_compression_factors: Union[float, Sequence[float]] = 0.5):
        """
        構建完成densenet模型
        :param in_channels: 輸入的channel數目
        :param output_classes: 待分類別樹
        :param initial_num_features: 進入第一個Block的feature map數目
        :param dropout: dropout的比率
        :param dense_blocks_growth_rates: k(block中的channel數)
        :param dense_blocks_bottleneck_ratios: (bottleneck的比率)
        :param dense_blocks_num_layers: densenet的block數目
        :param transition_blocks_compression_factors: 在transition層中的壓縮係數(0-1之間)
        """
        super(DenseNet, self).__init__()

        # region Parameters handling
        self.in_channels = in_channels
        self.output_classes = output_classes

        # broadcast a scalar setting to one value per dense block, e.g. 32 -> (32, 32, 32, 32)
        if type(dense_blocks_growth_rates) == int:
            dense_blocks_growth_rates = (dense_blocks_growth_rates,) * 4
        if dense_blocks_bottleneck_ratios is None or type(dense_blocks_bottleneck_ratios) == int:
            dense_blocks_bottleneck_ratios = (dense_blocks_bottleneck_ratios,) * 4
        if type(dense_blocks_num_layers) == int:
            dense_blocks_num_layers = (dense_blocks_num_layers,) * 4
        if type(transition_blocks_compression_factors) == float:
            transition_blocks_compression_factors = (transition_blocks_compression_factors,) * 3
        # endregion

        # region First convolution
        # 1. initial feature block: conv -> norm -> relu -> pool
        features = FeatureBlock(in_channels, initial_num_features)
        current_channels = features.out_channels
        self.add_module('features', features)
        # endregion

        # region Dense Blocks and Transition layers
        # parameters for the dense blocks
        dense_blocks_params = [
            {
                'growth_rate': gr,
                'num_layers': nl,
                'dense_layer_params': {
                    'dropout': dropout,
                    'bottleneck_ratio': br
                }
            }
            for gr, nl, br in zip(dense_blocks_growth_rates, dense_blocks_num_layers, dense_blocks_bottleneck_ratios)
        ]
        # parameters for the transition layers
        transition_blocks_params = [
            {
                'compression': c
            }
            for c in transition_blocks_compression_factors
        ]

        block_pairs_params = zip_longest(dense_blocks_params, transition_blocks_params)
        for block_pair_idx, (dense_block_params, transition_block_params) in enumerate(block_pairs_params):
            block = DenseBlock(current_channels, **dense_block_params)
            current_channels = block.out_channels
            # add the dense blocks one after another
            self.add_module(f'block_{block_pair_idx}', block)

            if transition_block_params is not None:
                transition = Transition(current_channels, **transition_block_params)
                current_channels = transition.out_channels
                # add a transition block: norm -> relu -> conv -> pool
                self.add_module(f'trans_{block_pair_idx}', transition)
        # endregion

        # region Classification block
        # add the final classification block: norm -> relu -> pool -> flatten -> linear
        self.add_module('classification', ClassificationBlock(current_channels, output_classes))
        # endregion

        # region Weight initialization
        for module in self.modules():
            if isinstance(module, Conv2d):
                init.kaiming_normal_(module.weight)
            elif isinstance(module, BatchNorm2d):
                module.reset_parameters()
            elif isinstance(module, Linear):
                init.xavier_uniform_(module.weight)
                init.constant_(module.bias, 0)
        # endregion

    def predict(self, x):
        logits = self(x)
        return F.softmax(logits, dim=1)
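
Assuming the modules above are arranged as a package (the import path below is hypothetical and depends on your layout), the full model can be exercised like this:

import torch
from densenet import DenseNet  # hypothetical import path; adjust to your package layout

model = DenseNet(
    in_channels=3,
    output_classes=1000,
    initial_num_features=64,
    dense_blocks_growth_rates=32,
    dense_blocks_bottleneck_ratios=4,
    dense_blocks_num_layers=(6, 12, 24, 16),  # the DenseNet-121 configuration
    transition_blocks_compression_factors=0.5,
)
x = torch.randn(2, 3, 224, 224)
print(model(x).shape)               # torch.Size([2, 1000]), raw logits
print(model.predict(x).sum(dim=1))  # each row of probabilities sums to 1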

With the DenseNet class above, different arguments produce different custom DenseNets. In the paper the authors tried models of the following depths.
The difference lies in the dense_blocks_num_layers setting, i.e. the number of dense layers in each dense block.

from .densenet import DenseNet

class DenseNet121(DenseNet):
    def __init__(self, dropout: float = 0.0):
        super(DenseNet121, self).__init__(
            in_channels=3,
            output_classes=1000,
            initial_num_features=64,
            dropout=dropout,
            dense_blocks_growth_rates=32,
            dense_blocks_bottleneck_ratios=4,
            dense_blocks_num_layers=(6, 12, 24, 16),
            transition_blocks_compression_factors=0.5
        )


class DenseNet169(DenseNet):
    def __init__(self, dropout: float = 0.0):
        super(DenseNet169, self).__init__(
            in_channels=3,
            output_classes=1000,
            initial_num_features=64,
            dropout=dropout,
            dense_blocks_growth_rates=32,
            dense_blocks_bottleneck_ratios=4,
            dense_blocks_num_layers=(6, 12, 32, 32),
            transition_blocks_compression_factors=0.5
        )


class DenseNet201(DenseNet):
    def __init__(self, dropout: float = 0.0):
        super(DenseNet201, self).__init__(
            in_channels=3,
            output_classes=1000,
            initial_num_features=64,
            dropout=dropout,
            dense_blocks_growth_rates=32,
            dense_blocks_bottleneck_ratios=4,
            dense_blocks_num_layers=(6, 12, 48, 32),
            transition_blocks_compression_factors=0.5
        )
