
Kaggle Competition - Digit Recognizer - Part 1 (Building the Dataset with PyTorch)

       First, download the dataset from Kaggle at https://www.kaggle.com/c/digit-recognizer/data. It contains three CSV files. train.csv holds the labeled data used for training and tuning; test.csv holds unlabeled data that is only needed when generating the submission file.
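        To get a feel for the data, we can peek at train.csv with pandas (a small sketch, assuming the file has been downloaded into the working directory): the first column is the digit label, and the remaining 784 columns are the flattened 28x28 pixel values.

import pandas as pd

# Quick inspection of train.csv (assumed to be in the current working directory)
raw_train = pd.read_csv('train.csv')
print(raw_train.shape)        # (42000, 785): 1 label column + 784 pixel columns
print(raw_train.columns[:3])  # label, pixel0, pixel1, ...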

        Here, I first randomly split the data in train.csv into two tables, one for training and one for cross-validation. The code is simple and relies on a few basic pandas and scikit-learn functions.

# use torch.utils.data.Dataset to build my dataset from train.csv and test.csv
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import DataLoader

def train_val_split(train='train.csv', train_file='train_set.csv', val_file='val_set.csv'):
    # training set "train.csv" was downloaded from kaggle.com
    train_data = pd.read_csv(train)
    # the training data contains both features and labels;
    # split it into a training set and a validation set
    train_set, val_set = train_test_split(train_data, test_size=0.2)
    # write the two splits back to csv files
    train_set.to_csv(train_file, index=False)
    val_set.to_csv(val_file, index=False)
    print('train_data.shape:', train_data.shape)
    print('train_set.shape:', train_set.shape)
    print('val_set.shape:', val_set.shape)

train_val_split('train.csv', 'train_set.csv', 'val_set.csv')
The output is:
train_data.shape: (42000, 785)
train_set.shape: (33600, 785)
val_set.shape: (8400, 785)

        With that, the training data and the cross-validation data are stored in two separate files.
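        One optional refinement, not used in the split above, is to pass the label column to train_test_split's stratify argument so that both splits keep roughly the same digit distribution. A minimal sketch:

# Optional: stratified variant of the split (a sketch, not what train_val_split above does)
from sklearn.model_selection import train_test_split
import pandas as pd

df = pd.read_csv('train.csv')
train_set, val_set = train_test_split(
    df,
    test_size=0.2,
    stratify=df['label'],  # 'label' is the first column of train.csv
    random_state=0,        # fix the seed for a reproducible split
)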

        Next, we need to subclass PyTorch's Dataset class to build our own dataset.


# image preprocessing: scale pixel values to [0, 1], then normalize to [-1, 1]
def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5  # shift and scale to the range [-1, 1]
    x = torch.from_numpy(x)
    return x

# define the class of my MNIST dataset
class MyMNIST(torch.utils.data.Dataset):  # custom dataset class, inheriting from torch.utils.data.Dataset
    def __init__(self, datatxt, train=True, transform=data_tf, target_transform=None):  # store the csv path and options
        self.data = pd.read_csv(datatxt)
        self.transform = transform
        self.train = train
        if self.train:
            # labeled data: the first column is the label, the rest are pixel values
            self.X = self.data.iloc[:, 1:]
            self.X = np.array(self.X)
            self.y = self.data.iloc[:, 0]
            self.y = np.array(self.y)
        else:
            # unlabeled test data: every column is a pixel value
            self.X = self.data
            self.X = np.array(self.X)

    def __getitem__(self, index):  # required method: return one sample (and its label) by index
        im = torch.tensor(self.X[index], dtype=torch.float)
        if self.transform is not None:
            im = self.transform(im)
        if self.train:
            label = torch.tensor(self.y[index], dtype=torch.long)
            return im, label
        else:
            return im

    def __len__(self):  # return the number of samples in the dataset
        return len(self.data)
 
# build the training, validation, and test sets from csv and apply the preprocessing
X_train = MyMNIST(datatxt='train_set.csv', train=True, transform=data_tf)
X_val = MyMNIST(datatxt='val_set.csv', train=True, transform=data_tf)
X_test = MyMNIST(datatxt='test.csv', train=False, transform=data_tf)
# wrap the datasets in DataLoaders to iterate over mini-batches
train_data = DataLoader(X_train, batch_size=64, shuffle=True)
val_data = DataLoader(X_val, batch_size=64, shuffle=False)
test_data = DataLoader(X_test, batch_size=1000, shuffle=False)

That is all the code needed to build the datasets.
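As a quick sanity check (a small sketch, not part of the pipeline itself), we can pull one batch from the training loader built above and confirm the shapes: each image comes out as a flat 784-dimensional vector and each label as a scalar.

# Sanity check: fetch one mini-batch from the training DataLoader built above
im, label = next(iter(train_data))
print(im.shape)     # torch.Size([64, 784]) -> 64 flattened 28x28 images
print(label.shape)  # torch.Size([64])
print(im.min().item(), im.max().item())  # pixel values should lie in [-1, 1]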

In the next part, we can move on to building the network architecture.