
Hadoop: The Inverted Index Process in Detail

An inverted index lets you look up documents by the words they contain. Instead of going from a document to the words inside it, it goes the opposite way, from each word to the documents containing it, which is why it is called an inverted index.
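
In its simplest form, an inverted index is just a map from each word to the documents (and counts) in which it appears. Here is a minimal in-memory sketch in plain Java; the class and method names are my own illustration, not part of Hadoop:

import java.util.*;

// A tiny in-memory inverted index: word -> (document -> frequency).
public class SimpleInvertedIndex {
    private final Map<String, Map<String, Integer>> index = new HashMap<>();

    // Record one occurrence of a word in a document.
    public void add(String word, String doc) {
        index.computeIfAbsent(word, w -> new HashMap<>())
             .merge(doc, 1, Integer::sum);
    }

    // Look up every document containing the word, along with its frequency there.
    public Map<String, Integer> lookup(String word) {
        return index.getOrDefault(word, Collections.emptyMap());
    }
}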

Let's walk through an example to get a feel for what an inverted index is.

I have prepared two files here, 1.txt and 2.txt.

The contents of 1.txt:

    I Love Hadoop
    I like ZhouSiYuan
    I love me

The contents of 2.txt:

    I Love MapReduce
    I like NBA
    I love Hadoop

I'm using the default input format here, TextInputFormat, which reads the input line by line with each line's byte offset as the key. If that is unfamiliar, see my earlier articles:

How MapReduce Works
Hadoop Data Types and Custom Input/Output

So the records handed to the map stage look like this.
Map input from 1.txt:

0   I Love Hadoop
15  I like ZhouSiYuan
34  I love me

Map input from 2.txt:

0   I Love MapReduce
18  I like NBA
30  I love Hadoop
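
If you want to verify those offsets yourself, here is a tiny standalone sketch. I'm assuming the files were saved with Windows-style \r\n line endings (2 bytes per line break), which is what makes the offsets 0, 18 and 30 rather than 0, 17 and 28:

// Print the byte offset of each line of 2.txt, assuming \r\n line endings.
public class OffsetDemo {
    public static void main(String[] args) {
        String[] lines = {"I Love MapReduce", "I like NBA", "I love Hadoop"};
        long offset = 0;
        for (String line : lines) {
            System.out.println(offset + "\t" + line); // prints 0, 18, 30
            offset += line.length() + 2;              // line bytes plus "\r\n"
        }
    }
}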

Map stage
The term frequency is the value.
The word combined with the file URI forms the key.
For example:
key: I:hdfs://192.168.52.140:9000/index/2.txt    value: 1

Why set up the key and value this way?
Because this design lets us reuse the map-side sort that the MapReduce framework performs for free, which gathers the frequencies of the same word into one list, as the toy simulation below illustrates.
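
To make that concrete, here is a toy simulation in ordinary Java (not Hadoop's real shuffle code) of how sorting composite keys gathers the frequencies of one word into a single list:

import java.util.*;

// Toy stand-in for the map-side sort: identical composite keys end up
// adjacent, and their values are collected into one list.
public class GroupDemo {
    public static void main(String[] args) {
        String uri = "hdfs://192.168.52.140:9000/index/1.txt";
        String[][] mapOutput = { {"I:" + uri, "1"}, {"Love:" + uri, "1"}, {"I:" + uri, "1"} };
        SortedMap<String, List<String>> grouped = new TreeMap<>();
        for (String[] kv : mapOutput) {
            grouped.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        // Prints I:hdfs://...1.txt  list[1, 1] followed by Love:hdfs://...1.txt  list[1]
        grouped.forEach((k, v) -> System.out.println(k + "\tlist" + v));
    }
}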

The map output for 1.txt:

I:hdfs://192.168.52.140:9000/index/1.txt            1
Love:hdfs://192.168.52.140:9000/index/1.txt         1
Hadoop:hdfs://192.168.52.140:9000/index/1.txt       1
I:hdfs://192.168.52.140:9000/index/1.txt            1
like:hdfs://192.168.52.140:9000/index/1.txt         1
ZhouSiYuan:hdfs://192.168.52.140:9000/index/1.txt   1
I:hdfs://192.168.52.140:9000/index/1.txt            1
love:hdfs://192.168.52.140:9000/index/1.txt         1
me:hdfs://192.168.52.140:9000/index/1.txt           1

The map output for 2.txt:

I:hdfs://192.168.52.140:9000/index/2.txt            1
Love:hdfs://192.168.52.140:9000/index/2.txt         1
MapReduce:hdfs://192.168.52.140:9000/index/2.txt    1
I:hdfs://192.168.52.140:9000/index/2.txt            1
like:hdfs://192.168.52.140:9000/index/2.txt         1
NBA:hdfs://192.168.52.140:9000/index/2.txt          1
I:hdfs://192.168.52.140:9000/index/2.txt            1
love:hdfs://192.168.52.140:9000/index/2.txt         1   
Hadoop:hdfs://192.168.52.140:9000/index/2.txt       1

After the MapReduce framework's built-in map-side sort, the grouped output for 1.txt is:

Hadoop:hdfs://192.168.52.140:9000/index/1.txt       list{1}
I:hdfs://192.168.52.140:9000/index/1.txt            list{1,1,1}
Love:hdfs://192.168.52.140:9000/index/1.txt         list{1}
ZhouSiYuan:hdfs://192.168.52.140:9000/index/1.txt   list{1}
like:hdfs://192.168.52.140:9000/index/1.txt         list{1}
love:hdfs://192.168.52.140:9000/index/1.txt         list{1}
me:hdfs://192.168.52.140:9000/index/1.txt           list{1}

After the MapReduce framework's built-in map-side sort, the grouped output for 2.txt is:

Hadoop:hdfs://192.168.52.140:9000/index/2.txt       list{1}
I:hdfs://192.168.52.140:9000/index/2.txt            list{1,1,1}
Love:hdfs://192.168.52.140:9000/index/2.txt         list{1}
MapReduce:hdfs://192.168.52.140:9000/index/2.txt    list{1}
NBA:hdfs://192.168.52.140:9000/index/2.txt          list{1}
like:hdfs://192.168.52.140:9000/index/2.txt         list{1}
love:hdfs://192.168.52.140:9000/index/2.txt         list{1}

Combine stage:
the key becomes the word alone,
and the value is rebuilt from the URI and the term frequency,
for example:
key: I    value: hdfs://192.168.52.140:9000/index/2.txt:3

Why design the key and value this way?
Because the shuffle step must deliver all records for the same word (word, URI and frequency) to a single Reducer.
Resetting the key to the bare word lets the framework's default shuffle route every record for that word to the same Reducer; the sketch below shows how the combiner rewrites the key.

The combine stage also sums up the values that share the same key.
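
The key rewriting inside the combiner is just a split at the first colon. A word never contains a colon, while the HDFS URI does (hdfs://host:port/...), so indexOf(":") always lands on the separator. A standalone sketch of that step:

// Split a composite key "word:URI" at the FIRST colon; the colons inside
// the URI (hdfs://192.168.52.140:9000/...) all come later and stay intact.
public class KeySplitDemo {
    public static void main(String[] args) {
        String key = "I:hdfs://192.168.52.140:9000/index/2.txt";
        int splitIndex = key.indexOf(":");
        String word = key.substring(0, splitIndex);  // "I"
        String uri  = key.substring(splitIndex + 1); // "hdfs://192.168.52.140:9000/index/2.txt"
        int sum = 3;                                 // the summed frequency from the values list
        System.out.println(word + "\t" + uri + ":" + sum);
    }
}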

The combine stage produces this output for 1.txt:

Hadoop      hdfs://192.168.52.140:9000/index/1.txt:1
I           hdfs://192.168.52.140:9000/index/1.txt:3
Love        hdfs://192.168.52.140:9000/index/1.txt:1
ZhouSiYuan  hdfs://192.168.52.140:9000/index/1.txt:1
like        hdfs://192.168.52.140:9000/index/1.txt:1
love        hdfs://192.168.52.140:9000/index/1.txt:1
me          hdfs://192.168.52.140:9000/index/1.txt:1

And this output for 2.txt:

Hadoop      hdfs://192.168.52.140:9000/index/2.txt:1
I           hdfs://192.168.52.140:9000/index/2.txt:3
Love        hdfs://192.168.52.140:9000/index/2.txt:1
MapReduce   hdfs://192.168.52.140:9000/index/2.txt:1
NBA         hdfs://192.168.52.140:9000/index/2.txt:1
like        hdfs://192.168.52.140:9000/index/2.txt:1
love        hdfs://192.168.52.140:9000/index/2.txt:1

This leaves the reduce stage with a simple job: it only has to build the document list for each word.
For example, for the word I it produces this document list:
I hdfs://192.168.52.140:9000/index/2.txt:3;hdfs://192.168.52.140:9000/index/1.txt:3;

The final output of the whole job is as follows:

Hadoop      hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
I           hdfs://192.168.52.140:9000/index/2.txt:3;hdfs://192.168.52.140:9000/index/1.txt:3;
Love        hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
MapReduce   hdfs://192.168.52.140:9000/index/2.txt:1;
NBA         hdfs://192.168.52.140:9000/index/2.txt:1;
ZhouSiYuan  hdfs://192.168.52.140:9000/index/1.txt:1;
like        hdfs://192.168.52.140:9000/index/1.txt:1;hdfs://192.168.52.140:9000/index/2.txt:1;
love        hdfs://192.168.52.140:9000/index/2.txt:1;hdfs://192.168.52.140:9000/index/1.txt:1;
me          hdfs://192.168.52.140:9000/index/1.txt:1;

Here is the complete source code:

package com.hadoop.mapreduce.test8.invertedindex;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InvertedIndex {
    /**
     * 
     * @author 湯高
     *
     */
    public static class InvertedIndexMapper extends Mapper<Object, Text, Text, Text>{

        private Text keyInfo = new Text();   // holds the word:URI composite key
        private Text valueInfo = new Text(); // holds the term frequency
        private FileSplit split;             // the FileSplit the current record belongs to
        @Override
        protected void map(Object key, Text value, Mapper<Object, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {
            // Get the FileSplit that this <key, value> pair belongs to.
            split = (FileSplit) context.getInputSplit();
            System.out.println("偏移量"+key);
            System.out.println("值"+value);
            //StringTokenizer是用來把字串擷取成一個個標記或單詞的,預設是空格或多個空格(\t\n\r等等)擷取
            StringTokenizer itr = new StringTokenizer( value.toString());
            while( itr.hasMoreTokens() ){
                // The key is the word combined with the file URI.
                keyInfo.set( itr.nextToken()+":"+split.getPath().toString());
                // Each occurrence contributes an initial frequency of 1.
                valueInfo.set("1");
                context.write(keyInfo, valueInfo);
            }
            System.out.println("key"+keyInfo);
            System.out.println("value"+valueInfo);
        }
    }
    /**
     * 
     * @author 湯高
     *
     */
    public static class InvertedIndexCombiner extends Reducer<Text, Text, Text, Text>{
        private Text info = new Text();
        @Override
        protected void reduce(Text key, Iterable<Text> values, Reducer<Text, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {

            // Sum up the term frequency for this word in this file.
            int sum = 0;
            for (Text value : values) {
                sum += Integer.parseInt(value.toString() );
            }

            int splitIndex = key.toString().indexOf(":");

            // Rebuild the value as URI:frequency.
            info.set( key.toString().substring( splitIndex + 1) +":"+sum );

            // Reset the key back to just the word.
            key.set( key.toString().substring(0,splitIndex));

            context.write(key, info);
            System.out.println("key"+key);
            System.out.println("value"+info);
        }
    }

    /**
     * 
     * @author 湯高
     *
     */
    public static class InvertedIndexReducer extends Reducer<Text, Text, Text, Text>{

        private Text result = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Reducer<Text, Text, Text, Text>.Context context)
                throws IOException, InterruptedException {

            // Build the document list for this word: "URI:freq;URI:freq;..."
            StringBuilder fileList = new StringBuilder();
            for (Text value : values) {
                fileList.append(value.toString()).append(";");
            }
            result.set(fileList.toString());

            context.write(key, result);
        }

    }

    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();

            Job job = Job.getInstance(conf,"InvertedIndex");
            job.setJarByClass(InvertedIndex.class);

            // The mapper generates intermediate results from each input <key, value> pair.
            job.setMapperClass(InvertedIndexMapper.class);

            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(Text.class);

            job.setCombinerClass(InvertedIndexCombiner.class);
            job.setReducerClass(InvertedIndexReducer.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            // I uploaded the two input files to this index directory.
            FileInputFormat.addInputPath(job, new Path("hdfs://192.168.52.140:9000/index/"));
            // Write the result to an out_index + timestamp directory.
            FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.52.140:9000/out_index"+System.currentTimeMillis()+"/"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (IllegalStateException e) {
            e.printStackTrace();
        } catch (IllegalArgumentException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

    }
}
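
To run the job, package the class into a jar and submit it with the standard hadoop jar command; the jar name below is just an example, and the input/output paths are the ones hard-coded in main:

hadoop jar inverted-index.jar com.hadoop.mapreduce.test8.invertedindex.InvertedIndex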

Your comments and suggestions are welcome.
Writing this up took real effort; please credit the original source when reposting.