Querying HBase data with MR (MapReduce), using TableMapper and Scan

First, a Scan can be configured with properties such as startRow, stopRow, and filters. That suggests two approaches:

1. Set a filter on the Scan, run the mapper, then reduce into a single result.

2. Skip the Scan filter and move the filtering work into the mapper.

In my tests, the first approach is faster when the scan covers relatively few records, but once the number of scanned rows grows it tends to time out without returning and exit. To make the second approach work I had to learn how to pass parameters into the mapper task, which involved a small detour. A sketch of the first approach follows right below.
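For reference, approach 1 means building the conditions into a FilterList and setting it on the Scan before calling initTableMapperJob. A minimal sketch, assuming the same sanitized family/column names used in the code below (cfname, col1/col2/col3) and the same example values; this is not the code I actually ran:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredScanSketch {
	// Build a Scan whose server-side filters do what approach 2 does in the mapper.
	public static Scan buildScan() {
		Scan scan = new Scan();
		scan.addFamily(Bytes.toBytes("cfname"));
		scan.setStartRow(Bytes.toBytes("2011010100000"));
		scan.setStopRow(Bytes.toBytes("2011010200000"));

		// MUST_PASS_ALL = all conditions must hold (logical AND).
		FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
		filters.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cfname"),
				Bytes.toBytes("col1"), CompareOp.EQUAL, Bytes.toBytes("新C87310")));
		filters.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cfname"),
				Bytes.toBytes("col2"), CompareOp.EQUAL, Bytes.toBytes("10")));
		filters.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cfname"),
				Bytes.toBytes("col3"), CompareOp.EQUAL, Bytes.toBytes("2")));
		scan.setFilter(filters);
		return scan;
	}
}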

A final thought: the second approach is still not very efficient, and even when the first approach is usable it is not fast either, because the default TableMapper setup puts the scan of one whole region into a single mapper; my regions are over 2 GB each, and the data I query only spans seven or eight regions. So I wondered whether mappers could be split on something finer than a region; if that cannot be changed, the only option left is to run MR directly against HBase's underlying HDFS files, which... remains to be studied.

Here is the code (for confidentiality I changed the table name, column names, and column family name; if I missed any, please pretend you didn't see them. The main point is the method: using MR to query massive amounts of HBase data, and how to pass parameters to the mapper):

package mapreduce.hbase;

import java.io.IOException;

import mapreduce.HDFS_File;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper.Context;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Query HBase with MR. The Scan is given conditions such as startkey/endkey,
 * plus filters to drop records that do not match. RowKey of LicenseTable:
 * 201101010000000095\xE5\xAE\x81WDTLBZ
 * 
 * @author Wallace
 * 
 */
@SuppressWarnings("unused")
public class MRSearchAuto {
	private static final Log LOG = LogFactory.getLog(MRSearchAuto.class);

	private static String TABLE_NAME = "tablename";
	private static byte[] FAMILY_NAME = Bytes.toBytes("cfname");
	private static byte[][] QUALIFIER_NAME = { Bytes.toBytes("col1"),
			Bytes.toBytes("col2"), Bytes.toBytes("col3") };

	public static class SearchMapper extends
			TableMapper<ImmutableBytesWritable, Text> {
		private int numOfFilter = 0;

		private Text word = new Text();
		String[] strConditionStrings = new String[]{"","",""}/* { "新C87310", "10", "2" } */;

		/*
		 * private void init(Configuration conf) throws IOException,
		 * InterruptedException { strConditionStrings[0] =
		 * conf.get("search.license").trim(); strConditionStrings[1] =
		 * conf.get("search.carColor").trim(); strConditionStrings[2] =
		 * conf.get("search.direction").trim(); LOG.info("license: " +
		 * strConditionStrings[0]); }
		 */
		protected void setup(Context context) throws IOException,
				InterruptedException {
			strConditionStrings[0] = context.getConfiguration().get("search.license").trim();
			strConditionStrings[1] = context.getConfiguration().get("search.color").trim();
			strConditionStrings[2] = context.getConfiguration().get("search.direction").trim();
		}

		protected void map(ImmutableBytesWritable key, Result value,
				Context context) throws InterruptedException, IOException {
			String string = "";
			String tempString;

			/**/
			for (int i = 0; i < 1; i++) {
				// do the filtering here in the map instead of with a Scan filter
				tempString = Text.decode(value.getValue(FAMILY_NAME,
						QUALIFIER_NAME[i]));
				if (tempString.equals(/* strConditionStrings[i] */"新C87310")) {
					LOG.info("新C87310. conf: " + strConditionStrings[0]);
					if (tempString.equals(strConditionStrings[i])) {
						string = string + tempString + " ";
					} else {
						return;
					}
				}

				else {
					return;
				}
			}

			word.set(string);
			context.write(null, word);
		}
	}

	public void searchHBase(int numOfDays) throws IOException,
			InterruptedException, ClassNotFoundException {
		long startTime;
		long endTime;

		Configuration conf = HBaseConfiguration.create();
		conf.set("hbase.zookeeper.quorum", "node2,node3,node4");
		conf.set("fs.default.name", "hdfs://node1");
		conf.set("mapred.job.tracker", "node1:54311");
		/*
		 * pass the query parameters to the map tasks
		 */
		conf.set("search.license", "新C87310");
		conf.set("search.color", "10");
		conf.set("search.direction", "2");

		Job job = new Job(conf, "MRSearchHBase");
		System.out.println("search.license: " + conf.get("search.license"));
		job.setNumReduceTasks(0);
		job.setJarByClass(MRSearchAuto.class);
		Scan scan = new Scan();
		scan.addFamily(FAMILY_NAME);
		byte[] startRow = Bytes.toBytes("2011010100000");
		byte[] stopRow;
		switch (numOfDays) {
		case 1:
			stopRow = Bytes.toBytes("2011010200000");
			break;
		case 10:
			stopRow = Bytes.toBytes("2011011100000");
			break;
		case 30:
			stopRow = Bytes.toBytes("2011020100000");
			break;
		case 365:
			stopRow = Bytes.toBytes("2012010100000");
			break;
		default:
			stopRow = Bytes.toBytes("2011010101000");
		}
		// set the start and stop row keys
		scan.setStartRow(startRow);
		scan.setStopRow(stopRow);

		TableMapReduceUtil.initTableMapperJob(TABLE_NAME, scan,
				SearchMapper.class, ImmutableBytesWritable.class, Text.class,
				job);
		Path outPath = new Path("searchresult");
		HDFS_File file = new HDFS_File();
		file.DelFile(conf, outPath.getName(), true); // delete the output dir first if it already exists
		FileOutputFormat.setOutputPath(job, outPath);// where the results are written

		startTime = System.currentTimeMillis();
		job.waitForCompletion(true);
		endTime = System.currentTimeMillis();
		System.out.println("Time used: " + (endTime - startTime));
		System.out.println("startRow:" + Text.decode(startRow));
		System.out.println("stopRow: " + Text.decode(stopRow));
	}

	public static void main(String args[]) throws IOException,
			InterruptedException, ClassNotFoundException {
		MRSearchAuto mrSearchAuto = new MRSearchAuto();
		int numOfDays = 1;
		if (args.length == 1)
			numOfDays = Integer.valueOf(args[0]);
		System.out.println("Num of days: " + numOfDays);
		mrSearchAuto.searchHBase(numOfDays);
	}
}

At first I called conf.set on the parameters in the driver, and in the mapper's init(Configuration) I read them back and assigned them to the mapper object.

Passing the parameters to map this way gave wrong results:
for (int i = 0; i < 1; i++) {
	// do the filtering here in the map
	tempString = Text.decode(value.getValue(FAMILY_NAME,
			QUALIFIER_NAME[i]));
	if (tempString.equals(/* strConditionStrings[i] */"新C87310"))
		string = string + tempString + " ";
	else {
		return;
	}
}
If the mapper's init below read the parameters from conf and the map function above then used them, the results were wrong: with the value hard-coded the output had 1 record, but with the same value passed in as a parameter it had 0 records.
private void init(Configuration conf) throws IOException,
		InterruptedException {
	strConditionStrings[0] = conf.get("search.licenseNumber").trim();
	strConditionStrings[1] = conf.get("search.carColor").trim();
	strConditionStrings[2] = conf.get("search.direction").trim();
}
So I added some logging:
private static final Log LOG = LogFactory.getLog(MRSearchAuto.class);
In the init() function:
LOG.info("license: " + strConditionStrings[0]);
In map:
 if (tempString.equals(/* strConditionStrings[i] */"新C87310")) {
  LOG.info("新C87310. conf: " + strConditionStrings[0]);
Then I looked at the job in the web UI at namenode:50030, tracked down which machine ran that map task, and checked its log:
mapreduce.hbase.TestMRHBase: 新C87310. conf: null
Right after conf.set I also logged the value and it was fine there, but inside map it was null, and the log line in the map class's init function never showed up.
So the problem must be:
the map class's init() function was never executed!
Therefore init()'s reading of the parameter values from conf and assigning them to the map's variables never ran, and neither did its log statement.
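That behaviour is exactly what the new-API Mapper promises: the framework only ever calls setup(Context), map() and cleanup() from Mapper.run(); a custom init(Configuration) method is just an ordinary method that nothing invokes. Simplified from the Hadoop Mapper source of that era:

	// org.apache.hadoop.mapreduce.Mapper.run(), simplified: setup() once per task,
	// then map() once per input record, then cleanup(). No init() anywhere.
	public void run(Context context) throws IOException, InterruptedException {
		setup(context);
		while (context.nextKeyValue()) {
			map(context.getCurrentKey(), context.getCurrentValue(), context);
		}
		cleanup(context);
	}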
OK, let's see how to fix it.
Read the parameters in setup instead:
protected void setup(Context context) throws IOException,
		InterruptedException {
	strConditionStrings[0] = context.getConfiguration().get("search.license").trim();
	strConditionStrings[1] = context.getConfiguration().get("search.color").trim();
	strConditionStrings[2] = context.getConfiguration().get("search.direction").trim();
}
It threw an error:
12/01/12 11:21:56 INFO mapred.JobClient:  map 0% reduce 0%
12/01/12 11:22:03 INFO mapred.JobClient: Task Id : attempt_201201100941_0071_m_000000_0, Status : FAILED
java.lang.NullPointerException
 at mapreduce.hbase.MRSearchAuto$SearchMapper.setup(MRSearchAuto.java:66)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:656)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
 at org.apache.hadoop.mapred.Child.main(Child.java:264)

attempt_201201100941_0071_m_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201100941_0071_m_000000_0: log4j:WARN Please initialize the log4j system properly.
12/01/12 11:22:09 INFO mapred.JobClient: Task Id : attempt_201201100941_0071_m_000000_1, Status : FAILED
java.lang.NullPointerException
 at mapreduce.hbase.MRSearchAuto$SearchMapper.setup(MRSearchAuto.java:66)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:656)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
 at org.apache.hadoop.mapred.Child.main(Child.java:264)
Then I commented out the contents of setup: no error. So the problem had to be around context. To confirm further, I skipped context inside setup and assigned the values directly, and I got results. Good!
So it was a context issue. The NullPointerException meant one of the calls like context.getConfiguration().get("search.license") returned null.
Then it hit me: I had changed the property names on the get side but not on the set side, so they no longer matched; as a result context.getConfiguration().get("search.color") and the item below it were null, and calling trim() on null threw the exception.
  conf.set("search.license", "新C87310");
  conf.set("search.color", "10");
  conf.set("search.direction", "2");
After matching the names on both sides, the problem was solved.
Parameters are now passed into map successfully.
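To recap the working pattern (fragments lifted from the code above): set the values on the job Configuration in the driver, and read them back in setup(Context) with exactly the same keys.

	// Driver side: put the query conditions into the job Configuration.
	conf.set("search.license", "新C87310");
	conf.set("search.color", "10");
	conf.set("search.direction", "2");

	// Mapper side: setup(Context) is called once per map task before map();
	// the keys must match the ones used in conf.set above.
	protected void setup(Context context) throws IOException,
			InterruptedException {
		strConditionStrings[0] = context.getConfiguration().get("search.license").trim();
		strConditionStrings[1] = context.getConfiguration().get("search.color").trim();
		strConditionStrings[2] = context.getConfiguration().get("search.direction").trim();
	}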