
Native snappy library not available: this version of libhadoop was built without snappy support

    When using Spark MLlib, a trained model is saved, and the online service then needs to load that model to serve online predictions.

    When the model is actually loaded, the following exception is thrown: Native snappy library not available: this version of libhadoop was built without snappy support
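Before applying a fix, you can confirm the diagnosis by asking Hadoop which native codecs its libhadoop build supports (this assumes the `hadoop` CLI is on your PATH):

```shell
# Lists each native library that libhadoop was compiled against;
# on an affected installation the "snappy" line shows "false".
hadoop checknative -a
```

If snappy shows `false`, the error above is expected whenever a snappy-compressed model file is read.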

    A quick search shows the cause: the Hadoop native library (libhadoop) was built without snappy support. There are two workarounds: switch to a different compression codec, or point the JVM at a native library directory that does include snappy:

  1. One approach is to use a different Hadoop codec, such as BZip2, before saving the model:

```scala
// CompressionType comes from org.apache.hadoop.io.SequenceFile.CompressionType
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
```

  2. The second approach is to pass --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ as a parameter to the spark-submit job (on the command line), so the driver JVM can find the native snappy library.
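The second approach can be sketched as a spark-submit invocation like the one below. The version segment of the path is deliberately left as a placeholder (as in the original), and the main class and jar name are hypothetical stand-ins for your online service:

```shell
# Point the driver JVM at Hadoop's native library directory,
# which on a standard HDP install includes libsnappy.
spark-submit \
  --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ \
  --class com.example.PredictService \
  predict-service.jar
```

If the executors also read snappy-compressed files, you may additionally need to expose the same path to them, e.g. via `spark.executor.extraLibraryPath`.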