
Scala Lexical and Grammar Parsers (Part 1): Parsing the BNF Grammar of SparkSQL


The platform formula and its translated SparkSQL

A platform formula looks like this:

if (XX1_m001[D003]="邢おb7骯α?薇" || XX1_m001[H003]<"2") && XX1_m001[D005]!="wed" then XX1_m001[H022,COUNT]

The field value "邢おb7骯α?薇" is deliberately odd: it tests that all kinds of character sets can be matched.
The corresponding SparkSQL should then look like the statement below. Since we are running Hive on Spark, it looks much like an Oracle SQL statement:

SELECT COUNT(H022) FROM XX1_m001 WHERE (XX1_m001.D003='邢おb7骯α?薇' OR XX1_m001.H003<'2') AND XX1_m001.D005!='wed'

Overall it is fairly simple, because all I want here is a demo.

The platform formula's EBNF grammar and the lexer design

expr-condition ::= tableName "[" valueName "]" comparator Condition
expr-front     ::= expr-condition (("&&" | "||") expr-condition)*
expr-back      ::= tableName "[" valueName "," operator "]"
expr           ::= "if" expr-front "then" expr-back
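As a sanity check, the sample formula from above decomposes under this grammar roughly as follows (an informal sketch of mine; note that the parentheses in the formula are not captured by this EBNF and are instead tolerated by the implementation further down):

```
expr
├─ "if"
├─ expr-front
│  ├─ expr-condition   XX1_m001 [ D003 ] =  "邢おb7骯α?薇"   (preceded by "(")
│  ├─ "||"
│  ├─ expr-condition   XX1_m001 [ H003 ] <  "2"              (followed by ")")
│  ├─ "&&"
│  └─ expr-condition   XX1_m001 [ D005 ] != "wed"
├─ "then"
└─ expr-back           XX1_m001 [ H022 , COUNT ]
```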

The lexical definitions are:

operator => [SUM,COUNT]
tableName, valueName => ident   # ident is an identifier token
comparator => ["=",">=","<=",">","<","!="]
Condition => stringLit          # stringLit is a string-literal token


Parsing this EBNF grammar with Scala's token-based parsers

A token-based parser in Scala needs to extend the StandardTokenParsers class, which provides convenient parsing functions and token collections.
We use the lexical.delimiters list to hold the delimiters the grammar translator will encounter while running, and the lexical.reserved list to hold its keywords.
Looking at the platform formula, "=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")" are all delimiters. We could also treat "=", ">=", "<=", ">", "<", "!=", "&&", "||" as keywords, but my habit is to treat only words made of letters as keywords. The keyword set here is therefore "if", "then", "SUM", "COUNT".
In code, that looks like this:

lexical.delimiters += ("=",">=","<=",">","<","!=","&&","||","[","]",",","(",")")
lexical.reserved   += ("if","then","SUM","COUNT")
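As a quick sanity check (a sketch of mine, not from the original post; the LexDemo object name is made up, and it assumes the scala-parser-combinators library is on the classpath), you can feed a formula to the lexer alone and watch how it splits into keywords, identifiers, delimiters, and string literals:

```scala
import scala.util.parsing.combinator.syntactical.StandardTokenParsers

object LexDemo extends StandardTokenParsers {
  lexical.delimiters += ("=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")")
  lexical.reserved   += ("if", "then", "SUM", "COUNT")

  def main(args: Array[String]): Unit = {
    var scanner = new lexical.Scanner("""if XX1_m001[H003]<"2" then XX1_m001[H022,COUNT]""")
    // Walk the token stream one token at a time and print each token.
    while (!scanner.atEnd) {
      println(scanner.first)
      scanner = scanner.rest
    }
  }
}
```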

Easy enough, right?
Now let's see how to use the token-based parser to parse the EBNF grammar we designed above. Here is the code first:

import scala.util.parsing.combinator.syntactical.StandardTokenParsers

class ExprParsre extends StandardTokenParsers {
  lexical.delimiters += ("=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")")
  lexical.reserved   += ("if", "then", "SUM", "COUNT")

  // expr ::= "if" expr-front "then" expr-back
  def expr: Parser[String] = "if" ~ expr_front ~ "then" ~ expr_back ^^ {
    case "if" ~ exp1 ~ "then" ~ exp2 => exp2 + " WHERE " + exp1
  }

  // A condition optionally preceded by "(" and/or followed by ")".
  // Note that unbalanced parentheses are passed through, not rejected.
  def expr_priority: Parser[String] = opt("(") ~ expr_condition ~ opt(")") ^^ {
    case Some("(") ~ conditions ~ Some(")") => "(" + conditions + ")"
    case Some("(") ~ conditions ~ None      => "(" + conditions
    case None ~ conditions ~ Some(")")      => conditions + ")"
    case None ~ conditions ~ None           => conditions
  }

  // expr-condition ::= tableName "[" valueName "]" comparator stringLit
  def expr_condition: Parser[String] =
    ident ~ "[" ~ ident ~ "]" ~ ("=" | ">=" | "<=" | ">" | "<" | "!=") ~ stringLit ^^ {
      case table ~ "[" ~ field ~ "]" ~ op ~ value =>
        table + "." + field + op + "'" + value + "'"
    }

  // "&&" and "||" translate to SQL's AND and OR.
  def comparator: Parser[String] = ("&&" | "||") ^^ {
    case "&&" => " AND "
    case "||" => " OR "
  }

  // A condition followed by zero or more (comparator, condition) pairs.
  def expr_front: Parser[String] = expr_priority ~ rep(comparator ~ expr_priority) ^^ {
    case exp1 ~ rest => exp1 + rest.map(x => x._1 + x._2).mkString
  }

  // tableName "[" valueName "," SUM|COUNT "]" becomes the SELECT ... FROM ... part.
  def expr_back: Parser[String] = ident ~ "[" ~ ident ~ "," ~ ("SUM" | "COUNT") ~ "]" ^^ {
    case table ~ "[" ~ field ~ "," ~ agg ~ "]" =>
      "SELECT " + agg + "(" + field + ") FROM " + table
  }

  def parserAll[T](p: Parser[T], input: String) =
    phrase(p)(new lexical.Scanner(input))
}
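A minimal driver for this class might look like the sketch below (the ExprParserDemo object name and the simplified sample formula are mine; it assumes the ExprParsre class above is in scope and the scala-parser-combinators library is on the classpath):

```scala
object ExprParserDemo {
  def main(args: Array[String]): Unit = {
    val parser = new ExprParsre
    val formula =
      """if (XX1_m001[D003]="abc" || XX1_m001[H003]<"2") && XX1_m001[D005]!="wed" then XX1_m001[H022,COUNT]"""
    // phrase() demands that the whole input is consumed, so leftover
    // tokens show up as a failure rather than a silent partial parse.
    parser.parserAll(parser.expr, formula) match {
      case parser.Success(sql, _) => println(sql)   // the generated SELECT ... WHERE ... statement
      case failure                => println("Parse failed: " + failure)
    }
  }
}
```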


See also:

Scala Lexical and Grammar Parsers (Part 2): Analyzing C++ Class Declarations

Scala Lexical and Grammar Parsers (Part 1): Parsing the BNF Grammar of SparkSQL