Shift/reduce conflicts in a propositional logic parser in Happy


I've made a simple propositional logic parser in Happy, based on a propositional logic grammar. Here is my code:

{
module FNC where
import Data.Char 
import System.IO
}

-- Parser name, token types and error function name:
--
%name parse Prop
%tokentype { Token } 
%error { parseError }

-- Token list:
%token
     var { TokenVar $$ }  -- alphabetic identifier
     or { TokenOr }
     and { TokenAnd }
     '¬' { TokenNot }
     "=>" { TokenImp } -- Implication
     "<=>" { TokenDImp } --double implication
    '(' { TokenOB } --open bracket
    ')'  { TokenCB } --closing bracket
    '.' {TokenEnd}

%left "<=>"
%left "=>"
%left or
%left and
%left '¬'
%left '(' ')'
%%

--Grammar
Prop :: {Sentence}
Prop : Sentence '.' {$1}

Sentence :: {Sentence}
Sentence : AtomSent {Atom $1}
    | CompSent {Comp $1}

AtomSent :: {AtomSent}
AtomSent : var { Variable $1 }

CompSent :: {CompSent}
CompSent : '(' Sentence ')' { Bracket $2 }
    | Sentence Connective Sentence {Bin $2 $1 $3}
    | '¬' Sentence {Not $2}

Connective :: {Connective}
Connective : and {And}
    | or {Or}
    | "=>" {Imp}
    | "<=>" {DImp}


{
--Error function
parseError :: [Token] -> a
parseError _ = error ("parseError: Syntax analysis error.\n")

--Data types to represent the grammar
data Sentence
    = Atom AtomSent
    | Comp CompSent
    deriving Show

data AtomSent = Variable String deriving Show

data CompSent
      = Bin Connective Sentence Sentence
      | Not Sentence
      | Bracket Sentence
      deriving Show

data Connective
    = And
    | Or
    | Imp
    | DImp
    deriving Show

--Data types for the tokens
data Token
      = TokenVar String
      | TokenOr
      | TokenAnd
      | TokenNot
      | TokenImp
      | TokenDImp
      | TokenOB
      | TokenCB
      | TokenEnd
      deriving Show

--Lexer
lexer :: String -> [Token]
lexer [] = []  -- empty string
lexer (c:cs)   -- a character c followed by the remaining characters cs
      | isSpace c = lexer cs
      | isAlpha c = lexVar (c:cs)
      | isSymbol c = lexSym (c:cs)
      | c== '(' = TokenOB : lexer cs
      | c== ')' = TokenCB : lexer cs
      | c== '¬' = TokenNot : lexer cs --solved
      | c== '.'  = [TokenEnd]
      | otherwise = error "lexer: invalid token"

lexVar cs =
   case span isAlpha cs of
      ("or",rest) -> TokenOr : lexer rest
      ("and",rest)  -> TokenAnd : lexer rest
      (var,rest)   -> TokenVar var : lexer rest

lexSym cs =
    case span isSymbol cs of
        ("=>",rest) -> TokenImp : lexer rest
        ("<=>",rest) -> TokenDImp : lexer rest
}
Now, I have a couple of questions:

  • For some reason I get 4 shift/reduce conflicts, and I really can't tell where they might be, since I thought the precedence declarations would resolve them (and I think I followed the BNF grammar correctly).
  • (This is a Haskell question.) In my lexer function, for some reason I was getting a parse error on the line where I handle '¬'; if I removed that line, it worked. Why was that? (This problem is solved.)
  • Any help would be great.

    I can answer the second question:

    | c== '¬' == TokenNot : lexer cs --problem here
    --        ^^
    

    You have an extra `=` there.
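    With that extra `=` removed, the lexer half is plain Haskell, so it can be checked on its own in GHCi without involving Happy at all. Below is a condensed, self-contained sketch of the lexer from the question (the `Token` type is repeated so the snippet compiles by itself; `Eq` is derived only so results can be compared; and the `'¬'` guard is moved before the general `isSymbol` guard, since `'¬'` is itself a symbol character and would otherwise be sent to `lexSym`, where no pattern matches it):

```haskell
import Data.Char (isAlpha, isSpace, isSymbol)

-- Same token type as in the grammar file; Eq added only for comparisons.
data Token
  = TokenVar String
  | TokenOr
  | TokenAnd
  | TokenNot
  | TokenImp
  | TokenDImp
  | TokenOB
  | TokenCB
  | TokenEnd
  deriving (Show, Eq)

lexer :: String -> [Token]
lexer [] = []
lexer (c:cs)
  | isSpace c  = lexer cs
  | isAlpha c  = lexVar (c:cs)
  | c == '('   = TokenOB  : lexer cs
  | c == ')'   = TokenCB  : lexer cs
  | c == '.'   = [TokenEnd]
  | c == '¬'   = TokenNot : lexer cs  -- before isSymbol: '¬' is a symbol char
  | isSymbol c = lexSym (c:cs)
  | otherwise  = error "lexer: invalid token"

lexVar :: String -> [Token]
lexVar cs = case span isAlpha cs of
  ("or",  rest) -> TokenOr  : lexer rest
  ("and", rest) -> TokenAnd : lexer rest
  (var,   rest) -> TokenVar var : lexer rest

lexSym :: String -> [Token]
lexSym cs = case span isSymbol cs of
  ("=>",  rest) -> TokenImp  : lexer rest
  ("<=>", rest) -> TokenDImp : lexer rest
  (sym,   _)    -> error ("lexSym: unknown symbol " ++ sym)
```

    For example, `lexer "(p and q) => r."` yields `[TokenOB, TokenVar "p", TokenAnd, TokenVar "q", TokenCB, TokenImp, TokenVar "r", TokenEnd]`.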
    If you run Happy with `-i` (for example `happy -i FNC.y`, which also writes `FNC.info`), it will generate an info file. That file lists all the states the parser has, along with every possible transition for each state. You can use that information to decide whether you care about the shift/reduce conflicts.

    For more information on invoking Happy, and on shift/reduce conflicts, see the Happy documentation.

    Here is some of the `-i` output. I have removed everything except state 17; you will want to get a copy of this file yourself so you can debug the problem properly. What you see here is only meant to help talk about it:

    -----------------------------------------------------------------------------
    Info file generated by Happy Version 1.18.10 from FNC.y
    -----------------------------------------------------------------------------
    
    state 17 contains 4 shift/reduce conflicts.
    
    -----------------------------------------------------------------------------
    Grammar
    -----------------------------------------------------------------------------
        %start_parse -> Prop                               (0)
        Prop -> Sentence '.'                               (1)
        Sentence -> AtomSent                               (2)
        Sentence -> CompSent                               (3)
        AtomSent -> var                                    (4)
        CompSent -> '(' Sentence ')'                       (5)
        CompSent -> Sentence Connective Sentence           (6)
        CompSent -> '¬' Sentence                          (7)
        Connective -> and                                  (8)
        Connective -> or                                   (9)
        Connective -> "=>"                                 (10)
        Connective -> "<=>"                                (11)
    
    -----------------------------------------------------------------------------
    Terminals
    -----------------------------------------------------------------------------
        var            { TokenVar $$ }
        or             { TokenOr }
        and            { TokenAnd }
        '¬'           { TokenNot }
        "=>"           { TokenImp }
        "<=>"          { TokenDImp }
        '('            { TokenOB }
        ')'            { TokenCB }
        '.'            { TokenEnd }
    
    -----------------------------------------------------------------------------
    Non-terminals
    -----------------------------------------------------------------------------
        %start_parse    rule  0
        Prop            rule  1
        Sentence        rules 2, 3
        AtomSent        rule  4
        CompSent        rules 5, 6, 7
        Connective      rules 8, 9, 10, 11
    
    -----------------------------------------------------------------------------
    States
    -----------------------------------------------------------------------------
    State 17
    
        CompSent -> Sentence . Connective Sentence          (rule 6)
        CompSent -> Sentence Connective Sentence .          (rule 6)
    
        or             shift, and enter state 12
                (reduce using rule 6)
    
        and            shift, and enter state 13
                (reduce using rule 6)
    
        "=>"           shift, and enter state 14
                (reduce using rule 6)
    
        "<=>"          shift, and enter state 15
                (reduce using rule 6)
    
        ')'            reduce using rule 6
        '.'            reduce using rule 6
    
        Connective     goto state 11
    
    -----------------------------------------------------------------------------
    Grammar Totals
    -----------------------------------------------------------------------------
    Number of rules: 12
    Number of terminals: 9
    Number of non-terminals: 6
    Number of states: 19
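
    State 17 shows the problem directly: after parsing `Sentence Connective Sentence`, the parser cannot tell whether to reduce or to shift another connective, i.e. whether `a and b or c` means `(a and b) or c` or `a and (b or c)`. The `%left` declarations never get a chance to resolve this, because Happy settles a shift/reduce conflict by comparing the precedence of the lookahead token against the precedence of the rule, and a rule inherits its precedence from the last terminal it contains; rule 6 (`Sentence Connective Sentence`) contains no terminals at all, so it has no precedence. One common fix, sketched below (not part of the quoted answer), is to drop the `Connective` nonterminal from the grammar and put the operator tokens directly into the binary rules, so each rule picks up its operator's precedence:

```
-- Sketch only: each binary rule now contains its operator terminal,
-- so the %left declarations apply and the four conflicts are resolved.
-- The Connective data type can stay; only the grammar rules change.
Sentence :: {Sentence}
Sentence : Sentence and Sentence    { Comp (Bin And  $1 $3) }
         | Sentence or Sentence     { Comp (Bin Or   $1 $3) }
         | Sentence "=>" Sentence   { Comp (Bin Imp  $1 $3) }
         | Sentence "<=>" Sentence  { Comp (Bin DImp $1 $3) }
         | '¬' Sentence             { Comp (Not $2) }
         | '(' Sentence ')'         { Comp (Bracket $2) }
         | var                      { Atom (Variable $1) }
```

    With this change Happy should report no shift/reduce conflicts, and `%left '¬'` (declared last, hence binding tightest) also makes negation bind more tightly than the binary connectives.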
    