Split the lexer into PlyLexer and TokenStream components

- There are two types of token streams: file-based and list-based
- I think this gives better component separation
- Doxygen parsing is a bit weirder, but I think it's more straightforward to see all the pieces this way
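
A rough sketch of the shape this split suggests. The PlyLexer name and its input()/token() calls come from the diff below; the stream class names, the get_token() method, and the Token fields are illustrative assumptions, not the actual cxxheaderparser API:

import typing


class Token(typing.NamedTuple):
    # Fields assumed for illustration
    type: str
    value: str


class PlyLexer:
    """Wraps the PLY-generated lexer: turns raw text into tokens."""

    def __init__(self, filename: str) -> None:
        # Constructor argument mirrored from the test below
        self.filename = filename

    def input(self, content: str) -> None:
        ...  # feed content to the underlying ply.lex instance

    def token(self) -> typing.Optional[Token]:
        ...  # next token, or None at end of input


class TokenStream:
    """Interface the parser pulls tokens through."""

    def get_token(self) -> typing.Optional[Token]:
        raise NotImplementedError


class FileTokenStream(TokenStream):
    """File-based stream: draws tokens from a live PlyLexer."""

    def __init__(self, lexer: PlyLexer) -> None:
        self.lexer = lexer

    def get_token(self) -> typing.Optional[Token]:
        return self.lexer.token()


class ListTokenStream(TokenStream):
    """List-based stream: replays tokens collected earlier, e.g.
    (per the commit notes, an assumption) the tokens of a doxygen
    comment block."""

    def __init__(self, tokens: typing.List[Token]) -> None:
        self.tokens = tokens
        self.pos = 0

    def get_token(self) -> typing.Optional[Token]:
        if self.pos == len(self.tokens):
            return None
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

With this shape, the parser only ever talks to a TokenStream, so whether tokens come from a header file or from a pre-collected list is invisible to it.
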
Dustin Spicuzza
2022-12-15 02:38:44 -05:00
parent 40bf05b384
commit 1eaa85ae8d
4 changed files with 246 additions and 170 deletions

tests/test_tokfmt.py

@@ -1,6 +1,6 @@
 import pytest
-from cxxheaderparser.lexer import Lexer
+from cxxheaderparser.lexer import PlyLexer
 from cxxheaderparser.tokfmt import tokfmt
 from cxxheaderparser.types import Token
@@ -40,11 +40,11 @@ def test_tokfmt(instr: str) -> None:
     Each input string is exactly what the output of tokfmt should be
     """
     toks = []
-    lexer = Lexer("")
+    lexer = PlyLexer("")
     lexer.input(instr)
     while True:
-        tok = lexer.token_eof_ok()
+        tok = lexer.token()
         if not tok:
             break
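
Note the second change in the test loop: the old token_eof_ok() helper is gone and the test calls the plain token() method, treating a falsy return as end of input. Presumably the EOF-tolerant behavior now lives in the TokenStream layer rather than in the raw PLY wrapper.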