<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- Created by GNU Texinfo 6.7, http://www.gnu.org/software/texinfo/ -->
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Lexer (The GNU C Preprocessor Internals)</title>
<meta name="description" content="Lexer (The GNU C Preprocessor Internals)">
<meta name="keywords" content="Lexer (The GNU C Preprocessor Internals)">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<link href="index.html" rel="start" title="Top">
<link href="Concept-Index.html" rel="index" title="Concept Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="index.html" rel="up" title="Top">
<link href="Hash-Nodes.html" rel="next" title="Hash Nodes">
<link href="Conventions.html" rel="prev" title="Conventions">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.indentedblock {margin-right: 0em}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
kbd {font-style: oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
span.nolinebreak {white-space: nowrap}
span.roman {font-family: initial; font-weight: normal}
span.sansserif {font-family: sans-serif; font-weight: normal}
ul.no-bullet {list-style: none}
-->
</style>
</head>
<body lang="en">
<span id="Lexer"></span><div class="header">
<p>
Next: <a href="Hash-Nodes.html" accesskey="n" rel="next">Hash Nodes</a>, Previous: <a href="Conventions.html" accesskey="p" rel="prev">Conventions</a>, Up: <a href="index.html" accesskey="u" rel="up">Top</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<span id="The-Lexer"></span><h2 class="unnumbered">The Lexer</h2>
<span id="index-lexer"></span>
<span id="index-newlines"></span>
<span id="index-escaped-newlines"></span>
<span id="Overview"></span><h3 class="section">Overview</h3>
<p>The lexer is contained in the file <samp>lex.cc</samp>. It is a hand-coded
lexer, and not implemented as a state machine. It can understand C, C++
and Objective-C source code, and has been extended to allow reasonably
successful preprocessing of assembly language. The lexer does not make
an initial pass to strip out trigraphs and escaped newlines, but handles
them as they are encountered in a single pass of the input file. It
returns preprocessing tokens individually, not a line at a time.
</p>
<p>It is mostly transparent to users of the library, since the library&rsquo;s
interface for obtaining the next token, <code>cpp_get_token</code>, takes care
of lexing new tokens, handling directives, and expanding macros as
necessary. However, the lexer does expose some functionality so that
clients of the library can easily spell a given token, such as
<code>cpp_spell_token</code> and <code>cpp_token_len</code>. These functions are
useful when generating diagnostics, and for emitting the preprocessed
output.
</p>
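<p>To illustrate what such spelling helpers do, here is a simplified, hypothetical sketch. The names and signatures below are stand-ins, not cpplib&rsquo;s actual API, which operates on <code>cpp_token</code> structures:</p>

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

// Hypothetical stand-in for a cpplib token: here the spelling is carried
// directly as a string; the real cpp_token stores it more compactly.
struct toy_token {
    std::string text;
};

// Upper bound on the bytes needed to spell the token, in the spirit of
// cpp_token_len.
inline std::size_t toy_token_len(const toy_token &tok) {
    return tok.text.size();
}

// Write the token's spelling into buf and return a pointer one past the
// last byte written, mirroring the shape of cpp_spell_token.
inline char *toy_spell_token(const toy_token &tok, char *buf) {
    std::memcpy(buf, tok.text.data(), tok.text.size());
    return buf + tok.text.size();
}
```

<p>A client generating diagnostics or preprocessed output would size a buffer with the length helper, then spell the token into it.</p>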
<span id="Lexing-a-token"></span><h3 class="section">Lexing a token</h3>
<p>Lexing of an individual token is handled by <code>_cpp_lex_direct</code> and
its subroutines. In its current form the code is quite complicated,
with read-ahead characters and such-like, since it strives not to step
back in the character stream in preparation for handling non-ASCII file
encodings. The current plan is to convert any such files to UTF-8
before processing them. This complexity is therefore unnecessary and
will be removed, so I&rsquo;ll not discuss it further here.
</p>
<p>The job of <code>_cpp_lex_direct</code> is simply to lex a token. It is not
responsible for issues like directive handling, returning lookahead
tokens directly, multiple-include optimization, or conditional block
skipping. It necessarily has a minor r&ocirc;le to play in memory
management of lexed lines. I discuss these issues in a separate section
(see <a href="#Lexing-a-line">Lexing a line</a>).
</p>
<p>The lexer places the token it lexes into storage pointed to by the
variable <code>cur_token</code>, and then increments it. This variable is
important for correct diagnostic positioning. Unless a specific line
and column are passed to the diagnostic routines, they will examine the
<code>line</code> and <code>col</code> values of the token just before the location
that <code>cur_token</code> points to, and use that location to report the
diagnostic.
</p>
<p>The lexer does not consider whitespace to be a token in its own right.
If whitespace (other than a new line) precedes a token, it sets the
<code>PREV_WHITE</code> bit in the token&rsquo;s flags. Each token has its
<code>line</code> and <code>col</code> variables set to the line and column of the
first character of the token. This line number is the line number in
the translation unit, and can be converted to a source (file, line) pair
using the line map code.
</p>
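<p>The idea behind that conversion can be sketched as follows. This is a hypothetical miniature, not the real line map code in libcpp, whose data structures are richer:</p>

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// A toy line map entry: translation-unit lines starting at tu_start come
// from `file`, beginning at that file's line `file_start`.
struct toy_map_entry {
    unsigned tu_start;      // first translation-unit line covered
    std::string file;       // source file name
    unsigned file_start;    // corresponding line within that file
};

// Convert a translation-unit line number to a (file, line) pair by
// finding the last entry whose tu_start is <= the queried line.
std::pair<std::string, unsigned>
toy_lookup(const std::vector<toy_map_entry> &maps, unsigned tu_line) {
    const toy_map_entry *best = &maps.front();
    for (const auto &m : maps)
        if (m.tu_start <= tu_line)
            best = &m;
    return {best->file, best->file_start + (tu_line - best->tu_start)};
}
```

<p>A new entry is added whenever an <code>#include</code> switches files, so translation-unit lines partition cleanly into ranges.</p>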
<p>The first token on a logical, i.e. unescaped, line has the flag
<code>BOL</code> set for beginning-of-line. This flag is intended for
internal use, both to distinguish a &lsquo;<samp>#</samp>&rsquo; that begins a directive
from one that doesn&rsquo;t, and to generate a call-back to clients that want
to be notified about the start of every non-directive line with tokens
on it. Clients cannot reliably determine this for themselves: the first
token might be a macro, and the tokens of a macro expansion do not have
the <code>BOL</code> flag set. The macro expansion may even be empty, and the
next token on the line certainly won&rsquo;t have the <code>BOL</code> flag set.
</p>
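<p>A minimal sketch of how such a flag is consulted; the token types and flag bits here are hypothetical stand-ins for cpplib&rsquo;s <code>CPP_HASH</code> and <code>BOL</code>:</p>

```cpp
#include <cassert>

// Hypothetical token types and flag bits, standing in for cpplib's.
enum toy_type { TOK_HASH, TOK_NAME, TOK_NUMBER };
const unsigned TOY_BOL = 1u << 0;   // beginning-of-line flag bit

struct toy_tok { toy_type type; unsigned flags; };

// A '#' starts a directive only when it is the first token of a logical
// line, i.e. when its beginning-of-line flag is set.
bool starts_directive(const toy_tok &t) {
    return t.type == TOK_HASH && (t.flags & TOY_BOL);
}
```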
<p>New lines are treated specially; exactly how the lexer handles them is
context-dependent. The C standard mandates that directives are
terminated by the first unescaped newline character, even if it appears
in the middle of a macro expansion. Therefore, if the state variable
<code>in_directive</code> is set, the lexer returns a <code>CPP_EOF</code> token,
which is normally used to indicate end-of-file, to indicate
end-of-directive. In a directive a <code>CPP_EOF</code> token never means
end-of-file. Conveniently, if the caller was <code>collect_args</code>, it
already handles <code>CPP_EOF</code> as if it were end-of-file, and reports an
error about an unterminated macro argument list.
</p>
<p>The C standard also specifies that a new line in the middle of the
arguments to a macro is treated as whitespace. This white space is
important in case the macro argument is stringized. The state variable
<code>parsing_args</code> is nonzero when the preprocessor is collecting the
arguments to a macro call. It is set to 1 when looking for the opening
parenthesis to a function-like macro, and 2 when collecting the actual
arguments up to the closing parenthesis, since these two cases need to
be distinguished sometimes. One such time is here: the lexer sets the
<code>PREV_WHITE</code> flag of a token if it meets a new line when
<code>parsing_args</code> is set to 2. It doesn&rsquo;t set it if it meets a new
line when <code>parsing_args</code> is 1, since then code like
</p>
<div class="example">
<pre class="example">#define foo() bar
foo
baz
</pre></div>
<p>would be output with an erroneous space before &lsquo;<samp>baz</samp>&rsquo;:
</p>
<div class="example">
<pre class="example">foo
baz
</pre></div>
<p>This is a good example of the subtlety of getting token spacing correct
in the preprocessor; there are plenty of tests in the testsuite for
corner cases like this.
</p>
<p>The lexer is written to treat each of &lsquo;<samp>\r</samp>&rsquo;, &lsquo;<samp>\n</samp>&rsquo;, &lsquo;<samp>\r\n</samp>&rsquo;
and &lsquo;<samp>\n\r</samp>&rsquo; as a single new line indicator. This allows it to
transparently preprocess MS-DOS, Macintosh and Unix files without their
needing to pass through a special filter beforehand.
</p>
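<p>A sketch of that normalization, assuming a NUL-terminated buffer (the real <code>handle_newline</code> works on cpplib&rsquo;s buffer structure and also maintains its line counters):</p>

```cpp
#include <cassert>
#include <cstddef>

// Given a pointer at a newline character, consume '\r', '\n', '\r\n' or
// '\n\r' as one indicator and return a pointer past it.
const char *toy_handle_newline(const char *p) {
    char first = *p++;
    // A following newline character of the *other* kind belongs to the
    // same indicator, so '\r\n' and '\n\r' each count once.
    if ((*p == '\r' || *p == '\n') && *p != first)
        ++p;
    return p;
}

// Count logical newlines in a buffer, treating all four forms uniformly.
std::size_t count_newlines(const char *p, const char *end) {
    std::size_t n = 0;
    while (p < end) {
        if (*p == '\r' || *p == '\n') { p = toy_handle_newline(p); ++n; }
        else ++p;
    }
    return n;
}
```

<p>Unix, MS-DOS and old Macintosh line endings then all yield the same line count without a conversion pass.</p>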
<p>We also decided to treat a backslash, either &lsquo;<samp>\</samp>&rsquo; or the trigraph
&lsquo;<samp>??/</samp>&rsquo;, separated from one of the above newline indicators by
non-comment whitespace only, as intending to escape the newline. It
tends to be a typing mistake, and cannot reasonably be mistaken for
anything else in any of the C-family grammars. Since handling it this
way is not strictly conforming to the ISO standard, the library issues a
warning wherever it encounters it.
</p>
<p>Handling newlines like this is made simpler by doing it in one place
only. The function <code>handle_newline</code> takes care of all newline
characters, and <code>skip_escaped_newlines</code> takes care of arbitrarily
long sequences of escaped newlines, deferring to <code>handle_newline</code>
to handle the newlines themselves.
</p>
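<p>The division of labour can be sketched like this. The code below is a deliberate simplification: it only treats a backslash or &lsquo;<samp>??/</samp>&rsquo; immediately before the newline, whereas the real routines also cope with intervening whitespace and issue the warnings described above:</p>

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Skip any run of escaped newlines starting at p and return a pointer to
// the first character that survives. Assumes a NUL-terminated buffer.
const char *toy_skip_escaped_newlines(const char *p) {
    for (;;) {
        const char *q = p;
        if (*q == '\\')
            ++q;
        else if (q[0] == '?' && q[1] == '?' && q[2] == '/')
            q += 3;                 // trigraph spelling of backslash
        else
            return p;
        if (*q == '\n' || *q == '\r')
            p = q + 1;              // escape consumed: continue after it
        else
            return p;               // a real backslash, not an escape
    }
}

// Gather the characters of an identifier, hopping over escaped newlines,
// in the spirit of parse_identifier / parse_number.
std::string toy_scan_ident(const char *p) {
    std::string out;
    for (;;) {
        p = toy_skip_escaped_newlines(p);
        if (!std::isalnum(static_cast<unsigned char>(*p)) && *p != '_')
            return out;
        out += *p++;
    }
}
```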
<p>The most painful aspect of lexing ISO-standard C and C++ is handling
trigraphs and backslash-escaped newlines. Trigraphs are processed before
any interpretation of the meaning of a character is made, and unfortunately
there is a trigraph representation for a backslash, so it is possible for
the trigraph &lsquo;<samp>??/</samp>&rsquo; to introduce an escaped newline.
</p>
<p>Escaped newlines are tedious because theoretically they can occur
anywhere&mdash;between the &lsquo;<samp>+</samp>&rsquo; and &lsquo;<samp>=</samp>&rsquo; of the &lsquo;<samp>+=</samp>&rsquo; token,
within the characters of an identifier, and even between the &lsquo;<samp>*</samp>&rsquo;
and &lsquo;<samp>/</samp>&rsquo; that terminate a comment. Moreover, you cannot be sure
there is just one&mdash;there might be an arbitrarily long sequence of them.
</p>
<p>So, for example, the routine that lexes a number, <code>parse_number</code>,
cannot assume that it can scan forwards until the first non-number
character and be done with it, because this could be the &lsquo;<samp>\</samp>&rsquo;
introducing an escaped newline, or the &lsquo;<samp>?</samp>&rsquo; introducing the trigraph
sequence that represents the &lsquo;<samp>\</samp>&rsquo; of an escaped newline. If it
encounters a &lsquo;<samp>?</samp>&rsquo; or &lsquo;<samp>\</samp>&rsquo;, it calls <code>skip_escaped_newlines</code>
to skip over any potential escaped newlines before checking whether the
number has been finished.
</p>
<p>Similarly, code in the main body of <code>_cpp_lex_direct</code> cannot simply
check for a &lsquo;<samp>=</samp>&rsquo; after a &lsquo;<samp>+</samp>&rsquo; character to determine whether it
has a &lsquo;<samp>+=</samp>&rsquo; token; it needs to be prepared for an escaped newline of
some sort. Such cases use the function <code>get_effective_char</code>, which
returns the first character after any intervening escaped newlines.
</p>
<p>The lexer needs to keep track of the correct column position, including
counting tabs as specified by the <samp>-ftabstop=</samp> option. This
should be done even within C-style comments; they can appear in the
middle of a line, and we want to report diagnostics in the correct
position for text appearing after the end of the comment.
</p>
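<p>The tab-stop arithmetic can be sketched as follows, with 0-based columns; cpplib&rsquo;s own accounting differs in detail:</p>

```cpp
#include <cassert>

// A tab advances the column to the next multiple of the tab stop; every
// other character advances it by one.
unsigned advance_column(unsigned col, char c, unsigned tabstop) {
    if (c == '\t')
        return col + tabstop - col % tabstop;  // jump to next tab stop
    return col + 1;
}

// Compute the column after scanning the first n characters of a line.
unsigned column_after(const char *s, unsigned n, unsigned tabstop) {
    unsigned col = 0;
    for (unsigned i = 0; i < n; ++i)
        col = advance_column(col, s[i], tabstop);
    return col;
}
```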
<span id="Invalid-identifiers"></span><p>Some identifiers, such as <code>__VA_ARGS__</code> and poisoned identifiers,
may be invalid and require a diagnostic. However, if they appear in a
macro expansion we don&rsquo;t want to complain with each use of the macro.
It is therefore best to catch them during the lexing stage, in
<code>parse_identifier</code>. In both cases, whether a diagnostic is needed
or not is dependent upon the lexer&rsquo;s state. For example, we don&rsquo;t want
to issue a diagnostic for re-poisoning a poisoned identifier, or for
using <code>__VA_ARGS__</code> in the expansion of a variable-argument macro.
Therefore <code>parse_identifier</code> makes use of state flags to determine
whether a diagnostic is appropriate. Since we change state on a
per-token basis, and don&rsquo;t lex whole lines at a time, this is not a
problem.
</p>
<p>Another place where state flags are used to change behavior is whilst
lexing header names. Normally, a &lsquo;<samp>&lt;</samp>&rsquo; would be lexed as a single
token. After a <code>#include</code> directive, though, it should be lexed as
a single token as far as the nearest &lsquo;<samp>&gt;</samp>&rsquo; character. Note that we
don&rsquo;t allow the terminators of header names to be escaped; the first
&lsquo;<samp>&quot;</samp>&rsquo; or &lsquo;<samp>&gt;</samp>&rsquo; terminates the header name.
</p>
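<p>The header-name mode can be sketched as follows; this is a hypothetical simplification, not cpplib&rsquo;s code:</p>

```cpp
#include <cassert>
#include <string>

// After #include, everything up to the nearest '>' (or '"') is one
// token, and the terminator cannot be escaped -- the first '>' or '"'
// always ends the name.
std::string toy_lex_header_name(const char *p) {
    char open = *p++;                       // '<' or '"'
    char close = (open == '<') ? '>' : '"';
    std::string name;
    while (*p && *p != close && *p != '\n')
        name += *p++;                       // backslashes do NOT escape here
    return name;
}
```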
<p>Interpretation of some character sequences depends upon whether we are
lexing C, C++ or Objective-C, and on the revision of the standard in
force. For example, &lsquo;<samp>::</samp>&rsquo; is a single token in C++, but in C it is
two separate &lsquo;<samp>:</samp>&rsquo; tokens and almost certainly a syntax error. Such
cases are handled by <code>_cpp_lex_direct</code> based upon command-line
flags stored in the <code>cpp_options</code> structure.
</p>
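<p>A sketch of such language-dependent lexing, with a single boolean standing in for the flags cpplib keeps in <code>cpp_options</code>:</p>

```cpp
#include <cassert>
#include <string>
#include <vector>

// Lex a string of characters into tokens: in C++ mode "::" is one token,
// in C mode it is two ':' tokens. A hypothetical miniature of the
// dispatch in _cpp_lex_direct.
std::vector<std::string> toy_lex_colons(const char *p, bool cxx) {
    std::vector<std::string> toks;
    while (*p) {
        if (p[0] == ':' && p[1] == ':' && cxx) {
            toks.push_back("::");           // single scope operator token
            p += 2;
        } else {
            toks.push_back(std::string(1, *p));
            ++p;
        }
    }
    return toks;
}
```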
<p>Once a token has been lexed, it leads an independent existence. The
spelling of numbers, identifiers and strings is copied to permanent
storage from the original input buffer, so a token remains valid and
correct even if its source buffer is freed with <code>_cpp_pop_buffer</code>.
The storage holding the spellings of such tokens remains until the
client program calls <code>cpp_destroy</code>, probably at the end of the
translation unit.
</p>
<span id="Lexing-a-line"></span><span id="Lexing-a-line-1"></span><h3 class="section">Lexing a line</h3>
<span id="index-token-run"></span>
<p>When the preprocessor was changed to return pointers to tokens, one
feature I wanted was some sort of guarantee regarding how long a
returned pointer remains valid. This is important to the stand-alone
preprocessor, the future direction of the C family front ends, and even
to cpplib itself internally.
</p>
<p>Occasionally the preprocessor wants to be able to peek ahead in the
token stream. For example, after the name of a function-like macro, it
wants to check the next token to see if it is an opening parenthesis.
Another example is that, after reading the first few tokens of a
<code>#pragma</code> directive and not recognizing it as a registered pragma,
it wants to backtrack and allow the user-defined handler for unknown
pragmas to access the full <code>#pragma</code> token stream. The stand-alone
preprocessor wants to be able to test the current token with the
previous one to see if a space needs to be inserted to preserve their
separate tokenization upon re-lexing (paste avoidance), so it needs to
be sure the pointer to the previous token is still valid. The
recursive-descent C++ parser wants to be able to perform tentative
parsing arbitrarily far ahead in the token stream, and then to be able
to jump back to a prior position in that stream if necessary.
</p>
<p>The rule I chose, which is fairly natural, is to arrange that the
preprocessor lex all tokens on a line consecutively into a token buffer,
which I call a <em>token run</em>, and when meeting an unescaped new line
(newlines within comments do not count either), to start lexing back at
the beginning of the run. Note that we do <em>not</em> lex a line of
tokens at once; if we did that <code>parse_identifier</code> would not have
state flags available to warn about invalid identifiers (see <a href="#Invalid-identifiers">Invalid identifiers</a>).
</p>
<p>In other words, accessing tokens that appeared earlier in the current
line is valid, but since each logical line overwrites the tokens of the
previous line, tokens from prior lines are unavailable. In particular,
since a directive only occupies a single logical line, this means that
the directive handlers like the <code>#pragma</code> handler can jump around
in the directive&rsquo;s tokens if necessary.
</p>
<p>Two issues remain: what about tokens that arise from macro expansions,
and what happens when we have a long line that overflows the token run?
</p>
<p>Since we promise clients that we preserve the validity of pointers that
we have already returned for tokens that appeared earlier in the line,
we cannot reallocate the run. Instead, on overflow it is expanded by
chaining a new token run on to the end of the existing one.
</p>
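<p>The chaining strategy can be sketched like this; sizes and names are hypothetical, and the sketch deliberately leaks the chained runs (cpplib keeps them alive for reuse):</p>

```cpp
#include <cassert>

struct run_tok { int value; };

// A fixed-size buffer of tokens, linked to the next run in the chain.
struct toy_run {
    static const unsigned RUN_SIZE = 4;
    run_tok tokens[RUN_SIZE];
    unsigned used = 0;
    toy_run *next = nullptr;
};

// Return a slot for the next token. On overflow we never reallocate
// (that would invalidate pointers already handed out); we chain a fresh
// run onto the end instead.
run_tok *next_slot(toy_run *&cur) {
    if (cur->used == toy_run::RUN_SIZE) {
        if (!cur->next)
            cur->next = new toy_run;  // extend the chain, keep old run alive
        cur = cur->next;
        cur->used = 0;
    }
    return &cur->tokens[cur->used++];
}
```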
<p>The tokens forming a macro&rsquo;s replacement list are collected by the
<code>#define</code> handler, and placed in storage that is only freed by
<code>cpp_destroy</code>. So if a macro is expanded in the line of tokens,
the pointers to the tokens of its expansion that are returned will always
remain valid. However, macros are a little trickier than that, since
they give rise to three sources of fresh tokens. They are the built-in
macros like <code>__LINE__</code>, and the &lsquo;<samp>#</samp>&rsquo; and &lsquo;<samp>##</samp>&rsquo; operators
for stringizing and token pasting. I handled this by allocating
space for these tokens from the lexer&rsquo;s token run chain. This means
they automatically receive the same lifetime guarantees as lexed tokens,
and we don&rsquo;t need to concern ourselves with freeing them.
</p>
<p>Lexing into a line of tokens solves some of the token memory management
issues, but not all. The opening parenthesis after a function-like
macro name might lie on a different line, and the front ends definitely
want the ability to look ahead past the end of the current line. So
cpplib only moves back to the start of the token run at the end of a
line if the variable <code>keep_tokens</code> is zero. Line-buffering is
quite natural for the preprocessor, and as a result the only time cpplib
needs to increment this variable is whilst looking for the opening
parenthesis to, and reading the arguments of, a function-like macro. In
the near future cpplib will export an interface to increment and
decrement this variable, so that clients can share full control over the
lifetime of token pointers too.
</p>
<p>The routine <code>_cpp_lex_token</code> handles moving to new token runs,
calling <code>_cpp_lex_direct</code> to lex new tokens, or returning
previously-lexed tokens if we stepped back in the token stream. It also
checks each token for the <code>BOL</code> flag, which might indicate a
directive that needs to be handled, or require a start-of-line call-back
to be made. <code>_cpp_lex_token</code> also handles skipping over tokens in
failed conditional blocks, and invalidates the control macro of the
multiple-include optimization if a token was successfully lexed outside
a directive. In other words, its callers do not need to concern
themselves with such issues.
</p>
<hr>
<div class="header">
<p>
Next: <a href="Hash-Nodes.html" accesskey="n" rel="next">Hash Nodes</a>, Previous: <a href="Conventions.html" accesskey="p" rel="prev">Conventions</a>, Up: <a href="index.html" accesskey="u" rel="up">Top</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html" title="Index" rel="index">Index</a>]</p>
</div>
</body>
</html>