@setfilename cppinternals.info
@settitle The GNU C Preprocessor Internals

@dircategory Programming
* Cpplib: (cppinternals).	Cpplib internals.

@setchapternewpage odd
This file documents the internals of the GNU C Preprocessor.

Copyright 2000, 2001 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).

Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@title Cpplib Internals
@subtitle Last revised December 2001
@subtitle for GCC version 3.1

@vskip 0pt plus 1filll
@c man begin COPYRIGHT
Copyright @copyright{} 2000, 2001
Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@chapter Cpplib---the core of the GNU C Preprocessor

The GNU C preprocessor in GCC 3.x has been completely rewritten.  It is
now implemented as a library, @dfn{cpplib}, so it can be easily shared
between a stand-alone preprocessor and a preprocessor integrated with
the C, C++ and Objective-C front ends.  It is also available for use by
other programs, though this is not recommended as its exposed interface
has not yet reached a point of reasonable stability.

The library has been written to be re-entrant, so that it can be used
to preprocess many files simultaneously if necessary.  It has also been
written with the preprocessing token as the fundamental unit; the
preprocessor in previous versions of GCC would operate on text strings
as the fundamental unit.

This brief manual documents the internals of cpplib, and explains some
of the tricky issues.  It is intended that, along with the comments in
the source code, a reasonably competent C programmer should be able to
figure out what the code is doing, and why things have been implemented
the way they have.
* Conventions::      Conventions used in the code.
* Lexer::            The combined C, C++ and Objective-C Lexer.
* Hash Nodes::       All identifiers are entered into a hash table.
* Macro Expansion::  Macro expansion algorithm.
* Token Spacing::    Spacing and paste avoidance issues.
* Line Numbering::   Tracking location within files.
* Guard Macros::     Optimizing header files with guard macros.
* Files::            File handling.
@unnumbered Conventions

cpplib has two interfaces---one is exposed internally only, and the
other is for both internal and external use.

The convention is that functions and types that are exposed to multiple
files internally are prefixed with @samp{_cpp_}, and are to be found in
the file @file{cpphash.h}.  Functions and types exposed to external
clients are in @file{cpplib.h}, and prefixed with @samp{cpp_}.  For
historical reasons this is no longer quite true, but we should strive to
stick to it.

We are striving to reduce the information exposed in @file{cpplib.h} to the
bare minimum necessary, and then to keep it there.  This makes clear
exactly what external clients are entitled to assume, and allows us to
change internals in the future without worrying whether library clients
are perhaps relying on some kind of undocumented implementation-specific
detail.
@unnumbered The Lexer
@cindex escaped newlines

The lexer is contained in the file @file{cpplex.c}.  It is a hand-coded
lexer, and not implemented as a state machine.  It can understand C, C++
and Objective-C source code, and has been extended to allow reasonably
successful preprocessing of assembly language.  The lexer does not make
an initial pass to strip out trigraphs and escaped newlines, but handles
them as they are encountered in a single pass of the input file.  It
returns preprocessing tokens individually, not a line at a time.

It is mostly transparent to users of the library, since the library's
interface for obtaining the next token, @code{cpp_get_token}, takes care
of lexing new tokens, handling directives, and expanding macros as
necessary.  However, the lexer does expose some functionality so that
clients of the library can easily spell a given token, such as
@code{cpp_spell_token} and @code{cpp_token_len}.  These functions are
useful when generating diagnostics, and for emitting the preprocessed
output.
@section Lexing a token
Lexing of an individual token is handled by @code{_cpp_lex_direct} and
its subroutines.  In its current form the code is quite complicated,
with read-ahead characters and such-like, since it strives not to step
back in the character stream in preparation for handling non-ASCII file
encodings.  The current plan is to convert any such files to UTF-8
before processing them.  This complexity is therefore unnecessary and
will be removed, so I'll not discuss it further here.

The job of @code{_cpp_lex_direct} is simply to lex a token.  It is not
responsible for issues like directive handling, returning lookahead
tokens directly, multiple-include optimization, or conditional block
skipping.  It necessarily has a minor r@^ole to play in memory
management of lexed lines.  I discuss these issues in a separate section
(@pxref{Lexing a line}).
The lexer places the token it lexes into storage pointed to by the
variable @code{cur_token}, and then increments it.  This variable is
important for correct diagnostic positioning.  Unless a specific line
and column are passed to the diagnostic routines, they will examine the
@code{line} and @code{col} values of the token just before the location
that @code{cur_token} points to, and use that location to report the
diagnostic.
The lexer does not consider whitespace to be a token in its own right.
If whitespace (other than a new line) precedes a token, it sets the
@code{PREV_WHITE} bit in the token's flags.  Each token has its
@code{line} and @code{col} variables set to the line and column of the
first character of the token.  This line number is the line number in
the translation unit, and can be converted to a source (file, line) pair
using the line map code.

The first token on a logical, i.e.@: unescaped, line has the flag
@code{BOL} set for beginning-of-line.  This flag is intended for
internal use, both to distinguish a @samp{#} that begins a directive
from one that doesn't, and to generate a call-back to clients that want
to be notified about the start of every non-directive line with tokens
on it.  Clients cannot reliably determine this for themselves: the first
token might be a macro, and the tokens of a macro expansion do not have
the @code{BOL} flag set.  The macro expansion may even be empty, and the
next token on the line certainly won't have the @code{BOL} flag set.
New lines are treated specially; exactly how the lexer handles them is
context-dependent.  The C standard mandates that directives are
terminated by the first unescaped newline character, even if it appears
in the middle of a macro expansion.  Therefore, if the state variable
@code{in_directive} is set, the lexer returns a @code{CPP_EOF} token,
which is normally used to indicate end-of-file, to indicate
end-of-directive.  In a directive a @code{CPP_EOF} token never means
end-of-file.  Conveniently, if the caller was @code{collect_args}, it
already handles @code{CPP_EOF} as if it were end-of-file, and reports an
error about an unterminated macro argument list.

The C standard also specifies that a new line in the middle of the
arguments to a macro is treated as whitespace.  This white space is
important in case the macro argument is stringified.  The state variable
@code{parsing_args} is nonzero when the preprocessor is collecting the
arguments to a macro call.  It is set to 1 when looking for the opening
parenthesis to a function-like macro, and 2 when collecting the actual
arguments up to the closing parenthesis, since these two cases need to
be distinguished sometimes.  One such time is here: the lexer sets the
@code{PREV_WHITE} flag of a token if it meets a new line when
@code{parsing_args} is set to 2.  It doesn't set it if it meets a new
line when @code{parsing_args} is 1, since then code like
@smallexample
#define foo() bar
foo
baz
@end smallexample

@noindent would be output with an erroneous space before @samp{baz}:

@smallexample
foo
 baz
@end smallexample
This is a good example of the subtlety of getting token spacing correct
in the preprocessor; there are plenty of tests in the test suite for
corner cases like this.

The lexer is written to treat each of @samp{\r}, @samp{\n}, @samp{\r\n}
and @samp{\n\r} as a single new line indicator.  This allows it to
transparently preprocess MS-DOS, Macintosh and Unix files without their
needing to pass through a special filter beforehand.
We also decided to treat a backslash, either @samp{\} or the trigraph
@samp{??/}, separated from one of the above newline indicators by
non-comment whitespace only, as intending to escape the newline.  It
tends to be a typing mistake, and cannot reasonably be mistaken for
anything else in any of the C-family grammars.  Since handling it this
way is not strictly conforming to the ISO standard, the library issues a
warning wherever it encounters it.

Handling newlines like this is made simpler by doing it in one place
only.  The function @code{handle_newline} takes care of all newline
characters, and @code{skip_escaped_newlines} takes care of arbitrarily
long sequences of escaped newlines, deferring to @code{handle_newline}
to handle the newlines themselves.
The most painful aspect of lexing ISO-standard C and C++ is handling
trigraphs and backslash-escaped newlines.  Trigraphs are processed before
any interpretation of the meaning of a character is made, and unfortunately
there is a trigraph representation for a backslash, so it is possible for
the trigraph @samp{??/} to introduce an escaped newline.

Escaped newlines are tedious because theoretically they can occur
anywhere---between the @samp{+} and @samp{=} of the @samp{+=} token,
within the characters of an identifier, and even between the @samp{*}
and @samp{/} that terminate a comment.  Moreover, you cannot be sure
there is just one---there might be an arbitrarily long sequence of them.
So, for example, the routine that lexes a number, @code{parse_number},
cannot assume that it can scan forwards until the first non-number
character and be done with it, because this could be the @samp{\}
introducing an escaped newline, or the @samp{?} introducing the trigraph
sequence that represents the @samp{\} of an escaped newline.  If it
encounters a @samp{?} or @samp{\}, it calls @code{skip_escaped_newlines}
to skip over any potential escaped newlines before checking whether the
number has been finished.
Similarly code in the main body of @code{_cpp_lex_direct} cannot simply
check for a @samp{=} after a @samp{+} character to determine whether it
has a @samp{+=} token; it needs to be prepared for an escaped newline of
some sort.  Such cases use the function @code{get_effective_char}, which
returns the first character after any intervening escaped newlines.
The lexer needs to keep track of the correct column position, including
counting tabs as specified by the @option{-ftabstop=} option.  This
should be done even within C-style comments; they can appear in the
middle of a line, and we want to report diagnostics in the correct
position for text appearing after the end of the comment.
@anchor{Invalid identifiers}
Some identifiers, such as @code{__VA_ARGS__} and poisoned identifiers,
may be invalid and require a diagnostic.  However, if they appear in a
macro expansion we don't want to complain with each use of the macro.
It is therefore best to catch them during the lexing stage, in
@code{parse_identifier}.  In both cases, whether a diagnostic is needed
or not is dependent upon the lexer's state.  For example, we don't want
to issue a diagnostic for re-poisoning a poisoned identifier, or for
using @code{__VA_ARGS__} in the expansion of a variable-argument macro.
Therefore @code{parse_identifier} makes use of state flags to determine
whether a diagnostic is appropriate.  Since we change state on a
per-token basis, and don't lex whole lines at a time, this is not a
problem.
Another place where state flags are used to change behavior is whilst
lexing header names.  Normally, a @samp{<} would be lexed as a single
token.  After a @code{#include} directive, though, it should be lexed as
a single token as far as the nearest @samp{>} character.  Note that we
don't allow the terminators of header names to be escaped; the first
@samp{"} or @samp{>} terminates the header name.
Interpretation of some character sequences depends upon whether we are
lexing C, C++ or Objective-C, and on the revision of the standard in
force.  For example, @samp{::} is a single token in C++, but in C it is
two separate @samp{:} tokens and almost certainly a syntax error.  Such
cases are handled by @code{_cpp_lex_direct} based upon command-line
flags stored in the @code{cpp_options} structure.

Once a token has been lexed, it leads an independent existence.  The
spelling of numbers, identifiers and strings is copied to permanent
storage from the original input buffer, so a token remains valid and
correct even if its source buffer is freed with @code{_cpp_pop_buffer}.
The storage holding the spellings of such tokens remains until the
client program calls @code{cpp_destroy}, probably at the end of the
translation unit.
@anchor{Lexing a line}
@section Lexing a line

When the preprocessor was changed to return pointers to tokens, one
feature I wanted was some sort of guarantee regarding how long a
returned pointer remains valid.  This is important to the stand-alone
preprocessor, the future direction of the C family front ends, and even
to cpplib itself internally.
Occasionally the preprocessor wants to be able to peek ahead in the
token stream.  For example, after the name of a function-like macro, it
wants to check the next token to see if it is an opening parenthesis.
Another example is that, after reading the first few tokens of a
@code{#pragma} directive and not recognizing it as a registered pragma,
it wants to backtrack and allow the user-defined handler for unknown
pragmas to access the full @code{#pragma} token stream.  The stand-alone
preprocessor wants to be able to test the current token with the
previous one to see if a space needs to be inserted to preserve their
separate tokenization upon re-lexing (paste avoidance), so it needs to
be sure the pointer to the previous token is still valid.  The
recursive-descent C++ parser wants to be able to perform tentative
parsing arbitrarily far ahead in the token stream, and then to be able
to jump back to a prior position in that stream if necessary.
The rule I chose, which is fairly natural, is to arrange that the
preprocessor lex all tokens on a line consecutively into a token buffer,
which I call a @dfn{token run}, and when meeting an unescaped new line
(newlines within comments do not count either), to start lexing back at
the beginning of the run.  Note that we do @emph{not} lex a line of
tokens at once; if we did that @code{parse_identifier} would not have
state flags available to warn about invalid identifiers (@pxref{Invalid
identifiers}).
In other words, accessing tokens that appeared earlier in the current
line is valid, but since each logical line overwrites the tokens of the
previous line, tokens from prior lines are unavailable.  In particular,
since a directive only occupies a single logical line, this means that
the directive handlers like the @code{#pragma} handler can jump around
in the directive's tokens if necessary.

Two issues remain: what about tokens that arise from macro expansions,
and what happens when we have a long line that overflows the token run?

Since we promise clients that we preserve the validity of pointers that
we have already returned for tokens that appeared earlier in the line,
we cannot reallocate the run.  Instead, on overflow it is expanded by
chaining a new token run on to the end of the existing one.
The tokens forming a macro's replacement list are collected by the
@code{#define} handler, and placed in storage that is only freed by
@code{cpp_destroy}.  So if a macro is expanded in our line of tokens,
the pointers to the tokens of its expansion that we return will always
remain valid.  However, macros are a little trickier than that, since
they give rise to three sources of fresh tokens.  They are the built-in
macros like @code{__LINE__}, and the @samp{#} and @samp{##} operators
for stringification and token pasting.  I handled this by allocating
space for these tokens from the lexer's token run chain.  This means
they automatically receive the same lifetime guarantees as lexed tokens,
and we don't need to concern ourselves with freeing them.
Lexing into a line of tokens solves some of the token memory management
issues, but not all.  The opening parenthesis after a function-like
macro name might lie on a different line, and the front ends definitely
want the ability to look ahead past the end of the current line.  So
cpplib only moves back to the start of the token run at the end of a
line if the variable @code{keep_tokens} is zero.  Line-buffering is
quite natural for the preprocessor, and as a result the only time cpplib
needs to increment this variable is whilst looking for the opening
parenthesis to, and reading the arguments of, a function-like macro.  In
the near future cpplib will export an interface to increment and
decrement this variable, so that clients can share full control over the
lifetime of token pointers too.
The routine @code{_cpp_lex_token} handles moving to new token runs,
calling @code{_cpp_lex_direct} to lex new tokens, or returning
previously-lexed tokens if we stepped back in the token stream.  It also
checks each token for the @code{BOL} flag, which might indicate a
directive that needs to be handled, or require a start-of-line call-back
to be made.  @code{_cpp_lex_token} also handles skipping over tokens in
failed conditional blocks, and invalidates the control macro of the
multiple-include optimization if a token was successfully lexed outside
a directive.  In other words, its callers do not need to concern
themselves with such issues.
@unnumbered Hash Nodes
@cindex named operators

When cpplib encounters an ``identifier'', it generates a hash code for
it and stores it in the hash table.  By ``identifier'' we mean tokens
with type @code{CPP_NAME}; this includes identifiers in the usual C
sense, as well as keywords, directive names, macro names and so on.  For
example, all of @code{pragma}, @code{int}, @code{foo} and
@code{__GNUC__} are identifiers and hashed when lexed.

Each node in the hash table contains various information about the
identifier it represents, such as its length and type.  At any one
time, each identifier falls into exactly one of three categories:
These have been declared to be macros, either on the command line or
with @code{#define}.  A few, such as @code{__TIME__}, are built-ins
entered in the hash table during initialization.  The hash node for a
normal macro points to a structure with more information about the
macro, such as whether it is function-like, how many arguments it takes,
and its expansion.  Built-in macros are flagged as special, and instead
contain an enum indicating which of the various built-in macros it is.
Assertions are in a separate namespace from macros.  To enforce this,
cpp actually prepends a @code{#} character before hashing and entering
it in the hash table.  An assertion's node points to a chain of answers
to that assertion.
Everything else falls into this category---an identifier that is not
currently a macro, or a macro that has since been undefined with
@code{#undef}.

When preprocessing C++, this category also includes the named operators,
such as @code{xor}.  In expressions these behave like the operators they
represent, but in contexts where the spelling of a token matters they
are spelt differently.  This spelling distinction is relevant when they
are operands of the stringizing and pasting macro operators @code{#} and
@code{##}.  Named operator hash nodes are flagged, both to catch the
spelling distinction and to prevent them from being defined as macros.
The same identifiers share the same hash node.  Since each identifier
token, after lexing, contains a pointer to its hash node, this is used
to provide rapid lookup of various information.  For example, when
parsing a @code{#define} statement, CPP flags each argument's identifier
hash node with the index of that argument.  This makes duplicated
argument checking an O(1) operation for each argument.  Similarly, for
each identifier in the macro's expansion, lookup to see if it is an
argument, and which argument it is, is also an O(1) operation.  Further,
each directive name, such as @code{endif}, has an associated directive
enum stored in its hash node, so that directive lookup is also O(1).
@node Macro Expansion
@unnumbered Macro Expansion Algorithm
@cindex macro expansion

Macro expansion is a surprisingly tricky operation, fraught with nasty
corner cases and situations that render what you thought was a nifty
way to optimize the preprocessor's expansion algorithm wrong in quite
subtle ways.

I strongly recommend you have a good grasp of how the C and C++
standards require macros to be expanded before diving into this
section, let alone the code!  If you don't have a clear mental
picture of how things like nested macro expansion, stringification and
token pasting are supposed to work, damage to your sanity can quickly
result.
@section Internal representation of macros
@cindex macro representation (internal)

The preprocessor stores macro expansions in tokenized form.  This
saves repeated lexing passes during expansion, at the cost of a small
increase in memory consumption on average.  The tokens are stored
contiguously in memory, so a pointer to the first one and a token
count is all we need.

If the macro is a function-like macro the preprocessor also stores its
parameters, in the form of an ordered list of pointers to the hash
table entry of each parameter's identifier.  Further, in the macro's
stored expansion each occurrence of a parameter is replaced with a
special token of type @code{CPP_MACRO_ARG}.  Each such token holds the
index of the parameter it represents in the parameter list, which
allows rapid replacement of parameters with their arguments during
expansion.  Despite this optimization it is still necessary to store
the original parameters to the macro, both for dumping with, e.g.,
@option{-dD}, and to warn about non-trivial macro redefinitions when
the parameter names have changed.
@section Nested object-like macros

@section Function-like macros
@unnumbered Token Spacing
@cindex paste avoidance
@cindex token spacing

First, let's look at an issue that only concerns the stand-alone
preprocessor: we want to guarantee that re-reading its preprocessed
output results in an identical token stream.  Without taking special
measures, this might not be the case because of macro substitution.
@smallexample
#define PLUS +
#define EMPTY
#define f(x) =x=
+PLUS -EMPTY- PLUS+ f(=)
        @expansion{} + + - - + + = = =
@emph{not}
        @expansion{} ++ -- ++ ===
@end smallexample
One solution would be to simply insert a space between all adjacent
tokens.  However, we would like to keep space insertion to a minimum,
both for aesthetic reasons and because it causes problems for people who
still try to abuse the preprocessor for things like Fortran source and
makefiles.

For now, just notice that when tokens are added (or removed, as shown by
the @code{EMPTY} example) from the original lexed token stream, we need
to check for accidental token pasting.  We call this @dfn{paste
avoidance}.  Token addition and removal can only occur because of macro
expansion, but accidental pasting can occur in many places: both before
and after each macro replacement, each argument replacement, and
additionally each token created by the @samp{#} and @samp{##} operators.
Let's look at how the preprocessor gets whitespace output correct
normally.  The @code{cpp_token} structure contains a flags byte, and one
of those flags is @code{PREV_WHITE}.  This is flagged by the lexer, and
indicates that the token was preceded by whitespace of some form other
than a new line.  The stand-alone preprocessor can use this flag to
decide whether to insert a space between tokens in the output.
Now consider the result of the following macro expansion:

@smallexample
#define add(x, y, z) x + y +z;
sum = add (1, 2, 3);
        @expansion{} sum = 1 + 2 +3;
@end smallexample
The interesting thing here is that the tokens @samp{1} and @samp{2} are
output with a preceding space, and @samp{3} is output without a
preceding space, but when lexed none of these tokens had that property.
Careful consideration reveals that @samp{1} gets its preceding
whitespace from the space preceding @samp{add} in the macro invocation,
@emph{not} the replacement list.  @samp{2} gets its whitespace from the
space preceding the parameter @samp{y} in the macro replacement list,
and @samp{3} has no preceding space because parameter @samp{z} has none
in the replacement list.
Once lexed, tokens are effectively fixed and cannot be altered, since
pointers to them might be held in many places, in particular by
in-progress macro expansions.  So instead of modifying the two tokens
above, the preprocessor inserts a special token, which I call a
@dfn{padding token}, into the token stream to indicate that spacing of
the subsequent token is special.  The preprocessor inserts padding
tokens in front of every macro expansion and expanded macro argument.
These point to a @dfn{source token} from which the subsequent real token
should inherit its spacing.  In the above example, the source tokens are
@samp{add} in the macro invocation, and @samp{y} and @samp{z} in the
macro replacement list, respectively.
It is quite easy to get multiple padding tokens in a row, for example if
a macro's first replacement token expands straight into another macro:

@smallexample
#define foo bar
#define bar baz
[foo]
@end smallexample

Here, two padding tokens are generated with sources the @samp{foo} token
between the brackets, and the @samp{bar} token from foo's replacement
list, respectively.  Clearly the first padding token is the one we
should use, so our output code should contain a rule that the first
padding token in a sequence is the one that matters.
But what if we happen to leave a macro expansion?  Adjusting the above
example slightly:

@smallexample
#define foo bar
#define bar EMPTY baz
#define EMPTY
[foo] EMPTY;
        @expansion{} [ baz] ;
@end smallexample
As shown, now there should be a space before @samp{baz} and the
semicolon in the output.

The rules we decided above fail for @samp{baz}: we generate three
padding tokens, one per macro invocation, before the token @samp{baz}.
We would then have it take its spacing from the first of these, which
carries source token @samp{foo} with no leading space.

It is vital that cpplib get spacing correct in these examples since any
of these macro expansions could be stringified, where spacing matters.
So, this demonstrates that not just entering macro and argument
expansions, but leaving them requires special handling too.  I made
cpplib insert a padding token with a @code{NULL} source token when
leaving macro expansions, as well as after each replaced argument in a
macro's replacement list.  It also inserts appropriate padding tokens on
either side of tokens created by the @samp{#} and @samp{##} operators.
I expanded the rule so that, if we see a padding token with a
@code{NULL} source token, @emph{and} that padding token has no leading
space, then we behave as if we have seen no padding tokens at all.  A
quick check shows this rule will then get the above example correct as
well.
Now a relationship with paste avoidance is apparent: we have to be
careful about paste avoidance in exactly the same locations we have
padding tokens in order to get white space correct.  This makes
implementation of paste avoidance easy: wherever the stand-alone
preprocessor is fixing up spacing because of padding tokens, and it
turns out that no space is needed, it has to take the extra step to
check that a space is not needed after all to avoid an accidental paste.
The function @code{cpp_avoid_paste} advises whether a space is required
between two consecutive tokens.  To avoid excessive spacing, it tries
hard to only require a space if one is likely to be necessary, but for
reasons of efficiency it is slightly conservative and might recommend a
space where one is not strictly needed.
@unnumbered Line numbering

@section Just which line number anyway?

There are three reasonable requirements a cpplib client might have for
the line number of a token passed to it:

@itemize @bullet
@item
The source line it was lexed on.

@item
The line it is output on.  This can be different from the line it was
lexed on if, for example, there are intervening escaped newlines or
C-style comments.

@item
If the token results from a macro expansion, the line of the macro name,
or possibly the line of the closing parenthesis in the case of
function-like macro expansion.
@end itemize
The @code{cpp_token} structure contains @code{line} and @code{col}
members.  The lexer fills these in with the line and column of the first
character of the token.  Consequently, but maybe unexpectedly, a token
from the replacement list of a macro expansion carries the location of
the token within the @code{#define} directive, because cpplib expands a
macro by returning pointers to the tokens in its replacement list.  The
current implementation of cpplib assigns tokens created from built-in
macros and the @samp{#} and @samp{##} operators the location of the most
recently lexed token.  This is because they are allocated from the
lexer's token runs, and because of the way the diagnostic routines infer
the appropriate location to report.
692 The diagnostic routines in cpplib display the location of the most
693 recently @emph{lexed} token, unless they are passed a specific line and
694 column to report. For diagnostics regarding tokens that arise from
695 macro expansions, it might also be helpful for the user to see the
696 original location in the macro definition that the token came from.
697 Since that is exactly the information each token carries, such an
698 enhancement could be made relatively easily in future.
700 The stand-alone preprocessor faces a similar problem when determining
701 the correct line to output the token on: the position attached to a
702 token is fairly useless if the token came from a macro expansion. All
703 tokens on a logical line should be output on its first physical line, so
704 the token's reported location is also wrong if it is part of a physical
705 line other than the first.
To solve these issues, cpplib provides a callback that is invoked
708 whenever it lexes a preprocessing token that starts a new logical line
709 other than a directive. It passes this token (which may be a
710 @code{CPP_EOF} token indicating the end of the translation unit) to the
711 callback routine, which can then use the line and column of this token
712 to produce correct output.
714 @section Representation of line numbers
716 As mentioned above, cpplib stores with each token the line number that
717 it was lexed on. In fact, this number is not the number of the line in
718 the source file, but instead bears more resemblance to the number of the
719 line in the translation unit.
The preprocessor maintains a monotonically increasing line count, which
is incremented at every newline character (and also at the end of any
buffer that does not end in a newline). Since a line number of zero is
724 useful to indicate certain special states and conditions, this variable
725 starts counting from one.
727 This variable therefore uniquely enumerates each line in the translation
unit. With some simple infrastructure, it is straightforward to map
from this to the original source file and line number pair, saving space
whenever line number information needs to be stored. The code that
implements this mapping lies in the files @file{line-map.c} and
734 Command-line macros and assertions are implemented by pushing a buffer
735 containing the right hand side of an equivalent @code{#define} or
736 @code{#assert} directive. Some built-in macros are handled similarly.
737 Since these are all processed before the first line of the main input
file, the first line of the main file will typically be assigned a line
number closer to twenty than to
742 @unnumbered The Multiple-Include Optimization
744 @cindex controlling macros
745 @cindex multiple-include optimization
747 Header files are often of the form
757 to prevent the compiler from processing them more than once. The
758 preprocessor notices such header files, so that if the header file
759 appears in a subsequent @code{#include} directive and @code{FOO} is
defined, then the directive is ignored and the file is not preprocessed
or even re-opened a second time. This is referred to as the @dfn{multiple
762 include optimization}.
764 Under what circumstances is such an optimization valid? If the file
were included a second time, the inclusion can only be optimized away
if it would result in no tokens to return, and no relevant
767 directives to process. Therefore the current implementation imposes
768 requirements and makes some allowances as follows:
772 There must be no tokens outside the controlling @code{#if}-@code{#endif}
773 pair, but whitespace and comments are permitted.
776 There must be no directives outside the controlling directive pair, but
777 the @dfn{null directive} (a line containing nothing other than a single
778 @samp{#} and possibly whitespace) is permitted.
781 The opening directive must be of the form
790 #if !defined FOO [equivalently, #if !defined(FOO)]
794 In the second form above, the tokens forming the @code{#if} expression
795 must have come directly from the source file---no macro expansion must
796 have been involved. This is because macro definitions can change, and
797 tracking whether or not a relevant change has been made is not worth the
801 There can be no @code{#else} or @code{#elif} directives at the outer
802 conditional block level, because they would probably contain something
803 of interest to a subsequent pass.
806 First, when pushing a new file on the buffer stack,
@code{stack_include_file} sets the controlling macro @code{mi_cmacro} to
808 @code{NULL}, and sets @code{mi_valid} to @code{true}. This indicates
809 that the preprocessor has not yet encountered anything that would
810 invalidate the multiple-include optimization. As described in the next
811 few paragraphs, these two variables having these values effectively
812 indicates top-of-file.
814 When about to return a token that is not part of a directive,
815 @code{_cpp_lex_token} sets @code{mi_valid} to @code{false}. This
816 enforces the constraint that tokens outside the controlling conditional
817 block invalidate the optimization.
The @code{do_ifndef} directive handler, and @code{do_if} when
appropriate, pass the controlling macro to the function
821 @code{push_conditional}. cpplib maintains a stack of nested conditional
822 blocks, and after processing every opening conditional this function
823 pushes an @code{if_stack} structure onto the stack. In this structure
824 it records the controlling macro for the block, provided there is one
825 and we're at top-of-file (as described above). If an @code{#elif} or
826 @code{#else} directive is encountered, the controlling macro for that
827 block is cleared to @code{NULL}. Otherwise, it survives until the
828 @code{#endif} closing the block, upon which @code{do_endif} sets
829 @code{mi_valid} to true and stores the controlling macro in
832 @code{_cpp_handle_directive} clears @code{mi_valid} when processing any
833 directive other than an opening conditional and the null directive.
With this, with top-of-file being required to record a controlling
macro, and with no @code{#else} or @code{#elif} allowed if that macro is
to survive and be copied to @code{mi_cmacro} by @code{do_endif}, we have
enforced the absence of directives outside the main conditional block
for the optimization to be
840 Note that whilst we are inside the conditional block, @code{mi_valid} is
likely to be reset to @code{false}, but this does not matter since
the closing @code{#endif} restores it to @code{true} if appropriate.
844 Finally, since @code{_cpp_lex_direct} pops the file off the buffer stack
845 at @code{EOF} without returning a token, if the @code{#endif} directive
846 was not followed by any tokens, @code{mi_valid} is @code{true} and
847 @code{_cpp_pop_file_buffer} remembers the controlling macro associated
848 with the file. Subsequent calls to @code{stack_include_file} result in
849 no buffer being pushed if the controlling macro is defined, effecting
852 A quick word on how we handle the
859 case. @code{_cpp_parse_expr} and @code{parse_defined} take steps to see
860 whether the three stages @samp{!}, @samp{defined-expression} and
861 @samp{end-of-directive} occur in order in a @code{#if} expression. If
862 so, they return the guard macro to @code{do_if} in the variable
863 @code{mi_ind_cmacro}, and otherwise set it to @code{NULL}.
@code{enter_macro_context} sets @code{mi_valid} to @code{false}, so if a macro
865 was expanded whilst parsing any part of the expression, then the
866 top-of-file test in @code{push_conditional} fails and the optimization
870 @unnumbered File Handling
873 Fairly obviously, the file handling code of cpplib resides in the file
874 @file{cppfiles.c}. It takes care of the details of file searching,
875 opening, reading and caching, for both the main source file and all the
876 headers it recursively includes.
878 The basic strategy is to minimize the number of system calls. On many
879 systems, the basic @code{open ()} and @code{fstat ()} system calls can
880 be quite expensive. For every @code{#include}-d file, we need to try
881 all the directories in the search path until we find a match. Some
882 projects, such as glibc, pass twenty or thirty include paths on the
883 command line, so this can rapidly become time consuming.
For a header file we have not encountered before, we have little choice
886 but to do this. However, it is often the case that the same headers are
887 repeatedly included, and in these cases we try to avoid repeating the
888 filesystem queries whilst searching for the correct file.
890 For each file we try to open, we store the constructed path in a splay
891 tree. This path first undergoes simplification by the function
892 @code{_cpp_simplify_pathname}. For example,
893 @file{/usr/include/bits/../foo.h} is simplified to
894 @file{/usr/include/foo.h} before we enter it in the splay tree and try
895 to @code{open ()} the file. CPP will then find subsequent uses of
896 @file{foo.h}, even as @file{/usr/include/foo.h}, in the splay tree and
899 Further, it is likely the file contents have also been cached, saving a
900 @code{read ()} system call. We don't bother caching the contents of
901 header files that are re-inclusion protected, and whose re-inclusion
902 macro is defined when we leave the header file for the first time. If
903 the host supports it, we try to map suitably large files into memory,
904 rather than reading them in directly.
906 The include paths are internally stored on a null-terminated
907 singly-linked list, starting with the @code{"header.h"} directory search
908 chain, which then links into the @code{<header.h>} directory chain.
910 Files included with the @code{<foo.h>} syntax start the lookup directly
911 in the second half of this chain. However, files included with the
912 @code{"foo.h"} syntax start at the beginning of the chain, but with one
913 extra directory prepended. This is the directory of the current file;
914 the one containing the @code{#include} directive. Prepending this
915 directory on a per-file basis is handled by the function
918 Note that a header included with a directory component, such as
919 @code{#include "mydir/foo.h"} and opened as
920 @file{/usr/local/include/mydir/foo.h}, will have the complete path minus
921 the basename @samp{foo.h} as the current directory.
923 Enough information is stored in the splay tree that CPP can immediately
924 tell whether it can skip the header file because of the multiple include
925 optimization, whether the file didn't exist or couldn't be opened for
926 some reason, or whether the header was flagged not to be re-used, as it
927 is with the obsolete @code{#import} directive.
929 For the benefit of MS-DOS filesystems with an 8.3 filename limitation,
930 CPP offers the ability to treat various include file names as aliases
931 for the real header files with shorter names. The map from one to the
932 other is found in a special file called @samp{header.gcc}, stored in the
933 command line (or system) include directories to which the mapping
934 applies. This may be higher up the directory tree than the full path to
935 the file minus the base name.