/* Dataflow support routines.
   Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004
   Free Software Foundation, Inc.
   Contributed by Michael P. Hayes (m.hayes@elec.canterbury.ac.nz,

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
This file provides some dataflow routines for computing reaching defs,
upward exposed uses, live variables, def-use chains, and use-def
chains.  The global dataflow is performed using simple iterative
methods with a worklist and could be sped up by ordering the blocks
with a depth first search order.

A `struct ref' data structure (ref) is allocated for every register
reference (def or use) and this records the insn and bb the ref is
found within.  The refs are linked together in chains of uses and defs
for each insn and for each register.  Each ref also has a chain field
that links all the use refs for a def or all the def refs for a use.
This is used to create use-def or def-use chains.
Here's an example of using the dataflow routines.

      df = df_init ();

      df_analyze (df, 0, DF_ALL);

      df_dump (df, DF_ALL, stderr);

      df_finish (df);

df_init simply creates a poor man's object (df) that needs to be
passed to all the dataflow routines.  df_finish destroys this
object and frees up any allocated memory.  DF_ALL says to analyse
everything.
df_analyze performs the following:

1. Records defs and uses by scanning the insns in each basic block
   or by scanning the insns queued by df_insn_modify.
2. Links defs and uses into insn-def and insn-use chains.
3. Links defs and uses into reg-def and reg-use chains.
4. Assigns LUIDs to each insn (for modified blocks).
5. Calculates local reaching definitions.
6. Calculates global reaching definitions.
7. Creates use-def chains.
8. Calculates local reaching uses (upwards exposed uses).
9. Calculates global reaching uses.
10. Creates def-use chains.
11. Calculates local live registers.
12. Calculates global live registers.
13. Calculates register lifetimes and determines local registers.
Note that the dataflow information is not updated for every newly
deleted or created insn.  If the dataflow information requires
updating then all the changed, new, or deleted insns need to be
marked with df_insn_modify (or df_insns_modify) either directly or
indirectly (say through calling df_insn_delete).  df_insn_modify
marks all the modified insns to get processed the next time df_analyze
is called.

Beware that tinkering with insns may invalidate the dataflow information.
The philosophy behind these routines is that once the dataflow
information has been gathered, the user should store what they require
before they tinker with any insn.  Once a reg is replaced, for example,
then the reg-def/reg-use chains will point to the wrong place.  Once a
whole lot of changes have been made, df_analyze can be called again
to update the dataflow information.  Currently, this is not very smart
with regard to propagating changes to the dataflow so it should not
The basic object is a REF (reference) and this may either be a DEF
(definition) or a USE of a register.

These are linked into a variety of lists; namely reg-def, reg-use,
insn-def, insn-use, def-use, and use-def lists.  For example, the
reg-def lists contain all the refs that define a given register
while the insn-use lists contain all the refs used by an insn.

Note that the reg-def and reg-use chains are generally short (except for
the hard registers) and thus it is much faster to search these chains
rather than searching the def or use bitmaps.

If the insns are in SSA form then the reg-def and use-def lists
should only contain the single defining ref.
1) Incremental dataflow analysis.

Note that if a loop invariant insn is hoisted (or sunk), we do not
need to change the def-use or use-def chains.  All we have to do is to
change the bb field for all the associated defs and uses and to
renumber the LUIDs for the original and new basic blocks of the insn.

When shadowing loop mems we create new uses and defs for new pseudos
so we do not affect the existing dataflow information.

My current strategy is to queue up all modified, created, or deleted
insns so when df_analyze is called we can easily determine all the new
or deleted refs.  Currently the global dataflow information is
recomputed from scratch but this could be propagated more efficiently.

2) Reduced memory requirements.

We could operate a pool of ref structures.  When a ref is deleted it
gets returned to the pool (say by linking on to a chain of free refs).
This will require a pair of bitmaps for defs and uses so that we can
tell which ones have been changed.  Alternatively, we could
periodically squeeze the def and use tables and associated bitmaps and
renumber the def and use ids.
3) Ordering of reg-def and reg-use lists.

Should the first entry in the def list be the first def (within a BB)?
Similarly, should the first entry in the use list be the last use
(within a BB)?

4) Working with a sub-CFG.

Often the whole CFG does not need to be analyzed, for example,
when optimizing a loop, only certain registers are of interest.
Perhaps there should be a bitmap argument to df_analyze to specify
which registers should be analyzed?
Embedded addressing side-effects, such as POST_INC or PRE_INC, generate
both a use and a def.  These are both marked read/write to show that they
are dependent.  For example, (set (reg 40) (mem (post_inc (reg 42))))
will generate a use of reg 42 followed by a def of reg 42 (both marked
read/write).  Similarly, (set (reg 40) (mem (pre_dec (reg 41))))
generates a use of reg 41 then a def of reg 41 (both marked read/write),
even though reg 41 is decremented before it is used for the memory
address in this second example.

A set to a REG inside a ZERO_EXTRACT, SIGN_EXTRACT, or SUBREG invokes
a read-modify-write operation.  We generate both a use and a def
and again mark them read/write.  */
#include "coretypes.h"
#include "insn-config.h"
#include "function.h"
#include "alloc-pool.h"
#include "hard-reg-set.h"
#include "basic-block.h"
#define FOR_EACH_BB_IN_BITMAP(BITMAP, MIN, BB, CODE)		\
  do								\
    {								\
      unsigned int node_;					\
      EXECUTE_IF_SET_IN_BITMAP (BITMAP, MIN, node_,		\
				{(BB) = BASIC_BLOCK (node_); CODE;}); \
    }								\
  while (0)
static alloc_pool df_ref_pool;
static alloc_pool df_link_pool;
static struct df *ddf;

static void df_reg_table_realloc (struct df *, int);
static void df_insn_table_realloc (struct df *, unsigned int);
static void df_bitmaps_alloc (struct df *, int);
static void df_bitmaps_free (struct df *, int);
static void df_free (struct df *);
static void df_alloc (struct df *, int);

static rtx df_reg_clobber_gen (unsigned int);
static rtx df_reg_use_gen (unsigned int);

static inline struct df_link *df_link_create (struct ref *, struct df_link *);
static struct df_link *df_ref_unlink (struct df_link **, struct ref *);
static void df_def_unlink (struct df *, struct ref *);
static void df_use_unlink (struct df *, struct ref *);
static void df_insn_refs_unlink (struct df *, basic_block, rtx);

static void df_bb_refs_unlink (struct df *, basic_block);
static void df_refs_unlink (struct df *, bitmap);

static struct ref *df_ref_create (struct df *, rtx, rtx *, rtx,
				  enum df_ref_type, enum df_ref_flags);
static void df_ref_record_1 (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			     enum df_ref_flags);
static void df_ref_record (struct df *, rtx, rtx *, rtx, enum df_ref_type,
			   enum df_ref_flags);
static void df_def_record_1 (struct df *, rtx, basic_block, rtx);
static void df_defs_record (struct df *, rtx, basic_block, rtx);
static void df_uses_record (struct df *, rtx *, enum df_ref_type,
			    basic_block, rtx, enum df_ref_flags);
static void df_insn_refs_record (struct df *, basic_block, rtx);
static void df_bb_refs_record (struct df *, basic_block);
static void df_refs_record (struct df *, bitmap);

static void df_bb_reg_def_chain_create (struct df *, basic_block);
static void df_reg_def_chain_create (struct df *, bitmap);
static void df_bb_reg_use_chain_create (struct df *, basic_block);
static void df_reg_use_chain_create (struct df *, bitmap);
static void df_bb_du_chain_create (struct df *, basic_block, bitmap);
static void df_du_chain_create (struct df *, bitmap);
static void df_bb_ud_chain_create (struct df *, basic_block);
static void df_ud_chain_create (struct df *, bitmap);
static void df_bb_rd_local_compute (struct df *, basic_block);
static void df_rd_local_compute (struct df *, bitmap);
static void df_bb_ru_local_compute (struct df *, basic_block);
static void df_ru_local_compute (struct df *, bitmap);
static void df_bb_lr_local_compute (struct df *, basic_block);
static void df_lr_local_compute (struct df *, bitmap);
static void df_bb_reg_info_compute (struct df *, basic_block, bitmap);
static void df_reg_info_compute (struct df *, bitmap);

static int df_bb_luids_set (struct df *df, basic_block);
static int df_luids_set (struct df *df, bitmap);

static int df_modified_p (struct df *, bitmap);
static int df_refs_queue (struct df *);
static int df_refs_process (struct df *);
static int df_bb_refs_update (struct df *, basic_block);
static int df_refs_update (struct df *);
static void df_analyze_1 (struct df *, bitmap, int, int);

static void df_insns_modify (struct df *, basic_block, rtx, rtx);
static int df_rtx_mem_replace (rtx *, void *);
static int df_rtx_reg_replace (rtx *, void *);
void df_refs_reg_replace (struct df *, bitmap, struct df_link *, rtx, rtx);

static int df_def_dominates_all_uses_p (struct df *, struct ref *def);
static int df_def_dominates_uses_p (struct df *, struct ref *def, bitmap);
static struct ref *df_bb_insn_regno_last_use_find (struct df *, basic_block,
						   rtx, unsigned int);
static struct ref *df_bb_insn_regno_first_def_find (struct df *, basic_block,
						    rtx, unsigned int);

static void df_chain_dump (struct df_link *, FILE *file);
static void df_chain_dump_regno (struct df_link *, FILE *file);
static void df_regno_debug (struct df *, unsigned int, FILE *);
static void df_ref_debug (struct df *, struct ref *, FILE *);
static void df_rd_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_ru_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void df_lr_transfer_function (int, int *, bitmap, bitmap, bitmap,
				     bitmap, void *);
static void hybrid_search_bitmap (basic_block, bitmap *, bitmap *,
				  bitmap *, bitmap *, enum df_flow_dir,
				  enum df_confluence_op,
				  transfer_function_bitmap,
				  sbitmap, sbitmap, void *);
static void hybrid_search_sbitmap (basic_block, sbitmap *, sbitmap *,
				   sbitmap *, sbitmap *, enum df_flow_dir,
				   enum df_confluence_op,
				   transfer_function_sbitmap,
				   sbitmap, sbitmap, void *);
/* Local memory allocation/deallocation routines.  */


/* Increase the insn info table to have space for at least SIZE + 1
   elements.  */
static void
df_insn_table_realloc (struct df *df, unsigned int size)
{
  size++;
  if (size <= df->insn_size)
    return;

  /* Make the table a little larger than requested, so we do not need
     to enlarge it so often.  */
  size += df->insn_size / 4;

  df->insns = xrealloc (df->insns, size * sizeof (struct insn_info));

  memset (df->insns + df->insn_size, 0,
	  (size - df->insn_size) * sizeof (struct insn_info));

  df->insn_size = size;

  if (! df->insns_modified)
    {
      df->insns_modified = BITMAP_XMALLOC ();
      bitmap_zero (df->insns_modified);
    }
}
/* Increase the reg info table by SIZE more elements.  */
static void
df_reg_table_realloc (struct df *df, int size)
{
  /* Make table 25 percent larger by default.  */
  if (! size)
    size = df->reg_size / 4;

  size += df->reg_size;
  if (size < max_reg_num ())
    size = max_reg_num ();

  df->regs = xrealloc (df->regs, size * sizeof (struct reg_info));

  /* Zero the new entries.  */
  memset (df->regs + df->reg_size, 0,
	  (size - df->reg_size) * sizeof (struct reg_info));

  df->reg_size = size;
}
/* Allocate bitmaps for each basic block.  */
static void
df_bitmaps_alloc (struct df *df, int flags)
{
  int dflags = 0;
  basic_block bb;

  /* Free the bitmaps if they need resizing.  */
  if ((flags & DF_LR) && df->n_regs < (unsigned int) max_reg_num ())
    dflags |= DF_LR | DF_RU;
  if ((flags & DF_RU) && df->n_uses < df->use_id)
    dflags |= DF_RU;
  if ((flags & DF_RD) && df->n_defs < df->def_id)
    dflags |= DF_RD;

  if (dflags)
    df_bitmaps_free (df, dflags);

  df->n_defs = df->def_id;
  df->n_uses = df->use_id;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (flags & DF_RD && ! bb_info->rd_in)
	{
	  /* Allocate bitmaps for reaching definitions.  */
	  bb_info->rd_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_kill);
	  bb_info->rd_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->rd_gen);
	  bb_info->rd_in = BITMAP_XMALLOC ();
	  bb_info->rd_out = BITMAP_XMALLOC ();
	  bb_info->rd_valid = 0;
	}

      if (flags & DF_RU && ! bb_info->ru_in)
	{
	  /* Allocate bitmaps for upward exposed uses.  */
	  bb_info->ru_kill = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_kill);
	  /* Note the lack of symmetry.  */
	  bb_info->ru_gen = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->ru_gen);
	  bb_info->ru_in = BITMAP_XMALLOC ();
	  bb_info->ru_out = BITMAP_XMALLOC ();
	  bb_info->ru_valid = 0;
	}

      if (flags & DF_LR && ! bb_info->lr_in)
	{
	  /* Allocate bitmaps for live variables.  */
	  bb_info->lr_def = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_def);
	  bb_info->lr_use = BITMAP_XMALLOC ();
	  bitmap_zero (bb_info->lr_use);
	  bb_info->lr_in = BITMAP_XMALLOC ();
	  bb_info->lr_out = BITMAP_XMALLOC ();
	  bb_info->lr_valid = 0;
	}
    }
}
/* Free bitmaps for each basic block.  */
static void
df_bitmaps_free (struct df *df, int flags)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      struct bb_info *bb_info = DF_BB_INFO (df, bb);

      if (!bb_info)
	continue;

      if ((flags & DF_RD) && bb_info->rd_in)
	{
	  /* Free bitmaps for reaching definitions.  */
	  BITMAP_XFREE (bb_info->rd_kill);
	  bb_info->rd_kill = NULL;
	  BITMAP_XFREE (bb_info->rd_gen);
	  bb_info->rd_gen = NULL;
	  BITMAP_XFREE (bb_info->rd_in);
	  bb_info->rd_in = NULL;
	  BITMAP_XFREE (bb_info->rd_out);
	  bb_info->rd_out = NULL;
	}

      if ((flags & DF_RU) && bb_info->ru_in)
	{
	  /* Free bitmaps for upward exposed uses.  */
	  BITMAP_XFREE (bb_info->ru_kill);
	  bb_info->ru_kill = NULL;
	  BITMAP_XFREE (bb_info->ru_gen);
	  bb_info->ru_gen = NULL;
	  BITMAP_XFREE (bb_info->ru_in);
	  bb_info->ru_in = NULL;
	  BITMAP_XFREE (bb_info->ru_out);
	  bb_info->ru_out = NULL;
	}

      if ((flags & DF_LR) && bb_info->lr_in)
	{
	  /* Free bitmaps for live variables.  */
	  BITMAP_XFREE (bb_info->lr_def);
	  bb_info->lr_def = NULL;
	  BITMAP_XFREE (bb_info->lr_use);
	  bb_info->lr_use = NULL;
	  BITMAP_XFREE (bb_info->lr_in);
	  bb_info->lr_in = NULL;
	  BITMAP_XFREE (bb_info->lr_out);
	  bb_info->lr_out = NULL;
	}
    }

  df->flags &= ~(flags & (DF_RD | DF_RU | DF_LR));
}
/* Allocate and initialize dataflow memory.  */
static void
df_alloc (struct df *df, int n_regs)
{
  int n_insns;
  basic_block bb;

  df_link_pool = create_alloc_pool ("df_link pool", sizeof (struct df_link),
				    100);
  df_ref_pool = create_alloc_pool ("df_ref pool", sizeof (struct ref), 100);

  /* Perhaps we should use LUIDs to save memory for the insn_refs
     table.  This is only a small saving; a few pointers.  */
  n_insns = get_max_uid () + 1;

  /* Approximate number of defs by number of insns.  */
  df->def_size = n_insns;
  df->defs = xmalloc (df->def_size * sizeof (*df->defs));

  /* Approximate number of uses by twice number of insns.  */
  df->use_size = n_insns * 2;
  df->uses = xmalloc (df->use_size * sizeof (*df->uses));

  df->n_regs = n_regs;
  df->n_bbs = last_basic_block;

  /* Allocate temporary working array used during local dataflow analysis.  */
  df->reg_def_last = xmalloc (df->n_regs * sizeof (struct ref *));

  df_insn_table_realloc (df, n_insns);

  df_reg_table_realloc (df, df->n_regs);

  df->bbs_modified = BITMAP_XMALLOC ();
  bitmap_zero (df->bbs_modified);

  df->bbs = xcalloc (last_basic_block, sizeof (struct bb_info));

  df->all_blocks = BITMAP_XMALLOC ();
  FOR_EACH_BB (bb)
    bitmap_set_bit (df->all_blocks, bb->index);
}
/* Free all the dataflow info.  */
static void
df_free (struct df *df)
{
  df_bitmaps_free (df, DF_ALL);

  if (df->bbs_modified)
    BITMAP_XFREE (df->bbs_modified);
  df->bbs_modified = 0;

  if (df->insns_modified)
    BITMAP_XFREE (df->insns_modified);
  df->insns_modified = 0;

  BITMAP_XFREE (df->all_blocks);

  free_alloc_pool (df_ref_pool);
  free_alloc_pool (df_link_pool);
}
/* Local miscellaneous routines.  */

/* Return a USE for register REGNO.  */
static rtx df_reg_use_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_USE (GET_MODE (reg), reg);
  return use;
}


/* Return a CLOBBER for register REGNO.  */
static rtx df_reg_clobber_gen (unsigned int regno)
{
  rtx reg;
  rtx use;

  reg = regno_reg_rtx[regno];

  use = gen_rtx_CLOBBER (GET_MODE (reg), reg);
  return use;
}
/* Local chain manipulation routines.  */

/* Create a link in a def-use or use-def chain.  */
static inline struct df_link *
df_link_create (struct ref *ref, struct df_link *next)
{
  struct df_link *link;

  link = pool_alloc (df_link_pool);
  link->next = next;
  link->ref = ref;
  return link;
}


/* Unlink REF from the chain pointed to by PHEAD.  */
static struct df_link *
df_ref_unlink (struct df_link **phead, struct ref *ref)
{
  struct df_link *link = *phead;

  if (link)
    {
      if (! link->next)
	{
	  /* Only a single ref.  It must be the one we want.
	     If not, the def-use and use-def chains are likely to
	     be corrupted.  */
	  if (link->ref != ref)
	    abort ();
	  /* Now have an empty chain.  */
	  *phead = NULL;
	}
      else
	{
	  /* Multiple refs.  One of them must be us.  */
	  if (link->ref == ref)
	    *phead = link->next;
	  else
	    {
	      /* Follow chain.  */
	      for (; link->next; link = link->next)
		if (link->next->ref == ref)
		  {
		    /* Unlink from list.  */
		    link->next = link->next->next;
		    break;
		  }
	    }
	}
    }
  return link;
}
/* Unlink REF from all def-use/use-def chains, etc.  */
df_ref_remove (struct df *df, struct ref *ref)
{
  if (DF_REF_REG_DEF_P (ref))
    {
      df_def_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].defs, ref);
    }
  else
    {
      df_use_unlink (df, ref);
      df_ref_unlink (&df->insns[DF_REF_INSN_UID (ref)].uses, ref);
    }
}
/* Unlink DEF from use-def and reg-def chains.  */
static void
df_def_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;
  unsigned int dregno = DF_REF_REGNO (def);

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;

      /* Unlink this def from the use-def chain.  */
      df_ref_unlink (&DF_REF_CHAIN (use), def);
    }
  DF_REF_CHAIN (def) = 0;

  /* Unlink def from reg-def chain.  */
  df_ref_unlink (&df->regs[dregno].defs, def);

  df->defs[DF_REF_ID (def)] = 0;
}
/* Unlink USE from def-use and reg-use chains.  */
static void
df_use_unlink (struct df *df ATTRIBUTE_UNUSED, struct ref *use)
{
  struct df_link *ud_link;
  unsigned int uregno = DF_REF_REGNO (use);

  /* Follow use-def chain to find all the defs of this use.  */
  for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
    {
      struct ref *def = ud_link->ref;

      /* Unlink this use from the def-use chain.  */
      df_ref_unlink (&DF_REF_CHAIN (def), use);
    }
  DF_REF_CHAIN (use) = 0;

  /* Unlink use from reg-use chain.  */
  df_ref_unlink (&df->regs[uregno].uses, use);

  df->uses[DF_REF_ID (use)] = 0;
}
/* Local routines for recording refs.  */


/* Create a new ref of type DF_REF_TYPE for register REG at address
   LOC within INSN of BB.  */
static struct ref *
df_ref_create (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  struct ref *this_ref;

  this_ref = pool_alloc (df_ref_pool);
  DF_REF_REG (this_ref) = reg;
  DF_REF_LOC (this_ref) = loc;
  DF_REF_INSN (this_ref) = insn;
  DF_REF_CHAIN (this_ref) = 0;
  DF_REF_TYPE (this_ref) = ref_type;
  DF_REF_FLAGS (this_ref) = ref_flags;

  if (ref_type == DF_REF_REG_DEF)
    {
      if (df->def_id >= df->def_size)
	{
	  /* Make table 25 percent larger.  */
	  df->def_size += (df->def_size / 4);
	  df->defs = xrealloc (df->defs,
			       df->def_size * sizeof (*df->defs));
	}
      DF_REF_ID (this_ref) = df->def_id;
      df->defs[df->def_id++] = this_ref;
    }
  else
    {
      if (df->use_id >= df->use_size)
	{
	  /* Make table 25 percent larger.  */
	  df->use_size += (df->use_size / 4);
	  df->uses = xrealloc (df->uses,
			       df->use_size * sizeof (*df->uses));
	}
      DF_REF_ID (this_ref) = df->use_id;
      df->uses[df->use_id++] = this_ref;
    }
  return this_ref;
}
/* Create a new reference of type DF_REF_TYPE for a single register REG,
   used inside the LOC rtx of INSN.  */
static void
df_ref_record_1 (struct df *df, rtx reg, rtx *loc, rtx insn,
		 enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  df_ref_create (df, reg, loc, insn, ref_type, ref_flags);
}
/* Create new references of type DF_REF_TYPE for each part of register REG
   at address LOC within INSN of BB.  */
static void
df_ref_record (struct df *df, rtx reg, rtx *loc, rtx insn,
	       enum df_ref_type ref_type, enum df_ref_flags ref_flags)
{
  unsigned int regno;

  if (GET_CODE (reg) != REG && GET_CODE (reg) != SUBREG)
    abort ();

  /* For the reg allocator we are interested in some SUBREG rtx's, but not
     all.  Notably only those representing a word extraction from a multi-word
     reg.  As written in the docu those should have the form
     (subreg:SI (reg:M A) N), with size(SImode) > size(Mmode).
     XXX Is that true?  We could also use the global word_mode variable.  */
  if (GET_CODE (reg) == SUBREG
      && (GET_MODE_SIZE (GET_MODE (reg)) < GET_MODE_SIZE (word_mode)
	  || GET_MODE_SIZE (GET_MODE (reg))
	     >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (reg)))))
    {
      loc = &SUBREG_REG (reg);
      reg = *loc;
      ref_flags |= DF_REF_STRIPPED;
    }

  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
  if (regno < FIRST_PSEUDO_REGISTER)
    {
      int i;
      int endregno;

      if (! (df->flags & DF_HARD_REGS))
	return;

      /* GET_MODE (reg) is correct here.  We do not want to go into a SUBREG
	 for the mode, because we only want to add references to regs, which
	 are really referenced.  E.g., a (subreg:SI (reg:DI 0) 0) does _not_
	 reference the whole reg 0 in DI mode (which would also include
	 reg 1, at least, if 0 and 1 are SImode registers).  */
      endregno = hard_regno_nregs[regno][GET_MODE (reg)];
      if (GET_CODE (reg) == SUBREG)
	regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
				      SUBREG_BYTE (reg), GET_MODE (reg));
      endregno += regno;

      for (i = regno; i < endregno; i++)
	df_ref_record_1 (df, regno_reg_rtx[i],
			 loc, insn, ref_type, ref_flags);
    }
  else
    df_ref_record_1 (df, reg, loc, insn, ref_type, ref_flags);
}
/* Return nonzero if writes to paradoxical SUBREGs, or SUBREGs which
   are too narrow, are read-modify-write.  */
bool
read_modify_subreg_p (rtx x)
{
  unsigned int isize, osize;
  if (GET_CODE (x) != SUBREG)
    return false;
  isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
  osize = GET_MODE_SIZE (GET_MODE (x));
  /* Paradoxical subreg writes don't leave a trace of the old content.  */
  return (isize > osize && isize > UNITS_PER_WORD);
}
/* Process all the registers defined in the rtx, X.  */
static void
df_def_record_1 (struct df *df, rtx x, basic_block bb, rtx insn)
{
  rtx *loc;
  rtx dst;
  enum df_ref_flags flags = 0;

  /* We may recursively call ourselves on EXPR_LIST when dealing with PARALLEL
     construction.  */
  if (GET_CODE (x) == EXPR_LIST || GET_CODE (x) == CLOBBER)
    loc = &XEXP (x, 0);
  else
    loc = &SET_DEST (x);
  dst = *loc;

  /* Some targets place small structures in registers for
     return values of functions.  */
  if (GET_CODE (dst) == PARALLEL && GET_MODE (dst) == BLKmode)
    {
      int i;

      for (i = XVECLEN (dst, 0) - 1; i >= 0; i--)
	{
	  rtx temp = XVECEXP (dst, 0, i);
	  if (GET_CODE (temp) == EXPR_LIST || GET_CODE (temp) == CLOBBER
	      || GET_CODE (temp) == SET)
	    df_def_record_1 (df, temp, bb, insn);
	}
      return;
    }

  /* Maybe, we should flag the use of STRICT_LOW_PART somehow.  It might
     be handy for the reg allocator.  */
  while (GET_CODE (dst) == STRICT_LOW_PART
	 || GET_CODE (dst) == ZERO_EXTRACT
	 || GET_CODE (dst) == SIGN_EXTRACT
	 || ((df->flags & DF_FOR_REGALLOC) == 0
	     && read_modify_subreg_p (dst)))
    {
      /* Strict low part always contains SUBREG, but we do not want to make
	 it appear outside, as whole register is always considered.  */
      if (GET_CODE (dst) == STRICT_LOW_PART)
	{
	  loc = &XEXP (dst, 0);
	  dst = *loc;
	}
      loc = &XEXP (dst, 0);
      dst = *loc;
      flags |= DF_REF_READ_WRITE;
    }

  if (GET_CODE (dst) == REG
      || (GET_CODE (dst) == SUBREG && GET_CODE (SUBREG_REG (dst)) == REG))
    df_ref_record (df, dst, loc, insn, DF_REF_REG_DEF, flags);
}
/* Process all the registers defined in the pattern rtx, X.  */
static void
df_defs_record (struct df *df, rtx x, basic_block bb, rtx insn)
{
  RTX_CODE code = GET_CODE (x);

  if (code == SET || code == CLOBBER)
    {
      /* Mark the single def within the pattern.  */
      df_def_record_1 (df, x, bb, insn);
    }
  else if (code == PARALLEL)
    {
      int i;

      /* Mark the multiple defs within the pattern.  */
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	{
	  code = GET_CODE (XVECEXP (x, 0, i));
	  if (code == SET || code == CLOBBER)
	    df_def_record_1 (df, XVECEXP (x, 0, i), bb, insn);
	}
    }
}
/* Process all the registers used in the rtx at address LOC.  */
static void
df_uses_record (struct df *df, rtx *loc, enum df_ref_type ref_type,
		basic_block bb, rtx insn, enum df_ref_flags flags)
{
  RTX_CODE code;
  rtx x;

 retry:
  x = *loc;
  if (!x)
    return;
  code = GET_CODE (x);
  switch (code)
    {
    case CLOBBER:
      /* If we are clobbering a MEM, mark any registers inside the address
	 as being used.  */
      if (GET_CODE (XEXP (x, 0)) == MEM)
	df_uses_record (df, &XEXP (XEXP (x, 0), 0),
			DF_REF_REG_MEM_STORE, bb, insn, flags);
      /* If we're clobbering a REG then we have a def so ignore.  */
      return;

    case MEM:
      df_uses_record (df, &XEXP (x, 0), DF_REF_REG_MEM_LOAD, bb, insn, 0);
      return;

    case SUBREG:
      /* While we're here, optimize this case.  */
      /* In case the SUBREG is not of a REG, do not optimize.  */
      if (GET_CODE (SUBREG_REG (x)) != REG)
	{
	  loc = &SUBREG_REG (x);
	  df_uses_record (df, loc, ref_type, bb, insn, flags);
	  return;
	}
      /* ... Fall through ...  */

    case REG:
      df_ref_record (df, x, loc, insn, ref_type, flags);
      return;

    case SET:
      {
	rtx dst = SET_DEST (x);

	df_uses_record (df, &SET_SRC (x), DF_REF_REG_USE, bb, insn, 0);

	switch (GET_CODE (dst))
	  {
	  case SUBREG:
	    if ((df->flags & DF_FOR_REGALLOC) == 0
		&& read_modify_subreg_p (dst))
	      {
		df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
				insn, DF_REF_READ_WRITE);
		break;
	      }
	    /* Fall through.  */
	  case MEM:
	    df_uses_record (df, &XEXP (dst, 0),
			    DF_REF_REG_MEM_STORE,
			    bb, insn, 0);
	    break;
	  case STRICT_LOW_PART:
	    /* A strict_low_part uses the whole REG and not just the SUBREG.  */
	    dst = XEXP (dst, 0);
	    if (GET_CODE (dst) != SUBREG)
	      abort ();
	    df_uses_record (df, &SUBREG_REG (dst), DF_REF_REG_USE, bb,
			    insn, DF_REF_READ_WRITE);
	    break;
	  case ZERO_EXTRACT:
	  case SIGN_EXTRACT:
	    df_uses_record (df, &XEXP (dst, 0), DF_REF_REG_USE, bb, insn,
			    DF_REF_READ_WRITE);
	    df_uses_record (df, &XEXP (dst, 1), DF_REF_REG_USE, bb, insn, 0);
	    df_uses_record (df, &XEXP (dst, 2), DF_REF_REG_USE, bb, insn, 0);
	    dst = XEXP (dst, 0);
	    break;
	  default:
	    break;
	  }
	return;
      }

    case ASM_OPERANDS:
    case UNSPEC_VOLATILE:
    case TRAP_IF:
    case ASM_INPUT:
      /* Traditional and volatile asm instructions must be considered to use
	 and clobber all hard registers, all pseudo-registers and all of
	 memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

	 Consider for instance a volatile asm that changes the fpu rounding
	 mode.  An insn should not be moved across this even if it only uses
	 pseudo-regs because it might give an incorrectly rounded result.

	 For now, just mark any regs we can find in ASM_OPERANDS as
	 used.  */

      /* For all ASM_OPERANDS, we must traverse the vector of input operands.
	 We can not just fall through here since then we would be confused
	 by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
	 traditional asms unlike their normal usage.  */
      if (code == ASM_OPERANDS)
	{
	  int j;

	  for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
	    df_uses_record (df, &ASM_OPERANDS_INPUT (x, j),
			    DF_REF_REG_USE, bb, insn, 0);
	}
      break;

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
      /* Catch the def of the register being modified.  */
      df_ref_record (df, XEXP (x, 0), &XEXP (x, 0), insn,
		     DF_REF_REG_DEF, DF_REF_READ_WRITE);
      /* ... Fall through to handle uses ...  */

    default:
      break;
    }

  /* Recursively scan the operands of this expression.  */
  {
    const char *fmt = GET_RTX_FORMAT (code);
    int i;

    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      {
	if (fmt[i] == 'e')
	  {
	    /* Tail recursive case: save a function call level.  */
	    if (i == 0)
	      {
		loc = &XEXP (x, i);
		goto retry;
	      }
	    df_uses_record (df, &XEXP (x, i), ref_type, bb, insn, flags);
	  }
	else if (fmt[i] == 'E')
	  {
	    int j;
	    for (j = 0; j < XVECLEN (x, i); j++)
	      df_uses_record (df, &XVECEXP (x, i, j), ref_type,
			      bb, insn, flags);
	  }
      }
  }
}
/* Record all the refs within INSN of basic block BB.  */
static void
df_insn_refs_record (struct df *df, basic_block bb, rtx insn)
{
  int i;

  if (INSN_P (insn))
    {
      rtx note;

      /* Record register defs.  */
      df_defs_record (df, PATTERN (insn), bb, insn);

      if (df->flags & DF_EQUIV_NOTES)
	for (note = REG_NOTES (insn); note;
	     note = XEXP (note, 1))
	  {
	    switch (REG_NOTE_KIND (note))
	      {
	      case REG_EQUIV:
	      case REG_EQUAL:
		df_uses_record (df, &XEXP (note, 0), DF_REF_REG_USE,
				bb, insn, 0);
	      default:
		break;
	      }
	  }

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;
	  rtx x;

	  /* Record the registers used to pass arguments.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn); note;
	       note = XEXP (note, 1))
	    {
	      if (GET_CODE (XEXP (note, 0)) == USE)
		df_uses_record (df, &XEXP (XEXP (note, 0), 0), DF_REF_REG_USE,
				bb, insn, 0);
	    }

	  /* The stack ptr is used (honorarily) by a CALL insn.  */
	  x = df_reg_use_gen (STACK_POINTER_REGNUM);
	  df_uses_record (df, &XEXP (x, 0), DF_REF_REG_USE, bb, insn, 0);

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Calls may also reference any of the global registers,
		 so they are recorded as used.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (global_regs[i])
		  {
		    x = df_reg_use_gen (i);
		    df_uses_record (df, &SET_DEST (x),
				    DF_REF_REG_USE, bb, insn, 0);
		  }
	    }
	}

      /* Record the register uses.  */
      df_uses_record (df, &PATTERN (insn),
		      DF_REF_REG_USE, bb, insn, 0);

      if (GET_CODE (insn) == CALL_INSN)
	{
	  rtx note;

	  if (df->flags & DF_HARD_REGS)
	    {
	      /* Kill all registers invalidated by a call.  */
	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
		if (TEST_HARD_REG_BIT (regs_invalidated_by_call, i))
		  {
		    rtx reg_clob = df_reg_clobber_gen (i);
		    df_defs_record (df, reg_clob, bb, insn);
		  }
	    }

	  /* There may be extra registers to be clobbered.  */
	  for (note = CALL_INSN_FUNCTION_USAGE (insn);
	       note;
	       note = XEXP (note, 1))
	    if (GET_CODE (XEXP (note, 0)) == CLOBBER)
	      df_defs_record (df, XEXP (note, 0), bb, insn);
	}
    }
}
/* Record all the refs within the basic block BB.  */
static void
df_bb_refs_record (struct df *df, basic_block bb)
{
  rtx insn;

  /* Scan the block an insn at a time from beginning to end.  */
  for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
    {
      if (INSN_P (insn))
	/* Record defs within INSN.  */
	df_insn_refs_record (df, bb, insn);

      if (insn == BB_END (bb))
	break;
    }
}


/* Record all the refs in the basic blocks specified by BLOCKS.  */
static void
df_refs_record (struct df *df, bitmap blocks)
{
  basic_block bb;

  FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
    {
      df_bb_refs_record (df, bb);
    });
}
1240 /* Dataflow analysis routines. */
1243 /* Create reg-def chains for basic block BB. These are a list of
1244 definitions for each register. */
1246 df_bb_reg_def_chain_create (struct df *df, basic_block bb)
1250 /* Perhaps the defs should be sorted using a depth first search
1251 of the CFG (or possibly a breadth first search). We currently
1252 scan the basic blocks in reverse order so that the first defs
1253 appear at the start of the chain. */
1255 for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
1256 insn = PREV_INSN (insn))
1258 struct df_link *link;
1259 unsigned int uid = INSN_UID (insn);
1261 if (! INSN_P (insn))
1264 for (link = df->insns[uid].defs; link; link = link->next)
1266 struct ref *def = link->ref;
1267 unsigned int dregno = DF_REF_REGNO (def);
1269 /* Do not add ref's to the chain twice, i.e., only add new
1270 refs. XXX the same could be done by testing if the
1271 current insn is a modified (or a new) one. This would be faster. */
1273 if (DF_REF_ID (def) < df->def_id_save)
1276 df->regs[dregno].defs
1277 = df_link_create (def, df->regs[dregno].defs);
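The reg-def chain discipline above relies on df_link_create prepending to the head, so walking the insns of each block in reverse leaves the earliest def at the front of the chain. A minimal standalone sketch of that invariant (illustrative names, not the real df API):

```c
#include <stdlib.h>

/* Cons-style link, analogous to struct df_link.  */
struct link { int ref; struct link *next; };

/* Prepend REF to the head of NEXT's chain, as df_link_create does.  */
static struct link *
link_create (int ref, struct link *next)
{
  struct link *l = malloc (sizeof *l);
  l->ref = ref;
  l->next = next;
  return l;
}

/* Given def ids in program order, cons them on in reverse so the first
   def ends up at the head -- the property df_bb_reg_def_chain_create
   obtains by scanning insns backward.  */
static struct link *
reg_def_chain (const int *defs, int n)
{
  struct link *head = NULL;
  for (int i = n - 1; i >= 0; i--)
    head = link_create (defs[i], head);
  return head;
}
```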
1283 /* Create reg-def chains for each basic block within BLOCKS. These
1284 are a list of definitions for each register. */
1286 df_reg_def_chain_create (struct df *df, bitmap blocks)
1290 FOR_EACH_BB_IN_BITMAP/*_REV*/ (blocks, 0, bb,
1292 df_bb_reg_def_chain_create (df, bb);
1297 /* Create reg-use chains for basic block BB. These are a list of uses
1298 for each register. */
1300 df_bb_reg_use_chain_create (struct df *df, basic_block bb)
1304 /* Scan in forward order so that the last uses appear at the start
1305 of the chain. */
1307 for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
1308 insn = NEXT_INSN (insn))
1310 struct df_link *link;
1311 unsigned int uid = INSN_UID (insn);
1313 if (! INSN_P (insn))
1316 for (link = df->insns[uid].uses; link; link = link->next)
1318 struct ref *use = link->ref;
1319 unsigned int uregno = DF_REF_REGNO (use);
1321 /* Do not add ref's to the chain twice, i.e., only add new
1322 refs. XXX the same could be done by testing if the
1323 current insn is a modified (or a new) one. This would be faster. */
1325 if (DF_REF_ID (use) < df->use_id_save)
1328 df->regs[uregno].uses
1329 = df_link_create (use, df->regs[uregno].uses);
1335 /* Create reg-use chains for each basic block within BLOCKS. These
1336 are a list of uses for each register. */
1338 df_reg_use_chain_create (struct df *df, bitmap blocks)
1342 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1344 df_bb_reg_use_chain_create (df, bb);
1349 /* Create def-use chains from reaching use bitmaps for basic block BB. */
1351 df_bb_du_chain_create (struct df *df, basic_block bb, bitmap ru)
1353 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1356 bitmap_copy (ru, bb_info->ru_out);
1358 /* For each def in BB create a linked list (chain) of uses
1359 reached from the def. */
1360 for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
1361 insn = PREV_INSN (insn))
1363 struct df_link *def_link;
1364 struct df_link *use_link;
1365 unsigned int uid = INSN_UID (insn);
1367 if (! INSN_P (insn))
1370 /* For each def in insn... */
1371 for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
1373 struct ref *def = def_link->ref;
1374 unsigned int dregno = DF_REF_REGNO (def);
1376 DF_REF_CHAIN (def) = 0;
1378 /* While the reg-use chains are not essential, it
1379 is _much_ faster to search these short lists rather
1380 than all the reaching uses, especially for large functions. */
1381 for (use_link = df->regs[dregno].uses; use_link;
1382 use_link = use_link->next)
1384 struct ref *use = use_link->ref;
1386 if (bitmap_bit_p (ru, DF_REF_ID (use)))
1389 = df_link_create (use, DF_REF_CHAIN (def));
1391 bitmap_clear_bit (ru, DF_REF_ID (use));
1396 /* For each use in insn... */
1397 for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
1399 struct ref *use = use_link->ref;
1400 bitmap_set_bit (ru, DF_REF_ID (use));
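The backward walk above can be reproduced on a toy block: one optional def and one optional use per insn, with a per-register pending-use set standing in for the RU bitmap. This is a hedged sketch with hypothetical names, not the real chain-building code:

```c
#include <string.h>

#define TOY_NREGS 4

/* One pass of the df_bb_du_chain_create idea over a toy block: walking
   backward, a def of reg R captures the pending (upward exposed) uses
   of R as its def-use set and clears them; a use of R then adds its own
   insn bit to the pending set.  DEF_REG[i]/USE_REG[i] are -1 if insn i
   has no def/use.  DU[i] receives a bitset of the insn indices whose
   uses insn i's def reaches.  */
static void
toy_du_chains (const int *def_reg, const int *use_reg, int n,
               unsigned int *du)
{
  unsigned int pending[TOY_NREGS];
  memset (pending, 0, sizeof pending);

  for (int i = n - 1; i >= 0; i--)
    {
      /* Defs first, mirroring the defs loop running before the uses
         loop in the real routine.  */
      du[i] = 0;
      if (def_reg[i] >= 0)
        {
          du[i] = pending[def_reg[i]];
          pending[def_reg[i]] = 0;       /* like bitmap_clear_bit on RU */
        }
      if (use_reg[i] >= 0)
        pending[use_reg[i]] |= 1u << i;  /* like bitmap_set_bit on RU */
    }
}
```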
1406 /* Create def-use chains from reaching use bitmaps for basic blocks
1407 within BLOCKS. */
1409 df_du_chain_create (struct df *df, bitmap blocks)
1414 ru = BITMAP_XMALLOC ();
1416 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1418 df_bb_du_chain_create (df, bb, ru);
1425 /* Create use-def chains from reaching def bitmaps for basic block BB. */
1427 df_bb_ud_chain_create (struct df *df, basic_block bb)
1429 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1430 struct ref **reg_def_last = df->reg_def_last;
1433 memset (reg_def_last, 0, df->n_regs * sizeof (struct ref *));
1435 /* For each use in BB create a linked list (chain) of defs
1436 that reach the use. */
1437 for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
1438 insn = NEXT_INSN (insn))
1440 unsigned int uid = INSN_UID (insn);
1441 struct df_link *use_link;
1442 struct df_link *def_link;
1444 if (! INSN_P (insn))
1447 /* For each use in insn... */
1448 for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
1450 struct ref *use = use_link->ref;
1451 unsigned int regno = DF_REF_REGNO (use);
1453 DF_REF_CHAIN (use) = 0;
1455 /* Has regno been defined in this BB yet? If so, use
1456 the last def as the single entry for the use-def
1457 chain for this use. Otherwise, we need to add all
1458 the defs using this regno that reach the start of this BB. */
1460 if (reg_def_last[regno])
1463 = df_link_create (reg_def_last[regno], 0);
1467 /* While the reg-def chains are not essential, it is
1468 _much_ faster to search these short lists rather than
1469 all the reaching defs, especially for large functions. */
1471 for (def_link = df->regs[regno].defs; def_link;
1472 def_link = def_link->next)
1474 struct ref *def = def_link->ref;
1476 if (bitmap_bit_p (bb_info->rd_in, DF_REF_ID (def)))
1479 = df_link_create (def, DF_REF_CHAIN (use));
1486 /* For each def in insn... record the last def of each reg. */
1487 for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
1489 struct ref *def = def_link->ref;
1490 int dregno = DF_REF_REGNO (def);
1492 reg_def_last[dregno] = def;
1498 /* Create use-def chains from reaching def bitmaps for basic blocks
1499 within BLOCKS. */
1501 df_ud_chain_create (struct df *df, bitmap blocks)
1505 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1507 df_bb_ud_chain_create (df, bb);
1514 df_rd_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
1515 bitmap out, bitmap gen, bitmap kill,
1516 void *data ATTRIBUTE_UNUSED)
1518 *changed = bitmap_union_of_diff (out, gen, in, kill);
1523 df_ru_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
1524 bitmap out, bitmap gen, bitmap kill,
1525 void *data ATTRIBUTE_UNUSED)
1527 *changed = bitmap_union_of_diff (in, gen, out, kill);
1532 df_lr_transfer_function (int bb ATTRIBUTE_UNUSED, int *changed, bitmap in,
1533 bitmap out, bitmap use, bitmap def,
1534 void *data ATTRIBUTE_UNUSED)
1536 *changed = bitmap_union_of_diff (in, use, out, def);
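Each transfer function above is the same union-of-difference update applied to different sets. A self-contained version over plain 64-bit word bitsets, under the assumed semantics that the result reports whether the destination changed, as bitmap_union_of_diff does:

```c
#include <stdint.h>

/* dst' = a | (b & ~c); return nonzero if DST changed.  With
   (dst, a, b, c) = (out, gen, in, kill) this is the forward
   reaching-defs update; with (in, use, out, def) it is the backward
   liveness update used by df_lr_transfer_function.  */
static int
union_of_diff (uint64_t *dst, uint64_t a, uint64_t b, uint64_t c)
{
  uint64_t v = a | (b & ~c);
  int changed = (v != *dst);
  *dst = v;
  return changed;
}
```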
1540 /* Compute local reaching def info for basic block BB. */
1542 df_bb_rd_local_compute (struct df *df, basic_block bb)
1544 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1547 for (insn = BB_HEAD (bb); insn && insn != NEXT_INSN (BB_END (bb));
1548 insn = NEXT_INSN (insn))
1550 unsigned int uid = INSN_UID (insn);
1551 struct df_link *def_link;
1553 if (! INSN_P (insn))
1556 for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
1558 struct ref *def = def_link->ref;
1559 unsigned int regno = DF_REF_REGNO (def);
1560 struct df_link *def2_link;
1562 for (def2_link = df->regs[regno].defs; def2_link;
1563 def2_link = def2_link->next)
1565 struct ref *def2 = def2_link->ref;
1567 /* Add all defs of this reg to the set of kills. This
1568 is greedy since many of these defs will not actually
1569 be killed by this BB but it keeps things a lot simpler. */
1571 bitmap_set_bit (bb_info->rd_kill, DF_REF_ID (def2));
1573 /* Zap from the set of gens for this BB. */
1574 bitmap_clear_bit (bb_info->rd_gen, DF_REF_ID (def2));
1577 bitmap_set_bit (bb_info->rd_gen, DF_REF_ID (def));
1581 bb_info->rd_valid = 1;
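The greedy kill rule above (kill every def of the regno anywhere in the function; only this block's last def of it survives in gen) can be sketched with word bitsets. DEFS_OF_REG stands in for the whole function's reg-def chains and is an assumption of this toy setup:

```c
#include <stdint.h>

/* Block-local GEN/KILL for reaching defs, in the greedy style of
   df_bb_rd_local_compute.  DEF_ID[i]/DEF_REGNO[i] describe the block's
   defs in insn order; DEFS_OF_REG[r] is a bitset of every def id in
   the whole function that writes register r.  */
static void
rd_local (const int *def_id, const int *def_regno, int n,
          const uint64_t *defs_of_reg, uint64_t *gen, uint64_t *kill)
{
  *gen = 0;
  *kill = 0;
  for (int i = 0; i < n; i++)
    {
      /* Greedily kill every def of this regno and drop any of them
         generated earlier in this block...  */
      *kill |= defs_of_reg[def_regno[i]];
      *gen &= ~defs_of_reg[def_regno[i]];
      /* ...then this def itself is generated.  */
      *gen |= (uint64_t) 1 << def_id[i];
    }
}
```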
1585 /* Compute local reaching def info for each basic block within BLOCKS. */
1587 df_rd_local_compute (struct df *df, bitmap blocks)
1591 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1593 df_bb_rd_local_compute (df, bb);
1598 /* Compute local reaching use (upward exposed use) info for basic
1599 block BB. */
1601 df_bb_ru_local_compute (struct df *df, basic_block bb)
1603 /* This is much more tricky than computing reaching defs. With
1604 reaching defs, defs get killed by other defs. With upwards
1605 exposed uses, these get killed by defs with the same regno. */
1607 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1611 for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
1612 insn = PREV_INSN (insn))
1614 unsigned int uid = INSN_UID (insn);
1615 struct df_link *def_link;
1616 struct df_link *use_link;
1618 if (! INSN_P (insn))
1621 for (def_link = df->insns[uid].defs; def_link; def_link = def_link->next)
1623 struct ref *def = def_link->ref;
1624 unsigned int dregno = DF_REF_REGNO (def);
1626 for (use_link = df->regs[dregno].uses; use_link;
1627 use_link = use_link->next)
1629 struct ref *use = use_link->ref;
1631 /* Add all uses of this reg to the set of kills. This
1632 is greedy since many of these uses will not actually
1633 be killed by this BB but it keeps things a lot simpler. */
1635 bitmap_set_bit (bb_info->ru_kill, DF_REF_ID (use));
1637 /* Zap from the set of gens for this BB. */
1638 bitmap_clear_bit (bb_info->ru_gen, DF_REF_ID (use));
1642 for (use_link = df->insns[uid].uses; use_link; use_link = use_link->next)
1644 struct ref *use = use_link->ref;
1645 /* Add use to set of gens in this BB. */
1646 bitmap_set_bit (bb_info->ru_gen, DF_REF_ID (use));
1649 bb_info->ru_valid = 1;
1653 /* Compute local reaching use (upward exposed use) info for each basic
1654 block within BLOCKS. */
1656 df_ru_local_compute (struct df *df, bitmap blocks)
1660 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1662 df_bb_ru_local_compute (df, bb);
1667 /* Compute local live variable info for basic block BB. */
1669 df_bb_lr_local_compute (struct df *df, basic_block bb)
1671 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1674 for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
1675 insn = PREV_INSN (insn))
1677 unsigned int uid = INSN_UID (insn);
1678 struct df_link *link;
1680 if (! INSN_P (insn))
1683 for (link = df->insns[uid].defs; link; link = link->next)
1685 struct ref *def = link->ref;
1686 unsigned int dregno = DF_REF_REGNO (def);
1688 /* Add def to set of defs in this BB. */
1689 bitmap_set_bit (bb_info->lr_def, dregno);
1691 bitmap_clear_bit (bb_info->lr_use, dregno);
1694 for (link = df->insns[uid].uses; link; link = link->next)
1696 struct ref *use = link->ref;
1697 /* Add use to set of uses in this BB. */
1698 bitmap_set_bit (bb_info->lr_use, DF_REF_REGNO (use));
1701 bb_info->lr_valid = 1;
1705 /* Compute local live variable info for each basic block within BLOCKS. */
1707 df_lr_local_compute (struct df *df, bitmap blocks)
1711 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1713 df_bb_lr_local_compute (df, bb);
1718 /* Compute register info: lifetime, bb, and number of defs and uses
1719 for basic block BB. */
1721 df_bb_reg_info_compute (struct df *df, basic_block bb, bitmap live)
1723 struct reg_info *reg_info = df->regs;
1724 struct bb_info *bb_info = DF_BB_INFO (df, bb);
1727 bitmap_copy (live, bb_info->lr_out);
1729 for (insn = BB_END (bb); insn && insn != PREV_INSN (BB_HEAD (bb));
1730 insn = PREV_INSN (insn))
1732 unsigned int uid = INSN_UID (insn);
1734 struct df_link *link;
1736 if (! INSN_P (insn))
1739 for (link = df->insns[uid].defs; link; link = link->next)
1741 struct ref *def = link->ref;
1742 unsigned int dregno = DF_REF_REGNO (def);
1744 /* Kill this register. */
1745 bitmap_clear_bit (live, dregno);
1746 reg_info[dregno].n_defs++;
1749 for (link = df->insns[uid].uses; link; link = link->next)
1751 struct ref *use = link->ref;
1752 unsigned int uregno = DF_REF_REGNO (use);
1754 /* This register is now live. */
1755 bitmap_set_bit (live, uregno);
1756 reg_info[uregno].n_uses++;
1759 /* Increment lifetimes of all live registers. */
1760 EXECUTE_IF_SET_IN_BITMAP (live, 0, regno,
1762 reg_info[regno].lifetime++;
1768 /* Compute register info: lifetime, bb, and number of defs and uses. */
1770 df_reg_info_compute (struct df *df, bitmap blocks)
1775 live = BITMAP_XMALLOC ();
1777 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1779 df_bb_reg_info_compute (df, bb, live);
1782 BITMAP_XFREE (live);
1786 /* Assign LUIDs for BB. */
1788 df_bb_luids_set (struct df *df, basic_block bb)
1793 /* The LUIDs are monotonically increasing for each basic block. */
1795 for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
1797 if (INSN_P (insn))
1798 DF_INSN_LUID (df, insn) = luid++;
1799 DF_INSN_LUID (df, insn) = luid;
1801 if (insn == BB_END (bb))
1808 /* Assign LUIDs for each basic block within BLOCKS. */
1810 df_luids_set (struct df *df, bitmap blocks)
1815 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
1817 total += df_bb_luids_set (df, bb);
1823 /* Perform dataflow analysis using existing DF structure for blocks
1824 within BLOCKS. If BLOCKS is zero, use all basic blocks in the CFG. */
1826 df_analyze_1 (struct df *df, bitmap blocks, int flags, int update)
1835 if (flags & DF_UD_CHAIN)
1836 aflags |= DF_RD | DF_RD_CHAIN;
1838 if (flags & DF_DU_CHAIN)
1842 aflags |= DF_RU_CHAIN;
1844 if (flags & DF_REG_INFO)
1847 if (! blocks)
1848 blocks = df->all_blocks;
1853 df_refs_update (df);
1854 /* More fine grained incremental dataflow analysis would be
1855 nice. For now recompute the whole shebang for the modified
1856 blocks. */
1858 df_refs_unlink (df, blocks);
1860 /* All the def-use, use-def chains can be potentially
1861 modified by changes in one block. The size of the
1862 bitmaps can also change. */
1866 /* Scan the function for all register defs and uses. */
1868 df_refs_record (df, blocks);
1870 /* Link all the new defs and uses to the insns. */
1871 df_refs_process (df);
1874 /* Allocate the bitmaps now that the total numbers of defs and uses
1875 are known. If the number of defs or uses has changed, these
1876 bitmaps need to be reallocated. */
1877 df_bitmaps_alloc (df, aflags);
1879 /* Set the LUIDs for each specified basic block. */
1880 df_luids_set (df, blocks);
1882 /* Recreate reg-def and reg-use chains from scratch so that first
1883 def is at the head of the reg-def chain and the last use is at
1884 the head of the reg-use chain. This is only important for
1885 regs local to a basic block as it speeds up searching. */
1886 if (aflags & DF_RD_CHAIN)
1888 df_reg_def_chain_create (df, blocks);
1891 if (aflags & DF_RU_CHAIN)
1893 df_reg_use_chain_create (df, blocks);
1896 df->dfs_order = xmalloc (sizeof (int) * n_basic_blocks);
1897 df->rc_order = xmalloc (sizeof (int) * n_basic_blocks);
1898 df->rts_order = xmalloc (sizeof (int) * n_basic_blocks);
1899 df->inverse_dfs_map = xmalloc (sizeof (int) * last_basic_block);
1900 df->inverse_rc_map = xmalloc (sizeof (int) * last_basic_block);
1901 df->inverse_rts_map = xmalloc (sizeof (int) * last_basic_block);
1903 flow_depth_first_order_compute (df->dfs_order, df->rc_order);
1904 flow_reverse_top_sort_order_compute (df->rts_order);
1905 for (i = 0; i < n_basic_blocks; i++)
1907 df->inverse_dfs_map[df->dfs_order[i]] = i;
1908 df->inverse_rc_map[df->rc_order[i]] = i;
1909 df->inverse_rts_map[df->rts_order[i]] = i;
1913 /* Compute the sets of gens and kills for the defs of each bb. */
1914 df_rd_local_compute (df, df->flags & DF_RD ? blocks : df->all_blocks);
1916 bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
1917 bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
1918 bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
1919 bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
1922 in[bb->index] = DF_BB_INFO (df, bb)->rd_in;
1923 out[bb->index] = DF_BB_INFO (df, bb)->rd_out;
1924 gen[bb->index] = DF_BB_INFO (df, bb)->rd_gen;
1925 kill[bb->index] = DF_BB_INFO (df, bb)->rd_kill;
1927 iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
1928 DF_FORWARD, DF_UNION, df_rd_transfer_function,
1929 df->inverse_rc_map, NULL);
1937 if (aflags & DF_UD_CHAIN)
1939 /* Create use-def chains. */
1940 df_ud_chain_create (df, df->all_blocks);
1942 if (! (flags & DF_RD))
1948 /* Compute the sets of gens and kills for the upwards exposed
1949 uses in each bb. */
1950 df_ru_local_compute (df, df->flags & DF_RU ? blocks : df->all_blocks);
1952 bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
1953 bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
1954 bitmap *gen = xmalloc (sizeof (bitmap) * last_basic_block);
1955 bitmap *kill = xmalloc (sizeof (bitmap) * last_basic_block);
1958 in[bb->index] = DF_BB_INFO (df, bb)->ru_in;
1959 out[bb->index] = DF_BB_INFO (df, bb)->ru_out;
1960 gen[bb->index] = DF_BB_INFO (df, bb)->ru_gen;
1961 kill[bb->index] = DF_BB_INFO (df, bb)->ru_kill;
1963 iterative_dataflow_bitmap (in, out, gen, kill, df->all_blocks,
1964 DF_BACKWARD, DF_UNION, df_ru_transfer_function,
1965 df->inverse_rts_map, NULL);
1973 if (aflags & DF_DU_CHAIN)
1975 /* Create def-use chains. */
1976 df_du_chain_create (df, df->all_blocks);
1978 if (! (flags & DF_RU))
1982 /* Free up bitmaps that are no longer required. */
1984 df_bitmaps_free (df, dflags);
1988 /* Compute the sets of defs and uses of live variables. */
1989 df_lr_local_compute (df, df->flags & DF_LR ? blocks : df->all_blocks);
1991 bitmap *in = xmalloc (sizeof (bitmap) * last_basic_block);
1992 bitmap *out = xmalloc (sizeof (bitmap) * last_basic_block);
1993 bitmap *use = xmalloc (sizeof (bitmap) * last_basic_block);
1994 bitmap *def = xmalloc (sizeof (bitmap) * last_basic_block);
1997 in[bb->index] = DF_BB_INFO (df, bb)->lr_in;
1998 out[bb->index] = DF_BB_INFO (df, bb)->lr_out;
1999 use[bb->index] = DF_BB_INFO (df, bb)->lr_use;
2000 def[bb->index] = DF_BB_INFO (df, bb)->lr_def;
2002 iterative_dataflow_bitmap (in, out, use, def, df->all_blocks,
2003 DF_BACKWARD, DF_UNION, df_lr_transfer_function,
2004 df->inverse_rts_map, NULL);
2012 if (aflags & DF_REG_INFO)
2014 df_reg_info_compute (df, df->all_blocks);
2017 free (df->dfs_order);
2018 free (df->rc_order);
2019 free (df->rts_order);
2020 free (df->inverse_rc_map);
2021 free (df->inverse_dfs_map);
2022 free (df->inverse_rts_map);
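df_analyze_1 hands the per-block sets to iterative_dataflow_bitmap; the fixed point it computes for the liveness problem can be reproduced on a toy CFG with word bitsets. All names here are illustrative, and the solver is a simplified stand-in for the worklist-free iteration described in the file header:

```c
#include <stdint.h>

#define TOY_NB 3   /* toy CFG: B0 -> B1, B1 -> {B1, B2} */

/* Solve backward liveness to a fixed point: for each block,
   out = union of successors' in, and in = use | (out & ~def) --
   the df_lr_transfer_function update iterated until nothing changes.  */
static void
toy_live_solve (const uint64_t *use, const uint64_t *def,
                int succ[][2], const int *nsucc,
                uint64_t *in, uint64_t *out)
{
  for (int b = 0; b < TOY_NB; b++)
    in[b] = out[b] = 0;

  int changed = 1;
  while (changed)
    {
      changed = 0;
      /* Visiting blocks in reverse order converges quickly for a
         backward problem, much like the rts_order used above.  */
      for (int b = TOY_NB - 1; b >= 0; b--)
        {
          uint64_t o = 0;
          for (int s = 0; s < nsucc[b]; s++)
            o |= in[succ[b][s]];
          uint64_t i = use[b] | (o & ~def[b]);
          if (o != out[b] || i != in[b])
            changed = 1;
          out[b] = o;
          in[b] = i;
        }
    }
}
```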
2026 /* Initialize dataflow analysis. */
2032 df = xcalloc (1, sizeof (struct df));
2034 /* Squirrel away a global for debugging. */
2041 /* Start queuing refs. */
2043 df_refs_queue (struct df *df)
2045 df->def_id_save = df->def_id;
2046 df->use_id_save = df->use_id;
2047 /* ???? Perhaps we should save current obstack state so that we can
2048 unwind it. */
2053 /* Process queued refs. */
2055 df_refs_process (struct df *df)
2059 /* Build new insn-def chains. */
2060 for (i = df->def_id_save; i != df->def_id; i++)
2062 struct ref *def = df->defs[i];
2063 unsigned int uid = DF_REF_INSN_UID (def);
2065 /* Add def to head of def list for INSN. */
2067 = df_link_create (def, df->insns[uid].defs);
2070 /* Build new insn-use chains. */
2071 for (i = df->use_id_save; i != df->use_id; i++)
2073 struct ref *use = df->uses[i];
2074 unsigned int uid = DF_REF_INSN_UID (use);
2076 /* Add use to head of use list for INSN. */
2078 = df_link_create (use, df->insns[uid].uses);
2084 /* Update refs for basic block BB. */
2086 df_bb_refs_update (struct df *df, basic_block bb)
2091 /* While we have to scan the chain of insns for this BB, we do not
2092 need to allocate and queue a long chain of BB/INSN pairs. Using
2093 a bitmap for insns_modified saves memory and avoids queuing
2094 duplicates. */
2096 for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
2100 uid = INSN_UID (insn);
2102 if (bitmap_bit_p (df->insns_modified, uid))
2104 /* Delete any allocated refs of this insn. MPH, FIXME. */
2105 df_insn_refs_unlink (df, bb, insn);
2107 /* Scan the insn for refs. */
2108 df_insn_refs_record (df, bb, insn);
2112 if (insn == BB_END (bb))
2119 /* Process all the modified/deleted insns that were queued. */
2121 df_refs_update (struct df *df)
2126 if ((unsigned int) max_reg_num () >= df->reg_size)
2127 df_reg_table_realloc (df, 0);
2131 FOR_EACH_BB_IN_BITMAP (df->bbs_modified, 0, bb,
2133 count += df_bb_refs_update (df, bb);
2136 df_refs_process (df);
2141 /* Return nonzero if any of the requested blocks in the bitmap
2142 BLOCKS have been modified. */
2144 df_modified_p (struct df *df, bitmap blocks)
2153 if (bitmap_bit_p (df->bbs_modified, bb->index)
2154 && (! blocks || (blocks == (bitmap) -1) || bitmap_bit_p (blocks, bb->index)))
2164 /* Analyze dataflow info for the basic blocks specified by the bitmap
2165 BLOCKS, or for the whole CFG if BLOCKS is zero, or just for the
2166 modified blocks if BLOCKS is -1. */
2168 df_analyze (struct df *df, bitmap blocks, int flags)
2172 /* We could deal with additional basic blocks being created by
2173 rescanning everything again. */
2174 if (df->n_bbs && df->n_bbs != (unsigned int) last_basic_block)
2177 update = df_modified_p (df, blocks);
2178 if (update || (flags != df->flags))
2184 /* Recompute everything from scratch. */
2187 /* Allocate and initialize data structures. */
2188 df_alloc (df, max_reg_num ());
2189 df_analyze_1 (df, 0, flags, 0);
2194 if (blocks == (bitmap) -1)
2195 blocks = df->bbs_modified;
2200 df_analyze_1 (df, blocks, flags, 1);
2201 bitmap_zero (df->bbs_modified);
2202 bitmap_zero (df->insns_modified);
2209 /* Free all the dataflow info and the DF structure. */
2211 df_finish (struct df *df)
2218 /* Unlink INSN from its reference information. */
2220 df_insn_refs_unlink (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
2222 struct df_link *link;
2225 uid = INSN_UID (insn);
2227 /* Unlink all refs defined by this insn. */
2228 for (link = df->insns[uid].defs; link; link = link->next)
2229 df_def_unlink (df, link->ref);
2231 /* Unlink all refs used by this insn. */
2232 for (link = df->insns[uid].uses; link; link = link->next)
2233 df_use_unlink (df, link->ref);
2235 df->insns[uid].defs = 0;
2236 df->insns[uid].uses = 0;
2241 /* Unlink all the insns within BB from their reference information. */
2243 df_bb_refs_unlink (struct df *df, basic_block bb)
2247 /* Scan the block an insn at a time from beginning to end. */
2248 for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
2252 /* Unlink refs for INSN. */
2253 df_insn_refs_unlink (df, bb, insn);
2255 if (insn == BB_END (bb))
2261 /* Unlink all the refs in the basic blocks specified by BLOCKS.
2262 Not currently used. */
2264 df_refs_unlink (struct df *df, bitmap blocks)
2270 FOR_EACH_BB_IN_BITMAP (blocks, 0, bb,
2272 df_bb_refs_unlink (df, bb);
2278 df_bb_refs_unlink (df, bb);
2283 /* Functions to modify insns. */
2286 /* Delete INSN and all its reference information. */
2288 df_insn_delete (struct df *df, basic_block bb ATTRIBUTE_UNUSED, rtx insn)
2290 /* If the insn is a jump, we should perhaps call delete_insn to
2291 handle the JUMP_LABEL? */
2293 /* We should not be deleting the NOTE_INSN_BASIC_BLOCK or label. */
2294 if (insn == BB_HEAD (bb))
2297 /* Delete the insn. */
2300 df_insn_modify (df, bb, insn);
2302 return NEXT_INSN (insn);
2306 /* Mark that INSN within BB may have changed (created/modified/deleted).
2307 This may be called multiple times for the same insn. There is no
2308 harm calling this function if the insn wasn't changed; it will just
2309 slow down the rescanning of refs. */
2311 df_insn_modify (struct df *df, basic_block bb, rtx insn)
2315 uid = INSN_UID (insn);
2316 if (uid >= df->insn_size)
2317 df_insn_table_realloc (df, uid);
2319 bitmap_set_bit (df->bbs_modified, bb->index);
2320 bitmap_set_bit (df->insns_modified, uid);
2322 /* For incremental updating on the fly, perhaps we could make a copy
2323 of all the refs of the original insn and turn them into
2324 anti-refs. When df_refs_update finds these anti-refs, it annihilates
2325 the original refs. If validate_change fails then these anti-refs
2326 will just get ignored. */
2330 typedef struct replace_args
2339 /* Replace mem pointed to by PX with its associated pseudo register.
2340 DATA is actually a pointer to a structure describing the
2341 instruction currently being scanned and the MEM we are currently
2342 replacing. */
2344 df_rtx_mem_replace (rtx *px, void *data)
2346 replace_args *args = (replace_args *) data;
2349 if (mem == NULL_RTX)
2352 switch (GET_CODE (mem))
2358 /* We're not interested in the MEM associated with a
2359 CONST_DOUBLE, so there's no need to traverse into one. */
2363 /* This is not a MEM. */
2367 if (!rtx_equal_p (args->match, mem))
2368 /* This is not the MEM we are currently replacing. */
2371 /* Actually replace the MEM. */
2372 validate_change (args->insn, px, args->replacement, 1);
2380 df_insn_mem_replace (struct df *df, basic_block bb, rtx insn, rtx mem, rtx reg)
2386 args.replacement = reg;
2389 /* Search and replace all matching mems within insn. */
2390 for_each_rtx (&insn, df_rtx_mem_replace, &args);
2393 df_insn_modify (df, bb, insn);
2395 /* ???? FIXME. We may have a new def or one or more new uses of REG
2396 in INSN. REG should be a new pseudo so it won't affect the
2397 dataflow information that we currently have. We should add
2398 the new uses and defs to INSN and then recreate the chains
2399 when df_analyze is called. */
2400 return args.modified;
2404 /* Replace one register with another. Called through for_each_rtx; PX
2405 points to the rtx being scanned. DATA is actually a pointer to a
2406 structure of arguments. */
2408 df_rtx_reg_replace (rtx *px, void *data)
2411 replace_args *args = (replace_args *) data;
2416 if (x == args->match)
2418 validate_change (args->insn, px, args->replacement, 1);
2426 /* Replace the reg within every ref on CHAIN that is within the set
2427 BLOCKS of basic blocks with NEWREG. Also update the regs within
2430 df_refs_reg_replace (struct df *df, bitmap blocks, struct df_link *chain, rtx oldreg, rtx newreg)
2432 struct df_link *link;
2436 blocks = df->all_blocks;
2438 args.match = oldreg;
2439 args.replacement = newreg;
2442 for (link = chain; link; link = link->next)
2444 struct ref *ref = link->ref;
2445 rtx insn = DF_REF_INSN (ref);
2447 if (! INSN_P (insn))
2450 if (bitmap_bit_p (blocks, DF_REF_BBNO (ref)))
2452 df_ref_reg_replace (df, ref, oldreg, newreg);
2454 /* Replace occurrences of the reg within the REG_NOTES. */
2455 if ((! link->next || DF_REF_INSN (ref)
2456 != DF_REF_INSN (link->next->ref))
2457 && REG_NOTES (insn))
2460 for_each_rtx (&REG_NOTES (insn), df_rtx_reg_replace, &args);
2465 /* Temporary check to ensure that we have a grip on which
2466 regs should be replaced. */
2473 /* Replace all occurrences of register OLDREG with register NEWREG in
2474 blocks defined by bitmap BLOCKS. This also replaces occurrences of
2475 OLDREG in the REG_NOTES but only for insns containing OLDREG. This
2476 routine expects the reg-use and reg-def chains to be valid. */
2478 df_reg_replace (struct df *df, bitmap blocks, rtx oldreg, rtx newreg)
2480 unsigned int oldregno = REGNO (oldreg);
2482 df_refs_reg_replace (df, blocks, df->regs[oldregno].defs, oldreg, newreg);
2483 df_refs_reg_replace (df, blocks, df->regs[oldregno].uses, oldreg, newreg);
2488 /* Try replacing the reg within REF with NEWREG. Do not modify
2489 def-use/use-def chains. */
2491 df_ref_reg_replace (struct df *df, struct ref *ref, rtx oldreg, rtx newreg)
2493 /* Check that insn was deleted by being converted into a NOTE. If
2494 so ignore this insn. */
2495 if (! INSN_P (DF_REF_INSN (ref)))
2498 if (oldreg && oldreg != DF_REF_REG (ref))
2501 if (! validate_change (DF_REF_INSN (ref), DF_REF_LOC (ref), newreg, 1))
2504 df_insn_modify (df, DF_REF_BB (ref), DF_REF_INSN (ref));
2510 df_bb_def_use_swap (struct df *df, basic_block bb, rtx def_insn, rtx use_insn, unsigned int regno)
2516 struct df_link *link;
2518 def = df_bb_insn_regno_first_def_find (df, bb, def_insn, regno);
2522 use = df_bb_insn_regno_last_use_find (df, bb, use_insn, regno);
2526 /* The USE no longer exists. */
2527 use_uid = INSN_UID (use_insn);
2528 df_use_unlink (df, use);
2529 df_ref_unlink (&df->insns[use_uid].uses, use);
2531 /* The DEF requires shifting so remove it from DEF_INSN
2532 and add it to USE_INSN by reusing LINK. */
2533 def_uid = INSN_UID (def_insn);
2534 link = df_ref_unlink (&df->insns[def_uid].defs, def);
2536 link->next = df->insns[use_uid].defs;
2537 df->insns[use_uid].defs = link;
2540 link = df_ref_unlink (&df->regs[regno].defs, def);
2542 link->next = df->regs[regno].defs;
2543 df->regs[regno].defs = link;
2546 DF_REF_INSN (def) = use_insn;
2551 /* Record df between FIRST_INSN and LAST_INSN inclusive. All new
2552 insns must be processed by this routine. */
2554 df_insns_modify (struct df *df, basic_block bb, rtx first_insn, rtx last_insn)
2558 for (insn = first_insn; ; insn = NEXT_INSN (insn))
2562 /* A non-const call should not have slipped through the net. If
2563 it does, we need to create a new basic block. Ouch. The
2564 same applies for a label. */
2565 if ((GET_CODE (insn) == CALL_INSN
2566 && ! CONST_OR_PURE_CALL_P (insn))
2567 || GET_CODE (insn) == CODE_LABEL)
2570 uid = INSN_UID (insn);
2572 if (uid >= df->insn_size)
2573 df_insn_table_realloc (df, uid);
2575 df_insn_modify (df, bb, insn);
2577 if (insn == last_insn)
2583 /* Emit PATTERN before INSN within BB. */
2585 df_pattern_emit_before (struct df *df, rtx pattern, basic_block bb, rtx insn)
2588 rtx prev_insn = PREV_INSN (insn);
2590 /* We should not be inserting before the start of the block. */
2591 if (insn == BB_HEAD (bb))
2593 ret_insn = emit_insn_before (pattern, insn);
2594 if (ret_insn == insn)
2597 df_insns_modify (df, bb, NEXT_INSN (prev_insn), ret_insn);
2602 /* Emit PATTERN after INSN within BB. */
2604 df_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2608 ret_insn = emit_insn_after (pattern, insn);
2609 if (ret_insn == insn)
2612 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2617 /* Emit jump PATTERN after INSN within BB. */
2619 df_jump_pattern_emit_after (struct df *df, rtx pattern, basic_block bb, rtx insn)
2623 ret_insn = emit_jump_insn_after (pattern, insn);
2624 if (ret_insn == insn)
2627 df_insns_modify (df, bb, NEXT_INSN (insn), ret_insn);
2632 /* Move INSN within BB before BEFORE_INSN within BEFORE_BB.
2634 This function should only be used to move loop invariant insns
2635 out of a loop where it has been proven that the def-use info
2636 will still be valid. */
2638 df_insn_move_before (struct df *df, basic_block bb, rtx insn, basic_block before_bb, rtx before_insn)
2640 struct df_link *link;
2643 if (! bb)
2644 return df_pattern_emit_before (df, insn, before_bb, before_insn);
2646 uid = INSN_UID (insn);
2648 /* Change bb for all df defined and used by this insn. */
2649 for (link = df->insns[uid].defs; link; link = link->next)
2650 DF_REF_BB (link->ref) = before_bb;
2651 for (link = df->insns[uid].uses; link; link = link->next)
2652 DF_REF_BB (link->ref) = before_bb;
2654 /* The lifetimes of the registers used in this insn will be reduced
2655 while the lifetimes of the registers defined in this insn
2656 are likely to be increased. */
2658 /* ???? Perhaps all the insns moved should be stored on a list
2659 which df_analyze removes when it recalculates data flow. */
2661 return emit_insn_before (insn, before_insn);
2664 /* Functions to query dataflow information. */
2668 df_insn_regno_def_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
2669 rtx insn, unsigned int regno)
2672 struct df_link *link;
2674 uid = INSN_UID (insn);
2676 for (link = df->insns[uid].defs; link; link = link->next)
2678 struct ref *def = link->ref;
2680 if (DF_REF_REGNO (def) == regno)
2687 /* Finds the reference corresponding to the definition of REG in INSN.
2688 DF is the dataflow object. */
2691 df_find_def (struct df *df, rtx insn, rtx reg)
2693 struct df_link *defs;
2695 for (defs = DF_INSN_DEFS (df, insn); defs; defs = defs->next)
2696 if (rtx_equal_p (DF_REF_REG (defs->ref), reg))
2702 /* Return 1 if REG is referenced in INSN, zero otherwise. */
2705 df_reg_used (struct df *df, rtx insn, rtx reg)
2707 struct df_link *uses;
2709 for (uses = DF_INSN_USES (df, insn); uses; uses = uses->next)
2710 if (rtx_equal_p (DF_REF_REG (uses->ref), reg))
/* Return nonzero if DEF is the only def reaching each of the uses on
   its def-use chain.  */
static int
df_def_dominates_all_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def)
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Follow use-def chain to check all the defs for this use.  */
      for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
        if (ud_link->ref != def)
          return 0;
    }
  return 1;
}


/* Return nonzero if all the defs of INSN dominate all their uses.  */
int
df_insn_dominates_all_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
                              rtx insn)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (! df_def_dominates_all_uses_p (df, def))
        return 0;
    }

  return 1;
}
/* Return nonzero if DEF dominates all of its uses within the bitmap
   BLOCKS.  */
static int
df_def_dominates_uses_p (struct df *df ATTRIBUTE_UNUSED, struct ref *def,
                         bitmap blocks)
{
  struct df_link *du_link;

  /* Follow def-use chain to find all the uses of this def.  */
  for (du_link = DF_REF_CHAIN (def); du_link; du_link = du_link->next)
    {
      struct ref *use = du_link->ref;
      struct df_link *ud_link;

      /* Only worry about the uses within BLOCKS.  For example,
         consider a register defined within a loop that is live at the
         loop exits.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (use)))
        {
          /* Follow use-def chain to check all the defs for this use.  */
          for (ud_link = DF_REF_CHAIN (use); ud_link; ud_link = ud_link->next)
            if (ud_link->ref != def)
              return 0;
        }
    }
  return 1;
}
/* Return nonzero if all the defs of INSN within BB dominate all the
   corresponding uses.  */
int
df_insn_dominates_uses_p (struct df *df, basic_block bb ATTRIBUTE_UNUSED,
                          rtx insn, bitmap blocks)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      /* Only consider the defs within BLOCKS.  */
      if (bitmap_bit_p (blocks, DF_REF_BBNO (def))
          && ! df_def_dominates_uses_p (df, def, blocks))
        return 0;
    }

  return 1;
}
/* Return the basic block that REG is referenced in, or NULL if it is
   referenced in multiple basic blocks.  */
basic_block
df_regno_bb (struct df *df, unsigned int regno)
{
  struct df_link *defs = df->regs[regno].defs;
  struct df_link *uses = df->regs[regno].uses;
  struct ref *def = defs ? defs->ref : 0;
  struct ref *use = uses ? uses->ref : 0;
  basic_block bb_def = def ? DF_REF_BB (def) : 0;
  basic_block bb_use = use ? DF_REF_BB (use) : 0;

  /* Compare blocks of first def and last use.  ???? FIXME.  What if
     the reg-def and reg-use lists are not correctly ordered?  */
  return bb_def == bb_use ? bb_def : 0;
}
/* Return nonzero if REG is used in multiple basic blocks.  */
int
df_reg_global_p (struct df *df, rtx reg)
{
  /* df_regno_bb returns NULL when REG is referenced in more than one
     basic block.  */
  return df_regno_bb (df, REGNO (reg)) == 0;
}
/* Return total lifetime (in insns) of REG.  */
int
df_reg_lifetime (struct df *df, rtx reg)
{
  return df->regs[REGNO (reg)].lifetime;
}
/* Return nonzero if REG is live at the start of BB.  */
int
df_bb_reg_live_start_p (struct df *df, basic_block bb, rtx reg)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_in)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_in, REGNO (reg));
}
/* Return nonzero if REG is live at the end of BB.  */
int
df_bb_reg_live_end_p (struct df *df, basic_block bb, rtx reg)
{
  struct bb_info *bb_info = DF_BB_INFO (df, bb);

#ifdef ENABLE_CHECKING
  if (! bb_info->lr_out)
    abort ();
#endif

  return bitmap_bit_p (bb_info->lr_out, REGNO (reg));
}
/* Return -1 if life of REG1 is before life of REG2, 1 if life of REG1
   is after life of REG2, or 0 if their lives overlap.  */
int
df_bb_regs_lives_compare (struct df *df, basic_block bb, rtx reg1, rtx reg2)
{
  unsigned int regno1 = REGNO (reg1);
  unsigned int regno2 = REGNO (reg2);
  struct ref *def1;
  struct ref *use1;
  struct ref *def2;
  struct ref *use2;

  /* The regs must be local to BB.  */
  if (df_regno_bb (df, regno1) != bb
      || df_regno_bb (df, regno2) != bb)
    abort ();

  def2 = df_bb_regno_first_def_find (df, bb, regno2);
  use1 = df_bb_regno_last_use_find (df, bb, regno1);

  if (DF_INSN_LUID (df, DF_REF_INSN (def2))
      > DF_INSN_LUID (df, DF_REF_INSN (use1)))
    return -1;

  def1 = df_bb_regno_first_def_find (df, bb, regno1);
  use2 = df_bb_regno_last_use_find (df, bb, regno2);

  if (DF_INSN_LUID (df, DF_REF_INSN (def1))
      > DF_INSN_LUID (df, DF_REF_INSN (use2)))
    return 1;

  return 0;
}
/* Return last use of REGNO within BB.  */
struct ref *
df_bb_regno_last_use_find (struct df *df, basic_block bb, unsigned int regno)
{
  struct df_link *link;

  /* This assumes that the reg-use list is ordered such that for any
     BB, the last use is found first.  However, since the BBs are not
     ordered, the first use in the chain is not necessarily the last
     use in the function.  */
  for (link = df->regs[regno].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_BB (use) == bb)
        return use;
    }

  return NULL;
}


/* Return first def of REGNO within BB.  */
struct ref *
df_bb_regno_first_def_find (struct df *df, basic_block bb, unsigned int regno)
{
  struct df_link *link;

  /* This assumes that the reg-def list is ordered such that for any
     BB, the first def is found first.  However, since the BBs are not
     ordered, the first def in the chain is not necessarily the first
     def in the function.  */
  for (link = df->regs[regno].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_BB (def) == bb)
        return def;
    }

  return NULL;
}
/* Return last def of REGNO within BB.  */
struct ref *
df_bb_regno_last_def_find (struct df *df, basic_block bb, unsigned int regno)
{
  struct df_link *link;
  struct ref *last_def = NULL;
  int in_bb = 0;

  /* This assumes that the reg-def list is ordered such that for any
     BB, the first def is found first.  However, since the BBs are not
     ordered, the first def in the chain is not necessarily the first
     def in the function.  */
  for (link = df->regs[regno].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      /* The first time in the desired block.  */
      if (DF_REF_BB (def) == bb)
        {
          in_bb = 1;
          last_def = def;
        }
      /* Past the last def in the desired block.  */
      else if (in_bb)
        break;
    }

  return last_def;
}
/* Return last use of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_last_use_find (struct df *df,
                                basic_block bb ATTRIBUTE_UNUSED, rtx insn,
                                unsigned int regno)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].uses; link; link = link->next)
    {
      struct ref *use = link->ref;

      if (DF_REF_REGNO (use) == regno)
        return use;
    }

  return NULL;
}


/* Return first def of REGNO inside INSN within BB.  */
static struct ref *
df_bb_insn_regno_first_def_find (struct df *df,
                                 basic_block bb ATTRIBUTE_UNUSED, rtx insn,
                                 unsigned int regno)
{
  unsigned int uid;
  struct df_link *link;

  uid = INSN_UID (insn);

  for (link = df->insns[uid].defs; link; link = link->next)
    {
      struct ref *def = link->ref;

      if (DF_REF_REGNO (def) == regno)
        return def;
    }

  return NULL;
}
/* Return insn using REG if the BB contains only a single
   use and def of REG.  */
rtx
df_bb_single_def_use_insn_find (struct df *df, basic_block bb, rtx insn, rtx reg)
{
  struct ref *def;
  struct ref *use;
  struct df_link *du_link;

  def = df_bb_insn_regno_first_def_find (df, bb, insn, REGNO (reg));

  if (! def)
    abort ();

  du_link = DF_REF_CHAIN (def);

  if (! du_link)
    return NULL_RTX;

  use = du_link->ref;

  /* Check if def is dead.  */
  if (! use)
    return NULL_RTX;

  /* Check for multiple uses.  */
  if (du_link->next)
    return NULL_RTX;

  return DF_REF_INSN (use);
}
/* Functions for debugging/dumping dataflow information.  */


/* Dump a def-use or use-def chain for REF to FILE.  */
static void
df_chain_dump (struct df_link *link, FILE *file)
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    fprintf (file, "%c%d ",
             DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
             DF_REF_ID (link->ref));
  fprintf (file, "}");
}


/* Dump a chain of refs with the associated regno.  */
static void
df_chain_dump_regno (struct df_link *link, FILE *file)
{
  fprintf (file, "{ ");
  for (; link; link = link->next)
    fprintf (file, "%c%d(%d) ",
             DF_REF_REG_DEF_P (link->ref) ? 'd' : 'u',
             DF_REF_ID (link->ref),
             DF_REF_REGNO (link->ref));
  fprintf (file, "}");
}
/* Dump dataflow info.  */
void
df_dump (struct df *df, int flags, FILE *file)
{
  unsigned int j;
  basic_block bb;

  if (! df || ! file)
    return;

  fprintf (file, "\nDataflow summary:\n");
  fprintf (file, "n_regs = %d, n_defs = %d, n_uses = %d, n_bbs = %d\n",
           df->n_regs, df->n_defs, df->n_uses, df->n_bbs);
  if (flags & DF_RD)
    {
      fprintf (file, "Reaching defs:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->rd_in)
            continue;

          fprintf (file, "bb %d in  \t", bb->index);
          dump_bitmap (file, bb_info->rd_in);
          fprintf (file, "bb %d gen \t", bb->index);
          dump_bitmap (file, bb_info->rd_gen);
          fprintf (file, "bb %d kill\t", bb->index);
          dump_bitmap (file, bb_info->rd_kill);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->rd_out);
        }
    }
  if (flags & DF_UD_CHAIN)
    {
      fprintf (file, "Use-def chains:\n");
      for (j = 0; j < df->n_defs; j++)
        {
          if (df->defs[j])
            {
              fprintf (file, "d%d bb %d luid %d insn %d reg %d ",
                       j, DF_REF_BBNO (df->defs[j]),
                       DF_INSN_LUID (df, DF_REF_INSN (df->defs[j])),
                       DF_REF_INSN_UID (df->defs[j]),
                       DF_REF_REGNO (df->defs[j]));
              if (df->defs[j]->flags & DF_REF_READ_WRITE)
                fprintf (file, "read/write ");
              df_chain_dump (DF_REF_CHAIN (df->defs[j]), file);
              fprintf (file, "\n");
            }
        }
    }
  if (flags & DF_RU)
    {
      fprintf (file, "Reaching uses:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->ru_in)
            continue;

          fprintf (file, "bb %d in  \t", bb->index);
          dump_bitmap (file, bb_info->ru_in);
          fprintf (file, "bb %d gen \t", bb->index);
          dump_bitmap (file, bb_info->ru_gen);
          fprintf (file, "bb %d kill\t", bb->index);
          dump_bitmap (file, bb_info->ru_kill);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->ru_out);
        }
    }
  if (flags & DF_DU_CHAIN)
    {
      fprintf (file, "Def-use chains:\n");
      for (j = 0; j < df->n_uses; j++)
        {
          if (df->uses[j])
            {
              fprintf (file, "u%d bb %d luid %d insn %d reg %d ",
                       j, DF_REF_BBNO (df->uses[j]),
                       DF_INSN_LUID (df, DF_REF_INSN (df->uses[j])),
                       DF_REF_INSN_UID (df->uses[j]),
                       DF_REF_REGNO (df->uses[j]));
              if (df->uses[j]->flags & DF_REF_READ_WRITE)
                fprintf (file, "read/write ");
              df_chain_dump (DF_REF_CHAIN (df->uses[j]), file);
              fprintf (file, "\n");
            }
        }
    }
  if (flags & DF_LR)
    {
      fprintf (file, "Live regs:\n");
      FOR_EACH_BB (bb)
        {
          struct bb_info *bb_info = DF_BB_INFO (df, bb);

          if (! bb_info->lr_in)
            continue;

          fprintf (file, "bb %d in  \t", bb->index);
          dump_bitmap (file, bb_info->lr_in);
          fprintf (file, "bb %d use \t", bb->index);
          dump_bitmap (file, bb_info->lr_use);
          fprintf (file, "bb %d def \t", bb->index);
          dump_bitmap (file, bb_info->lr_def);
          fprintf (file, "bb %d out \t", bb->index);
          dump_bitmap (file, bb_info->lr_out);
        }
    }
  if (flags & (DF_REG_INFO | DF_RD_CHAIN | DF_RU_CHAIN))
    {
      struct reg_info *reg_info = df->regs;

      fprintf (file, "Register info:\n");
      for (j = 0; j < df->n_regs; j++)
        {
          if (((flags & DF_REG_INFO)
               && (reg_info[j].n_uses || reg_info[j].n_defs))
              || ((flags & DF_RD_CHAIN) && reg_info[j].defs)
              || ((flags & DF_RU_CHAIN) && reg_info[j].uses))
            {
              fprintf (file, "reg %d", j);
              if ((flags & DF_RD_CHAIN) && (flags & DF_RU_CHAIN))
                {
                  basic_block bb = df_regno_bb (df, j);

                  if (bb)
                    fprintf (file, " bb %d", bb->index);
                  else
                    fprintf (file, " bb ?");
                }
              if (flags & DF_REG_INFO)
                {
                  fprintf (file, " life %d", reg_info[j].lifetime);
                }
              if ((flags & DF_REG_INFO) || (flags & DF_RD_CHAIN))
                {
                  fprintf (file, " defs ");
                  if (flags & DF_REG_INFO)
                    fprintf (file, "%d ", reg_info[j].n_defs);
                  if (flags & DF_RD_CHAIN)
                    df_chain_dump (reg_info[j].defs, file);
                }
              if ((flags & DF_REG_INFO) || (flags & DF_RU_CHAIN))
                {
                  fprintf (file, " uses ");
                  if (flags & DF_REG_INFO)
                    fprintf (file, "%d ", reg_info[j].n_uses);
                  if (flags & DF_RU_CHAIN)
                    df_chain_dump (reg_info[j].uses, file);
                }
              fprintf (file, "\n");
            }
        }
    }
  fprintf (file, "\n");
}
void
df_insn_debug (struct df *df, rtx insn, FILE *file)
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
           uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->insns[uid].uses, file);
  fprintf (file, "\n");
}
void
df_insn_debug_regno (struct df *df, rtx insn, FILE *file)
{
  unsigned int uid;
  int bbi;

  uid = INSN_UID (insn);
  if (uid >= df->insn_size)
    return;

  if (df->insns[uid].defs)
    bbi = DF_REF_BBNO (df->insns[uid].defs->ref);
  else if (df->insns[uid].uses)
    bbi = DF_REF_BBNO (df->insns[uid].uses->ref);
  else
    bbi = -1;

  fprintf (file, "insn %d bb %d luid %d defs ",
           uid, bbi, DF_INSN_LUID (df, insn));
  df_chain_dump_regno (df->insns[uid].defs, file);
  fprintf (file, " uses ");
  df_chain_dump_regno (df->insns[uid].uses, file);
  fprintf (file, "\n");
}
void
df_regno_debug (struct df *df, unsigned int regno, FILE *file)
{
  if (regno >= df->reg_size)
    return;

  fprintf (file, "reg %d life %d defs ",
           regno, df->regs[regno].lifetime);
  df_chain_dump (df->regs[regno].defs, file);
  fprintf (file, " uses ");
  df_chain_dump (df->regs[regno].uses, file);
  fprintf (file, "\n");
}
static void
df_ref_debug (struct df *df, struct ref *ref, FILE *file)
{
  fprintf (file, "%c%d ",
           DF_REF_REG_DEF_P (ref) ? 'd' : 'u',
           DF_REF_ID (ref));
  fprintf (file, "reg %d bb %d luid %d insn %d chain ",
           DF_REF_REGNO (ref),
           DF_REF_BBNO (ref),
           DF_INSN_LUID (df, DF_REF_INSN (ref)),
           INSN_UID (DF_REF_INSN (ref)));
  df_chain_dump (DF_REF_CHAIN (ref), file);
  fprintf (file, "\n");
}
/* Functions for debugging from GDB.  */

void
debug_df_insn (rtx insn)
{
  df_insn_debug (ddf, insn, stderr);
  debug_rtx (insn);
}


void
debug_df_reg (rtx reg)
{
  df_regno_debug (ddf, REGNO (reg), stderr);
}


void
debug_df_regno (unsigned int regno)
{
  df_regno_debug (ddf, regno, stderr);
}


void
debug_df_ref (struct ref *ref)
{
  df_ref_debug (ddf, ref, stderr);
}


void
debug_df_defno (unsigned int defno)
{
  df_ref_debug (ddf, ddf->defs[defno], stderr);
}


void
debug_df_useno (unsigned int useno)
{
  df_ref_debug (ddf, ddf->uses[useno], stderr);
}


void
debug_df_chain (struct df_link *link)
{
  df_chain_dump (link, stderr);
  fputc ('\n', stderr);
}
/* Hybrid search algorithm from "Implementation Techniques for
   Efficient Data-Flow Analysis of Large Programs".  */
static void
hybrid_search_bitmap (basic_block block, bitmap *in, bitmap *out, bitmap *gen,
                      bitmap *kill, enum df_flow_dir dir,
                      enum df_confluence_op conf_op,
                      transfer_function_bitmap transfun, sbitmap visited,
                      sbitmap pending, void *data)
{
  int changed;
  int i = block->index;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->index);
  if (TEST_BIT (pending, block->index))
    {
      if (dir == DF_FORWARD)
        {
          /* Calculate <conf_op> of predecessor_outs.  */
          bitmap_zero (in[i]);
          for (e = bb->pred; e != 0; e = e->pred_next)
            {
              if (e->src == ENTRY_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  bitmap_a_or_b (in[i], in[i], out[e->src->index]);
                  break;
                case DF_INTERSECTION:
                  bitmap_a_and_b (in[i], in[i], out[e->src->index]);
                  break;
                }
            }
        }
      else
        {
          /* Calculate <conf_op> of successor ins.  */
          bitmap_zero (out[i]);
          for (e = bb->succ; e != 0; e = e->succ_next)
            {
              if (e->dest == EXIT_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  bitmap_a_or_b (out[i], out[i], in[e->dest->index]);
                  break;
                case DF_INTERSECTION:
                  bitmap_a_and_b (out[i], out[i], in[e->dest->index]);
                  break;
                }
            }
        }
      /* Common part.  */
      (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
        {
          if (dir == DF_FORWARD)
            {
              for (e = bb->succ; e != 0; e = e->succ_next)
                {
                  if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
                    continue;
                  SET_BIT (pending, e->dest->index);
                }
            }
          else
            {
              for (e = bb->pred; e != 0; e = e->pred_next)
                {
                  if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
                    continue;
                  SET_BIT (pending, e->src->index);
                }
            }
        }
    }
  if (dir == DF_FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
        {
          if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
            continue;
          if (!TEST_BIT (visited, e->dest->index))
            hybrid_search_bitmap (e->dest, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending,
                                  data);
        }
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
        {
          if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
            continue;
          if (!TEST_BIT (visited, e->src->index))
            hybrid_search_bitmap (e->src, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending,
                                  data);
        }
    }
}
/* Hybrid search for sbitmaps, rather than bitmaps.  */
static void
hybrid_search_sbitmap (basic_block block, sbitmap *in, sbitmap *out,
                       sbitmap *gen, sbitmap *kill, enum df_flow_dir dir,
                       enum df_confluence_op conf_op,
                       transfer_function_sbitmap transfun, sbitmap visited,
                       sbitmap pending, void *data)
{
  int changed;
  int i = block->index;
  edge e;
  basic_block bb = block;

  SET_BIT (visited, block->index);
  if (TEST_BIT (pending, block->index))
    {
      if (dir == DF_FORWARD)
        {
          /* Calculate <conf_op> of predecessor_outs.  */
          sbitmap_zero (in[i]);
          for (e = bb->pred; e != 0; e = e->pred_next)
            {
              if (e->src == ENTRY_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  sbitmap_a_or_b (in[i], in[i], out[e->src->index]);
                  break;
                case DF_INTERSECTION:
                  sbitmap_a_and_b (in[i], in[i], out[e->src->index]);
                  break;
                }
            }
        }
      else
        {
          /* Calculate <conf_op> of successor ins.  */
          sbitmap_zero (out[i]);
          for (e = bb->succ; e != 0; e = e->succ_next)
            {
              if (e->dest == EXIT_BLOCK_PTR)
                continue;
              switch (conf_op)
                {
                case DF_UNION:
                  sbitmap_a_or_b (out[i], out[i], in[e->dest->index]);
                  break;
                case DF_INTERSECTION:
                  sbitmap_a_and_b (out[i], out[i], in[e->dest->index]);
                  break;
                }
            }
        }
      /* Common part.  */
      (*transfun)(i, &changed, in[i], out[i], gen[i], kill[i], data);
      RESET_BIT (pending, i);
      if (changed)
        {
          if (dir == DF_FORWARD)
            {
              for (e = bb->succ; e != 0; e = e->succ_next)
                {
                  if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
                    continue;
                  SET_BIT (pending, e->dest->index);
                }
            }
          else
            {
              for (e = bb->pred; e != 0; e = e->pred_next)
                {
                  if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
                    continue;
                  SET_BIT (pending, e->src->index);
                }
            }
        }
    }
  if (dir == DF_FORWARD)
    {
      for (e = bb->succ; e != 0; e = e->succ_next)
        {
          if (e->dest == EXIT_BLOCK_PTR || e->dest->index == i)
            continue;
          if (!TEST_BIT (visited, e->dest->index))
            hybrid_search_sbitmap (e->dest, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending,
                                   data);
        }
    }
  else
    {
      for (e = bb->pred; e != 0; e = e->pred_next)
        {
          if (e->src == ENTRY_BLOCK_PTR || e->src->index == i)
            continue;
          if (!TEST_BIT (visited, e->src->index))
            hybrid_search_sbitmap (e->src, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending,
                                   data);
        }
    }
}
/* gen = GEN set.
   kill = KILL set.
   in, out = Filled in by function.
   blocks = Blocks to analyze.
   dir = Dataflow direction.
   conf_op = Confluence operation.
   transfun = Transfer function.
   order = Order to iterate in.  (Should map block numbers -> order.)
   data = Whatever you want.  It's passed to the transfer function.

   This function will perform iterative bitvector dataflow, producing
   the in and out sets.  Even if you only want to perform it for a
   small number of blocks, the vectors for in and out must be large
   enough for *all* blocks, because changing one block might affect
   others.  However, it'll only put what you say to analyze on the
   initial worklist.

   For forward problems, you probably want to pass in a mapping of
   block number to rc_order (like df->inverse_rc_map).  */
void
iterative_dataflow_sbitmap (sbitmap *in, sbitmap *out, sbitmap *gen,
                            sbitmap *kill, bitmap blocks,
                            enum df_flow_dir dir,
                            enum df_confluence_op conf_op,
                            transfer_function_sbitmap transfun, int *order,
                            void *data)
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == DF_FORWARD)
      sbitmap_copy (out[i], gen[i]);
    else
      sbitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
        {
          i = (size_t) fibheap_extract_min (worklist);
          bb = BASIC_BLOCK (i);
          if (!TEST_BIT (visited, bb->index))
            hybrid_search_sbitmap (bb, in, out, gen, kill, dir,
                                   conf_op, transfun, visited, pending, data);
        }

      if (sbitmap_first_set_bit (pending) != -1)
        {
          EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
          {
            fibheap_insert (worklist, order[i], (void *) (size_t) i);
          });
          sbitmap_zero (visited);
        }
      else
        break;
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}
/* Exactly the same as iterative_dataflow_sbitmap, except it works on
   bitmaps instead.  */
void
iterative_dataflow_bitmap (bitmap *in, bitmap *out, bitmap *gen, bitmap *kill,
                           bitmap blocks, enum df_flow_dir dir,
                           enum df_confluence_op conf_op,
                           transfer_function_bitmap transfun, int *order,
                           void *data)
{
  int i;
  fibheap_t worklist;
  basic_block bb;
  sbitmap visited, pending;

  pending = sbitmap_alloc (last_basic_block);
  visited = sbitmap_alloc (last_basic_block);
  sbitmap_zero (pending);
  sbitmap_zero (visited);
  worklist = fibheap_new ();

  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
  {
    fibheap_insert (worklist, order[i], (void *) (size_t) i);
    SET_BIT (pending, i);
    if (dir == DF_FORWARD)
      bitmap_copy (out[i], gen[i]);
    else
      bitmap_copy (in[i], gen[i]);
  });

  while (sbitmap_first_set_bit (pending) != -1)
    {
      while (!fibheap_empty (worklist))
        {
          i = (size_t) fibheap_extract_min (worklist);
          bb = BASIC_BLOCK (i);
          if (!TEST_BIT (visited, bb->index))
            hybrid_search_bitmap (bb, in, out, gen, kill, dir,
                                  conf_op, transfun, visited, pending, data);
        }

      if (sbitmap_first_set_bit (pending) != -1)
        {
          EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i,
          {
            fibheap_insert (worklist, order[i], (void *) (size_t) i);
          });
          sbitmap_zero (visited);
        }
      else
        break;
    }

  sbitmap_free (pending);
  sbitmap_free (visited);
  fibheap_delete (worklist);
}