/* Instruction scheduling pass.
   Copyright (C) 1992, 93-96, 1997 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.  */
/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes; all of these are dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.
   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and, from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with lowest conflict cost, ties broken by
   2.  choose insn with the longest path to end of bb, ties broken by
   3.  choose insn that kills the most registers, ties broken by
   4.  choose insn that conflicts with the most ready insns, or finally
   5.  choose insn with lowest UID.
   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence) can we move operations past
   memory references.  To a first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.
   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good place to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head
   and basic_block_end.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks that have been scheduled.  */
#include "config.h"
#include "rtl.h"
#include "basic-block.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "insn-config.h"
#include "insn-attr.h"

extern char *reg_known_equiv_p;
extern rtx *reg_known_value;
#ifdef INSN_SCHEDULING
/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;
/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;
static regset reg_pending_sets;
static int reg_pending_sets_all;

/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving the cost of each instruction.  */
static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]
/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]

#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R)			\
  (((U) << BLOCKAGE_BITS			\
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS	\
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B)						\
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
#define DONE_PRIORITY -1
#define MAX_PRIORITY 0x7fffffff
#define TAIL_PRIORITY 0x7ffffffe
#define LAUNCH_PRIORITY 0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving the line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;
/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post-scheduled code as there were in the
   prescheduled code, then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;
/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
       their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
       time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) consists of the insns in the LOG_LINKS of the
   unscheduled insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+(C)) & (INSN_QUEUE_SIZE-1))
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])
/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  int regno;
  int live_length;
  int calls_crossed;
};

/* Forward declarations.  */
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx, int));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((rtx *, rtx *));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static rtx reemit_notes PROTO((rtx, rtx));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));

#endif /* INSN_SCHEDULING */
#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */
static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  register rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next)
      && GET_CODE (next) != CODE_LABEL)
    {
      /* Notes will never intervene here though, so don't bother checking
	 for them either.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
	 that has LABEL_PRESERVE_P set, which is represented by the same
	 bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
	 scheduled.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
	     && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
	next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
	return;

      /* Make the dependence to NEXT, the last insn of the group, instead
	 of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
	/* If this is a more restrictive type of dependence than the existing
	   one, then change the existing dependence to this type.  */
	if ((int) dep_type < (int) REG_NOTE_KIND (link))
	  PUT_REG_NOTE_KIND (link, dep_type);
	return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}
/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link;
       prev = link, link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
	{
	  if (prev)
	    XEXP (prev, 1) = XEXP (link, 1);
	  else
	    LOG_LINKS (insn) = XEXP (link, 1);
	  found = 1;
	}
    }

  if (! found)
    abort ();
  return;
}
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else /* INSN_SCHEDULING */
/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */
static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */
static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */
static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non-constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */
static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */
static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insns and the next call insn, to ensure that they won't cross a
   call after scheduling is done.  */
static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */
static rtx last_scheduled_insn;
/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */
/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

__inline static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
	return list;
      list = XEXP (list, 1);
    }
  return 0;
}
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to function_units_used because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	unit = -1;
      else
	{
	  unit = function_units_used (insn);
	  /* Increment non-negative values so we can cache zero.  */
	  if (unit >= 0)
	    unit++;
	}

      /* We only cache 16 bits of the result, so if the value is out of
	 range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
	  || unit >= 0
	  || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
	INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
	 the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
	INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];

/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
  bzero ((char *) unit_tick, sizeof (unit_tick));
  bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
}
/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if (unit & 1)
	prepare_unit (i);
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE, at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
	 insn and the unit's last insn is the candidate insn.  We want a
	 more exact measure of the blockage if we execute INSN at CLOCK
	 given when we committed the execution of the unit's last insn.

	 The blockage value is given by either the unit's max blockage
	 constant, blockage range function, or blockage function.  Use
	 the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
	{
	  if (function_units[unit].blockage_function)
	    tick += (function_units[unit].blockage_function
		     (insn, unit_last_insn[instance])
		     - function_units[unit].max_blockage);
	  else
	    tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
		     - function_units[unit].max_blockage);
	}
      if (tick - clock > cost)
	cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
	 one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	{
	  if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
	    break;
	  instance += FUNCTION_UNITS_SIZE;
	}
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if (unit & 1)
	schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

__inline static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
						   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
	{
	  for (i = function_units[unit].multiplicity - 1; i > 0; i--)
	    {
	      instance += FUNCTION_UNITS_SIZE;
	      this_cost = actual_hazard_this_instance (unit, instance, insn,
						       clock, cost);
	      if (this_cost < best_cost)
		{
		  best_cost = this_cost;
		  if (this_cost <= cost)
		    break;
		}
	    }
	}
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if (unit & 1)
	cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
	{
	  if (function_units[unit].blockage_range_function)
	    {
	      maxb = minb = blockage_range (unit, insn);
	      maxb = MAX_BLOCKAGE_COST (maxb);
	      minb = MIN_BLOCKAGE_COST (minb);
	    }

	  if (maxb > 1)
	    {
	      /* Make the number of instructions left dominate.  Make the
		 minimum delay dominate the maximum delay.  If all these
		 are the same, use the unit number to add an arbitrary
		 ordering.  Other terms can be added.  */
	      ncost = minb * 0x40 + maxb;
	      ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
	      if (ncost > cost)
		cost = ncost;
	    }
	}
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if (unit & 1)
	cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
	 We can't pass these directly to result_ready_cost because it will
	 trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
	{
	  INSN_COST (insn) = 1;
	  return 1;
	}
      else
	{
	  cost = result_ready_cost (insn);

	  if (cost < 1)
	    cost = 1;

	  INSN_COST (insn) = cost;
	}
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost <= 1)
	LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
	LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif
  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
	return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
	{
	  prev = insn;
	  while (SCHED_GROUP_P (prev))
	    {
	      prev = PREV_INSN (prev);
	      INSN_REF_COUNT (prev) += 1;
	    }
	}

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
	{
	  rtx x = XEXP (prev, 0);

	  /* A dependence pointing to a note or deleted insn is always
	     obsolete, because sched_analyze_insn will have created any
	     necessary new dependences which replace it.  Notes and deleted
	     insns can be created when instructions are deleted by insn
	     splitting, or by register allocation.  */
	  if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
	    {
	      remove_dependence (insn, x);
	      continue;
	    }

	  /* Clear the link cost adjustment bits.  */
	  LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
	  LINK_COST_ZERO (prev) = 0;
#endif

	  /* This priority calculation was chosen because it results in the
	     least instruction movement, and does not hurt the performance
	     of the resulting code compared to the old algorithm.
	     This makes the sched algorithm more stable, which results
	     in better code, because there is less register pressure,
	     cross jumping is more likely to work, and debugging is easier.

	     When all instructions have a latency of 1, there is no need to
	     move any instructions.  Subtracting one here ensures that in such
	     cases all instructions will end up with a priority of one, and
	     hence no scheduling will be done.

	     The original code did not subtract the one, and added the
	     insn_cost of the current instruction to its priority (e.g.
	     move the insn_cost call down to the end).  */

	  prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;
	  if (prev_priority > max_priority)
	    max_priority = prev_priority;
	  INSN_REF_COUNT (x) += 1;
	}

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);
      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}
      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);
      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}
      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);
      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}
      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);
      while (link)
	{
	  prev_link = link;
	  link = XEXP (link, 1);
	}
      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}
1042 /* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
1043 The MEM is a memory reference contained within INSN, which we are saving
1044 so that we can do memory aliasing on it. */
1047 add_insn_mem_dependence (insn_list, mem_list, insn, mem)
1048 rtx *insn_list, *mem_list, insn, mem;
1052 if (unused_insn_list)
1054 link = unused_insn_list;
1055 unused_insn_list = XEXP (link, 1);
1058 link = rtx_alloc (INSN_LIST);
1059 XEXP (link, 0) = insn;
1060 XEXP (link, 1) = *insn_list;
1063 if (unused_expr_list)
1065 link = unused_expr_list;
1066 unused_expr_list = XEXP (link, 1);
1069 link = rtx_alloc (EXPR_LIST);
1070 XEXP (link, 0) = mem;
1071 XEXP (link, 1) = *mem_list;
1074 pending_lists_length++;
/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  If ONLY_WRITE, don't flush
   the reads list.  */

flush_pending_lists (insn, only_write)

  while (pending_read_insns && ! only_write)
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;

  while (pending_write_insns)
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;

  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
/* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
   by the write to the destination of X, and reads of everything mentioned.  */

sched_analyze_1 (x, insn)

  register rtx dest = SET_DEST (x);

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
         || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
      if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
          /* The second and third arguments are values read by this insn.  */
          sched_analyze_2 (XEXP (dest, 1), insn);
          sched_analyze_2 (XEXP (dest, 2), insn);
      dest = SUBREG_REG (dest);

  if (GET_CODE (dest) == REG)
      regno = REGNO (dest);

      /* A hard reg in a wide mode may really be multiple registers.
         If so, mark all of them just like the first.  */
      if (regno < FIRST_PSEUDO_REGISTER)
          i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
              for (u = reg_last_uses[regno + i]; u; u = XEXP (u, 1))
                add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
              reg_last_uses[regno + i] = 0;
              if (reg_last_sets[regno + i])
                add_dependence (insn, reg_last_sets[regno + i],
                                REG_DEP_OUTPUT);
              SET_REGNO_REG_SET (reg_pending_sets, regno + i);
              if ((call_used_regs[regno + i] || global_regs[regno + i])
                  && last_function_call)
                /* Function calls clobber all call_used regs.  */
                add_dependence (insn, last_function_call, REG_DEP_ANTI);

          for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
            add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
          reg_last_uses[regno] = 0;
          if (reg_last_sets[regno])
            add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
          SET_REGNO_REG_SET (reg_pending_sets, regno);

          /* Pseudos that are REG_EQUIV to something may be replaced
             by that during reloading.  We need only add dependencies for
             the address in the REG_EQUIV note.  */
          if (! reload_completed
              && reg_known_equiv_p[regno]
              && GET_CODE (reg_known_value[regno]) == MEM)
            sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

          /* Don't let it cross a call after scheduling if it doesn't
             already cross one.  */
          if (REG_N_CALLS_CROSSED (regno) == 0 && last_function_call)
            add_dependence (insn, last_function_call, REG_DEP_ANTI);

  else if (GET_CODE (dest) == MEM)
      /* Writing memory.  */

      if (pending_lists_length > 32)
          /* Flush all pending reads and writes to prevent the pending lists
             from getting any larger.  Insn scheduling runs too slowly when
             these lists get long.  The number 32 was chosen because it
             seems like a reasonable number.  When compiling GCC with itself,
             this flush occurs 8 times for sparc, and 10 times for m88k using
             the number 32.  */
          flush_pending_lists (insn, 0);

          rtx pending, pending_mem;

          pending = pending_read_insns;
          pending_mem = pending_read_mems;
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (anti_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);

          pending = pending_write_insns;
          pending_mem = pending_write_mems;
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (output_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);

      if (last_pending_memory_flush)
        add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

      add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
                               insn, dest);

      sched_analyze_2 (XEXP (dest, 0), insn);

  /* Analyze reads.  */
  if (GET_CODE (x) == SET)
    sched_analyze_2 (SET_SRC (x), insn);
/* Analyze the uses of memory and registers in rtx X in INSN.  */

sched_analyze_2 (x, insn)

  register enum rtx_code code;

  code = GET_CODE (x);

      /* Ignore constants.  Note that we must handle CONST_DOUBLE here
         because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
         this does not mean that this insn is using cc0.  */

      /* User of CC0 depends on immediately preceding insn.  */
      SCHED_GROUP_P (insn) = 1;

      /* There may be a note before this insn now, but all notes will
         be removed before we actually try to schedule the insns, so
         it won't cause a problem later.  We must avoid it here though.  */
      prev = prev_nonnote_insn (insn);

      /* Make a copy of all dependencies on the immediately previous insn,
         and add to this insn.  This is so that all the dependencies will
         apply to the group.  Remove an explicit dependence on this insn
         as SCHED_GROUP_P now represents it.  */

      if (find_insn_list (prev, LOG_LINKS (insn)))
        remove_dependence (insn, prev);

      for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
        add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

        int regno = REGNO (x);
        if (regno < FIRST_PSEUDO_REGISTER)
            i = HARD_REGNO_NREGS (regno, GET_MODE (x));
                reg_last_uses[regno + i]
                  = gen_rtx (INSN_LIST, VOIDmode,
                             insn, reg_last_uses[regno + i]);
                if (reg_last_sets[regno + i])
                  add_dependence (insn, reg_last_sets[regno + i], 0);
                if ((call_used_regs[regno + i] || global_regs[regno + i])
                    && last_function_call)
                  /* Function calls clobber all call_used regs.  */
                  add_dependence (insn, last_function_call, REG_DEP_ANTI);

            reg_last_uses[regno]
              = gen_rtx (INSN_LIST, VOIDmode, insn, reg_last_uses[regno]);
            if (reg_last_sets[regno])
              add_dependence (insn, reg_last_sets[regno], 0);

            /* Pseudos that are REG_EQUIV to something may be replaced
               by that during reloading.  We need only add dependencies for
               the address in the REG_EQUIV note.  */
            if (! reload_completed
                && reg_known_equiv_p[regno]
                && GET_CODE (reg_known_value[regno]) == MEM)
              sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

            /* If the register does not already cross any calls, then add this
               insn to the sched_before_next_call list so that it will still
               not cross calls after scheduling.  */
            if (REG_N_CALLS_CROSSED (regno) == 0)
              add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);

      /* Reading memory.  */
        rtx pending, pending_mem;

        pending = pending_read_insns;
        pending_mem = pending_read_mems;
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (read_dependence (XEXP (pending_mem, 0), x))
                add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);

        pending = pending_write_insns;
        pending_mem = pending_write_mems;
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
                                   x, rtx_varies_p))
                add_dependence (insn, XEXP (pending, 0), 0);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);

        if (last_pending_memory_flush)
          add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

        /* Always add these dependencies to pending_reads, since
           this insn may be followed by a write.  */
        add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
                                 insn, x);

        /* Take advantage of tail recursion here.  */
        sched_analyze_2 (XEXP (x, 0), insn);

    case UNSPEC_VOLATILE:
      /* Traditional and volatile asm instructions must be considered to use
         and clobber all hard registers, all pseudo-registers and all of
         memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

         Consider for instance a volatile asm that changes the fpu rounding
         mode.  An insn should not be moved across this even if it only uses
         pseudo-regs because it might give an incorrectly rounded result.  */
      if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
          int max_reg = max_reg_num ();
          for (i = 0; i < max_reg; i++)
              for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
              reg_last_uses[i] = 0;
              if (reg_last_sets[i])
                add_dependence (insn, reg_last_sets[i], 0);
          reg_pending_sets_all = 1;

          flush_pending_lists (insn, 0);

      /* For all ASM_OPERANDS, we must traverse the vector of input operands.
         We cannot just fall through here since then we would be confused
         by the ASM_INPUT rtx inside ASM_OPERANDS, which does not indicate
         a traditional asm, unlike its normal usage.  */

      if (code == ASM_OPERANDS)
          for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
            sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);

      /* These both read and modify the result.  We must handle them as writes
         to get proper dependencies for following instructions.  We must handle
         them as reads to get proper dependencies from this to previous
         instructions.  Thus we need to pass them to both sched_analyze_1
         and sched_analyze_2.  We must call sched_analyze_2 first in order
         to get the proper antecedent for the read.  */
      sched_analyze_2 (XEXP (x, 0), insn);
      sched_analyze_1 (x, insn);

  /* Other cases: walk the insn.  */
  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
      sched_analyze_2 (XEXP (x, i), insn);
    else if (fmt[i] == 'E')
      for (j = 0; j < XVECLEN (x, i); j++)
        sched_analyze_2 (XVECEXP (x, i, j), insn);
/* Analyze an INSN with pattern X to find all dependencies.  */

sched_analyze_insn (x, insn, loop_notes)

  register RTX_CODE code = GET_CODE (x);
  int maxreg = max_reg_num ();

  if (code == SET || code == CLOBBER)
    sched_analyze_1 (x, insn);
  else if (code == PARALLEL)
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
          code = GET_CODE (XVECEXP (x, 0, i));
          if (code == SET || code == CLOBBER)
            sched_analyze_1 (XVECEXP (x, 0, i), insn);
          else
            sched_analyze_2 (XVECEXP (x, 0, i), insn);
  else
    sched_analyze_2 (x, insn);

  /* Mark registers CLOBBERED or used by called function.  */
  if (GET_CODE (insn) == CALL_INSN)
    for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
        if (GET_CODE (XEXP (link, 0)) == CLOBBER)
          sched_analyze_1 (XEXP (link, 0), insn);
        else
          sched_analyze_2 (XEXP (link, 0), insn);

  /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic
     block, then we must be sure that no instructions are scheduled across it.
     Otherwise, the reg_n_refs info (which depends on loop_depth) would
     become incorrect.  */

      int max_reg = max_reg_num ();
      for (i = 0; i < max_reg; i++)
          for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
            add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
          reg_last_uses[i] = 0;
          if (reg_last_sets[i])
            add_dependence (insn, reg_last_sets[i], 0);
      reg_pending_sets_all = 1;

      flush_pending_lists (insn, 0);

      while (XEXP (link, 1))
        link = XEXP (link, 1);
      XEXP (link, 1) = REG_NOTES (insn);
      REG_NOTES (insn) = loop_notes;

  /* After reload, it is possible for an instruction to have a REG_DEAD note
     for a register that actually dies a few instructions earlier.  For
     example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
     In this case, we must consider the insn to use the register mentioned
     in the REG_DEAD note.  Otherwise, we may accidentally move this insn
     after another insn that sets the register, thus getting obviously invalid
     rtl.  This confuses reorg, which believes that REG_DEAD notes are still
     meaningful.

     ??? We would get better code if we fixed reload to put the REG_DEAD
     notes in the right places, but that may not be worth the effort.  */

  if (reload_completed)
      for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
        if (REG_NOTE_KIND (note) == REG_DEAD)
          sched_analyze_2 (XEXP (note, 0), insn);

  EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
                             reg_last_sets[i] = insn;
  CLEAR_REG_SET (reg_pending_sets);

  if (reg_pending_sets_all)
      for (i = 0; i < maxreg; i++)
        reg_last_sets[i] = insn;
      reg_pending_sets_all = 0;

  /* Handle function calls and function returns created by the epilogue
     threading code.  */
  if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)

      /* When scheduling instructions, we make sure calls don't lose their
         accompanying USE insns by depending them one on another in order.

         Also, we must do the same thing for returns created by the epilogue
         threading code.  Note this code works only in this special case,
         because other passes make no guarantee that they will never emit
         an instruction between a USE and a RETURN.  There is such a guarantee
         for USE instructions immediately before a call.  */

      prev_dep_insn = insn;
      dep_insn = PREV_INSN (insn);
      while (GET_CODE (dep_insn) == INSN
             && GET_CODE (PATTERN (dep_insn)) == USE
             && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
          SCHED_GROUP_P (prev_dep_insn) = 1;

          /* Make a copy of all dependencies on dep_insn, and add to insn.
             This is so that all of the dependencies will apply to the
             group.  */

          for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
            add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

          prev_dep_insn = dep_insn;
          dep_insn = PREV_INSN (dep_insn);
/* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
   for every dependency.  */

sched_analyze (head, tail)

  register int n_insns = 0;
  register int luid = 0;

  for (insn = head; ; insn = NEXT_INSN (insn))
      INSN_LUID (insn) = luid++;

      if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
          sched_analyze_insn (PATTERN (insn), insn, loop_notes);
      else if (GET_CODE (insn) == CALL_INSN)
          /* Any instruction using a hard register which may get clobbered
             by a call needs to be marked as dependent on this call.
             This prevents a use of a hard return reg from being moved
             past a void call (i.e. it does not explicitly set the hard
             return reg).  */

          /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
             all registers, not just hard registers, may be clobbered by this
             call.  */

          /* Insn, being a CALL_INSN, magically depends on
             `last_function_call' already.  */

          if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
              && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
              int max_reg = max_reg_num ();
              for (i = 0; i < max_reg; i++)
                  for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                    add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                  reg_last_uses[i] = 0;
                  if (reg_last_sets[i])
                    add_dependence (insn, reg_last_sets[i], 0);
              reg_pending_sets_all = 1;

              /* Add a pair of fake REG_NOTEs which we will later
                 convert back into a NOTE_INSN_SETJMP note.  See
                 reemit_notes for why we use a pair of NOTEs.  */

              REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
                                          GEN_INT (0), REG_NOTES (insn));
              REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
                                          GEN_INT (NOTE_INSN_SETJMP),
                                          REG_NOTES (insn));
          else
              for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
                if (call_used_regs[i] || global_regs[i])
                    for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                    reg_last_uses[i] = 0;
                    if (reg_last_sets[i])
                      add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
                    SET_REGNO_REG_SET (reg_pending_sets, i);

          /* For each insn which shouldn't cross a call, add a dependence
             between that insn and this call insn.  */
          x = LOG_LINKS (sched_before_next_call);
              add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
          LOG_LINKS (sched_before_next_call) = 0;

          sched_analyze_insn (PATTERN (insn), insn, loop_notes);

          /* In the absence of interprocedural alias analysis, we must flush
             all pending reads and writes, and start new dependencies starting
             from here.  But only flush writes for constant calls (which may
             be passed a pointer to something we haven't written yet).  */
          flush_pending_lists (insn, CONST_CALL_P (insn));

          /* Depend this function call (actually, the user of this
             function call) on all hard register clobberage.  */
          last_function_call = insn;

      /* See comments on reemit_notes as to why we do this.  */
      else if (GET_CODE (insn) == NOTE
               && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
                   || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
                   || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
                       && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
          loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
                                GEN_INT (NOTE_BLOCK_NUMBER (insn)), loop_notes);
          loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
                                GEN_INT (NOTE_LINE_NUMBER (insn)), loop_notes);
          CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
/* Called when we see a set of a register.  If death is true, then we are
   scanning backwards.  Mark that register as unborn.  If nobody says
   otherwise, that is how things will remain.  If death is false, then we
   are scanning forwards.  Mark that register as being born.  */

sched_note_set (b, x, death)

  register rtx reg = SET_DEST (x);

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
         || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
      /* Must treat modification of just one hardware register of a multi-reg
         value or just a byte field of a register exactly the same way that
         mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
         does not kill the entire register.  */
      if (GET_CODE (reg) != SUBREG
          || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))

      reg = SUBREG_REG (reg);

  if (GET_CODE (reg) != REG)
    return;

  /* Global registers are always live, so the code below does not apply
     to them.  */

  regno = REGNO (reg);
  if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])

      /* If we only set part of the register, then this set does not
         kill it.  */

      /* Try killing this register.  */
      if (regno < FIRST_PSEUDO_REGISTER)
          int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
              SET_REGNO_REG_SET (bb_dead_regs, regno + j);

          CLEAR_REGNO_REG_SET (bb_live_regs, regno);
          SET_REGNO_REG_SET (bb_dead_regs, regno);

      /* Make the register live again.  */
      if (regno < FIRST_PSEUDO_REGISTER)
          int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              SET_REGNO_REG_SET (bb_live_regs, regno + j);
              CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);

          SET_REGNO_REG_SET (bb_live_regs, regno);
          CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queueing and dequeueing of instructions.  */

#define SCHED_SORT(READY, NEW_READY, OLD_READY) \
  do { if ((NEW_READY) - (OLD_READY) == 1) \
         swap_sort (READY, NEW_READY); \
       else if ((NEW_READY) - (OLD_READY) > 1) \
         qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
  while (0)

/* Returns a positive value if y is preferred; returns a negative value if
   x is preferred.  Should never return 0, since that will make the sort
   unstable.  */

rank_for_schedule (x, y)

  int tmp_class, tmp2_class;

  /* Choose the instruction with the highest priority, if different.  */
  if (value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2))
    return value;

  if (last_scheduled_insn)
      /* Classify the instructions into three classes:
         1) Data dependent on last schedule insn.
         2) Anti/Output dependent on last scheduled insn.
         3) Independent of last scheduled insn, or has latency of one.
         Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
      if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
        tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0) /* Data dependence.  */
        tmp_class = 1;
      else
        tmp_class = 2;

      link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
      if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
        tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0) /* Data dependence.  */
        tmp2_class = 1;
      else
        tmp2_class = 2;

      if (value = tmp_class - tmp2_class)
        return value;

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
/* Resort the array A in which only element at index N may be out of order.  */

__inline static void

  while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)

static int max_priority;

/* Add INSN to the insn queue so that it fires at least N_CYCLES
   before the currently executing insn.  */

__inline static void
queue_insn (insn, n_cycles)

  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  NEXT_INSN (insn) = insn_queue[next_q];
  insn_queue[next_q] = insn;
/* Return nonzero if PAT is the pattern of an insn which makes a
   register live.  */

birthing_insn_p (pat)

  if (reload_completed == 1)
    return 0;

  if (GET_CODE (pat) == SET
      && GET_CODE (SET_DEST (pat)) == REG)
      rtx dest = SET_DEST (pat);
      int i = REGNO (dest);

      /* It would be more accurate to use refers_to_regno_p or
         reg_mentioned_p to determine when the dest is not live before this
         insn.  */

      if (REGNO_REG_SET_P (bb_live_regs, i))
        return (REG_N_SETS (i) == 1);

  if (GET_CODE (pat) == PARALLEL)
      for (j = 0; j < XVECLEN (pat, 0); j++)
        if (birthing_insn_p (XVECEXP (pat, 0, j)))
          return 1;

/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten register lifetimes.  */

__inline static void
adjust_priority (prev)

  /* Trying to shorten register lives after reload has completed
     is useless and wrong.  It gives inaccurate schedules.  */
  if (reload_completed == 0)

      /* ??? This code has no effect, because REG_DEAD notes are removed
         before we ever get here.  */
      for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
        if (REG_NOTE_KIND (note) == REG_DEAD)

      /* Defer scheduling insns which kill registers, since that
         shortens register lives.  Prefer scheduling insns which
         make registers live for the same reason.  */
          INSN_PRIORITY (prev) >>= 3;
          INSN_PRIORITY (prev) >>= 2;
          INSN_PRIORITY (prev) >>= 1;
          if (birthing_insn_p (PATTERN (prev)))
              int max = max_priority;

              if (max > INSN_PRIORITY (prev))
                INSN_PRIORITY (prev) = max;

#ifdef ADJUST_PRIORITY
  ADJUST_PRIORITY (prev);
#endif
/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN (in the backwards dataflow sense).  READY is a
   vector of insns which are ready to fire.  N_READY is the number of
   elements in READY.  CLOCK is the current virtual cycle.  */

schedule_insn (insn, ready, n_ready, clock)

  int new_ready = n_ready;

  if (MAX_BLOCKAGE > 1)
    schedule_unit (insn_unit (insn), insn, clock);

  if (LOG_LINKS (insn) == 0)
    return n_ready;

  /* This is used by the function adjust_priority above.  */
  if (n_ready > 0)
    max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
  else
    max_priority = INSN_PRIORITY (insn);

  for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
      rtx prev = XEXP (link, 0);
      int cost = insn_cost (prev, link, insn);

      if ((INSN_REF_COUNT (prev) -= 1) != 0)
          /* We satisfied one requirement to fire PREV.  Record the earliest
             time when PREV can fire.  No need to do this if the cost is 1,
             because PREV can fire no sooner than the next cycle.  */
            INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
      else
          /* We satisfied the last requirement to fire PREV.  Ensure that all
             timing requirements are satisfied.  */
          if (INSN_TICK (prev) - clock > cost)
            cost = INSN_TICK (prev) - clock;

          /* Adjust the priority of PREV and either put it on the ready
             list or queue it.  */
          adjust_priority (prev);
          if (cost <= 1)
            ready[new_ready++] = prev;
          else
            queue_insn (prev, cost);

  return new_ready;
/* Given N_READY insns in the ready list READY at time CLOCK, queue
   those that are blocked due to function unit hazards and rearrange
   the remaining ones to minimize subsequent function unit hazards.  */

schedule_select (ready, n_ready, clock, file)

  int pri = INSN_PRIORITY (ready[0]);
  int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;

  /* Work down the ready list in groups of instructions with the same
     priority value.  Queue insns in the group that are blocked and
     select among those that remain for the one with the largest
     potential hazard.  */
  for (i = 0; i < n_ready; i = j)
      for (j = i + 1; j < n_ready; j++)
        if ((pri = INSN_PRIORITY (ready[j])) != opri)
          break;

      /* Queue insns in the group that are blocked.  */
      for (k = i, q = 0; k < j; k++)
          if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
              queue_insn (insn, cost);
              if (file)
                fprintf (file, "\n;; blocking insn %d for %d cycles",
                         INSN_UID (insn), cost);

      /* Check the next group if all insns were queued.  */

      /* If more than one remains, select the first one with the largest
         potential hazard.  */
      else if (j - i - q > 1)
          for (k = i; k < j; k++)
              if ((insn = ready[k]) == 0)
                continue;
              if ((cost = potential_hazard (insn_unit (insn), insn, 0))
                  > best_cost)
                  best_cost = cost;
                  best_insn = k;

      /* We have found a suitable insn to schedule.  */

  /* Move the best insn to be front of the ready list.  */
      if (file)
          fprintf (file, ", now");
          for (i = 0; i < n_ready; i++)
            if (ready[i])
              fprintf (file, " %d", INSN_UID (ready[i]));
          fprintf (file, "\n;; insn %d has a greater potential hazard",
                   INSN_UID (ready[best_insn]));
      insn = ready[best_insn];
      for (i = best_insn; i > 0; i--)
        ready[i] = ready[i-1];
      ready[0] = insn;

  /* Compact the ready list.  */
  if (new_ready < n_ready)
    for (i = j = 0; i < n_ready; i++)
      if (ready[i])
        ready[j++] = ready[i];

  return new_ready;
/* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
   dead_notes list.  */

create_reg_dead_note (reg, insn)

  /* The number of registers killed after scheduling must be the same as the
     number of registers killed before scheduling.  The number of REG_DEAD
     notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
     might become one DImode hard register REG_DEAD note, but the number of
     registers killed will be conserved.

     We carefully remove REG_DEAD notes from the dead_notes list, so that
     there will be none left at the end.  If we run out early, then there
     is a bug somewhere in flow, combine and/or sched.  */

  if (dead_notes == 0)
      link = rtx_alloc (EXPR_LIST);
      PUT_REG_NOTE_KIND (link, REG_DEAD);
  else
      /* Number of regs killed by REG.  */
      int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
                         : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
      /* Number of regs killed by REG_DEAD notes taken off the list.  */
      int reg_note_regs;

      link = dead_notes;
      reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
                       : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
                                           GET_MODE (XEXP (link, 0))));
      while (reg_note_regs < regs_killed)
          link = XEXP (link, 1);
          reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
                            : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
                                                GET_MODE (XEXP (link, 0))));
      dead_notes = XEXP (link, 1);

      /* If we took too many register kills off, put the extra ones back.  */
      while (reg_note_regs > regs_killed)
          rtx temp_reg, temp_link;

          temp_reg = gen_rtx (REG, word_mode, 0);
          temp_link = rtx_alloc (EXPR_LIST);
          PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
          XEXP (temp_link, 0) = temp_reg;
          XEXP (temp_link, 1) = dead_notes;
          dead_notes = temp_link;
          reg_note_regs--;

  XEXP (link, 0) = reg;
  XEXP (link, 1) = REG_NOTES (insn);
  REG_NOTES (insn) = link;
/* Subroutine of attach_deaths_insn--handles the recursive search
   through INSN.  If SET_P is true, then x is being modified by the insn.  */

attach_deaths (x, insn, set_p)

  register enum rtx_code code;

  code = GET_CODE (x);

      /* Get rid of the easy cases first.  */

      /* If the register dies in this insn, queue that note, and mark
         this register as needing to die.  */
      /* This code is very similar to mark_used_1 (if set_p is false)
         and mark_set_1 (if set_p is true) in flow.c.  */

      all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
      if (regno < FIRST_PSEUDO_REGISTER)
          n = HARD_REGNO_NREGS (regno, GET_MODE (x));
              int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
              some_needed |= needed;
              all_needed &= needed;

      /* If it wasn't live before we started, then add a REG_DEAD note.
         We must check the previous lifetime info not the current info,
         because we may have to execute this code several times, e.g.
         once for a clobber (which doesn't add a note) and later
         for a use (which does add a note).

         Always make the register live.  We must do this even if it was
         live before, because this may be an insn which sets and uses
         the same register, in which case the register has already been
         killed, so we must make it live again.

         Global registers are always live, and should never have a REG_DEAD
         note added for them, so none of the code below applies to them.  */

      if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
          /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
             STACK_POINTER_REGNUM, since these are always considered to be
             live.  Similarly for ARG_POINTER_REGNUM if it is fixed.  */
          if (regno != FRAME_POINTER_REGNUM
#if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
              && ! (regno == HARD_FRAME_POINTER_REGNUM)
#endif
#if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
              && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
#endif
              && regno != STACK_POINTER_REGNUM)
              /* ??? It is perhaps a dead_or_set_p bug that it does
                 not check for REG_UNUSED notes itself.  This is necessary
                 for the case where the SET_DEST is a subreg of regno, as
                 dead_or_set_p handles subregs specially.  */
              if (! all_needed && ! dead_or_set_p (insn, x)
                  && ! find_reg_note (insn, REG_UNUSED, x))
                  /* Check for the case where the register dying partially
                     overlaps the register set by this insn.  */
                  if (regno < FIRST_PSEUDO_REGISTER
                      && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
                      int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
                        some_needed |= dead_or_set_regno_p (insn, regno + n);

                  /* If none of the words in X is needed, make a REG_DEAD
                     note.  Otherwise, we must make partial REG_DEAD
                     notes.  */
                  if (! some_needed)
                    create_reg_dead_note (x, insn);
                  else
                      /* Don't make a REG_DEAD note for a part of a
                         register that is set in the insn.  */
                      for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
                           i >= 0; i--)
                        if (! REGNO_REG_SET_P (old_live_regs, regno + i)
                            && ! dead_or_set_regno_p (insn, regno + i))
                          create_reg_dead_note (gen_rtx (REG,
                                                         reg_raw_mode[regno + i],
                                                         regno + i),
                                                insn);

          if (regno < FIRST_PSEUDO_REGISTER)
              int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
                  CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
                  SET_REGNO_REG_SET (bb_live_regs, regno + j);

              CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
              SET_REGNO_REG_SET (bb_live_regs, regno);

      /* Handle tail-recursive case.  */
      attach_deaths (XEXP (x, 0), insn, 0);

    case STRICT_LOW_PART:
2392 /* These two cases preserve the value of SET_P, so handle them
2394 attach_deaths (XEXP (x, 0), insn, set_p);
2399 /* This case preserves the value of SET_P for the first operand, but
2400 clears it for the other two. */
2401 attach_deaths (XEXP (x, 0), insn, set_p);
2402 attach_deaths (XEXP (x, 1), insn, 0);
2403 attach_deaths (XEXP (x, 2), insn, 0);
2407 /* Other cases: walk the insn. */
2408 fmt = GET_RTX_FORMAT (code);
2409 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2412 attach_deaths (XEXP (x, i), insn, 0);
2413 else if (fmt[i] == 'E')
2414 for (j = 0; j < XVECLEN (x, i); j++)
2415 attach_deaths (XVECEXP (x, i, j), insn, 0);
2420 /* After INSN has executed, add register death notes for each register
2421 that is dead after INSN. */
2424 attach_deaths_insn (insn)
2427 rtx x = PATTERN (insn);
2428 register RTX_CODE code = GET_CODE (x);
2433 attach_deaths (SET_SRC (x), insn, 0);
2435 /* A register might die here even if it is the destination, e.g.
2436 it is the target of a volatile read and is otherwise unused.
2437 Hence we must always call attach_deaths for the SET_DEST. */
2438 attach_deaths (SET_DEST (x), insn, 1);
2440 else if (code == PARALLEL)
2443 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2445 code = GET_CODE (XVECEXP (x, 0, i));
2448 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
2450 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
2452 /* Flow does not add REG_DEAD notes to registers that die in
2453 clobbers, so we can't either. */
2454 else if (code != CLOBBER)
2455 attach_deaths (XVECEXP (x, 0, i), insn, 0);
2458 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
2459 MEM being clobbered, just like flow. */
2460 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
2461 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
2462 /* Otherwise don't add a death note to things being clobbered. */
2463 else if (code != CLOBBER)
2464 attach_deaths (x, insn, 0);
2466 /* Make death notes for things used in the called function. */
2467 if (GET_CODE (insn) == CALL_INSN)
2468 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2469 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
2470 GET_CODE (XEXP (link, 0)) == CLOBBER);
2473 /* Delete notes beginning with INSN and maybe put them in the chain
2474 of notes ended by NOTE_LIST.
2475 Returns the insn following the notes. */
2478 unlink_notes (insn, tail)
2481 rtx prev = PREV_INSN (insn);
2483 while (insn != tail && GET_CODE (insn) == NOTE)
2485 rtx next = NEXT_INSN (insn);
2486 /* Delete the note from its current position. */
2488 NEXT_INSN (prev) = next;
2490 PREV_INSN (next) = prev;
2492 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
2493 /* Record line-number notes so they can be reused. */
2494 LINE_NOTE (insn) = insn;
2496 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
2497 immediately after the call they follow. We use a fake
2498 (REG_DEAD (const_int -1)) note to remember them.
2499    Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG,END}.  */
2500 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
2501 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
2502 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
2503 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
2504 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
2506 /* Insert the note at the end of the notes list. */
2507 PREV_INSN (insn) = note_list;
2509 NEXT_INSN (note_list) = insn;
2518 /* Constructor for `sometimes' data structure. */
2521 new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
2522 struct sometimes *regs_sometimes_live;
2526 register struct sometimes *p;
2528 /* There should never be a register greater than max_regno here. If there
2529 is, it means that a define_split has created a new pseudo reg. This
2530 is not allowed, since there will not be flow info available for any
2531 new register, so catch the error here. */
2532 if (regno >= max_regno)
2535   p = &regs_sometimes_live[sometimes_max];
2538 p->calls_crossed = 0;
2540 return sometimes_max;
2543 /* Count lengths of all regs we are currently tracking,
2544    and find registers that are no longer live.  */
2547 finish_sometimes_live (regs_sometimes_live, sometimes_max)
2548 struct sometimes *regs_sometimes_live;
2553 for (i = 0; i < sometimes_max; i++)
2555       register struct sometimes *p = &regs_sometimes_live[i];
2556 int regno = p->regno;
2558 sched_reg_live_length[regno] += p->live_length;
2559 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
2563 /* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
2564 NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
2565    NOTEs.  The REG_DEAD note following the first one contains the saved
2566 value for NOTE_BLOCK_NUMBER which is useful for
2567 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. LAST is the last instruction
2568 output by the instruction scheduler. Return the new value of LAST. */
2571 reemit_notes (insn, last)
2577 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2579 if (REG_NOTE_KIND (note) == REG_DEAD
2580 && GET_CODE (XEXP (note, 0)) == CONST_INT)
2582 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
2584 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
2585 = CONST_CALL_P (note);
2586 remove_note (insn, note);
2587 note = XEXP (note, 1);
2591 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
2592 remove_note (insn, note);
2593 note = XEXP (note, 1);
2594 NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
2596 remove_note (insn, note);
2602 /* Use modified list scheduling to rearrange insns in basic block
2603 B. FILE, if nonzero, is where we dump interesting output about
2607 schedule_block (b, file)
2613 int i, j, n_ready = 0, new_ready, n_insns;
2614 int sched_n_insns = 0;
2616 #define NEED_NOTHING 0
2621 /* HEAD and TAIL delimit the region being scheduled. */
2622 rtx head = basic_block_head[b];
2623 rtx tail = basic_block_end[b];
2624 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
2625 being scheduled. When the insns have been ordered,
2626 these insns delimit where the new insns are to be
2627 spliced back into the insn chain. */
2631 /* Keep life information accurate. */
2632 register struct sometimes *regs_sometimes_live;
2636 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
2637 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
2640 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
2641 bzero ((char *) reg_last_uses, i * sizeof (rtx));
2642 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
2643 bzero ((char *) reg_last_sets, i * sizeof (rtx));
2644 reg_pending_sets = ALLOCA_REG_SET ();
2645 CLEAR_REG_SET (reg_pending_sets);
2646 reg_pending_sets_all = 0;
2649 /* Remove certain insns at the beginning from scheduling,
2650 by advancing HEAD. */
2652 /* At the start of a function, before reload has run, don't delay getting
2653 parameters from hard registers into pseudo registers. */
2654 if (reload_completed == 0 && b == 0)
2657 && GET_CODE (head) == NOTE
2658 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
2659 head = NEXT_INSN (head);
2661 && GET_CODE (head) == INSN
2662 && GET_CODE (PATTERN (head)) == SET)
2664 rtx src = SET_SRC (PATTERN (head));
2665 while (GET_CODE (src) == SUBREG
2666 || GET_CODE (src) == SIGN_EXTEND
2667 || GET_CODE (src) == ZERO_EXTEND
2668 || GET_CODE (src) == SIGN_EXTRACT
2669 || GET_CODE (src) == ZERO_EXTRACT)
2670 src = XEXP (src, 0);
2671 if (GET_CODE (src) != REG
2672 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
2674 /* Keep this insn from ever being scheduled. */
2675 INSN_REF_COUNT (head) = 1;
2676 head = NEXT_INSN (head);
2680 /* Don't include any notes or labels at the beginning of the
2681 basic block, or notes at the ends of basic blocks. */
2682 while (head != tail)
2684 if (GET_CODE (head) == NOTE)
2685 head = NEXT_INSN (head);
2686 else if (GET_CODE (tail) == NOTE)
2687 tail = PREV_INSN (tail);
2688 else if (GET_CODE (head) == CODE_LABEL)
2689 head = NEXT_INSN (head);
2692 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
2693 to schedule this block. */
2695 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
2699 /* This short-cut doesn't work. It does not count call insns crossed by
2700 registers in reg_sometimes_live. It does not mark these registers as
2701 dead if they die in this block. It does not mark these registers live
2702 (or create new reg_sometimes_live entries if necessary) if they are born
2705 The easy solution is to just always schedule a block. This block only
2706 has one insn, so this won't slow down this pass by much. */
2712 /* Now HEAD through TAIL are the insns actually to be rearranged;
2713 Let PREV_HEAD and NEXT_TAIL enclose them. */
2714 prev_head = PREV_INSN (head);
2715 next_tail = NEXT_INSN (tail);
2717 /* Initialize basic block data structures. */
2719 pending_read_insns = 0;
2720 pending_read_mems = 0;
2721 pending_write_insns = 0;
2722 pending_write_mems = 0;
2723 pending_lists_length = 0;
2724 last_pending_memory_flush = 0;
2725 last_function_call = 0;
2726 last_scheduled_insn = 0;
2728 LOG_LINKS (sched_before_next_call) = 0;
2730 n_insns = sched_analyze (head, tail);
2733 free_pending_lists ();
2737 /* Allocate vector to hold insns to be rearranged (except those
2738 insns which are controlled by an insn with SCHED_GROUP_P set).
2739 All these insns are included between ORIG_HEAD and ORIG_TAIL,
2740 as those variables ultimately are set up. */
2741 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
2743 /* TAIL is now the last of the insns to be rearranged.
2744 Put those insns into the READY vector. */
2747 /* For all branches, calls, uses, and cc0 setters, force them to remain
2748 in order at the end of the block by adding dependencies and giving
2749 the last a high priority. There may be notes present, and prev_head
2752 Branches must obviously remain at the end. Calls should remain at the
2753 end since moving them results in worse register allocation. Uses remain
2754    at the end to ensure proper register allocation.  cc0 setters remain
2755 at the end because they can't be moved away from their cc0 user. */
2757 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
2758 || (GET_CODE (insn) == INSN
2759 && (GET_CODE (PATTERN (insn)) == USE
2761 || sets_cc0_p (PATTERN (insn))
2764 || GET_CODE (insn) == NOTE)
2766 if (GET_CODE (insn) != NOTE)
2771 ready[n_ready++] = insn;
2772 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
2773 INSN_REF_COUNT (insn) = 0;
2775 else if (! find_insn_list (insn, LOG_LINKS (last)))
2777 add_dependence (last, insn, REG_DEP_ANTI);
2778 INSN_REF_COUNT (insn)++;
2782 /* Skip over insns that are part of a group. */
2783 while (SCHED_GROUP_P (insn))
2785 insn = prev_nonnote_insn (insn);
2790 insn = PREV_INSN (insn);
2791 /* Don't overrun the bounds of the basic block. */
2792 if (insn == prev_head)
2796 /* Assign priorities to instructions. Also check whether they
2797 are in priority order already. If so then I will be nonnegative.
2798 We use this shortcut only before reloading. */
2800 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
2803 for (; insn != prev_head; insn = PREV_INSN (insn))
2805 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2808 if (INSN_REF_COUNT (insn) == 0)
2811 ready[n_ready++] = insn;
2814 /* Make this dependent on the last of the instructions
2815 that must remain in order at the end of the block. */
2816 add_dependence (last, insn, REG_DEP_ANTI);
2817 INSN_REF_COUNT (insn) = 1;
2820 if (SCHED_GROUP_P (insn))
2822 while (SCHED_GROUP_P (insn))
2824 insn = prev_nonnote_insn (insn);
2832 if (INSN_PRIORITY (insn) < i)
2833 i = INSN_PRIORITY (insn);
2834 else if (INSN_PRIORITY (insn) > i)
2841 /* This short-cut doesn't work. It does not count call insns crossed by
2842 registers in reg_sometimes_live. It does not mark these registers as
2843 dead if they die in this block. It does not mark these registers live
2844 (or create new reg_sometimes_live entries if necessary) if they are born
2847 The easy solution is to just always schedule a block. These blocks tend
2848 to be very short, so this doesn't slow down this pass by much. */
2850 /* If existing order is good, don't bother to reorder. */
2851 if (i != DONE_PRIORITY)
2854 fprintf (file, ";; already scheduled\n");
2856 if (reload_completed == 0)
2858 for (i = 0; i < sometimes_max; i++)
2859 regs_sometimes_live[i].live_length += n_insns;
2861 finish_sometimes_live (regs_sometimes_live, sometimes_max);
2863 free_pending_lists ();
2868 /* Scan all the insns to be scheduled, removing NOTE insns
2869 and register death notes.
2870 Line number NOTE insns end up in NOTE_LIST.
2871 Register death notes end up in DEAD_NOTES.
2873 Recreate the register life information for the end of this basic
2876 if (reload_completed == 0)
2878 COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
2879 CLEAR_REG_SET (bb_dead_regs);
2883 /* This is the first block in the function. There may be insns
2884 before head that we can't schedule. We still need to examine
2885 them though for accurate register lifetime analysis. */
2887 /* We don't want to remove any REG_DEAD notes as the code below
2890 for (insn = basic_block_head[b]; insn != head;
2891 insn = NEXT_INSN (insn))
2892 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2894 /* See if the register gets born here. */
2895 /* We must check for registers being born before we check for
2896 registers dying. It is possible for a register to be born
2897 and die in the same insn, e.g. reading from a volatile
2898 memory location into an otherwise unused register. Such
2899 a register must be marked as dead after this insn. */
2900 if (GET_CODE (PATTERN (insn)) == SET
2901 || GET_CODE (PATTERN (insn)) == CLOBBER)
2902 sched_note_set (b, PATTERN (insn), 0);
2903 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
2906 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2907 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
2908 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
2909 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2911 /* ??? This code is obsolete and should be deleted. It
2912 is harmless though, so we will leave it in for now. */
2913 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2914 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
2915 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2918 /* Each call clobbers (makes live) all call-clobbered regs
2919 that are not global or fixed. Note that the function-value
2920 reg is a call_clobbered reg. */
2922 if (GET_CODE (insn) == CALL_INSN)
2925 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
2926 if (call_used_regs[j] && ! global_regs[j]
2929 SET_REGNO_REG_SET (bb_live_regs, j);
2930 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
2934 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
2936 if ((REG_NOTE_KIND (link) == REG_DEAD
2937 || REG_NOTE_KIND (link) == REG_UNUSED)
2938 /* Verify that the REG_NOTE has a valid value. */
2939 && GET_CODE (XEXP (link, 0)) == REG)
2941 register int regno = REGNO (XEXP (link, 0));
2943 if (regno < FIRST_PSEUDO_REGISTER)
2945 int j = HARD_REGNO_NREGS (regno,
2946 GET_MODE (XEXP (link, 0)));
2949 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
2950 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
2955 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
2956 SET_REGNO_REG_SET (bb_dead_regs, regno);
2964 /* If debugging information is being produced, keep track of the line
2965 number notes for each insn. */
2966 if (write_symbols != NO_DEBUG)
2968 /* We must use the true line number for the first insn in the block
2969 that was computed and saved at the start of this pass. We can't
2970 use the current line number, because scheduling of the previous
2971 block may have changed the current line number. */
2972 rtx line = line_note_head[b];
2974 for (insn = basic_block_head[b];
2976 insn = NEXT_INSN (insn))
2977 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
2980 LINE_NOTE (insn) = line;
2983 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
2985 rtx prev, next, link;
2987 /* Farm out notes. This is needed to keep the debugger from
2988 getting completely deranged. */
2989 if (GET_CODE (insn) == NOTE)
2992 insn = unlink_notes (insn, next_tail);
2997 if (insn == next_tail)
3001 if (reload_completed == 0
3002 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3004 /* See if the register gets born here. */
3005 /* We must check for registers being born before we check for
3006 registers dying. It is possible for a register to be born and
3007 die in the same insn, e.g. reading from a volatile memory
3008 location into an otherwise unused register. Such a register
3009 must be marked as dead after this insn. */
3010 if (GET_CODE (PATTERN (insn)) == SET
3011 || GET_CODE (PATTERN (insn)) == CLOBBER)
3012 sched_note_set (b, PATTERN (insn), 0);
3013 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3016 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3017 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3018 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3019 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3021 /* ??? This code is obsolete and should be deleted. It
3022 is harmless though, so we will leave it in for now. */
3023 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3024 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3025 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3028 /* Each call clobbers (makes live) all call-clobbered regs that are
3029 not global or fixed. Note that the function-value reg is a
3030 call_clobbered reg. */
3032 if (GET_CODE (insn) == CALL_INSN)
3035 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3036 if (call_used_regs[j] && ! global_regs[j]
3039 SET_REGNO_REG_SET (bb_live_regs, j);
3040 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3044 /* Need to know what registers this insn kills. */
3045 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3047 next = XEXP (link, 1);
3048 if ((REG_NOTE_KIND (link) == REG_DEAD
3049 || REG_NOTE_KIND (link) == REG_UNUSED)
3050 /* Verify that the REG_NOTE has a valid value. */
3051 && GET_CODE (XEXP (link, 0)) == REG)
3053 register int regno = REGNO (XEXP (link, 0));
3055 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3057 if (REG_NOTE_KIND (link) == REG_DEAD)
3060 XEXP (prev, 1) = next;
3062 REG_NOTES (insn) = next;
3063 XEXP (link, 1) = dead_notes;
3069 if (regno < FIRST_PSEUDO_REGISTER)
3071 int j = HARD_REGNO_NREGS (regno,
3072 GET_MODE (XEXP (link, 0)));
3075 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3076 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3081 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3082 SET_REGNO_REG_SET (bb_dead_regs, regno);
3091 if (reload_completed == 0)
3093 /* Keep track of register lives. */
3094 old_live_regs = ALLOCA_REG_SET ();
3096 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3099 /* Start with registers live at end. */
3100 COPY_REG_SET (old_live_regs, bb_live_regs);
3101 EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
3104 = new_sometimes_live (regs_sometimes_live,
3109 SCHED_SORT (ready, n_ready, 1);
3113 fprintf (file, ";; ready list initially:\n;; ");
3114 for (i = 0; i < n_ready; i++)
3115 fprintf (file, "%d ", INSN_UID (ready[i]));
3116 fprintf (file, "\n\n");
3118 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3119 if (INSN_PRIORITY (insn) > 0)
3120 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3121 INSN_UID (insn), INSN_PRIORITY (insn),
3122 INSN_REF_COUNT (insn));
3125 /* Now HEAD and TAIL are going to become disconnected
3126 entirely from the insn chain. */
3129 /* Q_SIZE will always be zero here. */
3130 q_ptr = 0; clock = 0;
3131 bzero ((char *) insn_queue, sizeof (insn_queue));
3133 /* Now, perform list scheduling. */
3135 /* Where we start inserting insns is after TAIL. */
3138 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3139 ? NEED_HEAD : NEED_NOTHING);
3140 if (PREV_INSN (next_tail) == basic_block_end[b])
3141 new_needs |= NEED_TAIL;
3143 new_ready = n_ready;
3144 while (sched_n_insns < n_insns)
3146 q_ptr = NEXT_Q (q_ptr); clock++;
3148 /* Add all pending insns that can be scheduled without stalls to the
3150 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3153 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3154 INSN_UID (insn), INSN_UID (last), clock);
3155 ready[new_ready++] = insn;
3158 insn_queue[q_ptr] = 0;
3160 /* If there are no ready insns, stall until one is ready and add all
3161 of the pending insns at that point to the ready list. */
3164 register int stalls;
3166 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3167 	  if ((insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]) != 0)
3169 for (; insn; insn = NEXT_INSN (insn))
3172 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3173 INSN_UID (insn), INSN_UID (last), stalls, clock);
3174 ready[new_ready++] = insn;
3177 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3181 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3184 /* There should be some instructions waiting to fire. */
3190 fprintf (file, ";; ready list at T-%d:", clock);
3191 for (i = 0; i < new_ready; i++)
3192 fprintf (file, " %d (%x)",
3193 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3196 /* Sort the ready list and choose the best insn to schedule. Select
3197 which insn should issue in this cycle and queue those that are
3198 blocked by function unit hazards.
3200 N_READY holds the number of items that were scheduled the last time,
3201 minus the one instruction scheduled on the last loop iteration; it
3202 is not modified for any other reason in this loop. */
3204 SCHED_SORT (ready, new_ready, n_ready);
3205 if (MAX_BLOCKAGE > 1)
3207 new_ready = schedule_select (ready, new_ready, clock, file);
3211 fprintf (file, "\n");
3212 /* We must set n_ready here, to ensure that sorting always
3213 occurs when we come back to the SCHED_SORT line above. */
3218 n_ready = new_ready;
3219 last_scheduled_insn = insn = ready[0];
3221 /* The first insn scheduled becomes the new tail. */
3227 fprintf (file, ", now");
3228 for (i = 0; i < n_ready; i++)
3229 fprintf (file, " %d", INSN_UID (ready[i]));
3230 fprintf (file, "\n");
3233 if (DONE_PRIORITY_P (insn))
3236 if (reload_completed == 0)
3238 /* Process this insn, and each insn linked to this one which must
3239 be immediately output after this insn. */
3242 /* First we kill registers set by this insn, and then we
3243 make registers used by this insn live. This is the opposite
3244 order used above because we are traversing the instructions
3247 /* Strictly speaking, we should scan REG_UNUSED notes and make
3248 every register mentioned there live, however, we will just
3249 kill them again immediately below, so there doesn't seem to
3250 be any reason why we bother to do this. */
3252 /* See if this is the last notice we must take of a register. */
3253 if (GET_CODE (PATTERN (insn)) == SET
3254 || GET_CODE (PATTERN (insn)) == CLOBBER)
3255 sched_note_set (b, PATTERN (insn), 1);
3256 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3259 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3260 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3261 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3262 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3265 /* This code keeps life analysis information up to date. */
3266 if (GET_CODE (insn) == CALL_INSN)
3268 register struct sometimes *p;
3270 /* A call kills all call used registers that are not
3271 global or fixed, except for those mentioned in the call
3272 pattern which will be made live again later. */
3273 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3274 if (call_used_regs[i] && ! global_regs[i]
3277 CLEAR_REGNO_REG_SET (bb_live_regs, i);
3278 SET_REGNO_REG_SET (bb_dead_regs, i);
3281 /* Regs live at the time of a call instruction must not
3282 go in a register clobbered by calls. Record this for
3283 all regs now live. Note that insns which are born or
3284 die in a call do not cross a call, so this must be done
3285 after the killings (above) and before the births
3287 p = regs_sometimes_live;
3288 for (i = 0; i < sometimes_max; i++, p++)
3289 if (REGNO_REG_SET_P (bb_live_regs, p->regno))
3290 p->calls_crossed += 1;
3293 /* Make every register used live, and add REG_DEAD notes for
3294 registers which were not live before we started. */
3295 attach_deaths_insn (insn);
3297 /* Find registers now made live by that instruction. */
3298 EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, i,
3301 = new_sometimes_live (regs_sometimes_live,
3304 IOR_REG_SET (old_live_regs, bb_live_regs);
3306 /* Count lengths of all regs we are worrying about now,
3307 and handle registers no longer live. */
3309 for (i = 0; i < sometimes_max; i++)
3311 	      register struct sometimes *p = &regs_sometimes_live[i];
3312 int regno = p->regno;
3314 p->live_length += 1;
3316 if (!REGNO_REG_SET_P (bb_live_regs, p->regno))
3318 /* This is the end of one of this register's lifetime
3319 segments. Save the lifetime info collected so far,
3320 and clear its bit in the old_live_regs entry. */
3321 sched_reg_live_length[regno] += p->live_length;
3322 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3323 CLEAR_REGNO_REG_SET (old_live_regs, p->regno);
3325 /* Delete the reg_sometimes_live entry for this reg by
3326 copying the last entry over top of it. */
3327 *p = regs_sometimes_live[--sometimes_max];
3328 /* ...and decrement i so that this newly copied entry
3329 will be processed. */
3335 insn = PREV_INSN (insn);
3337 while (SCHED_GROUP_P (link));
3339 /* Set INSN back to the insn we are scheduling now. */
3343 /* Schedule INSN. Remove it from the ready list. */
3348 NEXT_INSN (insn) = last;
3349 PREV_INSN (last) = insn;
3351 /* Everything that precedes INSN now either becomes "ready", if
3352 it can execute immediately before INSN, or "pending", if
3353 there must be a delay. Give INSN high enough priority that
3354 at least one (maybe more) reg-killing insns can be launched
3355 ahead of all others. Mark INSN as scheduled by changing its
3357 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3358 new_ready = schedule_insn (insn, ready, n_ready, clock);
3359 INSN_PRIORITY (insn) = DONE_PRIORITY;
3361 /* Schedule all prior insns that must not be moved. */
3362 if (SCHED_GROUP_P (insn))
3364 /* Disable these insns from being launched, in case one of the
3365 insns in the group has a dependency on an earlier one. */
3367 while (SCHED_GROUP_P (link))
3369 /* Disable these insns from being launched by anybody. */
3370 link = PREV_INSN (link);
3371 INSN_REF_COUNT (link) = 0;
3374 /* Now handle each group insn like the main insn was handled
3377 while (SCHED_GROUP_P (link))
3379 link = PREV_INSN (link);
3383 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3384 new_ready = schedule_insn (link, ready, new_ready, clock);
3385 INSN_PRIORITY (link) = DONE_PRIORITY;
3389 /* Put back NOTE_INSN_SETJMP,
3390 	 NOTE_INSN_{LOOP,EHREGION}_{BEG,END} notes.  */
3392 /* To prime the loop. We need to handle INSN and all the insns in the
3394 last = NEXT_INSN (insn);
3397 insn = PREV_INSN (last);
3399 /* Maintain a valid chain so emit_note_before works.
3400 This is necessary because PREV_INSN (insn) isn't valid
3401 (if ! SCHED_GROUP_P) and if it points to an insn already
3402 scheduled, a circularity will result. */
3403 if (! SCHED_GROUP_P (insn))
3405 NEXT_INSN (prev_head) = insn;
3406 PREV_INSN (insn) = prev_head;
3409 last = reemit_notes (insn, insn);
3411 while (SCHED_GROUP_P (insn));
3416 if (reload_completed == 0)
3417 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3419   /* HEAD is now the first insn in the chain of insns that have
3420      been scheduled by the loop above.
3421 TAIL is the last of those insns. */
3424 /* NOTE_LIST is the end of a chain of notes previously found
3425 among the insns. Insert them at the beginning of the insns. */
rtx note_head = note_list;
while (PREV_INSN (note_head))
  note_head = PREV_INSN (note_head);

PREV_INSN (head) = note_list;
NEXT_INSN (note_list) = head;

/* There should be no REG_DEAD notes leftover at the end.
   In practice, this can occur as the result of bugs in flow, combine.c,
   and/or sched.c.  The values of the REG_DEAD notes remaining are
   meaningless, because dead_notes is just used as a free list.  */

if (dead_notes != 0)

if (new_needs & NEED_HEAD)
  basic_block_head[b] = head;
PREV_INSN (head) = prev_head;
NEXT_INSN (prev_head) = head;

if (new_needs & NEED_TAIL)
  basic_block_end[b] = tail;
NEXT_INSN (tail) = next_tail;
PREV_INSN (next_tail) = tail;

/* Restore the line-number notes of each insn.  */
if (write_symbols != NO_DEBUG)

rtx line, note, prev, new;

head = basic_block_head[b];
next_tail = NEXT_INSN (basic_block_end[b]);

/* Determine the current line-number.  We want to know the current
   line number of the first insn of the block here, in case it is
   different from the true line number that was saved earlier.  If
   different, then we need a line number note before the first insn
   of this block.  If it happens to be the same, then we don't want to
   emit another line number note here.  */
for (line = head; line; line = PREV_INSN (line))
  if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)

/* Walk the insns keeping track of the current line-number and inserting
   the line-number notes as needed.  */
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
  if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)

/* This used to emit line number notes before every non-deleted note.
   However, this confuses a debugger, because line notes not separated
   by real instructions all end up at the same address.  I can find no
   use for line number notes before other notes, so none are emitted.  */
else if (GET_CODE (insn) != NOTE
         && (note = LINE_NOTE (insn)) != 0
             || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
             || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))

prev = PREV_INSN (insn);
if (LINE_NOTE (note))

/* Re-use the original line-number note.  */
LINE_NOTE (note) = 0;
PREV_INSN (note) = prev;
NEXT_INSN (prev) = note;
PREV_INSN (insn) = note;
NEXT_INSN (note) = insn;

new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);

fprintf (file, ";; added %d line-number notes\n", notes);

fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
         clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));

/* Yow!  We're done!  */
free_pending_lists ();

FREE_REG_SET (reg_pending_sets);
FREE_REG_SET (old_live_regs);

/* Subroutine of split_hard_reg_notes.  Searches X for any reference to
   REGNO, returning the rtx of the reference found if any.  Otherwise,

regno_use_in (regno, x)

if (GET_CODE (x) == REG && REGNO (x) == regno)

fmt = GET_RTX_FORMAT (GET_CODE (x));
for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)

if (tem = regno_use_in (regno, XEXP (x, i)))

else if (fmt[i] == 'E')
  for (j = XVECLEN (x, i) - 1; j >= 0; j--)
    if (tem = regno_use_in (regno, XVECEXP (x, i, j)))
/* Subroutine of update_flow_info.  Determines whether any new REG_NOTEs are
   needed for the hard register mentioned in the note.  This can happen
   if the reference to the hard register in the original insn was split into
   several smaller hard register references in the split insns.  */

split_hard_reg_notes (note, first, last, orig_insn)
     rtx note, first, last, orig_insn;

rtx reg, temp, link;
int n_regs, i, new_reg;

/* Assume that this is a REG_DEAD note.  */
if (REG_NOTE_KIND (note) != REG_DEAD)

reg = XEXP (note, 0);

n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));

for (i = 0; i < n_regs; i++)

new_reg = REGNO (reg) + i;

/* Check for references to new_reg in the split insns.  */
for (insn = last; ; insn = PREV_INSN (insn))

if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
    && (temp = regno_use_in (new_reg, PATTERN (insn))))

/* Create a new reg dead note here.  */
link = rtx_alloc (EXPR_LIST);
PUT_REG_NOTE_KIND (link, REG_DEAD);
XEXP (link, 0) = temp;
XEXP (link, 1) = REG_NOTES (insn);
REG_NOTES (insn) = link;

/* If killed multiple registers here, then add in the excess.  */
i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;

/* It isn't mentioned anywhere, so no new reg note is needed for

/* Subroutine of update_flow_info.  Determines whether a SET or CLOBBER in an
   insn created by splitting needs a REG_DEAD or REG_UNUSED note added.  */

new_insn_dead_notes (pat, insn, last, orig_insn)
     rtx pat, insn, last, orig_insn;

/* PAT is either a CLOBBER or a SET here.  */
dest = XEXP (pat, 0);

while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
       || GET_CODE (dest) == STRICT_LOW_PART
       || GET_CODE (dest) == SIGN_EXTRACT)
  dest = XEXP (dest, 0);

if (GET_CODE (dest) == REG)

for (tem = last; tem != insn; tem = PREV_INSN (tem))

if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
    && reg_overlap_mentioned_p (dest, PATTERN (tem))
    && (set = single_set (tem)))

rtx tem_dest = SET_DEST (set);

while (GET_CODE (tem_dest) == ZERO_EXTRACT
       || GET_CODE (tem_dest) == SUBREG
       || GET_CODE (tem_dest) == STRICT_LOW_PART
       || GET_CODE (tem_dest) == SIGN_EXTRACT)
  tem_dest = XEXP (tem_dest, 0);

if (! rtx_equal_p (tem_dest, dest))

/* Use the same scheme as combine.c, don't put both REG_DEAD
   and REG_UNUSED notes on the same insn.  */
if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
    && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))

rtx note = rtx_alloc (EXPR_LIST);
PUT_REG_NOTE_KIND (note, REG_DEAD);
XEXP (note, 0) = dest;
XEXP (note, 1) = REG_NOTES (tem);
REG_NOTES (tem) = note;

/* The reg only dies in one insn, the last one that uses

else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
  /* We found an instruction that both uses the register,
     and sets it, so no new REG_NOTE is needed for this set.  */

/* If this is a set, it must die somewhere, unless it is the dest of
   the original insn, and hence is live after the original insn.  Abort
   if it isn't supposed to be live after the original insn.

   If this is a clobber, then just add a REG_UNUSED note.  */

int live_after_orig_insn = 0;
rtx pattern = PATTERN (orig_insn);

if (GET_CODE (pat) == CLOBBER)

rtx note = rtx_alloc (EXPR_LIST);
PUT_REG_NOTE_KIND (note, REG_UNUSED);
XEXP (note, 0) = dest;
XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;

/* The original insn could have multiple sets, so search the
   insn for all sets.  */
if (GET_CODE (pattern) == SET)

if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
  live_after_orig_insn = 1;

else if (GET_CODE (pattern) == PARALLEL)

for (i = 0; i < XVECLEN (pattern, 0); i++)
  if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
      && reg_overlap_mentioned_p (dest,
                                  SET_DEST (XVECEXP (pattern,
    live_after_orig_insn = 1;

if (! live_after_orig_insn)

/* Subroutine of update_flow_info.  Update the value of reg_n_sets for all
   registers modified by X.  INC is -1 if the containing insn is being deleted,
   and is 1 if the containing insn is a newly generated insn.  */

update_n_sets (x, inc)

rtx dest = SET_DEST (x);

while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
       || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
  dest = SUBREG_REG (dest);

if (GET_CODE (dest) == REG)

int regno = REGNO (dest);

if (regno < FIRST_PSEUDO_REGISTER)

int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));

for (i = regno; i < endregno; i++)
  REG_N_SETS (i) += inc;

REG_N_SETS (regno) += inc;
/* Updates all flow-analysis related quantities (including REG_NOTES) for
   the insns from FIRST to LAST inclusive that were created by splitting
   ORIG_INSN.  NOTES are the original REG_NOTES.  */

update_flow_info (notes, first, last, orig_insn)

rtx orig_dest, temp;

/* Get and save the destination set by the original insn.  */

orig_dest = single_set (orig_insn);

orig_dest = SET_DEST (orig_dest);

/* Move REG_NOTES from the original insn to where they now belong.  */

for (note = notes; note; note = next)

next = XEXP (note, 1);
switch (REG_NOTE_KIND (note))

/* Move these notes from the original insn to the last new insn where
   the register is now set.  */

for (insn = last; ; insn = PREV_INSN (insn))

if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
    && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))

/* If this note refers to a multiple word hard register, it
   may have been split into several smaller hard register
   references, so handle it specially.  */
temp = XEXP (note, 0);
if (REG_NOTE_KIND (note) == REG_DEAD
    && GET_CODE (temp) == REG
    && REGNO (temp) < FIRST_PSEUDO_REGISTER
    && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
  split_hard_reg_notes (note, first, last, orig_insn);

XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;

/* Sometimes need to convert REG_UNUSED notes to REG_DEAD
/* ??? This won't handle multiple word registers correctly,
   but should be good enough for now.  */
if (REG_NOTE_KIND (note) == REG_UNUSED
    && ! dead_or_set_p (insn, XEXP (note, 0)))
  PUT_REG_NOTE_KIND (note, REG_DEAD);

/* The reg only dies in one insn, the last one that uses
/* It must die somewhere, fail if we couldn't find where it died.

   If this is a REG_UNUSED note, then it must be a temporary
   register that was not needed by this instantiation of the
   pattern, so we can safely ignore it.  */
/* After reload, REG_DEAD notes sometimes come an
   instruction after the register actually dies.  */
if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)

XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;

if (REG_NOTE_KIND (note) != REG_UNUSED)

/* This note applies to the dest of the original insn.  Find the
   first new insn that now has the same dest, and move the note

for (insn = first; ; insn = NEXT_INSN (insn))

if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
    && (temp = single_set (insn))
    && rtx_equal_p (SET_DEST (temp), orig_dest))

XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;
/* The reg is only zero before one insn, the first that

/* If this note refers to a multiple word hard
   register, it may have been split into several smaller
   hard register references.  We could split the notes,
   but simply dropping them is good enough.  */
if (GET_CODE (orig_dest) == REG
    && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
    && HARD_REGNO_NREGS (REGNO (orig_dest),
                         GET_MODE (orig_dest)) > 1)

/* It must be set somewhere, fail if we couldn't find where it

/* A REG_EQUIV or REG_EQUAL note on an insn with more than one
   set is meaningless.  Just drop the note.  */

case REG_NO_CONFLICT:
/* These notes apply to the dest of the original insn.  Find the last
   new insn that now has the same dest, and move the note there.  */

for (insn = last; ; insn = PREV_INSN (insn))

if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
    && (temp = single_set (insn))
    && rtx_equal_p (SET_DEST (temp), orig_dest))

XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;
/* Only put this note on one of the new insns.  */

/* The original dest must still be set someplace.  Abort if we
   couldn't find it.  */

/* However, if this note refers to a multiple word hard
   register, it may have been split into several smaller
   hard register references.  We could split the notes,
   but simply dropping them is good enough.  */
if (GET_CODE (orig_dest) == REG
    && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
    && HARD_REGNO_NREGS (REGNO (orig_dest),
                         GET_MODE (orig_dest)) > 1)

/* Likewise for multi-word memory references.  */
if (GET_CODE (orig_dest) == MEM
    && SIZE_FOR_MODE (orig_dest) > MOVE_MAX)

/* Move a REG_LIBCALL note to the first insn created, and update
   the corresponding REG_RETVAL note.  */
XEXP (note, 1) = REG_NOTES (first);
REG_NOTES (first) = note;

insn = XEXP (note, 0);
note = find_reg_note (insn, REG_RETVAL, NULL_RTX);

XEXP (note, 0) = first;

case REG_EXEC_COUNT:
/* Move a REG_EXEC_COUNT note to the first insn created.  */
XEXP (note, 1) = REG_NOTES (first);
REG_NOTES (first) = note;

/* Move a REG_RETVAL note to the last insn created, and update
   the corresponding REG_LIBCALL note.  */
XEXP (note, 1) = REG_NOTES (last);
REG_NOTES (last) = note;

insn = XEXP (note, 0);
note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);

XEXP (note, 0) = last;

/* This should be moved to whichever instruction is a JUMP_INSN.  */

for (insn = last; ; insn = PREV_INSN (insn))

if (GET_CODE (insn) == JUMP_INSN)

XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;
/* Only put this note on one of the new insns.  */

/* Fail if we couldn't find a JUMP_INSN.  */

/* reload sometimes leaves obsolete REG_INC notes around.  */
if (reload_completed)

/* This should be moved to whichever instruction now has the
   increment operation.  */

/* Should be moved to the new insn(s) which use the label.  */
for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
  if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
      && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
    REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_LABEL,
                                XEXP (note, 0), REG_NOTES (insn));

/* These two notes will never appear until after reorg, so we don't
   have to handle them here.  */

/* Each new insn created, except the last, has a new set.  If the destination
   is a register, then this reg is now live across several insns, whereas
   previously the dest reg was born and died within the same insn.  To
   reflect this, we now need a REG_DEAD note on the insn where this

   Similarly, the new insns may have clobbers that need REG_UNUSED notes.  */

for (insn = first; insn != last; insn = NEXT_INSN (insn))

pat = PATTERN (insn);
if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
  new_insn_dead_notes (pat, insn, last, orig_insn);
else if (GET_CODE (pat) == PARALLEL)

for (i = 0; i < XVECLEN (pat, 0); i++)
  if (GET_CODE (XVECEXP (pat, 0, i)) == SET
      || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
    new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);

/* If any insn, except the last, uses the register set by the last insn,
   then we need a new REG_DEAD note on that insn.  In this case, there
   would not have been a REG_DEAD note for this register in the original
   insn because it was used and set within one insn.  */

set = single_set (last);

rtx dest = SET_DEST (set);

while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
       || GET_CODE (dest) == STRICT_LOW_PART
       || GET_CODE (dest) == SIGN_EXTRACT)
  dest = XEXP (dest, 0);

if (GET_CODE (dest) == REG
    /* Global registers are always live, so the code below does not
    && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
        || ! global_regs[REGNO (dest)]))

rtx stop_insn = PREV_INSN (first);

/* If the last insn uses the register that it is setting, then
   we don't want to put a REG_DEAD note there.  Search backwards
   to find the first insn that sets but does not use DEST.  */

if (reg_overlap_mentioned_p (dest, SET_SRC (set)))

for (insn = PREV_INSN (insn); insn != first;
     insn = PREV_INSN (insn))

if ((set = single_set (insn))
    && reg_mentioned_p (dest, SET_DEST (set))
    && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))

/* Now find the first insn that uses but does not set DEST.  */

for (insn = PREV_INSN (insn); insn != stop_insn;
     insn = PREV_INSN (insn))

if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
    && reg_mentioned_p (dest, PATTERN (insn))
    && (set = single_set (insn)))

rtx insn_dest = SET_DEST (set);

while (GET_CODE (insn_dest) == ZERO_EXTRACT
       || GET_CODE (insn_dest) == SUBREG
       || GET_CODE (insn_dest) == STRICT_LOW_PART
       || GET_CODE (insn_dest) == SIGN_EXTRACT)
  insn_dest = XEXP (insn_dest, 0);

if (insn_dest != dest)

note = rtx_alloc (EXPR_LIST);
PUT_REG_NOTE_KIND (note, REG_DEAD);
XEXP (note, 0) = dest;
XEXP (note, 1) = REG_NOTES (insn);
REG_NOTES (insn) = note;
/* The reg only dies in one insn, the last one

/* If the original dest is modifying a multiple register target, and the
   original instruction was split such that the original dest is now set
   by two or more SUBREG sets, then the split insns no longer kill the
   destination of the original insn.

   In this case, if there exists an instruction in the same basic block,
   before the split insn, which uses the original dest, and this use is
   killed by the original insn, then we must remove the REG_DEAD note on
   this insn, because it is now superfluous.

   This does not apply when a hard register gets split, because the code
   knows how to handle overlapping hard registers properly.  */
if (orig_dest && GET_CODE (orig_dest) == REG)

int found_orig_dest = 0;
int found_split_dest = 0;

for (insn = first; ; insn = NEXT_INSN (insn))

/* I'm not sure if this can happen, but let's be safe.  */
if (GET_RTX_CLASS (GET_CODE (insn)) != 'i')

pat = PATTERN (insn);
i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;

if (GET_CODE (set) == SET)

if (GET_CODE (SET_DEST (set)) == REG
    && REGNO (SET_DEST (set)) == REGNO (orig_dest))

found_orig_dest = 1;

else if (GET_CODE (SET_DEST (set)) == SUBREG
         && SUBREG_REG (SET_DEST (set)) == orig_dest)

found_split_dest = 1;

set = XVECEXP (pat, 0, i);

if (found_split_dest)

/* Search backwards from FIRST, looking for the first insn that uses
   the original dest.  Stop if we pass a CODE_LABEL or a JUMP_INSN.
   If we find an insn, and it has a REG_DEAD note, then delete the

for (insn = first; insn; insn = PREV_INSN (insn))

if (GET_CODE (insn) == CODE_LABEL
    || GET_CODE (insn) == JUMP_INSN)

else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
         && reg_mentioned_p (orig_dest, insn))

note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));

remove_note (insn, note);

else if (! found_orig_dest)

/* This should never happen.  */

/* Update reg_n_sets.  This is necessary to prevent local alloc from
   converting REG_EQUAL notes to REG_EQUIV when splitting has modified
   a reg from set once to set multiple times.  */

rtx x = PATTERN (orig_insn);
RTX_CODE code = GET_CODE (x);

if (code == SET || code == CLOBBER)
  update_n_sets (x, -1);
else if (code == PARALLEL)

for (i = XVECLEN (x, 0) - 1; i >= 0; i--)

code = GET_CODE (XVECEXP (x, 0, i));
if (code == SET || code == CLOBBER)
  update_n_sets (XVECEXP (x, 0, i), -1);

for (insn = first; ; insn = NEXT_INSN (insn))

code = GET_CODE (x);

if (code == SET || code == CLOBBER)
  update_n_sets (x, 1);
else if (code == PARALLEL)

for (i = XVECLEN (x, 0) - 1; i >= 0; i--)

code = GET_CODE (XVECEXP (x, 0, i));
if (code == SET || code == CLOBBER)
  update_n_sets (XVECEXP (x, 0, i), 1);

/* The one entry point in this file.  DUMP_FILE is the dump file for

schedule_insns (dump_file)

int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);

/* Taking care of this degenerate case makes the rest of
   this code simpler.  */
if (n_basic_blocks == 0)

/* Create an insn here so that we can hang dependencies off of it later.  */
sched_before_next_call
  = gen_rtx (INSN, VOIDmode, 0, NULL_RTX, NULL_RTX,
             NULL_RTX, 0, NULL_RTX, 0);

/* Initialize the unused_*_lists.  We can't use the ones left over from
   the previous function, because gcc has freed that memory.  We can use
   the ones left over from the first sched pass in the second pass however,
   so only clear them on the first sched pass.  The first pass is before
   reload if flag_schedule_insns is set, otherwise it is afterwards.  */

if (reload_completed == 0 || ! flag_schedule_insns)

unused_insn_list = 0;
unused_expr_list = 0;

/* We create no insns here, only reorder them, so we
   remember how far we can cut back the stack on exit.  */

/* Allocate data for this pass.  See comments, above,
   for what these vectors do.  */
insn_luid = (int *) alloca (max_uid * sizeof (int));
insn_priority = (int *) alloca (max_uid * sizeof (int));
insn_tick = (int *) alloca (max_uid * sizeof (int));
insn_costs = (short *) alloca (max_uid * sizeof (short));
insn_units = (short *) alloca (max_uid * sizeof (short));
insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
insn_ref_count = (int *) alloca (max_uid * sizeof (int));

if (reload_completed == 0)

sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
bb_dead_regs = ALLOCA_REG_SET ();
bb_live_regs = ALLOCA_REG_SET ();
bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));

sched_reg_n_calls_crossed = 0;
sched_reg_live_length = 0;

init_alias_analysis ();

if (write_symbols != NO_DEBUG)

line_note = (rtx *) alloca (max_uid * sizeof (rtx));
bzero ((char *) line_note, max_uid * sizeof (rtx));
line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));

/* Determine the line-number at the start of each basic block.
   This must be computed and saved now, because after a basic block's
   predecessor has been scheduled, it is impossible to accurately
   determine the correct line number for the first insn of the block.  */

for (b = 0; b < n_basic_blocks; b++)
  for (line = basic_block_head[b]; line; line = PREV_INSN (line))
    if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)

line_note_head[b] = line;

bzero ((char *) insn_luid, max_uid * sizeof (int));
bzero ((char *) insn_priority, max_uid * sizeof (int));
bzero ((char *) insn_tick, max_uid * sizeof (int));
bzero ((char *) insn_costs, max_uid * sizeof (short));
bzero ((char *) insn_units, max_uid * sizeof (short));
bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
bzero ((char *) insn_ref_count, max_uid * sizeof (int));

/* Schedule each basic block, block by block.  */

/* ??? Add a NOTE after the last insn of the last basic block.  It is not
   known why this is done.  */
/* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a

insn = basic_block_end[n_basic_blocks-1];
if (NEXT_INSN (insn) == 0
    || (GET_CODE (insn) != NOTE
        && GET_CODE (insn) != CODE_LABEL
        /* Don't emit a NOTE if it would end up between an unconditional
           jump and a BARRIER.  */
        && ! (GET_CODE (insn) == JUMP_INSN
              && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
  emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);

for (b = 0; b < n_basic_blocks; b++)

for (insn = basic_block_head[b]; ; insn = next)

/* Can't use `next_real_insn' because that
   might go across CODE_LABELS and short-out basic blocks.  */
next = NEXT_INSN (insn);
if (GET_CODE (insn) != INSN)

if (insn == basic_block_end[b])

/* Don't split no-op move insns.  These should silently disappear
   later in final.  Splitting such insns would break the code
   that handles REG_NO_CONFLICT blocks.  */
set = single_set (insn);
if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))

if (insn == basic_block_end[b])
/* Nops get in the way while scheduling, so delete them now if
   register allocation has already been done.  It is too risky
   to try to do this before register allocation, and there are
   unlikely to be very many nops then anyway.  */
if (reload_completed)
PUT_CODE (insn, NOTE);
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
NOTE_SOURCE_FILE (insn) = 0;

/* Split insns here to get max fine-grain parallelism.  */
prev = PREV_INSN (insn);
/* It is probably not worthwhile to try to split again in the
   second pass.  However, if flag_schedule_insns is not set,
   the first and only (if any) scheduling pass is after reload.  */
if (reload_completed == 0 || ! flag_schedule_insns)

rtx last, first = PREV_INSN (insn);
rtx notes = REG_NOTES (insn);

last = try_split (PATTERN (insn), insn, 1);

/* try_split returns the NOTE that INSN became.  */
first = NEXT_INSN (first);
update_flow_info (notes, first, last, insn);

PUT_CODE (insn, NOTE);
NOTE_SOURCE_FILE (insn) = 0;
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
if (insn == basic_block_head[b])
  basic_block_head[b] = first;
if (insn == basic_block_end[b])

basic_block_end[b] = last;

if (insn == basic_block_end[b])

schedule_block (b, dump_file);

/* Reposition the prologue and epilogue notes in case we moved the
   prologue/epilogue insns.  */
if (reload_completed)
  reposition_prologue_and_epilogue_notes (get_insns ());

if (write_symbols != NO_DEBUG)

rtx insn = get_insns ();
int active_insn = 0;

/* Walk the insns deleting redundant line-number notes.  Many of these
   are already present.  The remainder tend to occur at basic
   block boundaries.  */
for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
  if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)

/* If there are no active insns following, INSN is redundant.  */
if (active_insn == 0)

NOTE_SOURCE_FILE (insn) = 0;
NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;

/* If the line number is unchanged, LINE is redundant.  */
&& NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
&& NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))

NOTE_SOURCE_FILE (line) = 0;
NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;

else if (! ((GET_CODE (insn) == NOTE
             && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
            || (GET_CODE (insn) == INSN
                && (GET_CODE (PATTERN (insn)) == USE
                    || GET_CODE (PATTERN (insn)) == CLOBBER))))

if (dump_file && notes)
  fprintf (dump_file, ";; deleted %d line-number notes\n", notes);

if (reload_completed == 0)

for (regno = 0; regno < max_regno; regno++)
  if (sched_reg_live_length[regno])

if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])

";; register %d life shortened from %d to %d\n",
regno, REG_LIVE_LENGTH (regno),
sched_reg_live_length[regno]);
/* Negative values are special; don't overwrite the current
   reg_live_length value if it is negative.  */
else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
         && REG_LIVE_LENGTH (regno) >= 0)

";; register %d life extended from %d to %d\n",
regno, REG_LIVE_LENGTH (regno),
sched_reg_live_length[regno]);

if (! REG_N_CALLS_CROSSED (regno)
    && sched_reg_n_calls_crossed[regno])

";; register %d now crosses calls\n", regno);
else if (REG_N_CALLS_CROSSED (regno)
         && ! sched_reg_n_calls_crossed[regno]
         && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)

";; register %d no longer crosses calls\n", regno);

/* Negative values are special; don't overwrite the current
   reg_live_length value if it is negative.  */
if (REG_LIVE_LENGTH (regno) >= 0)
  REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];

/* We can't change the value of reg_n_calls_crossed to zero for
   pseudos which are live in more than one block.

   This is because combine might have made an optimization which
   invalidated basic_block_live_at_start and reg_n_calls_crossed,
   but it does not update them.  If we update reg_n_calls_crossed
   here, the two variables are now inconsistent, and this might
   confuse the caller-save code into saving a register that doesn't
   need to be saved.  This is only a problem when we zero calls
   crossed for a pseudo live in multiple basic blocks.

   Alternatively, we could try to correctly update basic block live
   at start here in sched, but that seems complicated.  */
if (sched_reg_n_calls_crossed[regno]
    || REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
  REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];

if (reload_completed == 0)

FREE_REG_SET (bb_dead_regs);
FREE_REG_SET (bb_live_regs);

#endif /* INSN_SCHEDULING */