* doc/invoke.texi (-fvar-tracking-assignments): New.
(-fvar-tracking-assignments-toggle): New.
(-fdump-final-insns=file): Mark filename as optional.
(--param min-nondebug-insn-uid): New.
(-gdwarf-@var{version}): Mention version 4.
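(Illustrative usage, not from the patch: with the options above one
might compile
    gcc -O2 -g -fcompare-debug -c foo.c
to verify that -g does not change code generation,
    gcc -O2 -g -fvar-tracking-assignments -c foo.c
to enable the new debug-binding machinery explicitly, or
    gcc -O2 -fdump-final-insns=foo.gkd --param min-nondebug-insn-uid=10000 -c foo.c
to dump the final insn stream while keeping nondebug insn uids above
the reserved range.)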
* opts.c (common_handle_option): Accept it.
* tree-vrp.c (find_assert_locations_1): Skip debug stmts.
* regrename.c (regrename_optimize): Drop last. Don't count debug
insns as uses. Don't reject change because of debug insn.
(do_replace): Reject DEBUG_INSN as chain starter. Take base_regno
from the chain starter, and check for inexact matches in
DEBUG_INSNS.
(scan_rtx_reg): Accept inexact matches in DEBUG_INSNs.
(build_def_use): Simplify and fix the marking of DEBUG_INSNs.
* sched-ebb.c (schedule_ebbs): Skip boundary debug insns.
* fwprop.c (forward_propagate_and_simplify): ...into debug insns.
* doc/gimple.texi (is_gimple_debug): New.
(gimple_debug_bind_p): New.
(is_gimple_call, gimple_assign_cast_p): End sentence with period.
* doc/install.texi (bootstrap-debug): More details.
(bootstrap-debug-big, bootstrap-debug-lean): Document.
(bootstrap-debug-lib): More details.
(bootstrap-debug-ckovw): Update.
(bootstrap-time): New.
* tree-into-ssa.c (mark_def_sites): Skip debug stmts.
(insert_phi_nodes_for): Insert debug stmts.
(rewrite_stmt): Take iterator. Insert debug stmts.
(rewrite_enter_block): Adjust.
(maybe_replace_use_in_debug_stmt): New.
(rewrite_update_stmt): Use it.
(mark_use_interesting): Return early for debug stmts.
* tree-ssa-loop-im.c (rewrite_bittest): Propagate DEFs into debug
stmts before replacing stmt.
(move_computations_stmt): Likewise.
* ira-conflicts.c (add_copies): Skip debug insns.
* regstat.c (regstat_init_n_sets_and_refs): Discount debug insns.
(regstat_bb_compute_ri): Skip debug insns.
* tree-ssa-threadupdate.c (redirection_block_p): Skip debug stmts.
* tree-ssa-loop-manip.c (find_uses_to_rename_stmt,
check_loop_closed_ssa_stmt): Skip debug stmts.
* tree-tailcall.c (find_tail_calls): Likewise.
* tree-ssa-loop-ch.c (should_duplicate_loop_header_p): Likewise.
* tree.h (MAY_HAVE_DEBUG_STMTS): New.
(build_var_debug_value_stat): Declare.
(build_var_debug_value): Define.
(target_for_debug_bind): Declare.
* reload.c (find_equiv_reg): Skip debug insns.
* rtlanal.c (reg_used_between_p): Skip debug insns.
(side_effects_p): Likewise.
(canonicalize_condition): Likewise.
* ddg.c (create_ddg_dep_from_intra_loop_link): Check that non-debug
insns never depend on debug insns.
(create_ddg_dep_no_link): Likewise.
(add_cross_iteration_register_deps): Use ANTI_DEP for debug insns.
Don't add inter-loop dependencies for debug insns.
(build_intra_loop_deps): Likewise.
(create_ddg): Count debug insns.
* ddg.h (struct ddg::num_debug): New.
(num_backargs): Pair up with previous int field.
* diagnostic.c (diagnostic_report_diagnostic): Skip notes on
-fcompare-debug-second.
* final.c (get_attr_length_1): Skip debug insns.
(rest_of_clean_state): Don't dump CFA_RESTORE_STATE.
* gcc.c (invoke_as): Call compare-debug-dump-opt.
(driver_self_specs): Map -fdump-final-insns to
-fdump-final-insns=..
(get_local_tick): New.
(compare_debug_dump_opt_spec_function): Test for . argument and
compute output name. Compute temp output spec without flag name.
Compute -frandom-seed.
(OPT): Undef after use.
* cfgloopanal.c (num_loop_insns): Skip debug insns.
(average_num_loop_insns): Likewise.
* params.h (MIN_NONDEBUG_INSN_UID): New.
* gimple.def (GIMPLE_DEBUG): New.
* ipa-reference.c (scan_stmt_for_static_refs): Skip debug stmts.
* auto-inc-dec.c (merge_in_block): Skip debug insns.
(merge_in_block): Fix whitespace.
* toplev.c (flag_var_tracking): Update comment.
(flag_var_tracking_assignments): New.
(flag_var_tracking_assignments_toggle): New.
(process_options): Don't open final insns dump file if we're not
going to write to it. Compute defaults for var_tracking.
* df-scan.c (df_insn_rescan_debug_internal): New.
(df_uses_record): Handle debug insns.
* haifa-sched.c (ready): Initialize n_debug.
(contributes_to_priority): Skip debug insns.
(dep_list_size): New.
(priority): Use it.
(rank_for_schedule): Likewise. Schedule debug insns as soon as
they're ready. Disregard previous debug insns to make decisions.
(queue_insn): Never queue debug insns.
(ready_add, ready_remove_first, ready_remove): Count debug insns.
(schedule_insn): Don't reject debug insns because of issue rate.
(get_ebb_head_tail, no_real_insns_p): Skip boundary debug insns.
(queue_to_ready): Skip and discount debug insns.
(choose_ready): Let debug insns through.
(schedule_block): Check boundary debug insns. Discount debug
insns, schedule them early. Adjust whitespace.
(set_priorities): Check for boundary debug insns.
(add_jump_dependencies): Use dep_list_size.
(prev_non_location_insn): New.
(check_cfg): Use it.
* tree-ssa-loop-ivopts.c (find_interesting_uses): Skip debug
stmts.
(remove_unused_ivs): Reset debug stmts.
* modulo-sched.c (const_iteration_count): Skip debug insns.
(res_MII): Discount debug insns.
(loop_single_full_bb_p): Skip debug insns.
(sms_schedule): Likewise.
(sms_schedule_by_order): Likewise.
(ps_has_conflicts): Likewise.
* caller-save.c (refmarker_fn): New.
(save_call_clobbered_regs): Replace regs with saved mem in
debug insns.
(mark_referenced_regs): Take pointer, mark and arg. Adjust.
Call refmarker_fn mark for hardregnos.
(mark_reg_as_referenced): New.
(replace_reg_with_saved_mem): New.
* ipa-pure-const.c (check_stmt): Skip debug stmts.
* cse.c (cse_insn): Canonicalize debug insns. Skip them when
searching back.
(cse_extended_basic_block): Skip debug insns.
(count_reg_usage): Likewise.
(is_dead_reg): New, split out of...
(set_live_p): ... here.
(insn_live_p): Use it for debug insns.
* tree-stdarg.c (check_all_va_list_escapes): Skip debug stmts.
(execute_optimize_stdarg): Likewise.
* tree-ssa-dom.c (propagate_rhs_into_lhs): Likewise.
* tree-ssa-propagate.c (substitute_and_fold): Don't regard
changes in debug stmts as changes.
* sel-sched.c (moving_insn_creates_bookkeeping_block_p): New.
(moveup_expr): Don't move across debug insns. Don't move
debug insn if it would create a bookkeeping block.
(moveup_expr_cached): Don't use cache for debug insns that
are heads of blocks.
(compute_av_set_inside_bb): Skip debug insns.
(sel_rank_for_schedule): Schedule debug insns first. Remove
dead code.
(block_valid_for_bookkeeping_p): Support lax searches.
(create_block_for_bookkeeping): Adjust block numbers when
encountering debug-only blocks.
(find_place_for_bookkeeping): Deal with debug-only blocks.
(generate_bookkeeping_insn): Accept no place to insert.
(remove_temp_moveop_nops): New argument full_tidying.
(prepare_place_to_insert): Deal with debug insns.
(advance_state_on_fence): Debug insns don't start cycles.
(update_boundaries): Take fence as argument. Deal with
debug insns.
(schedule_expr_on_boundary): No full_tidying on debug insns.
(fill_insns): Deal with debug insns.
(track_scheduled_insns_and_blocks): Don't count debug insns.
(need_nop_to_preserve_insn_bb): New, split out of...
(remove_insn_from_stream): ... this.
(fur_orig_expr_not_found): Skip debug insns.
* rtl.def (VALUE): Move up.
(DEBUG_INSN): New.
* tree-ssa-sink.c (all_immediate_uses_same_place): Skip debug
stmts.
(nearest_common_dominator_of_uses): Take debug_stmts argument.
Set it if debug stmts are found.
(statement_sink_location): Skip debug stmts. Propagate
moving defs into debug stmts.
* ifcvt.c (first_active_insn): Skip debug insns.
(last_active_insns): Likewise.
(cond_exec_process_insns): Likewise.
(noce_process_if_block): Likewise.
(check_cond_move_block): Likewise.
(cond_move_convert_if_block): Likewise.
(block_jumps_and_fallthru_p): Likewise.
(dead_or_predicable): Likewise.
* dwarf2out.c (debug_str_hash_forced): New.
(find_AT_string): Add comment.
(gen_label_for_indirect_string): New.
(get_debug_string_label): New.
(AT_string_form): Use it.
(mem_loc_descriptor): Handle non-TLS symbols. Handle MINUS, DIV,
MOD, AND, IOR, XOR, NOT, ABS, NEG, and CONST_STRING. Accept but
discard COMPARE, IF_THEN_ELSE, ROTATE, ROTATERT, TRUNCATE and
several operations that cannot be represented with DWARF opcodes.
(loc_descriptor): Ignore SIGN_EXTEND and ZERO_EXTEND. Require
dwarf_version 4 for DW_OP_implicit_value and DW_OP_stack_value.
(dwarf2out_var_location): Take during-call mark into account.
(output_indirect_string): Update comment. Output if there are
label and references.
(prune_indirect_string): New.
(prune_unused_types): Call it if debug_str_hash_forced.
More in dwarf2out.c, from Jakub Jelinek <jakub@redhat.com>:
(dw_long_long_const): Remove.
(struct dw_val_struct): Change val_long_long type to rtx.
(print_die, attr_checksum, same_dw_val_p, loc_descriptor): Adjust for
val_long_long change to CONST_DOUBLE rtx from a long hi/lo pair.
(output_die): Likewise. Use HOST_BITS_PER_WIDE_INT size of each
component instead of HOST_BITS_PER_LONG.
(output_loc_operands): Likewise. For const8* assert
HOST_BITS_PER_WIDE_INT rather than HOST_BITS_PER_LONG is >= 64.
(output_loc_operands_raw): For const8* assert HOST_BITS_PER_WIDE_INT
rather than HOST_BITS_PER_LONG is >= 64.
(add_AT_long_long): Remove val_hi and val_lo arguments, add
val_const_double.
(size_of_die): Use HOST_BITS_PER_WIDE_INT size multiplier instead of
HOST_BITS_PER_LONG for dw_val_class_long_long.
(add_const_value_attribute): Adjust add_AT_long_long caller. Don't
handle TLS SYMBOL_REFs. If CONST wraps a constant, tail recurse.
(dwarf_stack_op_name): Handle DW_OP_implicit_value and
DW_OP_stack_value.
(size_of_loc_descr, output_loc_operands, output_loc_operands_raw):
Handle DW_OP_implicit_value.
(extract_int): Move prototype earlier.
(mem_loc_descriptor): For SUBREG punt if inner
mode size is wider than DWARF2_ADDR_SIZE. Handle SIGN_EXTEND
and ZERO_EXTEND by DW_OP_shl and DW_OP_shr{a,}. Handle
EQ, NE, GT, GE, LT, LE, GTU, GEU, LTU, LEU, SMIN, SMAX, UMIN,
UMAX, SIGN_EXTRACT, ZERO_EXTRACT.
(loc_descriptor): Compare mode size with DWARF2_ADDR_SIZE
instead of Pmode size.
(loc_descriptor): Add MODE argument. Handle CONST_INT, CONST_DOUBLE,
CONST_VECTOR, CONST, LABEL_REF and SYMBOL_REF if mode != VOIDmode,
attempt to handle other expressions. Don't handle TLS SYMBOL_REFs.
(concat_loc_descriptor, concatn_loc_descriptor,
loc_descriptor_from_tree_1): Adjust loc_descriptor callers.
(add_location_or_const_value_attribute): Likewise. For single
location loc_lists attempt to use add_const_value_attribute
for constant decls. Add DW_AT_const_value even if
NOTE_VAR_LOCATION is VAR_LOCATION with CONSTANT_P or CONST_STRING
in its expression.
* cfgbuild.c (inside_basic_block_p): Handle debug insns.
(control_flow_insn_p): Likewise.
* tree-parloops.c (eliminate_local_variables_stmt): Handle debug
stmt.
(separate_decls_in_region_debug_bind): New.
(separate_decls_in_region): Process debug bind stmts afterwards.
* recog.c (verify_changes): Handle debug insns.
(extract_insn): Likewise.
(peephole2_optimize): Skip debug insns.
* dse.c (scan_insn): Skip debug insns.
* sel-sched-ir.c (return_nop_to_pool): Take full_tidying argument.
Pass it on.
(setup_id_for_insn): Handle debug insns.
(maybe_tidy_empty_bb): Adjust whitespace.
(tidy_control_flow): Skip debug insns.
(sel_remove_insn): Adjust for debug insns.
(sel_estimate_number_of_insns): Skip debug insns.
(create_insn_rtx_from_pattern): Handle debug insns.
(create_copy_of_insn_rtx): Likewise.
* sel-sched-ir.h (sel_bb_end): Declare.
(sel_bb_empty_or_nop_p): New.
(get_all_loop_exits): Use it.
(_eligible_successor_edge_p): Likewise.
(return_nop_to_pool): Adjust.
* tree-eh.c (tree_empty_eh_handler_p): Skip debug stmts.
* ira-lives.c (process_bb_node_lives): Skip debug insns.
* gimple-pretty-print.c (dump_gimple_debug): New.
(dump_gimple_stmt): Use it.
(dump_bb_header): Skip gimple debug stmts.
* regmove.c (optimize_reg_copy_1): Discount debug insns.
(fixup_match_2): Likewise.
(regmove_backward_pass): Likewise. Simplify combined
replacement. Handle debug insns.
* function.c (instantiate_virtual_regs): Handle debug insns.
* function.h (struct emit_status): Add x_cur_debug_insn_uid.
* print-rtl.h: Include cselib.h.
(print_rtx): Print VALUEs. Split out and recurse for
VAR_LOCATIONs.
* df.h (df_insn_rescan_debug_internal): Declare.
* gcse.c (alloc_hash_table): Estimate n_insns.
(cprop_insn): Don't regard debug insns as changes.
(bypass_conditional_jumps): Skip debug insns.
(one_pre_gcse_pass): Adjust.
(one_code_hoisting_pass): Likewise.
(compute_ld_motion_mems): Skip debug insns.
(one_cprop_pass): Adjust.
* tree-if-conv.c (tree_if_convert_stmt): Reset debug stmts.
(if_convertible_stmt_p): Handle debug stmts.
* init-regs.c (initialize_uninitialized_regs): Skip debug insns.
* tree-vect-loop.c (vect_is_simple_reduction): Skip debug stmts.
* ira-build.c (create_bb_allocnos): Skip debug insns.
* tree-flow-inline.h (has_zero_uses): Discount debug stmts.
(has_single_use): Likewise.
(single_imm_use): Likewise.
(num_imm_uses): Likewise.
* tree-ssa-phiopt.c (empty_block_p): Skip debug stmts.
* tree-ssa-coalesce.c (build_ssa_conflict_graph): Skip debug stmts.
(create_outofssa_var_map): Likewise.
* lower-subreg.c (adjust_decomposed_uses): New.
(resolve_debug): New.
(decompose_multiword_subregs): Use it.
* tree-dfa.c (find_referenced_vars): Skip debug stmts.
* emit-rtl.c: Include params.h.
(cur_debug_insn_uid): Define.
(set_new_first_and_last_insn): Set cur_debug_insn_uid too.
(copy_rtx_if_shared_1): Handle debug insns.
(reset_used_flags): Likewise.
(set_used_flags): Likewise.
(get_max_insn_count): New.
(next_nondebug_insn): New.
(prev_nondebug_insn): New.
(make_debug_insn_raw): New.
(emit_insn_before_noloc): Handle debug insns.
(emit_jump_insn_before_noloc): Likewise.
(emit_call_insn_before_noloc): Likewise.
(emit_debug_insn_before_noloc): New.
(emit_insn_after_noloc): Handle debug insns.
(emit_jump_insn_after_noloc): Likewise.
(emit_call_insn_after_noloc): Likewise.
(emit_debug_insn_after_noloc): Likewise.
(emit_insn_after): Take loc from earlier non-debug insn.
(emit_jump_insn_after): Likewise.
(emit_call_insn_after): Likewise.
(emit_debug_insn_after_setloc): New.
(emit_debug_insn_after): New.
(emit_insn_before): Take loc from later non-debug insn.
(emit_jump_insn_before): Likewise.
(emit_call_insn_before): Likewise.
(emit_debug_insn_before_setloc): New.
(emit_debug_insn_before): New.
(emit_insn): Handle debug insns.
(emit_debug_insn): New.
(emit_jump_insn): Handle debug insns.
(emit_call_insn): Likewise.
(emit): Likewise.
(init_emit): Take min-nondebug-insn-uid into account.
Initialize cur_debug_insn_uid.
(emit_copy_of_insn_after): Handle debug insns.
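A sketch of the new debug-insn emission API (simplified from what
cfgexpand emits; the helper name here is made up):

    /* Bind DECL to the rtx LOC it currently lives in.  */
    static void
    emit_debug_bind (tree decl, rtx loc)
    {
      rtx pat = gen_rtx_VAR_LOCATION (DECL_MODE (decl), decl, loc,
                                      VAR_INIT_STATUS_INITIALIZED);
      emit_debug_insn (pat);
    }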
* cfgexpand.c (gimple_assign_rhs_to_tree): Do not overwrite
location of single rhs in place.
(maybe_dump_rtl_for_gimple_stmt): Dump lineno.
(floor_sdiv_adjust): New.
(cell_sdiv_adjust): New.
(cell_udiv_adjust): New.
(round_sdiv_adjust): New.
(round_udiv_adjust): New.
(wrap_constant): Moved from cselib.
(unwrap_constant): New.
(expand_debug_expr): New.
(expand_debug_locations): New.
(expand_gimple_basic_block): Drop hiding redeclaration. Expand
debug bind stmts.
(gimple_expand_cfg): Expand debug locations.
* cselib.c: Include tree-pass.h.
(struct expand_value_data): New.
(cselib_record_sets_hook): New.
(PRESERVED_VALUE_P, LONG_TERM_PRESERVED_VALUE_P): New.
(cselib_clear_table): Move, and implement in terms of...
(cselib_reset_table_with_next_value): ... this.
(cselib_get_next_unknown_value): New.
(discard_useless_locs): Don't discard preserved values.
(cselib_preserve_value): New.
(cselib_preserved_value_p): New.
(cselib_preserve_definitely): New.
(cselib_clear_preserve): New.
(cselib_preserve_only_values): New.
(new_cselib_val): Take rtx argument. Dump it in details.
(cselib_lookup_mem): Adjust.
(expand_loc): Take regs_active in struct. Adjust. Silence
dumps unless details are requested.
(cselib_expand_value_rtx_cb): New.
(cselib_expand_value_rtx): Rename and reimplement in terms of...
(cselib_expand_value_rtx_1): ... this. Adjust. Silence dumps
without details. Copy more subregs. Try to resolve values
using a callback. Wrap constants.
(cselib_subst_to_values): Adjust.
(cselib_log_lookup): New.
(cselib_lookup): Call it.
(cselib_invalidate_regno): Don't count preserved values as
useless.
(cselib_invalidate_mem): Likewise.
(cselib_record_set): Likewise.
(struct set): Renamed to cselib_set, moved to cselib.h.
(cselib_record_sets): Adjust. Call hook.
(cselib_process_insn): Reset table when it would be cleared.
(dump_cselib_val): New.
(dump_cselib_table): New.
* tree-cfgcleanup.c (tree_forwarded_block_p): Skip debug stmts.
(remove_forwarder_block): Support moving debug stmts.
* cselib.h (cselib_record_sets_hook): Declare.
(cselib_expand_callback): New type.
(cselib_expand_value_rtx_cb): Declare.
(cselib_reset_table_with_next_value): Declare.
(cselib_get_next_unknown_value): Declare.
(cselib_preserve_value): Declare.
(cselib_preserved_value_p): Declare.
(cselib_preserve_only_values): Declare.
(dump_cselib_table): Declare.
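A sketch of a cselib client using the new hook and preservation API
(simplified and hypothetical; var-tracking installs its own
add_with_sets callback here):

    /* Observe the sets of each insn and keep their source values
       alive, so that later locations may refer to them.  */
    static void
    note_sets (rtx insn ATTRIBUTE_UNUSED, struct cselib_set *sets,
               int n_sets)
    {
      int i;

      for (i = 0; i < n_sets; i++)
        if (sets[i].src_elt
            && !cselib_preserved_value_p (sets[i].src_elt))
          cselib_preserve_value (sets[i].src_elt);
    }

    static void
    install_cselib_hook (void)
    {
      cselib_record_sets_hook = note_sets;
    }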
* cfgcleanup.c (flow_find_cross_jump): Skip debug insns.
(try_crossjump_to_edge): Likewise.
(delete_unreachable_blocks): Remove dominant GIMPLE blocks after
dominated blocks when debug stmts are present.
* simplify-rtx.c (delegitimize_mem_from_attrs): New.
* tree-ssa-live.c (remove_unused_locals): Skip debug stmts.
(set_var_live_on_entry): Likewise.
* loop-invariant.c (find_invariants_bb): Skip debug insns.
* cfglayout.c (curr_location, last_location): Make static.
(set_curr_insn_source_location): Don't avoid bouncing.
(get_curr_insn_source_location): New.
(get_curr_insn_block): New.
(duplicate_insn_chain): Handle debug insns.
* tree-ssa-forwprop.c (forward_propagate_addr_expr): Propagate
into debug stmts.
* common.opt (fcompare-debug): Move to sort order.
(fdump-unnumbered-links): Likewise.
(fvar-tracking-assignments): New.
(fvar-tracking-assignments-toggle): New.
* tree-ssa-dce.c (mark_stmt_necessary): Don't mark blocks
because of debug stmts.
(mark_stmt_if_obviously_necessary): Mark debug stmts.
(eliminate_unnecessary_stmts): Walk dominated blocks before
dominators.
* tree-ssa-ter.c (find_replaceable_in_bb): Skip debug stmts.
* ira.c (memref_used_between_p): Skip debug insns.
(update_equiv_regs): Likewise.
* sched-deps.c (sd_lists_size): Accept empty list.
(sd_init_insn): Mark debug insns.
(sd_finish_insn): Unmark them.
(sd_add_dep): Reject non-debug deps on debug insns.
(fixup_sched_groups): Give debug insns group treatment.
Skip debug insns.
(sched_analyze_reg): Don't mark debug insns for sched before call.
(sched_analyze_2): Handle debug insns.
(sched_analyze_insn): Compute next non-debug insn. Handle debug
insns.
(deps_analyze_insn): Handle debug insns.
(deps_start_bb): Skip debug insns.
(init_deps): Initialize last_debug_insn.
* tree-ssa.c (target_for_debug_bind): New.
(find_released_ssa_name): New.
(propagate_var_def_into_debug_stmts): New.
(propagate_defs_into_debug_stmts): New.
(verify_ssa): Skip debug bind stmts without values.
(warn_uninitialized_vars): Skip debug stmts.
* target-def.h (TARGET_DELEGITIMIZE_ADDRESS): Set default.
* rtl.c (rtx_equal_p_cb): Handle VALUEs.
(rtx_equal_p): Likewise.
* ira-costs.c (scan_one_insn): Skip debug insns.
(process_bb_node_for_hard_reg_moves): Likewise.
* rtl.h (DEBUG_INSN_P): New.
(NONDEBUG_INSN_P): New.
(MAY_HAVE_DEBUG_INSNS): New.
(INSN_P): Accept debug insns.
(RTX_FRAME_RELATED_P): Likewise.
(INSN_DELETED_P): Likewise.
(PAT_VAR_LOCATION_DECL): New.
(PAT_VAR_LOCATION_LOC): New.
(PAT_VAR_LOCATION_STATUS): New.
(NOTE_VAR_LOCATION_DECL): Reimplement.
(NOTE_VAR_LOCATION_LOC): Likewise.
(NOTE_VAR_LOCATION_STATUS): Likewise.
(INSN_VAR_LOCATION): New.
(INSN_VAR_LOCATION_DECL): New.
(INSN_VAR_LOCATION_LOC): New.
(INSN_VAR_LOCATION_STATUS): New.
(gen_rtx_UNKNOWN_VAR_LOC): New.
(VAR_LOC_UNKNOWN_P): New.
(NOTE_DURING_CALL_P): New.
(SCHED_GROUP_P): Accept debug insns.
(emit_debug_insn_before): Declare.
(emit_debug_insn_before_noloc): Declare.
(emit_debug_insn_before_setloc): Declare.
(emit_debug_insn_after): Declare.
(emit_debug_insn_after_noloc): Declare.
(emit_debug_insn_after_setloc): Declare.
(emit_debug_insn): Declare.
(make_debug_insn_raw): Declare.
(prev_nondebug_insn): Declare.
(next_nondebug_insn): Declare.
(delegitimize_mem_from_attrs): Declare.
(get_max_insn_count): Declare.
(wrap_constant): Declare.
(unwrap_constant): Declare.
(get_curr_insn_source_location): Declare.
(get_curr_insn_block): Declare.
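Most of the RTL changes above follow one idiom, sketched here
(illustrative only, not code from the patch): code-generation
decisions consult only nondebug insns, while debug insns are updated
or reset rather than allowed to influence the output.

    static void
    scan_insns (void)
    {
      rtx insn;

      for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
        if (NONDEBUG_INSN_P (insn))
          {
            /* ... analyze or transform INSN as before ...  */
          }
        else if (DEBUG_INSN_P (insn))
          {
            /* INSN must never affect codegen; at most rewrite
               INSN_VAR_LOCATION_LOC (insn), or reset it to
               gen_rtx_UNKNOWN_VAR_LOC () when the tracked value
               is lost.  */
          }
    }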
* tree-inline.c (insert_debug_decl_map): New.
(processing_debug_stmt): New.
(remap_decl): Don't create new mappings in debug stmts.
(remap_gimple_op_r): Don't add references in debug stmts.
(copy_tree_body_r): Likewise.
(remap_gimple_stmt): Handle debug bind stmts.
(copy_bb): Skip debug stmts.
(copy_edges_for_bb): Likewise.
(copy_debug_stmt): New.
(copy_debug_stmts): New.
(copy_body): Copy debug stmts at the end.
(insert_init_debug_bind): New.
(insert_init_stmt): Take id. Skip and emit debug stmts.
(setup_one_parameter): Remap variable earlier, register debug
mapping.
(estimate_num_insns): Skip debug stmts.
(expand_call_inline): Preserve debug_map.
(optimize_inline_calls): Check for no debug_stmts left-overs.
(unsave_expr_now): Preserve debug_map.
(copy_gimple_seq_and_replace_locals): Likewise.
(tree_function_versioning): Check for no debug_stmts left-overs.
Init and destroy debug_map as needed. Split edges unconditionally.
(build_duplicate_type): Init and destroy debug_map as needed.
* tree-inline.h: Include gimple.h instead of pointer-set.h.
(struct copy_body_data): Add debug_stmts and debug_map.
* sched-int.h (struct ready_list): Add n_debug.
(struct deps): Add last_debug_insn.
(DEBUG_INSN_SCHED_P): New.
(BOUNDARY_DEBUG_INSN_P): New.
(SCHEDULE_DEBUG_INSN_P): New.
(sd_iterator_cond): Accept empty list.
* combine.c (create_log_links): Skip debug insns.
(combine_instructions): Likewise.
(cleanup_auto_inc_dec): New. From Jakub Jelinek: Make sure the
return value is always unshared.
(struct rtx_subst_pair): New.
(auto_adjust_pair): New.
(propagate_for_debug_subst): New.
(propagate_for_debug): New.
(try_combine): Skip debug insns. Propagate removed defs into
debug insns.
(next_nonnote_nondebug_insn): New.
(distribute_notes): Use it. Skip debug insns.
(distribute_links): Skip debug insns.
* tree-outof-ssa.c (set_location_for_edge): Likewise.
* resource.c (mark_target_live_regs): Likewise.
* var-tracking.c: Include cselib.h and target.h.
(enum micro_operation_type): Add MO_VAL_USE, MO_VAL_LOC, and
MO_VAL_SET.
(micro_operation_type_name): New.
(enum emit_note_where): Add EMIT_NOTE_AFTER_CALL_INSN.
(struct micro_operation_def): Update comments.
(decl_or_value): New type. Use instead of decls.
(struct emit_note_data_def): Add vars.
(struct attrs_def): Use decl_or_value.
(struct variable_tracking_info_def): Add permp, flooded.
(struct location_chain_def): Update comment.
(struct variable_part_def): Use decl_or_value.
(struct variable_def): Make var_part a variable length array.
(valvar_pool): New.
(scratch_regs): New.
(cselib_hook_called): New.
(dv_is_decl_p): New.
(dv_is_value_p): New.
(dv_as_decl): New.
(dv_as_value): New.
(dv_as_opaque): New.
(dv_onepart_p): New.
(dv_pool): New.
(IS_DECL_CODE): New.
(check_value_is_not_decl): New.
(dv_from_decl): New.
(dv_from_value): New.
(dv_htab_hash): New.
(variable_htab_hash): Use it.
(variable_htab_eq): Support values.
(variable_htab_free): Free from the right pool.
(attrs_list_member, attrs_list_insert): Use decl_or_value.
(attrs_list_union): Adjust.
(attrs_list_mpdv_union): New.
(tie_break_pointers): New.
(canon_value_cmp): New.
(unshare_variable): Return possibly-modified slot.
(vars_copy_1): Adjust.
(var_reg_decl_set): Adjust. Split out of...
(var_reg_set): ... this.
(get_init_value): Adjust.
(var_reg_delete_and_set): Adjust.
(var_reg_delete): Adjust.
(var_regno_delete): Adjust.
(var_mem_decl_set): Split out of...
(var_mem_set): ... this.
(var_mem_delete_and_set): Adjust.
(var_mem_delete): Adjust.
(val_store): New.
(val_reset): New.
(val_resolve): New.
(variable_union): Adjust. Speed up merge of 1-part vars.
(variable_canonicalize): Use unshared slot.
(VALUE_RECURSED_INTO): New.
(find_loc_in_1pdv): New.
(struct dfset_merge): New.
(insert_into_intersection): New.
(intersect_loc_chains): New.
(loc_cmp): New.
(canonicalize_loc_order_check): New.
(canonicalize_values_mark): New.
(canonicalize_values_star): New.
(variable_merge_over_cur): New.
(variable_merge_over_src): New.
(dataflow_set_merge): New.
(dataflow_set_equiv_regs): New.
(remove_duplicate_values): New.
(struct dfset_post_merge): New.
(variable_post_merge_new_vals): New.
(variable_post_merge_perm_vals): New.
(dataflow_post_merge_adjust): New.
(find_mem_expr_in_1pdv): New.
(dataflow_set_preserve_mem_locs): New.
(dataflow_set_remove_mem_locs): New.
(dataflow_set_clear_at_call): New.
(onepart_variable_different_p): New.
(variable_different_p): Use it.
(dataflow_set_different_1): Adjust. Make detailed dump
more verbose.
(track_expr_p): Add need_rtl parameter. Don't generate rtl
if not needed.
(track_loc_p): Pass it true.
(struct count_use_info): New.
(find_use_val): New.
(replace_expr_with_values): New.
(log_op_type): New.
(use_type): New, partially split out of...
(count_uses): ... this. Count new micro-ops.
(count_uses_1): Adjust.
(count_stores): Adjust.
(count_with_sets): New.
(VAL_NEEDS_RESOLUTION): New.
(VAL_HOLDS_TRACK_EXPR): New.
(VAL_EXPR_IS_COPIED): New.
(VAL_EXPR_IS_CLOBBERED): New.
(add_uses): Adjust. Generate new micro-ops.
(add_uses_1): Adjust.
(add_stores): Generate new micro-ops.
(add_with_sets): New.
(find_src_status): Adjust.
(find_src_set_src): Adjust.
(compute_bb_dataflow): Use dataflow_set_clear_at_call.
Handle new micro-ops. Canonicalize value equivalences.
(vt_find_locations): Compute total size of hash tables for
dumping. Perform merge for var-tracking-assignments. Don't
disregard single-block loops.
(dump_attrs_list): Handle decl_or_value.
(dump_variable): Take variable. Deal with decl_or_value.
(dump_variable_slot): New.
(dump_vars): Use it.
(dump_dataflow_sets): Adjust.
(set_slot_part): New, extended to support one-part variables
after splitting out of...
(set_variable_part): ... this.
(clobber_slot_part): New, split out of...
(clobber_variable_part): ... this.
(delete_slot_part): New, split out of...
(delete_variable_part): ... this.
(check_wrap_constant): New.
(vt_expand_loc_callback): New.
(vt_expand_loc): New.
(emit_note_insn_var_location): Adjust. Handle values. Handle
EMIT_NOTE_AFTER_CALL_INSN.
(emit_notes_for_differences_1): Adjust. Handle values.
(emit_notes_for_differences_2): Likewise.
(emit_notes_for_differences): Adjust.
(emit_notes_in_bb): Take pointer to set. Emit AFTER_CALL_INSN
notes. Adjust. Handle new micro-ops.
(vt_add_function_parameters): Adjust. Create and bind values.
(vt_initialize): Adjust. Initialize scratch_regs and
valvar_pool, flooded and permp. Initialize and use cselib. Log
operations. Move some code to count_with_sets and add_with_sets.
(delete_debug_insns): New.
(vt_debug_insns_local): New.
(vt_finalize): Release permp, valvar_pool, scratch_regs. Finish
cselib.
(var_tracking_main): If var-tracking-assignments is enabled
but var-tracking isn't, delete debug insns and leave. Likewise
if we exceed limits or fail the stack adjustments tests, and
after all var-tracking processing.
More in var-tracking, from Jakub Jelinek <jakub@redhat.com>:
(dataflow_set): Add traversed_vars.
(value_chain, const_value_chain): New typedefs.
(value_chain_pool, value_chains): New variables.
(value_chain_htab_hash, value_chain_htab_eq, add_value_chain,
add_value_chains, add_cselib_value_chains, remove_value_chain,
remove_value_chains, remove_cselib_value_chains): New functions.
(shared_hash_find_slot_unshare_1, shared_hash_find_slot_1,
shared_hash_find_slot_noinsert_1, shared_hash_find_1): New
static inlines.
(shared_hash_find_slot_unshare, shared_hash_find_slot,
shared_hash_find_slot_noinsert, shared_hash_find): Update.
(dst_can_be_shared): New variable.
(unshare_variable): Unshare set->vars if shared, use shared_hash_*.
Clear dst_can_be_shared. If set->traversed_vars is non-NULL and
different from set->vars, look up slot again instead of using the
passed in slot.
(dataflow_set_init): Initialize traversed_vars.
(variable_union): Use shared_hash_*. Use initially NO_INSERT
lookup if set->vars is shared. Don't keep slot cleared before
calling unshare_variable. Unshare set->vars if needed. Adjust
unshare_variable callers. Clear dst_can_be_shared if needed.
Even ->refcount == 1 vars must be unshared if set->vars is shared
and var needs to be modified.
(dataflow_set_union): Set traversed_vars during canonicalization.
(VALUE_CHANGED, DECL_CHANGED): Define.
(set_dv_changed, dv_changed_p): New static inlines.
(track_expr_p): Clear DECL_CHANGED.
(dump_dataflow_sets): Set it.
(variable_was_changed): Call set_dv_changed.
(emit_note_insn_var_location): Likewise.
(changed_variables_stack): New variable.
(check_changed_vars_1, check_changed_vars_2): New functions.
(emit_notes_for_changes): Do nothing if changed_variables is
empty. Traverse changed_variables with check_changed_vars_1,
call check_changed_vars_2 on each changed_variables_stack entry.
(emit_notes_in_bb): Add SET argument. Just clear it at the
beginning, use it instead of local &set, don't destroy it at the
end.
(vt_emit_notes): Call dataflow_set_clear early on all
VTI(bb)->out sets, never use them, instead use emit_notes_in_bb
computed set, dataflow_set_clear also VTI(bb)->in when we are
done with the basic block. Initialize changed_variables_stack,
free it afterwards. If ENABLE_CHECKING verify that after noting
differences to an empty set value_chains hash table is empty.
(vt_initialize): Initialize value_chains and value_chain_pool.
(vt_finalize): Delete value_chains htab, free value_chain_pool.
(variable_tracking_main): Call dump_dataflow_sets before calling
vt_emit_notes, not after it.
* tree-flow.h (propagate_defs_into_debug_stmts): Declare.
(propagate_var_def_into_debug_stmts): Declare.
* df-problems.c (df_lr_bb_local_compute): Skip debug insns.
(df_set_note): Reject debug insns.
(df_whole_mw_reg_dead_p): Take added_notes_p argument. Don't
add notes to debug insns.
(df_note_bb_compute): Adjust. Likewise.
(df_simulate_uses): Skip debug insns.
(df_simulate_initialize_backwards): Likewise.
* reg-stack.c (subst_stack_regs_in_debug_insn): New.
(subst_stack_regs_pat): Reject debug insns.
(convert_regs_1): Handle debug insns.
* Makefile.in (TREE_INLINE_H): Take pointer-set.h from GIMPLE_H.
(print-rtl.o): Depend on cselib.h.
(cselib.o): Depend on TREE_PASS_H.
(var-tracking.o): Depend on cselib.h and TARGET_H.
* sched-rgn.c (rgn_estimate_number_of_insns): Discount
debug insns.
(init_ready_list): Skip boundary debug insns.
(add_branch_dependences): Skip debug insns.
(free_block_dependencies): Check for blocks with only debug
insns.
(compute_priorities): Likewise.
* gimple.c (gss_for_code): Handle GIMPLE_DEBUG.
(gimple_build_with_ops_stat): Take subcode as unsigned. Adjust
all callers.
(gimple_build_debug_bind_stat): New.
(empty_body_p): Skip debug stmts.
(gimple_has_side_effects): Likewise.
(gimple_rhs_has_side_effects): Likewise.
* gimple.h (enum gimple_debug_subcode, GIMPLE_DEBUG_BIND): New.
(gimple_build_debug_bind_stat): Declare.
(gimple_build_debug_bind): Define.
(is_gimple_debug): New.
(gimple_debug_bind_p): New.
(gimple_debug_bind_get_var): New.
(gimple_debug_bind_get_value): New.
(gimple_debug_bind_get_value_ptr): New.
(gimple_debug_bind_set_var): New.
(gimple_debug_bind_set_value): New.
(GIMPLE_DEBUG_BIND_NOVALUE): New internal temporary macro.
(gimple_debug_bind_reset_value): New.
(gimple_debug_bind_has_value_p): New.
(gsi_next_nondebug): New.
(gsi_prev_nondebug): New.
(gsi_start_nondebug_bb): New.
(gsi_last_nondebug_bb): New.
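A sketch of the new nondebug iterators in use (not from the patch):

    static void
    process_nondebug_stmts (basic_block bb)
    {
      gimple_stmt_iterator gsi;

      for (gsi = gsi_start_nondebug_bb (bb); !gsi_end_p (gsi);
           gsi_next_nondebug (&gsi))
        {
          gimple stmt = gsi_stmt (gsi);

          gcc_assert (!is_gimple_debug (stmt));
          /* ... process STMT as if debug stmts were absent ...  */
        }
    }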
* sched-vis.c (print_pattern): Handle VAR_LOCATION.
(print_insn): Handle DEBUG_INSN.
* tree-cfg.c (remove_bb): Walk stmts backwards. Let loc
of first insn prevail.
(first_stmt): Skip debug stmts.
(first_non_label_stmt): Likewise.
(last_stmt): Likewise.
(has_zero_uses_1): New.
(single_imm_use_1): New.
(verify_gimple_debug): New.
(verify_types_in_gimple_stmt): Handle debug stmts.
(verify_stmt): Likewise.
(debug_loop_num): Skip debug stmts.
(remove_edge_and_dominated_blocks): Remove dominators last.
* tree-ssa-reassoc.c (rewrite_expr_tree): Propagate into
debug stmts.
(linearize_expr): Likewise.
* config/i386/i386.c (ix86_delegitimize_address): Call
default implementation.
* config/ia64/ia64.c (ia64_safe_itanium_class): Handle debug
insns.
(group_barrier_needed): Skip debug insns.
(emit_insn_group_barriers): Likewise.
(emit_all_insn_group_barriers): Likewise.
(ia64_variable_issue): Handle debug insns.
(ia64_dfa_new_cycle): Likewise.
(final_emit_insn_group_barriers): Skip debug insns.
(ia64_dwarf2out_def_steady_cfa): Take frame argument. Don't
def cfa without frame.
(process_set): Likewise.
(process_for_unwind_directive): Pass frame on.
* config/rs6000/rs6000.c (TARGET_DELEGITIMIZE_ADDRESS): Define.
(rs6000_delegitimize_address): New.
(rs6000_debug_adjust_cost): Handle debug insns.
(is_microcoded_insn): Likewise.
(is_cracked_insn): Likewise.
(is_nonpipeline_insn): Likewise.
(insn_must_be_first_in_group): Likewise.
(insn_must_be_last_in_group): Likewise.
(force_new_group): Likewise.
* cfgrtl.c (rtl_split_block): Emit INSN_DELETED note if block
contains only debug insns.
(rtl_merge_blocks): Skip debug insns.
(purge_dead_edges): Likewise.
(rtl_block_ends_with_call_p): Skip debug insns.
* dce.c (deletable_insn_p): Handle VAR_LOCATION.
(mark_reg_dependencies): Skip debug insns.
* params.def (PARAM_MIN_NONDEBUG_INSN_UID): New.
* tree-ssanames.c (release_ssa_name): Propagate def into
debug stmts.
* tree-ssa-threadedge.c
(record_temporary_equivalences_from_stmts): Skip debug stmts.
* regcprop.c (replace_oldest_value_addr): Skip debug insns.
(replace_oldest_value_mem): Use ALL_REGS for debug insns.
(copyprop_hardreg_forward_1): Handle debug insns.
* reload1.c (reload): Skip debug insns. Replace unassigned
pseudos in debug insns with their equivalences.
(eliminate_regs_in_insn): Skip debug insns.
(emit_input_reload_insns): Skip debug insns at first, adjust
them later.
* tree-ssa-operands.c (add_virtual_operand): Reject debug stmts.
(get_indirect_ref_operands): Pass opf_no_vops on.
(get_expr_operands): Likewise. Skip debug stmts.
(parse_ssa_operands): Scan debug insns with opf_no_vops.
gcc/testsuite/ChangeLog:
* gcc.dg/guality/guality.c: New.
* gcc.dg/guality/guality.h: New.
* gcc.dg/guality/guality.exp: New.
* gcc.dg/guality/example.c: New.
* lib/gcc-dg.exp (cleanup-dump): Remove .gk files.
(cleanup-saved-temps): Likewise, .gkd files too.
gcc/cp/ChangeLog:
* cp-tree.h (TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS): New.
* cp-lang.c (cxx_dwarf_name): Pass it.
* error.c (count_non_default_template_args): Take flags as
argument. Adjust all callers. Skip counting of default
arguments if the new flag is given.
ChangeLog:
* Makefile.tpl (BUILD_CONFIG): Default to bootstrap-debug.
* Makefile.in: Rebuilt.
contrib/ChangeLog:
* compare-debug: Look for .gkd files and compare them.
config/ChangeLog:
* bootstrap-debug.mk: Add comments.
* bootstrap-debug-big.mk: New.
* bootstrap-debug-lean.mk: New.
* bootstrap-debug-ckovw.mk: Add comments.
* bootstrap-debug-lib.mk: Drop CFLAGS for stages. Use -g0
for TFLAGS in stage1. Drop -fvar-tracking-assignments-toggle.
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@151312 138bc75d-0d04-0410-961f-82ee72b054a4
GCC_FLAGS_TO_PASS = $(BASE_FLAGS_TO_PASS) $(EXTRA_HOST_FLAGS) $(EXTRA_GCC_FLAGS)
@if gcc
-BUILD_CONFIG =
+BUILD_CONFIG = bootstrap-debug
ifneq ($(BUILD_CONFIG),)
include $(foreach CONFIG, $(BUILD_CONFIG), $(srcdir)/config/$(CONFIG).mk)
endif
GCC_FLAGS_TO_PASS = $(BASE_FLAGS_TO_PASS) $(EXTRA_HOST_FLAGS) $(EXTRA_GCC_FLAGS)
@if gcc
-BUILD_CONFIG =
+BUILD_CONFIG = bootstrap-debug
ifneq ($(BUILD_CONFIG),)
include $(foreach CONFIG, $(BUILD_CONFIG), $(srcdir)/config/$(CONFIG).mk)
endif
--- /dev/null
+# This BUILD_CONFIG option is a bit like bootstrap-debug-lean, but it
+# trades space for speed: instead of recompiling programs during
+# stage3, it generates dumps during stage2 and stage3, saving them all
+# until the final compare.
+
+STAGE2_CFLAGS += -gtoggle -fdump-final-insns
+STAGE3_CFLAGS += -fdump-final-insns
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
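(Any of these fragments is selected through the BUILD_CONFIG hook
shown above, e.g.

    make BUILD_CONFIG=bootstrap-debug-big bootstrap

for an explicit choice; bootstrap-debug is now the default.)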
--- /dev/null
+# This BUILD_CONFIG option is to be used along with
+# bootstrap-debug-lean and bootstrap-debug-lib in a full bootstrap, to
+# check that all host and target files are built with -fcompare-debug.
+
+# These arrange for a simple warning to be issued if -fcompare-debug
+# is not given.
+# BOOT_CFLAGS += -fcompare-debug="-w%n-fcompare-debug not overridden"
+# TFLAGS += -fcompare-debug="-w%n-fcompare-debug not overridden"
+
+# GCC_COMPARE_DEBUG="-w%n-fcompare-debug not overridden";
+
+FORCE_COMPARE_DEBUG = \
+ GCC_COMPARE_DEBUG=$${GCC_COMPARE_DEBUG--fcompare-debug-not-overridden}; \
+ export GCC_COMPARE_DEBUG;
+POSTSTAGE1_HOST_EXPORTS += $(FORCE_COMPARE_DEBUG)
+BASE_TARGET_EXPORTS += $(FORCE_COMPARE_DEBUG)
--- /dev/null
+# This BUILD_CONFIG option is a bit like bootstrap-debug, but in
+# addition to comparing stripped object files, it also compares
+# compiler internal state during stage3.
+
+# This makes it slower than bootstrap-debug, for there's additional
+# dumping and recompilation during stage3. bootstrap-debug-big can
+# avoid the recompilation, if plenty of disk space is available.
+
+STAGE2_CFLAGS += -gtoggle -fcompare-debug=
+STAGE3_CFLAGS += -fcompare-debug
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
--- /dev/null
+# This BUILD_CONFIG option tests that target libraries built during
+# stage3 would have generated the same executable code if they were
+# compiled with -g0.
+
+# It uses -g0 rather than -gtoggle because -g is default on target
+# library builds, and toggling it where it's supposed to be disabled
+# breaks e.g. crtstuff on ppc.
+
+STAGE1_TFLAGS += -g0 -fcompare-debug=
+STAGE2_TFLAGS += -fcompare-debug=
+STAGE3_TFLAGS += -fcompare-debug=-g0
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
-STAGE2_CFLAGS += -g0
+# This BUILD_CONFIG option checks that toggling debug
+# information generation doesn't affect the generated object code.
+
+# It is very lightweight: in addition to not performing any additional
+# compilation (unlike bootstrap-debug-lean), it actually speeds up
+# stage2, for no debug information is generated when compiling with
+# the unoptimized stage1.
+
+# For more thorough testing, see bootstrap-debug-lean.mk
+
+STAGE2_CFLAGS += -gtoggle
do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
--- /dev/null
+BOOT_CFLAGS += -time=$(shell pwd)/time.log
+TFLAGS += -time=$(shell pwd)/time.log
trap "exit $status; exit" 0 1 2 15
+if test -f "$1".gkd || test -f "$2".gkd; then
+ if cmp "$1".gkd "$2".gkd; then
+ :
+ else
+ status=$?
+ fi
+fi
+
exit $status
LAMBDA_H = lambda.h $(TREE_H) vec.h $(GGC_H)
TREE_DATA_REF_H = tree-data-ref.h $(LAMBDA_H) omega.h graphds.h $(SCEV_H)
VARRAY_H = varray.h $(MACHMODE_H) $(SYSTEM_H) coretypes.h $(TM_H)
-TREE_INLINE_H = tree-inline.h pointer-set.h
+TREE_INLINE_H = tree-inline.h $(GIMPLE_H)
REAL_H = real.h $(MACHMODE_H)
IRA_INT_H = ira.h ira-int.h $(CFGLOOP_H) alloc-pool.h
DBGCNT_H = dbgcnt.h dbgcnt.def
print-rtl.o : print-rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
$(RTL_H) $(TREE_H) hard-reg-set.h $(BASIC_BLOCK_H) $(FLAGS_H) \
- $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H)
+ $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H) cselib.h
rtlanal.o : rtlanal.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TOPLEV_H) \
$(RTL_H) hard-reg-set.h $(TM_P_H) insn-config.h $(RECOG_H) $(REAL_H) \
$(FLAGS_H) $(REGS_H) output.h $(TARGET_H) $(FUNCTION_H) $(TREE_H) \
$(HASHTAB_H) tree-iterator.h $(CGRAPH_H) $(TREE_PASS_H) gcov-io.c $(TM_P_H)
cselib.o : cselib.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(REGS_H) hard-reg-set.h $(FLAGS_H) $(REAL_H) insn-config.h $(RECOG_H) \
- $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) cselib.h $(GGC_H) $(TM_P_H) \
- gt-cselib.h $(PARAMS_H) alloc-pool.h $(HASHTAB_H) $(TARGET_H)
+ $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) $(TREE_PASS_H) \
+ cselib.h gt-cselib.h $(GGC_H) $(TM_P_H) $(PARAMS_H) alloc-pool.h \
+ $(HASHTAB_H) $(TARGET_H)
cse.o : cse.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) $(REGS_H) \
hard-reg-set.h $(FLAGS_H) insn-config.h $(RECOG_H) $(EXPR_H) $(TOPLEV_H) \
output.h $(FUNCTION_H) $(BASIC_BLOCK_H) $(GGC_H) $(TM_P_H) $(TIMEVAR_H) \
var-tracking.o : var-tracking.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
$(RTL_H) $(TREE_H) hard-reg-set.h insn-config.h reload.h $(FLAGS_H) \
$(BASIC_BLOCK_H) output.h sbitmap.h alloc-pool.h $(FIBHEAP_H) $(HASHTAB_H) \
- $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H)
+ $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H) cselib.h $(TARGET_H)
profile.o : profile.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(TREE_H) $(FLAGS_H) output.h $(REGS_H) $(EXPR_H) $(FUNCTION_H) \
$(TOPLEV_H) $(COVERAGE_H) $(TREE_FLOW_H) value-prof.h cfghooks.h \
unsigned int uid = INSN_UID (insn);
bool insn_is_add_or_inc = true;
- if (!INSN_P (insn))
+ if (!NONDEBUG_INSN_P (insn))
continue;
/* This continue is deliberate. We do not want the uses of the
/* If the inc insn was merged with a mem, the inc insn is gone
and there is nothing to update. */
- if (DF_INSN_UID_GET(uid))
+ if (DF_INSN_UID_GET (uid))
{
df_ref *def_rec;
df_ref *use_rec;
static HARD_REG_SET referenced_regs;
+typedef void refmarker_fn (rtx *loc, enum machine_mode mode, int hardregno,
+ void *mark_arg);
+
static int reg_save_code (int, enum machine_mode);
static int reg_restore_code (int, enum machine_mode);
static int saved_hard_reg_compare_func (const void *, const void *);
static void mark_set_regs (rtx, const_rtx, void *);
-static void add_stored_regs (rtx, const_rtx, void *);
-static void mark_referenced_regs (rtx);
+static void mark_referenced_regs (rtx *, refmarker_fn *mark, void *mark_arg);
+static refmarker_fn mark_reg_as_referenced;
+static refmarker_fn replace_reg_with_saved_mem;
static int insert_save (struct insn_chain *, int, int, HARD_REG_SET *,
enum machine_mode *);
static int insert_restore (struct insn_chain *, int, int, int,
gcc_assert (!chain->is_caller_save_insn);
- if (INSN_P (insn))
+ if (NONDEBUG_INSN_P (insn))
{
/* If some registers have been saved, see if INSN references
any of them. We must restore them before the insn if so. */
else
{
CLEAR_HARD_REG_SET (referenced_regs);
- mark_referenced_regs (PATTERN (insn));
+ mark_referenced_regs (&PATTERN (insn),
+ mark_reg_as_referenced, NULL);
AND_HARD_REG_SET (referenced_regs, hard_regs_saved);
}
n_regs_saved++;
}
}
+ else if (DEBUG_INSN_P (insn) && n_regs_saved)
+ mark_referenced_regs (&PATTERN (insn),
+ replace_reg_with_saved_mem,
+ save_mode);
if (chain->next == 0 || chain->next->block != chain->block)
{
/* Walk X and record all referenced registers in REFERENCED_REGS. */
static void
-mark_referenced_regs (rtx x)
+mark_referenced_regs (rtx *loc, refmarker_fn *mark, void *arg)
{
- enum rtx_code code = GET_CODE (x);
+ enum rtx_code code = GET_CODE (*loc);
const char *fmt;
int i, j;
if (code == SET)
- mark_referenced_regs (SET_SRC (x));
+ mark_referenced_regs (&SET_SRC (*loc), mark, arg);
if (code == SET || code == CLOBBER)
{
- x = SET_DEST (x);
- code = GET_CODE (x);
- if ((code == REG && REGNO (x) < FIRST_PSEUDO_REGISTER)
+ loc = &SET_DEST (*loc);
+ code = GET_CODE (*loc);
+ if ((code == REG && REGNO (*loc) < FIRST_PSEUDO_REGISTER)
|| code == PC || code == CC0
- || (code == SUBREG && REG_P (SUBREG_REG (x))
- && REGNO (SUBREG_REG (x)) < FIRST_PSEUDO_REGISTER
+ || (code == SUBREG && REG_P (SUBREG_REG (*loc))
+ && REGNO (SUBREG_REG (*loc)) < FIRST_PSEUDO_REGISTER
/* If we're setting only part of a multi-word register,
we shall mark it as referenced, because the words
that are not being set should be restored. */
- && ((GET_MODE_SIZE (GET_MODE (x))
- >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
- || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
+ && ((GET_MODE_SIZE (GET_MODE (*loc))
+ >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc))))
+ || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc)))
<= UNITS_PER_WORD))))
return;
}
if (code == MEM || code == SUBREG)
{
- x = XEXP (x, 0);
- code = GET_CODE (x);
+ loc = &XEXP (*loc, 0);
+ code = GET_CODE (*loc);
}
if (code == REG)
{
- int regno = REGNO (x);
+ int regno = REGNO (*loc);
int hardregno = (regno < FIRST_PSEUDO_REGISTER ? regno
: reg_renumber[regno]);
if (hardregno >= 0)
- add_to_hard_reg_set (&referenced_regs, GET_MODE (x), hardregno);
+ mark (loc, GET_MODE (*loc), hardregno, arg);
+ else if (arg)
+ /* ??? Will we ever end up with an equiv expression in a debug
+ insn, that would have required restoring a reg, or will
+ reload take care of it for us? */
+ return;
/* If this is a pseudo that did not get a hard register, scan its
memory location, since it might involve the use of another
register, which might be saved. */
else if (reg_equiv_mem[regno] != 0)
- mark_referenced_regs (XEXP (reg_equiv_mem[regno], 0));
+ mark_referenced_regs (&XEXP (reg_equiv_mem[regno], 0), mark, arg);
else if (reg_equiv_address[regno] != 0)
- mark_referenced_regs (reg_equiv_address[regno]);
+ mark_referenced_regs (&reg_equiv_address[regno], mark, arg);
return;
}
for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
{
if (fmt[i] == 'e')
- mark_referenced_regs (XEXP (x, i));
+ mark_referenced_regs (&XEXP (*loc, i), mark, arg);
else if (fmt[i] == 'E')
- for (j = XVECLEN (x, i) - 1; j >= 0; j--)
- mark_referenced_regs (XVECEXP (x, i, j));
+ for (j = XVECLEN (*loc, i) - 1; j >= 0; j--)
+ mark_referenced_regs (&XVECEXP (*loc, i, j), mark, arg);
+ }
+}
+
+/* Parameter function for mark_referenced_regs() that adds registers
+ present in the insn and in equivalent mems and addresses to
+ referenced_regs. */
+
+static void
+mark_reg_as_referenced (rtx *loc ATTRIBUTE_UNUSED,
+ enum machine_mode mode,
+ int hardregno,
+ void *arg ATTRIBUTE_UNUSED)
+{
+ add_to_hard_reg_set (&referenced_regs, mode, hardregno);
+}
+
+/* Parameter function for mark_referenced_regs() that replaces
+ registers referenced in a debug_insn that would have been restored,
+ should it be a non-debug_insn, with their save locations. */
+
+static void
+replace_reg_with_saved_mem (rtx *loc,
+ enum machine_mode mode,
+ int regno,
+ void *arg)
+{
+ unsigned int i, nregs = hard_regno_nregs [regno][mode];
+ rtx mem;
+ enum machine_mode *save_mode = (enum machine_mode *)arg;
+
+ for (i = 0; i < nregs; i++)
+ if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+ break;
+
+ /* If none of the registers in the range would need restoring, we're
+ all set. */
+ if (i == nregs)
+ return;
+
+ while (++i < nregs)
+ if (!TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+ break;
+
+ if (i == nregs
+ && regno_save_mem[regno][nregs])
+ {
+ mem = copy_rtx (regno_save_mem[regno][nregs]);
+
+ if (nregs == (unsigned int) hard_regno_nregs[regno][save_mode[regno]])
+ mem = adjust_address_nv (mem, save_mode[regno], 0);
+
+ if (GET_MODE (mem) != mode)
+ {
+ /* This is gen_lowpart_if_possible(), but without validating
+ the newly-formed address. */
+ int offset = 0;
+
+ if (WORDS_BIG_ENDIAN)
+ offset = (MAX (GET_MODE_SIZE (GET_MODE (mem)), UNITS_PER_WORD)
+ - MAX (GET_MODE_SIZE (mode), UNITS_PER_WORD));
+ if (BYTES_BIG_ENDIAN)
+ /* Adjust the address so that the address-after-the-data is
+ unchanged. */
+ offset -= (MIN (UNITS_PER_WORD, GET_MODE_SIZE (mode))
+ - MIN (UNITS_PER_WORD, GET_MODE_SIZE (GET_MODE (mem))));
+
+ mem = adjust_address_nv (mem, mode, offset);
+ }
}
+ else
+ {
+ mem = gen_rtx_CONCATN (mode, rtvec_alloc (nregs));
+ for (i = 0; i < nregs; i++)
+ if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+ {
+ gcc_assert (regno_save_mem[regno + i][1]);
+ XVECEXP (mem, 0, i) = copy_rtx (regno_save_mem[regno + i][1]);
+ }
+ else
+ {
+ gcc_assert (save_mode[regno] != VOIDmode);
+ XVECEXP (mem, 0, i) = gen_rtx_REG (save_mode [regno],
+ regno + i);
+ }
+ }
+
+ gcc_assert (GET_MODE (mem) == mode);
+ *loc = mem;
}
+
\f
/* Insert a sequence of insns to restore. Place these insns in front of
CHAIN if BEFORE_P is nonzero, behind the insn otherwise. MAXRESTORE is
case CALL_INSN:
case INSN:
+ case DEBUG_INSN:
return true;
case BARRIER:
{
case NOTE:
case CODE_LABEL:
+ case DEBUG_INSN:
return false;
case JUMP_INSN:
while (true)
{
/* Ignore notes. */
- while (!INSN_P (i1) && i1 != BB_HEAD (bb1))
+ while (!NONDEBUG_INSN_P (i1) && i1 != BB_HEAD (bb1))
i1 = PREV_INSN (i1);
- while (!INSN_P (i2) && i2 != BB_HEAD (bb2))
+ while (!NONDEBUG_INSN_P (i2) && i2 != BB_HEAD (bb2))
i2 = PREV_INSN (i2);
if (i1 == BB_HEAD (bb1) || i2 == BB_HEAD (bb2))
Two, it keeps line number notes as matched as may be. */
if (ninsns)
{
- while (last1 != BB_HEAD (bb1) && !INSN_P (PREV_INSN (last1)))
+ while (last1 != BB_HEAD (bb1) && !NONDEBUG_INSN_P (PREV_INSN (last1)))
last1 = PREV_INSN (last1);
if (last1 != BB_HEAD (bb1) && LABEL_P (PREV_INSN (last1)))
last1 = PREV_INSN (last1);
- while (last2 != BB_HEAD (bb2) && !INSN_P (PREV_INSN (last2)))
+ while (last2 != BB_HEAD (bb2) && !NONDEBUG_INSN_P (PREV_INSN (last2)))
last2 = PREV_INSN (last2);
if (last2 != BB_HEAD (bb2) && LABEL_P (PREV_INSN (last2)))
/* Skip possible basic block header. */
if (LABEL_P (newpos2))
newpos2 = NEXT_INSN (newpos2);
+ while (DEBUG_INSN_P (newpos2))
+ newpos2 = NEXT_INSN (newpos2);
if (NOTE_P (newpos2))
newpos2 = NEXT_INSN (newpos2);
+ while (DEBUG_INSN_P (newpos2))
+ newpos2 = NEXT_INSN (newpos2);
}
if (dump_file)
/* Skip possible basic block header. */
if (LABEL_P (newpos1))
newpos1 = NEXT_INSN (newpos1);
+
+ while (DEBUG_INSN_P (newpos1))
+ newpos1 = NEXT_INSN (newpos1);
+
if (NOTE_INSN_BASIC_BLOCK_P (newpos1))
newpos1 = NEXT_INSN (newpos1);
+ while (DEBUG_INSN_P (newpos1))
+ newpos1 = NEXT_INSN (newpos1);
+
redirect_from = split_block (src1, PREV_INSN (newpos1))->src;
to_remove = single_succ (redirect_from);
delete_unreachable_blocks (void)
{
bool changed = false;
- basic_block b, next_bb;
+ basic_block b, prev_bb;
find_unreachable_blocks ();
- /* Delete all unreachable basic blocks. */
-
- for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR; b = next_bb)
+ /* When we're in GIMPLE mode and there may be debug insns, we should
+ delete blocks in reverse dominator order, so as to get a chance
+ to substitute all released DEFs into debug stmts. If we don't
+ have dominators information, walking blocks backward gets us a
+ better chance of retaining most debug information than
+ otherwise. */
+ if (MAY_HAVE_DEBUG_STMTS && current_ir_type () == IR_GIMPLE
+ && dom_info_available_p (CDI_DOMINATORS))
{
- next_bb = b->next_bb;
+ for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
+ {
+ prev_bb = b->prev_bb;
+
+ if (!(b->flags & BB_REACHABLE))
+ {
+ /* Speed up the removal of blocks that don't dominate
+ others. Walking backwards, this should be the common
+ case. */
+ if (!first_dom_son (CDI_DOMINATORS, b))
+ delete_basic_block (b);
+ else
+ {
+ VEC (basic_block, heap) *h
+ = get_all_dominated_blocks (CDI_DOMINATORS, b);
+
+ while (VEC_length (basic_block, h))
+ {
+ b = VEC_pop (basic_block, h);
+
+ prev_bb = b->prev_bb;
- if (!(b->flags & BB_REACHABLE))
+ gcc_assert (!(b->flags & BB_REACHABLE));
+
+ delete_basic_block (b);
+ }
+
+ VEC_free (basic_block, heap, h);
+ }
+
+ changed = true;
+ }
+ }
+ }
+ else
+ {
+ for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
{
- delete_basic_block (b);
- changed = true;
+ prev_bb = b->prev_bb;
+
+ if (!(b->flags & BB_REACHABLE))
+ {
+ delete_basic_block (b);
+ changed = true;
+ }
}
}
TREE_TYPE (gimple_assign_lhs (stmt)),
gimple_assign_rhs1 (stmt));
else if (grhs_class == GIMPLE_SINGLE_RHS)
- t = gimple_assign_rhs1 (stmt);
+ {
+ t = gimple_assign_rhs1 (stmt);
+ /* Avoid modifying this tree in place below. */
+ if (gimple_has_location (stmt) && CAN_HAVE_LOCATION_P (t)
+ && gimple_location (stmt) != EXPR_LOCATION (t))
+ t = copy_node (t);
+ }
else
gcc_unreachable ();
if (dump_file && (dump_flags & TDF_DETAILS))
{
fprintf (dump_file, "\n;; ");
- print_gimple_stmt (dump_file, stmt, 0, TDF_SLIM);
+ print_gimple_stmt (dump_file, stmt, 0,
+ TDF_SLIM | (dump_flags & TDF_LINENO));
fprintf (dump_file, "\n");
print_rtl (dump_file, since ? NEXT_INSN (since) : since);
return bb;
}
+/* Return the difference between the floor and the truncated result of
+ a signed division by OP1 with remainder MOD. */
+static rtx
+floor_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+ /* (mod != 0 ? (op1 / mod < 0 ? -1 : 0) : 0) */
+ return gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+ gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_LT (BImode,
+ gen_rtx_DIV (mode, op1, mod),
+ const0_rtx),
+ constm1_rtx, const0_rtx),
+ const0_rtx);
+}
+
+/* Return the difference between the ceil and the truncated result of
+ a signed division by OP1 with remainder MOD. */
+static rtx
+ceil_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+ /* (mod != 0 ? (op1 / mod > 0 ? 1 : 0) : 0) */
+ return gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+ gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_GT (BImode,
+ gen_rtx_DIV (mode, op1, mod),
+ const0_rtx),
+ const1_rtx, const0_rtx),
+ const0_rtx);
+}
+
+/* Return the difference between the ceil and the truncated result of
+ an unsigned division by OP1 with remainder MOD. */
+static rtx
+ceil_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1 ATTRIBUTE_UNUSED)
+{
+ /* (mod != 0 ? 1 : 0) */
+ return gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+ const1_rtx, const0_rtx);
+}
+
+/* Return the difference between the rounded and the truncated result
+ of a signed division by OP1 with remainder MOD. Halfway cases are
+ rounded away from zero, rather than to the nearest even number. */
+static rtx
+round_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+ /* (abs (mod) >= abs (op1) - abs (mod)
+ ? (op1 / mod > 0 ? 1 : -1)
+ : 0) */
+ return gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_GE (BImode, gen_rtx_ABS (mode, mod),
+ gen_rtx_MINUS (mode,
+ gen_rtx_ABS (mode, op1),
+ gen_rtx_ABS (mode, mod))),
+ gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_GT (BImode,
+ gen_rtx_DIV (mode, op1, mod),
+ const0_rtx),
+ const1_rtx, constm1_rtx),
+ const0_rtx);
+}
+
+/* Return the difference between the rounded and the truncated result
+ of an unsigned division by OP1 with remainder MOD. Halfway cases
+ are rounded away from zero, rather than to the nearest even
+ number. */
+static rtx
+round_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+ /* (mod >= op1 - mod ? 1 : 0) */
+ return gen_rtx_IF_THEN_ELSE
+ (mode, gen_rtx_GE (BImode, mod,
+ gen_rtx_MINUS (mode, op1, mod)),
+ const1_rtx, const0_rtx);
+}
+
+/* Wrap modeless constants in CONST:MODE. */
+rtx
+wrap_constant (enum machine_mode mode, rtx x)
+{
+ if (GET_MODE (x) != VOIDmode)
+ return x;
+
+ if (CONST_INT_P (x)
+ || GET_CODE (x) == CONST_FIXED
+ || GET_CODE (x) == CONST_DOUBLE
+ || GET_CODE (x) == LABEL_REF)
+ {
+ gcc_assert (mode != VOIDmode);
+
+ x = gen_rtx_CONST (mode, x);
+ }
+
+ return x;
+}
+
+/* Remove CONST wrapper added by wrap_constant(). */
+rtx
+unwrap_constant (rtx x)
+{
+ rtx ret = x;
+
+ if (GET_CODE (x) != CONST)
+ return x;
+
+ x = XEXP (x, 0);
+
+ if (CONST_INT_P (x)
+ || GET_CODE (x) == CONST_FIXED
+ || GET_CODE (x) == CONST_DOUBLE
+ || GET_CODE (x) == LABEL_REF)
+ ret = x;
+
+ return ret;
+}
+
+/* Return an RTX equivalent to the value of the tree expression
+ EXP. */
+
+static rtx
+expand_debug_expr (tree exp)
+{
+ rtx op0 = NULL_RTX, op1 = NULL_RTX, op2 = NULL_RTX;
+ enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+ int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
+
+ switch (TREE_CODE_CLASS (TREE_CODE (exp)))
+ {
+ case tcc_expression:
+ switch (TREE_CODE (exp))
+ {
+ case COND_EXPR:
+ goto ternary;
+
+ case TRUTH_ANDIF_EXPR:
+ case TRUTH_ORIF_EXPR:
+ case TRUTH_AND_EXPR:
+ case TRUTH_OR_EXPR:
+ case TRUTH_XOR_EXPR:
+ goto binary;
+
+ case TRUTH_NOT_EXPR:
+ goto unary;
+
+ default:
+ break;
+ }
+ break;
+
+ ternary:
+ op2 = expand_debug_expr (TREE_OPERAND (exp, 2));
+ if (!op2)
+ return NULL_RTX;
+ /* Fall through. */
+
+ binary:
+ case tcc_binary:
+ case tcc_comparison:
+ op1 = expand_debug_expr (TREE_OPERAND (exp, 1));
+ if (!op1)
+ return NULL_RTX;
+ /* Fall through. */
+
+ unary:
+ case tcc_unary:
+ op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
+ if (!op0)
+ return NULL_RTX;
+ break;
+
+ case tcc_type:
+ case tcc_statement:
+ gcc_unreachable ();
+
+ case tcc_constant:
+ case tcc_exceptional:
+ case tcc_declaration:
+ case tcc_reference:
+ case tcc_vl_exp:
+ break;
+ }
+
+ switch (TREE_CODE (exp))
+ {
+ case STRING_CST:
+ if (!lookup_constant_def (exp))
+ {
+ op0 = gen_rtx_CONST_STRING (Pmode, TREE_STRING_POINTER (exp));
+ op0 = gen_rtx_MEM (BLKmode, op0);
+ set_mem_attributes (op0, exp, 0);
+ return op0;
+ }
+ /* Fall through... */
+
+ case INTEGER_CST:
+ case REAL_CST:
+ case FIXED_CST:
+ op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER);
+ return op0;
+
+ case COMPLEX_CST:
+ gcc_assert (COMPLEX_MODE_P (mode));
+ op0 = expand_debug_expr (TREE_REALPART (exp));
+ op0 = wrap_constant (GET_MODE_INNER (mode), op0);
+ op1 = expand_debug_expr (TREE_IMAGPART (exp));
+ op1 = wrap_constant (GET_MODE_INNER (mode), op1);
+ return gen_rtx_CONCAT (mode, op0, op1);
+
+ case VAR_DECL:
+ case PARM_DECL:
+ case FUNCTION_DECL:
+ case LABEL_DECL:
+ case CONST_DECL:
+ case RESULT_DECL:
+ op0 = DECL_RTL_IF_SET (exp);
+
+ /* This decl was probably optimized away. */
+ if (!op0)
+ return NULL;
+
+ op0 = copy_rtx (op0);
+
+ if (GET_MODE (op0) == BLKmode)
+ {
+ gcc_assert (MEM_P (op0));
+ op0 = adjust_address_nv (op0, mode, 0);
+ return op0;
+ }
+
+ /* Fall through. */
+
+ adjust_mode:
+ case PAREN_EXPR:
+ case NOP_EXPR:
+ case CONVERT_EXPR:
+ {
+ enum machine_mode inner_mode = GET_MODE (op0);
+
+ if (mode == inner_mode)
+ return op0;
+
+ if (inner_mode == VOIDmode)
+ {
+ inner_mode = TYPE_MODE (TREE_TYPE (TREE_OPERAND (exp, 0)));
+ if (mode == inner_mode)
+ return op0;
+ }
+
+ if (FLOAT_MODE_P (mode) && FLOAT_MODE_P (inner_mode))
+ {
+ if (GET_MODE_BITSIZE (mode) == GET_MODE_BITSIZE (inner_mode))
+ op0 = simplify_gen_subreg (mode, op0, inner_mode, 0);
+ else if (GET_MODE_BITSIZE (mode) < GET_MODE_BITSIZE (inner_mode))
+ op0 = simplify_gen_unary (FLOAT_TRUNCATE, mode, op0, inner_mode);
+ else
+ op0 = simplify_gen_unary (FLOAT_EXTEND, mode, op0, inner_mode);
+ }
+ else if (FLOAT_MODE_P (mode))
+ {
+ if (TYPE_UNSIGNED (TREE_TYPE (TREE_OPERAND (exp, 0))))
+ op0 = simplify_gen_unary (UNSIGNED_FLOAT, mode, op0, inner_mode);
+ else
+ op0 = simplify_gen_unary (FLOAT, mode, op0, inner_mode);
+ }
+ else if (FLOAT_MODE_P (inner_mode))
+ {
+ if (unsignedp)
+ op0 = simplify_gen_unary (UNSIGNED_FIX, mode, op0, inner_mode);
+ else
+ op0 = simplify_gen_unary (FIX, mode, op0, inner_mode);
+ }
+ else if (CONSTANT_P (op0)
+ || GET_MODE_BITSIZE (mode) <= GET_MODE_BITSIZE (inner_mode))
+ op0 = simplify_gen_subreg (mode, op0, inner_mode,
+ subreg_lowpart_offset (mode,
+ inner_mode));
+ else if (unsignedp)
+ op0 = gen_rtx_ZERO_EXTEND (mode, op0);
+ else
+ op0 = gen_rtx_SIGN_EXTEND (mode, op0);
+
+ return op0;
+ }
+
+ case INDIRECT_REF:
+ case ALIGN_INDIRECT_REF:
+ case MISALIGNED_INDIRECT_REF:
+ op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
+ if (!op0)
+ return NULL;
+
+ gcc_assert (GET_MODE (op0) == Pmode
+ || GET_CODE (op0) == CONST_INT
+ || GET_CODE (op0) == CONST_DOUBLE);
+
+ if (TREE_CODE (exp) == ALIGN_INDIRECT_REF)
+ {
+ int align = TYPE_ALIGN_UNIT (TREE_TYPE (exp));
+ op0 = gen_rtx_AND (Pmode, op0, GEN_INT (-align));
+ }
+
+ op0 = gen_rtx_MEM (mode, op0);
+
+ set_mem_attributes (op0, exp, 0);
+
+ return op0;
+
+ case TARGET_MEM_REF:
+ if (TMR_SYMBOL (exp) && !DECL_RTL_SET_P (TMR_SYMBOL (exp)))
+ return NULL;
+
+ op0 = expand_debug_expr
+ (tree_mem_ref_addr (build_pointer_type (TREE_TYPE (exp)),
+ exp));
+ if (!op0)
+ return NULL;
+
+ gcc_assert (GET_MODE (op0) == Pmode
+ || GET_CODE (op0) == CONST_INT
+ || GET_CODE (op0) == CONST_DOUBLE);
+
+ op0 = gen_rtx_MEM (mode, op0);
+
+ set_mem_attributes (op0, exp, 0);
+
+ return op0;
+
+ case ARRAY_REF:
+ case ARRAY_RANGE_REF:
+ case COMPONENT_REF:
+ case BIT_FIELD_REF:
+ case REALPART_EXPR:
+ case IMAGPART_EXPR:
+ case VIEW_CONVERT_EXPR:
+ {
+ enum machine_mode mode1;
+ HOST_WIDE_INT bitsize, bitpos;
+ tree offset;
+ int volatilep = 0;
+ tree tem = get_inner_reference (exp, &bitsize, &bitpos, &offset,
+ &mode1, &unsignedp, &volatilep, false);
+ rtx orig_op0;
+
+ orig_op0 = op0 = expand_debug_expr (tem);
+
+ if (!op0)
+ return NULL;
+
+ if (offset)
+ {
+ gcc_assert (MEM_P (op0));
+
+ op1 = expand_debug_expr (offset);
+ if (!op1)
+ return NULL;
+
+ op0 = gen_rtx_MEM (mode, gen_rtx_PLUS (Pmode, XEXP (op0, 0), op1));
+ }
+
+ if (MEM_P (op0))
+ {
+ if (bitpos >= BITS_PER_UNIT)
+ {
+ op0 = adjust_address_nv (op0, mode1, bitpos / BITS_PER_UNIT);
+ bitpos %= BITS_PER_UNIT;
+ }
+ else if (bitpos < 0)
+ {
+ int units = (-bitpos + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
+ op0 = adjust_address_nv (op0, mode1, units);
+ bitpos += units * BITS_PER_UNIT;
+ }
+ else if (bitpos == 0 && bitsize == GET_MODE_BITSIZE (mode))
+ op0 = adjust_address_nv (op0, mode, 0);
+ else if (GET_MODE (op0) != mode1)
+ op0 = adjust_address_nv (op0, mode1, 0);
+ else
+ op0 = copy_rtx (op0);
+ if (op0 == orig_op0)
+ op0 = shallow_copy_rtx (op0);
+ set_mem_attributes (op0, exp, 0);
+ }
+
+ if (bitpos == 0 && mode == GET_MODE (op0))
+ return op0;
+
+ if ((bitpos % BITS_PER_UNIT) == 0
+ && bitsize == GET_MODE_BITSIZE (mode1))
+ {
+ enum machine_mode opmode = GET_MODE (op0);
+
+ gcc_assert (opmode != BLKmode);
+
+ if (opmode == VOIDmode)
+ opmode = mode1;
+
+ /* This condition may hold if we're expanding the address
+ right past the end of an array that turned out not to
+ be addressable (i.e., the address was only computed in
+ debug stmts). The gen_subreg below would rightfully
+ crash, and the address doesn't really exist, so just
+ drop it. */
+ if (bitpos >= GET_MODE_BITSIZE (opmode))
+ return NULL;
+
+ return simplify_gen_subreg (mode, op0, opmode,
+ bitpos / BITS_PER_UNIT);
+ }
+
+ return simplify_gen_ternary (SCALAR_INT_MODE_P (GET_MODE (op0))
+ && TYPE_UNSIGNED (TREE_TYPE (exp))
+ ? SIGN_EXTRACT
+ : ZERO_EXTRACT, mode,
+ GET_MODE (op0) != VOIDmode
+ ? GET_MODE (op0) : mode1,
+ op0, GEN_INT (bitsize), GEN_INT (bitpos));
+ }
+
+ case EXC_PTR_EXPR:
+ /* ??? Do not call get_exception_pointer(), we don't want to gen
+ it if it hasn't been created yet. */
+ return get_exception_pointer ();
+
+ case FILTER_EXPR:
+ /* Likewise get_exception_filter(). */
+ return get_exception_filter ();
+
+ case ABS_EXPR:
+ return gen_rtx_ABS (mode, op0);
+
+ case NEGATE_EXPR:
+ return gen_rtx_NEG (mode, op0);
+
+ case BIT_NOT_EXPR:
+ return gen_rtx_NOT (mode, op0);
+
+ case FLOAT_EXPR:
+ if (unsignedp)
+ return gen_rtx_UNSIGNED_FLOAT (mode, op0);
+ else
+ return gen_rtx_FLOAT (mode, op0);
+
+ case FIX_TRUNC_EXPR:
+ if (unsignedp)
+ return gen_rtx_UNSIGNED_FIX (mode, op0);
+ else
+ return gen_rtx_FIX (mode, op0);
+
+ case POINTER_PLUS_EXPR:
+ case PLUS_EXPR:
+ return gen_rtx_PLUS (mode, op0, op1);
+
+ case MINUS_EXPR:
+ return gen_rtx_MINUS (mode, op0, op1);
+
+ case MULT_EXPR:
+ return gen_rtx_MULT (mode, op0, op1);
+
+ case RDIV_EXPR:
+ case TRUNC_DIV_EXPR:
+ case EXACT_DIV_EXPR:
+ if (unsignedp)
+ return gen_rtx_UDIV (mode, op0, op1);
+ else
+ return gen_rtx_DIV (mode, op0, op1);
+
+ case TRUNC_MOD_EXPR:
+ if (unsignedp)
+ return gen_rtx_UMOD (mode, op0, op1);
+ else
+ return gen_rtx_MOD (mode, op0, op1);
+
+ case FLOOR_DIV_EXPR:
+ if (unsignedp)
+ return gen_rtx_UDIV (mode, op0, op1);
+ else
+ {
+ rtx div = gen_rtx_DIV (mode, op0, op1);
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = floor_sdiv_adjust (mode, mod, op1);
+ return gen_rtx_PLUS (mode, div, adj);
+ }
+
+ case FLOOR_MOD_EXPR:
+ if (unsignedp)
+ return gen_rtx_UMOD (mode, op0, op1);
+ else
+ {
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = floor_sdiv_adjust (mode, mod, op1);
+ adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
+ return gen_rtx_PLUS (mode, mod, adj);
+ }
+
+ case CEIL_DIV_EXPR:
+ if (unsignedp)
+ {
+ rtx div = gen_rtx_UDIV (mode, op0, op1);
+ rtx mod = gen_rtx_UMOD (mode, op0, op1);
+ rtx adj = ceil_udiv_adjust (mode, mod, op1);
+ return gen_rtx_PLUS (mode, div, adj);
+ }
+ else
+ {
+ rtx div = gen_rtx_DIV (mode, op0, op1);
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = ceil_sdiv_adjust (mode, mod, op1);
+ return gen_rtx_PLUS (mode, div, adj);
+ }
+
+ case CEIL_MOD_EXPR:
+ if (unsignedp)
+ {
+ rtx mod = gen_rtx_UMOD (mode, op0, op1);
+ rtx adj = ceil_udiv_adjust (mode, mod, op1);
+ adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
+ return gen_rtx_PLUS (mode, mod, adj);
+ }
+ else
+ {
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = ceil_sdiv_adjust (mode, mod, op1);
+ adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
+ return gen_rtx_PLUS (mode, mod, adj);
+ }
+
+ case ROUND_DIV_EXPR:
+ if (unsignedp)
+ {
+ rtx div = gen_rtx_UDIV (mode, op0, op1);
+ rtx mod = gen_rtx_UMOD (mode, op0, op1);
+ rtx adj = round_udiv_adjust (mode, mod, op1);
+ return gen_rtx_PLUS (mode, div, adj);
+ }
+ else
+ {
+ rtx div = gen_rtx_DIV (mode, op0, op1);
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = round_sdiv_adjust (mode, mod, op1);
+ return gen_rtx_PLUS (mode, div, adj);
+ }
+
+ case ROUND_MOD_EXPR:
+ if (unsignedp)
+ {
+ rtx mod = gen_rtx_UMOD (mode, op0, op1);
+ rtx adj = round_udiv_adjust (mode, mod, op1);
+ adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
+ return gen_rtx_PLUS (mode, mod, adj);
+ }
+ else
+ {
+ rtx mod = gen_rtx_MOD (mode, op0, op1);
+ rtx adj = round_sdiv_adjust (mode, mod, op1);
+ adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
+ return gen_rtx_PLUS (mode, mod, adj);
+ }
+
+ case LSHIFT_EXPR:
+ return gen_rtx_ASHIFT (mode, op0, op1);
+
+ case RSHIFT_EXPR:
+ if (unsignedp)
+ return gen_rtx_LSHIFTRT (mode, op0, op1);
+ else
+ return gen_rtx_ASHIFTRT (mode, op0, op1);
+
+ case LROTATE_EXPR:
+ return gen_rtx_ROTATE (mode, op0, op1);
+
+ case RROTATE_EXPR:
+ return gen_rtx_ROTATERT (mode, op0, op1);
+
+ case MIN_EXPR:
+ if (unsignedp)
+ return gen_rtx_UMIN (mode, op0, op1);
+ else
+ return gen_rtx_SMIN (mode, op0, op1);
+
+ case MAX_EXPR:
+ if (unsignedp)
+ return gen_rtx_UMAX (mode, op0, op1);
+ else
+ return gen_rtx_SMAX (mode, op0, op1);
+
+ case BIT_AND_EXPR:
+ case TRUTH_AND_EXPR:
+ return gen_rtx_AND (mode, op0, op1);
+
+ case BIT_IOR_EXPR:
+ case TRUTH_OR_EXPR:
+ return gen_rtx_IOR (mode, op0, op1);
+
+ case BIT_XOR_EXPR:
+ case TRUTH_XOR_EXPR:
+ return gen_rtx_XOR (mode, op0, op1);
+
+ case TRUTH_ANDIF_EXPR:
+ return gen_rtx_IF_THEN_ELSE (mode, op0, op1, const0_rtx);
+
+ case TRUTH_ORIF_EXPR:
+ return gen_rtx_IF_THEN_ELSE (mode, op0, const_true_rtx, op1);
+
+ case TRUTH_NOT_EXPR:
+ return gen_rtx_EQ (mode, op0, const0_rtx);
+
+ case LT_EXPR:
+ if (unsignedp)
+ return gen_rtx_LTU (mode, op0, op1);
+ else
+ return gen_rtx_LT (mode, op0, op1);
+
+ case LE_EXPR:
+ if (unsignedp)
+ return gen_rtx_LEU (mode, op0, op1);
+ else
+ return gen_rtx_LE (mode, op0, op1);
+
+ case GT_EXPR:
+ if (unsignedp)
+ return gen_rtx_GTU (mode, op0, op1);
+ else
+ return gen_rtx_GT (mode, op0, op1);
+
+ case GE_EXPR:
+ if (unsignedp)
+ return gen_rtx_GEU (mode, op0, op1);
+ else
+ return gen_rtx_GE (mode, op0, op1);
+
+ case EQ_EXPR:
+ return gen_rtx_EQ (mode, op0, op1);
+
+ case NE_EXPR:
+ return gen_rtx_NE (mode, op0, op1);
+
+ case UNORDERED_EXPR:
+ return gen_rtx_UNORDERED (mode, op0, op1);
+
+ case ORDERED_EXPR:
+ return gen_rtx_ORDERED (mode, op0, op1);
+
+ case UNLT_EXPR:
+ return gen_rtx_UNLT (mode, op0, op1);
+
+ case UNLE_EXPR:
+ return gen_rtx_UNLE (mode, op0, op1);
+
+ case UNGT_EXPR:
+ return gen_rtx_UNGT (mode, op0, op1);
+
+ case UNGE_EXPR:
+ return gen_rtx_UNGE (mode, op0, op1);
+
+ case UNEQ_EXPR:
+ return gen_rtx_UNEQ (mode, op0, op1);
+
+ case LTGT_EXPR:
+ return gen_rtx_LTGT (mode, op0, op1);
+
+ case COND_EXPR:
+ return gen_rtx_IF_THEN_ELSE (mode, op0, op1, op2);
+
+ case COMPLEX_EXPR:
+ gcc_assert (COMPLEX_MODE_P (mode));
+ if (GET_MODE (op0) == VOIDmode)
+ op0 = gen_rtx_CONST (GET_MODE_INNER (mode), op0);
+ if (GET_MODE (op1) == VOIDmode)
+ op1 = gen_rtx_CONST (GET_MODE_INNER (mode), op1);
+ return gen_rtx_CONCAT (mode, op0, op1);
+
+ case ADDR_EXPR:
+ op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
+ if (!op0 || !MEM_P (op0))
+ return NULL;
+
+ return XEXP (op0, 0);
+
+ case VECTOR_CST:
+ exp = build_constructor_from_list (TREE_TYPE (exp),
+ TREE_VECTOR_CST_ELTS (exp));
+ /* Fall through. */
+
+ case CONSTRUCTOR:
+ if (TREE_CODE (TREE_TYPE (exp)) == VECTOR_TYPE)
+ {
+ unsigned i;
+ tree val;
+
+ op0 = gen_rtx_CONCATN
+ (mode, rtvec_alloc (TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp))));
+
+ FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), i, val)
+ {
+ op1 = expand_debug_expr (val);
+ if (!op1)
+ return NULL;
+ XVECEXP (op0, 0, i) = op1;
+ }
+
+ if (i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)))
+ {
+ op1 = expand_debug_expr
+ (fold_convert (TREE_TYPE (TREE_TYPE (exp)), integer_zero_node));
+
+ if (!op1)
+ return NULL;
+
+ for (; i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)); i++)
+ XVECEXP (op0, 0, i) = op1;
+ }
+
+ return op0;
+ }
+ else
+ goto flag_unsupported;
+
+ case CALL_EXPR:
+ /* ??? Maybe handle some builtins? */
+ return NULL;
+
+ case SSA_NAME:
+ {
+ int part = var_to_partition (SA.map, exp);
+
+ if (part == NO_PARTITION)
+ return NULL;
+
+ gcc_assert (part >= 0 && (unsigned)part < SA.map->num_partitions);
+
+ op0 = SA.partition_to_pseudo[part];
+ goto adjust_mode;
+ }
+
+ case ERROR_MARK:
+ return NULL;
+
+ default:
+ flag_unsupported:
+#ifdef ENABLE_CHECKING
+ debug_tree (exp);
+ gcc_unreachable ();
+#else
+ return NULL;
+#endif
+ }
+}
+
+/* Expand the _LOCs in debug insns. We run this after expanding all
+ regular insns, so that any variables referenced in the function
+ will have their DECL_RTLs set. */
+
+static void
+expand_debug_locations (void)
+{
+ rtx insn;
+ rtx last = get_last_insn ();
+ int save_strict_alias = flag_strict_aliasing;
+
+ /* New alias sets while setting up memory attributes cause
+ -fcompare-debug failures, even though they don't bring about any
+ codegen changes. */
+ flag_strict_aliasing = 0;
+
+ for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
+ if (DEBUG_INSN_P (insn))
+ {
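+ /* The location was stashed as a bare tree when the debug insn was
+ emitted by expand_gimple_basic_block; it only becomes rtl here. */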
+ tree value = (tree)INSN_VAR_LOCATION_LOC (insn);
+ rtx val;
+ enum machine_mode mode;
+
+ if (value == NULL_TREE)
+ val = NULL_RTX;
+ else
+ {
+ val = expand_debug_expr (value);
+ gcc_assert (last == get_last_insn ());
+ }
+
+ if (!val)
+ val = gen_rtx_UNKNOWN_VAR_LOC ();
+ else
+ {
+ mode = GET_MODE (INSN_VAR_LOCATION (insn));
+
+ gcc_assert (mode == GET_MODE (val)
+ || (GET_MODE (val) == VOIDmode
+ && (CONST_INT_P (val)
+ || GET_CODE (val) == CONST_FIXED
+ || GET_CODE (val) == CONST_DOUBLE
+ || GET_CODE (val) == LABEL_REF)));
+ }
+
+ INSN_VAR_LOCATION_LOC (insn) = val;
+ }
+
+ flag_strict_aliasing = save_strict_alias;
+}
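
Schematically (register number and mode are illustrative), a gimple bind such as

    # DEBUG x => y_1 + 1

is emitted by expand_gimple_basic_block below as a debug insn whose VAR_LOCATION still holds the bare tree, and the pass above then rewrites it into rtl, roughly:

    (debug_insn (var_location:SI x (plus:SI (reg:SI 60 [ y ])
                                            (const_int 1))))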
+
/* Expand basic block BB from GIMPLE trees to RTL. */
static basic_block
for (; !gsi_end_p (gsi); gsi_next (&gsi))
{
- gimple stmt = gsi_stmt (gsi);
basic_block new_bb;
+ stmt = gsi_stmt (gsi);
+
/* Expand this statement, then evaluate the resulting RTL and
fixup the CFG accordingly. */
if (gimple_code (stmt) == GIMPLE_COND)
if (new_bb)
return new_bb;
}
+ else if (gimple_debug_bind_p (stmt))
+ {
+ location_t sloc = get_curr_insn_source_location ();
+ tree sblock = get_curr_insn_block ();
+ gimple_stmt_iterator nsi = gsi;
+
+ for (;;)
+ {
+ tree var = gimple_debug_bind_get_var (stmt);
+ tree value;
+ rtx val;
+ enum machine_mode mode;
+
+ if (gimple_debug_bind_has_value_p (stmt))
+ value = gimple_debug_bind_get_value (stmt);
+ else
+ value = NULL_TREE;
+
+ last = get_last_insn ();
+
+ set_curr_insn_source_location (gimple_location (stmt));
+ set_curr_insn_block (gimple_block (stmt));
+
+ if (DECL_P (var))
+ mode = DECL_MODE (var);
+ else
+ mode = TYPE_MODE (TREE_TYPE (var));
+
+ val = gen_rtx_VAR_LOCATION
+ (mode, var, (rtx)value, VAR_INIT_STATUS_INITIALIZED);
+
+ val = emit_debug_insn (val);
+
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ {
+ /* We can't dump the insn with a TREE where an RTX
+ is expected. */
+ INSN_VAR_LOCATION_LOC (val) = const0_rtx;
+ maybe_dump_rtl_for_gimple_stmt (stmt, last);
+ INSN_VAR_LOCATION_LOC (val) = (rtx)value;
+ }
+
+ gsi = nsi;
+ gsi_next (&nsi);
+ if (gsi_end_p (nsi))
+ break;
+ stmt = gsi_stmt (nsi);
+ if (!gimple_debug_bind_p (stmt))
+ break;
+ }
+
+ set_curr_insn_source_location (sloc);
+ set_curr_insn_block (sblock);
+ }
else
{
if (is_gimple_call (stmt) && gimple_call_tail_p (stmt))
FOR_BB_BETWEEN (bb, init_block->next_bb, EXIT_BLOCK_PTR, next_bb)
bb = expand_gimple_basic_block (bb);
+ if (MAY_HAVE_DEBUG_INSNS)
+ expand_debug_locations ();
+
execute_free_datastructures ();
finish_out_of_ssa (&SA);
/* Hold current location information and last location information, so the
datastructures are built lazily only when some instructions in given
place are needed. */
-location_t curr_location, last_location;
+static location_t curr_location, last_location;
static tree curr_block, last_block;
static int curr_rtl_loc = -1;
time locators are not initialized. */
if (curr_rtl_loc == -1)
return;
- if (location == last_location)
- return;
curr_location = location;
}
-/* Set current scope block. */
+/* Get current location. */
+location_t
+get_curr_insn_source_location (void)
+{
+ return curr_location;
+}
+
+/* Set current scope block. */
void
set_curr_insn_block (tree b)
{
curr_block = b;
}
+/* Get current scope block. */
+tree
+get_curr_insn_block (void)
+{
+ return curr_block;
+}
+
/* Return current insn locator. */
int
curr_insn_locator (void)
{
switch (GET_CODE (insn))
{
+ case DEBUG_INSN:
case INSN:
case CALL_INSN:
case JUMP_INSN:
{
bb = bbs[i];
ninsns++;
- for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn))
- if (INSN_P (insn))
+ FOR_BB_INSNS (bb, insn)
+ if (NONDEBUG_INSN_P (insn))
ninsns++;
}
free(bbs);
{
bb = bbs[i];
- binsns = 1;
- for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn))
- if (INSN_P (insn))
+ binsns = 0;
+ FOR_BB_INSNS (bb, insn)
+ if (NONDEBUG_INSN_P (insn))
binsns++;
ratio = loop->header->frequency == 0
insn = first_insn_after_basic_block_note (bb);
if (insn)
- insn = PREV_INSN (insn);
+ {
+ rtx next = insn;
+
+ insn = PREV_INSN (insn);
+
+ /* If the block contains only debug insns, insn would have
+ been NULL in a non-debug compilation, and then we'd end
+ up emitting a DELETED note. For -fcompare-debug
+ stability, emit the note too. */
+ if (insn != BB_END (bb)
+ && DEBUG_INSN_P (next)
+ && DEBUG_INSN_P (BB_END (bb)))
+ {
+ while (next != BB_END (bb) && DEBUG_INSN_P (next))
+ next = NEXT_INSN (next);
+
+ if (next == BB_END (bb))
+ emit_note_after (NOTE_INSN_DELETED, next);
+ }
+ }
else
insn = get_last_insn ();
}
{
rtx b_head = BB_HEAD (b), b_end = BB_END (b), a_end = BB_END (a);
rtx del_first = NULL_RTX, del_last = NULL_RTX;
+ rtx b_debug_start = b_end, b_debug_end = b_end;
int b_empty = 0;
if (dump_file)
fprintf (dump_file, "merging block %d into block %d\n", b->index, a->index);
+ while (DEBUG_INSN_P (b_end))
+ b_end = PREV_INSN (b_debug_start = b_end);
+
/* If there was a CODE_LABEL beginning B, delete it. */
if (LABEL_P (b_head))
{
/* Reassociate the insns of B with A. */
if (!b_empty)
{
- update_bb_for_insn_chain (a_end, b_end, a);
+ update_bb_for_insn_chain (a_end, b_debug_end, a);
- a_end = b_end;
+ a_end = b_debug_end;
+ }
+ else if (b_end != b_debug_end)
+ {
+ /* Move any deleted labels and other notes between the end of A
+ and the debug insns that make up B after the debug insns,
+ bringing the debug insns into A while keeping the notes after
+ the end of A. */
+ if (NEXT_INSN (a_end) != b_debug_start)
+ reorder_insns_nobb (NEXT_INSN (a_end), PREV_INSN (b_debug_start),
+ b_debug_end);
+ update_bb_for_insn_chain (b_debug_start, b_debug_end, a);
+ a_end = b_debug_end;
}
df_bb_delete (b->index);
bool found;
edge_iterator ei;
+ if (DEBUG_INSN_P (insn) && insn != BB_HEAD (bb))
+ do
+ insn = PREV_INSN (insn);
+ while ((DEBUG_INSN_P (insn) || NOTE_P (insn)) && insn != BB_HEAD (bb));
+
/* If this instruction cannot trap, remove REG_EH_REGION notes. */
if (NONJUMP_INSN_P (insn)
&& (note = find_reg_note (insn, REG_EH_REGION, NULL)))
latter can appear when nonlocal gotos are used. */
if (e->flags & EDGE_EH)
{
- if (can_throw_internal (BB_END (bb))
+ if (can_throw_internal (insn)
/* If this is a call edge, verify that this is a call insn. */
&& (! (e->flags & EDGE_ABNORMAL_CALL)
- || CALL_P (BB_END (bb))))
+ || CALL_P (insn)))
{
ei_next (&ei);
continue;
}
else if (e->flags & EDGE_ABNORMAL_CALL)
{
- if (CALL_P (BB_END (bb))
+ if (CALL_P (insn)
&& (! (note = find_reg_note (insn, REG_EH_REGION, NULL))
|| INTVAL (XEXP (note, 0)) >= 0))
{
while (!CALL_P (insn)
&& insn != BB_HEAD (bb)
&& (keep_with_call_p (insn)
- || NOTE_P (insn)))
+ || NOTE_P (insn)
+ || DEBUG_INSN_P (insn)))
insn = PREV_INSN (insn);
return (CALL_P (insn));
}
{
FOR_BB_INSNS_REVERSE (bb, insn)
{
- if (!INSN_P (insn))
+ if (!NONDEBUG_INSN_P (insn))
continue;
/* Log links are created only once. */
insn = next ? next : NEXT_INSN (insn))
{
next = 0;
- if (INSN_P (insn))
+ if (NONDEBUG_INSN_P (insn))
{
/* See if we know about function return values before this
insn based upon SUBREG flags. */
&& GET_MODE_CLASS (GET_MODE (x)) == MODE_INT;
}
+#ifdef AUTO_INC_DEC
+/* Replace auto-increment addressing modes with explicit operations to
+ access the same addresses without modifying the corresponding
+ registers. If AFTER holds, SRC is meant to be reused after the
+ side effect, otherwise it is to be reused before that. */
+
+static rtx
+cleanup_auto_inc_dec (rtx src, bool after, enum machine_mode mem_mode)
+{
+ rtx x = src;
+ const RTX_CODE code = GET_CODE (x);
+ int i;
+ const char *fmt;
+
+ switch (code)
+ {
+ case REG:
+ case CONST_INT:
+ case CONST_DOUBLE:
+ case CONST_FIXED:
+ case CONST_VECTOR:
+ case SYMBOL_REF:
+ case CODE_LABEL:
+ case PC:
+ case CC0:
+ case SCRATCH:
+ /* SCRATCH must be shared because each represents a distinct value. */
+ return x;
+ case CLOBBER:
+ if (REG_P (XEXP (x, 0)) && REGNO (XEXP (x, 0)) < FIRST_PSEUDO_REGISTER)
+ return x;
+ break;
+
+ case CONST:
+ if (shared_const_p (x))
+ return x;
+ break;
+
+ case MEM:
+ mem_mode = GET_MODE (x);
+ break;
+
+ case PRE_INC:
+ case PRE_DEC:
+ case POST_INC:
+ case POST_DEC:
+ gcc_assert (mem_mode != VOIDmode && mem_mode != BLKmode);
+ if (after == (code == PRE_INC || code == PRE_DEC))
+ x = cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode);
+ else
+ x = gen_rtx_PLUS (GET_MODE (x),
+ cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode),
+ GEN_INT ((code == PRE_INC || code == POST_INC)
+ ? GET_MODE_SIZE (mem_mode)
+ : -GET_MODE_SIZE (mem_mode)));
+ return x;
+
+ case PRE_MODIFY:
+ case POST_MODIFY:
+ if (after == (code == PRE_MODIFY))
+ x = XEXP (x, 0);
+ else
+ x = XEXP (x, 1);
+ return cleanup_auto_inc_dec (x, after, mem_mode);
+
+ default:
+ break;
+ }
+
+ /* Copy the various flags, fields, and other information. We assume
+ that all fields need copying, and then clear the fields that should
+ not be copied. That is the sensible default behavior, and forces
+ us to explicitly document why we are *not* copying a flag. */
+ x = shallow_copy_rtx (x);
+
+ /* We do not copy the USED flag, which is used as a mark bit during
+ walks over the RTL. */
+ RTX_FLAG (x, used) = 0;
+
+ /* We do not copy FRAME_RELATED for INSNs. */
+ if (INSN_P (x))
+ RTX_FLAG (x, frame_related) = 0;
+
+ fmt = GET_RTX_FORMAT (code);
+ for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
+ if (fmt[i] == 'e')
+ XEXP (x, i) = cleanup_auto_inc_dec (XEXP (x, i), after, mem_mode);
+ else if (fmt[i] == 'E' || fmt[i] == 'V')
+ {
+ int j;
+ XVEC (x, i) = rtvec_alloc (XVECLEN (x, i));
+ for (j = 0; j < XVECLEN (x, i); j++)
+ XVECEXP (x, i, j)
+ = cleanup_auto_inc_dec (XVECEXP (src, i, j), after, mem_mode);
+ }
+
+ return x;
+}
+#endif
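
For instance, for a 4-byte SImode access (register number and the Pmode spelling "P" are illustrative), the rewrite yields:

    (mem:SI (post_inc:P (reg:P 100)))
        AFTER false -> (mem:SI (reg:P 100))
        AFTER true  -> (mem:SI (plus:P (reg:P 100) (const_int 4)))
    (mem:SI (pre_inc:P (reg:P 100)))
        AFTER false -> (mem:SI (plus:P (reg:P 100) (const_int 4)))
        AFTER true  -> (mem:SI (reg:P 100))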
+
+/* Auxiliary data structure for propagate_for_debug_stmt. */
+
+struct rtx_subst_pair
+{
+ rtx from, to;
+ bool changed;
+#ifdef AUTO_INC_DEC
+ bool adjusted;
+ bool after;
+#endif
+};
+
+/* Clean up any auto-updates in PAIR->to the first time this function
+ is called for a given PAIR. PAIR->adjusted records whether that
+ cleanup has already happened. */
+
+static void
+auto_adjust_pair (struct rtx_subst_pair *pair ATTRIBUTE_UNUSED)
+{
+#ifdef AUTO_INC_DEC
+ if (!pair->adjusted)
+ {
+ pair->adjusted = true;
+ pair->to = cleanup_auto_inc_dec (pair->to, pair->after, VOIDmode);
+ }
+#endif
+}
+
+/* If *LOC is the same as FROM in the struct rtx_subst_pair passed as
+ DATA, replace it with a copy of TO. Handle SUBREGs of *LOC as
+ well. */
+
+static int
+propagate_for_debug_subst (rtx *loc, void *data)
+{
+ struct rtx_subst_pair *pair = (struct rtx_subst_pair *)data;
+ rtx from = pair->from, to = pair->to;
+ rtx x = *loc, s = x;
+
+ if (rtx_equal_p (x, from)
+ || (GET_CODE (x) == SUBREG && rtx_equal_p ((s = SUBREG_REG (x)), from)))
+ {
+ auto_adjust_pair (pair);
+ if (pair->to != to)
+ to = pair->to;
+ else
+ to = copy_rtx (to);
+ if (s != x)
+ {
+ gcc_assert (GET_CODE (x) == SUBREG && SUBREG_REG (x) == s);
+ to = simplify_gen_subreg (GET_MODE (x), to,
+ GET_MODE (from), SUBREG_BYTE (x));
+ }
+ *loc = to;
+ pair->changed = true;
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Replace occurrences of DEST with SRC in DEBUG_INSNs between INSN
+ and LAST. If MOVE holds, debug insns must also be moved past
+ LAST. */
+
+static void
+propagate_for_debug (rtx insn, rtx last, rtx dest, rtx src, bool move)
+{
+ struct rtx_subst_pair p;
+ rtx next, move_pos = move ? last : NULL_RTX;
+
+ p.from = dest;
+ p.to = src;
+ p.changed = false;
+
+#ifdef AUTO_INC_DEC
+ p.adjusted = false;
+ p.after = move;
+#endif
+
+ next = NEXT_INSN (insn);
+ while (next != last)
+ {
+ insn = next;
+ next = NEXT_INSN (insn);
+ if (DEBUG_INSN_P (insn))
+ {
+ for_each_rtx (&INSN_VAR_LOCATION_LOC (insn),
+ propagate_for_debug_subst, &p);
+ if (!p.changed)
+ continue;
+ p.changed = false;
+ if (move_pos)
+ {
+ remove_insn (insn);
+ PREV_INSN (insn) = NEXT_INSN (insn) = NULL_RTX;
+ move_pos = emit_debug_insn_after (insn, move_pos);
+ }
+ else
+ df_insn_rescan (insn);
+ }
+ }
+}
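
For instance (register numbers illustrative), when an insn setting (reg:SI 100) from (plus:SI (reg:SI 101) (const_int 1)) is about to be deleted, a debug insn between it and LAST is rewritten in place:

    before: (debug_insn (var_location:SI x (reg:SI 100)))
    after:  (debug_insn (var_location:SI x (plus:SI (reg:SI 101)
                                                    (const_int 1))))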
/* Delete the conditional jump INSN and adjust the CFG correspondingly.
Note that the INSN should be deleted *after* removing dead edges, so
I2 and not in I3, a REG_DEAD note must be made. */
rtx i3dest_killed = 0;
/* SET_DEST and SET_SRC of I2 and I1. */
- rtx i2dest, i2src, i1dest = 0, i1src = 0;
+ rtx i2dest = 0, i2src = 0, i1dest = 0, i1src = 0;
+ /* Set if I2DEST was reused as a scratch register. */
+ bool i2scratch = false;
/* PATTERN (I1) and PATTERN (I2), or a copy of it in certain cases. */
rtx i1pat = 0, i2pat = 0;
/* Indicates if I2DEST or I1DEST is in I2SRC or I1_SRC. */
&& GET_CODE (SET_DEST (PATTERN (i3))) != STRICT_LOW_PART
&& ! reg_overlap_mentioned_p (SET_SRC (PATTERN (i3)),
SET_DEST (PATTERN (i3)))
- && next_real_insn (i2) == i3)
+ && next_active_insn (i2) == i3)
{
rtx p2 = PATTERN (i2);
subst_low_luid = DF_INSN_LUID (i2);
added_sets_2 = added_sets_1 = 0;
+ i2src = SET_DEST (PATTERN (i3));
i2dest = SET_SRC (PATTERN (i3));
i2dest_killed = dead_or_set_p (i2, i2dest);
undobuf.frees = buf;
}
}
+
+ i2scratch = m_split != 0;
}
/* If recog_for_combine has discarded clobbers, try to use them
bool subst_done = false;
newi2pat = NULL_RTX;
+ i2scratch = true;
+
/* Get NEWDEST as a register in the proper mode. We have already
validated that we can do this. */
if (GET_MODE (i2dest) != split_mode && split_mode != VOIDmode)
return 0;
}
+ if (MAY_HAVE_DEBUG_INSNS)
+ {
+ struct undo *undo;
+
+ for (undo = undobuf.undos; undo; undo = undo->next)
+ if (undo->kind == UNDO_MODE)
+ {
+ rtx reg = *undo->where.r;
+ enum machine_mode new_mode = GET_MODE (reg);
+ enum machine_mode old_mode = undo->old_contents.m;
+
+ /* Temporarily revert mode back. */
+ adjust_reg_mode (reg, old_mode);
+
+ if (reg == i2dest && i2scratch)
+ {
+ /* If we used i2dest as a scratch register with a
+ different mode, substitute it for the original
+ i2src while its original mode is temporarily
+ restored, and then clear i2scratch so that we don't
+ do it again later. */
+ propagate_for_debug (i2, i3, reg, i2src, false);
+ i2scratch = false;
+ /* Put back the new mode. */
+ adjust_reg_mode (reg, new_mode);
+ }
+ else
+ {
+ rtx tempreg = gen_raw_REG (old_mode, REGNO (reg));
+ rtx first, last;
+
+ if (reg == i2dest)
+ {
+ first = i2;
+ last = i3;
+ }
+ else
+ {
+ first = i3;
+ last = undobuf.other_insn;
+ gcc_assert (last);
+ }
+
+ /* We're dealing with a reg that changed mode but not
+ meaning, so we want to turn it into a subreg for
+ the new mode. However, because of REG sharing and
+ because its mode had already changed, we have to do
+ it in two steps. First, replace any debug uses of
+ reg, with its original mode temporarily restored,
+ with this copy we have created; then, replace the
+ copy with the SUBREG of the original shared reg,
+ once again changed to the new mode. */
+ propagate_for_debug (first, last, reg, tempreg, false);
+ adjust_reg_mode (reg, new_mode);
+ propagate_for_debug (first, last, tempreg,
+ lowpart_subreg (old_mode, reg, new_mode),
+ false);
+ }
+ }
+ }
+
/* If we will be able to accept this, we have made a
change to the destination of I3. This requires us to
do a few adjustments. */
if (newi2pat)
{
+ if (MAY_HAVE_DEBUG_INSNS && i2scratch)
+ propagate_for_debug (i2, i3, i2dest, i2src, false);
INSN_CODE (i2) = i2_code_number;
PATTERN (i2) = newi2pat;
}
else
- SET_INSN_DELETED (i2);
+ {
+ if (MAY_HAVE_DEBUG_INSNS && i2src)
+ propagate_for_debug (i2, i3, i2dest, i2src, i3_subst_into_i2);
+ SET_INSN_DELETED (i2);
+ }
if (i1)
{
LOG_LINKS (i1) = 0;
REG_NOTES (i1) = 0;
+ if (MAY_HAVE_DEBUG_INSNS)
+ propagate_for_debug (i1, i3, i1dest, i1src, false);
SET_INSN_DELETED (i1);
}
return 0;
}
+
+/* Return the next insn after INSN that is neither a NOTE nor a
+ DEBUG_INSN. This routine does not look inside SEQUENCEs. */
+
+static rtx
+next_nonnote_nondebug_insn (rtx insn)
+{
+ while (insn)
+ {
+ insn = NEXT_INSN (insn);
+ if (insn == 0)
+ break;
+ if (NOTE_P (insn))
+ continue;
+ if (DEBUG_INSN_P (insn))
+ continue;
+ break;
+ }
+
+ return insn;
+}
+
+
\f
/* Given a chain of REG_NOTES originally from FROM_INSN, try to place them
as appropriate. I3 and I2 are the insns resulting from the combination
place = from_insn;
else if (reg_referenced_p (XEXP (note, 0), PATTERN (i3)))
place = i3;
- else if (i2 != 0 && next_nonnote_insn (i2) == i3
+ else if (i2 != 0 && next_nonnote_nondebug_insn (i2) == i3
&& reg_referenced_p (XEXP (note, 0), PATTERN (i2)))
place = i2;
else if ((rtx_equal_p (XEXP (note, 0), elim_i2)
for (tem = PREV_INSN (tem); place == 0; tem = PREV_INSN (tem))
{
- if (! INSN_P (tem))
+ if (!NONDEBUG_INSN_P (tem))
{
if (tem == BB_HEAD (bb))
break;
for (tem = PREV_INSN (place); ;
tem = PREV_INSN (tem))
{
- if (! INSN_P (tem))
+ if (!NONDEBUG_INSN_P (tem))
{
if (tem == BB_HEAD (bb))
break;
(insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR
|| BB_HEAD (this_basic_block->next_bb) != insn));
insn = NEXT_INSN (insn))
- if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn)))
+ if (DEBUG_INSN_P (insn))
+ continue;
+ else if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn)))
{
if (reg_referenced_p (reg, PATTERN (insn)))
place = insn;
Common Report Var(flag_no_common,0) Optimization
Do not put uninitialized globals in the common section
-fconserve-stack
-Common Var(flag_conserve_stack) Optimization
-Do not perform optimizations increasing noticeably stack usage
-
fcompare-debug=
Common JoinedOrMissing RejectNegative Var(flag_compare_debug_opt)
-fcompare-debug[=<opts>] Compile with and without e.g. -gtoggle, and compare the final-insns dump
fcompare-debug-second
Common RejectNegative Var(flag_compare_debug)
Run only the second compilation of -fcompare-debug
+fconserve-stack
+Common Var(flag_conserve_stack) Optimization
+Do not perform optimizations increasing noticeably stack usage
+
fcprop-registers
Common Report Var(flag_cprop_registers) Optimization
Perform a register copy-propagation optimization pass
Common Report Var(flag_dump_unnumbered) VarExists
Suppress output of instruction numbers, line number notes and addresses in debugging dumps
-fdwarf2-cfi-asm
-Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE)
-Enable CFI tables via GAS assembler directives.
-
fdump-unnumbered-links
Common Report Var(flag_dump_unnumbered_links) VarExists
Suppress output of previous and next insn numbers in debugging dumps
+fdwarf2-cfi-asm
+Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE)
+Enable CFI tables via GAS assembler directives.
+
fearly-inlining
Common Report Var(flag_early_inlining) Init(1) Optimization
Perform early inlining
Common Report Var(flag_var_tracking) VarExists Optimization
Perform variable tracking
+fvar-tracking-assignments
+Common Report Var(flag_var_tracking_assignments) VarExists Optimization
+Perform variable tracking by annotating assignments
+
+fvar-tracking-assignments-toggle
+Common Report Var(flag_var_tracking_assignments_toggle) VarExists Optimization
+Toggle -fvar-tracking-assignments
+
fvar-tracking-uninit
Common Report Var(flag_var_tracking_uninit) Optimization
Perform variable tracking and also tag variables that are uninitialized
the DWARF output code. */
static rtx
-ix86_delegitimize_address (rtx orig_x)
+ix86_delegitimize_address (rtx x)
{
- rtx x = orig_x;
+ rtx orig_x = delegitimize_mem_from_attrs (x);
/* reg_addend is NULL or a multiple of some register. */
rtx reg_addend = NULL_RTX;
/* const_addend is NULL or a const_int. */
/* This is the result, or NULL. */
rtx result = NULL_RTX;
+ x = orig_x;
+
if (MEM_P (x))
x = XEXP (x, 0);
{
if (recog_memoized (insn) >= 0)
return get_attr_itanium_class (insn);
+ else if (DEBUG_INSN_P (insn))
+ return ITANIUM_CLASS_IGNORE;
else
return ITANIUM_CLASS_UNKNOWN;
}
switch (GET_CODE (insn))
{
case NOTE:
+ case DEBUG_INSN:
break;
case BARRIER:
init_insn_group_barriers ();
last_label = 0;
}
- else if (INSN_P (insn))
+ else if (NONDEBUG_INSN_P (insn))
{
insns_since_last_label = 1;
init_insn_group_barriers ();
}
- else if (INSN_P (insn))
+ else if (NONDEBUG_INSN_P (insn))
{
if (recog_memoized (insn) == CODE_FOR_insn_group_barrier)
init_insn_group_barriers ();
pending_data_specs--;
}
+ if (DEBUG_INSN_P (insn))
+ return 1;
+
last_scheduled_insn = insn;
memcpy (prev_cycle_state, curr_state, dfa_state_size);
if (reload_completed)
int setup_clocks_p = FALSE;
gcc_assert (insn && INSN_P (insn));
+
+ if (DEBUG_INSN_P (insn))
+ return 0;
+
/* When a group barrier is needed for insn, last_scheduled_insn
should be set. */
gcc_assert (!(reload_completed && safe_group_barrier_needed (insn))
need_barrier_p = 0;
prev_insn = NULL_RTX;
}
- else if (INSN_P (insn))
+ else if (NONDEBUG_INSN_P (insn))
{
if (recog_memoized (insn) == CODE_FOR_insn_group_barrier)
{
/* Define the CFA after INSN with the steady-state definition. */
static void
-ia64_dwarf2out_def_steady_cfa (rtx insn)
+ia64_dwarf2out_def_steady_cfa (rtx insn, bool frame)
{
rtx fp = frame_pointer_needed
? hard_frame_pointer_rtx
: stack_pointer_rtx;
+ const char *label = ia64_emit_deleted_label_after_insn (insn);
+
+ if (!frame)
+ return;
dwarf2out_def_cfa
- (ia64_emit_deleted_label_after_insn (insn),
- REGNO (fp),
+ (label, REGNO (fp),
ia64_initial_elimination_offset
(REGNO (arg_pointer_rtx), REGNO (fp))
+ ARG_POINTER_CFA_OFFSET (current_function_decl));
if (unwind)
fprintf (asm_out_file, "\t.fframe "HOST_WIDE_INT_PRINT_DEC"\n",
-INTVAL (op1));
- if (frame)
- ia64_dwarf2out_def_steady_cfa (insn);
+ ia64_dwarf2out_def_steady_cfa (insn, frame);
}
else
process_epilogue (asm_out_file, insn, unwind, frame);
if (unwind)
fprintf (asm_out_file, "\t.vframe r%d\n",
ia64_dbx_register_number (dest_regno));
- if (frame)
- ia64_dwarf2out_def_steady_cfa (insn);
+ ia64_dwarf2out_def_steady_cfa (insn, frame);
return 1;
default:
fprintf (asm_out_file, "\t.copy_state %d\n",
cfun->machine->state_num);
}
- if (IA64_CHANGE_CFA_IN_EPILOGUE && frame)
- ia64_dwarf2out_def_steady_cfa (insn);
+ if (IA64_CHANGE_CFA_IN_EPILOGUE)
+ ia64_dwarf2out_def_steady_cfa (insn, frame);
need_copy_state = false;
}
}
static rtx rs6000_legitimize_address (rtx, rtx, enum machine_mode);
static rtx rs6000_debug_legitimize_address (rtx, rtx, enum machine_mode);
static rtx rs6000_legitimize_tls_address (rtx, enum tls_model);
+static rtx rs6000_delegitimize_address (rtx);
static void rs6000_output_dwarf_dtprel (FILE *, int, rtx) ATTRIBUTE_UNUSED;
static rtx rs6000_tls_get_addr (void);
static rtx rs6000_got_sym (void);
#undef TARGET_USE_BLOCKS_FOR_CONSTANT_P
#define TARGET_USE_BLOCKS_FOR_CONSTANT_P rs6000_use_blocks_for_constant_p
+#undef TARGET_DELEGITIMIZE_ADDRESS
+#define TARGET_DELEGITIMIZE_ADDRESS rs6000_delegitimize_address
+
#undef TARGET_BUILTIN_RECIPROCAL
#define TARGET_BUILTIN_RECIPROCAL rs6000_builtin_reciprocal
return ret;
}
+/* If ORIG_X is a constant pool reference, return its known value,
+ otherwise ORIG_X. */
+
+static rtx
+rs6000_delegitimize_address (rtx x)
+{
+ rtx orig_x = delegitimize_mem_from_attrs (x);
+
+ x = orig_x;
+
+ if (!MEM_P (x))
+ return orig_x;
+
+ x = XEXP (x, 0);
+
+ if (legitimate_constant_pool_address_p (x)
+ && GET_CODE (XEXP (x, 1)) == CONST
+ && GET_CODE (XEXP (XEXP (x, 1), 0)) == MINUS
+ && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 0)) == SYMBOL_REF
+ && constant_pool_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 0))
+ && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 1)) == SYMBOL_REF
+ && toc_relative_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 1)))
+ return get_pool_constant (XEXP (XEXP (XEXP (x, 1), 0), 0));
+
+ return orig_x;
+}
+
/* This is called from dwarf2out.c via TARGET_ASM_OUTPUT_DWARF_DTPREL.
We need to emit DTP-relative relocations. */
static bool
is_microcoded_insn (rtx insn)
{
- if (!insn || !INSN_P (insn)
+ if (!insn || !NONDEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
static bool
is_cracked_insn (rtx insn)
{
- if (!insn || !INSN_P (insn)
+ if (!insn || !NONDEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
static bool
is_branch_slot_insn (rtx insn)
{
- if (!insn || !INSN_P (insn)
+ if (!insn || !NONDEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
is_nonpipeline_insn (rtx insn)
{
enum attr_type type;
- if (!insn || !INSN_P (insn)
+ if (!insn || !NONDEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
enum attr_type type;
if (!insn
- || insn == NULL_RTX
|| GET_CODE (insn) == NOTE
+ || DEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
enum attr_type type;
if (!insn
- || insn == NULL_RTX
|| GET_CODE (insn) == NOTE
+ || DEBUG_INSN_P (insn)
|| GET_CODE (PATTERN (insn)) == USE
|| GET_CODE (PATTERN (insn)) == CLOBBER)
return false;
bool end = *group_end;
int i;
- if (next_insn == NULL_RTX)
+ if (next_insn == NULL_RTX || DEBUG_INSN_P (next_insn))
return can_issue_more;
if (rs6000_sched_insert_nops > sched_finish_regroup_exact)
/* Language-dependent hooks for C++.
- Copyright 2001, 2002, 2004, 2007, 2008 Free Software Foundation, Inc.
+ Copyright 2001, 2002, 2004, 2007, 2008, 2009 Free Software Foundation, Inc.
Contributed by Alexandre Oliva <aoliva@redhat.com>
This file is part of GCC.
gcc_assert (DECL_P (t));
if (verbosity >= 2)
- return decl_as_string (t, TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME);
+ return decl_as_string (t,
+ TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME
+ | TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS);
return cxx_printable_name (t, verbosity);
}
TFF_EXPR_IN_PARENS: parenthesize expressions.
TFF_NO_FUNCTION_ARGUMENTS: don't show function arguments.
TFF_UNQUALIFIED_NAME: do not print the qualifying scope of the
- top-level entity. */
+ top-level entity.
+ TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS: do not omit template arguments
+ identical to their defaults. */
#define TFF_PLAIN_IDENTIFIER (0)
#define TFF_SCOPE (1)
#define TFF_EXPR_IN_PARENS (1 << 9)
#define TFF_NO_FUNCTION_ARGUMENTS (1 << 10)
#define TFF_UNQUALIFIED_NAME (1 << 11)
+#define TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS (1 << 12)
/* Returns the TEMPLATE_DECL associated to a TEMPLATE_TEMPLATE_PARM
node. */
static void dump_scope (tree, int);
static void dump_template_parms (tree, int, int);
-static int count_non_default_template_args (tree, tree);
+static int count_non_default_template_args (tree, tree, int);
static const char *function_category (tree);
static void maybe_print_instantiation_context (diagnostic_context *);
match the (optional) default template parameter in PARAMS */
static int
-count_non_default_template_args (tree args, tree params)
+count_non_default_template_args (tree args, tree params, int flags)
{
tree inner_args = INNERMOST_TEMPLATE_ARGS (args);
int n = TREE_VEC_LENGTH (inner_args);
int last;
- if (params == NULL_TREE || !flag_pretty_templates)
+ if (params == NULL_TREE
+ /* We use this flag when generating debug information. We don't
+ want to expand templates at this point, for this may generate
+ new decls, which gets decl counts out of sync, which may in
+ turn cause codegen differences between compilations with and
+ without -g. */
+ || (flags & TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS) != 0
+ || !flag_pretty_templates)
return n;
for (last = n - 1; last >= 0; --last)
static void
dump_template_argument_list (tree args, tree parms, int flags)
{
- int n = count_non_default_template_args (args, parms);
+ int n = count_non_default_template_args (args, parms, flags);
int need_comma = 0;
int i;
? DECL_INNERMOST_TEMPLATE_PARMS (TI_TEMPLATE (info))
: NULL_TREE);
- len = count_non_default_template_args (args, params);
+ len = count_non_default_template_args (args, params, flags);
args = INNERMOST_TEMPLATE_ARGS (args);
for (ix = 0; ix != len; ix++)
apply_change_group ();
fold_rtx (x, insn);
}
+ else if (DEBUG_INSN_P (insn))
+ canon_reg (PATTERN (insn), insn);
/* Store the equivalent value in SRC_EQV, if different, or if the DEST
is a STRICT_LOW_PART. The latter condition is necessary because SRC_EQV
{
prev = PREV_INSN (prev);
}
- while (prev != bb_head && NOTE_P (prev));
+ while (prev != bb_head && (NOTE_P (prev) || DEBUG_INSN_P (prev)));
/* Do not swap the registers around if the previous instruction
attaches a REG_EQUIV note to REG1.
FIXME: This is a real kludge and needs to be done some other
way. */
- if (INSN_P (insn)
+ if (NONDEBUG_INSN_P (insn)
&& num_insns++ > PARAM_VALUE (PARAM_MAX_CSE_INSNS))
{
flush_hash_table ();
incr);
return;
+ case DEBUG_INSN:
+ return;
+
case CALL_INSN:
case INSN:
case JUMP_INSN:
}
}
\f
+/* Return nonzero if *LOC is a dead pseudo register, according to the
+ reference counts passed in DATA. Usable as a for_each_rtx callback. */
+
+static int
+is_dead_reg (rtx *loc, void *data)
+{
+ rtx x = *loc;
+ int *counts = (int *)data;
+
+ return (REG_P (x)
+ && REGNO (x) >= FIRST_PSEUDO_REGISTER
+ && counts[REGNO (x)] == 0);
+}
+
/* Return true if set is live. */
static bool
set_live_p (rtx set, rtx insn ATTRIBUTE_UNUSED, /* Only used with HAVE_cc0. */
|| !reg_referenced_p (cc0_rtx, PATTERN (tem))))
return false;
#endif
- else if (!REG_P (SET_DEST (set))
- || REGNO (SET_DEST (set)) < FIRST_PSEUDO_REGISTER
- || counts[REGNO (SET_DEST (set))] != 0
+ else if (!is_dead_reg (&SET_DEST (set), counts)
|| side_effects_p (SET_SRC (set)))
return true;
return false;
}
return false;
}
+ else if (DEBUG_INSN_P (insn))
+ {
+ rtx next;
+
+ for (next = NEXT_INSN (insn); next; next = NEXT_INSN (next))
+ if (NOTE_P (next))
+ continue;
+ else if (!DEBUG_INSN_P (next))
+ return true;
+ else if (INSN_VAR_LOCATION_DECL (insn) == INSN_VAR_LOCATION_DECL (next))
+ return false;
+
+ /* If this debug insn references a dead register, drop the
+ location expression for now. ??? We could try to find the
+ def and see if propagation is possible. */
+ if (for_each_rtx (&INSN_VAR_LOCATION_LOC (insn), is_dead_reg, counts))
+ {
+ INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
+ df_insn_rescan (insn);
+ }
+
+ return true;
+ }
else
return true;
}
#include "output.h"
#include "ggc.h"
#include "hashtab.h"
+#include "tree-pass.h"
#include "cselib.h"
#include "params.h"
#include "alloc-pool.h"
static int discard_useless_locs (void **, void *);
static int discard_useless_values (void **, void *);
static void remove_useless_values (void);
-static rtx wrap_constant (enum machine_mode, rtx);
static unsigned int cselib_hash_rtx (rtx, int);
-static cselib_val *new_cselib_val (unsigned int, enum machine_mode);
+static cselib_val *new_cselib_val (unsigned int, enum machine_mode, rtx);
static void add_mem_for_addr (cselib_val *, cselib_val *, rtx);
static cselib_val *cselib_lookup_mem (rtx, int);
static void cselib_invalidate_regno (unsigned int, enum machine_mode);
static void cselib_record_set (rtx, cselib_val *, cselib_val *);
static void cselib_record_sets (rtx);
+struct expand_value_data
+{
+ bitmap regs_active;
+ cselib_expand_callback callback;
+ void *callback_arg;
+};
+
+static rtx cselib_expand_value_rtx_1 (rtx, struct expand_value_data *, int);
+
/* There are three ways in which cselib can look up an rtx:
- for a REG, the reg_values table (which is indexed by regno) is used
- for a MEM, we recursively look up its address and then follow the
/* If nonnull, cselib will call this function before freeing useless
VALUEs. A VALUE is deemed useless if its "locs" field is null. */
void (*cselib_discard_hook) (cselib_val *);
+
+/* If nonnull, cselib will call this function before recording sets or
+ even clobbering outputs of INSN. All the recorded sets will be
+ represented in the array sets[n_sets]. Values numbered at or above
+ what cselib_get_next_unknown_value () returned before the insn was
+ processed were introduced by this instruction. */
+void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets,
+ int n_sets);
+
+#define PRESERVED_VALUE_P(RTX) \
+ (RTL_FLAG_CHECK1("PRESERVED_VALUE_P", (RTX), VALUE)->unchanging)
+#define LONG_TERM_PRESERVED_VALUE_P(RTX) \
+ (RTL_FLAG_CHECK1("LONG_TERM_PRESERVED_VALUE_P", (RTX), VALUE)->in_struct)
+
\f
/* Allocate a struct elt_list and fill in its two elements with the
}
/* Remove all entries from the hash table. Also used during
- initialization. If CLEAR_ALL isn't set, then only clear the entries
- which are known to have been used. */
+ initialization. */
void
cselib_clear_table (void)
{
+ cselib_reset_table_with_next_value (0);
+}
+
+/* Remove all entries from the hash table, arranging for the next
+ value to be numbered NUM. */
+
+void
+cselib_reset_table_with_next_value (unsigned int num)
+{
unsigned int i;
for (i = 0; i < n_used_regs; i++)
n_used_regs = 0;
+ /* ??? Preserve constants? */
htab_empty (cselib_hash_table);
n_useless_values = 0;
- next_unknown_value = 0;
+ next_unknown_value = num;
first_containing_mem = &dummy_val;
}
+/* Return the number of the next value that will be generated. */
+
+unsigned int
+cselib_get_next_unknown_value (void)
+{
+ return next_unknown_value;
+}
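
A sketch of the intended protocol, as a hypothetical client of cselib_record_sets_hook (value_watermark and handle_new_value are illustrative names, not part of cselib):

    static unsigned int value_watermark;

    static void
    my_record_sets_hook (rtx insn, struct cselib_set *sets, int n_sets)
    {
      int i;

      /* VALUEs numbered at or above the watermark were created while
         processing this insn.  */
      for (i = 0; i < n_sets; i++)
        if (sets[i].src_elt && sets[i].src_elt->value >= value_watermark)
          handle_new_value (insn, sets[i].src_elt);
    }

    /* Once at initialization: */
    cselib_record_sets_hook = my_record_sets_hook;

    /* In the insn scan: */
    value_watermark = cselib_get_next_unknown_value ();
    cselib_process_insn (insn);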
+
/* The equality test for our hash table. The first argument ENTRY is a table
element (i.e. a cselib_val), while the second arg X is an rtx. We know
that all callers of htab_find_slot_with_hash will wrap CONST_INTs into a
p = &(*p)->next;
}
- if (had_locs && v->locs == 0)
+ if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
{
n_useless_values++;
values_became_useless = 1;
{
cselib_val *v = (cselib_val *)*x;
- if (v->locs == 0)
+ if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
{
if (cselib_discard_hook)
cselib_discard_hook (v);
gcc_assert (!n_useless_values);
}
+/* Arrange for a value to not be removed from the hash table even if
+ it becomes useless. */
+
+void
+cselib_preserve_value (cselib_val *v)
+{
+ PRESERVED_VALUE_P (v->val_rtx) = 1;
+}
+
+/* Test whether a value is preserved. */
+
+bool
+cselib_preserved_value_p (cselib_val *v)
+{
+ return PRESERVED_VALUE_P (v->val_rtx);
+}
+
+/* Mark preserved values as preserved for the long term. */
+
+static int
+cselib_preserve_definitely (void **slot, void *info ATTRIBUTE_UNUSED)
+{
+ cselib_val *v = (cselib_val *)*slot;
+
+ if (PRESERVED_VALUE_P (v->val_rtx)
+ && !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx))
+ LONG_TERM_PRESERVED_VALUE_P (v->val_rtx) = true;
+
+ return 1;
+}
+
+/* Clear the preserve marks for values not preserved for the long
+ term. */
+
+static int
+cselib_clear_preserve (void **slot, void *info ATTRIBUTE_UNUSED)
+{
+ cselib_val *v = (cselib_val *)*slot;
+
+ if (PRESERVED_VALUE_P (v->val_rtx)
+ && !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx))
+ {
+ PRESERVED_VALUE_P (v->val_rtx) = false;
+ if (!v->locs)
+ n_useless_values++;
+ }
+
+ return 1;
+}
+
+/* Clean all non-constant expressions in the hash table, but retain
+ their values. */
+
+void
+cselib_preserve_only_values (bool retain)
+{
+ int i;
+
+ htab_traverse (cselib_hash_table,
+ retain ? cselib_preserve_definitely : cselib_clear_preserve,
+ NULL);
+
+ for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+ cselib_invalidate_regno (i, reg_raw_mode[i]);
+
+ cselib_invalidate_mem (callmem);
+
+ remove_useless_values ();
+
+ gcc_assert (first_containing_mem == &dummy_val);
+}
+
/* Return the mode in which a register was last set. If X is not a
register, return its mode. If the mode in which the register was
set is not known, or the value was already clobbered, return
return 1;
}
-/* We need to pass down the mode of constants through the hash table
- functions. For that purpose, wrap them in a CONST of the appropriate
- mode. */
-static rtx
-wrap_constant (enum machine_mode mode, rtx x)
-{
- if (!CONST_INT_P (x) && GET_CODE (x) != CONST_FIXED
- && (GET_CODE (x) != CONST_DOUBLE || GET_MODE (x) != VOIDmode))
- return x;
- gcc_assert (mode != VOIDmode);
- return gen_rtx_CONST (mode, x);
-}
-
/* Hash an rtx. Return 0 if we couldn't hash the rtx.
For registers and memory locations, we look up their cselib_val structure
and return its VALUE element.
value is MODE. */
static inline cselib_val *
-new_cselib_val (unsigned int value, enum machine_mode mode)
+new_cselib_val (unsigned int value, enum machine_mode mode, rtx x)
{
cselib_val *e = (cselib_val *) pool_alloc (cselib_val_pool);
e->addr_list = 0;
e->locs = 0;
e->next_containing_mem = 0;
+
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ {
+ fprintf (dump_file, "cselib value %u ", value);
+ if (flag_dump_noaddr || flag_dump_unnumbered)
+ fputs ("# ", dump_file);
+ else
+ fprintf (dump_file, "%p ", (void*)e);
+ print_rtl_single (dump_file, x);
+ fputc ('\n', dump_file);
+ }
+
return e;
}
if (! create)
return 0;
- mem_elt = new_cselib_val (++next_unknown_value, mode);
+ mem_elt = new_cselib_val (++next_unknown_value, mode, x);
add_mem_for_addr (addr, mem_elt, x);
slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x),
mem_elt->value, INSERT);
expand to the same place. */
static rtx
-expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
+expand_loc (struct elt_loc_list *p, struct expand_value_data *evd,
+ int max_depth)
{
rtx reg_result = NULL;
unsigned int regno = UINT_MAX;
the same reg. */
if ((REG_P (p->loc))
&& (REGNO (p->loc) < regno)
- && !bitmap_bit_p (regs_active, REGNO (p->loc)))
+ && !bitmap_bit_p (evd->regs_active, REGNO (p->loc)))
{
reg_result = p->loc;
regno = REGNO (p->loc);
else if (!REG_P (p->loc))
{
rtx result, note;
- if (dump_file)
+ if (dump_file && (dump_flags & TDF_DETAILS))
{
print_inline_rtx (dump_file, p->loc, 0);
fprintf (dump_file, "\n");
&& (note = find_reg_note (p->setting_insn, REG_EQUAL, NULL_RTX))
&& XEXP (note, 0) == XEXP (p->loc, 1))
return XEXP (p->loc, 1);
- result = cselib_expand_value_rtx (p->loc, regs_active, max_depth - 1);
+ result = cselib_expand_value_rtx_1 (p->loc, evd, max_depth - 1);
if (result)
return result;
}
if (regno != UINT_MAX)
{
rtx result;
- if (dump_file)
+ if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "r%d\n", regno);
- result = cselib_expand_value_rtx (reg_result, regs_active, max_depth - 1);
+ result = cselib_expand_value_rtx_1 (reg_result, evd, max_depth - 1);
if (result)
return result;
}
- if (dump_file)
+ if (dump_file && (dump_flags & TDF_DETAILS))
{
if (reg_result)
{
rtx
cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
{
+ struct expand_value_data evd;
+
+ evd.regs_active = regs_active;
+ evd.callback = NULL;
+ evd.callback_arg = NULL;
+
+ return cselib_expand_value_rtx_1 (orig, &evd, max_depth);
+}
+
+/* Same as cselib_expand_value_rtx, but using a callback to try to
+ resolve VALUEs that expand to nothing. */
+
+rtx
+cselib_expand_value_rtx_cb (rtx orig, bitmap regs_active, int max_depth,
+ cselib_expand_callback cb, void *data)
+{
+ struct expand_value_data evd;
+
+ evd.regs_active = regs_active;
+ evd.callback = cb;
+ evd.callback_arg = data;
+
+ return cselib_expand_value_rtx_1 (orig, &evd, max_depth);
+}
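
A sketch of a cselib_expand_callback (my_value_lookup and struct my_map are hypothetical). Returning ORIG, or NULL, makes the expander fall back to the VALUE's own location list; any other rtx is recursively expanded:

    static rtx
    my_expand_cb (rtx orig, bitmap regs_active ATTRIBUTE_UNUSED,
                  int max_depth ATTRIBUTE_UNUSED, void *data)
    {
      rtx subst = my_value_lookup ((struct my_map *) data, orig);

      return subst ? subst : orig;
    }

It would be passed to cselib_expand_value_rtx_cb along with a pointer to the map as DATA.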
+
+static rtx
+cselib_expand_value_rtx_1 (rtx orig, struct expand_value_data *evd,
+ int max_depth)
+{
rtx copy, scopy;
int i, j;
RTX_CODE code;
|| regno == HARD_FRAME_POINTER_REGNUM)
return orig;
- bitmap_set_bit (regs_active, regno);
+ bitmap_set_bit (evd->regs_active, regno);
- if (dump_file)
+ if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "expanding: r%d into: ", regno);
- result = expand_loc (l->elt->locs, regs_active, max_depth);
- bitmap_clear_bit (regs_active, regno);
+ result = expand_loc (l->elt->locs, evd, max_depth);
+ bitmap_clear_bit (evd->regs_active, regno);
if (result)
return result;
case SUBREG:
{
- rtx subreg = cselib_expand_value_rtx (SUBREG_REG (orig), regs_active,
- max_depth - 1);
+ rtx subreg = cselib_expand_value_rtx_1 (SUBREG_REG (orig), evd,
+ max_depth - 1);
if (!subreg)
return NULL;
scopy = simplify_gen_subreg (GET_MODE (orig), subreg,
if (scopy == NULL
|| (GET_CODE (scopy) == SUBREG
&& !REG_P (SUBREG_REG (scopy))
- && !MEM_P (SUBREG_REG (scopy))))
+ && !MEM_P (SUBREG_REG (scopy))
+ && (REG_P (SUBREG_REG (orig))
+ || MEM_P (SUBREG_REG (orig)))))
return shallow_copy_rtx (orig);
return scopy;
}
case VALUE:
- if (dump_file)
- fprintf (dump_file, "expanding value %s into: ",
- GET_MODE_NAME (GET_MODE (orig)));
+ {
+ rtx result;
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ {
+ fputs ("\nexpanding ", dump_file);
+ print_rtl_single (dump_file, orig);
+ fputs (" into...", dump_file);
+ }
- return expand_loc (CSELIB_VAL_PTR (orig)->locs, regs_active, max_depth);
+ if (!evd->callback)
+ result = NULL;
+ else
+ {
+ result = evd->callback (orig, evd->regs_active, max_depth,
+ evd->callback_arg);
+ if (result == orig)
+ result = NULL;
+ else if (result)
+ result = cselib_expand_value_rtx_1 (result, evd, max_depth);
+ }
+ if (!result)
+ result = expand_loc (CSELIB_VAL_PTR (orig)->locs, evd, max_depth);
+ return result;
+ }
default:
break;
}
case 'e':
if (XEXP (orig, i) != NULL)
{
- rtx result = cselib_expand_value_rtx (XEXP (orig, i), regs_active, max_depth - 1);
+ rtx result = cselib_expand_value_rtx_1 (XEXP (orig, i), evd,
+ max_depth - 1);
if (!result)
return NULL;
XEXP (copy, i) = result;
XVEC (copy, i) = rtvec_alloc (XVECLEN (orig, i));
for (j = 0; j < XVECLEN (copy, i); j++)
{
- rtx result = cselib_expand_value_rtx (XVECEXP (orig, i, j), regs_active, max_depth - 1);
+ rtx result = cselib_expand_value_rtx_1 (XVECEXP (orig, i, j),
+ evd, max_depth - 1);
if (!result)
return NULL;
XVECEXP (copy, i, j) = result;
{
XEXP (copy, 0)
= gen_rtx_CONST (GET_MODE (XEXP (orig, 0)), XEXP (copy, 0));
- if (dump_file)
+ if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, " wrapping const_int result in const to preserve mode %s\n",
GET_MODE_NAME (GET_MODE (XEXP (copy, 0))));
}
scopy = simplify_rtx (copy);
if (scopy)
- return scopy;
+ {
+ if (GET_MODE (copy) != GET_MODE (scopy))
+ scopy = wrap_constant (GET_MODE (copy), scopy);
+ return scopy;
+ }
return copy;
}
{
/* This happens for autoincrements. Assign a value that doesn't
match any other. */
- e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+ e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
}
return e->val_rtx;
case PRE_DEC:
case POST_MODIFY:
case PRE_MODIFY:
- e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+ e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
return e->val_rtx;
default:
return copy;
}
+/* Log a lookup of X to the cselib table along with the result RET. */
+
+static cselib_val *
+cselib_log_lookup (rtx x, cselib_val *ret)
+{
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ {
+ fputs ("cselib lookup ", dump_file);
+ print_inline_rtx (dump_file, x, 2);
+ fprintf (dump_file, " => %u\n", ret ? ret->value : 0);
+ }
+
+ return ret;
+}
+
/* Look up the rtl expression X in our tables and return the value it has.
If CREATE is zero, we return NULL if we don't know the value. Otherwise,
we create a new one if possible, using mode MODE if X doesn't have a mode
l = l->next;
for (; l; l = l->next)
if (mode == GET_MODE (l->elt->val_rtx))
- return l->elt;
+ return cselib_log_lookup (x, l->elt);
if (! create)
- return 0;
+ return cselib_log_lookup (x, 0);
if (i < FIRST_PSEUDO_REGISTER)
{
max_value_regs = n;
}
- e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+ e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
e->locs = new_elt_loc_list (e->locs, x);
if (REG_VALUES (i) == 0)
{
REG_VALUES (i)->next = new_elt_list (REG_VALUES (i)->next, e);
slot = htab_find_slot_with_hash (cselib_hash_table, x, e->value, INSERT);
*slot = e;
- return e;
+ return cselib_log_lookup (x, e);
}
if (MEM_P (x))
- return cselib_lookup_mem (x, create);
+ return cselib_log_lookup (x, cselib_lookup_mem (x, create));
hashval = cselib_hash_rtx (x, create);
/* Can't even create if hashing is not possible. */
if (! hashval)
- return 0;
+ return cselib_log_lookup (x, 0);
slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x),
hashval, create ? INSERT : NO_INSERT);
if (slot == 0)
- return 0;
+ return cselib_log_lookup (x, 0);
e = (cselib_val *) *slot;
if (e)
- return e;
+ return cselib_log_lookup (x, e);
- e = new_cselib_val (hashval, mode);
+ e = new_cselib_val (hashval, mode, x);
/* We have to fill the slot before calling cselib_subst_to_values:
the hash table is inconsistent until we do so, and
cselib_subst_to_values will need to do lookups. */
*slot = (void *) e;
e->locs = new_elt_loc_list (e->locs, cselib_subst_to_values (x));
- return e;
+ return cselib_log_lookup (x, e);
}
/* Invalidate any entries in reg_values that overlap REGNO. This is called
break;
}
}
- if (v->locs == 0)
+ if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
n_useless_values++;
}
}
unchain_one_elt_loc_list (p);
}
- if (had_locs && v->locs == 0)
+ if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
n_useless_values++;
next = v->next_containing_mem;
REG_VALUES (dreg)->elt = src_elt;
}
- if (src_elt->locs == 0)
+ if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx))
n_useless_values--;
src_elt->locs = new_elt_loc_list (src_elt->locs, dest);
}
else if (MEM_P (dest) && dest_addr_elt != 0
&& cselib_record_memory)
{
- if (src_elt->locs == 0)
+ if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx))
n_useless_values--;
add_mem_for_addr (dest_addr_elt, src_elt, dest);
}
}
-/* Describe a single set that is part of an insn. */
-struct set
-{
- rtx src;
- rtx dest;
- cselib_val *src_elt;
- cselib_val *dest_addr_elt;
-};
-
/* There is no good way to determine how many elements there can be
in a PARALLEL. Since it's fairly cheap, use a really large number. */
#define MAX_SETS (FIRST_PSEUDO_REGISTER * 2)
{
int n_sets = 0;
int i;
- struct set sets[MAX_SETS];
+ struct cselib_set sets[MAX_SETS];
rtx body = PATTERN (insn);
rtx cond = 0;
}
}
+ if (cselib_record_sets_hook)
+ cselib_record_sets_hook (insn, sets, n_sets);
+
/* Invalidate all locations written by this insn. Note that the elts we
looked up in the previous loop aren't affected, just some of their
locations may go away. */
&& GET_CODE (PATTERN (insn)) == ASM_OPERANDS
&& MEM_VOLATILE_P (PATTERN (insn))))
{
- cselib_clear_table ();
+ cselib_reset_table_with_next_value (next_unknown_value);
return;
}
next_unknown_value = 0;
}
+/* Dump the cselib_val *X to FILE *INFO.  */
+
+static int
+dump_cselib_val (void **x, void *info)
+{
+ cselib_val *v = (cselib_val *)*x;
+ FILE *out = (FILE *)info;
+ bool need_lf = true;
+
+ print_inline_rtx (out, v->val_rtx, 0);
+
+ if (v->locs)
+ {
+ struct elt_loc_list *l = v->locs;
+ if (need_lf)
+ {
+ fputc ('\n', out);
+ need_lf = false;
+ }
+ fputs (" locs:", out);
+ do
+ {
+ fprintf (out, "\n from insn %i ",
+ INSN_UID (l->setting_insn));
+ print_inline_rtx (out, l->loc, 4);
+ }
+ while ((l = l->next));
+ fputc ('\n', out);
+ }
+ else
+ {
+ fputs (" no locs", out);
+ need_lf = true;
+ }
+
+ if (v->addr_list)
+ {
+ struct elt_list *e = v->addr_list;
+ if (need_lf)
+ {
+ fputc ('\n', out);
+ need_lf = false;
+ }
+ fputs (" addr list:", out);
+ do
+ {
+ fputs ("\n ", out);
+ print_inline_rtx (out, e->elt->val_rtx, 2);
+ }
+ while ((e = e->next));
+ fputc ('\n', out);
+ }
+ else
+ {
+ fputs (" no addrs", out);
+ need_lf = true;
+ }
+
+ if (v->next_containing_mem == &dummy_val)
+ fputs (" last mem\n", out);
+ else if (v->next_containing_mem)
+ {
+ fputs (" next mem ", out);
+ print_inline_rtx (out, v->next_containing_mem->val_rtx, 2);
+ fputc ('\n', out);
+ }
+ else if (need_lf)
+ fputc ('\n', out);
+
+ return 1;
+}
+
+/* Dump to OUT everything in the CSELIB table. */
+
+void
+dump_cselib_table (FILE *out)
+{
+ fprintf (out, "cselib hash table:\n");
+ htab_traverse (cselib_hash_table, dump_cselib_val, out);
+ if (first_containing_mem != &dummy_val)
+ {
+ fputs ("first mem ", out);
+ print_inline_rtx (out, first_containing_mem->val_rtx, 2);
+ fputc ('\n', out);
+ }
+ fprintf (out, "last unknown value %i\n", next_unknown_value);
+}
+
#include "gt-cselib.h"
cselib_val *elt;
};
+/* Describe a single set that is part of an insn. */
+struct cselib_set
+{
+ rtx src;
+ rtx dest;
+ cselib_val *src_elt;
+ cselib_val *dest_addr_elt;
+};
+
extern void (*cselib_discard_hook) (cselib_val *);
+extern void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets,
+ int n_sets);
extern cselib_val *cselib_lookup (rtx, enum machine_mode, int);
extern void cselib_init (bool record_memory);
extern int rtx_equal_for_cselib_p (rtx, rtx);
extern int references_value_p (const_rtx, int);
extern rtx cselib_expand_value_rtx (rtx, bitmap, int);
+typedef rtx (*cselib_expand_callback)(rtx, bitmap, int, void *);
+extern rtx cselib_expand_value_rtx_cb (rtx, bitmap, int,
+ cselib_expand_callback, void*);
extern rtx cselib_subst_to_values (rtx);
extern void cselib_invalidate_rtx (rtx);
+
+extern void cselib_reset_table_with_next_value (unsigned int);
+extern unsigned int cselib_get_next_unknown_value (void);
+extern void cselib_preserve_value (cselib_val *);
+extern bool cselib_preserved_value_p (cselib_val *);
+extern void cselib_preserve_only_values (bool);
+
+extern void dump_cselib_table (FILE *);
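
The new cselib_record_sets_hook declared above is the interface through
which a cselib client (the variable-tracking pass, in this series)
observes each insn's sets as they are recorded.  A minimal sketch of
such a client, where my_record_sets and my_note_binding are
hypothetical names:

  static void
  my_record_sets (rtx insn, struct cselib_set *sets, int n_sets)
  {
    int i;

    /* Each entry pairs a SET's dest and src with the cselib values
       looked up for them; src_elt may be NULL if no value could be
       computed.  */
    for (i = 0; i < n_sets; i++)
      if (sets[i].src_elt)
        my_note_binding (insn, sets[i].dest, sets[i].src_elt);
  }

  /* Installed before scanning insns:  */
  cselib_record_sets_hook = my_record_sets;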
switch (GET_CODE (body))
{
case USE:
+ case VAR_LOCATION:
return false;
case CLOBBER:
struct df_link *defs;
df_ref *use_rec;
+ if (DEBUG_INSN_P (insn))
+ return;
+
for (use_rec = DF_INSN_USES (insn); *use_rec; use_rec++)
{
df_ref use = *use_rec;
else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
t = OUTPUT_DEP;
+ gcc_assert (!DEBUG_INSN_P (dest_node->insn) || t == ANTI_DEP);
+ gcc_assert (!DEBUG_INSN_P (src_node->insn) || DEBUG_INSN_P (dest_node->insn));
+
/* We currently choose not to create certain anti-deps edges and
compensate for that by generating reg-moves based on the life-range
analysis. The anti-deps that will be deleted are the ones which
enum reg_note dep_kind;
struct _dep _dep, *dep = &_dep;
+ gcc_assert (!DEBUG_INSN_P (to->insn) || d_t == ANTI_DEP);
+ gcc_assert (!DEBUG_INSN_P (from->insn) || DEBUG_INSN_P (to->insn));
+
if (d_t == ANTI_DEP)
dep_kind = REG_DEP_ANTI;
else if (d_t == OUTPUT_DEP)
	  /* Add true deps from last_def to its uses in the next
iteration. Any such upwards exposed use appears before
the last_def def. */
- create_ddg_dep_no_link (g, last_def_node, use_node, TRUE_DEP,
+ create_ddg_dep_no_link (g, last_def_node, use_node,
+ DEBUG_INSN_P (use_insn) ? ANTI_DEP : TRUE_DEP,
REG_DEP, 1);
}
- else
+ else if (!DEBUG_INSN_P (use_insn))
{
/* Add anti deps from last_def's uses in the current iteration
to the first def in the next iteration. We do not add ANTI
for (j = 0; j <= i; j++)
{
ddg_node_ptr j_node = &g->nodes[j];
+ if (DEBUG_INSN_P (j_node->insn))
+ continue;
if (mem_access_insn_p (j_node->insn))
/* Don't bother calculating inter-loop dep if an intra-loop dep
already exists. */
if (! INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE)
continue;
- if (mem_read_insn_p (insn))
- g->num_loads++;
- if (mem_write_insn_p (insn))
- g->num_stores++;
+ if (DEBUG_INSN_P (insn))
+ g->num_debug++;
+ else
+ {
+ if (mem_read_insn_p (insn))
+ g->num_loads++;
+ if (mem_write_insn_p (insn))
+ g->num_stores++;
+ }
num_nodes++;
}
/* DDG - Data Dependence Graph - interface.
- Copyright (C) 2004, 2005, 2006, 2007
+ Copyright (C) 2004, 2005, 2006, 2007, 2008
Free Software Foundation, Inc.
Contributed by Ayal Zaks and Mustafa Hagog <zaks,mustafa@il.ibm.com>
int num_loads;
int num_stores;
+ /* Number of debug instructions in the BB. */
+ int num_debug;
+
/* This array holds the nodes in the graph; it is indexed by the node
cuid, which follows the order of the instructions in the BB. */
ddg_node_ptr nodes;
int closing_branch_deps;
/* Array and number of backarcs (edges with distance > 0) in the DDG. */
- ddg_edge_ptr *backarcs;
int num_backarcs;
+ ddg_edge_ptr *backarcs;
};
\f
{
unsigned int uid = INSN_UID (insn);
- if (!INSN_P (insn))
+ if (!NONDEBUG_INSN_P (insn))
continue;
for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
rtx curr = old;
rtx prev = NULL;
+ gcc_assert (!DEBUG_INSN_P (insn));
+
while (curr)
if (XEXP (curr, 0) == reg)
{
static rtx
df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws,
bitmap live, bitmap do_not_gen,
- bitmap artificial_uses)
+ bitmap artificial_uses, bool *added_notes_p)
{
unsigned int r;
+ bool is_debug = *added_notes_p;
+
+ *added_notes_p = false;
#ifdef REG_DEAD_DEBUGGING
if (dump_file)
if (df_whole_mw_reg_dead_p (mws, live, artificial_uses, do_not_gen))
{
/* Add a dead note for the entire multi word register. */
+ if (is_debug)
+ {
+ *added_notes_p = true;
+ return old;
+ }
old = df_set_note (REG_DEAD, insn, old, mws->mw_reg);
#ifdef REG_DEAD_DEBUGGING
df_print_note ("adding 1: ", insn, REG_NOTES (insn));
&& !bitmap_bit_p (artificial_uses, r)
&& !bitmap_bit_p (do_not_gen, r))
{
+ if (is_debug)
+ {
+ *added_notes_p = true;
+ return old;
+ }
old = df_set_note (REG_DEAD, insn, old, regno_reg_rtx[r]);
#ifdef REG_DEAD_DEBUGGING
df_print_note ("adding 2: ", insn, REG_NOTES (insn));
struct df_mw_hardreg **mws_rec;
rtx old_dead_notes;
rtx old_unused_notes;
+ int debug_insn;
if (!INSN_P (insn))
continue;
+ debug_insn = DEBUG_INSN_P (insn);
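+      /* For debug insns, no notes are added; instead, debug_insn is
+	 set to -1 below if the insn uses a register that turns out to
+	 be dead, flagging that its location must be reset.  */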
+
bitmap_clear (do_not_gen);
df_kill_notes (insn, &old_dead_notes, &old_unused_notes);
struct df_mw_hardreg *mws = *mws_rec;
if ((DF_MWS_REG_DEF_P (mws))
&& !df_ignore_stack_reg (mws->start_regno))
- old_dead_notes
- = df_set_dead_notes_for_mw (insn, old_dead_notes,
- mws, live, do_not_gen,
- artificial_uses);
+ {
+ bool really_add_notes = debug_insn != 0;
+
+ old_dead_notes
+ = df_set_dead_notes_for_mw (insn, old_dead_notes,
+ mws, live, do_not_gen,
+ artificial_uses,
+ &really_add_notes);
+
+ if (really_add_notes)
+ debug_insn = -1;
+ }
mws_rec++;
}
unsigned int uregno = DF_REF_REGNO (use);
#ifdef REG_DEAD_DEBUGGING
- if (dump_file)
+ if (dump_file && !debug_insn)
{
fprintf (dump_file, " regular looking at use ");
df_ref_debug (use, dump_file);
#endif
if (!bitmap_bit_p (live, uregno))
{
+ if (debug_insn)
+ {
+ debug_insn = -1;
+ break;
+ }
+
if ( (!(DF_REF_FLAGS (use) & DF_REF_MW_HARDREG))
&& (!bitmap_bit_p (do_not_gen, uregno))
&& (!bitmap_bit_p (artificial_uses, uregno))
free_EXPR_LIST_node (old_dead_notes);
old_dead_notes = next;
}
+
+ if (debug_insn == -1)
+ {
+ /* ??? We could probably do better here, replacing dead
+ registers with their definitions. */
+ INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
+ df_insn_rescan_debug_internal (insn);
+ }
}
}
df_ref *use_rec;
unsigned int uid = INSN_UID (insn);
+ if (DEBUG_INSN_P (insn))
+ return;
+
for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
{
df_ref use = *use_rec;
void
df_simulate_one_insn_backwards (basic_block bb, rtx insn, bitmap live)
{
- if (! INSN_P (insn))
+ if (!NONDEBUG_INSN_P (insn))
return;
df_simulate_defs (insn, live);
return true;
}
+/* Same as df_insn_rescan, but don't mark the basic block as
+ dirty. */
+
+bool
+df_insn_rescan_debug_internal (rtx insn)
+{
+ unsigned int uid = INSN_UID (insn);
+ struct df_insn_info *insn_info;
+
+ gcc_assert (DEBUG_INSN_P (insn));
+ gcc_assert (VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn)));
+
+ if (!df)
+ return false;
+
+ insn_info = DF_INSN_UID_SAFE_GET (INSN_UID (insn));
+ if (!insn_info)
+ return false;
+
+ if (dump_file)
+ fprintf (dump_file, "deleting debug_insn with uid = %d.\n", uid);
+
+ bitmap_clear_bit (df->insns_to_delete, uid);
+ bitmap_clear_bit (df->insns_to_rescan, uid);
+ bitmap_clear_bit (df->insns_to_notes_rescan, uid);
+
+ if (!insn_info->defs)
+ return false;
+
+ if (insn_info->defs == df_null_ref_rec
+ && insn_info->uses == df_null_ref_rec
+ && insn_info->eq_uses == df_null_ref_rec
+ && insn_info->mw_hardregs == df_null_mw_rec)
+ return false;
+
+ df_mw_hardreg_chain_delete (insn_info->mw_hardregs);
+
+ if (df_chain)
+ {
+ df_ref_chain_delete_du_chain (insn_info->defs);
+ df_ref_chain_delete_du_chain (insn_info->uses);
+ df_ref_chain_delete_du_chain (insn_info->eq_uses);
+ }
+
+ df_ref_chain_delete (insn_info->defs);
+ df_ref_chain_delete (insn_info->uses);
+ df_ref_chain_delete (insn_info->eq_uses);
+
+ insn_info->defs = df_null_ref_rec;
+ insn_info->uses = df_null_ref_rec;
+ insn_info->eq_uses = df_null_ref_rec;
+ insn_info->mw_hardregs = df_null_mw_rec;
+
+ return true;
+}
+
/* Rescan all of the insns in the function. Note that the artificial
   uses and defs are not touched.  This function will destroy def-use
break;
}
+ case VAR_LOCATION:
+ df_uses_record (cl, collection_rec,
+ &PAT_VAR_LOCATION_LOC (x),
+ DF_REF_REG_USE, bb, insn_info,
+ flags, width, offset, mode);
+ return;
+
case PRE_DEC:
case POST_DEC:
case PRE_INC:
case POST_INC:
case PRE_MODIFY:
case POST_MODIFY:
+ gcc_assert (!DEBUG_INSN_P (insn_info->insn));
/* Catch the def of the register being modified. */
df_ref_record (cl, collection_rec, XEXP (x, 0), &XEXP (x, 0),
bb, insn_info,
extern void df_insn_delete (basic_block, unsigned int);
extern void df_bb_refs_record (int, bool);
extern bool df_insn_rescan (rtx);
+extern bool df_insn_rescan_debug_internal (rtx);
extern void df_insn_rescan_all (void);
extern void df_process_deferred_rescans (void);
extern void df_recompute_luids (basic_block);
&& !diagnostic_report_warnings_p (location))
return false;
+ if (diagnostic->kind == DK_NOTE && flag_compare_debug)
+ return false;
+
if (diagnostic->kind == DK_PEDWARN)
diagnostic->kind = pedantic_warning_kind ();
@end deftypefn
@deftypefn {GIMPLE function} is_gimple_call (gimple g)
-Return true if the code of g is @code{GIMPLE_CALL}
+Return true if the code of g is @code{GIMPLE_CALL}.
@end deftypefn
+@deftypefn {GIMPLE function} is_gimple_debug (gimple g)
+Return true if the code of g is @code{GIMPLE_DEBUG}.
+@end deftypefn
+
@deftypefn {GIMPLE function} gimple_assign_cast_p (gimple g)
Return true if g is a @code{GIMPLE_ASSIGN} that performs a type cast
-operation
+operation.
+@end deftypefn
+
+@deftypefn {GIMPLE function} gimple_debug_bind_p (gimple g)
+Return true if g is a @code{GIMPLE_DEBUG} that binds the value of an
+expression to a variable.
@end deftypefn
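+
+For example, a pass that walks the statements of a basic block can use
+it to skip debug statements, so that they never influence code
+generation:
+
+@smallexample
+gimple_stmt_iterator gsi;
+
+for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+  @{
+    gimple stmt = gsi_stmt (gsi);
+
+    if (is_gimple_debug (stmt))
+      continue;
+    /* Process STMT as before.  */
+  @}
+@end smallexample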
@node Manipulating GIMPLE statements
Analogous to @code{bootstrap-O1}.
@item @samp{bootstrap-debug}
-Builds stage2 without debug information, and uses
-@file{contrib/compare-debug} to compare object files.
+Verifies that the compiler generates the same executable code, whether
+or not it is asked to emit debug information. To this end, this option
+builds stage2 host programs without debug information, and uses
+@file{contrib/compare-debug} to compare them with the stripped stage3
+object files. If @code{BOOT_CFLAGS} is overridden so as to not enable
+debug information, stage2 will have it, and stage3 won't. This option
+is enabled by default when GCC bootstrapping is enabled: in addition to
+better test coverage, it makes default bootstraps faster and leaner.
+
+@item @samp{bootstrap-debug-big}
+In addition to the checking performed by @code{bootstrap-debug}, this
+option saves internal compiler dumps during stage2 and stage3 and
+compares them as well, which helps catch additional potential problems,
+but at a great cost in terms of disk space.
+
+@item @samp{bootstrap-debug-lean}
+This option saves disk space compared with @code{bootstrap-debug-big},
+but at the expense of some recompilation. Instead of saving the dumps
+of stage2 and stage3 until the final compare, it uses
+@option{-fcompare-debug} to generate, compare and remove the dumps
+during stage3, repeating the compilation that already took place in
+stage2, whose dumps were not saved.
+
+@item @samp{bootstrap-debug-lib}
+This option tests executable code invariance over debug information
+generation on target libraries, just like @code{bootstrap-debug-lean}
+tests it on host programs. It builds stage3 libraries with
+@option{-fcompare-debug}, and it can be used along with any of the
+@code{bootstrap-debug} options above.
+
+There aren't @code{-lean} or @code{-big} counterparts to this option
+because most libraries are only built in stage3, so bootstrap compares
+would not get significant coverage. Moreover, the few libraries built
+in stage2 are used in stage3 host programs, so we wouldn't want to
+compile stage2 libraries with different options for comparison purposes.
+
+@item @samp{bootstrap-debug-ckovw}
+Arranges for error messages to be issued if the compiler built in any
+stage is run without the option @option{-fcompare-debug}. This is
+useful to verify the full @option{-fcompare-debug} testing coverage. It
+must be used along with @code{bootstrap-debug-lean} and
+@code{bootstrap-debug-lib}.
+
+@item @samp{bootstrap-time}
+Arranges for the run time of each program started by the GCC driver,
+built in any stage, to be logged to @file{time.log}, in the top level of
+the build tree.
@end table
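+
+These configurations are meant to be selected with the top-level
+@code{BUILD_CONFIG} make variable or the @option{--with-build-config}
+configure option, for example:
+
+@smallexample
+make BUILD_CONFIG=bootstrap-debug-lean bootstrap
+@end smallexample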
-frandom-seed=@var{string} -fsched-verbose=@var{n} @gol
-fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose @gol
-ftest-coverage -ftime-report -fvar-tracking @gol
+-fvar-tracking-assignments -fvar-tracking-assignments-toggle @gol
-g -g@var{level} -gtoggle -gcoff -gdwarf-@var{version} @gol
-ggdb -gstabs -gstabs+ -gvms -gxcoff -gxcoff+ @gol
-fno-merge-debug-strings -fno-dwarf2-cfi-asm @gol
@opindex gdwarf-@var{version}
Produce debugging information in DWARF format (if that is
supported). This is the format used by DBX on IRIX 6. The value
-of @var{version} may be either 2 or 3; the default version is 2.
+of @var{version} may be either 2, 3 or 4; the default version is 2.
Note that with DWARF version 2 some ports require, and will always
use, some non-conflicting DWARF 3 extensions in the unwind tables.
+Version 4 may require GDB 7.0 and @option{-fvar-tracking-assignments}
+for maximum benefit.
+
@item -gvms
@opindex gvms
Produce debugging information in VMS debug format (if that is
many times it is given. This is mainly intended to be used with
@option{-fcompare-debug}.
-@item -fdump-final-insns=@var{file}
-@opindex fdump-final-insns=
-Dump the final internal representation (RTL) to @var{file}.
+@item -fdump-final-insns@r{[}=@var{file}@r{]}
+@opindex fdump-final-insns
+Dump the final internal representation (RTL) to @var{file}. If the
+optional argument is omitted (or if @var{file} is @code{.}), the name
+of the dump file will be determined by appending @code{.gkd} to the
+compilation output file name.
@item -fcompare-debug@r{[}=@var{opts}@r{]}
@opindex fcompare-debug
@option{-O}, @option{-O2}, @dots{}), debugging information (@option{-g}) and
the debug info format supports it.
+@item -fvar-tracking-assignments
+@opindex fvar-tracking-assignments
+@opindex fno-var-tracking-assignments
+Annotate assignments to user variables early in the compilation and
+attempt to carry the annotations all the way through to the end of
+the compilation, so as to improve debug information while optimizing.
+Use of @option{-gdwarf-4} is recommended along with it.
+
+It can be enabled even if var-tracking is disabled, in which case
+annotations will be created and maintained, but discarded at the end.
+
+@item -fvar-tracking-assignments-toggle
+@opindex fvar-tracking-assignments-toggle
+@opindex fno-var-tracking-assignments-toggle
+Toggle @option{-fvar-tracking-assignments}, in the same way that
+@option{-gtoggle} toggles @option{-g}.
+
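+For example, to compile with optimization while generating the
+improved debug annotations and DWARF 4 debug info:
+
+@smallexample
+gcc -O2 -g -gdwarf-4 -fvar-tracking-assignments -c foo.c
+@end smallexample
+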
@item -print-file-name=@var{library}
@opindex print-file-name
Print the full absolute name of the library file @var{library} that
motion optimization performed on them. The default value of the
parameter is 1000 for -O1 and 10000 for -O2 and above.
+@item min-nondebug-insn-uid
+Use uids starting at this parameter for nondebug insns. The range below
+the parameter is reserved exclusively for debug insns created by
+@option{-fvar-tracking-assignments}, but debug insns may get
+(non-overlapping) uids above it if the reserved range is exhausted.
+
@end table
@end table
insn_info->insn = insn;
bb_info->last_insn = insn_info;
+ if (DEBUG_INSN_P (insn))
+ {
+ insn_info->cannot_delete = true;
+ return;
+ }
/* Cselib clears the table for this case, so we have to essentially
do the same. */
static GTY ((param_is (struct indirect_string_node))) htab_t debug_str_hash;
+/* True if the compilation unit has location entries that reference
+ debug strings. */
+static GTY(()) bool debug_str_hash_forced = false;
+
static GTY(()) int dw2_string_counter;
static GTY(()) unsigned long dwarf2out_cfi_label_num;
dw_val_class_file
};
-/* Describe a double word constant value. */
-/* ??? Every instance of long_long in the code really means CONST_DOUBLE. */
-
-typedef struct GTY(()) dw_long_long_struct {
- unsigned long hi;
- unsigned long low;
-}
-dw_long_long_const;
-
/* Describe a floating point constant value, or a vector constant value. */
typedef struct GTY(()) dw_vec_struct {
dw_loc_descr_ref GTY ((tag ("dw_val_class_loc"))) val_loc;
HOST_WIDE_INT GTY ((default)) val_int;
unsigned HOST_WIDE_INT GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
- dw_long_long_const GTY ((tag ("dw_val_class_long_long"))) val_long_long;
+ rtx GTY ((tag ("dw_val_class_long_long"))) val_long_long;
dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
struct dw_val_die_union
{
return "DW_OP_call4";
case DW_OP_call_ref:
return "DW_OP_call_ref";
+ case DW_OP_implicit_value:
+ return "DW_OP_implicit_value";
+ case DW_OP_stack_value:
+ return "DW_OP_stack_value";
case DW_OP_form_tls_address:
return "DW_OP_form_tls_address";
case DW_OP_call_frame_cfa:
case DW_OP_call_ref:
size += DWARF2_ADDR_SIZE;
break;
+ case DW_OP_implicit_value:
+ size += size_of_uleb128 (loc->dw_loc_oprnd1.v.val_unsigned)
+ + loc->dw_loc_oprnd1.v.val_unsigned;
+ break;
default:
break;
}
return size;
}
+#ifdef DWARF2_DEBUGGING_INFO
+static HOST_WIDE_INT extract_int (const unsigned char *, unsigned);
+#endif
+
/* Output location description stack opcode's operands (if any). */
static void
break;
case DW_OP_const8u:
case DW_OP_const8s:
- gcc_assert (HOST_BITS_PER_LONG >= 64);
+ gcc_assert (HOST_BITS_PER_WIDE_INT >= 64);
dw2_asm_output_data (8, val1->v.val_int, NULL);
break;
case DW_OP_skip:
dw2_asm_output_data (2, offset, NULL);
}
break;
+ case DW_OP_implicit_value:
+ dw2_asm_output_data_uleb128 (val1->v.val_unsigned, NULL);
+ switch (val2->val_class)
+ {
+ case dw_val_class_const:
+ dw2_asm_output_data (val1->v.val_unsigned, val2->v.val_int, NULL);
+ break;
+ case dw_val_class_vec:
+ {
+ unsigned int elt_size = val2->v.val_vec.elt_size;
+ unsigned int len = val2->v.val_vec.length;
+ unsigned int i;
+ unsigned char *p;
+
+ if (elt_size > sizeof (HOST_WIDE_INT))
+ {
+ elt_size /= 2;
+ len *= 2;
+ }
+ for (i = 0, p = val2->v.val_vec.array;
+ i < len;
+ i++, p += elt_size)
+ dw2_asm_output_data (elt_size, extract_int (p, elt_size),
+ "fp or vector constant word %u", i);
+ }
+ break;
+ case dw_val_class_long_long:
+ {
+ unsigned HOST_WIDE_INT first, second;
+
+ if (WORDS_BIG_ENDIAN)
+ {
+ first = CONST_DOUBLE_HIGH (val2->v.val_long_long);
+ second = CONST_DOUBLE_LOW (val2->v.val_long_long);
+ }
+ else
+ {
+ first = CONST_DOUBLE_LOW (val2->v.val_long_long);
+ second = CONST_DOUBLE_HIGH (val2->v.val_long_long);
+ }
+ dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
+ first, "long long constant");
+ dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
+ second, NULL);
+ }
+ break;
+ case dw_val_class_addr:
+ gcc_assert (val1->v.val_unsigned == DWARF2_ADDR_SIZE);
+ dw2_asm_output_addr_rtx (DWARF2_ADDR_SIZE, val2->v.val_addr, NULL);
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ break;
#else
case DW_OP_const2u:
case DW_OP_const2s:
case DW_OP_const8s:
case DW_OP_skip:
case DW_OP_bra:
+ case DW_OP_implicit_value:
/* We currently don't make any attempt to make sure these are
aligned properly like we do for the main unwind info, so
don't support emitting things larger than a byte if we're
switch (loc->dw_loc_opc)
{
case DW_OP_addr:
+ case DW_OP_implicit_value:
/* We cannot output addresses in .cfi_escape, only bytes. */
gcc_unreachable ();
case DW_OP_const8u:
case DW_OP_const8s:
- gcc_assert (HOST_BITS_PER_LONG >= 64);
+ gcc_assert (HOST_BITS_PER_WIDE_INT >= 64);
fputc (',', asm_out_file);
dw2_asm_output_data_raw (8, val1->v.val_int);
break;
static inline HOST_WIDE_INT AT_int (dw_attr_ref);
static void add_AT_unsigned (dw_die_ref, enum dwarf_attribute, unsigned HOST_WIDE_INT);
static inline unsigned HOST_WIDE_INT AT_unsigned (dw_attr_ref);
-static void add_AT_long_long (dw_die_ref, enum dwarf_attribute, unsigned long,
- unsigned long);
+static void add_AT_long_long (dw_die_ref, enum dwarf_attribute, rtx);
static inline void add_AT_vec (dw_die_ref, enum dwarf_attribute, unsigned int,
unsigned int, unsigned char *);
static hashval_t debug_str_do_hash (const void *);
enum var_init_status);
static dw_loc_descr_ref concat_loc_descriptor (rtx, rtx,
enum var_init_status);
-static dw_loc_descr_ref loc_descriptor (rtx, enum var_init_status);
+static dw_loc_descr_ref loc_descriptor (rtx, enum machine_mode mode,
+ enum var_init_status);
static dw_loc_descr_ref loc_descriptor_from_tree_1 (tree, int);
static dw_loc_descr_ref loc_descriptor_from_tree (tree);
static HOST_WIDE_INT ceiling (HOST_WIDE_INT, unsigned int);
static void add_data_member_location_attribute (dw_die_ref, tree);
static void add_const_value_attribute (dw_die_ref, rtx);
static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static HOST_WIDE_INT extract_int (const unsigned char *, unsigned);
static void insert_float (const_rtx, unsigned char *);
static rtx rtl_for_decl_location (tree);
static void add_location_or_const_value_attribute (dw_die_ref, tree,
static inline void
add_AT_long_long (dw_die_ref die, enum dwarf_attribute attr_kind,
- long unsigned int val_hi, long unsigned int val_low)
+ rtx val_const_double)
{
dw_attr_node attr;
attr.dw_attr = attr_kind;
attr.dw_attr_val.val_class = dw_val_class_long_long;
- attr.dw_attr_val.v.val_long_long.hi = val_hi;
- attr.dw_attr_val.v.val_long_long.low = val_low;
+ attr.dw_attr_val.v.val_long_long = val_const_double;
add_dwarf_attr (die, &attr);
}
(const char *)x2) == 0;
}
+/* Add STR to the indirect string hash table. */
+
static struct indirect_string_node *
find_AT_string (const char *str)
{
add_dwarf_attr (die, &attr);
}
+/* Create a label for an indirect string node, ensuring it will be
+   output unless its reference count drops to zero.  */
+
+static inline void
+gen_label_for_indirect_string (struct indirect_string_node *node)
+{
+ char label[32];
+
+ if (node->label)
+ return;
+
+ ASM_GENERATE_INTERNAL_LABEL (label, "LASF", dw2_string_counter);
+ ++dw2_string_counter;
+ node->label = xstrdup (label);
+}
+
+/* Create a SYMBOL_REF rtx whose value is the initial address of a
+ debug string STR. */
+
+static inline rtx
+get_debug_string_label (const char *str)
+{
+ struct indirect_string_node *node = find_AT_string (str);
+
+ debug_str_hash_forced = true;
+
+ gen_label_for_indirect_string (node);
+
+ return gen_rtx_SYMBOL_REF (Pmode, node->label);
+}
+
static inline const char *
AT_string (dw_attr_ref a)
{
{
struct indirect_string_node *node;
unsigned int len;
- char label[32];
gcc_assert (a && AT_class (a) == dw_val_class_str);
&& (len - DWARF_OFFSET_SIZE) * node->refcount <= len))
return node->form = DW_FORM_string;
- ASM_GENERATE_INTERNAL_LABEL (label, "LASF", dw2_string_counter);
- ++dw2_string_counter;
- node->label = xstrdup (label);
+ gen_label_for_indirect_string (node);
return node->form = DW_FORM_strp;
}
fprintf (outfile, HOST_WIDE_INT_PRINT_UNSIGNED, AT_unsigned (a));
break;
case dw_val_class_long_long:
- fprintf (outfile, "constant (%lu,%lu)",
- a->dw_attr_val.v.val_long_long.hi,
- a->dw_attr_val.v.val_long_long.low);
+ fprintf (outfile, "constant (" HOST_WIDE_INT_PRINT_UNSIGNED
+ "," HOST_WIDE_INT_PRINT_UNSIGNED ")",
+ CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long),
+ CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long));
break;
case dw_val_class_vec:
fprintf (outfile, "floating-point or vector constant");
CHECKSUM (at->dw_attr_val.v.val_unsigned);
break;
case dw_val_class_long_long:
- CHECKSUM (at->dw_attr_val.v.val_long_long);
+ CHECKSUM (CONST_DOUBLE_HIGH (at->dw_attr_val.v.val_long_long));
+ CHECKSUM (CONST_DOUBLE_LOW (at->dw_attr_val.v.val_long_long));
break;
case dw_val_class_vec:
CHECKSUM (at->dw_attr_val.v.val_vec);
case dw_val_class_unsigned_const:
return v1->v.val_unsigned == v2->v.val_unsigned;
case dw_val_class_long_long:
- return v1->v.val_long_long.hi == v2->v.val_long_long.hi
- && v1->v.val_long_long.low == v2->v.val_long_long.low;
+ return CONST_DOUBLE_HIGH (v1->v.val_long_long)
+ == CONST_DOUBLE_HIGH (v2->v.val_long_long)
+ && CONST_DOUBLE_LOW (v1->v.val_long_long)
+ == CONST_DOUBLE_LOW (v2->v.val_long_long);
case dw_val_class_vec:
if (v1->v.val_vec.length != v2->v.val_vec.length
|| v1->v.val_vec.elt_size != v2->v.val_vec.elt_size)
size += constant_size (AT_unsigned (a));
break;
case dw_val_class_long_long:
- size += 1 + 2*HOST_BITS_PER_LONG/HOST_BITS_PER_CHAR; /* block */
+ size += 1 + 2*HOST_BITS_PER_WIDE_INT/HOST_BITS_PER_CHAR; /* block */
break;
case dw_val_class_vec:
size += constant_size (a->dw_attr_val.v.val_vec.length
unsigned HOST_WIDE_INT first, second;
dw2_asm_output_data (1,
- 2 * HOST_BITS_PER_LONG / HOST_BITS_PER_CHAR,
+ 2 * HOST_BITS_PER_WIDE_INT
+ / HOST_BITS_PER_CHAR,
"%s", name);
if (WORDS_BIG_ENDIAN)
{
- first = a->dw_attr_val.v.val_long_long.hi;
- second = a->dw_attr_val.v.val_long_long.low;
+ first = CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long);
+ second = CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long);
}
else
{
- first = a->dw_attr_val.v.val_long_long.low;
- second = a->dw_attr_val.v.val_long_long.hi;
+ first = CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long);
+ second = CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long);
}
- dw2_asm_output_data (HOST_BITS_PER_LONG / HOST_BITS_PER_CHAR,
+ dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
first, "long long constant");
- dw2_asm_output_data (HOST_BITS_PER_LONG / HOST_BITS_PER_CHAR,
+ dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
second, NULL);
}
break;
{
dw_loc_descr_ref mem_loc_result = NULL;
enum dwarf_location_atom op;
+ dw_loc_descr_ref op0, op1;
/* Note that for a dynamically sized array, the location we will generate a
description of here will be the lowest numbered location which is
legitimate to make the Dwarf info refer to the whole register which
contains the given subreg. */
rtl = XEXP (rtl, 0);
+ if (GET_MODE_SIZE (GET_MODE (rtl)) > DWARF2_ADDR_SIZE)
+ break;
/* ... fall through ... */
}
break;
+ case SIGN_EXTEND:
+ case ZERO_EXTEND:
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ if (op0 == 0)
+ break;
+ else
+ {
+ int shift = DWARF2_ADDR_SIZE
+ - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
+ shift *= BITS_PER_UNIT;
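+	  /* E.g. sign-extending a QImode value with 4-byte addresses
+	     shifts it left 24 bits and arithmetically shifts it back,
+	     reproducing the extended value on the DWARF stack.  */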
+ if (GET_CODE (rtl) == SIGN_EXTEND)
+ op = DW_OP_shra;
+ else
+ op = DW_OP_shr;
+ mem_loc_result = op0;
+ add_loc_descr (&mem_loc_result, int_loc_descriptor (shift));
+ add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_shl, 0, 0));
+ add_loc_descr (&mem_loc_result, int_loc_descriptor (shift));
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ }
+ break;
+
case MEM:
mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (rtl),
VAR_INIT_STATUS_INITIALIZED);
return 0;
}
+ if (GET_CODE (rtl) == SYMBOL_REF
+ && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE)
+ {
+ dw_loc_descr_ref temp;
+
+ /* If this is not defined, we have no way to emit the data. */
+ if (!targetm.have_tls || !targetm.asm_out.output_dwarf_dtprel)
+ break;
+
+ temp = new_loc_descr (DW_OP_addr, 0, 0);
+ temp->dw_loc_oprnd1.val_class = dw_val_class_addr;
+ temp->dw_loc_oprnd1.v.val_addr = rtl;
+ temp->dtprel = true;
+
+ mem_loc_result = new_loc_descr (DW_OP_GNU_push_tls_address, 0, 0);
+ add_loc_descr (&mem_loc_result, temp);
+
+ break;
+ }
+
+ symref:
mem_loc_result = new_loc_descr (DW_OP_addr, 0, 0);
mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_addr;
mem_loc_result->dw_loc_oprnd1.v.val_addr = rtl;
/* If a pseudo-reg is optimized away, it is possible for it to
be replaced with a MEM containing a multiply or shift. */
+ case MINUS:
+ op = DW_OP_minus;
+ goto do_binop;
+
case MULT:
op = DW_OP_mul;
goto do_binop;
+ case DIV:
+ op = DW_OP_div;
+ goto do_binop;
+
+ case MOD:
+ op = DW_OP_mod;
+ goto do_binop;
+
case ASHIFT:
op = DW_OP_shl;
goto do_binop;
op = DW_OP_shr;
goto do_binop;
+ case AND:
+ op = DW_OP_and;
+ goto do_binop;
+
+ case IOR:
+ op = DW_OP_or;
+ goto do_binop;
+
+ case XOR:
+ op = DW_OP_xor;
+ goto do_binop;
+
do_binop:
- {
- dw_loc_descr_ref op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
- VAR_INIT_STATUS_INITIALIZED);
- dw_loc_descr_ref op1 = mem_loc_descriptor (XEXP (rtl, 1), mode,
- VAR_INIT_STATUS_INITIALIZED);
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ op1 = mem_loc_descriptor (XEXP (rtl, 1), mode,
+ VAR_INIT_STATUS_INITIALIZED);
- if (op0 == 0 || op1 == 0)
- break;
+ if (op0 == 0 || op1 == 0)
+ break;
+
+ mem_loc_result = op0;
+ add_loc_descr (&mem_loc_result, op1);
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ break;
- mem_loc_result = op0;
- add_loc_descr (&mem_loc_result, op1);
- add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ case NOT:
+ op = DW_OP_not;
+ goto do_unop;
+
+ case ABS:
+ op = DW_OP_abs;
+ goto do_unop;
+
+ case NEG:
+ op = DW_OP_neg;
+ goto do_unop;
+
+ do_unop:
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+
+ if (op0 == 0)
break;
- }
+
+ mem_loc_result = op0;
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ break;
case CONST_INT:
mem_loc_result = int_loc_descriptor (INTVAL (rtl));
VAR_INIT_STATUS_INITIALIZED);
break;
+ case EQ:
+ op = DW_OP_eq;
+ goto do_scompare;
+
+ case GE:
+ op = DW_OP_ge;
+ goto do_scompare;
+
+ case GT:
+ op = DW_OP_gt;
+ goto do_scompare;
+
+ case LE:
+ op = DW_OP_le;
+ goto do_scompare;
+
+ case LT:
+ op = DW_OP_lt;
+ goto do_scompare;
+
+ case NE:
+ op = DW_OP_ne;
+ goto do_scompare;
+
+ do_scompare:
+ if (GET_MODE_CLASS (GET_MODE (XEXP (rtl, 0))) != MODE_INT
+ || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE
+ || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1)))
+ break;
+
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ op1 = mem_loc_descriptor (XEXP (rtl, 1), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+
+ if (op0 == 0 || op1 == 0)
+ break;
+
+ if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE)
+ {
+ int shift = DWARF2_ADDR_SIZE
+ - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
+ shift *= BITS_PER_UNIT;
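+	  /* With both operands shifted into the most significant bits,
+	     the signed full-width comparison below gives the same
+	     result as comparing the narrow originals.  */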
+ add_loc_descr (&op0, int_loc_descriptor (shift));
+ add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0));
+ if (CONST_INT_P (XEXP (rtl, 1)))
+ op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1)) << shift);
+ else
+ {
+ add_loc_descr (&op1, int_loc_descriptor (shift));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0));
+ }
+ }
+
+ do_compare:
+ mem_loc_result = op0;
+ add_loc_descr (&mem_loc_result, op1);
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ if (STORE_FLAG_VALUE != 1)
+ {
+ add_loc_descr (&mem_loc_result,
+ int_loc_descriptor (STORE_FLAG_VALUE));
+ add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_mul, 0, 0));
+ }
+ break;
+
+ case GEU:
+ op = DW_OP_ge;
+ goto do_ucompare;
+
+ case GTU:
+ op = DW_OP_gt;
+ goto do_ucompare;
+
+ case LEU:
+ op = DW_OP_le;
+ goto do_ucompare;
+
+ case LTU:
+ op = DW_OP_lt;
+ goto do_ucompare;
+
+ do_ucompare:
+ if (GET_MODE_CLASS (GET_MODE (XEXP (rtl, 0))) != MODE_INT
+ || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE
+ || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1)))
+ break;
+
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ op1 = mem_loc_descriptor (XEXP (rtl, 1), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+
+ if (op0 == 0 || op1 == 0)
+ break;
+
+ if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE)
+ {
+ HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (XEXP (rtl, 0)));
+ add_loc_descr (&op0, int_loc_descriptor (mask));
+ add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0));
+ if (CONST_INT_P (XEXP (rtl, 1)))
+ op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1)) & mask);
+ else
+ {
+ add_loc_descr (&op1, int_loc_descriptor (mask));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0));
+ }
+ }
+ else
+ {
+ HOST_WIDE_INT bias = 1;
+ bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1);
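+	  /* Adding 2^(N-1) to both operands flips their sign bits, so
+	     the signed comparison DWARF provides yields the unsigned
+	     ordering of the original values.  */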
+ add_loc_descr (&op0, new_loc_descr (DW_OP_plus_uconst, bias, 0));
+ if (CONST_INT_P (XEXP (rtl, 1)))
+ op1 = int_loc_descriptor ((unsigned HOST_WIDE_INT) bias
+ + INTVAL (XEXP (rtl, 1)));
+ else
+ add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0));
+ }
+ goto do_compare;
+
+ case SMIN:
+ case SMAX:
+ case UMIN:
+ case UMAX:
+ if (GET_MODE_CLASS (GET_MODE (XEXP (rtl, 0))) != MODE_INT
+ || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE
+ || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1)))
+ break;
+
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ op1 = mem_loc_descriptor (XEXP (rtl, 1), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+
+ if (op0 == 0 || op1 == 0)
+ break;
+
+ add_loc_descr (&op0, new_loc_descr (DW_OP_dup, 0, 0));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_swap, 0, 0));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_over, 0, 0));
+ if (GET_CODE (rtl) == UMIN || GET_CODE (rtl) == UMAX)
+ {
+ if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE)
+ {
+ HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (XEXP (rtl, 0)));
+ add_loc_descr (&op0, int_loc_descriptor (mask));
+ add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0));
+ add_loc_descr (&op1, int_loc_descriptor (mask));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0));
+ }
+ else
+ {
+ HOST_WIDE_INT bias = 1;
+ bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1);
+ add_loc_descr (&op0, new_loc_descr (DW_OP_plus_uconst, bias, 0));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0));
+ }
+ }
+ else if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE)
+ {
+ int shift = DWARF2_ADDR_SIZE
+ - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
+ shift *= BITS_PER_UNIT;
+ add_loc_descr (&op0, int_loc_descriptor (shift));
+ add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0));
+ add_loc_descr (&op1, int_loc_descriptor (shift));
+ add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0));
+ }
+
+ if (GET_CODE (rtl) == SMIN || GET_CODE (rtl) == UMIN)
+ op = DW_OP_lt;
+ else
+ op = DW_OP_gt;
+ mem_loc_result = op0;
+ add_loc_descr (&mem_loc_result, op1);
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ {
+ dw_loc_descr_ref bra_node, drop_node;
+
+ bra_node = new_loc_descr (DW_OP_bra, 0, 0);
+ add_loc_descr (&mem_loc_result, bra_node);
+ add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_swap, 0, 0));
+ drop_node = new_loc_descr (DW_OP_drop, 0, 0);
+ add_loc_descr (&mem_loc_result, drop_node);
+ bra_node->dw_loc_oprnd1.val_class = dw_val_class_loc;
+ bra_node->dw_loc_oprnd1.v.val_loc = drop_node;
+ }
+ break;
+
+ case ZERO_EXTRACT:
+ case SIGN_EXTRACT:
+ if (CONST_INT_P (XEXP (rtl, 1))
+ && CONST_INT_P (XEXP (rtl, 2))
+ && ((unsigned) INTVAL (XEXP (rtl, 1))
+ + (unsigned) INTVAL (XEXP (rtl, 2))
+ <= GET_MODE_BITSIZE (GET_MODE (rtl)))
+ && GET_MODE_BITSIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE
+ && GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0))) <= DWARF2_ADDR_SIZE)
+ {
+ int shift, size;
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), mode,
+ VAR_INIT_STATUS_INITIALIZED);
+ if (op0 == 0)
+ break;
+ if (GET_CODE (rtl) == SIGN_EXTRACT)
+ op = DW_OP_shra;
+ else
+ op = DW_OP_shr;
+ mem_loc_result = op0;
+ size = INTVAL (XEXP (rtl, 1));
+ shift = INTVAL (XEXP (rtl, 2));
+ if (BITS_BIG_ENDIAN)
+ shift = GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0)))
+ - shift - size;
+ add_loc_descr (&mem_loc_result,
+ int_loc_descriptor (DWARF2_ADDR_SIZE - shift - size));
+ add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_shl, 0, 0));
+ add_loc_descr (&mem_loc_result,
+ int_loc_descriptor (DWARF2_ADDR_SIZE - size));
+ add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0));
+ }
+ break;
+
+ case COMPARE:
+ case IF_THEN_ELSE:
+ case ROTATE:
+ case ROTATERT:
+ case TRUNCATE:
+ /* In theory, we could implement the above. */
+ /* DWARF cannot represent the unsigned compare operations
+ natively. */
+ case SS_MULT:
+ case US_MULT:
+ case SS_DIV:
+ case US_DIV:
+ case UDIV:
+ case UMOD:
+ case UNORDERED:
+ case ORDERED:
+ case UNEQ:
+ case UNGE:
+ case UNLE:
+ case UNLT:
+ case LTGT:
+ case FLOAT_EXTEND:
+ case FLOAT_TRUNCATE:
+ case FLOAT:
+ case UNSIGNED_FLOAT:
+ case FIX:
+ case UNSIGNED_FIX:
+ case FRACT_CONVERT:
+ case UNSIGNED_FRACT_CONVERT:
+ case SAT_FRACT:
+ case UNSIGNED_SAT_FRACT:
+ case SQRT:
+ case BSWAP:
+ case FFS:
+ case CLZ:
+ case CTZ:
+ case POPCOUNT:
+ case PARITY:
+ case ASM_OPERANDS:
case UNSPEC:
/* If delegitimize_address couldn't do anything with the UNSPEC, we
can't express it in the debug info. This can happen e.g. with some
TLS UNSPECs. */
break;
+ case CONST_STRING:
+ rtl = get_debug_string_label (XSTR (rtl, 0));
+ goto symref;
+
default:
+#ifdef ENABLE_CHECKING
+ print_rtl (stderr, rtl);
gcc_unreachable ();
+#else
+ break;
+#endif
}
if (mem_loc_result && initialized == VAR_INIT_STATUS_UNINITIALIZED)
concat_loc_descriptor (rtx x0, rtx x1, enum var_init_status initialized)
{
dw_loc_descr_ref cc_loc_result = NULL;
- dw_loc_descr_ref x0_ref = loc_descriptor (x0, VAR_INIT_STATUS_INITIALIZED);
- dw_loc_descr_ref x1_ref = loc_descriptor (x1, VAR_INIT_STATUS_INITIALIZED);
+ dw_loc_descr_ref x0_ref
+ = loc_descriptor (x0, VOIDmode, VAR_INIT_STATUS_INITIALIZED);
+ dw_loc_descr_ref x1_ref
+ = loc_descriptor (x1, VOIDmode, VAR_INIT_STATUS_INITIALIZED);
if (x0_ref == 0 || x1_ref == 0)
return 0;
dw_loc_descr_ref ref;
rtx x = XVECEXP (concatn, 0, i);
- ref = loc_descriptor (x, VAR_INIT_STATUS_INITIALIZED);
+ ref = loc_descriptor (x, VOIDmode, VAR_INIT_STATUS_INITIALIZED);
if (ref == NULL)
return NULL;
memory location we provide a Dwarf postfix expression describing how to
generate the (dynamic) address of the object onto the address stack.
+ MODE is mode of the decl if this loc_descriptor is going to be used in
+ .debug_loc section where DW_OP_stack_value and DW_OP_implicit_value are
+ allowed, VOIDmode otherwise.
+
If we don't know how to describe it, return 0. */
static dw_loc_descr_ref
-loc_descriptor (rtx rtl, enum var_init_status initialized)
+loc_descriptor (rtx rtl, enum machine_mode mode,
+ enum var_init_status initialized)
{
dw_loc_descr_ref loc_result = NULL;
switch (GET_CODE (rtl))
{
case SUBREG:
+ case SIGN_EXTEND:
+ case ZERO_EXTEND:
/* The case of a subreg may arise when we have a local (register)
variable or a formal (register) parameter which doesn't quite fill
up an entire register. For now, just assume that it is
/* Single part. */
if (GET_CODE (XEXP (rtl, 1)) != PARALLEL)
{
- loc_result = loc_descriptor (XEXP (XEXP (rtl, 1), 0), initialized);
+ loc_result = loc_descriptor (XEXP (XEXP (rtl, 1), 0), mode,
+ initialized);
break;
}
/* Create the first one, so we have something to add to. */
loc_result = loc_descriptor (XEXP (RTVEC_ELT (par_elems, 0), 0),
- initialized);
+ VOIDmode, initialized);
if (loc_result == NULL)
return NULL;
mode = GET_MODE (XEXP (RTVEC_ELT (par_elems, 0), 0));
dw_loc_descr_ref temp;
temp = loc_descriptor (XEXP (RTVEC_ELT (par_elems, i), 0),
- initialized);
+ VOIDmode, initialized);
if (temp == NULL)
return NULL;
add_loc_descr (&loc_result, temp);
}
break;
+ case CONST_INT:
+ if (mode != VOIDmode && mode != BLKmode && dwarf_version >= 4)
+ {
+ HOST_WIDE_INT i = INTVAL (rtl);
+ int litsize;
+ if (i >= 0)
+ {
+ if (i <= 31)
+ litsize = 1;
+ else if (i <= 0xff)
+ litsize = 2;
+ else if (i <= 0xffff)
+ litsize = 3;
+ else if (HOST_BITS_PER_WIDE_INT == 32
+ || i <= 0xffffffff)
+ litsize = 5;
+ else
+ litsize = 1 + size_of_uleb128 ((unsigned HOST_WIDE_INT) i);
+ }
+ else
+ {
+ if (i >= -0x80)
+ litsize = 2;
+ else if (i >= -0x8000)
+ litsize = 3;
+ else if (HOST_BITS_PER_WIDE_INT == 32
+ || i >= -0x80000000)
+ litsize = 5;
+ else
+ litsize = 1 + size_of_sleb128 (i);
+ }
+	  /* Determine if DW_OP_stack_value or DW_OP_implicit_value
+	     is more compact.  For DW_OP_stack_value we need:
+	     litsize + 1 (DW_OP_stack_value) + 1 (DW_OP_piece)
+	     + 1 (piece size)
+	     and for DW_OP_implicit_value:
+	     1 (DW_OP_implicit_value) + 1 (length) + mode_size.  */
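+	  /* E.g. for a SImode (const_int 5) with 4-byte addresses,
+	     litsize is 1 (DW_OP_lit5), so DW_OP_stack_value costs
+	     1 + 1 + 1 + 1 = 4 bytes against 1 + 1 + 4 = 6 bytes for
+	     DW_OP_implicit_value, and the former is chosen.  */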
+ if (DWARF2_ADDR_SIZE >= GET_MODE_SIZE (mode)
+ && litsize + 1 + 1 + 1 < 1 + 1 + GET_MODE_SIZE (mode))
+ {
+ loc_result = int_loc_descriptor (i);
+ add_loc_descr (&loc_result,
+ new_loc_descr (DW_OP_stack_value, 0, 0));
+ add_loc_descr_op_piece (&loc_result, GET_MODE_SIZE (mode));
+ return loc_result;
+ }
+
+ loc_result = new_loc_descr (DW_OP_implicit_value,
+ GET_MODE_SIZE (mode), 0);
+ loc_result->dw_loc_oprnd2.val_class = dw_val_class_const;
+ loc_result->dw_loc_oprnd2.v.val_int = i;
+ }
+ break;
+
+ case CONST_DOUBLE:
+ if (mode != VOIDmode && dwarf_version >= 4)
+ {
+ /* Note that a CONST_DOUBLE rtx could represent either an integer
+ or a floating-point constant. A CONST_DOUBLE is used whenever
+ the constant requires more than one word in order to be
+ adequately represented. We output CONST_DOUBLEs as blocks. */
+ if (GET_MODE (rtl) != VOIDmode)
+ mode = GET_MODE (rtl);
+
+ loc_result = new_loc_descr (DW_OP_implicit_value,
+ GET_MODE_SIZE (mode), 0);
+ if (SCALAR_FLOAT_MODE_P (mode))
+ {
+ unsigned int length = GET_MODE_SIZE (mode);
+ unsigned char *array = GGC_NEWVEC (unsigned char, length);
+
+ insert_float (rtl, array);
+ loc_result->dw_loc_oprnd2.val_class = dw_val_class_vec;
+ loc_result->dw_loc_oprnd2.v.val_vec.length = length / 4;
+ loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4;
+ loc_result->dw_loc_oprnd2.v.val_vec.array = array;
+ }
+ else
+ {
+ loc_result->dw_loc_oprnd2.val_class = dw_val_class_long_long;
+ loc_result->dw_loc_oprnd2.v.val_long_long = rtl;
+ }
+ }
+ break;
+
+ case CONST_VECTOR:
+ if (mode != VOIDmode && dwarf_version >= 4)
+ {
+ unsigned int elt_size = GET_MODE_UNIT_SIZE (GET_MODE (rtl));
+ unsigned int length = CONST_VECTOR_NUNITS (rtl);
+ unsigned char *array = GGC_NEWVEC (unsigned char, length * elt_size);
+ unsigned int i;
+ unsigned char *p;
+
+ mode = GET_MODE (rtl);
+ switch (GET_MODE_CLASS (mode))
+ {
+ case MODE_VECTOR_INT:
+ for (i = 0, p = array; i < length; i++, p += elt_size)
+ {
+ rtx elt = CONST_VECTOR_ELT (rtl, i);
+ HOST_WIDE_INT lo, hi;
+
+ switch (GET_CODE (elt))
+ {
+ case CONST_INT:
+ lo = INTVAL (elt);
+ hi = -(lo < 0);
+ break;
+
+ case CONST_DOUBLE:
+ lo = CONST_DOUBLE_LOW (elt);
+ hi = CONST_DOUBLE_HIGH (elt);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ if (elt_size <= sizeof (HOST_WIDE_INT))
+ insert_int (lo, elt_size, p);
+ else
+ {
+ unsigned char *p0 = p;
+ unsigned char *p1 = p + sizeof (HOST_WIDE_INT);
+
+ gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT));
+ if (WORDS_BIG_ENDIAN)
+ {
+ p0 = p1;
+ p1 = p;
+ }
+ insert_int (lo, sizeof (HOST_WIDE_INT), p0);
+ insert_int (hi, sizeof (HOST_WIDE_INT), p1);
+ }
+ }
+ break;
+
+ case MODE_VECTOR_FLOAT:
+ for (i = 0, p = array; i < length; i++, p += elt_size)
+ {
+ rtx elt = CONST_VECTOR_ELT (rtl, i);
+ insert_float (elt, p);
+ }
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ loc_result = new_loc_descr (DW_OP_implicit_value,
+ length * elt_size, 0);
+ loc_result->dw_loc_oprnd2.val_class = dw_val_class_vec;
+ loc_result->dw_loc_oprnd2.v.val_vec.length = length;
+ loc_result->dw_loc_oprnd2.v.val_vec.elt_size = elt_size;
+ loc_result->dw_loc_oprnd2.v.val_vec.array = array;
+ }
+ break;
+
+ case CONST:
+ if (mode == VOIDmode
+ || GET_CODE (XEXP (rtl, 0)) == CONST_INT
+ || GET_CODE (XEXP (rtl, 0)) == CONST_DOUBLE
+ || GET_CODE (XEXP (rtl, 0)) == CONST_VECTOR)
+ {
+ loc_result = loc_descriptor (XEXP (rtl, 0), mode, initialized);
+ break;
+ }
+ /* FALLTHROUGH */
+ case SYMBOL_REF:
+ if (GET_CODE (rtl) == SYMBOL_REF
+ && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE)
+ break;
+ case LABEL_REF:
+ if (mode != VOIDmode && GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE
+ && dwarf_version >= 4)
+ {
+ loc_result = new_loc_descr (DW_OP_implicit_value,
+ DWARF2_ADDR_SIZE, 0);
+ loc_result->dw_loc_oprnd2.val_class = dw_val_class_addr;
+ loc_result->dw_loc_oprnd2.v.val_addr = rtl;
+ VEC_safe_push (rtx, gc, used_rtx_array, rtl);
+ }
+ break;
+
default:
- gcc_unreachable ();
+ if (GET_MODE_CLASS (mode) == MODE_INT && GET_MODE (rtl) == mode
+ && GET_MODE_SIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE
+ && dwarf_version >= 4)
+ {
+ /* Value expression. */
+ loc_result = mem_loc_descriptor (rtl, VOIDmode, initialized);
+ if (loc_result)
+ {
+ add_loc_descr (&loc_result,
+ new_loc_descr (DW_OP_stack_value, 0, 0));
+ add_loc_descr_op_piece (&loc_result, GET_MODE_SIZE (mode));
+ }
+ }
+ break;
}
return loc_result;
/* Certain constructs can only be represented at top-level. */
if (want_address == 2)
- return loc_descriptor (rtl, VAR_INIT_STATUS_INITIALIZED);
+ return loc_descriptor (rtl, VOIDmode,
+ VAR_INIT_STATUS_INITIALIZED);
mode = GET_MODE (rtl);
if (MEM_P (rtl))
add_AT_vec (die, DW_AT_const_value, length / 4, 4, array);
}
else
- {
- /* ??? We really should be using HOST_WIDE_INT throughout. */
- gcc_assert (HOST_BITS_PER_LONG == HOST_BITS_PER_WIDE_INT);
-
- add_AT_long_long (die, DW_AT_const_value,
- CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl));
- }
+ add_AT_long_long (die, DW_AT_const_value, rtl);
}
break;
add_AT_string (die, DW_AT_const_value, XSTR (rtl, 0));
break;
+ case CONST:
+ if (CONSTANT_P (XEXP (rtl, 0)))
+ {
+ add_const_value_attribute (die, XEXP (rtl, 0));
+ return;
+ }
+ /* FALLTHROUGH */
case SYMBOL_REF:
+ if (GET_CODE (rtl) == SYMBOL_REF
+ && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE)
+ break;
case LABEL_REF:
- case CONST:
add_AT_addr (die, DW_AT_const_value, rtl);
VEC_safe_push (rtx, gc, used_rtx_array, rtl);
break;
else
initialized = VAR_INIT_STATUS_INITIALIZED;
- descr = loc_by_reference (loc_descriptor (varloc, initialized), decl);
+ descr = loc_by_reference (loc_descriptor (varloc, DECL_MODE (decl),
+ initialized), decl);
list = new_loc_list (descr, node->label, node->next->label, secname, 1);
node = node->next;
enum var_init_status initialized =
NOTE_VAR_LOCATION_STATUS (node->var_loc_note);
varloc = NOTE_VAR_LOCATION (node->var_loc_note);
- descr = loc_by_reference (loc_descriptor (varloc, initialized),
- decl);
+ descr = loc_by_reference (loc_descriptor (varloc, DECL_MODE (decl),
+ initialized), decl);
add_loc_descr_to_loc_list (&list, descr,
node->label, node->next->label, secname);
}
current_function_funcdef_no);
endname = ggc_strdup (label_id);
}
- descr = loc_by_reference (loc_descriptor (varloc, initialized),
+ descr = loc_by_reference (loc_descriptor (varloc,
+ DECL_MODE (decl),
+ initialized),
decl);
add_loc_descr_to_loc_list (&list, descr,
node->label, endname, secname);
enum var_init_status status;
node = loc_list->first;
status = NOTE_VAR_LOCATION_STATUS (node->var_loc_note);
- descr = loc_descriptor (NOTE_VAR_LOCATION (node->var_loc_note), status);
+ rtl = NOTE_VAR_LOCATION (node->var_loc_note);
+ if (GET_CODE (rtl) == VAR_LOCATION
+ && GET_CODE (XEXP (rtl, 1)) != PARALLEL)
+ rtl = XEXP (XEXP (rtl, 1), 0);
+ if (CONSTANT_P (rtl) || GET_CODE (rtl) == CONST_STRING)
+ {
+ add_const_value_attribute (die, rtl);
+ return;
+ }
+ descr = loc_descriptor (NOTE_VAR_LOCATION (node->var_loc_note),
+ DECL_MODE (decl), status);
if (descr)
{
descr = loc_by_reference (descr, decl);
static void
dwarf2out_var_location (rtx loc_note)
{
- char loclabel[MAX_ARTIFICIAL_LABEL_BYTES];
+ char loclabel[MAX_ARTIFICIAL_LABEL_BYTES + 2];
struct var_loc_node *newloc;
rtx next_real;
static const char *last_label;
+ static const char *last_postcall_label;
static bool last_in_cold_section_p;
tree decl;
newloc = GGC_CNEW (struct var_loc_node);
/* If there were no real insns between note we processed last time
and this note, use the label we emitted last time. */
- if (last_var_location_insn != NULL_RTX
- && last_var_location_insn == next_real
- && last_in_cold_section_p == in_cold_section_p)
- newloc->label = last_label;
- else
+ if (last_var_location_insn == NULL_RTX
+ || last_var_location_insn != next_real
+ || last_in_cold_section_p != in_cold_section_p)
{
ASM_GENERATE_INTERNAL_LABEL (loclabel, "LVL", loclabel_num);
ASM_OUTPUT_DEBUG_LABEL (asm_out_file, "LVL", loclabel_num);
loclabel_num++;
- newloc->label = ggc_strdup (loclabel);
+ last_label = ggc_strdup (loclabel);
+ if (!NOTE_DURING_CALL_P (loc_note))
+ last_postcall_label = NULL;
}
newloc->var_loc_note = loc_note;
newloc->next = NULL;
+ if (!NOTE_DURING_CALL_P (loc_note))
+ newloc->label = last_label;
+ else
+ {
+ if (!last_postcall_label)
+ {
+ sprintf (loclabel, "%s-1", last_label);
+ last_postcall_label = ggc_strdup (loclabel);
+ }
+ newloc->label = last_postcall_label;
+ }
+
if (cfun && in_cold_section_p)
newloc->section_label = crtl->subsections.cold_section_label;
else
newloc->section_label = text_section_label;
last_var_location_insn = next_real;
- last_label = newloc->label;
last_in_cold_section_p = in_cold_section_p;
decl = NOTE_VAR_LOCATION_DECL (loc_note);
add_var_loc_to_decl (decl, newloc);
}
/* A helper function for dwarf2out_finish called through
- ht_forall. Emit one queued .debug_str string. */
+ htab_traverse. Emit one queued .debug_str string. */
static int
output_indirect_string (void **h, void *v ATTRIBUTE_UNUSED)
{
struct indirect_string_node *node = (struct indirect_string_node *) *h;
- if (node->form == DW_FORM_strp)
+ if (node->label && node->refcount)
{
switch_to_section (debug_str_section);
ASM_OUTPUT_LABEL (asm_out_file, node->label);
} while (c != die->die_child);
}
+/* A helper function for dwarf2out_finish called through
+ htab_traverse. Clear .debug_str strings that we haven't already
+ decided to emit. */
+
+static int
+prune_indirect_string (void **h, void *v ATTRIBUTE_UNUSED)
+{
+ struct indirect_string_node *node = (struct indirect_string_node *) *h;
+
+ if (!node->label || !node->refcount)
+ htab_clear_slot (debug_str_hash, h);
+
+ return 1;
+}
/* Remove dies representing declarations that we never use. */
prune_unused_types_mark (arange_table[i], 1);
/* Get rid of nodes that aren't marked; and update the string counts. */
- if (debug_str_hash)
+ if (debug_str_hash && debug_str_hash_forced)
+ htab_traverse (debug_str_hash, prune_indirect_string, NULL);
+ else if (debug_str_hash)
htab_empty (debug_str_hash);
prune_unused_types_prune (comp_unit_die);
for (node = limbo_die_list; node; node = node->next)
#include "langhooks.h"
#include "tree-pass.h"
#include "df.h"
+#include "params.h"
/* Commonly used modes. */
#define first_insn (crtl->emit.x_first_insn)
#define last_insn (crtl->emit.x_last_insn)
#define cur_insn_uid (crtl->emit.x_cur_insn_uid)
+#define cur_debug_insn_uid (crtl->emit.x_cur_debug_insn_uid)
#define last_location (crtl->emit.x_last_location)
#define first_label_num (crtl->emit.x_first_label_num)
last_insn = last;
cur_insn_uid = 0;
- for (insn = first; insn; insn = NEXT_INSN (insn))
- cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn));
+ if (MIN_NONDEBUG_INSN_UID || MAY_HAVE_DEBUG_INSNS)
+ {
+ int debug_count = 0;
+
+ cur_insn_uid = MIN_NONDEBUG_INSN_UID - 1;
+ cur_debug_insn_uid = 0;
+
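+      /* Uids below MIN_NONDEBUG_INSN_UID are reserved for debug insns,
+	 so track the highest uid seen in each range separately.  */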
+ for (insn = first; insn; insn = NEXT_INSN (insn))
+ if (INSN_UID (insn) < MIN_NONDEBUG_INSN_UID)
+ cur_debug_insn_uid = MAX (cur_debug_insn_uid, INSN_UID (insn));
+ else
+ {
+ cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn));
+ if (DEBUG_INSN_P (insn))
+ debug_count++;
+ }
+
+ if (debug_count)
+ cur_debug_insn_uid = MIN_NONDEBUG_INSN_UID + debug_count;
+ else
+ cur_debug_insn_uid++;
+ }
+ else
+ for (insn = first; insn; insn = NEXT_INSN (insn))
+ cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn));
cur_insn_uid++;
}
return;
break;
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
case CC0:
return;
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
case CC0:
return;
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
{
return cur_insn_uid;
}
+
+/* Return the number of actual (non-debug) insns emitted in this
+ function. */
+
+int
+get_max_insn_count (void)
+{
+ int n = cur_insn_uid;
+
+  /* The table size must be stable across -g, to avoid codegen
+     differences due to debug insns, and not be affected by
+     --param min-nondebug-insn-uid, to avoid excessive table size
+     and to simplify debugging of -fcompare-debug failures.  */
+ if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID)
+ n -= cur_debug_insn_uid;
+ else
+ n -= MIN_NONDEBUG_INSN_UID;
+
+ return n;
+}
+
\f
/* Return the next insn. If it is a SEQUENCE, return the first insn
of the sequence. */
return insn;
}
+/* Return the next insn after INSN that is not a DEBUG_INSN. This
+ routine does not look inside SEQUENCEs. */
+
+rtx
+next_nondebug_insn (rtx insn)
+{
+ while (insn)
+ {
+ insn = NEXT_INSN (insn);
+ if (insn == 0 || !DEBUG_INSN_P (insn))
+ break;
+ }
+
+ return insn;
+}
+
+/* Return the previous insn before INSN that is not a DEBUG_INSN.
+ This routine does not look inside SEQUENCEs. */
+
+rtx
+prev_nondebug_insn (rtx insn)
+{
+ while (insn)
+ {
+ insn = PREV_INSN (insn);
+ if (insn == 0 || !DEBUG_INSN_P (insn))
+ break;
+ }
+
+ return insn;
+}
+
/* Return the next INSN, CALL_INSN or JUMP_INSN after INSN;
or 0, if there is none. This routine does not look inside
SEQUENCEs. */
return insn;
}
+/* Like `make_insn_raw' but make a DEBUG_INSN instead of an insn. */
+
+rtx
+make_debug_insn_raw (rtx pattern)
+{
+ rtx insn;
+
+ insn = rtx_alloc (DEBUG_INSN);
+ INSN_UID (insn) = cur_debug_insn_uid++;
+ if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID)
+ INSN_UID (insn) = cur_insn_uid++;
+
+ PATTERN (insn) = pattern;
+ INSN_CODE (insn) = -1;
+ REG_NOTES (insn) = NULL;
+ INSN_LOCATOR (insn) = curr_insn_locator ();
+ BLOCK_FOR_INSN (insn) = NULL;
+
+ return insn;
+}
+
/* Like `make_insn_raw' but make a JUMP_INSN instead of an insn. */
rtx
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
return last;
}
+/* Make an instruction with body X and code DEBUG_INSN
+ and output it before the instruction BEFORE. */
+
+rtx
+emit_debug_insn_before_noloc (rtx x, rtx before)
+{
+ rtx last = NULL_RTX, insn;
+
+ gcc_assert (before);
+
+ switch (GET_CODE (x))
+ {
+ case DEBUG_INSN:
+ case INSN:
+ case JUMP_INSN:
+ case CALL_INSN:
+ case CODE_LABEL:
+ case BARRIER:
+ case NOTE:
+ insn = x;
+ while (insn)
+ {
+ rtx next = NEXT_INSN (insn);
+ add_insn_before (insn, before, NULL);
+ last = insn;
+ insn = next;
+ }
+ break;
+
+#ifdef ENABLE_RTL_CHECKING
+ case SEQUENCE:
+ gcc_unreachable ();
+ break;
+#endif
+
+ default:
+ last = make_debug_insn_raw (x);
+ add_insn_before (last, before, NULL);
+ break;
+ }
+
+ return last;
+}
+
/* Make an insn of code BARRIER
and output it before the insn BEFORE. */
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
return last;
}
+/* Make an instruction with body X and code DEBUG_INSN
+ and output it after the instruction AFTER. */
+
+rtx
+emit_debug_insn_after_noloc (rtx x, rtx after)
+{
+ rtx last;
+
+ gcc_assert (after);
+
+ switch (GET_CODE (x))
+ {
+ case DEBUG_INSN:
+ case INSN:
+ case JUMP_INSN:
+ case CALL_INSN:
+ case CODE_LABEL:
+ case BARRIER:
+ case NOTE:
+ last = emit_insn_after_1 (x, after, NULL);
+ break;
+
+#ifdef ENABLE_RTL_CHECKING
+ case SEQUENCE:
+ gcc_unreachable ();
+ break;
+#endif
+
+ default:
+ last = make_debug_insn_raw (x);
+ add_insn_after (last, after, NULL);
+ break;
+ }
+
+ return last;
+}
+
/* Make an insn of code BARRIER
and output it after the insn AFTER. */
rtx
emit_insn_after (rtx pattern, rtx after)
{
- if (INSN_P (after))
- return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
+ rtx prev = after;
+
+ while (DEBUG_INSN_P (prev))
+ prev = PREV_INSN (prev);
+
+ if (INSN_P (prev))
+ return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
else
return emit_insn_after_noloc (pattern, after, NULL);
}
rtx
emit_jump_insn_after (rtx pattern, rtx after)
{
- if (INSN_P (after))
- return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
+ rtx prev = after;
+
+ while (DEBUG_INSN_P (prev))
+ prev = PREV_INSN (prev);
+
+ if (INSN_P (prev))
+ return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
else
return emit_jump_insn_after_noloc (pattern, after);
}
rtx
emit_call_insn_after (rtx pattern, rtx after)
{
- if (INSN_P (after))
- return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
+ rtx prev = after;
+
+ while (DEBUG_INSN_P (prev))
+ prev = PREV_INSN (prev);
+
+ if (INSN_P (prev))
+ return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
else
return emit_call_insn_after_noloc (pattern, after);
}
+/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to SCOPE. */
+rtx
+emit_debug_insn_after_setloc (rtx pattern, rtx after, int loc)
+{
+ rtx last = emit_debug_insn_after_noloc (pattern, after);
+
+ if (pattern == NULL_RTX || !loc)
+ return last;
+
+ after = NEXT_INSN (after);
+ while (1)
+ {
+ if (active_insn_p (after) && !INSN_LOCATOR (after))
+ INSN_LOCATOR (after) = loc;
+ if (after == last)
+ break;
+ after = NEXT_INSN (after);
+ }
+ return last;
+}
+
+/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to AFTER. */
+rtx
+emit_debug_insn_after (rtx pattern, rtx after)
+{
+ if (INSN_P (after))
+ return emit_debug_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
+ else
+ return emit_debug_insn_after_noloc (pattern, after);
+}
+
/* Like emit_insn_before_noloc, but set INSN_LOCATOR according to SCOPE. */
rtx
emit_insn_before_setloc (rtx pattern, rtx before, int loc)
rtx
emit_insn_before (rtx pattern, rtx before)
{
- if (INSN_P (before))
- return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
+ rtx next = before;
+
+ while (DEBUG_INSN_P (next))
+ next = PREV_INSN (next);
+
+ if (INSN_P (next))
+ return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
else
return emit_insn_before_noloc (pattern, before, NULL);
}
rtx
emit_jump_insn_before (rtx pattern, rtx before)
{
- if (INSN_P (before))
- return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
+ rtx next = before;
+
+ while (DEBUG_INSN_P (next))
+ next = PREV_INSN (next);
+
+ if (INSN_P (next))
+ return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
else
return emit_jump_insn_before_noloc (pattern, before);
}
rtx
emit_call_insn_before (rtx pattern, rtx before)
{
- if (INSN_P (before))
- return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
+ rtx next = before;
+
+ while (DEBUG_INSN_P (next))
+ next = PREV_INSN (next);
+
+ if (INSN_P (next))
+ return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
else
return emit_call_insn_before_noloc (pattern, before);
}
+
+/* Like emit_debug_insn_before_noloc, but set INSN_LOCATOR according to SCOPE. */
+rtx
+emit_debug_insn_before_setloc (rtx pattern, rtx before, int loc)
+{
+ rtx first = PREV_INSN (before);
+ rtx last = emit_debug_insn_before_noloc (pattern, before);
+
+ if (pattern == NULL_RTX)
+ return last;
+
+ first = NEXT_INSN (first);
+ while (1)
+ {
+ if (active_insn_p (first) && !INSN_LOCATOR (first))
+ INSN_LOCATOR (first) = loc;
+ if (first == last)
+ break;
+ first = NEXT_INSN (first);
+ }
+ return last;
+}
+
+/* Like emit_debug_insn_before_noloc,
+ but set INSN_LOCATOR according to BEFORE. */
+rtx
+emit_debug_insn_before (rtx pattern, rtx before)
+{
+ if (INSN_P (before))
+ return emit_debug_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
+ else
+ return emit_debug_insn_before_noloc (pattern, before);
+}
\f
/* Take X and emit it at the end of the doubly-linked
INSN list.
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
return last;
}
+/* Make an insn of code DEBUG_INSN with pattern X
+ and add it to the end of the doubly-linked list. */
+
+rtx
+emit_debug_insn (rtx x)
+{
+ rtx last = last_insn;
+ rtx insn;
+
+ if (x == NULL_RTX)
+ return last;
+
+ switch (GET_CODE (x))
+ {
+ case DEBUG_INSN:
+ case INSN:
+ case JUMP_INSN:
+ case CALL_INSN:
+ case CODE_LABEL:
+ case BARRIER:
+ case NOTE:
+ insn = x;
+ while (insn)
+ {
+ rtx next = NEXT_INSN (insn);
+ add_insn (insn);
+ last = insn;
+ insn = next;
+ }
+ break;
+
+#ifdef ENABLE_RTL_CHECKING
+ case SEQUENCE:
+ gcc_unreachable ();
+ break;
+#endif
+
+ default:
+ last = make_debug_insn_raw (x);
+ add_insn (last);
+ break;
+ }
+
+ return last;
+}
+
/* Make an insn of code JUMP_INSN with pattern X
and add it to the end of the doubly-linked list. */
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
switch (GET_CODE (x))
{
+ case DEBUG_INSN:
case INSN:
case JUMP_INSN:
case CALL_INSN:
}
case CALL_INSN:
return emit_call_insn (x);
+ case DEBUG_INSN:
+ return emit_debug_insn (x);
default:
gcc_unreachable ();
}
{
first_insn = NULL;
last_insn = NULL;
- cur_insn_uid = 1;
+ if (MIN_NONDEBUG_INSN_UID)
+ cur_insn_uid = MIN_NONDEBUG_INSN_UID;
+ else
+ cur_insn_uid = 1;
+ cur_debug_insn_uid = 1;
reg_rtx_no = LAST_VIRTUAL_REGISTER + 1;
last_location = UNKNOWN_LOCATION;
first_label_num = label_num;
new_rtx = emit_jump_insn_after (copy_insn (PATTERN (insn)), after);
break;
+ case DEBUG_INSN:
+ new_rtx = emit_debug_insn_after (copy_insn (PATTERN (insn)), after);
+ break;
+
case CALL_INSN:
new_rtx = emit_call_insn_after (copy_insn (PATTERN (insn)), after);
if (CALL_INSN_FUNCTION_USAGE (insn))
case NOTE:
case BARRIER:
case CODE_LABEL:
+ case DEBUG_INSN:
return 0;
case CALL_INSN:
&& (!NOTE_P (insn) ||
(NOTE_KIND (insn) != NOTE_INSN_VAR_LOCATION
&& NOTE_KIND (insn) != NOTE_INSN_BLOCK_BEG
- && NOTE_KIND (insn) != NOTE_INSN_BLOCK_END)))
+ && NOTE_KIND (insn) != NOTE_INSN_BLOCK_END
+ && NOTE_KIND (insn) != NOTE_INSN_CFA_RESTORE_STATE)))
print_rtl_single (final_output, insn);
}
|| GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC
|| GET_CODE (PATTERN (insn)) == ASM_INPUT)
continue;
-
- instantiate_virtual_regs_in_insn (insn);
+ else if (DEBUG_INSN_P (insn))
+ for_each_rtx (&INSN_VAR_LOCATION (insn),
+ instantiate_virtual_regs_in_rtx, NULL);
+ else
+ instantiate_virtual_regs_in_insn (insn);
if (INSN_DELETED_P (insn))
continue;
Reset to 1 for each function compiled. */
int x_cur_insn_uid;
+ /* INSN_UID for next debug insn emitted. Only used if
+ --param min-nondebug-insn-uid=<value> is given with nonzero value. */
+ int x_cur_debug_insn_uid;
+
/* Location the last line-number NOTE emitted.
This is used to avoid generating duplicates. */
location_t x_last_location;
if (INSN_CODE (use_insn) < 0)
asm_use = asm_noperands (PATTERN (use_insn));
- if (!use_set && asm_use < 0)
+ if (!use_set && asm_use < 0 && !DEBUG_INSN_P (use_insn))
return false;
/* Do not propagate into PC, CC0, etc. */
loc = &SET_DEST (use_set);
set_reg_equal = false;
}
+ else if (!use_set)
+ {
+ loc = &INSN_VAR_LOCATION_LOC (use_insn);
+ set_reg_equal = false;
+ }
else
{
rtx note = find_reg_note (use_insn, REG_EQUAL, NULL_RTX);
static const char *invoke_as =
#ifdef AS_NEEDS_DASH_FOR_PIPED_INPUT
-"%{fcompare-debug=*:%:compare-debug-dump-opt()}\
+"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\
%{!S:-o %|.s |\n as %(asm_options) %|.s %A }";
#else
-"%{fcompare-debug=*:%:compare-debug-dump-opt()}\
+"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\
%{!S:-o %|.s |\n as %(asm_options) %m.s %A }";
#endif
#endif
static const char *const driver_self_specs[] = {
+ "%{fdump-final-insns:-fdump-final-insns=.} %<fdump-final-insns",
DRIVER_SELF_SPECS, GOMP_SELF_SPECS
};
return NULL;
}
+/* Compute a timestamp to initialize flag_random_seed. */
+
+static unsigned
+get_local_tick (void)
+{
+ unsigned ret = 0;
+
+ /* Get some more or less random data. */
+#ifdef HAVE_GETTIMEOFDAY
+ {
+ struct timeval tv;
+
+ gettimeofday (&tv, NULL);
+ ret = tv.tv_sec * 1000 + tv.tv_usec / 1000;
+ }
+#else
+ {
+ time_t now = time (NULL);
+
+ if (now != (time_t) -1)
+ ret = (unsigned) now;
+ }
+#endif
+
+ return ret;
+}
+
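The same computation as a small standalone program, assuming a POSIX host; the unsigned arithmetic may wrap, which is harmless since only variability of the seed matters:

  #include <stdio.h>
  #include <sys/time.h>
  #include <unistd.h>

  int
  main (void)
  {
    struct timeval tv;
    unsigned tick = 0;

    if (gettimeofday (&tv, NULL) == 0)
      tick = tv.tv_sec * 1000 + tv.tv_usec / 1000;	/* milliseconds */

    /* The driver XORs in the pid (see below) so two compilations
       started in the same millisecond still get distinct seeds.  */
    printf ("-frandom-seed=0x%x\n", tick ^ (unsigned) getpid ());
    return 0;
  }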
/* %:compare-debug-dump-opt spec function. Save the last argument,
expected to be the last -fdump-final-insns option, or generate a
temporary. */
const char *ret;
char *name;
int which;
+ static char random_seed[HOST_BITS_PER_WIDE_INT / 4 + 3];
if (arg != 0)
fatal ("too many arguments to %%:compare-debug-dump-opt");
- if (!compare_debug)
- return NULL;
-
do_spec_2 ("%{fdump-final-insns=*:%*}");
do_spec_1 (" ", 0, NULL);
- if (argbuf_index > 0)
+ if (argbuf_index > 0 && strcmp (argv[argbuf_index - 1], "."))
{
+ if (!compare_debug)
+ return NULL;
+
name = xstrdup (argv[argbuf_index - 1]);
ret = NULL;
}
else
{
-#define OPT "-fdump-final-insns="
- ret = "-fdump-final-insns=%g.gkd";
+ const char *ext = NULL;
+
+ if (argbuf_index > 0)
+ {
+ do_spec_2 ("%{o*:%*}%{!o:%{!S:%b%O}%{S:%b.s}}");
+ ext = ".gkd";
+ }
+ else if (!compare_debug)
+ return NULL;
+ else
+ do_spec_2 ("%g.gkd");
- do_spec_2 (ret + sizeof (OPT) - 1);
do_spec_1 (" ", 0, NULL);
-#undef OPT
gcc_assert (argbuf_index > 0);
- name = xstrdup (argbuf[argbuf_index - 1]);
+ name = concat (argbuf[argbuf_index - 1], ext, NULL);
+
+ ret = concat ("-fdump-final-insns=", name, NULL);
}
which = compare_debug < 0;
debug_check_temp_file[which] = name;
-#if 0
- error ("compare-debug: [%i]=\"%s\", ret %s", which, name, ret);
-#endif
+ if (!which)
+ {
+ unsigned HOST_WIDE_INT value = get_local_tick () ^ getpid ();
+
+ sprintf (random_seed, HOST_WIDE_INT_PRINT_HEX, value);
+ }
+
+ if (*random_seed)
+ ret = concat ("%{!frandom-seed=*:-frandom-seed=", random_seed, "} ",
+ ret, NULL);
+
+ if (which)
+ *random_seed = 0;
return ret;
}
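On the first of the two -fcompare-debug compilations this typically expands to an option pair along the lines of (values illustrative):

  -frandom-seed=0x4a7c1b2e -fdump-final-insns=foo.o.gkd

The %{!frandom-seed=*:...} guard keeps any user-supplied seed in force; the second compilation reuses the recorded seed so that randomization-dependent decisions match, then clears it so later jobs compute a fresh one.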
memcpy (name + sizeof (OPT) - 1, argv[0], len);
name[sizeof (OPT) - 1 + len] = '\0';
+#undef OPT
+
return name;
}
static void record_last_mem_set_info (rtx);
static void record_last_set_info (rtx, const_rtx, void *);
static void compute_hash_table (struct hash_table_d *);
-static void alloc_hash_table (int, struct hash_table_d *, int);
+static void alloc_hash_table (struct hash_table_d *, int);
static void free_hash_table (struct hash_table_d *);
static void compute_hash_table_work (struct hash_table_d *);
static void dump_hash_table (FILE *, const char *, struct hash_table_d *);
}
/* Allocate space for the set/expr hash TABLE.
- N_INSNS is the number of instructions in the function.
- It is used to determine the number of buckets to use.
+ The current non-debug insn count (from get_max_insn_count) is
+ used to determine the number of buckets to use.
SET_P determines whether set or expression table will
be created. */
static void
-alloc_hash_table (int n_insns, struct hash_table_d *table, int set_p)
+alloc_hash_table (struct hash_table_d *table, int set_p)
{
int n;
- table->size = n_insns / 4;
+ n = get_max_insn_count ();
+
+ table->size = n / 4;
if (table->size < 11)
table->size = 11;
}
}
+ if (changed && DEBUG_INSN_P (insn))
+ return 0;
+
return changed;
}
{
setcc = NULL_RTX;
FOR_BB_INSNS (bb, insn)
- if (NONJUMP_INSN_P (insn))
+ if (DEBUG_INSN_P (insn))
+ continue;
+ else if (NONJUMP_INSN_P (insn))
{
if (setcc)
break;
gcc_obstack_init (&gcse_obstack);
alloc_gcse_mem ();
- alloc_hash_table (get_max_uid (), &expr_hash_table, 0);
+ alloc_hash_table (&expr_hash_table, 0);
add_noreturn_fake_exit_edges ();
if (flag_gcse_lm)
compute_ld_motion_mems ();
gcc_obstack_init (&gcse_obstack);
alloc_gcse_mem ();
- alloc_hash_table (get_max_uid (), &expr_hash_table, 0);
+ alloc_hash_table (&expr_hash_table, 0);
compute_hash_table (&expr_hash_table);
if (dump_file)
dump_hash_table (dump_file, "Code Hosting Expressions", &expr_hash_table);
{
FOR_BB_INSNS (bb, insn)
{
- if (INSN_P (insn))
+ if (NONDEBUG_INSN_P (insn))
{
if (GET_CODE (PATTERN (insn)) == SET)
{
implicit_sets = XCNEWVEC (rtx, last_basic_block);
find_implicit_sets ();
- alloc_hash_table (get_max_uid (), &set_hash_table, 1);
+ alloc_hash_table (&set_hash_table, 1);
compute_hash_table (&set_hash_table);
/* Free implicit_sets before peak usage. */
dump_gimple_fmt (buffer, spc, flags, "resx %d", gimple_resx_region (gs));
}
+/* Dump a GIMPLE_DEBUG tuple on the pretty_printer BUFFER, SPC spaces
+ of indent. FLAGS specifies details to show in the dump (see TDF_*
+ in tree-pass.h). */
+
+static void
+dump_gimple_debug (pretty_printer *buffer, gimple gs, int spc, int flags)
+{
+ switch (gs->gsbase.subcode)
+ {
+ case GIMPLE_DEBUG_BIND:
+ if (flags & TDF_RAW)
+ dump_gimple_fmt (buffer, spc, flags, "%G BIND <%T, %T>", gs,
+ gimple_debug_bind_get_var (gs),
+ gimple_debug_bind_get_value (gs));
+ else
+ dump_gimple_fmt (buffer, spc, flags, "# DEBUG %T => %T",
+ gimple_debug_bind_get_var (gs),
+ gimple_debug_bind_get_value (gs));
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+}
+
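In a tree dump the two formats then render along these lines (identifiers illustrative):

  # DEBUG x => y_1 + 1

for the default form, while TDF_RAW shows the same operands as BIND <x, y_1 + 1> prefixed with the statement kind.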
/* Dump a GIMPLE_OMP_FOR tuple on the pretty_printer BUFFER. */
static void
dump_gimple_omp_for (pretty_printer *buffer, gimple gs, int spc, int flags)
dump_gimple_resx (buffer, gs, spc, flags);
break;
+ case GIMPLE_DEBUG:
+ dump_gimple_debug (buffer, gs, spc, flags);
+ break;
+
case GIMPLE_PREDICT:
pp_string (buffer, "// predicted ");
if (gimple_predict_outcome (gs))
gimple_stmt_iterator gsi;
for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
- if (get_lineno (gsi_stmt (gsi)) != -1)
+ if (!is_gimple_debug (gsi_stmt (gsi))
+ && get_lineno (gsi_stmt (gsi)) != UNKNOWN_LOCATION)
{
pp_string (buffer, ", starting at line ");
pp_decimal_int (buffer, get_lineno (gsi_stmt (gsi)));
case GIMPLE_COND:
case GIMPLE_GOTO:
case GIMPLE_LABEL:
+ case GIMPLE_DEBUG:
case GIMPLE_SWITCH: return GSS_WITH_OPS;
case GIMPLE_ASM: return GSS_ASM;
case GIMPLE_BIND: return GSS_BIND;
gimple_build_with_ops_stat (c, s, n MEM_STAT_INFO)
static gimple
-gimple_build_with_ops_stat (enum gimple_code code, enum tree_code subcode,
+gimple_build_with_ops_stat (enum gimple_code code, unsigned subcode,
unsigned num_ops MEM_STAT_DECL)
{
gimple s = gimple_alloc_stat (code, num_ops PASS_MEM_STAT);
code). */
num_ops = get_gimple_rhs_num_ops (subcode) + 1;
- p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, subcode, num_ops
+ p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, (unsigned) subcode, num_ops
PASS_MEM_STAT);
gimple_assign_set_lhs (p, lhs);
gimple_assign_set_rhs1 (p, op1);
}
+/* Build a new GIMPLE_DEBUG_BIND statement.
+
+ VAR is bound to VALUE; block and location are taken from STMT. */
+
+gimple
+gimple_build_debug_bind_stat (tree var, tree value, gimple stmt MEM_STAT_DECL)
+{
+ gimple p = gimple_build_with_ops_stat (GIMPLE_DEBUG,
+ (unsigned) GIMPLE_DEBUG_BIND, 2
+ PASS_MEM_STAT);
+
+ gimple_debug_bind_set_var (p, var);
+ gimple_debug_bind_set_value (p, value);
+ if (stmt)
+ {
+ gimple_set_block (p, gimple_block (stmt));
+ gimple_set_location (p, gimple_location (stmt));
+ }
+
+ return p;
+}
+
+
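A minimal sketch of the intended use, patterned on the debug-stmt insertions elsewhere in this patch (the pass context, gsi, lhs and rhs are hypothetical): before deleting a dead assignment, record the binding so var-tracking can still describe the variable.

  if (MAY_HAVE_DEBUG_STMTS && target_for_debug_bind (lhs))
    {
      gimple note = gimple_build_debug_bind (lhs, unshare_expr (rhs), stmt);

      gsi_insert_before (&gsi, note, GSI_SAME_STMT);
    }
  gsi_remove (&gsi, true);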
/* Build a GIMPLE_OMP_CRITICAL statement.
BODY is the sequence of statements for which only one thread can execute.
{
gimple_stmt_iterator i;
-
if (gimple_seq_empty_p (body))
return true;
for (i = gsi_start (body); !gsi_end_p (i); gsi_next (&i))
- if (!empty_stmt_p (gsi_stmt (i)))
+ if (!empty_stmt_p (gsi_stmt (i))
+ && !is_gimple_debug (gsi_stmt (i)))
return false;
return true;
{
unsigned i;
+ if (is_gimple_debug (s))
+ return false;
+
/* We don't have to scan the arguments to check for
volatile arguments, though, at present, we still
do a scan to check for TREE_SIDE_EFFECTS. */
return true;
}
}
+ else if (is_gimple_debug (s))
+ return false;
else
{
/* For statements without an LHS, examine all arguments. */
jump target for the comparison. */
DEFGSCODE(GIMPLE_COND, "gimple_cond", struct gimple_statement_with_ops)
+/* GIMPLE_DEBUG represents a debug statement. */
+DEFGSCODE(GIMPLE_DEBUG, "gimple_debug", struct gimple_statement_with_ops)
+
/* GIMPLE_GOTO <TARGET> represents unconditional jumps.
TARGET is a LABEL_DECL or an expression node for computed GOTOs. */
DEFGSCODE(GIMPLE_GOTO, "gimple_goto", struct gimple_statement_with_ops)
GF_PREDICT_TAKEN = 1 << 15
};
+/* Currently, there's only one type of gimple debug stmt. Others are
+ envisioned, for example, to enable the generation of is_stmt notes
+ in line number information, to mark sequence points, etc. This
+ subcode is to be used to tell them apart. */
+enum gimple_debug_subcode {
+ GIMPLE_DEBUG_BIND = 0
+};
+
/* Masks for selecting a pass local flag (PLF) to work on. These
masks are used by gimple_set_plf and gimple_plf. */
enum plf_mask {
#define gimple_build_assign_with_ops(c,o1,o2,o3) \
gimple_build_assign_with_ops_stat (c, o1, o2, o3 MEM_STAT_INFO)
+gimple gimple_build_debug_bind_stat (tree, tree, gimple MEM_STAT_DECL);
+#define gimple_build_debug_bind(var,val,stmt) \
+ gimple_build_debug_bind_stat ((var), (val), (stmt) MEM_STAT_INFO)
+
gimple gimple_build_call_vec (tree, VEC(tree, heap) *);
gimple gimple_build_call (tree, unsigned, ...);
gimple gimple_build_call_from_tree (tree);
gimple_switch_set_label (gs, 0, label);
}
+/* Return true if GS is a GIMPLE_DEBUG statement. */
+
+static inline bool
+is_gimple_debug (const_gimple gs)
+{
+ return gimple_code (gs) == GIMPLE_DEBUG;
+}
+
+/* Return true if S is a GIMPLE_DEBUG BIND statement. */
+
+static inline bool
+gimple_debug_bind_p (const_gimple s)
+{
+ if (is_gimple_debug (s))
+ return s->gsbase.subcode == GIMPLE_DEBUG_BIND;
+
+ return false;
+}
+
+/* Return the variable bound in a GIMPLE_DEBUG bind statement. */
+
+static inline tree
+gimple_debug_bind_get_var (gimple dbg)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ return gimple_op (dbg, 0);
+}
+
+/* Return the value bound to the variable in a GIMPLE_DEBUG bind
+ statement. */
+
+static inline tree
+gimple_debug_bind_get_value (gimple dbg)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ return gimple_op (dbg, 1);
+}
+
+/* Return a pointer to the value bound to the variable in a
+ GIMPLE_DEBUG bind statement. */
+
+static inline tree *
+gimple_debug_bind_get_value_ptr (gimple dbg)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ return gimple_op_ptr (dbg, 1);
+}
+
+/* Set the variable bound in a GIMPLE_DEBUG bind statement. */
+
+static inline void
+gimple_debug_bind_set_var (gimple dbg, tree var)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ gimple_set_op (dbg, 0, var);
+}
+
+/* Set the value bound to the variable in a GIMPLE_DEBUG bind
+ statement. */
+
+static inline void
+gimple_debug_bind_set_value (gimple dbg, tree value)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ gimple_set_op (dbg, 1, value);
+}
+
+/* The second operand of a GIMPLE_DEBUG_BIND, when the value was
+ optimized away. */
+#define GIMPLE_DEBUG_BIND_NOVALUE NULL_TREE /* error_mark_node */
+
+/* Remove the value bound to the variable in a GIMPLE_DEBUG bind
+ statement. */
+
+static inline void
+gimple_debug_bind_reset_value (gimple dbg)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ gimple_set_op (dbg, 1, GIMPLE_DEBUG_BIND_NOVALUE);
+}
+
+/* Return true if the GIMPLE_DEBUG bind statement is bound to a
+ value. */
+
+static inline bool
+gimple_debug_bind_has_value_p (gimple dbg)
+{
+ GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+ gcc_assert (gimple_debug_bind_p (dbg));
+ return gimple_op (dbg, 1) != GIMPLE_DEBUG_BIND_NOVALUE;
+}
+
+#undef GIMPLE_DEBUG_BIND_NOVALUE
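Propagation passes can then update a binding in place instead of dropping it; a hedged sketch (use_stmt and new_val are hypothetical):

  if (gimple_debug_bind_p (use_stmt))
    {
      if (new_val)
        gimple_debug_bind_set_value (use_stmt, new_val);
      else
        /* No replacement: the variable reads as optimized out from
           here on, rather than reporting a stale or wrong value.  */
        gimple_debug_bind_reset_value (use_stmt);
      update_stmt (use_stmt);
    }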
/* Return the body for the OMP statement GS. */
return gsi;
}
+/* Advance the iterator to the next non-debug gimple statement. */
+
+static inline void
+gsi_next_nondebug (gimple_stmt_iterator *i)
+{
+ do
+ {
+ gsi_next (i);
+ }
+ while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i)));
+}
+
+/* Advance the iterator to the previous non-debug gimple statement. */
+
+static inline void
+gsi_prev_nondebug (gimple_stmt_iterator *i)
+{
+ do
+ {
+ gsi_prev (i);
+ }
+ while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i)));
+}
+
+/* Return a new iterator pointing to the first non-debug statement in
+ basic block BB. */
+
+static inline gimple_stmt_iterator
+gsi_start_nondebug_bb (basic_block bb)
+{
+ gimple_stmt_iterator i = gsi_start_bb (bb);
+
+ if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i)))
+ gsi_next_nondebug (&i);
+
+ return i;
+}
+
+/* Return a new iterator pointing to the last non-debug statement in
+ basic block BB. */
+
+static inline gimple_stmt_iterator
+gsi_last_nondebug_bb (basic_block bb)
+{
+ gimple_stmt_iterator i = gsi_last_bb (bb);
+
+ if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i)))
+ gsi_prev_nondebug (&i);
+
+ return i;
+}
+
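Passes that must not be perturbed by -g can then scan blocks as in this sketch (process_stmt is hypothetical):

  gimple_stmt_iterator gsi;

  for (gsi = gsi_start_nondebug_bb (bb); !gsi_end_p (gsi);
       gsi_next_nondebug (&gsi))
    {
      gimple stmt = gsi_stmt (gsi);

      /* STMT is never a GIMPLE_DEBUG here, so any counting or
         transformation stays identical with and without -g.  */
      process_stmt (stmt);
    }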
/* Return a pointer to the current stmt.
NOTE: You may want to use gsi_replace on the iterator itself,
char *ready_try = NULL;
/* The ready list. */
-struct ready_list ready = {NULL, 0, 0, 0};
+struct ready_list ready = {NULL, 0, 0, 0, 0};
/* The pointer to the ready list (to be removed). */
static struct ready_list *readyp = &ready;
static bool
contributes_to_priority_p (dep_t dep)
{
+ if (DEBUG_INSN_P (DEP_CON (dep))
+ || DEBUG_INSN_P (DEP_PRO (dep)))
+ return false;
+
/* Critical path is meaningful in block boundaries only. */
if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
DEP_PRO (dep)))
return true;
}
+/* Compute the number of nondebug forward deps of an insn. */
+
+static int
+dep_list_size (rtx insn)
+{
+ sd_iterator_def sd_it;
+ dep_t dep;
+ int dbgcount = 0, nodbgcount = 0;
+
+ if (!MAY_HAVE_DEBUG_INSNS)
+ return sd_lists_size (insn, SD_LIST_FORW);
+
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
+ {
+ if (DEBUG_INSN_P (DEP_CON (dep)))
+ dbgcount++;
+ else
+ nodbgcount++;
+ }
+
+ gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
+
+ return nodbgcount;
+}
+
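A worked instance: an insn with five forward deps, two of whose consumers are DEBUG_INSNs, gets dep_list_size == 3; compiled at -g0 those two deps simply never exist, so SD_LIST_FORW holds three entries and the heuristics below see the same number either way.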
/* Compute the priority number for INSN. */
static int
priority (rtx insn)
{
int this_priority = -1;
- if (sd_lists_empty_p (insn, SD_LIST_FORW))
+ if (dep_list_size (insn) == 0)
/* ??? We should set INSN_PRIORITY to insn_cost when an insn has
some forward deps but all of them are ignored by
contributes_to_priority hook. At the moment we set priority of
{
rtx tmp = *(const rtx *) y;
rtx tmp2 = *(const rtx *) x;
+ rtx last;
int tmp_class, tmp2_class;
int val, priority_val, weight_val, info_val;
+ if (MAY_HAVE_DEBUG_INSNS)
+ {
+ /* Schedule debug insns as early as possible. */
+ if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
+ return -1;
+ else if (DEBUG_INSN_P (tmp2))
+ return 1;
+ }
+
/* The insn in a schedule group should be issued the first. */
if (flag_sched_group_heuristic &&
SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
if(flag_sched_rank_heuristic && info_val)
return info_val;
- /* Compare insns based on their relation to the last-scheduled-insn. */
- if (flag_sched_last_insn_heuristic && INSN_P (last_scheduled_insn))
+ if (flag_sched_last_insn_heuristic)
+ {
+ last = last_scheduled_insn;
+
+ if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
+ do
+ last = PREV_INSN (last);
+ while (!NONDEBUG_INSN_P (last)
+ && last != current_sched_info->prev_head);
+ }
+
+ /* Compare insns based on their relation to the last scheduled
+ non-debug insn. */
+ if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
{
dep_t dep1;
dep_t dep2;
2) Anti/Output dependent on last scheduled insn.
3) Independent of last scheduled insn, or has latency of one.
Choose the insn from the highest numbered class if different. */
- dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true);
+ dep1 = sd_find_dep_between (last, tmp, true);
if (dep1 == NULL || dep_cost (dep1) == 1)
tmp_class = 3;
else
tmp_class = 2;
- dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true);
+ dep2 = sd_find_dep_between (last, tmp2, true);
if (dep2 == NULL || dep_cost (dep2) == 1)
tmp2_class = 3;
This gives the scheduler more freedom when scheduling later
instructions at the expense of added register pressure. */
- val = (sd_lists_size (tmp2, SD_LIST_FORW)
- - sd_lists_size (tmp, SD_LIST_FORW));
+ val = (dep_list_size (tmp2) - dep_list_size (tmp));
if (flag_sched_dep_count_heuristic && val != 0)
return val;
rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
gcc_assert (n_cycles <= max_insn_queue_index);
+ gcc_assert (!DEBUG_INSN_P (insn));
insn_queue[next_q] = link;
q_size += 1;
}
ready->n_ready++;
+ if (DEBUG_INSN_P (insn))
+ ready->n_debug++;
gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
QUEUE_INDEX (insn) = QUEUE_READY;
gcc_assert (ready->n_ready);
t = ready->vec[ready->first--];
ready->n_ready--;
+ if (DEBUG_INSN_P (t))
+ ready->n_debug--;
/* If the queue becomes empty, reset it. */
if (ready->n_ready == 0)
ready->first = ready->veclen - 1;
gcc_assert (ready->n_ready && index < ready->n_ready);
t = ready->vec[ready->first - index];
ready->n_ready--;
+ if (DEBUG_INSN_P (t))
+ ready->n_debug--;
for (i = index; i < ready->n_ready; i++)
ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
QUEUE_INDEX (t) = QUEUE_NOWHERE;
be aligned. */
if (issue_rate > 1
&& GET_CODE (PATTERN (insn)) != USE
- && GET_CODE (PATTERN (insn)) != CLOBBER)
+ && GET_CODE (PATTERN (insn)) != CLOBBER
+ && !DEBUG_INSN_P (insn))
{
if (reload_completed)
PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
beg_head = NEXT_INSN (beg_head);
while (beg_head != beg_tail)
- if (NOTE_P (beg_head))
+ if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
beg_head = NEXT_INSN (beg_head);
else
break;
end_head = NEXT_INSN (end_head);
while (end_head != end_tail)
- if (NOTE_P (end_tail))
+ if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
end_tail = PREV_INSN (end_tail);
else
break;
{
while (head != NEXT_INSN (tail))
{
- if (!NOTE_P (head) && !LABEL_P (head))
+ if (!NOTE_P (head) && !LABEL_P (head)
+ && !BOUNDARY_DEBUG_INSN_P (head))
return 0;
head = NEXT_INSN (head);
}
q_ptr = NEXT_Q (q_ptr);
if (dbg_cnt (sched_insn) == false)
- /* If debug counter is activated do not requeue insn next after
- last_scheduled_insn. */
- skip_insn = next_nonnote_insn (last_scheduled_insn);
+ {
+ /* If debug counter is activated do not requeue insn next after
+ last_scheduled_insn. */
+ skip_insn = next_nonnote_insn (last_scheduled_insn);
+ while (skip_insn && DEBUG_INSN_P (skip_insn))
+ skip_insn = next_nonnote_insn (skip_insn);
+ }
else
skip_insn = NULL_RTX;
/* If the ready list is full, delay the insn for 1 cycle.
See the comment in schedule_block for the rationale. */
if (!reload_completed
- && ready->n_ready > MAX_SCHED_READY_INSNS
+ && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
&& !SCHED_GROUP_P (insn)
&& insn != skip_insn)
{
if (targetm.sched.first_cycle_multipass_dfa_lookahead)
lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
- if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
+ if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
+ || DEBUG_INSN_P (ready_element (ready, 0)))
{
*insn_ptr = ready_remove_first (ready);
return 0;
/* Clear the ready list. */
ready.first = ready.veclen - 1;
ready.n_ready = 0;
+ ready.n_debug = 0;
/* It is used for first cycle multipass scheduling. */
temp_state = alloca (dfa_state_size);
/* We start inserting insns after PREV_HEAD. */
last_scheduled_insn = prev_head;
- gcc_assert (NOTE_P (last_scheduled_insn)
+ gcc_assert ((NOTE_P (last_scheduled_insn)
+ || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
&& BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
/* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
/* The algorithm is O(n^2) in the number of ready insns at any given
time in the worst case. Before reload we are more likely to have
big lists so truncate them to a reasonable size. */
- if (!reload_completed && ready.n_ready > MAX_SCHED_READY_INSNS)
+ if (!reload_completed
+ && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
{
ready_sort (&ready);
- /* Find first free-standing insn past MAX_SCHED_READY_INSNS. */
- for (i = MAX_SCHED_READY_INSNS; i < ready.n_ready; i++)
+ /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
+ If there are debug insns, we know they're first. */
+ for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
if (!SCHED_GROUP_P (ready_element (&ready, i)))
break;
}
}
+ /* We don't want md sched reorder to even see debug insns, so put
+ them out right away. */
+ if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+ {
+ if (control_flow_insn_p (last_scheduled_insn))
+ {
+ *target_bb = current_sched_info->advance_target_bb
+ (*target_bb, 0);
+
+ if (sched_verbose)
+ {
+ rtx x;
+
+ x = next_real_insn (last_scheduled_insn);
+ gcc_assert (x);
+ dump_new_block_header (1, *target_bb, x, tail);
+ }
+
+ last_scheduled_insn = bb_note (*target_bb);
+ }
+
+ while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+ {
+ rtx insn = ready_remove_first (&ready);
+ gcc_assert (DEBUG_INSN_P (insn));
+ (*current_sched_info->begin_schedule_ready) (insn,
+ last_scheduled_insn);
+ move_insn (insn, last_scheduled_insn,
+ current_sched_info->next_tail);
+ last_scheduled_insn = insn;
+ advance = schedule_insn (insn);
+ gcc_assert (advance == 0);
+ if (ready.n_ready > 0)
+ ready_sort (&ready);
+ }
+
+ if (!ready.n_ready)
+ continue;
+ }
+
/* Allow the target to reorder the list, typically for
better instruction bundling. */
if (sort_p && targetm.sched.reorder
ready_sort (&ready);
}
- if (ready.n_ready == 0 || !can_issue_more
+ if (ready.n_ready == 0
+ || !can_issue_more
|| state_dead_lock_p (curr_state)
|| !(*current_sched_info->schedule_more_p) ())
break;
if (targetm.sched.variable_issue)
can_issue_more =
targetm.sched.variable_issue (sched_dump, sched_verbose,
- insn, can_issue_more);
+ insn, can_issue_more);
/* A naked CLOBBER or USE generates no instruction, so do
not count them against the issue rate. */
else if (GET_CODE (PATTERN (insn)) != USE
if (ready.n_ready > 0)
ready_sort (&ready);
+ /* Quickly go through debug insns such that md sched
+ reorder2 doesn't have to deal with debug insns. */
+ if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
+ && (*current_sched_info->schedule_more_p) ())
+ {
+ if (control_flow_insn_p (last_scheduled_insn))
+ {
+ *target_bb = current_sched_info->advance_target_bb
+ (*target_bb, 0);
+
+ if (sched_verbose)
+ {
+ rtx x;
+
+ x = next_real_insn (last_scheduled_insn);
+ gcc_assert (x);
+ dump_new_block_header (1, *target_bb, x, tail);
+ }
+
+ last_scheduled_insn = bb_note (*target_bb);
+ }
+
+ while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+ {
+ insn = ready_remove_first (&ready);
+ gcc_assert (DEBUG_INSN_P (insn));
+ (*current_sched_info->begin_schedule_ready)
+ (insn, last_scheduled_insn);
+ move_insn (insn, last_scheduled_insn,
+ current_sched_info->next_tail);
+ advance = schedule_insn (insn);
+ last_scheduled_insn = insn;
+ gcc_assert (advance == 0);
+ if (ready.n_ready > 0)
+ ready_sort (&ready);
+ }
+ }
+
if (targetm.sched.reorder2
&& (ready.n_ready == 0
|| !SCHED_GROUP_P (ready_element (&ready, 0))))
if (current_sched_info->queue_must_finish_empty)
/* Sanity check -- queue must be empty now. Meaningless if region has
multiple bbs. */
- gcc_assert (!q_size && !ready.n_ready);
+ gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
else
{
/* We must maintain QUEUE_INDEX between blocks in region. */
current_sched_info->sched_max_insns_priority;
rtx prev_head;
- if (head == tail && (! INSN_P (head)))
- return 0;
+ if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
+ gcc_unreachable ();
n_insn = 0;
if (insn == jump)
break;
- if (sd_lists_empty_p (insn, SD_LIST_FORW))
+ if (dep_list_size (insn) == 0)
{
dep_def _new_dep, *new_dep = &_new_dep;
return 0;
}
+/* Search back, starting at INSN, for an insn that is not a
+ NOTE_INSN_VAR_LOCATION. Don't search beyond HEAD; return HEAD
+ itself if no such insn can be found. */
+static inline rtx
+prev_non_location_insn (rtx insn, rtx head)
+{
+ while (insn != head && NOTE_P (insn)
+ && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
+ insn = PREV_INSN (insn);
+
+ return insn;
+}
+
/* Check few properties of CFG between HEAD and TAIL.
If HEAD (TAIL) is NULL check from the beginning (till the end) of the
instruction stream. */
{
if (control_flow_insn_p (head))
{
- gcc_assert (BB_END (bb) == head);
-
+ gcc_assert (prev_non_location_insn (BB_END (bb), head)
+ == head);
+
if (any_uncondjump_p (head))
gcc_assert (EDGE_COUNT (bb->succs) == 1
&& BARRIER_P (NEXT_INSN (head)));
if (BB_END (bb) == head)
{
if (EDGE_COUNT (bb->succs) > 1)
- gcc_assert (control_flow_insn_p (head)
+ gcc_assert (control_flow_insn_p (prev_non_location_insn
+ (head, BB_HEAD (bb)))
|| has_edge_p (bb->succs, EDGE_COMPLEX));
bb = 0;
}
-
+
head = NEXT_INSN (head);
}
}
insn = NEXT_INSN (insn);
}
- while (NOTE_P (insn))
+ while (NOTE_P (insn) || DEBUG_INSN_P (insn))
{
if (insn == BB_END (bb))
return NULL_RTX;
while (NOTE_P (insn)
|| JUMP_P (insn)
+ || DEBUG_INSN_P (insn)
|| (skip_use_p
&& NONJUMP_INSN_P (insn)
&& GET_CODE (PATTERN (insn)) == USE))
for (insn = start; ; insn = NEXT_INSN (insn))
{
- if (NOTE_P (insn))
+ if (NOTE_P (insn) || DEBUG_INSN_P (insn))
goto insn_done;
gcc_assert(NONJUMP_INSN_P (insn) || CALL_P (insn));
else
{
insn_b = prev_nonnote_insn (if_info->cond_earliest);
+ while (insn_b && DEBUG_INSN_P (insn_b))
+ insn_b = prev_nonnote_insn (insn_b);
/* We're going to be moving the evaluation of B down from above