+2010-07-26 Naveen.H.S <naveen.S@kpitcummins.com>
+
+ * configure.ac: Support all v850 targets.
+ * configure: Regenerate.
+
2010-07-23 Marc Glisse <marc.glisse@normalesup.org>
PR bootstrap/44455
v810-*-*)
noconfigdirs="$noconfigdirs bfd binutils gas gcc gdb ld target-libstdc++-v3 opcodes target-libgloss ${libgcj}"
;;
- v850-*-*)
- noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
- ;;
- v850e-*-*)
- noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
- ;;
- v850ea-*-*)
+ v850*-*-*)
noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
;;
vax-*-vms)
v810-*-*)
noconfigdirs="$noconfigdirs bfd binutils gas gcc gdb ld target-libstdc++-v3 opcodes target-libgloss ${libgcj}"
;;
- v850-*-*)
- noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
- ;;
- v850e-*-*)
- noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
- ;;
- v850ea-*-*)
+ v850*-*-*)
noconfigdirs="$noconfigdirs target-libgloss ${libgcj}"
;;
vax-*-vms)
+2010-07-26 Naveen.H.S <naveen.S@kpitcummins.com>
+
+ * config/v850/lib1funcs.asm (save_r2_r31, return_r2_r31,
+ save_r20_r31, return_r20_r31, save_r21_r31, return_r21_r31,
+ save_r22_r31, return_r22_r31, save_r23_r31, return_r23_r31,
+ save_r24_r31, return_r24_r31, save_r25_r31, return_r25_r31,
+ save_r26_r31, return_r26_r31, save_r27_r31, return_r27_r31,
+ save_r28_r31, return_r28_r31, save_r29_r31, return_r29_r31,
+ save_r31, return_r31, save_interrupt, return_interrupt,
+ save_all_interrupt, return_all_interrupt, L_save_r2_r31,
+ L_return_interrupt, callt_return_interrupt, L_restore_all_interrupt,
+ L_save_##START##_r31c, L_callt_save_r31c: Update as per the
+ new ABI requirements.
+ save_r6_r9, L_callt_save_r6_r9: Remove.
+ * config/v850/predicates.md (even_reg_operand, disp23_operand,
+ const_float_1_operand, const_float_0_operand): New predicates.
+ (pattern_is_ok_for_prepare, pattern_is_ok_for_prologue,
+ pattern_is_ok_for_epilogue): Update as per the ABI requirements.
+ * config/v850/t-v850: Update multilibs for new target variants.
+ (save_varargs, callt_save_varargs, callt_save_r6_r9): Remove.
+ * config/v850/t-v850e: Likewise.
+ * config/v850/v850.c (v850_issue_rate): New.
+ (v850_strict_argument_naming): New.
+ (function_arg): Modify to implement the new ABI.
+ (print_operand): Update case 'z' to support float modes.
+ (output_move_single): Modify to generate better assembly.
+ (v850_float_z_comparison_operator, v850_select_cc_mode,
+ v850_float_nz_comparison_operator, v850_gen_float_compare,
+ v850_gen_compare): New functions to support comparison of
+ float values.
+ (ep_memory_offset): Add support for V850E2 targets.
+ (INTERRUPT_FIXED_NUM, INTERRUPT_ALL_SAVE_NUM): Update.
+ (INTERRUPT_REGPARM_NUM): Remove.
+ (compute_register_save_size): Add extra case to save/restore
+ long call.
+ (use_prolog_function): New function to support prologue.
+ (expand_prologue): Add support for V850E2 targets and update
+ as per the current ABI requirements.
+ (expand_epilogue): Likewise.
+ (construct_restore_jr): Modify based on TARGET_LONG_CALLS.
+ (construct_save_jarl): Likewise.
+ (construct_dispose_instruction): Update as per the current ABI
+ requirements.
+ (construct_prepare_instruction): Likewise.
+ * config/v850/v850.h (TARGET_CPU_DEFAULT): Add target predefines.
+ (TARGET_CPU_v850e2, TARGET_CPU_v850e2v3): Define.
+ (CPP_SPEC): Updated to support v850e2 targets.
+ (STRICT_ALIGNMENT): Modified.
+ (FIRST_PSEUDO_REGISTER): Updated to add even registers.
+ (FIXED_REGISTERS): Likewise.
+ (CALL_USED_REGISTERS): Likewise.
+ (CONDITIONAL_REGISTER_USAGE): Updated.
+ (HARD_REGNO_MODE_OK): Updated.
+ (reg_class): Updated to add even registers.
+ (REG_CLASS_NAMES): Likewise.
+ (REG_CLASS_CONTENTS): Likewise.
+ (REGNO_REG_CLASS): Updated for CC registers.
+ (REG_CLASS_FROM_LETTER): Added support for even registers.
+ (REGNO_OK_FOR_BASE_P): Updated for CC registers.
+ (STACK_POINTER_REGNUM, FRAME_POINTER_REGNUM, LINK_POINTER_REGNUM,
+ ARG_POINTER_REGNUM): Updated.
+ (FUNCTION_ARG_ADVANCE): Define.
+ (REG_PARM_STACK_SPACE): Update as per the current ABI requirements.
+ (OUTGOING_REG_PARM_STACK_SPACE): Remove.
+ (EXTRA_CONSTRAINT): Add new constraint 'W' for 23-bit displacement.
+ (GO_IF_LEGITIMATE_ADDRESS): Updated.
+ (SELECT_CC_MODE): Define.
+ (REGISTER_NAMES): Updated to add psw and fcc registers.
+ (ADDITIONAL_REGISTER_NAMES): Updated.
+ (ASM_OUTPUT_ADDR_DIFF_ELT): Updated to support new targets.
+ (JUMP_TABLES_IN_TEXT_SECTION): Updated.
+ * config/v850/v850.md (define_constants): Define new constants.
+ (type): Update store, bit1, macc, div, fpu and single attributes.
+ (cpu): New attribute.
+ (cc): Add set_z attribute.
+ (unsign23byte_load, sign23byte_load, unsign23hword_load,
+ sign23hword_load, 23word_load, 23byte_store, 23hword_store,
+ 23word_store): New instructions for 23-bit displacement load and
+ store.
+ (movqi_internal, movhi_internal): Update the attributes.
+ (movsi, movsi_internal_v850e): Updated to support v850e2 targets.
+ (movsi_internal_v850e, movsi_internal, movsf_internal): Update
+ the attributes.
+ (v850_tst1): Modified using CC_REGNUM.
+ (tstsi): Remove.
+ (cmpsi): Change from define_insn to define_expand.
+ (cmpsi_insn, cmpsf, cmpdf): New instructions.
+ (addsi3, subsi3, negsi2, divmodsi4, udivmodsi4, divmodhi4,
+ udivmodhi4, v850_clr1_1, v850_clr1_2, v850_clr1_3, andsi3,
+ v850_set1_1, v850_set1_3, iorsi3, v850_not1_1, v850_not1_3, xorsi3,
+ one_cmplsi2): Clobber the CC_REGNUM register.
+ (v850_clr1_1, v850_clr1_2, v850_clr1_3, andsi3, v850_set1_1,
+ v850_set1_2, v850_set1_3, iorsi3, v850_not1_1, v850_not1_2,
+ v850_not1_3, xorsi3, one_cmplsi2): Update the attributes
+ accordingly.
+ (setf_insn, set_z_insn, set_nz_insn): New instructions for the
+ v850e2v3 target.
+ (movsicc_normal_cc, movsicc_reversed_cc): New instructions.
+ (movsicc, movsicc_normal, movsicc_reversed): Add support for V850E2
+ targets.
+ (sasf_1, sasf_2): Remove.
+ (sasf): New instruction.
+ (rotlhi3, rotlhi3_8, rotlsi3, rotlsi3_16): Update to support V850E2
+ targets.  Clobber the CC_REGNUM register and update the
+ attributes.
+ (branch_z_normal, branch_z_invert, branch_nz_normal,
+ branch_nz_invert): New branch related instructions.
+ (jump): Updated the attributes.
+ (switch): Update to support new targets.  Clobber the CC_REGNUM
+ register and update the attributes.
+ (call_internal_short, call_internal_long, call_value_internal_short,
+ call_value_internal_long): Updated the attributes.
+ (zero_extendhisi2, zero_extendqisi2): Clobber the CC_REGNUM
+ register and update the attributes.
+ (extendhisi_insn, extendhisi2, extendqisi_insn, extendqisi2):
+ Update to support new targets.  Clobber the CC_REGNUM register.
+ (ashlsi3_v850e2, lshrsi3_v850e2, ashrsi3_v850e2): New shift
+ instructions.
+ (lshrsi3, ashrsi3): Clobber the CC_REGNUM register and update
+ the attributes.
+ (ffssi2, addsf3, adddf3, subsf3, subdf3, mulsf3, muldf3, divsf3,
+ divdf3, minsf3, mindf3, maxsf3, maxdf3, abssf2, absdf2, negsf2,
+ negdf2, sqrtsf2, sqrtdf2, truncsfsi2, truncdfsi2, floatsisf2,
+ floatsidf2, extendsfdf2, extenddfsf2, recipsf2, recipdf2,
+ rsqrtsf2, rsqrtdf2, maddsf4, msubsf4, nmaddsf4, nmsubsf4,
+ cmpsf_le_insn, cmpsf_lt_insn, cmpsf_ge_insn, cmpsf_gt_insn,
+ cmpsf_eq_insn, cmpsf_ne_insn, cmpdf_le_insn, cmpdf_lt_insn,
+ cmpdf_ge_insn, cmpdf_gt_insn, cmpdf_eq_insn, cmpdf_ne_insn, trfsr,
+ movsfcc, movdfcc, movsfcc_z_insn, movsfcc_nz_insn, movdfcc_z_insn,
+ movdfcc_nz_insn, movedfcc_z_zero, movedfcc_nz_zero): New floating
+ point instructions for the v850e2v3 target.
+ (callt_save_interrupt, callt_return_interrupt, return_interrupt):
+ Add support for V850E2 targets; clobber the CC_REGNUM register.
+ (callt_save_all_interrupt, callt_restore_all_interrupt): Add
+ support for new targets.
+ * config/v850/v850-modes.def: New file.
+ * config/v850/v850.opt (mstrict-align): Remove.
+ (mno-strict-align, mjump-tables-in-data-section, mv850e2,
+ mv850e2v3): New command line options for V850.
+ * config.gcc: Update for the newly added files.
+ * doc/invoke.texi: Document the newly added command line options
+ for the V850 target.
+
2010-07-26 Richard Guenther <rguenther@suse.de>
PR tree-optimization/45052
tm_p_file=v850/v850-protos.h
tmake_file=v850/t-v850e
md_file=v850/v850.md
+ extra_modes=v850/v850-modes.def
out_file=v850/v850.c
extra_options="${extra_options} v850/v850.opt"
if test x$stabs = xyes
tm_p_file=v850/v850-protos.h
tmake_file=v850/t-v850e
md_file=v850/v850.md
+ extra_modes=v850/v850-modes.def
out_file=v850/v850.c
extra_options="${extra_options} v850/v850.opt"
if test x$stabs = xyes
add r7, r10
jmp [r31]
#endif /* __v850__ */
-#if defined(__v850e__) || defined(__v850ea__)
+#if defined(__v850e__) || defined(__v850ea__) || defined(__v850e2__) || defined(__v850e2v3__)
/* This routine is almost unneccesarry because gcc
generates the MUL instruction for the RTX mulsi3.
But if someone wants to link his application with
.align 2
.globl __save_r2_r29
.type __save_r2_r29,@function
- /* Allocate space and save registers 2, 20 .. 29 on the stack */
- /* Called via: jalr __save_r2_r29,r10 */
+ /* Allocate space and save registers 2, 20 .. 29 on the stack. */
+ /* Called via: jalr __save_r2_r29,r10. */
__save_r2_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r2_r29,.-__save_r2_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r2_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r2_r29. */
.align 2
.globl __return_r2_r29
.type __return_r2_r29,@function
.align 2
.globl __save_r20_r29
.type __save_r20_r29,@function
- /* Allocate space and save registers 20 .. 29 on the stack */
- /* Called via: jalr __save_r20_r29,r10 */
+ /* Allocate space and save registers 20 .. 29 on the stack. */
+ /* Called via: jalr __save_r20_r29,r10. */
__save_r20_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r20_r29,.-__save_r20_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r20_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r20_r29. */
.align 2
.globl __return_r20_r29
.type __return_r20_r29,@function
.align 2
.globl __save_r21_r29
.type __save_r21_r29,@function
- /* Allocate space and save registers 21 .. 29 on the stack */
- /* Called via: jalr __save_r21_r29,r10 */
+ /* Allocate space and save registers 21 .. 29 on the stack. */
+ /* Called via: jalr __save_r21_r29,r10. */
__save_r21_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r21_r29,.-__save_r21_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r21_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r21_r29. */
.align 2
.globl __return_r21_r29
.type __return_r21_r29,@function
.align 2
.globl __save_r22_r29
.type __save_r22_r29,@function
- /* Allocate space and save registers 22 .. 29 on the stack */
- /* Called via: jalr __save_r22_r29,r10 */
+ /* Allocate space and save registers 22 .. 29 on the stack. */
+ /* Called via: jalr __save_r22_r29,r10. */
__save_r22_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r22_r29,.-__save_r22_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r22_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r22_r29. */
.align 2
.globl __return_r22_r29
.type __return_r22_r29,@function
.align 2
.globl __save_r23_r29
.type __save_r23_r29,@function
- /* Allocate space and save registers 23 .. 29 on the stack */
- /* Called via: jalr __save_r23_r29,r10 */
+ /* Allocate space and save registers 23 .. 29 on the stack. */
+ /* Called via: jalr __save_r23_r29,r10. */
__save_r23_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r23_r29,.-__save_r23_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r23_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r23_r29. */
.align 2
.globl __return_r23_r29
.type __return_r23_r29,@function
.align 2
.globl __save_r24_r29
.type __save_r24_r29,@function
- /* Allocate space and save registers 24 .. 29 on the stack */
- /* Called via: jalr __save_r24_r29,r10 */
+ /* Allocate space and save registers 24 .. 29 on the stack. */
+ /* Called via: jalr __save_r24_r29,r10. */
__save_r24_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r24_r29,.-__save_r24_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r24_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r24_r29. */
.align 2
.globl __return_r24_r29
.type __return_r24_r29,@function
.align 2
.globl __save_r25_r29
.type __save_r25_r29,@function
- /* Allocate space and save registers 25 .. 29 on the stack */
- /* Called via: jalr __save_r25_r29,r10 */
+ /* Allocate space and save registers 25 .. 29 on the stack. */
+ /* Called via: jalr __save_r25_r29,r10. */
__save_r25_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r25_r29,.-__save_r25_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r25_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r25_r29. */
.align 2
.globl __return_r25_r29
.type __return_r25_r29,@function
.align 2
.globl __save_r26_r29
.type __save_r26_r29,@function
- /* Allocate space and save registers 26 .. 29 on the stack */
- /* Called via: jalr __save_r26_r29,r10 */
+ /* Allocate space and save registers 26 .. 29 on the stack. */
+ /* Called via: jalr __save_r26_r29,r10. */
__save_r26_r29:
#ifdef __EP__
mov ep,r1
jmp [r10]
.size __save_r26_r29,.-__save_r26_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r26_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r26_r29. */
.align 2
.globl __return_r26_r29
.type __return_r26_r29,@function
.align 2
.globl __save_r27_r29
.type __save_r27_r29,@function
- /* Allocate space and save registers 27 .. 29 on the stack */
- /* Called via: jalr __save_r27_r29,r10 */
+ /* Allocate space and save registers 27 .. 29 on the stack. */
+ /* Called via: jalr __save_r27_r29,r10. */
__save_r27_r29:
add -12,sp
st.w r29,0[sp]
jmp [r10]
.size __save_r27_r29,.-__save_r27_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r27_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r27_r29. */
.align 2
.globl __return_r27_r29
.type __return_r27_r29,@function
.align 2
.globl __save_r28_r29
.type __save_r28_r29,@function
- /* Allocate space and save registers 28,29 on the stack */
- /* Called via: jalr __save_r28_r29,r10 */
+ /* Allocate space and save registers 28,29 on the stack. */
+ /* Called via: jalr __save_r28_r29,r10. */
__save_r28_r29:
add -8,sp
st.w r29,0[sp]
jmp [r10]
.size __save_r28_r29,.-__save_r28_r29
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r28_r29 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r28_r29. */
.align 2
.globl __return_r28_r29
.type __return_r28_r29,@function
.align 2
.globl __save_r29
.type __save_r29,@function
- /* Allocate space and save register 29 on the stack */
- /* Called via: jalr __save_r29,r10 */
+ /* Allocate space and save register 29 on the stack. */
+ /* Called via: jalr __save_r29,r10. */
__save_r29:
add -4,sp
st.w r29,0[sp]
jmp [r10]
.size __save_r29,.-__save_r29
- /* Restore saved register 29, deallocate stack and return to the user */
- /* Called via: jr __return_r29 */
+ /* Restore saved register 29, deallocate stack and return to the user. */
+ /* Called via: jr __return_r29. */
.align 2
.globl __return_r29
.type __return_r29,@function
__save_r2_r31:
#ifdef __EP__
mov ep,r1
- addi -64,sp,sp
+ addi -48,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r23,40[ep]
- sst.w r22,44[ep]
- sst.w r21,48[ep]
- sst.w r20,52[ep]
- sst.w r2,56[ep]
- sst.w r31,60[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r23,24[ep]
+ sst.w r22,28[ep]
+ sst.w r21,32[ep]
+ sst.w r20,36[ep]
+ sst.w r2,40[ep]
+ sst.w r31,44[ep]
mov r1,ep
#else
- addi -64,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r23,40[sp]
- st.w r22,44[sp]
- st.w r21,48[sp]
- st.w r20,52[sp]
- st.w r2,56[sp]
- st.w r31,60[sp]
+ addi -48,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r23,24[sp]
+ st.w r22,28[sp]
+ st.w r21,32[sp]
+ st.w r20,36[sp]
+ st.w r2,40[sp]
+ st.w r31,44[sp]
#endif
jmp [r10]
.size __save_r2_r31,.-__save_r2_r31
- /* Restore saved registers, deallocate stack and return to the user */
+ /* Called via: jr __return_r2_r31. */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r20_r31. */
.align 2
.globl __return_r2_r31
.type __return_r2_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r23
- sld.w 44[ep],r22
- sld.w 48[ep],r21
- sld.w 52[ep],r20
- sld.w 56[ep],r2
- sld.w 60[ep],r31
- addi 64,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r23
+ sld.w 28[ep],r22
+ sld.w 32[ep],r21
+ sld.w 36[ep],r20
+ sld.w 40[ep],r2
+ sld.w 44[ep],r31
+ addi 48,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r23
- ld.w 44[sp],r22
- ld.w 48[sp],r21
- ld.w 52[sp],r20
- ld.w 56[sp],r2
- ld.w 60[sp],r31
- addi 64,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r23
+ ld.w 28[sp],r22
+ ld.w 32[sp],r21
+ ld.w 36[sp],r20
+ ld.w 40[sp],r2
+ ld.w 44[sp],r31
+ addi 48,sp,sp
#endif
jmp [r31]
.size __return_r2_r31,.-__return_r2_r31
.align 2
.globl __save_r20_r31
.type __save_r20_r31,@function
- /* Allocate space and save registers 20 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r20_r31,r10 */
+ /* Allocate space and save registers 20 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r20_r31,r10. */
__save_r20_r31:
#ifdef __EP__
mov ep,r1
- addi -60,sp,sp
+ addi -44,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r23,40[ep]
- sst.w r22,44[ep]
- sst.w r21,48[ep]
- sst.w r20,52[ep]
- sst.w r31,56[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r23,24[ep]
+ sst.w r22,28[ep]
+ sst.w r21,32[ep]
+ sst.w r20,36[ep]
+ sst.w r31,40[ep]
mov r1,ep
#else
- addi -60,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r23,40[sp]
- st.w r22,44[sp]
- st.w r21,48[sp]
- st.w r20,52[sp]
- st.w r31,56[sp]
+ addi -44,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r23,24[sp]
+ st.w r22,28[sp]
+ st.w r21,32[sp]
+ st.w r20,36[sp]
+ st.w r31,40[sp]
#endif
jmp [r10]
.size __save_r20_r31,.-__save_r20_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r20_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r20_r31. */
.align 2
.globl __return_r20_r31
.type __return_r20_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r23
- sld.w 44[ep],r22
- sld.w 48[ep],r21
- sld.w 52[ep],r20
- sld.w 56[ep],r31
- addi 60,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r23
+ sld.w 28[ep],r22
+ sld.w 32[ep],r21
+ sld.w 36[ep],r20
+ sld.w 40[ep],r31
+ addi 44,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r23
- ld.w 44[sp],r22
- ld.w 48[sp],r21
- ld.w 52[sp],r20
- ld.w 56[sp],r31
- addi 60,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r23
+ ld.w 28[sp],r22
+ ld.w 32[sp],r21
+ ld.w 36[sp],r20
+ ld.w 40[sp],r31
+ addi 44,sp,sp
#endif
jmp [r31]
.size __return_r20_r31,.-__return_r20_r31
.align 2
.globl __save_r21_r31
.type __save_r21_r31,@function
- /* Allocate space and save registers 21 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r21_r31,r10 */
+ /* Allocate space and save registers 21 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r21_r31,r10. */
__save_r21_r31:
-#ifdef __EP__
+#ifdef __EP__
mov ep,r1
- addi -56,sp,sp
+ addi -40,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r23,40[ep]
- sst.w r22,44[ep]
- sst.w r21,48[ep]
- sst.w r31,52[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r23,24[ep]
+ sst.w r22,28[ep]
+ sst.w r21,32[ep]
+ sst.w r31,36[ep]
mov r1,ep
-#else
- addi -56,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r23,40[sp]
- st.w r22,44[sp]
- st.w r21,48[sp]
- st.w r31,52[sp]
-#endif
jmp [r10]
+#else
+ addi -40,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r23,24[sp]
+ st.w r22,28[sp]
+ st.w r21,32[sp]
+ st.w r31,36[sp]
+ jmp [r10]
+#endif
.size __save_r21_r31,.-__save_r21_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r21_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r21_r31. */
.align 2
.globl __return_r21_r31
.type __return_r21_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r23
- sld.w 44[ep],r22
- sld.w 48[ep],r21
- sld.w 52[ep],r31
- addi 56,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r23
+ sld.w 28[ep],r22
+ sld.w 32[ep],r21
+ sld.w 36[ep],r31
+ addi 40,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r23
- ld.w 44[sp],r22
- ld.w 48[sp],r21
- ld.w 52[sp],r31
- addi 56,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r23
+ ld.w 28[sp],r22
+ ld.w 32[sp],r21
+ ld.w 36[sp],r31
+ addi 40,sp,sp
#endif
jmp [r31]
.size __return_r21_r31,.-__return_r21_r31
.align 2
.globl __save_r22_r31
.type __save_r22_r31,@function
- /* Allocate space and save registers 22 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r22_r31,r10 */
+ /* Allocate space and save registers 22 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r22_r31,r10. */
__save_r22_r31:
#ifdef __EP__
mov ep,r1
- addi -52,sp,sp
+ addi -36,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r23,40[ep]
- sst.w r22,44[ep]
- sst.w r31,48[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r23,24[ep]
+ sst.w r22,28[ep]
+ sst.w r31,32[ep]
mov r1,ep
#else
- addi -52,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r23,40[sp]
- st.w r22,44[sp]
- st.w r31,48[sp]
+ addi -36,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r23,24[sp]
+ st.w r22,28[sp]
+ st.w r31,32[sp]
#endif
jmp [r10]
.size __save_r22_r31,.-__save_r22_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r22_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r22_r31. */
.align 2
.globl __return_r22_r31
.type __return_r22_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r23
- sld.w 44[ep],r22
- sld.w 48[ep],r31
- addi 52,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r23
+ sld.w 28[ep],r22
+ sld.w 32[ep],r31
+ addi 36,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r23
- ld.w 44[sp],r22
- ld.w 48[sp],r31
- addi 52,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r23
+ ld.w 28[sp],r22
+ ld.w 32[sp],r31
+ addi 36,sp,sp
#endif
jmp [r31]
.size __return_r22_r31,.-__return_r22_r31
.align 2
.globl __save_r23_r31
.type __save_r23_r31,@function
- /* Allocate space and save registers 23 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r23_r31,r10 */
+ /* Allocate space and save registers 23 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r23_r31,r10. */
__save_r23_r31:
#ifdef __EP__
mov ep,r1
- addi -48,sp,sp
+ addi -32,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r23,40[ep]
- sst.w r31,44[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r23,24[ep]
+ sst.w r31,28[ep]
mov r1,ep
#else
- addi -48,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r23,40[sp]
- st.w r31,44[sp]
+ addi -32,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r23,24[sp]
+ st.w r31,28[sp]
#endif
jmp [r10]
.size __save_r23_r31,.-__save_r23_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r23_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r23_r31. */
.align 2
.globl __return_r23_r31
.type __return_r23_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r23
- sld.w 44[ep],r31
- addi 48,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r23
+ sld.w 28[ep],r31
+ addi 32,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r23
- ld.w 44[sp],r31
- addi 48,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r23
+ ld.w 28[sp],r31
+ addi 32,sp,sp
#endif
jmp [r31]
.size __return_r23_r31,.-__return_r23_r31
.align 2
.globl __save_r24_r31
.type __save_r24_r31,@function
- /* Allocate space and save registers 24 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r24_r31,r10 */
+ /* Allocate space and save registers 24 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r24_r31,r10. */
__save_r24_r31:
#ifdef __EP__
mov ep,r1
- addi -44,sp,sp
+ addi -28,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r24,36[ep]
- sst.w r31,40[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r24,20[ep]
+ sst.w r31,24[ep]
mov r1,ep
#else
- addi -44,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r24,36[sp]
- st.w r31,40[sp]
+ addi -28,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r24,20[sp]
+ st.w r31,24[sp]
#endif
jmp [r10]
.size __save_r24_r31,.-__save_r24_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r24_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r24_r31. */
.align 2
.globl __return_r24_r31
.type __return_r24_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r24
- sld.w 40[ep],r31
- addi 44,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r24
+ sld.w 24[ep],r31
+ addi 28,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r24
- ld.w 40[sp],r31
- addi 44,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r24
+ ld.w 24[sp],r31
+ addi 28,sp,sp
#endif
jmp [r31]
.size __return_r24_r31,.-__return_r24_r31
.align 2
.globl __save_r25_r31
.type __save_r25_r31,@function
- /* Allocate space and save registers 25 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r25_r31,r10 */
+ /* Allocate space and save registers 25 .. 29, 31 on the stack. */
+ /* Called via: jalr __save_r25_r31,r10. */
__save_r25_r31:
#ifdef __EP__
mov ep,r1
- addi -40,sp,sp
+ addi -24,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r25,32[ep]
- sst.w r31,36[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r25,16[ep]
+ sst.w r31,20[ep]
mov r1,ep
#else
- addi -40,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r25,32[sp]
- st.w r31,36[sp]
+ addi -24,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r25,16[sp]
+ st.w r31,20[sp]
#endif
jmp [r10]
.size __save_r25_r31,.-__save_r25_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r25_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r25_r31. */
.align 2
.globl __return_r25_r31
.type __return_r25_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r25
- sld.w 36[ep],r31
- addi 40,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r25
+ sld.w 20[ep],r31
+ addi 24,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r25
- ld.w 36[sp],r31
- addi 40,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r25
+ ld.w 20[sp],r31
+ addi 24,sp,sp
#endif
jmp [r31]
.size __return_r25_r31,.-__return_r25_r31
.align 2
.globl __save_r26_r31
.type __save_r26_r31,@function
- /* Allocate space and save registers 26 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r26_r31,r10 */
+ /* Allocate space and save registers 26 .. 29, 31 on the stack. */
+ /* Also allocate space for the argument save area. */
+ /* Called via: jalr __save_r26_r31,r10. */
__save_r26_r31:
#ifdef __EP__
mov ep,r1
- addi -36,sp,sp
+ addi -20,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r26,28[ep]
- sst.w r31,32[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r26,12[ep]
+ sst.w r31,16[ep]
mov r1,ep
#else
- addi -36,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r26,28[sp]
- st.w r31,32[sp]
+ addi -20,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r26,12[sp]
+ st.w r31,16[sp]
#endif
jmp [r10]
.size __save_r26_r31,.-__save_r26_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r26_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r26_r31. */
.align 2
.globl __return_r26_r31
.type __return_r26_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r26
- sld.w 32[ep],r31
- addi 36,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r26
+ sld.w 16[ep],r31
+ addi 20,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r26
- ld.w 32[sp],r31
- addi 36,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r26
+ ld.w 16[sp],r31
+ addi 20,sp,sp
#endif
jmp [r31]
.size __return_r26_r31,.-__return_r26_r31
.align 2
.globl __save_r27_r31
.type __save_r27_r31,@function
- /* Allocate space and save registers 27 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r27_r31,r10 */
+ /* Allocate space and save registers 27 .. 29, 31 on the stack. */
+ /* Also allocate space for the argument save area. */
+ /* Called via: jalr __save_r27_r31,r10. */
__save_r27_r31:
#ifdef __EP__
mov ep,r1
- addi -32,sp,sp
+ addi -16,sp,sp
mov sp,ep
- sst.w r29,16[ep]
- sst.w r28,20[ep]
- sst.w r27,24[ep]
- sst.w r31,28[ep]
+ sst.w r29,0[ep]
+ sst.w r28,4[ep]
+ sst.w r27,8[ep]
+ sst.w r31,12[ep]
mov r1,ep
#else
- addi -32,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r27,24[sp]
- st.w r31,28[sp]
+ addi -16,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r27,8[sp]
+ st.w r31,12[sp]
#endif
jmp [r10]
.size __save_r27_r31,.-__save_r27_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r27_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r27_r31. */
.align 2
.globl __return_r27_r31
.type __return_r27_r31,@function
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 16[ep],r29
- sld.w 20[ep],r28
- sld.w 24[ep],r27
- sld.w 28[ep],r31
- addi 32,sp,sp
+ sld.w 0[ep],r29
+ sld.w 4[ep],r28
+ sld.w 8[ep],r27
+ sld.w 12[ep],r31
+ addi 16,sp,sp
mov r1,ep
#else
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r27
- ld.w 28[sp],r31
- addi 32,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r27
+ ld.w 12[sp],r31
+ addi 16,sp,sp
#endif
jmp [r31]
.size __return_r27_r31,.-__return_r27_r31
.align 2
.globl __save_r28_r31
.type __save_r28_r31,@function
- /* Allocate space and save registers 28 .. 29, 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r28_r31,r10 */
+ /* Allocate space and save registers 28 .. 29, 31 on the stack. */
+ /* Also allocate space for the argument save area. */
+ /* Called via: jalr __save_r28_r31,r10. */
__save_r28_r31:
- addi -28,sp,sp
- st.w r29,16[sp]
- st.w r28,20[sp]
- st.w r31,24[sp]
+ addi -12,sp,sp
+ st.w r29,0[sp]
+ st.w r28,4[sp]
+ st.w r31,8[sp]
jmp [r10]
.size __save_r28_r31,.-__save_r28_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r28_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r28_r31. */
.align 2
.globl __return_r28_r31
.type __return_r28_r31,@function
__return_r28_r31:
- ld.w 16[sp],r29
- ld.w 20[sp],r28
- ld.w 24[sp],r31
- addi 28,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r28
+ ld.w 8[sp],r31
+ addi 12,sp,sp
jmp [r31]
.size __return_r28_r31,.-__return_r28_r31
#endif /* L_save_28c */
.align 2
.globl __save_r29_r31
.type __save_r29_r31,@function
- /* Allocate space and save registers 29 & 31 on the stack */
- /* Also allocate space for the argument save area */
- /* Called via: jalr __save_r29_r31,r10 */
+ /* Allocate space and save registers 29 & 31 on the stack. */
+ /* Also allocate space for the argument save area. */
+ /* Called via: jalr __save_r29_r31,r10. */
__save_r29_r31:
- addi -24,sp,sp
- st.w r29,16[sp]
- st.w r31,20[sp]
+ addi -8,sp,sp
+ st.w r29,0[sp]
+ st.w r31,4[sp]
jmp [r10]
.size __save_r29_r31,.-__save_r29_r31
- /* Restore saved registers, deallocate stack and return to the user */
- /* Called via: jr __return_r29_r31 */
+ /* Restore saved registers, deallocate stack and return to the user. */
+ /* Called via: jr __return_r29_r31. */
.align 2
.globl __return_r29_r31
.type __return_r29_r31,@function
__return_r29_r31:
- ld.w 16[sp],r29
- ld.w 20[sp],r31
- addi 24,sp,sp
+ ld.w 0[sp],r29
+ ld.w 4[sp],r31
+ addi 8,sp,sp
jmp [r31]
.size __return_r29_r31,.-__return_r29_r31
#endif /* L_save_29c */
.type __save_r31,@function
/* Allocate space and save register 31 on the stack. */
/* Also allocate space for the argument save area. */
- /* Called via: jalr __save_r31,r10 */
+ /* Called via: jalr __save_r31,r10. */
__save_r31:
- addi -20,sp,sp
- st.w r31,16[sp]
+ addi -4,sp,sp
+ st.w r31,0[sp]
jmp [r10]
.size __save_r31,.-__save_r31
/* Restore saved registers, deallocate stack and return to the user. */
- /* Called via: jr __return_r31 */
+ /* Called via: jr __return_r31. */
.align 2
.globl __return_r31
.type __return_r31,@function
__return_r31:
- ld.w 16[sp],r31
- addi 20,sp,sp
+ ld.w 0[sp],r31
+ addi 4,sp,sp
jmp [r31]
.size __return_r31,.-__return_r31
#endif /* L_save_31c */
-#ifdef L_save_varargs
- .text
- .align 2
- .globl __save_r6_r9
- .type __save_r6_r9,@function
- /* Save registers 6 .. 9 on the stack for variable argument functions. */
- /* Called via: jalr __save_r6_r9,r10 */
-__save_r6_r9:
-#ifdef __EP__
- mov ep,r1
- mov sp,ep
- sst.w r6,0[ep]
- sst.w r7,4[ep]
- sst.w r8,8[ep]
- sst.w r9,12[ep]
- mov r1,ep
-#else
- st.w r6,0[sp]
- st.w r7,4[sp]
- st.w r8,8[sp]
- st.w r9,12[sp]
-#endif
- jmp [r10]
- .size __save_r6_r9,.-__save_r6_r9
-#endif /* L_save_varargs */
-
#ifdef L_save_interrupt
.text
.align 2
.globl __save_interrupt
.type __save_interrupt,@function
/* Save registers r1, r4 on stack and load up with expected values. */
- /* Note, 12 bytes of stack have already been allocated. */
- /* Called via: jalr __save_interrupt,r10 */
+ /* Note, 20 bytes of stack have already been allocated. */
+ /* Called via: jalr __save_interrupt,r10. */
__save_interrupt:
+ /* add -20,sp ; st.w r11,16[sp] ; st.w r10,12[sp] ; */
st.w ep,0[sp]
st.w gp,4[sp]
st.w r1,8[sp]
.size __save_interrupt,.-__save_interrupt
/* Restore saved registers, deallocate stack and return from the interrupt. */
- /* Called via: jr __return_interrupt */
+ /* Called via: jr __return_interrupt. */
.align 2
.globl __return_interrupt
.type __return_interrupt,@function
ld.w 4[sp],gp
ld.w 8[sp],r1
ld.w 12[sp],r10
- addi 16,sp,sp
+ ld.w 16[sp],r11
+ addi 20,sp,sp
reti
.size __return_interrupt,.-__return_interrupt
#endif /* L_save_interrupt */
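
For reference, the interrupt frame assumed by the save/restore pair above can be sketched as follows (offsets inferred from the loads in __return_interrupt; this is a reading aid, not part of the patch):

```text
sp+0   ep
sp+4   gp
sp+8   r1
sp+12  r10   (stored by the caller before jalr __save_interrupt)
sp+16  r11   (stored by the caller before jalr __save_interrupt)
-- 20 bytes total, matching "addi 20,sp,sp" in __return_interrupt
```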
.type __save_all_interrupt,@function
/* Save all registers except for those saved in __save_interrupt. */
/* Allocate enough stack for all of the registers & 16 bytes of space. */
- /* Called via: jalr __save_all_interrupt,r10 */
+ /* Called via: jalr __save_all_interrupt,r10. */
__save_all_interrupt:
- addi -120,sp,sp
+ addi -104,sp,sp
#ifdef __EP__
mov ep,r1
mov sp,ep
- sst.w r31,116[ep]
- sst.w r2,112[ep]
- sst.w gp,108[ep]
- sst.w r6,104[ep]
- sst.w r7,100[ep]
- sst.w r8,96[ep]
- sst.w r9,92[ep]
- sst.w r11,88[ep]
- sst.w r12,84[ep]
- sst.w r13,80[ep]
- sst.w r14,76[ep]
- sst.w r15,72[ep]
- sst.w r16,68[ep]
- sst.w r17,64[ep]
- sst.w r18,60[ep]
- sst.w r19,56[ep]
- sst.w r20,52[ep]
- sst.w r21,48[ep]
- sst.w r22,44[ep]
- sst.w r23,40[ep]
- sst.w r24,36[ep]
- sst.w r25,32[ep]
- sst.w r26,28[ep]
- sst.w r27,24[ep]
- sst.w r28,20[ep]
- sst.w r29,16[ep]
+ sst.w r31,100[ep]
+ sst.w r2,96[ep]
+ sst.w gp,92[ep]
+ sst.w r6,88[ep]
+ sst.w r7,84[ep]
+ sst.w r8,80[ep]
+ sst.w r9,76[ep]
+ sst.w r11,72[ep]
+ sst.w r12,68[ep]
+ sst.w r13,64[ep]
+ sst.w r14,60[ep]
+ sst.w r15,56[ep]
+ sst.w r16,52[ep]
+ sst.w r17,48[ep]
+ sst.w r18,44[ep]
+ sst.w r19,40[ep]
+ sst.w r20,36[ep]
+ sst.w r21,32[ep]
+ sst.w r22,28[ep]
+ sst.w r23,24[ep]
+ sst.w r24,20[ep]
+ sst.w r25,16[ep]
+ sst.w r26,12[ep]
+ sst.w r27,8[ep]
+ sst.w r28,4[ep]
+ sst.w r29,0[ep]
mov r1,ep
#else
- st.w r31,116[sp]
- st.w r2,112[sp]
- st.w gp,108[sp]
- st.w r6,104[sp]
- st.w r7,100[sp]
- st.w r8,96[sp]
- st.w r9,92[sp]
- st.w r11,88[sp]
- st.w r12,84[sp]
- st.w r13,80[sp]
- st.w r14,76[sp]
- st.w r15,72[sp]
- st.w r16,68[sp]
- st.w r17,64[sp]
- st.w r18,60[sp]
- st.w r19,56[sp]
- st.w r20,52[sp]
- st.w r21,48[sp]
- st.w r22,44[sp]
- st.w r23,40[sp]
- st.w r24,36[sp]
- st.w r25,32[sp]
- st.w r26,28[sp]
- st.w r27,24[sp]
- st.w r28,20[sp]
- st.w r29,16[sp]
+ st.w r31,100[sp]
+ st.w r2,96[sp]
+ st.w gp,92[sp]
+ st.w r6,88[sp]
+ st.w r7,84[sp]
+ st.w r8,80[sp]
+ st.w r9,76[sp]
+ st.w r11,72[sp]
+ st.w r12,68[sp]
+ st.w r13,64[sp]
+ st.w r14,60[sp]
+ st.w r15,56[sp]
+ st.w r16,52[sp]
+ st.w r17,48[sp]
+ st.w r18,44[sp]
+ st.w r19,40[sp]
+ st.w r20,36[sp]
+ st.w r21,32[sp]
+ st.w r22,28[sp]
+ st.w r23,24[sp]
+ st.w r24,20[sp]
+ st.w r25,16[sp]
+ st.w r26,12[sp]
+ st.w r27,8[sp]
+ st.w r28,4[sp]
+ st.w r29,0[sp]
#endif
jmp [r10]
.size __save_all_interrupt,.-__save_all_interrupt
.type __restore_all_interrupt,@function
/* Restore all registers saved in __save_all_interrupt and
deallocate the stack space. */
- /* Called via: jalr __restore_all_interrupt,r10 */
+ /* Called via: jalr __restore_all_interrupt,r10. */
__restore_all_interrupt:
#ifdef __EP__
mov ep,r1
mov sp,ep
- sld.w 116[ep],r31
- sld.w 112[ep],r2
- sld.w 108[ep],gp
- sld.w 104[ep],r6
- sld.w 100[ep],r7
- sld.w 96[ep],r8
- sld.w 92[ep],r9
- sld.w 88[ep],r11
- sld.w 84[ep],r12
- sld.w 80[ep],r13
- sld.w 76[ep],r14
- sld.w 72[ep],r15
- sld.w 68[ep],r16
- sld.w 64[ep],r17
- sld.w 60[ep],r18
- sld.w 56[ep],r19
- sld.w 52[ep],r20
- sld.w 48[ep],r21
- sld.w 44[ep],r22
- sld.w 40[ep],r23
- sld.w 36[ep],r24
- sld.w 32[ep],r25
- sld.w 28[ep],r26
- sld.w 24[ep],r27
- sld.w 20[ep],r28
- sld.w 16[ep],r29
+ sld.w 100[ep],r31
+ sld.w 96[ep],r2
+ sld.w 92[ep],gp
+ sld.w 88[ep],r6
+ sld.w 84[ep],r7
+ sld.w 80[ep],r8
+ sld.w 76[ep],r9
+ sld.w 72[ep],r11
+ sld.w 68[ep],r12
+ sld.w 64[ep],r13
+ sld.w 60[ep],r14
+ sld.w 56[ep],r15
+ sld.w 52[ep],r16
+ sld.w 48[ep],r17
+ sld.w 44[ep],r18
+ sld.w 40[ep],r19
+ sld.w 36[ep],r20
+ sld.w 32[ep],r21
+ sld.w 28[ep],r22
+ sld.w 24[ep],r23
+ sld.w 20[ep],r24
+ sld.w 16[ep],r25
+ sld.w 12[ep],r26
+ sld.w 8[ep],r27
+ sld.w 4[ep],r28
+ sld.w 0[ep],r29
mov r1,ep
#else
- ld.w 116[sp],r31
- ld.w 112[sp],r2
- ld.w 108[sp],gp
- ld.w 104[sp],r6
- ld.w 100[sp],r7
- ld.w 96[sp],r8
- ld.w 92[sp],r9
- ld.w 88[sp],r11
- ld.w 84[sp],r12
- ld.w 80[sp],r13
- ld.w 76[sp],r14
- ld.w 72[sp],r15
- ld.w 68[sp],r16
- ld.w 64[sp],r17
- ld.w 60[sp],r18
- ld.w 56[sp],r19
- ld.w 52[sp],r20
- ld.w 48[sp],r21
- ld.w 44[sp],r22
- ld.w 40[sp],r23
- ld.w 36[sp],r24
- ld.w 32[sp],r25
- ld.w 28[sp],r26
- ld.w 24[sp],r27
- ld.w 20[sp],r28
- ld.w 16[sp],r29
-#endif
- addi 120,sp,sp
+ ld.w 100[sp],r31
+ ld.w 96[sp],r2
+ ld.w 92[sp],gp
+ ld.w 88[sp],r6
+ ld.w 84[sp],r7
+ ld.w 80[sp],r8
+ ld.w 76[sp],r9
+ ld.w 72[sp],r11
+ ld.w 68[sp],r12
+ ld.w 64[sp],r13
+ ld.w 60[sp],r14
+ ld.w 56[sp],r15
+ ld.w 52[sp],r16
+ ld.w 48[sp],r17
+ ld.w 44[sp],r18
+ ld.w 40[sp],r19
+ ld.w 36[sp],r20
+ ld.w 32[sp],r21
+ ld.w 28[sp],r22
+ ld.w 24[sp],r23
+ ld.w 20[sp],r24
+ ld.w 16[sp],r25
+ ld.w 12[sp],r26
+ ld.w 8[sp],r27
+ ld.w 4[sp],r28
+ ld.w 0[sp],r29
+#endif
+ addi 104,sp,sp
jmp [r10]
.size __restore_all_interrupt,.-__restore_all_interrupt
#endif /* L_save_all_interrupt */
-
-#if defined __v850e__
+#if defined(__v850e__) || defined(__v850e1__) || defined(__v850e2__) || defined(__v850e2v3__)
#ifdef L_callt_save_r2_r29
/* Put these functions into the call table area. */
.call_table_text
.type __callt_return_r2_r29,@function
__callt_return_r2_r29: .short ctoff(.L_return_r2_r29)
-#endif /* L_callt_save_r2_r29 */
+#endif /* L_callt_save_r2_r29 */
#ifdef L_callt_save_r2_r31
/* Put these functions into the call table area. */
.L_save_r2_r31:
add -4, sp
st.w r2, 0[sp]
- prepare {r20 - r29, r31}, 4
+ prepare {r20 - r29, r31}, 0
ctret
/* Restore saved registers, deallocate stack and return to the user. */
/* Called via: callt ctoff(__callt_return_r2_r31). */
.align 2
.L_return_r2_r31:
- dispose 4, {r20 - r29, r31}
+ dispose 0, {r20 - r29, r31}
ld.w 0[sp], r2
addi 4, sp, sp
jmp [r31]
#endif /* L_callt_save_r2_r31 */
-
-#ifdef L_callt_save_r6_r9
- /* Put these functions into the call table area. */
- .call_table_text
-
- /* Save registers r6 - r9 onto the stack in the space reserved for them.
- Use by variable argument functions.
- Called via: callt ctoff(__callt_save_r6_r9). */
- .align 2
-.L_save_r6_r9:
-#ifdef __EP__
- mov ep,r1
- mov sp,ep
- sst.w r6,0[ep]
- sst.w r7,4[ep]
- sst.w r8,8[ep]
- sst.w r9,12[ep]
- mov r1,ep
-#else
- st.w r6,0[sp]
- st.w r7,4[sp]
- st.w r8,8[sp]
- st.w r9,12[sp]
-#endif
- ctret
-
- /* Place the offsets of the start of this routines into the call table. */
- .call_table_data
-
- .global __callt_save_r6_r9
- .type __callt_save_r6_r9,@function
-__callt_save_r6_r9: .short ctoff(.L_save_r6_r9)
-#endif /* L_callt_save_r6_r9 */
-
-
#ifdef L_callt_save_interrupt
/* Put these functions into the call table area. */
.call_table_text
.align 2
.L_save_interrupt:
/* SP has already been moved before callt ctoff(_save_interrupt). */
- /* addi -24, sp, sp */
+ /* R1, R10, R11, ctpc and ctpsw have already been saved before callt ctoff(_save_interrupt). */
+ /* addi -28, sp, sp */
+ /* st.w r1, 24[sp] */
+ /* st.w r10, 12[sp] */
+ /* st.w r11, 16[sp] */
+ /* stsr ctpc, r10 */
+ /* st.w r10, 20[sp] */
+ /* stsr ctpsw, r10 */
+ /* st.w r10, 24[sp] */
st.w ep, 0[sp]
st.w gp, 4[sp]
st.w r1, 8[sp]
- /* R10 has already been saved before callt ctoff(_save_interrupt). */
- /* st.w r10, 12[sp] */
mov hilo(__ep),ep
mov hilo(__gp),gp
ctret
+ .call_table_text
/* Restore saved registers, deallocate stack and return from the interrupt. */
/* Called via: callt ctoff(__callt_restore_interrupt). */
.align 2
.globl __return_interrupt
.type __return_interrupt,@function
.L_return_interrupt:
- ld.w 20[sp], r1
+ ld.w 24[sp], r1
ldsr r1, ctpsw
- ld.w 16[sp], r1
+ ld.w 20[sp], r1
ldsr r1, ctpc
+ ld.w 16[sp], r11
ld.w 12[sp], r10
ld.w 8[sp], r1
ld.w 4[sp], gp
ld.w 0[sp], ep
- addi 24, sp, sp
+ addi 28, sp, sp
reti
/* Place the offsets of the start of these routines into the call table. */
st.w r18, 4[sp]
st.w r19, 0[sp]
#endif
- prepare {r20 - r29, r31}, 4
+ prepare {r20 - r29, r31}, 0
ctret
 /* Restore all registers saved in __save_all_interrupt. */
 /* Called via: callt ctoff(__callt_restore_all_interrupt). */
.align 2
.L_restore_all_interrupt:
- dispose 4, {r20 - r29, r31}
-#ifdef __EP__
+ dispose 0, {r20 - r29, r31}
+#ifdef __EP__
mov ep, r1
mov sp, ep
sld.w 0 [ep], r19
ctret ;\
;\
/* Restore saved registers, deallocate stack and return. */ ;\
- /* Called via: callt ctoff(__return_START_r29) */ ;\
+ /* Called via: callt ctoff(__return_START_r29). */ ;\
.align 2 ;\
.L_return_##START##_r29: ;\
dispose 0, { START - r29 }, r31 ;\
/* Allocate space and save registers START .. r31 on the stack. */ ;\
/* Called via: callt ctoff(__callt_save_START_r31c). */ ;\
.L_save_##START##_r31c: ;\
- prepare { START - r29, r31}, 4 ;\
+ prepare { START - r29, r31}, 0 ;\
ctret ;\
;\
/* Restore saved registers, deallocate stack and return. */ ;\
/* Called via: callt ctoff(__return_START_r31c). */ ;\
.align 2 ;\
.L_return_##START##_r31c: ;\
- dispose 4, { START - r29, r31}, r31 ;\
+ dispose 0, { START - r29, r31}, r31 ;\
;\
/* Place the offsets of the start of these funcs into the call table. */;\
.call_table_data ;\
/* Allocate space and save register r31 on the stack. */
/* Called via: callt ctoff(__callt_save_r31c). */
.L_callt_save_r31c:
- prepare {r31}, 4
+ prepare {r31}, 0
ctret
/* Restore saved registers, deallocate stack and return. */
/* Called via: callt ctoff(__return_r31c). */
.align 2
.L_callt_return_r31c:
- dispose 4, {r31}, r31
+ dispose 0, {r31}, r31
/* Place the offsets of the start of these funcs into the call table. */
.call_table_data
mov r26, r10
mov r27, r11
jr __return_r26_r31
-#endif /* __v850__ */
-#if defined(__v850e__) || defined(__v850ea__)
+#else /* defined(__v850e__) */
/* (Ahi << 32 + Alo) * (Bhi << 32 + Blo) */
/* r7 r6 r9 r8 */
mov r8, r10
add r8, r11
add r9, r11
jmp [r31]
-
-#endif /* defined(__v850e__) || defined(__v850ea__) */
+#endif /* defined(__v850e__) */
.size ___muldi3, . - ___muldi3
#endif
+
return register_operand (op, mode);
})
+;; Return true if OP is an even-numbered register.
+
+(define_predicate "even_reg_operand"
+ (match_code "reg")
+{
+ return (GET_CODE (op) == REG
+ && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+ || ((REGNO (op) > 0) && (REGNO (op) < 32)
+ && ((REGNO (op) & 1) == 0))));
+})
+
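As a reading aid, the register-number test used by even_reg_operand above can be modeled in plain C. The FIRST_PSEUDO_REGISTER value below is a placeholder assumption, not taken from this patch:

```c
#include <assert.h>

/* Illustrative standalone model of the even_reg_operand check:
   hard registers r2..r30 qualify when their number is even;
   pseudo registers (>= FIRST_PSEUDO_REGISTER) are always allowed.  */
#define FIRST_PSEUDO_REGISTER 36  /* assumed placeholder value */

int even_reg_ok (unsigned int regno)
{
  return regno >= FIRST_PSEUDO_REGISTER
         || (regno > 0 && regno < 32 && (regno & 1) == 0);
}
```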
;; Return true if OP is a valid call operand.
(define_predicate "call_address_operand"
return (GET_CODE (op) == SYMBOL_REF || GET_CODE (op) == REG);
})
-;; TODO: Add a comment here.
+;; Return true if OP is a valid source operand for SImode move.
(define_predicate "movsi_source_operand"
(match_code "label_ref,symbol_ref,const_int,const_double,const,high,mem,reg,subreg")
return general_operand (op, mode);
})
-;; TODO: Add a comment here.
+;; Return true if OP is a valid operand for 23-bit displacement
+;; operations.
+
+(define_predicate "disp23_operand"
+ (match_code "const_int")
+{
+ if (GET_CODE (op) == CONST_INT
+ && ((unsigned)(INTVAL (op)) >= 0x8000)
+ && ((unsigned)(INTVAL (op)) < 0x400000))
+ return 1;
+ else
+ return 0;
+})
+
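The range accepted by disp23_operand above can be modeled in isolation. This is a hedged sketch, not part of the patch: it accepts constants too large for a signed 16-bit displacement but within the positive range of a 23-bit displacement, and the unsigned cast rejects negative values:

```c
#include <assert.h>

/* Standalone model of the disp23_operand range test (illustrative).  */
int disp23_ok (int val)
{
  return (unsigned) val >= 0x8000 && (unsigned) val < 0x400000;
}
```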
+;; Return true if OP is a SYMBOL_REF with a 16-bit signed value.
(define_predicate "special_symbolref_operand"
(match_code "symbol_ref")
return FALSE;
})
-;; TODO: Add a comment here.
+;; Return true if OP is a valid operand for bit-related operations:
+;; a constant with exactly one 1 bit in its binary representation.
(define_predicate "power_of_two_operand"
(match_code "const_int")
/* If there are no registers to save then the function prologue
is not suitable. */
- if (count <= 2)
+ if (count <= (TARGET_LONG_CALLS ? 3 : 2))
return 0;
/* The pattern matching has already established that we are adjusting the
}
/* Make sure that the last entries in the vector are clobbers. */
- for (; i < count; i++)
+ vector_element = XVECEXP (op, 0, i++);
+
+ if (GET_CODE (vector_element) != CLOBBER
+ || GET_CODE (XEXP (vector_element, 0)) != REG
+ || REGNO (XEXP (vector_element, 0)) != 10)
+ return 0;
+
+ if (TARGET_LONG_CALLS)
{
- vector_element = XVECEXP (op, 0, i);
+ vector_element = XVECEXP (op, 0, i++);
if (GET_CODE (vector_element) != CLOBBER
|| GET_CODE (XEXP (vector_element, 0)) != REG
- || !(REGNO (XEXP (vector_element, 0)) == 10
- || (TARGET_LONG_CALLS ? (REGNO (XEXP (vector_element, 0)) == 11) : 0 )))
+ || REGNO (XEXP (vector_element, 0)) != 11)
return 0;
}
- return 1;
+ return i == count;
})
;; Return nonzero if the given RTX is suitable for collapsing into
(mem:SI (plus:SI (reg:SI 3) (match_operand:SI n "immediate_operand" "i"))))
*/
- for (i = 3; i < count; i++)
+ for (i = 2; i < count; i++)
{
rtx vector_element = XVECEXP (op, 0, i);
rtx dest;
*/
- for (i = 2; i < count; i++)
+ for (i = 1; i < count; i++)
{
rtx vector_element = XVECEXP (op, 0, i);
rtx dest;
rtx src;
rtx plus;
+ if (GET_CODE (vector_element) == CLOBBER)
+ continue;
+
if (GET_CODE (vector_element) != SET)
return 0;
space just acquired by the first operand then abandon this quest.
Note: the test is <= because both values are negative. */
if (INTVAL (XEXP (plus, 1))
- <= INTVAL (XEXP (SET_SRC (XVECEXP (op, 0, 0)), 1)))
+ < INTVAL (XEXP (SET_SRC (XVECEXP (op, 0, 0)), 1)))
return 0;
}
return 1;
})
-;; TODO: Add a comment here.
+;; Return true if OP is a valid operand for bit-related operations:
+;; a constant with exactly one 0 bit in its binary representation.
(define_predicate "not_power_of_two_operand"
(match_code "const_int")
return 0;
return 1;
})
+
+;; Return true if OP is a floating-point constant with the value 1.
+
+(define_predicate "const_float_1_operand"
+ (match_code "const_double")
+{
+ if (GET_CODE (op) != CONST_DOUBLE
+ || mode != GET_MODE (op)
+ || (mode != DFmode && mode != SFmode))
+ return 0;
+
+ return op == CONST1_RTX(mode);
+})
+
+;; Return true if OP is a floating-point constant with the value 0.
+
+(define_predicate "const_float_0_operand"
+ (match_code "const_double")
+{
+ if (GET_CODE (op) != CONST_DOUBLE
+ || mode != GET_MODE (op)
+ || (mode != DFmode && mode != SFmode))
+ return 0;
+
+ return op == CONST0_RTX(mode);
+})
+
_save_28c \
_save_29c \
_save_31c \
- _save_varargs \
_save_interrupt \
_save_all_interrupt \
_callt_save_20 \
_callt_save_28c \
_callt_save_29c \
_callt_save_31c \
- _callt_save_varargs \
_callt_save_interrupt \
_callt_save_all_interrupt \
_callt_save_r2_r29 \
_callt_save_r2_r31 \
- _callt_save_r6_r9 \
_negdi2 \
_cmpdi2 \
_ucmpdi2 \
cat $(srcdir)/config/fp-bit.c >> fp-bit.c
# Create target-specific versions of the libraries
-MULTILIB_OPTIONS = mv850e
-MULTILIB_DIRNAMES = v850e
+MULTILIB_OPTIONS = mv850/mv850e/mv850e2/mv850e2v3
+MULTILIB_DIRNAMES = v850 v850e v850e2 v850e2v3
INSTALL_LIBGCC = install-multilib
-MULTILIB_MATCHES = mv850e=mv850e1
+MULTILIB_MATCHES = mv850e=mv850e1
TCFLAGS = -mno-app-regs -msmall-sld -Wa,-mwarn-signed-overflow -Wa,-mwarn-unsigned-overflow
_save_28c \
_save_29c \
_save_31c \
- _save_varargs \
_save_interrupt \
_save_all_interrupt \
_callt_save_20 \
_callt_save_28c \
_callt_save_29c \
_callt_save_31c \
- _callt_save_varargs \
_callt_save_interrupt \
_callt_save_all_interrupt \
_callt_save_r2_r29 \
_callt_save_r2_r31 \
- _callt_save_r6_r9 \
_negdi2 \
_cmpdi2 \
_ucmpdi2 \
--- /dev/null
+/* Definitions of target machine for GNU compiler. NEC V850 series
+ Copyright (C) 2005
+ Free Software Foundation, Inc.
+ Contributed by NEC EL
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING. If not, write to
+ the Free Software Foundation, 59 Temple Place - Suite 330,
+ Boston, MA 02111-1307, USA. */
+
+CC_MODE (CC_FPU_LT);
+CC_MODE (CC_FPU_LE);
+CC_MODE (CC_FPU_GT);
+CC_MODE (CC_FPU_GE);
+CC_MODE (CC_FPU_EQ);
+CC_MODE (CC_FPU_NE);
+
extern char * construct_dispose_instruction (rtx);
extern char * construct_prepare_instruction (rtx);
extern int ep_memory_operand (rtx, Mmode, int);
+extern int v850_float_z_comparison_operator (rtx, Mmode);
+extern int v850_float_nz_comparison_operator (rtx, Mmode);
+extern rtx v850_gen_compare (enum rtx_code, Mmode, rtx, rtx);
+extern Mmode v850_gen_float_compare (enum rtx_code, Mmode, rtx, rtx);
+extern Mmode v850_select_cc_mode (RTX_CODE, rtx, rtx);
#ifdef TREE_CODE
extern rtx function_arg (CUMULATIVE_ARGS *, Mmode, tree, int);
#endif
static section *v850_select_section (tree, int, unsigned HOST_WIDE_INT);
static void v850_encode_data_area (tree, rtx);
static void v850_encode_section_info (tree, rtx, int);
+static int v850_issue_rate (void);
static bool v850_return_in_memory (const_tree, const_tree);
static rtx v850_function_value (const_tree, const_tree, bool);
static void v850_setup_incoming_varargs (CUMULATIVE_ARGS *, enum machine_mode,
const_tree, bool);
static int v850_arg_partial_bytes (CUMULATIVE_ARGS *, enum machine_mode,
tree, bool);
+static bool v850_strict_argument_naming (CUMULATIVE_ARGS *);
static bool v850_can_eliminate (const int, const int);
static void v850_asm_trampoline_template (FILE *);
static void v850_trampoline_init (rtx, tree, rtx);
function is an interrupt handler. */
static int v850_interrupt_cache_p = FALSE;
+rtx v850_compare_op0, v850_compare_op1;
+
/* Whether current function is an interrupt handler. */
static int v850_interrupt_p = FALSE;
#undef TARGET_MACHINE_DEPENDENT_REORG
#define TARGET_MACHINE_DEPENDENT_REORG v850_reorg
+#undef TARGET_SCHED_ISSUE_RATE
+#define TARGET_SCHED_ISSUE_RATE v850_issue_rate
+
#undef TARGET_PROMOTE_PROTOTYPES
#define TARGET_PROMOTE_PROTOTYPES hook_bool_const_tree_true
#undef TARGET_TRAMPOLINE_INIT
#define TARGET_TRAMPOLINE_INIT v850_trampoline_init
+#undef TARGET_STRICT_ARGUMENT_NAMING
+#define TARGET_STRICT_ARGUMENT_NAMING v850_strict_argument_naming
+
struct gcc_target targetm = TARGET_INITIALIZER;
\f
/* Set the maximum size of small memory area TYPE to the value given
return true;
}
}
-\f
+
+/* Handle the TARGET_PASS_BY_REFERENCE target hook.
+ Specify whether to pass the argument by reference. */
+
static bool
v850_pass_by_reference (CUMULATIVE_ARGS *cum ATTRIBUTE_UNUSED,
enum machine_mode mode, const_tree type,
return size > 8;
}
+/* Implementing the Varargs Macros. */
+
+static bool
+v850_strict_argument_naming (CUMULATIVE_ARGS * ca ATTRIBUTE_UNUSED)
+{
+ return !TARGET_GHS;
+}
+
/* Return an RTX to represent where an argument with mode MODE
and type TYPE will be passed to a function. If the result
is NULL_RTX, the argument will be pushed. */
rtx result = NULL_RTX;
int size, align;
- if (TARGET_GHS && !named)
+ if (!named)
return NULL_RTX;
if (mode == BLKmode)
else
size = GET_MODE_SIZE (mode);
+ size = (size + UNITS_PER_WORD - 1) & ~(UNITS_PER_WORD - 1);
+
if (size < 1)
{
/* Once we have stopped using argument registers, do not start up again. */
return NULL_RTX;
}
- if (type)
+ if (size <= UNITS_PER_WORD && type)
align = TYPE_ALIGN (type) / BITS_PER_UNIT;
else
align = size;
return result;
}
-\f
/* Return the number of bytes which must be put into registers
for values which are part in registers and part in memory. */
-
static int
v850_arg_partial_bytes (CUMULATIVE_ARGS * cum, enum machine_mode mode,
tree type, bool named)
return 4 * UNITS_PER_WORD - cum->nbytes;
}
-\f
/* Return the high and low words of a CONST_DOUBLE */
static void
case 'z': /* reg or zero */
if (GET_CODE (x) == REG)
fputs (reg_names[REGNO (x)], file);
+ else if ((GET_MODE(x) == SImode
+ || GET_MODE(x) == DFmode
+ || GET_MODE(x) == SFmode)
+ && x == CONST0_RTX(GET_MODE(x)))
+ fputs (reg_names[0], file);
else
{
gcc_assert (x == const0_rtx);
return "mov %1,%0";
else if (CONST_OK_FOR_K (value)) /* Signed 16-bit immediate. */
- return "movea lo(%1),%.,%0";
+ return "movea %1,%.,%0";
else if (CONST_OK_FOR_L (value)) /* Upper 16 bits were set. */
- return "movhi hi(%1),%.,%0";
+ return "movhi hi0(%1),%.,%0";
/* A random constant. */
- else if (TARGET_V850E)
+ else if (TARGET_V850E || TARGET_V850E2_ALL)
return "mov %1,%0";
else
return "movhi hi(%1),%.,%0\n\tmovea lo(%1),%0,%0";
return "mov %F1,%0";
else if (CONST_OK_FOR_K (high)) /* Signed 16-bit immediate. */
- return "movea lo(%F1),%.,%0";
+ return "movea %F1,%.,%0";
else if (CONST_OK_FOR_L (high)) /* Upper 16 bits were set. */
- return "movhi hi(%F1),%.,%0";
+ return "movhi hi0(%F1),%.,%0";
/* A random constant. */
- else if (TARGET_V850E)
+ else if (TARGET_V850E || TARGET_V850E2_ALL)
return "mov %F1,%0";
else
|| GET_CODE (src) == SYMBOL_REF
|| GET_CODE (src) == CONST)
{
- if (TARGET_V850E)
+ if (TARGET_V850E || TARGET_V850E2_ALL)
return "mov hilo(%1),%0";
else
return "movhi hi(%1),%.,%0\n\tmovea lo(%1),%0,%0";
return "";
}
-\f
+/* Generate comparison code. */
+int
+v850_float_z_comparison_operator (rtx op, enum machine_mode mode)
+{
+ enum rtx_code code = GET_CODE (op);
+
+ if (GET_RTX_CLASS (code) != RTX_COMPARE
+ && GET_RTX_CLASS (code) != RTX_COMM_COMPARE)
+ return 0;
+
+ if (mode != GET_MODE (op) && mode != VOIDmode)
+ return 0;
+
+ if ((GET_CODE (XEXP (op, 0)) != REG
+ || REGNO (XEXP (op, 0)) != CC_REGNUM)
+ || XEXP (op, 1) != const0_rtx)
+ return 0;
+
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_LTmode)
+ return code == LT;
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_LEmode)
+ return code == LE;
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_EQmode)
+ return code == EQ;
+
+ return 0;
+}
+
+int
+v850_float_nz_comparison_operator (rtx op, enum machine_mode mode)
+{
+ enum rtx_code code = GET_CODE (op);
+
+ if (GET_RTX_CLASS (code) != RTX_COMPARE
+ && GET_RTX_CLASS (code) != RTX_COMM_COMPARE)
+ return 0;
+
+ if (mode != GET_MODE (op) && mode != VOIDmode)
+ return 0;
+
+ if ((GET_CODE (XEXP (op, 0)) != REG
+ || REGNO (XEXP (op, 0)) != CC_REGNUM)
+ || XEXP (op, 1) != const0_rtx)
+ return 0;
+
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_GTmode)
+ return code == GT;
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_GEmode)
+ return code == GE;
+ if (GET_MODE (XEXP (op, 0)) == CC_FPU_NEmode)
+ return code == NE;
+
+ return 0;
+}
+
+enum machine_mode
+v850_select_cc_mode (enum rtx_code cond, rtx op0, rtx op1)
+{
+ if (GET_MODE_CLASS (GET_MODE (op0)) == MODE_FLOAT)
+ {
+ switch (cond)
+ {
+ case LE:
+ return CC_FPU_LEmode;
+ case GE:
+ return CC_FPU_GEmode;
+ case LT:
+ return CC_FPU_LTmode;
+ case GT:
+ return CC_FPU_GTmode;
+ case EQ:
+ return CC_FPU_EQmode;
+ case NE:
+ return CC_FPU_NEmode;
+ default:
+ abort ();
+ }
+ }
+ return CCmode;
+}
+
+enum machine_mode
+v850_gen_float_compare (enum rtx_code cond, enum machine_mode mode ATTRIBUTE_UNUSED, rtx op0, rtx op1)
+{
+ if (GET_MODE(op0) == DFmode)
+ {
+ switch (cond)
+ {
+ case LE:
+ emit_insn (gen_cmpdf_le_insn (op0, op1));
+ break;
+ case GE:
+ emit_insn (gen_cmpdf_ge_insn (op0, op1));
+ break;
+ case LT:
+ emit_insn (gen_cmpdf_lt_insn (op0, op1));
+ break;
+ case GT:
+ emit_insn (gen_cmpdf_gt_insn (op0, op1));
+ break;
+ case EQ:
+ emit_insn (gen_cmpdf_eq_insn (op0, op1));
+ break;
+ case NE:
+ emit_insn (gen_cmpdf_ne_insn (op0, op1));
+ break;
+ default:
+ abort ();
+ }
+ }
+ else if (GET_MODE (op0) == SFmode)
+ {
+ switch (cond)
+ {
+ case LE:
+ emit_insn (gen_cmpsf_le_insn(op0, op1));
+ break;
+ case GE:
+ emit_insn (gen_cmpsf_ge_insn(op0, op1));
+ break;
+ case LT:
+ emit_insn (gen_cmpsf_lt_insn(op0, op1));
+ break;
+ case GT:
+ emit_insn (gen_cmpsf_gt_insn(op0, op1));
+ break;
+ case EQ:
+ emit_insn (gen_cmpsf_eq_insn(op0, op1));
+ break;
+ case NE:
+ emit_insn (gen_cmpsf_ne_insn(op0, op1));
+ break;
+ default:
+ abort ();
+ }
+ }
+ else
+ {
+ abort ();
+ }
+
+ return v850_select_cc_mode (cond, op0, op1);
+}
+
+rtx
+v850_gen_compare (enum rtx_code cond, enum machine_mode mode, rtx op0, rtx op1)
+{
+ if (GET_MODE_CLASS(GET_MODE (op0)) != MODE_FLOAT)
+ {
+ emit_insn (gen_cmpsi_insn (op0, op1));
+ return gen_rtx_fmt_ee (cond, mode, gen_rtx_REG(CCmode, CC_REGNUM), const0_rtx);
+ }
+ else
+ {
+ rtx cc_reg;
+ mode = v850_gen_float_compare (cond, mode, op0, op1);
+ cc_reg = gen_rtx_REG (mode, CC_REGNUM);
+ emit_insn (gen_rtx_SET(mode, cc_reg, gen_rtx_REG (mode, FCC_REGNUM)));
+
+ return gen_rtx_fmt_ee (cond, mode, cc_reg, const0_rtx);
+ }
+}
+
/* Return maximum offset supported for a short EP memory reference of mode
MODE and signedness UNSIGNEDP. */
case QImode:
if (TARGET_SMALL_SLD)
max_offset = (1 << 4);
- else if (TARGET_V850E
- && ( ( unsignedp && ! TARGET_US_BIT_SET)
- || (! unsignedp && TARGET_US_BIT_SET)))
+ else if ((TARGET_V850E || TARGET_V850E2_ALL)
+ && unsignedp)
max_offset = (1 << 4);
else
max_offset = (1 << 7);
case HImode:
if (TARGET_SMALL_SLD)
max_offset = (1 << 5);
- else if (TARGET_V850E
- && ( ( unsignedp && ! TARGET_US_BIT_SET)
- || (! unsignedp && TARGET_US_BIT_SET)))
+ else if ((TARGET_V850E || TARGET_V850E2_ALL)
+ && unsignedp)
max_offset = (1 << 5);
else
max_offset = (1 << 8);
}
}
-\f
/* # of registers saved by the interrupt handler. */
-#define INTERRUPT_FIXED_NUM 4
+#define INTERRUPT_FIXED_NUM 5
/* # of bytes for registers saved by the interrupt handler. */
#define INTERRUPT_FIXED_SAVE_SIZE (4 * INTERRUPT_FIXED_NUM)
-/* # of registers saved in register parameter area. */
-#define INTERRUPT_REGPARM_NUM 4
/* # of words saved for other registers. */
#define INTERRUPT_ALL_SAVE_NUM \
- (30 - INTERRUPT_FIXED_NUM + INTERRUPT_REGPARM_NUM)
+ (30 - INTERRUPT_FIXED_NUM)
#define INTERRUPT_ALL_SAVE_SIZE (4 * INTERRUPT_ALL_SAVE_NUM)
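For reference, the arithmetic the revised macros encode can be spelled out in a standalone sketch (the comments restating their meaning are an interpretation of the new ABI, not text from the patch): the handler itself now saves 5 fixed registers, and the 16-byte register-parameter area is gone, so a handler that makes calls saves 30 - 5 = 25 further words.

```c
#include <assert.h>

/* Sketch of the interrupt save-size macros under the new ABI:
   5 fixed registers saved by the handler, 4 bytes each, and the
   remaining 25 of the 30 candidate registers for call-making
   handlers (no separate register-parameter area any more).  */
enum {
  FIXED_NUM       = 5,
  FIXED_SAVE_SIZE = 4 * FIXED_NUM,      /* bytes for the fixed set */
  ALL_SAVE_NUM    = 30 - FIXED_NUM,     /* words for the rest */
  ALL_SAVE_SIZE   = 4 * ALL_SAVE_NUM    /* bytes for the rest */
};
```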
case 1: /* temp used to hold ep */
case 4: /* gp */
case 10: /* temp used to call interrupt save/restore */
+ case 11: /* temp used to call interrupt save/restore (long call) */
case EP_REGNUM: /* ep */
size += 4;
break;
+ crtl->outgoing_args_size);
}
-\f
+static int
+use_prolog_function (int num_save, int frame_size)
+{
+ int alloc_stack = (4 * num_save);
+ int unalloc_stack = frame_size - alloc_stack;
+ int save_func_len, restore_func_len;
+ int save_normal_len, restore_normal_len;
+
+ if (! TARGET_DISABLE_CALLT)
+ save_func_len = restore_func_len = 2;
+ else
+ save_func_len = restore_func_len = TARGET_LONG_CALLS ? (4+4+4+2+2) : 4;
+
+ if (unalloc_stack)
+ {
+ save_func_len += CONST_OK_FOR_J (-unalloc_stack) ? 2 : 4;
+ restore_func_len += CONST_OK_FOR_J (-unalloc_stack) ? 2 : 4;
+ }
+
+ /* See if we would have used ep to save the stack. */
+ if (TARGET_EP && num_save > 3 && (unsigned)frame_size < 255)
+ save_normal_len = restore_normal_len = (3 * 2) + (2 * num_save);
+ else
+ save_normal_len = restore_normal_len = 4 * num_save;
+
+ save_normal_len += CONST_OK_FOR_J (-frame_size) ? 2 : 4;
+ restore_normal_len += (CONST_OK_FOR_J (frame_size) ? 2 : 4) + 2;
+
+ /* Don't bother checking if we don't actually save any space.
+ This happens for instance if one register is saved and additional
+ stack space is allocated. */
+ return ((save_func_len + restore_func_len) < (save_normal_len + restore_normal_len));
+}
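The new `use_prolog_function` heuristic above can be restated as self-contained C for readers following the byte counts. This is a sketch, not the compiler's code: the `CONST_OK_FOR_J` range (5-bit signed immediate, -16..15) is an assumption taken from v850.h, and the target flags are passed in as plain booleans.

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed 5-bit signed immediate range of CONST_OK_FOR_J (per v850.h).  */
static bool const_ok_for_j (long v) { return v >= -16 && v <= 15; }

/* True when the out-of-line save/restore helpers are smaller than
   inline saves.  A callt pair costs 2 bytes each way; a jarl pair
   costs 4 each way (4+4+4+2+2 each way with -mlong-calls).  Inline
   code uses short ep-relative sst/sld when profitable, else 4-byte
   st.w/ld.w per register, plus the stack adjusts (and jmp on return).  */
static bool
use_prolog_function_sketch (int num_save, int frame_size,
                            bool use_callt, bool long_calls, bool use_ep)
{
  int alloc_stack = 4 * num_save;
  int unalloc_stack = frame_size - alloc_stack;

  int func_len = use_callt ? 2 + 2
                           : (long_calls ? 2 * (4 + 4 + 4 + 2 + 2) : 4 + 4);
  if (unalloc_stack)
    func_len += 2 * (const_ok_for_j (-unalloc_stack) ? 2 : 4);

  int normal_len = (use_ep && num_save > 3 && (unsigned) frame_size < 255)
                     ? 2 * ((3 * 2) + (2 * num_save))
                     : 2 * (4 * num_save);
  normal_len += (const_ok_for_j (-frame_size) ? 2 : 4);      /* prologue add */
  normal_len += (const_ok_for_j (frame_size) ? 2 : 4) + 2;   /* epilogue add + jmp */

  return func_len < normal_len;
}
```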
+
void
expand_prologue (void)
{
unsigned int i;
- int offset;
unsigned int size = get_frame_size ();
unsigned int actual_fsize;
unsigned int init_stack_alloc = 0;
rtx save_regs[32];
rtx save_all;
unsigned int num_save;
- unsigned int default_stack;
int code;
int interrupt_handler = v850_interrupt_function_p (current_function_decl);
long reg_saved = 0;
/* Save/setup global registers for interrupt functions right now. */
if (interrupt_handler)
{
- if (TARGET_V850E && ! TARGET_DISABLE_CALLT)
+ if (! TARGET_DISABLE_CALLT)
emit_insn (gen_callt_save_interrupt ());
else
emit_insn (gen_save_interrupt ());
actual_fsize -= INTERRUPT_ALL_SAVE_SIZE;
}
- /* Save arg registers to the stack if necessary. */
- else if (crtl->args.info.anonymous_args)
- {
- if (TARGET_PROLOG_FUNCTION && TARGET_V850E && !TARGET_DISABLE_CALLT)
- emit_insn (gen_save_r6_r9_v850e ());
- else if (TARGET_PROLOG_FUNCTION && ! TARGET_LONG_CALLS)
- emit_insn (gen_save_r6_r9 ());
- else
- {
- offset = 0;
- for (i = 6; i < 10; i++)
- {
- emit_move_insn (gen_rtx_MEM (SImode,
- plus_constant (stack_pointer_rtx,
- offset)),
- gen_rtx_REG (SImode, i));
- offset += 4;
- }
- }
- }
-
/* Identify all of the saved registers. */
num_save = 0;
- default_stack = 0;
- for (i = 1; i < 31; i++)
+ for (i = 1; i < 32; i++)
{
if (((1L << i) & reg_saved) != 0)
save_regs[num_save++] = gen_rtx_REG (Pmode, i);
}
- /* If the return pointer is saved, the helper functions also allocate
- 16 bytes of stack for arguments to be saved in. */
- if (((1L << LINK_POINTER_REGNUM) & reg_saved) != 0)
- {
- save_regs[num_save++] = gen_rtx_REG (Pmode, LINK_POINTER_REGNUM);
- default_stack = 16;
- }
-
/* See if we have an insn that allocates stack space and saves the particular
registers we want to. */
save_all = NULL_RTX;
- if (TARGET_PROLOG_FUNCTION && num_save > 0 && actual_fsize >= default_stack)
+ if (TARGET_PROLOG_FUNCTION && num_save > 0)
{
- int alloc_stack = (4 * num_save) + default_stack;
- int unalloc_stack = actual_fsize - alloc_stack;
- int save_func_len = 4;
- int save_normal_len;
-
- if (unalloc_stack)
- save_func_len += CONST_OK_FOR_J (unalloc_stack) ? 2 : 4;
-
- /* see if we would have used ep to save the stack */
- if (TARGET_EP && num_save > 3 && (unsigned)actual_fsize < 255)
- save_normal_len = (3 * 2) + (2 * num_save);
- else
- save_normal_len = 4 * num_save;
-
- save_normal_len += CONST_OK_FOR_J (actual_fsize) ? 2 : 4;
-
- /* Don't bother checking if we don't actually save any space.
- This happens for instance if one register is saved and additional
- stack space is allocated. */
- if (save_func_len < save_normal_len)
+ if (use_prolog_function (num_save, actual_fsize))
{
+ int alloc_stack = 4 * num_save;
+ int offset = 0;
+
save_all = gen_rtx_PARALLEL
(VOIDmode,
rtvec_alloc (num_save + 1
- + (TARGET_V850 ? (TARGET_LONG_CALLS ? 2 : 1) : 0)));
+ + (TARGET_DISABLE_CALLT ? (TARGET_LONG_CALLS ? 2 : 1) : 0)));
XVECEXP (save_all, 0, 0)
= gen_rtx_SET (VOIDmode,
stack_pointer_rtx,
- plus_constant (stack_pointer_rtx, -alloc_stack));
-
- offset = - default_stack;
+ gen_rtx_PLUS (Pmode,
+ stack_pointer_rtx,
+ GEN_INT (-alloc_stack)));
for (i = 0; i < num_save; i++)
{
+ offset -= 4;
XVECEXP (save_all, 0, i+1)
= gen_rtx_SET (VOIDmode,
gen_rtx_MEM (Pmode,
- plus_constant (stack_pointer_rtx,
- offset)),
+ gen_rtx_PLUS (Pmode,
+ stack_pointer_rtx,
+ GEN_INT (offset))),
save_regs[i]);
- offset -= 4;
}
- if (TARGET_V850)
+ if (TARGET_DISABLE_CALLT)
{
XVECEXP (save_all, 0, num_save + 1)
= gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (Pmode, 10));
INSN_CODE (insn) = code;
actual_fsize -= alloc_stack;
- if (TARGET_DEBUG)
- fprintf (stderr, "\
-Saved %d bytes via prologue function (%d vs. %d) for function %s\n",
- save_normal_len - save_func_len,
- save_normal_len, save_func_len,
- IDENTIFIER_POINTER (DECL_NAME (current_function_decl)));
}
else
save_all = NULL_RTX;
/* Special case interrupt functions that save all registers for a call. */
if (interrupt_handler && ((1L << LINK_POINTER_REGNUM) & reg_saved) != 0)
{
- if (TARGET_V850E && ! TARGET_DISABLE_CALLT)
+ if (! TARGET_DISABLE_CALLT)
emit_insn (gen_callt_save_all_interrupt ());
else
emit_insn (gen_save_all_interrupt ());
}
else
{
+ int offset;
/* If the stack is too big, allocate it in chunks so we can do the
register saves. We use the register save size so we use the ep
register. */
if (actual_fsize > init_stack_alloc)
{
int diff = actual_fsize - init_stack_alloc;
- if (CONST_OK_FOR_K (diff))
+ if (CONST_OK_FOR_K (-diff))
emit_insn (gen_addsi3 (stack_pointer_rtx,
stack_pointer_rtx,
GEN_INT (-diff)));
expand_epilogue (void)
{
unsigned int i;
- int offset;
unsigned int size = get_frame_size ();
long reg_saved = 0;
int actual_fsize = compute_frame_size (size, &reg_saved);
- unsigned int init_stack_free = 0;
rtx restore_regs[32];
rtx restore_all;
unsigned int num_restore;
- unsigned int default_stack;
int code;
int interrupt_handler = v850_interrupt_function_p (current_function_decl);
/* Identify all of the saved registers. */
num_restore = 0;
- default_stack = 0;
- for (i = 1; i < 31; i++)
+ for (i = 1; i < 32; i++)
{
if (((1L << i) & reg_saved) != 0)
restore_regs[num_restore++] = gen_rtx_REG (Pmode, i);
}
- /* If the return pointer is saved, the helper functions also allocate
- 16 bytes of stack for arguments to be saved in. */
- if (((1L << LINK_POINTER_REGNUM) & reg_saved) != 0)
- {
- restore_regs[num_restore++] = gen_rtx_REG (Pmode, LINK_POINTER_REGNUM);
- default_stack = 16;
- }
-
/* See if we have an insn that restores the particular registers we
want to. */
restore_all = NULL_RTX;
-
+
if (TARGET_PROLOG_FUNCTION
&& num_restore > 0
- && actual_fsize >= (signed) default_stack
&& !interrupt_handler)
{
- int alloc_stack = (4 * num_restore) + default_stack;
- int unalloc_stack = actual_fsize - alloc_stack;
- int restore_func_len = 4;
+ int alloc_stack = (4 * num_restore);
+ int restore_func_len;
int restore_normal_len;
- if (unalloc_stack)
- restore_func_len += CONST_OK_FOR_J (unalloc_stack) ? 2 : 4;
-
- /* See if we would have used ep to restore the registers. */
- if (TARGET_EP && num_restore > 3 && (unsigned)actual_fsize < 255)
- restore_normal_len = (3 * 2) + (2 * num_restore);
- else
- restore_normal_len = 4 * num_restore;
-
- restore_normal_len += (CONST_OK_FOR_J (actual_fsize) ? 2 : 4) + 2;
-
/* Don't bother checking if we don't actually save any space. */
- if (restore_func_len < restore_normal_len)
+ if (use_prolog_function (num_restore, actual_fsize))
{
+ int offset;
restore_all = gen_rtx_PARALLEL (VOIDmode,
rtvec_alloc (num_restore + 2));
XVECEXP (restore_all, 0, 0) = gen_rtx_RETURN (VOIDmode);
= gen_rtx_SET (VOIDmode,
restore_regs[i],
gen_rtx_MEM (Pmode,
- plus_constant (stack_pointer_rtx,
- offset)));
+ gen_rtx_PLUS (Pmode,
+ stack_pointer_rtx,
+ GEN_INT (offset))));
offset -= 4;
}
insn = emit_jump_insn (restore_all);
INSN_CODE (insn) = code;
- if (TARGET_DEBUG)
- fprintf (stderr, "\
-Saved %d bytes via epilogue function (%d vs. %d) in function %s\n",
- restore_normal_len - restore_func_len,
- restore_normal_len, restore_func_len,
- IDENTIFIER_POINTER (DECL_NAME (current_function_decl)));
}
else
restore_all = NULL_RTX;
old fashioned way (one by one). */
if (!restore_all)
{
+ unsigned int init_stack_free;
+
/* If the stack is large, we need to cut it down in 2 pieces. */
- if (actual_fsize && !CONST_OK_FOR_K (-actual_fsize))
+ if (interrupt_handler)
+ init_stack_free = 0;
+ else if (actual_fsize && !CONST_OK_FOR_K (-actual_fsize))
init_stack_free = 4 * num_restore;
else
init_stack_free = (signed) actual_fsize;
{
int diff;
- diff = actual_fsize - ((interrupt_handler) ? 0 : init_stack_free);
+ diff = actual_fsize - init_stack_free;
if (CONST_OK_FOR_K (diff))
emit_insn (gen_addsi3 (stack_pointer_rtx,
for a call. */
if (interrupt_handler && ((1L << LINK_POINTER_REGNUM) & reg_saved) != 0)
{
- if (TARGET_V850E && ! TARGET_DISABLE_CALLT)
+ if (! TARGET_DISABLE_CALLT)
emit_insn (gen_callt_restore_all_interrupt ());
else
emit_insn (gen_restore_all_interrupt ());
else
{
/* Restore registers from the beginning of the stack frame. */
- offset = init_stack_free - 4;
+ int offset = init_stack_free - 4;
/* Restore the return pointer first. */
if (num_restore > 0
/* And return or use reti for interrupt handlers. */
if (interrupt_handler)
{
- if (TARGET_V850E && ! TARGET_DISABLE_CALLT)
+ if (! TARGET_DISABLE_CALLT)
emit_insn (gen_callt_return_interrupt ());
else
emit_jump_insn (gen_return_interrupt ());
v850_interrupt_p = FALSE;
}
-\f
/* Update the condition code from the insn. */
-
void
notice_update_cc (rtx body, rtx insn)
{
case CC_SET_ZNV:
/* Insn sets the Z,N,V flags of CC to recog_data.operand[0].
- C is in an unusable state. */
+ C is in an unusable state. */
CC_STATUS_INIT;
cc_status.flags |= CC_NO_CARRY;
cc_status.value1 = recog_data.operand[0];
break;
}
}
-\f
+
/* Retrieve the data area that has been chosen for the given decl. */
v850_data_area
pops registers off the stack and possibly releases some extra stack space
as well. The code has already verified that the RTL matches these
requirements. */
+
char *
construct_restore_jr (rtx op)
{
stack_bytes -= (count - 2) * 4;
- /* Make sure that the amount we are popping either 0 or 16 bytes. */
+ /* Make sure that the amount we are popping is zero. */
- if (stack_bytes != 0 && stack_bytes != 16)
+ if (stack_bytes != 0)
{
error ("bad amount of stack space removal: %d", stack_bytes);
return NULL;
/* Discover the last register to pop. */
if (mask & (1 << LINK_POINTER_REGNUM))
{
- gcc_assert (stack_bytes == 16);
-
last = LINK_POINTER_REGNUM;
}
else
int i;
static char buff [100]; /* XXX */
- if (count <= 2)
+ if (count <= (TARGET_LONG_CALLS ? 3 : 2))
{
error ("bogus JARL construction: %d\n", count);
return NULL;
stack_bytes += (count - (TARGET_LONG_CALLS ? 3 : 2)) * 4;
- /* Make sure that the amount we are popping either 0 or 16 bytes. */
+ /* Make sure that the amount we are pushing is zero. */
- if (stack_bytes != 0 && stack_bytes != -16)
+ if (stack_bytes != 0)
{
error ("bad amount of stack space removal: %d", stack_bytes);
return NULL;
/* Discover the last register to push. */
if (mask & (1 << LINK_POINTER_REGNUM))
{
- gcc_assert (stack_bytes == -16);
-
last = LINK_POINTER_REGNUM;
}
else
}
if (! TARGET_DISABLE_CALLT
- && (use_callt || stack_bytes == 0 || stack_bytes == 16))
+ && (use_callt || stack_bytes == 0))
{
if (use_callt)
{
if (i == 31)
sprintf (buff, "callt ctoff(__callt_return_r31c)");
else
- sprintf (buff, "callt ctoff(__callt_return_r%d_r%d%s)",
- i, (mask & (1 << 31)) ? 31 : 29, stack_bytes ? "c" : "");
+ sprintf (buff, "callt ctoff(__callt_return_r%d_r%s)",
+ i, (mask & (1 << 31)) ? "31c" : "29");
}
}
else
char *
construct_prepare_instruction (rtx op)
{
- int count = XVECLEN (op, 0);
+ int count;
int stack_bytes;
unsigned long int mask;
int i;
static char buff[ 100 ]; /* XXX */
int use_callt = 0;
- if (count <= 1)
+ if (XVECLEN (op, 0) <= 1)
{
- error ("bogus PREPEARE construction: %d", count);
+ error ("bogus PREPARE construction: %d", XVECLEN (op, 0));
return NULL;
}
stack_bytes = INTVAL (XEXP (SET_SRC (XVECEXP (op, 0, 0)), 1));
- /* Each push will put 4 bytes from the stack. */
- stack_bytes += (count - 1) * 4;
- /* Make sure that the amount we are popping
- will fit into the DISPOSE instruction. */
+ /* Make sure that the amount we are allocating
+ will fit into the PREPARE instruction. */
}
/* Now compute the bit mask of registers to push. */
+ count = 0;
mask = 0;
- for (i = 1; i < count; i++)
+ for (i = 1; i < XVECLEN (op, 0); i++)
{
rtx vector_element = XVECEXP (op, 0, i);
+ if (GET_CODE (vector_element) == CLOBBER)
+ continue;
+
gcc_assert (GET_CODE (vector_element) == SET);
gcc_assert (GET_CODE (SET_SRC (vector_element)) == REG);
gcc_assert (register_is_ok_for_epilogue (SET_SRC (vector_element),
use_callt = 1;
else
mask |= 1 << REGNO (SET_SRC (vector_element));
+ count++;
}
+ stack_bytes += count * 4;
+
if ((! TARGET_DISABLE_CALLT)
- && (use_callt || stack_bytes == 0 || stack_bytes == -16))
+ && (use_callt || stack_bytes == 0))
{
if (use_callt)
{
if (i == 31)
sprintf (buff, "callt ctoff(__callt_save_r31c)");
else
- sprintf (buff, "callt ctoff(__callt_save_r%d_r%d%s)",
- i, (mask & (1 << 31)) ? 31 : 29, stack_bytes ? "c" : "");
+ sprintf (buff, "callt ctoff(__callt_save_r%d_r%s)",
+ i, (mask & (1 << 31)) ? "31c" : "29");
}
else
{
return buff;
}
-\f
+
/* Return an RTX indicating where the return address to the
calling function can be found. */
mem = adjust_address (m_tramp, SImode, 20);
emit_move_insn (mem, fnaddr);
}
-\f
+
+static int
+v850_issue_rate (void)
+{
+ return (TARGET_V850E2_ALL ? 2 : 1);
+}
#include "gt-v850.h"
#ifndef GCC_V850_H
#define GCC_V850_H
+extern GTY(()) rtx v850_compare_op0;
+extern GTY(()) rtx v850_compare_op1;
+
/* These are defined in svr4.h but we want to override them. */
#undef LIB_SPEC
+#define LIB_SPEC "%{!shared:%{!symbolic:--start-group -lc -lgcc --end-group}}"
+
#undef ENDFILE_SPEC
#undef LINK_SPEC
#undef STARTFILE_SPEC
#define TARGET_CPU_generic 1
#define TARGET_CPU_v850e 2
-#define TARGET_CPU_v850e1 3
+#define TARGET_CPU_v850e1 3
+#define TARGET_CPU_v850e2 4
+#define TARGET_CPU_v850e2v3 5
+
#ifndef TARGET_CPU_DEFAULT
#define TARGET_CPU_DEFAULT TARGET_CPU_generic
#if TARGET_CPU_DEFAULT == TARGET_CPU_v850e1
#undef MASK_DEFAULT
-#define MASK_DEFAULT MASK_V850E /* No practical difference. */
+#define MASK_DEFAULT MASK_V850E /* No practical difference. */
+#undef SUBTARGET_ASM_SPEC
+#define SUBTARGET_ASM_SPEC "%{!mv*:-mv850e1}"
+#undef SUBTARGET_CPP_SPEC
+#define SUBTARGET_CPP_SPEC "%{!mv*:-D__v850e1__} %{mv850e1:-D__v850e1__}"
+#undef TARGET_VERSION
+#define TARGET_VERSION fprintf (stderr, " (NEC V850E1)");
+#endif
+
+#if TARGET_CPU_DEFAULT == TARGET_CPU_v850e2
+#undef MASK_DEFAULT
+#define MASK_DEFAULT MASK_V850E2
+#undef SUBTARGET_ASM_SPEC
+#define SUBTARGET_ASM_SPEC "%{!mv*:-mv850e2}"
+#undef SUBTARGET_CPP_SPEC
+#define SUBTARGET_CPP_SPEC "%{!mv*:-D__v850e2__} %{mv850e2:-D__v850e2__}"
+#undef TARGET_VERSION
+#define TARGET_VERSION fprintf (stderr, " (NEC V850E2)");
+#endif
+
+#if TARGET_CPU_DEFAULT == TARGET_CPU_v850e2v3
+#undef MASK_DEFAULT
+#define MASK_DEFAULT MASK_V850E2V3
#undef SUBTARGET_ASM_SPEC
-#define SUBTARGET_ASM_SPEC "%{!mv*:-mv850e1}"
+#define SUBTARGET_ASM_SPEC "%{!mv*:-mv850e2v3}"
#undef SUBTARGET_CPP_SPEC
-#define SUBTARGET_CPP_SPEC "%{!mv*:-D__v850e1__} %{mv850e1:-D__v850e1__}"
+#define SUBTARGET_CPP_SPEC "%{!mv*:-D__v850e2v3__} %{mv850e2v3:-D__v850e2v3__}"
#undef TARGET_VERSION
-#define TARGET_VERSION fprintf (stderr, " (NEC V850E1)");
+#define TARGET_VERSION fprintf (stderr, " (NEC V850E2V3)");
#endif
+#define TARGET_V850E2_ALL (TARGET_V850E2 || TARGET_V850E2V3)
+
#define ASM_SPEC "%{mv*:-mv%*}"
-#define CPP_SPEC "%{mv850e:-D__v850e__} %{mv850:-D__v850__} %(subtarget_cpp_spec)"
+#define CPP_SPEC "%{mv850e2v3:-D__v850e2v3__} %{mv850e2:-D__v850e2__} %{mv850e:-D__v850e__} %{mv850:-D__v850__} %(subtarget_cpp_spec)" \
+ " %{mep:-D__EP__}"
#define EXTRA_SPECS \
{ "subtarget_asm_spec", SUBTARGET_ASM_SPEC }, \
/* Names to predefine in the preprocessor for this target machine. */
#define TARGET_CPU_CPP_BUILTINS() do { \
- builtin_define( "__v851__" ); \
+ builtin_define( "__v851__" ); \
builtin_define( "__v850" ); \
builtin_assert( "machine=v850" ); \
builtin_assert( "cpu=v850" ); \
#define OPTIMIZATION_OPTIONS(LEVEL,SIZE) \
{ \
- target_flags |= MASK_STRICT_ALIGN; \
if (LEVEL) \
/* Note - we no longer enable MASK_EP when optimizing. This is \
because of a hardware bug which stops the SLD and SST instructions\
/* Define this if move instructions will actually fail to work
when given unaligned data. */
-#define STRICT_ALIGNMENT TARGET_STRICT_ALIGN
+#define STRICT_ALIGNMENT (!TARGET_NO_STRICT_ALIGN)
/* Define this as 1 if `char' should by default be signed; else as 0.
All registers that the compiler knows about must be given numbers,
even those that are not normally considered general registers. */
-#define FIRST_PSEUDO_REGISTER 34
+#define FIRST_PSEUDO_REGISTER 36
/* 1 for registers that have pervasive standard uses
and are not available for the register allocator. */
#define FIXED_REGISTERS \
- { 1, 1, 0, 1, 1, 0, 0, 0, \
+ { 1, 1, 1, 1, 1, 1, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 1, 0, \
+ 1, 1, \
1, 1}
/* 1 for registers not available across function calls.
like. */
#define CALL_USED_REGISTERS \
- { 1, 1, 0, 1, 1, 1, 1, 1, \
+ { 1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 1, 1, 1, 1, \
1, 1, 1, 1, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 1, 1, \
+ 1, 1, \
1, 1}
/* List the order in which to allocate registers. Each register must be
6, 7, 8, 9, 31, /* argument registers */ \
29, 28, 27, 26, 25, 24, 23, 22, /* saved registers */ \
21, 20, 2, \
- 0, 1, 3, 4, 5, 30, 32, 33 /* fixed registers */ \
+ 0, 1, 3, 4, 5, 30, 32, 33, /* fixed registers */ \
+ 34, 35 \
}
/* If TARGET_APP_REGS is not defined then add r2 and r5 to
the pool of fixed registers. See PR 14505. */
-#define CONDITIONAL_REGISTER_USAGE \
-{ \
- if (!TARGET_APP_REGS) \
- { \
- fixed_regs[2] = 1; call_used_regs[2] = 1; \
- fixed_regs[5] = 1; call_used_regs[5] = 1; \
- } \
-}
+#define CONDITIONAL_REGISTER_USAGE \
+{ \
+ if (TARGET_APP_REGS) \
+ { \
+ fixed_regs[2] = 0; call_used_regs[2] = 0; \
+ fixed_regs[5] = 0; call_used_regs[5] = 1; \
+ } \
+}
+
/* Return number of consecutive hard regs needed starting at reg REGNO
to hold something of mode MODE.
MODE. */
#define HARD_REGNO_MODE_OK(REGNO, MODE) \
- ((((REGNO) & 1) == 0) || (GET_MODE_SIZE (MODE) <= 4))
+ ((GET_MODE_SIZE (MODE) <= 4) || (((REGNO) & 1) == 0 && (REGNO) != 0))
/* Value is 1 if it is a good idea to tie two pseudo registers
when one has mode MODE1 and one has mode MODE2.
enum reg_class
{
- NO_REGS, GENERAL_REGS, ALL_REGS, LIM_REG_CLASSES
+ NO_REGS, GENERAL_REGS, EVEN_REGS, ALL_REGS, LIM_REG_CLASSES
};
#define N_REG_CLASSES (int) LIM_REG_CLASSES
/* Give names of register classes as strings for dump file. */
#define REG_CLASS_NAMES \
-{ "NO_REGS", "GENERAL_REGS", "ALL_REGS", "LIM_REGS" }
+{ "NO_REGS", "GENERAL_REGS", "EVEN_REGS", "ALL_REGS", "LIM_REGS" }
/* Define which registers fit in which classes.
This is an initializer for a vector of HARD_REG_SET
of length N_REG_CLASSES. */
-#define REG_CLASS_CONTENTS \
-{ \
- { 0x00000000 }, /* NO_REGS */ \
- { 0xffffffff }, /* GENERAL_REGS */ \
- { 0xffffffff }, /* ALL_REGS */ \
+#define REG_CLASS_CONTENTS \
+{ \
+ { 0x00000000,0x0 }, /* NO_REGS */ \
+ { 0xffffffff,0x0 }, /* GENERAL_REGS */ \
+ { 0x55555554,0x0 }, /* EVEN_REGS */ \
+ { 0xffffffff,0x0 }, /* ALL_REGS */ \
}
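A quick sanity sketch of the new EVEN_REGS contents: the mask 0x55555554 selects the even-numbered general registers r2..r30 (r0 excluded), which is what the even-register class introduced for the new double-word operands needs. The helper below is illustrative only.

```c
#include <assert.h>
#include <stdbool.h>

/* True when REGNO is in the EVEN_REGS class mask 0x55555554
   (bit N of the mask corresponds to hard register N).  */
static bool in_even_regs (int regno)
{
  return (0x55555554UL >> regno) & 1;
}
```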
/* The same information, inverted:
reg number REGNO. This could be a conditional expression
or could index an array. */
-#define REGNO_REG_CLASS(REGNO) GENERAL_REGS
+#define REGNO_REG_CLASS(REGNO) (((REGNO) == CC_REGNUM || (REGNO) == FCC_REGNUM) ? NO_REGS : GENERAL_REGS)
/* The class value for index registers, and the one for base regs. */
/* Get reg_class from a letter such as appears in the machine description. */
-#define REG_CLASS_FROM_LETTER(C) (NO_REGS)
+#define REG_CLASS_FROM_LETTER(C) \
+ ((C) == 'e' ? EVEN_REGS : NO_REGS)
/* Macros to check register numbers against specific register classes. */
Since they use reg_renumber, they are safe only once reg_renumber
has been allocated, which happens in local-alloc.c. */
-#define REGNO_OK_FOR_BASE_P(regno) \
- ((regno) < FIRST_PSEUDO_REGISTER || reg_renumber[regno] >= 0)
+#define REGNO_OK_FOR_BASE_P(regno) \
+ (((regno) < FIRST_PSEUDO_REGISTER \
+ && (regno) != CC_REGNUM \
+ && (regno) != FCC_REGNUM) \
+ || reg_renumber[regno] >= 0)
#define REGNO_OK_FOR_INDEX_P(regno) 0
The values of these macros are register numbers. */
/* Register to use for pushing function arguments. */
-#define STACK_POINTER_REGNUM 3
+#define STACK_POINTER_REGNUM SP_REGNUM
/* Base register for access to local variables of the function. */
-#define FRAME_POINTER_REGNUM 32
+#define FRAME_POINTER_REGNUM 34
/* Register containing return address from latest function call. */
-#define LINK_POINTER_REGNUM 31
+#define LINK_POINTER_REGNUM LP_REGNUM
/* On some machines the offset between the frame pointer and starting
offset of the automatic variables is not known until after register
#define HARD_FRAME_POINTER_REGNUM 29
/* Base register for access to arguments of the function. */
-#define ARG_POINTER_REGNUM 33
+#define ARG_POINTER_REGNUM 35
/* Register in which static-chain is passed to a function. */
#define STATIC_CHAIN_REGNUM 20
/* Update the data in CUM to advance over an argument
of mode MODE and data type TYPE.
(TYPE is null for libcalls where that information may not be available.) */
-
-#define FUNCTION_ARG_ADVANCE(CUM, MODE, TYPE, NAMED) \
- ((CUM).nbytes += ((MODE) != BLKmode \
- ? (GET_MODE_SIZE (MODE) + UNITS_PER_WORD - 1) & -UNITS_PER_WORD \
- : (int_size_in_bytes (TYPE) + UNITS_PER_WORD - 1) & -UNITS_PER_WORD))
+#define FUNCTION_ARG_ADVANCE(CUM, MODE, TYPE, NAMED) \
+ ((CUM).nbytes += \
+ ((((TYPE) && int_size_in_bytes (TYPE) > 8) \
+ ? GET_MODE_SIZE (Pmode) \
+ : ((MODE) != BLKmode \
+ ? GET_MODE_SIZE ((MODE)) \
+ : int_size_in_bytes ((TYPE)))) \
+ + UNITS_PER_WORD - 1) & -UNITS_PER_WORD)
/* When a parameter is passed in a register, stack space is still
allocated for it. */
-#define REG_PARM_STACK_SPACE(DECL) (!TARGET_GHS ? 16 : 0)
-
-/* Define this if the above stack space is to be considered part of the
- space allocated by the caller. */
-#define OUTGOING_REG_PARM_STACK_SPACE(FNTYPE) 1
+#define REG_PARM_STACK_SPACE(DECL) 0
/* 1 if N is a possible register number for function argument passing. */
&& SYMBOL_REF_ZDA_P (OP)) \
|| (GET_CODE (OP) == CONST \
&& GET_CODE (XEXP (OP, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (OP, 0), 0)) == SYMBOL_REF \
+ && GET_CODE (XEXP (XEXP (OP, 0), 0)) == SYMBOL_REF\
&& SYMBOL_REF_ZDA_P (XEXP (XEXP (OP, 0), 0)))) \
+ : (C) == 'W' ? (GET_CODE (OP) == CONST_INT \
+ && ((unsigned)(INTVAL (OP)) >= 0x8000) \
+ && ((unsigned)(INTVAL (OP)) < 0x400000)) \
: 0)
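The new 'W' letter above accepts displacements that need the 23-bit load/store forms: at least 0x8000 (so the signed 16-bit short encodings no longer reach) but below 0x400000. A direct restatement of the check, for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirror of the 'W' constraint test: the value, viewed as unsigned,
   must be >= 0x8000 and < 0x400000 (23-bit positive displacement
   beyond the 16-bit range).  */
static bool const_ok_for_w (long val)
{
  unsigned long u = (unsigned long) val;
  return u >= 0x8000UL && u < 0x400000UL;
}
```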
\f
/* GO_IF_LEGITIMATE_ADDRESS recognizes an RTL expression
goto ADDR; \
if (GET_CODE (X) == PLUS \
&& RTX_OK_FOR_BASE_P (XEXP (X, 0)) \
- && CONSTANT_ADDRESS_P (XEXP (X, 1)) \
+ && (GET_CODE (XEXP (X, 1)) == CONST_INT && CONST_OK_FOR_K (INTVAL (XEXP (X, 1)) + GET_MODE_NUNITS (MODE) * UNITS_PER_WORD)) \
&& ((MODE == QImode || INTVAL (XEXP (X, 1)) % 2 == 0) \
&& CONST_OK_FOR_K (INTVAL (XEXP (X, 1)) \
+ (GET_MODE_NUNITS (MODE) * UNITS_PER_WORD)))) \
&& GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF \
&& GET_CODE (XEXP (XEXP (X, 0), 1)) == CONST_INT \
&& ! CONST_OK_FOR_K (INTVAL (XEXP (XEXP (X, 0), 1)))))
-\f
+
+/* Given a comparison code (EQ, NE, etc.) and the first operand of a COMPARE,
+ return the mode to be used for the comparison.
+
+ For floating-point equality comparisons, CCFPEQmode should be used.
+ VOIDmode should be used in all other cases.
+
+ For integer comparisons against zero, reduce to CCNOmode or CCZmode if
+ possible, to allow for more combinations. */
+
+#define SELECT_CC_MODE(OP, X, Y) v850_select_cc_mode (OP, X, Y)
+
/* Tell final.c how to eliminate redundant test instructions. */
/* Here we define machine-dependent flags and fields in cc_status
/* How to refer to registers in assembler output.
This sequence is indexed by compiler's hard-register-number (see above). */
-#define REGISTER_NAMES \
-{ "r0", "r1", "r2", "sp", "gp", "r5", "r6" , "r7", \
- "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15", \
- "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23", \
- "r24", "r25", "r26", "r27", "r28", "r29", "ep", "r31", \
+#define REGISTER_NAMES \
+{ "r0", "r1", "r2", "sp", "gp", "r5", "r6" , "r7", \
+ "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15", \
+ "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23", \
+ "r24", "r25", "r26", "r27", "r28", "r29", "ep", "r31", \
+ "psw", "fcc", \
".fp", ".ap"}
-#define ADDITIONAL_REGISTER_NAMES \
-{ { "zero", 0 }, \
- { "hp", 2 }, \
- { "r3", 3 }, \
- { "r4", 4 }, \
- { "tp", 5 }, \
- { "fp", 29 }, \
- { "r30", 30 }, \
- { "lp", 31} }
+/* Register numbers */
+
+#define ADDITIONAL_REGISTER_NAMES \
+{ { "zero", ZERO_REGNUM }, \
+ { "hp", 2 }, \
+ { "r3", 3 }, \
+ { "r4", 4 }, \
+ { "tp", 5 }, \
+ { "fp", 29 }, \
+ { "r30", 30 }, \
+ { "lp", LP_REGNUM} }
#define ASM_OUTPUT_REG_PUSH(FILE,REGNO)
#define ASM_OUTPUT_REG_POP(FILE,REGNO)
/* Disable the shift, which is for the currently disabled "switch"
opcode. Se casesi in v850.md. */
+
#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t%s %s.L%d-.L%d%s\n", \
(TARGET_BIG_SWITCH ? ".long" : ".short"), \
- (0 && ! TARGET_BIG_SWITCH && TARGET_V850E ? "(" : ""), \
+ (0 && ! TARGET_BIG_SWITCH && (TARGET_V850E || TARGET_V850E2_ALL) ? "(" : ""), \
VALUE, REL, \
- (0 && ! TARGET_BIG_SWITCH && TARGET_V850E ? ")>>1" : ""))
+ (0 && ! TARGET_BIG_SWITCH && (TARGET_V850E || TARGET_V850E2_ALL) ? ")>>1" : ""))
#define ASM_OUTPUT_ALIGN(FILE, LOG) \
if ((LOG) != 0) \
/* The switch instruction requires that the jump table immediately follow
it. */
-#define JUMP_TABLES_IN_TEXT_SECTION 1
+#define JUMP_TABLES_IN_TEXT_SECTION (!TARGET_JUMP_TABLES_IN_DATA_SECTION)
/* svr4.h defines this assuming that 4 byte alignment is required. */
#undef ASM_OUTPUT_BEFORE_CASE_LABEL
#define TARGET_ASM_INIT_SECTIONS v850_asm_init_sections
#endif /* ! GCC_V850_H */
;; The size of instructions in bytes.
+;;---------------------------------------------------------------------------
+;; Constants
+
+;;
+(define_constants
+ [(ZERO_REGNUM 0) ; constant zero
+ (SP_REGNUM 3) ; Stack Pointer
+ (GP_REGNUM 4) ; GP Pointer
+ (EP_REGNUM 30) ; EP pointer
+ (LP_REGNUM 31) ; Return address register
+ (CC_REGNUM 32) ; Condition code pseudo register
+ (FCC_REGNUM 33) ; Floating Condition code pseudo register
+ ]
+)
+
(define_attr "length" ""
(const_int 4))
;; Types of instructions (for scheduling purposes).
-(define_attr "type" "load,mult,other"
+(define_attr "type" "load,store,bit1,mult,macc,div,fpu,single,other"
(const_string "other"))
+(define_attr "cpu" "none,v850,v850e,v850e1,v850e2,v850e2v3"
+ (cond [(ne (symbol_ref "TARGET_V850") (const_int 0))
+ (const_string "v850")
+ (ne (symbol_ref "TARGET_V850E") (const_int 0))
+ (const_string "v850e")
+ (ne (symbol_ref "TARGET_V850E1") (const_int 0))
+ (const_string "v850e1")
+ (ne (symbol_ref "TARGET_V850E2") (const_int 0))
+ (const_string "v850e2")
+ (ne (symbol_ref "TARGET_V850E2V3") (const_int 0))
+ (const_string "v850e2v3")]
+ (const_string "none")))
+
;; Condition code settings.
;; none - insn does not affect cc
;; none_0hit - insn does not affect cc but it does modify operand 0
;; set_zn - sets z,n to usable values; v,c is unknown.
;; compare - compare instruction
;; clobber - value of cc is unknown
-(define_attr "cc" "none,none_0hit,set_zn,set_znv,compare,clobber"
+(define_attr "cc" "none,none_0hit,set_z,set_zn,set_znv,compare,clobber"
(const_string "clobber"))
\f
;; Function units for the V850. As best as I can tell, there's
;; ----------------------------------------------------------------------
;; MOVE INSTRUCTIONS
;; ----------------------------------------------------------------------
+(define_insn "sign23byte_load"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (sign_extend:SI
+ (mem:QI (plus:SI (match_operand:SI 1 "register_operand" "r")
+ (match_operand 2 "disp23_operand" "W")))))]
+ "TARGET_V850E2V3"
+ "ld.b %2[%1],%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "unsign23byte_load"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (zero_extend:SI
+ (mem:QI (plus:SI (match_operand:SI 1 "register_operand" "r")
+ (match_operand 2 "disp23_operand" "W")))))]
+ "TARGET_V850E2V3"
+ "ld.bu %2[%1],%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "sign23hword_load"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (sign_extend:SI
+ (mem:HI (plus:SI (match_operand:SI 1 "register_operand" "r")
+ (match_operand 2 "disp23_operand" "W")))))]
+ "TARGET_V850E2V3"
+ "ld.h %2[%1],%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "unsign23hword_load"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (zero_extend:SI
+ (mem:HI (plus:SI (match_operand:SI 1 "register_operand" "r")
+ (match_operand 2 "disp23_operand" "W")))))]
+ "TARGET_V850E2V3"
+ "ld.hu %2[%1],%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+(define_insn "23word_load"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (mem:SI (plus:SI (match_operand:SI 1 "register_operand" "r")
+ (match_operand 2 "disp23_operand" "W"))))]
+ "TARGET_V850E2V3"
+ "ld.w %2[%1],%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "23byte_store"
+ [(set (mem:QI (plus:SI (match_operand:SI 0 "register_operand" "r")
+ (match_operand 1 "disp23_operand" "W")))
+ (match_operand:QI 2 "register_operand" "r"))]
+ "TARGET_V850E2V3"
+ "st.b %2,%1[%0]"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "23hword_store"
+ [(set (mem:HI (plus:SI (match_operand:SI 0 "register_operand" "r")
+ (match_operand 1 "disp23_operand" "W")))
+ (match_operand:HI 2 "register_operand" "r"))]
+ "TARGET_V850E2V3"
+ "st.h %2,%1[%0]"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "23word_store"
+ [(set (mem:SI (plus:SI (match_operand:SI 0 "register_operand" "r")
+ (match_operand 1 "disp23_operand" "W")))
+ (match_operand:SI 2 "register_operand" "r"))]
+ "TARGET_V850E2V3"
+ "st.w %2,%1[%0]"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
;; movqi
(define_expand "movqi"
"* return output_move_single (operands);"
[(set_attr "length" "2,4,2,2,4,4,4")
(set_attr "cc" "none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit")
- (set_attr "type" "other,other,load,other,load,other,other")])
+ (set_attr "type" "other,other,load,other,load,store,store")])
;; movhi
"* return output_move_single (operands);"
[(set_attr "length" "2,4,2,2,4,4,4")
(set_attr "cc" "none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit")
- (set_attr "type" "other,other,load,other,load,other,other")])
+ (set_attr "type" "other,other,load,other,load,store,store")])
;; movsi and helpers
must be done with HIGH & LO_SUM patterns. */
if (CONSTANT_P (operands[1])
&& GET_CODE (operands[1]) != HIGH
- && ! TARGET_V850E
+ && ! (TARGET_V850E || TARGET_V850E2_ALL)
&& !special_symbolref_operand (operands[1], VOIDmode)
&& !(GET_CODE (operands[1]) == CONST_INT
&& (CONST_OK_FOR_J (INTVAL (operands[1]))
(define_insn "*movsi_internal_v850e"
[(set (match_operand:SI 0 "general_operand" "=r,r,r,r,Q,r,r,m,m,r")
(match_operand:SI 1 "general_operand" "Jr,K,L,Q,Ir,m,R,r,I,i"))]
- "TARGET_V850E
+ "(TARGET_V850E || TARGET_V850E2_ALL)
&& (register_operand (operands[0], SImode)
|| reg_or_0_operand (operands[1], SImode))"
"* return output_move_single (operands);"
[(set_attr "length" "2,4,4,2,2,4,4,4,4,6")
(set_attr "cc" "none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit")
- (set_attr "type" "other,other,other,load,other,load,other,other,other,other")])
+ (set_attr "type" "other,other,other,load,other,load,other,store,store,other")])
(define_insn "*movsi_internal"
[(set (match_operand:SI 0 "general_operand" "=r,r,r,r,Q,r,r,m,m")
"* return output_move_single (operands);"
[(set_attr "length" "2,4,4,2,2,4,4,4,4")
(set_attr "cc" "none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit")
- (set_attr "type" "other,other,other,load,other,load,other,other,other")])
+ (set_attr "type" "other,other,other,load,other,load,store,store,other")])
(define_insn "*movsf_internal"
[(set (match_operand:SF 0 "general_operand" "=r,r,r,r,r,Q,r,m,m,r")
"* return output_move_single (operands);"
[(set_attr "length" "2,4,4,8,2,2,4,4,4,8")
(set_attr "cc" "none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit,none_0hit")
- (set_attr "type" "other,other,other,other,load,other,load,other,other,other")])
+ (set_attr "type" "other,other,other,other,load,other,load,store,store,other")])
-\f
;; ----------------------------------------------------------------------
;; TEST INSTRUCTIONS
;; ----------------------------------------------------------------------
(const_int 0)]))]
"")
-(define_insn "*cmpsi"
+(define_expand "cmpsi"
[(set (cc0)
- (compare (match_operand:SI 0 "register_operand" "r,r,r")
- (match_operand:SI 1 "reg_or_int5_operand" "r,I,J")))]
+ (compare (match_operand:SI 0 "register_operand" "r,r")
+ (match_operand:SI 1 "reg_or_int5_operand" "r,J")))]
+ ""
+ "
+{
+ v850_compare_op0 = operands[0];
+ v850_compare_op1 = operands[1];
+ DONE;
+}")
+
+(define_insn "cmpsi_insn"
+ [(set (cc0)
+ (compare (match_operand:SI 0 "register_operand" "r,r")
+ (match_operand:SI 1 "reg_or_int5_operand" "r,J")))]
""
"@
cmp %1,%0
- cmp %.,%0
cmp %1,%0"
- [(set_attr "length" "2,2,2")
- (set_attr "cc" "compare,set_znv,compare")])
+ [(set_attr "length" "2,2")
+ (set_attr "cc" "compare")])
+
+(define_expand "cmpsf"
+ [(set (reg:CC CC_REGNUM)
+ (compare (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "
+{
+ v850_compare_op0 = operands[0];
+ v850_compare_op1 = operands[1];
+ DONE;
+}")
+
+(define_expand "cmpdf"
+ [(set (reg:CC CC_REGNUM)
+ (compare (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "
+{
+ v850_compare_op0 = operands[0];
+ v850_compare_op1 = operands[1];
+ DONE;
+}")
-\f
;; ----------------------------------------------------------------------
;; ADD INSTRUCTIONS
;; ----------------------------------------------------------------------
(define_insn "addsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
(plus:SI (match_operand:SI 1 "register_operand" "%0,r,r")
- (match_operand:SI 2 "nonmemory_operand" "rJ,K,U")))]
+ (match_operand:SI 2 "nonmemory_operand" "rJ,K,U")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
add %2,%0
(define_insn "subsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r")
(minus:SI (match_operand:SI 1 "register_operand" "0,r")
- (match_operand:SI 2 "register_operand" "r,0")))]
+ (match_operand:SI 2 "register_operand" "r,0")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
sub %2,%0
subr %1,%0"
[(set_attr "length" "2,2")
- (set_attr "cc" "set_zn")])
+ (set_attr "cc" "set_zn,set_zn")])
(define_insn "negsi2"
[(set (match_operand:SI 0 "register_operand" "=r")
- (neg:SI (match_operand:SI 1 "register_operand" "0")))]
+ (neg:SI (match_operand:SI 1 "register_operand" "0")))
+ (clobber (reg:CC CC_REGNUM))]
""
"subr %.,%0"
[(set_attr "length" "2")
[(set (match_operand:SI 0 "register_operand" "=r")
(mult:SI (match_operand:SI 1 "register_operand" "%0")
(match_operand:SI 2 "reg_or_int9_operand" "rO")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"mul %2,%1,%."
[(set_attr "length" "4")
(set_attr "cc" "none_0hit")
(match_operand:SI 2 "register_operand" "r")))
(set (match_operand:SI 3 "register_operand" "=r")
(mod:SI (match_dup 1)
- (match_dup 2)))]
+ (match_dup 2)))
+ (clobber (reg:CC CC_REGNUM))]
"TARGET_V850E"
"div %2,%0,%3"
[(set_attr "length" "4")
(set_attr "cc" "clobber")
- (set_attr "type" "other")])
+ (set_attr "type" "div")])
(define_insn "udivmodsi4"
[(set (match_operand:SI 0 "register_operand" "=r")
(match_operand:SI 2 "register_operand" "r")))
(set (match_operand:SI 3 "register_operand" "=r")
(umod:SI (match_dup 1)
- (match_dup 2)))]
+ (match_dup 2)))
+ (clobber (reg:CC CC_REGNUM))]
"TARGET_V850E"
"divu %2,%0,%3"
[(set_attr "length" "4")
(set_attr "cc" "clobber")
- (set_attr "type" "other")])
+ (set_attr "type" "div")])
;; ??? There is a 2 byte instruction for generating only the quotient.
;; However, it isn't clear how to compute the length field correctly.
(match_operand:HI 2 "register_operand" "r")))
(set (match_operand:HI 3 "register_operand" "=r")
(mod:HI (match_dup 1)
- (match_dup 2)))]
+ (match_dup 2)))
+ (clobber (reg:CC CC_REGNUM))]
"TARGET_V850E"
"divh %2,%0,%3"
[(set_attr "length" "4")
(set_attr "cc" "clobber")
- (set_attr "type" "other")])
+ (set_attr "type" "div")])
;; Half-words are sign-extended by default, so we must zero extend to a word
;; here before doing the divide.
(match_operand:HI 2 "register_operand" "r")))
(set (match_operand:HI 3 "register_operand" "=r")
(umod:HI (match_dup 1)
- (match_dup 2)))]
+ (match_dup 2)))
+ (clobber (reg:CC CC_REGNUM))]
"TARGET_V850E"
"zxh %0 ; divhu %2,%0,%3"
[(set_attr "length" "4")
(set_attr "cc" "clobber")
- (set_attr "type" "other")])
+ (set_attr "type" "div")])
\f
;; ----------------------------------------------------------------------
;; AND INSTRUCTIONS
[(set (match_operand:QI 0 "memory_operand" "=m")
(subreg:QI
(and:SI (subreg:SI (match_dup 0) 0)
- (match_operand:QI 1 "not_power_of_two_operand" "")) 0))]
+ (match_operand:QI 1 "not_power_of_two_operand" "")) 0))
+ (clobber (reg:CC CC_REGNUM))]
""
"*
{
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_clr1_2"
[(set (match_operand:HI 0 "indirect_operand" "=m")
(subreg:HI
(and:SI (subreg:SI (match_dup 0) 0)
- (match_operand:HI 1 "not_power_of_two_operand" "")) 0))]
+ (match_operand:HI 1 "not_power_of_two_operand" "")) 0))
+ (clobber (reg:CC CC_REGNUM))]
""
"*
{
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_clr1_3"
[(set (match_operand:SI 0 "indirect_operand" "=m")
(and:SI (match_dup 0)
- (match_operand:SI 1 "not_power_of_two_operand" "")))]
+ (match_operand:SI 1 "not_power_of_two_operand" "")))
+ (clobber (reg:CC CC_REGNUM))]
""
"*
{
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "andsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
(and:SI (match_operand:SI 1 "register_operand" "%0,0,r")
- (match_operand:SI 2 "nonmemory_operand" "r,I,M")))]
+ (match_operand:SI 2 "nonmemory_operand" "r,I,M")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
and %2,%0
and %.,%0
andi %2,%1,%0"
[(set_attr "length" "2,2,4")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
;; ----------------------------------------------------------------------
;; OR INSTRUCTIONS
(define_insn "*v850_set1_1"
[(set (match_operand:QI 0 "memory_operand" "=m")
(subreg:QI (ior:SI (subreg:SI (match_dup 0) 0)
- (match_operand 1 "power_of_two_operand" "")) 0))]
+ (match_operand 1 "power_of_two_operand" "")) 0))
+ (clobber (reg:CC CC_REGNUM))]
""
"set1 %M1,%0"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_set1_2"
[(set (match_operand:HI 0 "indirect_operand" "=m")
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_set1_3"
[(set (match_operand:SI 0 "indirect_operand" "=m")
(ior:SI (match_dup 0)
- (match_operand 1 "power_of_two_operand" "")))]
+ (match_operand 1 "power_of_two_operand" "")))
+ (clobber (reg:CC CC_REGNUM))]
""
"*
{
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "iorsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
(ior:SI (match_operand:SI 1 "register_operand" "%0,0,r")
- (match_operand:SI 2 "nonmemory_operand" "r,I,M")))]
+ (match_operand:SI 2 "nonmemory_operand" "r,I,M")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
or %2,%0
or %.,%0
ori %2,%1,%0"
[(set_attr "length" "2,2,4")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
;; ----------------------------------------------------------------------
;; XOR INSTRUCTIONS
(define_insn "*v850_not1_1"
[(set (match_operand:QI 0 "memory_operand" "=m")
(subreg:QI (xor:SI (subreg:SI (match_dup 0) 0)
- (match_operand 1 "power_of_two_operand" "")) 0))]
+ (match_operand 1 "power_of_two_operand" "")) 0))
+ (clobber (reg:CC CC_REGNUM))]
""
"not1 %M1,%0"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_not1_2"
[(set (match_operand:HI 0 "indirect_operand" "=m")
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "*v850_not1_3"
[(set (match_operand:SI 0 "indirect_operand" "=m")
(xor:SI (match_dup 0)
- (match_operand 1 "power_of_two_operand" "")))]
+ (match_operand 1 "power_of_two_operand" "")))
+ (clobber (reg:CC CC_REGNUM))]
""
"*
{
return \"\";
}"
[(set_attr "length" "4")
- (set_attr "cc" "clobber")])
+ (set_attr "cc" "clobber")
+ (set_attr "type" "bit1")])
(define_insn "xorsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
(xor:SI (match_operand:SI 1 "register_operand" "%0,0,r")
- (match_operand:SI 2 "nonmemory_operand" "r,I,M")))]
+ (match_operand:SI 2 "nonmemory_operand" "r,I,M")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
xor %2,%0
xor %.,%0
xori %2,%1,%0"
[(set_attr "length" "2,2,4")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
\f
;; ----------------------------------------------------------------------
;; NOT INSTRUCTIONS
(define_insn "one_cmplsi2"
[(set (match_operand:SI 0 "register_operand" "=r")
- (not:SI (match_operand:SI 1 "register_operand" "r")))]
+ (not:SI (match_operand:SI 1 "register_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
""
"not %1,%0"
[(set_attr "length" "2")
- (set_attr "cc" "set_znv")])
-\f
+ (set_attr "cc" "set_zn")])
+
;; -----------------------------------------------------------------
;; BIT FIELDS
;; -----------------------------------------------------------------
[(set_attr "length" "4")
(set_attr "cc" "none_0hit")])
+
+(define_insn "setf_insn"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (match_operator:SI 1 "comparison_operator"
+ [(reg:CC CC_REGNUM) (const_int 0)]))]
+ ""
+ "setf %b1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "set_z_insn"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (match_operand 1 "v850_float_z_comparison_operator" ""))]
+ "TARGET_V850E2V3"
+ "setf z,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
+(define_insn "set_nz_insn"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (match_operand 1 "v850_float_nz_comparison_operator" ""))]
+ "TARGET_V850E2V3"
+ "setf nz,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")])
+
;; ----------------------------------------------------------------------
;; CONDITIONAL MOVE INSTRUCTIONS
;; ----------------------------------------------------------------------
(match_operand 1 "comparison_operator")
(match_operand:SI 2 "reg_or_const_operand" "rJ")
(match_operand:SI 3 "reg_or_const_operand" "rI")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"
{
if ( (GET_CODE (operands[2]) == CONST_INT
;; condition codes may have already been set by an earlier instruction,
;; but we have no code here to avoid the compare if it is unnecessary.
+(define_insn "movsicc_normal_cc"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (if_then_else:SI
+ (match_operator 1 "comparison_operator"
+ [(reg:CC CC_REGNUM) (const_int 0)])
+ (match_operand:SI 2 "reg_or_int5_operand" "rJ")
+ (match_operand:SI 3 "reg_or_0_operand" "rI")))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
+ "cmov %c1,%2,%z3,%0"
+ [(set_attr "length" "6")
+ (set_attr "cc" "compare")])
+
+(define_insn "movsicc_reversed_cc"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (if_then_else:SI
+ (match_operator 1 "comparison_operator"
+ [(reg:CC CC_REGNUM) (const_int 0)])
+ (match_operand:SI 2 "reg_or_0_operand" "rI")
+ (match_operand:SI 3 "reg_or_int5_operand" "rJ")))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
+ "cmov %C1,%3,%z2,%0"
+ [(set_attr "length" "6")
+ (set_attr "cc" "compare")])
+
(define_insn "*movsicc_normal"
[(set (match_operand:SI 0 "register_operand" "=r")
(if_then_else:SI
(match_operand:SI 5 "reg_or_int5_operand" "rJ")])
(match_operand:SI 2 "reg_or_int5_operand" "rJ")
(match_operand:SI 3 "reg_or_0_operand" "rI")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"cmp %5,%4 ; cmov %c1,%2,%z3,%0"
[(set_attr "length" "6")
(set_attr "cc" "clobber")])
(match_operand:SI 5 "reg_or_int5_operand" "rJ")])
(match_operand:SI 2 "reg_or_0_operand" "rI")
(match_operand:SI 3 "reg_or_int5_operand" "rJ")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"cmp %5,%4 ; cmov %C1,%3,%z2,%0"
[(set_attr "length" "6")
(set_attr "cc" "clobber")])
(const_int 0)])
(match_operand:SI 4 "reg_or_int5_operand" "rJ")
(match_operand:SI 5 "reg_or_0_operand" "rI")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"tst1 %3,%2 ; cmov %c1,%4,%z5,%0"
[(set_attr "length" "8")
(set_attr "cc" "clobber")])
(const_int 0)])
(match_operand:SI 4 "reg_or_0_operand" "rI")
(match_operand:SI 5 "reg_or_int5_operand" "rJ")))]
- "TARGET_V850E"
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"tst1 %3,%2 ; cmov %C1,%5,%z4,%0"
[(set_attr "length" "8")
(set_attr "cc" "clobber")])
;; second pattern by subsequent combining. As above, we must include the
;; comparison to avoid input reloads in an insn using cc0.
-(define_insn "*sasf_1"
- [(set (match_operand:SI 0 "register_operand" "")
- (ior:SI (match_operator 1 "comparison_operator" [(cc0) (const_int 0)])
- (ashift:SI (match_operand:SI 2 "register_operand" "")
- (const_int 1))))]
- "TARGET_V850E"
- "* gcc_unreachable ();")
-
-(define_insn "*sasf_2"
+(define_insn "*sasf"
[(set (match_operand:SI 0 "register_operand" "=r")
(ior:SI
(match_operator 1 "comparison_operator"
[(match_operand:SI 3 "register_operand" "r")
(match_operand:SI 4 "reg_or_int5_operand" "rJ")])
(ashift:SI (match_operand:SI 2 "register_operand" "0")
- (const_int 1))))]
- "TARGET_V850E"
+ (const_int 1))))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"cmp %4,%3 ; sasf %c1,%0"
[(set_attr "length" "6")
(set_attr "cc" "clobber")])
[(match_operand:SI 4 "register_operand" "")
(match_operand:SI 5 "reg_or_int5_operand" "")])
(match_operand:SI 2 "const_int_operand" "")
- (match_operand:SI 3 "const_int_operand" "")))]
- "TARGET_V850E
+ (match_operand:SI 3 "const_int_operand" "")))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)
&& ((INTVAL (operands[2]) ^ INTVAL (operands[3])) == 1)
&& ((INTVAL (operands[2]) + INTVAL (operands[3])) != 1)
&& (GET_CODE (operands[5]) == CONST_INT
|| REGNO (operands[0]) != REGNO (operands[5]))
&& REGNO (operands[0]) != REGNO (operands[4])"
[(set (match_dup 0) (match_dup 6))
- (set (match_dup 0)
- (ior:SI (match_op_dup 7 [(match_dup 4) (match_dup 5)])
- (ashift:SI (match_dup 0) (const_int 1))))]
+ (parallel [(set (match_dup 0)
+ (ior:SI (match_op_dup 7 [(match_dup 4) (match_dup 5)])
+ (ashift:SI (match_dup 0) (const_int 1))))
+ (clobber (reg:CC CC_REGNUM))])]
"
{
operands[6] = GEN_INT (INTVAL (operands[2]) >> 1);
GET_MODE (operands[1]),
XEXP (operands[1], 0), XEXP (operands[1], 1));
}")
+
;; ---------------------------------------------------------------------
;; BYTE SWAP INSTRUCTIONS
;; ---------------------------------------------------------------------
-
(define_expand "rotlhi3"
- [(set (match_operand:HI 0 "register_operand" "")
- (rotate:HI (match_operand:HI 1 "register_operand" "")
- (match_operand:HI 2 "const_int_operand" "")))]
- "TARGET_V850E"
+ [(parallel [(set (match_operand:HI 0 "register_operand" "")
+ (rotate:HI (match_operand:HI 1 "register_operand" "")
+ (match_operand:HI 2 "const_int_operand" "")))
+ (clobber (reg:CC CC_REGNUM))])]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"
{
if (INTVAL (operands[2]) != 8)
(define_insn "*rotlhi3_8"
[(set (match_operand:HI 0 "register_operand" "=r")
(rotate:HI (match_operand:HI 1 "register_operand" "r")
- (const_int 8)))]
- "TARGET_V850E"
+ (const_int 8)))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"bsh %1,%0"
[(set_attr "length" "4")
(set_attr "cc" "clobber")])
(define_expand "rotlsi3"
- [(set (match_operand:SI 0 "register_operand" "")
- (rotate:SI (match_operand:SI 1 "register_operand" "")
- (match_operand:SI 2 "const_int_operand" "")))]
- "TARGET_V850E"
+ [(parallel [(set (match_operand:SI 0 "register_operand" "")
+ (rotate:SI (match_operand:SI 1 "register_operand" "")
+ (match_operand:SI 2 "const_int_operand" "")))
+ (clobber (reg:CC CC_REGNUM))])]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"
{
if (INTVAL (operands[2]) != 16)
(define_insn "*rotlsi3_16"
[(set (match_operand:SI 0 "register_operand" "=r")
(rotate:SI (match_operand:SI 1 "register_operand" "r")
- (const_int 16)))]
- "TARGET_V850E"
+ (const_int 16)))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"hsw %1,%0"
[(set_attr "length" "4")
(set_attr "cc" "clobber")])
-\f
+
;; ----------------------------------------------------------------------
;; JUMP INSTRUCTIONS
;; ----------------------------------------------------------------------
(const_int 6)))
(set_attr "cc" "none")])
+
+(define_insn "branch_z_normal"
+ [(set (pc)
+ (if_then_else (match_operand 1 "v850_float_z_comparison_operator" "")
+ (label_ref (match_operand 0 "" ""))
+ (pc)))]
+ "TARGET_V850E2V3"
+ "*
+{
+ if (get_attr_length (insn) == 2)
+ return \"bz %l0\";
+ else
+ return \"bnz 1f ; jr %l0 ; 1:\";
+}"
+ [(set (attr "length")
+ (if_then_else (lt (abs (minus (match_dup 0) (pc)))
+ (const_int 256))
+ (const_int 2)
+ (const_int 6)))
+ (set_attr "cc" "none")])
+
+(define_insn "*branch_z_invert"
+ [(set (pc)
+ (if_then_else (match_operand 1 "v850_float_z_comparison_operator" "")
+ (pc)
+ (label_ref (match_operand 0 "" ""))))]
+ "TARGET_V850E2V3"
+ "*
+{
+ if (get_attr_length (insn) == 2)
+ return \"bnz %l0\";
+ else
+ return \"bz 1f ; jr %l0 ; 1:\";
+}"
+ [(set (attr "length")
+ (if_then_else (lt (abs (minus (match_dup 0) (pc)))
+ (const_int 256))
+ (const_int 2)
+ (const_int 6)))
+ (set_attr "cc" "none")])
+
+(define_insn "branch_nz_normal"
+ [(set (pc)
+ (if_then_else (match_operand 1 "v850_float_nz_comparison_operator" "")
+ (label_ref (match_operand 0 "" ""))
+ (pc)))]
+ "TARGET_V850E2V3"
+ "*
+{
+ if (get_attr_length (insn) == 2)
+ return \"bnz %l0\";
+ else
+ return \"bz 1f ; jr %l0 ; 1:\";
+}"
+ [(set (attr "length")
+ (if_then_else (lt (abs (minus (match_dup 0) (pc)))
+ (const_int 256))
+ (const_int 2)
+ (const_int 6)))
+ (set_attr "cc" "none")])
+
+(define_insn "*branch_nz_invert"
+ [(set (pc)
+ (if_then_else (match_operand 1 "v850_float_nz_comparison_operator" "")
+ (pc)
+ (label_ref (match_operand 0 "" ""))))]
+ "TARGET_V850E2V3"
+ "*
+{
+ if (get_attr_length (insn) == 2)
+ return \"bz %l0\";
+ else
+ return \"bnz 1f ; jr %l0 ; 1:\";
+}"
+ [(set (attr "length")
+ (if_then_else (lt (abs (minus (match_dup 0) (pc)))
+ (const_int 256))
+ (const_int 2)
+ (const_int 6)))
+ (set_attr "cc" "none")])
+
;; Unconditional and other jump instructions.
(define_insn "jump"
""
"*
{
- if (get_attr_length (insn) == 2)
+ if (get_attr_length (insn) == 2)
return \"br %0\";
else
return \"jr %0\";
[(set (pc)
(plus:SI
(sign_extend:SI
- (mem:HI
- (plus:SI (ashift:SI (match_operand:SI 0 "register_operand" "r")
- (const_int 1))
- (label_ref (match_operand 1 "" "")))))
- (label_ref (match_dup 1))))]
- "TARGET_V850E"
+ (mem:HI
+ (plus:SI (ashift:SI (match_operand:SI 0 "register_operand" "r")
+ (const_int 1))
+ (label_ref (match_operand 1 "" "")))))
+ (label_ref (match_dup 1))))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"switch %0"
[(set_attr "length" "2")
(set_attr "cc" "none")])
"@
jarl %0,r31
jarl .+4,r31 ; add 4,r31 ; jmp %0"
- [(set_attr "length" "4,8")]
+ [(set_attr "length" "4,8")
+ (set_attr "cc" "clobber,clobber")]
)
(define_insn "call_internal_long"
else
return \"jarl .+4,r31 ; add 4,r31 ; jmp %0\";
}"
- [(set_attr "length" "16,8")]
+ [(set_attr "length" "16,8")
+ (set_attr "cc" "clobber,clobber")]
)
;; Call subroutine, returning value in operand 0
"@
jarl %1,r31
jarl .+4,r31 ; add 4,r31 ; jmp %1"
- [(set_attr "length" "4,8")]
+ [(set_attr "length" "4,8")
+ (set_attr "cc" "clobber,clobber")]
)
(define_insn "call_value_internal_long"
else
return \"jarl .+4, r31 ; add 4, r31 ; jmp %1\";
}"
- [(set_attr "length" "16,8")]
+ [(set_attr "length" "16,8")
+ (set_attr "cc" "clobber,clobber")]
)
(define_insn "nop"
(define_insn ""
[(set (match_operand:SI 0 "register_operand" "=r,r,r,r")
(zero_extend:SI
- (match_operand:HI 1 "nonimmediate_operand" "0,r,T,m")))]
- "TARGET_V850E"
+ (match_operand:HI 1 "nonimmediate_operand" "0,r,T,m")))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"@
zxh %0
andi 65535,%1,%0
sld.hu %1,%0
ld.hu %1,%0"
[(set_attr "length" "2,4,2,4")
- (set_attr "cc" "none_0hit,set_znv,none_0hit,none_0hit")])
+ (set_attr "cc" "none_0hit,set_zn,none_0hit,none_0hit")])
(define_insn "zero_extendhisi2"
[(set (match_operand:SI 0 "register_operand" "=r")
(zero_extend:SI
- (match_operand:HI 1 "register_operand" "r")))]
+ (match_operand:HI 1 "register_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
""
"andi 65535,%1,%0"
[(set_attr "length" "4")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
(define_insn ""
[(set (match_operand:SI 0 "register_operand" "=r,r,r,r")
(zero_extend:SI
- (match_operand:QI 1 "nonimmediate_operand" "0,r,T,m")))]
- "TARGET_V850E"
+ (match_operand:QI 1 "nonimmediate_operand" "0,r,T,m")))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"@
zxb %0
andi 255,%1,%0
sld.bu %1,%0
ld.bu %1,%0"
[(set_attr "length" "2,4,2,4")
- (set_attr "cc" "none_0hit,set_znv,none_0hit,none_0hit")])
+ (set_attr "cc" "none_0hit,set_zn,none_0hit,none_0hit")])
(define_insn "zero_extendqisi2"
[(set (match_operand:SI 0 "register_operand" "=r")
(zero_extend:SI
- (match_operand:QI 1 "register_operand" "r")))]
+ (match_operand:QI 1 "register_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
""
"andi 255,%1,%0"
[(set_attr "length" "4")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
;;- sign extension instructions
(define_insn "*extendhisi_insn"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
- (sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "0,Q,m")))]
- "TARGET_V850E"
+ (sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "0,Q,m")))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"@
sxh %0
sld.h %1,%0
;; instruction.
(define_expand "extendhisi2"
- [(set (match_dup 2)
- (ashift:SI (match_operand:HI 1 "register_operand" "")
- (const_int 16)))
- (set (match_operand:SI 0 "register_operand" "")
- (ashiftrt:SI (match_dup 2)
- (const_int 16)))]
+ [(parallel [(set (match_dup 2)
+ (ashift:SI (match_operand:HI 1 "register_operand" "")
+ (const_int 16)))
+ (clobber (reg:CC CC_REGNUM))])
+ (parallel [(set (match_operand:SI 0 "register_operand" "")
+ (ashiftrt:SI (match_dup 2)
+ (const_int 16)))
+ (clobber (reg:CC CC_REGNUM))])]
""
"
{
(define_insn "*extendqisi_insn"
[(set (match_operand:SI 0 "register_operand" "=r,r,r")
- (sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "0,Q,m")))]
- "TARGET_V850E"
+ (sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "0,Q,m")))
+ (clobber (reg:CC CC_REGNUM))]
+ "(TARGET_V850E || TARGET_V850E2_ALL)"
"@
sxb %0
sld.b %1,%0
;; instruction.
(define_expand "extendqisi2"
- [(set (match_dup 2)
- (ashift:SI (match_operand:QI 1 "register_operand" "")
- (const_int 24)))
- (set (match_operand:SI 0 "register_operand" "")
- (ashiftrt:SI (match_dup 2)
- (const_int 24)))]
+ [(parallel [(set (match_dup 2)
+ (ashift:SI (match_operand:QI 1 "register_operand" "")
+ (const_int 24)))
+ (clobber (reg:CC CC_REGNUM))])
+ (parallel [(set (match_operand:SI 0 "register_operand" "")
+ (ashiftrt:SI (match_dup 2)
+ (const_int 24)))
+ (clobber (reg:CC CC_REGNUM))])]
""
"
{
(define_insn "ashlsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r")
- (ashift:SI
- (match_operand:SI 1 "register_operand" "0,0")
- (match_operand:SI 2 "nonmemory_operand" "r,N")))]
+ (ashift:SI
+ (match_operand:SI 1 "register_operand" "0,0")
+ (match_operand:SI 2 "nonmemory_operand" "r,N")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
shl %2,%0
shl %2,%0"
[(set_attr "length" "4,2")
+ (set_attr "cc" "set_zn")])
+
+(define_insn "ashlsi3_v850e2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (ashift:SI
+ (match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "nonmemory_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_V850E2_ALL"
+ "shl %2,%1,%0"
+ [(set_attr "length" "4")
(set_attr "cc" "set_znv")])
(define_insn "lshrsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r")
- (lshiftrt:SI
- (match_operand:SI 1 "register_operand" "0,0")
- (match_operand:SI 2 "nonmemory_operand" "r,N")))]
+ (lshiftrt:SI
+ (match_operand:SI 1 "register_operand" "0,0")
+ (match_operand:SI 2 "nonmemory_operand" "r,N")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
shr %2,%0
shr %2,%0"
[(set_attr "length" "4,2")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
+
+(define_insn "lshrsi3_v850e2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (lshiftrt:SI
+ (match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "nonmemory_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_V850E2_ALL"
+ "shr %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "set_zn")])
(define_insn "ashrsi3"
[(set (match_operand:SI 0 "register_operand" "=r,r")
- (ashiftrt:SI
- (match_operand:SI 1 "register_operand" "0,0")
- (match_operand:SI 2 "nonmemory_operand" "r,N")))]
+ (ashiftrt:SI
+ (match_operand:SI 1 "register_operand" "0,0")
+ (match_operand:SI 2 "nonmemory_operand" "r,N")))
+ (clobber (reg:CC CC_REGNUM))]
""
"@
sar %2,%0
sar %2,%0"
[(set_attr "length" "4,2")
- (set_attr "cc" "set_znv")])
+ (set_attr "cc" "set_zn")])
+
+(define_insn "ashrsi3_v850e2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (ashiftrt:SI
+ (match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "nonmemory_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_V850E2_ALL"
+ "sar %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "set_zn")])
+
+;; ----------------------------------------------------------------------
+;; FIND FIRST BIT INSTRUCTION
+;; ----------------------------------------------------------------------
+
+(define_insn "ffssi2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (ffs:SI (match_operand:SI 1 "register_operand" "r")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_V850E2_ALL"
+ "sch1r %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "clobber")])
;; ----------------------------------------------------------------------
;; PROLOGUE/EPILOGUE
[(set_attr "length" "2")
(set_attr "cc" "none")])
+
+;; ----------------------------------------------------------------------
+;; v850e2V3 floating-point hardware support
+;; ----------------------------------------------------------------------
+
+(define_insn "addsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (plus:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "addf.s %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "adddf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (plus:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "addf.d %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "subsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (minus:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "subf.s %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "subdf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (minus:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "subf.d %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "mulsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (mult:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "mulf.s %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "muldf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (mult:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "mulf.d %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "divsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (div:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "divf.s %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "divdf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (div:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "divf.d %2,%1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "minsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (smin:SF (match_operand:SF 1 "reg_or_0_operand" "r")
+ (match_operand:SF 2 "reg_or_0_operand" "r")))]
+ "TARGET_V850E2V3"
+ "minf.s %z1,%z2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "mindf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (smin:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "minf.d %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "maxsf3"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (smax:SF (match_operand:SF 1 "reg_or_0_operand" "r")
+ (match_operand:SF 2 "reg_or_0_operand" "r")))]
+ "TARGET_V850E2V3"
+ "maxf.s %z1,%z2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "maxdf3"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (smax:DF (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "maxf.d %1,%2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "abssf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (abs:SF (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "absf.s %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "absdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (abs:DF (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "absf.d %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "negsf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (neg:SF (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "negf.s %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "negdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (neg:DF (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "negf.d %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; square-root
+(define_insn "sqrtsf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (sqrt:SF (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "sqrtf.s %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "sqrtdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (sqrt:DF (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "sqrtf.d %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; float -> int
+(define_insn "fix_truncsfsi2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (fix:SI (fix:SF (match_operand:SF 1 "register_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "trncf.sw %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "fix_truncdfsi2"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (fix:SI (fix:DF (match_operand:DF 1 "even_reg_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "trncf.dw %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; int -> float
+(define_insn "floatsisf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (float:SF (match_operand:SI 1 "reg_or_0_operand" "rI")))]
+ "TARGET_V850E2V3"
+  "cvtf.ws %z1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "floatsidf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (float:DF (match_operand:SI 1 "reg_or_0_operand" "rI")))]
+ "TARGET_V850E2V3"
+ "cvtf.wd %z1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; single-float -> double-float
+(define_insn "extendsfdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (float_extend:DF
+ (match_operand:SF 1 "reg_or_0_operand" "rI")))]
+ "TARGET_V850E2V3"
+ "cvtf.sd %z1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; double-float -> single-float
+(define_insn "truncdfsf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (float_truncate:SF
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cvtf.ds %1,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;;
+;; ---------------- special insns
+;;
+
+;;; reciprocal
+(define_insn "recipsf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (div:SF (match_operand:SF 1 "const_float_1_operand" "")
+ (match_operand:SF 2 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "recipf.s %2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "recipdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (div:DF (match_operand:DF 1 "const_float_1_operand" "")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "recipf.d %2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;;; reciprocal of square-root
+(define_insn "rsqrtsf2"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (div:SF (match_operand:SF 1 "const_float_1_operand" "")
+ (sqrt:SF (match_operand:SF 2 "register_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "rsqrtf.s %2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "rsqrtdf2"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (div:DF (match_operand:DF 1 "const_float_1_operand" "")
+ (sqrt:DF (match_operand:DF 2 "even_reg_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "rsqrtf.d %2,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;;; multiply-add
+(define_insn "maddsf4"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (plus:SF (mult:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r"))
+ (match_operand:SF 3 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "maddf.s %2,%1,%3,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;;; multiply-subtract
+(define_insn "msubsf4"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (minus:SF (mult:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r"))
+ (match_operand:SF 3 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "msubf.s %2,%1,%3,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;;; negative-multiply-add
+(define_insn "nmaddsf4"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (neg:SF (plus:SF (mult:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r"))
+ (match_operand:SF 3 "register_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "nmaddf.s %2,%1,%3,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; negative-multiply-subtract
+(define_insn "nmsubsf4"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (neg:SF (minus:SF (mult:SF (match_operand:SF 1 "register_operand" "r")
+ (match_operand:SF 2 "register_operand" "r"))
+ (match_operand:SF 3 "register_operand" "r"))))]
+ "TARGET_V850E2V3"
+ "nmsubf.s %2,%1,%3,%0"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+;;
+;; ---------------- comparison/conditionals
+;;
+;; SF
+
+(define_insn "cmpsf_le_insn"
+ [(set (reg:CC_FPU_LE FCC_REGNUM)
+ (compare:CC_FPU_LE (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s le,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpsf_lt_insn"
+ [(set (reg:CC_FPU_LT FCC_REGNUM)
+ (compare:CC_FPU_LT (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s lt,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpsf_ge_insn"
+ [(set (reg:CC_FPU_GE FCC_REGNUM)
+ (compare:CC_FPU_GE (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s ge,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpsf_gt_insn"
+ [(set (reg:CC_FPU_GT FCC_REGNUM)
+ (compare:CC_FPU_GT (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s gt,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpsf_eq_insn"
+ [(set (reg:CC_FPU_EQ FCC_REGNUM)
+ (compare:CC_FPU_EQ (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s eq,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpsf_ne_insn"
+ [(set (reg:CC_FPU_NE FCC_REGNUM)
+ (compare:CC_FPU_NE (match_operand:SF 0 "register_operand" "r")
+ (match_operand:SF 1 "register_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.s neq,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+;; DF
+
+(define_insn "cmpdf_le_insn"
+ [(set (reg:CC_FPU_LE FCC_REGNUM)
+ (compare:CC_FPU_LE (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d le,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpdf_lt_insn"
+ [(set (reg:CC_FPU_LT FCC_REGNUM)
+ (compare:CC_FPU_LT (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d lt,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpdf_ge_insn"
+ [(set (reg:CC_FPU_GE FCC_REGNUM)
+ (compare:CC_FPU_GE (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d ge,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpdf_gt_insn"
+ [(set (reg:CC_FPU_GT FCC_REGNUM)
+ (compare:CC_FPU_GT (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d gt,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpdf_eq_insn"
+ [(set (reg:CC_FPU_EQ FCC_REGNUM)
+ (compare:CC_FPU_EQ (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d eq,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+(define_insn "cmpdf_ne_insn"
+ [(set (reg:CC_FPU_NE FCC_REGNUM)
+ (compare:CC_FPU_NE (match_operand:DF 0 "even_reg_operand" "r")
+ (match_operand:DF 1 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmpf.d neq,%z0,%z1"
+ [(set_attr "length" "4")
+ (set_attr "cc" "none_0hit")
+ (set_attr "type" "fpu")])
+
+
+;;
+;; Transfer a v850e2v3 fcc to the Z bit of CC0 (this is necessary to do a
+;; conditional branch based on a floating-point compare)
+;;
+
+(define_insn "trfsr"
+ [(set (match_operand 0 "" "") (match_operand 1 "" ""))]
+ "TARGET_V850E2V3
+ && GET_MODE(operands[0]) == GET_MODE(operands[1])
+ && GET_CODE(operands[0]) == REG && REGNO (operands[0]) == CC_REGNUM
+ && GET_CODE(operands[1]) == REG && REGNO (operands[1]) == FCC_REGNUM
+ && (GET_MODE(operands[0]) == CC_FPU_LEmode
+ || GET_MODE(operands[0]) == CC_FPU_GEmode
+ || GET_MODE(operands[0]) == CC_FPU_LTmode
+ || GET_MODE(operands[0]) == CC_FPU_GTmode
+ || GET_MODE(operands[0]) == CC_FPU_EQmode
+ || GET_MODE(operands[0]) == CC_FPU_NEmode)"
+ "trfsr"
+ [(set_attr "length" "4")
+ (set_attr "cc" "set_z")
+ (set_attr "type" "fpu")])
+
+;;
+;; Floating-point conditional moves for the v850e2v3.
+;;
+
+;; The actual v850e2v3 conditional move instructions
+;;
+(define_insn "movsfcc_z_insn"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (if_then_else:SF
+ (match_operand 3 "v850_float_z_comparison_operator" "")
+ (match_operand:SF 1 "reg_or_0_operand" "rIG")
+ (match_operand:SF 2 "reg_or_0_operand" "rIG")))]
+ "TARGET_V850E2V3"
+ "cmovf.s 0,%z1,%z2,%0"
+ [(set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
+(define_insn "movsfcc_nz_insn"
+ [(set (match_operand:SF 0 "register_operand" "=r")
+ (if_then_else:SF
+ (match_operand 3 "v850_float_nz_comparison_operator" "")
+ (match_operand:SF 1 "reg_or_0_operand" "rIG")
+ (match_operand:SF 2 "reg_or_0_operand" "rIG")))]
+ "TARGET_V850E2V3"
+ "cmovf.s 0,%z2,%z1,%0"
+ [(set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
+(define_insn "movdfcc_z_insn"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (if_then_else:DF
+ (match_operand 3 "v850_float_z_comparison_operator" "")
+ (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmovf.d 0,%z1,%z2,%0"
+ [(set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
+(define_insn "movdfcc_nz_insn"
+ [(set (match_operand:DF 0 "even_reg_operand" "=r")
+ (if_then_else:DF
+ (match_operand 3 "v850_float_nz_comparison_operator" "")
+ (match_operand:DF 1 "even_reg_operand" "r")
+ (match_operand:DF 2 "even_reg_operand" "r")))]
+ "TARGET_V850E2V3"
+ "cmovf.d 0,%z2,%z1,%0"
+ [(set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
+(define_insn "movedfcc_z_zero"
+ [(set (match_operand:DF 0 "register_operand" "=r")
+ (if_then_else:DF
+ (match_operand 3 "v850_float_z_comparison_operator" "")
+ (match_operand:DF 1 "reg_or_0_operand" "rIG")
+ (match_operand:DF 2 "reg_or_0_operand" "rIG")))]
+ "TARGET_V850E2V3"
+ "cmovf.s 0,%z1,%z2,%0 ; cmovf.s 0,%Z1,%Z2,%R0"
+ [(set_attr "length" "8")
+ (set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
+(define_insn "movedfcc_nz_zero"
+ [(set (match_operand:DF 0 "register_operand" "=r")
+ (if_then_else:DF
+ (match_operand 3 "v850_float_nz_comparison_operator" "")
+ (match_operand:DF 1 "reg_or_0_operand" "rIG")
+ (match_operand:DF 2 "reg_or_0_operand" "rIG")))]
+ "TARGET_V850E2V3"
+ "cmovf.s 0,%z2,%z1,%0 ; cmovf.s 0,%Z2,%Z1,%R0"
+ [(set_attr "length" "8")
+ (set_attr "cc" "clobber")]) ;; ??? or none_0hit
+
-\f
;; ----------------------------------------------------------------------
;; HELPER INSTRUCTIONS for saving the prologue and epilogue registers
;; ----------------------------------------------------------------------
;;
;; Actually, convert the RTXs into a PREPARE instruction.
;;
+
(define_insn ""
[(match_parallel 0 "pattern_is_ok_for_prepare"
[(set (reg:SI 3)
(set (mem:SI (plus:SI (reg:SI 3)
(match_operand:SI 2 "immediate_operand" "i")))
(match_operand:SI 3 "register_is_ok_for_epilogue" "r"))])]
- "TARGET_PROLOG_FUNCTION && TARGET_V850E"
+ "TARGET_PROLOG_FUNCTION && (TARGET_V850E || TARGET_V850E2_ALL)"
"* return construct_prepare_instruction (operands[0]);
"
[(set_attr "length" "4")
- (set_attr "cc" "none")])
+ (set_attr "cc" "clobber")])
(define_insn ""
[(match_parallel 0 "pattern_is_ok_for_prologue"
(set (mem:SI (plus:SI (reg:SI 3)
(match_operand:SI 2 "immediate_operand" "i")))
(match_operand:SI 3 "register_is_ok_for_epilogue" "r"))])]
- "TARGET_PROLOG_FUNCTION && TARGET_V850"
+ "TARGET_PROLOG_FUNCTION"
"* return construct_save_jarl (operands[0]);
"
[(set (attr "length") (if_then_else (eq_attr "long_calls" "yes")
(set (match_operand:SI 2 "register_is_ok_for_epilogue" "=r")
(mem:SI (plus:SI (reg:SI 3)
(match_operand:SI 3 "immediate_operand" "i"))))])]
- "TARGET_PROLOG_FUNCTION && TARGET_V850E"
+ "TARGET_PROLOG_FUNCTION && (TARGET_V850E || TARGET_V850E2_ALL)"
"* return construct_dispose_instruction (operands[0]);
"
[(set_attr "length" "4")
- (set_attr "cc" "none")])
+ (set_attr "cc" "clobber")])
;; This pattern will match a return RTX followed by any number of pop RTXs
;; and possible a stack adjustment as well. These RTXs will be turned into
(set (match_operand:SI 2 "register_is_ok_for_epilogue" "=r")
(mem:SI (plus:SI (reg:SI 3)
(match_operand:SI 3 "immediate_operand" "i"))))])]
- "TARGET_PROLOG_FUNCTION && TARGET_V850"
+ "TARGET_PROLOG_FUNCTION"
"* return construct_restore_jr (operands[0]);
"
[(set (attr "length") (if_then_else (eq_attr "long_calls" "yes")
;; Initialize an interrupt function. Do not depend on TARGET_PROLOG_FUNCTION.
(define_insn "callt_save_interrupt"
[(unspec_volatile [(const_int 0)] 2)]
- "TARGET_V850E && !TARGET_DISABLE_CALLT"
+ "(TARGET_V850E || TARGET_V850E2_ALL) && !TARGET_DISABLE_CALLT"
;; The CALLT instruction stores the next address of CALLT to CTPC register
;; without saving its previous value. So if the interrupt handler
;; or its caller could possibly execute the CALLT insn, save_interrupt
;; MUST NOT be called via CALLT.
"*
{
- output_asm_insn (\"addi -24, sp, sp\", operands);
+ output_asm_insn (\"addi -28, sp, sp\", operands);
+ output_asm_insn (\"st.w r1, 24[sp]\", operands);
output_asm_insn (\"st.w r10, 12[sp]\", operands);
+ output_asm_insn (\"st.w r11, 16[sp]\", operands);
output_asm_insn (\"stsr ctpc, r10\", operands);
- output_asm_insn (\"st.w r10, 16[sp]\", operands);
- output_asm_insn (\"stsr ctpsw, r10\", operands);
output_asm_insn (\"st.w r10, 20[sp]\", operands);
+ output_asm_insn (\"stsr ctpsw, r10\", operands);
+ output_asm_insn (\"st.w r10, 24[sp]\", operands);
output_asm_insn (\"callt ctoff(__callt_save_interrupt)\", operands);
return \"\";
}"
[(set_attr "length" "26")
- (set_attr "cc" "none")])
+ (set_attr "cc" "clobber")])
(define_insn "callt_return_interrupt"
[(unspec_volatile [(const_int 0)] 3)]
- "TARGET_V850E && !TARGET_DISABLE_CALLT"
+ "(TARGET_V850E || TARGET_V850E2_ALL) && !TARGET_DISABLE_CALLT"
"callt ctoff(__callt_return_interrupt)"
[(set_attr "length" "2")
(set_attr "cc" "clobber")])
(define_insn "save_interrupt"
- [(set (reg:SI 3) (plus:SI (reg:SI 3) (const_int -16)))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int -16))) (reg:SI 30))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int -12))) (reg:SI 4))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int -8))) (reg:SI 1))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int -4))) (reg:SI 10))]
+ [(set (reg:SI 3) (plus:SI (reg:SI 3) (const_int -20)))
+ (set (mem:SI (plus:SI (reg:SI 3) (const_int -20))) (reg:SI 30))
+ (set (mem:SI (plus:SI (reg:SI 3) (const_int -16))) (reg:SI 4))
+ (set (mem:SI (plus:SI (reg:SI 3) (const_int -12))) (reg:SI 1))
+ (set (mem:SI (plus:SI (reg:SI 3) (const_int -8))) (reg:SI 10))
+ (set (mem:SI (plus:SI (reg:SI 3) (const_int -4))) (reg:SI 11))]
""
"*
{
if (TARGET_PROLOG_FUNCTION && !TARGET_LONG_CALLS)
- return \"add -16,sp\;st.w r10,12[sp]\;jarl __save_interrupt,r10\";
+ return \"addi -20,sp,sp \; st.w r11,16[sp] \; st.w r10,12[sp] \; jarl __save_interrupt,r10\";
else
{
- output_asm_insn (\"add -16, sp\", operands);
+ output_asm_insn (\"addi -20, sp, sp\", operands);
+ output_asm_insn (\"st.w r11, 16[sp]\", operands);
output_asm_insn (\"st.w r10, 12[sp]\", operands);
output_asm_insn (\"st.w ep, 0[sp]\", operands);
output_asm_insn (\"st.w gp, 4[sp]\", operands);
;; Restore r1, r4, r10, and return from the interrupt
(define_insn "return_interrupt"
[(return)
- (set (reg:SI 3) (plus:SI (reg:SI 3) (const_int 16)))
+ (set (reg:SI 3) (plus:SI (reg:SI 3) (const_int 20)))
+ (set (reg:SI 11) (mem:SI (plus:SI (reg:SI 3) (const_int 16))))
(set (reg:SI 10) (mem:SI (plus:SI (reg:SI 3) (const_int 12))))
(set (reg:SI 1) (mem:SI (plus:SI (reg:SI 3) (const_int 8))))
(set (reg:SI 4) (mem:SI (plus:SI (reg:SI 3) (const_int 4))))
output_asm_insn (\"ld.w 4[sp], gp\", operands);
output_asm_insn (\"ld.w 8[sp], r1\", operands);
output_asm_insn (\"ld.w 12[sp], r10\", operands);
- output_asm_insn (\"addi 16, sp, sp\", operands);
+ output_asm_insn (\"ld.w 16[sp], r11\", operands);
+ output_asm_insn (\"addi 20, sp, sp\", operands);
output_asm_insn (\"reti\", operands);
return \"\";
}
(define_insn "callt_save_all_interrupt"
[(unspec_volatile [(const_int 0)] 0)]
- "TARGET_V850E && !TARGET_DISABLE_CALLT"
+ "(TARGET_V850E || TARGET_V850E2_ALL) && !TARGET_DISABLE_CALLT"
"callt ctoff(__callt_save_all_interrupt)"
[(set_attr "length" "2")
(set_attr "cc" "none")])
(define_insn "callt_restore_all_interrupt"
[(unspec_volatile [(const_int 0)] 1)]
- "TARGET_V850E && !TARGET_DISABLE_CALLT"
+ "(TARGET_V850E || TARGET_V850E2_ALL) && !TARGET_DISABLE_CALLT"
"callt ctoff(__callt_restore_all_interrupt)"
[(set_attr "length" "2")
(set_attr "cc" "none")])
[(set_attr "length" "4")
(set_attr "cc" "clobber")])
-;; Save r6-r9 for a variable argument function
-(define_insn "save_r6_r9_v850e"
- [(set (mem:SI (reg:SI 3)) (reg:SI 6))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 4))) (reg:SI 7))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 8))) (reg:SI 8))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 12))) (reg:SI 9))
- ]
- "TARGET_PROLOG_FUNCTION && TARGET_V850E && !TARGET_DISABLE_CALLT"
- "callt ctoff(__callt_save_r6_r9)"
- [(set_attr "length" "2")
- (set_attr "cc" "none")])
-
-(define_insn "save_r6_r9"
- [(set (mem:SI (reg:SI 3)) (reg:SI 6))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 4))) (reg:SI 7))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 8))) (reg:SI 8))
- (set (mem:SI (plus:SI (reg:SI 3) (const_int 12))) (reg:SI 9))
- (clobber (reg:SI 10))]
- "TARGET_PROLOG_FUNCTION && ! TARGET_LONG_CALLS"
- "jarl __save_r6_r9,r10"
- [(set_attr "length" "4")
- (set_attr "cc" "clobber")])
Target RejectNegative Joined
Set the max size of data eligible for the TDA area
-mstrict-align
-Target Report Mask(STRICT_ALIGN)
-Enforce strict alignment
+mno-strict-align
+Target Report Mask(NO_STRICT_ALIGN)
+Do not enforce strict alignment
+mjump-tables-in-data-section
+Target Report Mask(JUMP_TABLES_IN_DATA_SECTION)
+Put jump tables in the data section
+
mUS-bit-set
Target Report Mask(US_BIT_SET)
Compile for the v850e processor
mv850e1
-Target RejectNegative Mask(V850E) MaskExists
+Target RejectNegative Mask(V850E1)
Compile for the v850e1 processor
+mv850e2
+Target Report RejectNegative Mask(V850E2)
+Compile for the v850e2 processor
+
+mv850e2v3
+Target Report RejectNegative Mask(V850E2V3)
+Compile for the v850e2v3 processor
+
mzda
Target RejectNegative Joined
Set the max size of data eligible for the ZDA area
-mtda=@var{n} -msda=@var{n} -mzda=@var{n} @gol
-mapp-regs -mno-app-regs @gol
-mdisable-callt -mno-disable-callt @gol
+-mv850e2v3 @gol
+-mv850e2 @gol
-mv850e1 @gol
-mv850e @gol
-mv850 -mbig-switch}
@opindex mno-app-regs
This option will cause r2 and r5 to be treated as fixed registers.
+@item -mv850e2v3
+@opindex mv850e2v3
+Specify that the target processor is the V850E2V3. The preprocessor
+constant @samp{__v850e2v3__} will be defined if
+this option is used.
+
+@item -mv850e2
+@opindex mv850e2
+Specify that the target processor is the V850E2. The preprocessor
+constant @samp{__v850e2__} will be defined if
+this option is used.
+
@item -mv850e1
@opindex mv850e1
Specify that the target processor is the V850E1. The preprocessor
constants @samp{__v850e1__} and @samp{__v850e__} will be defined if
this option is used.
@item -mv850e
@opindex mv850e
constant @samp{__v850e__} will be defined if this option is used.
If neither @option{-mv850} nor @option{-mv850e} nor @option{-mv850e1}
+nor @option{-mv850e2} nor @option{-mv850e2v3}
are defined then a default target processor will be chosen and the
relevant @samp{__v850*__} preprocessor constant will be defined.
@item -mdisable-callt
@opindex mdisable-callt
This option will suppress generation of the CALLT instruction for the
-v850e and v850e1 flavors of the v850 architecture. The default is
+v850e, v850e1, v850e2, and v850e2v3 flavors of the v850 architecture.
+The default is
@option{-mno-disable-callt} which allows the CALLT instruction to be used.
@end table