Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 asm updates from Ingo Molnar:
 "The main changes in this cycle were:

   - Cross-arch changes to move the linker sections for NOTES and
     EXCEPTION_TABLE into the RO_DATA area, where they belong on most
     architectures. (Kees Cook)

   - Switch the x86 linker fill byte from 0x90 (NOP) to 0xcc (INT3), to
     trap jumps into the middle of those padding areas instead of
     sliding execution. (Kees Cook)

   - A thorough cleanup of symbol definitions within x86 assembler code.
     The rather randomly named macros got streamlined around a
     (hopefully) straightforward naming scheme:

        SYM_START(name, linkage, align...)
        SYM_END(name, sym_type)

        SYM_FUNC_START(name)
        SYM_FUNC_END(name)

        SYM_CODE_START(name)
        SYM_CODE_END(name)

        SYM_DATA_START(name)
        SYM_DATA_END(name)

     etc - plus about three times as many variants of these basic
     primitives, with label, local symbol or attribute variants expressed
     via postfixes.

     No change in functionality intended. (Jiri Slaby)

   - Misc other changes, cleanups and smaller fixes"

* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (67 commits)
  x86/entry/64: Remove pointless jump in paranoid_exit
  x86/entry/32: Remove unused resume_userspace label
  x86/build/vdso: Remove meaningless CFLAGS_REMOVE_*.o
  m68k: Convert missed RODATA to RO_DATA
  x86/vmlinux: Use INT3 instead of NOP for linker fill bytes
  x86/mm: Report actual image regions in /proc/iomem
  x86/mm: Report which part of kernel image is freed
  x86/mm: Remove redundant address-of operators on addresses
  xtensa: Move EXCEPTION_TABLE to RO_DATA segment
  powerpc: Move EXCEPTION_TABLE to RO_DATA segment
  parisc: Move EXCEPTION_TABLE to RO_DATA segment
  microblaze: Move EXCEPTION_TABLE to RO_DATA segment
  ia64: Move EXCEPTION_TABLE to RO_DATA segment
  h8300: Move EXCEPTION_TABLE to RO_DATA segment
  c6x: Move EXCEPTION_TABLE to RO_DATA segment
  arm64: Move EXCEPTION_TABLE to RO_DATA segment
  alpha: Move EXCEPTION_TABLE to RO_DATA segment
  x86/vmlinux: Move EXCEPTION_TABLE to RO_DATA segment
  x86/vmlinux: Actually use _etext for the end of the text segment
  vmlinux.lds.h: Allow EXCEPTION_TABLE to live in RO_DATA
  ...
commit 1d87200446 by Linus Torvalds, 2019-11-26 10:42:40 -08:00
165 changed files with 1656 additions and 1188 deletions

@@ -0,0 +1,216 @@
Assembler Annotations
=====================

Copyright (c) 2017-2019 Jiri Slaby

This document describes the new macros for annotation of data and code in
assembly. In particular, it contains information about ``SYM_FUNC_START``,
``SYM_FUNC_END``, ``SYM_CODE_START``, and similar.

Rationale
---------

Some code like entries, trampolines, or boot code needs to be written in
assembly. As in C, such code is grouped into functions and accompanied by
data. Standard assemblers do not force users to precisely mark these pieces
as code, data, or even to specify their length. Nevertheless, assemblers
provide developers with such annotations to aid debuggers throughout
assembly. On top of that, developers also want to mark some functions as
*global* in order to make them visible outside of their translation units.

Over time, the Linux kernel has adopted macros from various projects (like
``binutils``) to facilitate such annotations. So for historic reasons,
developers have been using ``ENTRY``, ``END``, ``ENDPROC``, and other
annotations in assembly. Due to the lack of documentation, these macros have
been used in rather wrong contexts at some locations. Clearly, ``ENTRY`` was
intended to denote the beginning of global symbols (be it data or code).
``END`` used to mark the end of data or the end of special functions with a
*non-standard* calling convention. In contrast, ``ENDPROC`` should annotate
only the ends of *standard* functions.
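
For illustration, this is how a global function used to be marked with the
deprecated macros (a minimal sketch of the old style discussed above)::

   ENTRY(memset)
       ... asm insns ...
   ENDPROC(memset)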

When these macros are used correctly, they help assemblers generate a nice
object with both sizes and types set correctly. For example, the result of
``arch/x86/lib/putuser.S``::

   Num:    Value          Size Type    Bind   Vis      Ndx Name
    25: 0000000000000000    33 FUNC    GLOBAL DEFAULT    1 __put_user_1
    29: 0000000000000030    37 FUNC    GLOBAL DEFAULT    1 __put_user_2
    32: 0000000000000060    36 FUNC    GLOBAL DEFAULT    1 __put_user_4
    35: 0000000000000090    37 FUNC    GLOBAL DEFAULT    1 __put_user_8
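
Such a listing can be obtained from the compiled object with ``readelf``;
for instance (an illustrative invocation)::

   $ readelf --syms arch/x86/lib/putuser.o | grep __put_user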

This is not only important for debugging purposes. When there are properly
annotated objects like this, tools can be run on them to generate more useful
information. In particular, on properly annotated objects, ``objtool`` can be
run to check and fix the object if needed. Currently, ``objtool`` can report
missing frame pointer setup/destruction in functions. It can also
automatically generate annotations for the :doc:`ORC unwinder
<x86/orc-unwinder>` for most code. Both of these are especially important to
support reliable stack traces, which are in turn necessary for :doc:`Kernel
live patching <livepatch/livepatch>`.

Caveat and Discussion
---------------------

As one might realize, there were only three macros previously. That is indeed
insufficient to cover all the combinations of cases:

* standard/non-standard function
* code/data
* global/local symbol

There was a discussion_ and, instead of extending the existing ``ENTRY/END*``
macros, it was decided that brand new macros should be introduced::

    So how about using macro names that actually show the purpose, instead
    of importing all the crappy, historic, essentially randomly chosen
    debug symbol macro names from the binutils and older kernels?

.. _discussion: https://lkml.kernel.org/r/20170217104757.28588-1-jslaby@suse.cz

Macros Description
------------------

The new macros are prefixed with the ``SYM_`` prefix and can be divided into
three main groups:

1. ``SYM_FUNC_*`` -- to annotate C-like functions. This means functions with
   standard C calling conventions, i.e. the stack contains a return address
   at the predefined place and a return from the function can happen in a
   standard way. When frame pointers are enabled, save/restore of the frame
   pointer shall happen at the start/end of a function, respectively, too.

   Checking tools like ``objtool`` should ensure such marked functions
   conform to these rules. The tools can also easily annotate these functions
   with debugging information (like *ORC data*) automatically.

2. ``SYM_CODE_*`` -- special functions called with a special stack, be it
   interrupt handlers with special stack content, trampolines, or startup
   functions.

   Checking tools mostly ignore checking of these functions. But some debug
   information still can be generated automatically. For correct debug data,
   this code needs hints like ``UNWIND_HINT_REGS`` provided by developers.

3. ``SYM_DATA*`` -- obviously data belonging to ``.data`` sections and not
   to ``.text``. Data do not contain instructions, so they have to be treated
   specially by the tools: they should not treat the bytes as instructions,
   nor assign any debug information to them.

Instruction Macros
~~~~~~~~~~~~~~~~~~

This section covers ``SYM_FUNC_*`` and ``SYM_CODE_*`` enumerated above.

* ``SYM_FUNC_START`` and ``SYM_FUNC_START_LOCAL`` are supposed to be **the
  most frequent markings**. They are used for functions with standard calling
  conventions -- global and local. Like in C, they both align the functions
  to architecture-specific ``__ALIGN`` bytes. There are also ``_NOALIGN``
  variants for special cases where developers do not want this implicit
  alignment.

  ``SYM_FUNC_START_WEAK`` and ``SYM_FUNC_START_WEAK_NOALIGN`` markings are
  also offered as an assembler counterpart to the *weak* attribute known from
  C.

  All of these **shall** be coupled with ``SYM_FUNC_END``. First, it marks
  the sequence of instructions as a function and computes its size for the
  generated object file. Second, it also eases checking and processing of
  such object files as the tools can trivially find exact function
  boundaries.

  So in most cases, developers should write something like in the following
  example, having some asm instructions in between the macros, of course::

    SYM_FUNC_START(memset)
        ... asm insns ...
    SYM_FUNC_END(memset)

  In fact, this kind of annotation corresponds to the now deprecated
  ``ENTRY`` and ``ENDPROC`` macros.

* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
  who decided to have two or more names for one function. The typical use
  is::

    SYM_FUNC_START_ALIAS(__memset)
    SYM_FUNC_START(memset)
        ... asm insns ...
    SYM_FUNC_END(memset)
    SYM_FUNC_END_ALIAS(__memset)

  In this example, one can call ``__memset`` or ``memset`` with the same
  result, except the debug information for the instructions is generated to
  the object file only once -- for the non-``ALIAS`` case.

* ``SYM_CODE_START`` and ``SYM_CODE_START_LOCAL`` should be used only in
  special cases -- if you know what you are doing. This is used exclusively
  for interrupt handlers and similar where the calling convention is not the
  C one. ``_NOALIGN`` variants exist too. The use is the same as for the
  ``FUNC`` category above::

    SYM_CODE_START_LOCAL(bad_put_user)
        ... asm insns ...
    SYM_CODE_END(bad_put_user)

  Again, every ``SYM_CODE_START*`` **shall** be coupled by ``SYM_CODE_END``.

  To some extent, this category corresponds to the deprecated ``ENTRY`` and
  ``END``, except ``END`` had several other meanings too.

* ``SYM_INNER_LABEL*`` is used to denote a label inside some
  ``SYM_{CODE,FUNC}_START`` and ``SYM_{CODE,FUNC}_END`` pair. They are very
  similar to C labels, except they can be made global. An example of use::

    SYM_CODE_START(ftrace_caller)
        /* save_mcount_regs fills in first two parameters */
        ...

    SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
        /* Load the ftrace_ops into the 3rd parameter */
        ...

    SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
        call ftrace_stub
        ...
        retq
    SYM_CODE_END(ftrace_caller)

Data Macros
~~~~~~~~~~~

Similar to instructions, there are a couple of macros to describe data in
the assembly.

* ``SYM_DATA_START`` and ``SYM_DATA_START_LOCAL`` mark the start of some data
  and shall be used in conjunction with either ``SYM_DATA_END`` or
  ``SYM_DATA_END_LABEL``. The latter also adds a label to the end, so that
  people can use both ``lstack`` and (local) ``lstack_end`` in the following
  example::

    SYM_DATA_START_LOCAL(lstack)
        .skip 4096
    SYM_DATA_END_LABEL(lstack, SYM_L_LOCAL, lstack_end)

* ``SYM_DATA`` and ``SYM_DATA_LOCAL`` are variants for simple, mostly
  one-line data::

    SYM_DATA(HEAP, .long rm_heap)
    SYM_DATA(heap_end, .long rm_stack)

  In the end, they expand to ``SYM_DATA_START`` with ``SYM_DATA_END``
  internally.

Support Macros
~~~~~~~~~~~~~~

All of the above ultimately reduce to some invocation of ``SYM_START``,
``SYM_END``, or ``SYM_ENTRY``. Normally, developers should avoid using
these directly.
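
For illustration, the reduction works roughly like this (a simplified sketch
of the generic definitions in ``include/linux/linkage.h``; the real macros
carry additional bookkeeping)::

   #define SYM_FUNC_START(name)                     \
           SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)

   #define SYM_FUNC_END(name)                       \
           SYM_END(name, SYM_T_FUNC)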

Further, in the above examples, one could see ``SYM_L_LOCAL``. There are also
``SYM_L_GLOBAL`` and ``SYM_L_WEAK``. All of them are intended to denote the
linkage of a symbol marked by them. They are used either in ``_LABEL``
variants of the earlier macros, or in ``SYM_START``.

Overriding Macros
~~~~~~~~~~~~~~~~~

Architectures can also override any of the macros in their own
``asm/linkage.h``, including the macros specifying the type of a symbol
(``SYM_T_FUNC``, ``SYM_T_OBJECT``, and ``SYM_T_NONE``). As every macro
described in this document is surrounded by ``#ifdef`` + ``#endif``, it is
enough to define the macros differently in the aforementioned
architecture-dependent header.
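
For example, an architecture that does not want the implicit alignment of
functions could redefine the function markers in its own header (a
hypothetical sketch; ``SYM_A_NONE`` is the real no-alignment attribute, the
arch path is illustrative)::

   /* arch/foo/include/asm/linkage.h */
   #define SYM_FUNC_START(name)                     \
           SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)

   #define SYM_FUNC_START_LOCAL(name)               \
           SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)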


@@ -135,6 +135,14 @@ needed).
    mic/index
    scheduler/index

+Architecture-agnostic documentation
+-----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   asm-annotations
+
 Architecture-specific documentation
 -----------------------------------


@@ -1,4 +1,8 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+#define EMITS_PT_NOTE
+#define RO_EXCEPTION_TABLE_ALIGN 16
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/thread_info.h>
 #include <asm/cache.h>
@@ -8,7 +12,7 @@
 OUTPUT_FORMAT("elf64-alpha")
 OUTPUT_ARCH(alpha)
 ENTRY(__start)
-PHDRS { kernel PT_LOAD; note PT_NOTE; }
+PHDRS { text PT_LOAD; note PT_NOTE; }
 jiffies = jiffies_64;
 SECTIONS
 {
@@ -27,17 +31,11 @@ SECTIONS
 LOCK_TEXT
 *(.fixup)
 *(.gnu.warning)
-} :kernel
+} :text
 swapper_pg_dir = SWAPPER_PGD;
 _etext = .; /* End of text section */
-NOTES :kernel :note
-.dummy : {
-*(.dummy)
-} :kernel
-RODATA
-EXCEPTION_TABLE(16)
+RO_DATA(4096)
 /* Will be freed after init */
 __init_begin = ALIGN(PAGE_SIZE);
@@ -52,7 +50,7 @@ SECTIONS
 _sdata = .; /* Start of rw data section */
 _data = .;
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 .got : {
 *(.got)


@@ -95,13 +95,13 @@ SECTIONS
 _etext = .;
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
+RO_DATA(PAGE_SIZE)
 /*
 * 1. this is .data essentially
 * 2. THREAD_SIZE for init.task, must be kernel-stk sz aligned
 */
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
@@ -118,8 +118,6 @@ SECTIONS
 /DISCARD/ : { *(.eh_frame) }
 #endif
-NOTES
 . = ALIGN(PAGE_SIZE);
 _end = . ;


@@ -70,8 +70,6 @@ SECTIONS
 ARM_UNWIND_SECTIONS
 #endif
-NOTES
 _etext = .; /* End of text and rodata section */
 ARM_VECTORS
@@ -114,7 +112,7 @@ SECTIONS
 . = ALIGN(THREAD_SIZE);
 _sdata = .;
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 .data.ro_after_init : AT(ADDR(.data.ro_after_init) - LOAD_OFFSET) {
 *(.data..ro_after_init)
 }


@@ -81,8 +81,6 @@ SECTIONS
 ARM_UNWIND_SECTIONS
 #endif
-NOTES
 #ifdef CONFIG_STRICT_KERNEL_RWX
 . = ALIGN(1<<SECTION_SHIFT);
 #else
@@ -143,7 +141,7 @@ SECTIONS
 __init_end = .;
 _sdata = .;
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 BSS_SECTION(0, 0, 0)


@@ -5,6 +5,8 @@
 * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 */
+#define RO_EXCEPTION_TABLE_ALIGN 8
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 #include <asm/kernel-pgtable.h>
@@ -132,11 +134,9 @@ SECTIONS
 . = ALIGN(SEGMENT_ALIGN);
 _etext = .; /* End of text section */
-RO_DATA(PAGE_SIZE) /* everything from this point to */
-EXCEPTION_TABLE(8) /* __init_begin will be marked RO NX */
-NOTES
-. = ALIGN(PAGE_SIZE);
+/* everything from this point to __init_begin will be marked RO NX */
+RO_DATA(PAGE_SIZE)
 idmap_pg_dir = .;
 . += IDMAP_DIR_SIZE;
@@ -212,7 +212,7 @@ SECTIONS
 _data = .;
 _sdata = .;
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
 /*
 * Data written with the MMU off but read with the MMU on requires


@@ -5,6 +5,9 @@
 * Copyright (C) 2010, 2011 Texas Instruments Incorporated
 * Mark Salter <msalter@redhat.com>
 */
+#define RO_EXCEPTION_TABLE_ALIGN 16
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/thread_info.h>
 #include <asm/page.h>
@@ -80,10 +83,7 @@ SECTIONS
 *(.gnu.warning)
 }
-EXCEPTION_TABLE(16)
-NOTES
-RO_DATA_SECTION(PAGE_SIZE)
+RO_DATA(PAGE_SIZE)
 .const :
 {
 *(.const .const.* .gnu.linkonce.r.*)


@@ -49,11 +49,10 @@ SECTIONS
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
-NOTES
 EXCEPTION_TABLE(L1_CACHE_BYTES)
 BSS_SECTION(L1_CACHE_BYTES, PAGE_SIZE, L1_CACHE_BYTES)
 VBR_BASE


@@ -1,4 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+#define RO_EXCEPTION_TABLE_ALIGN 16
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/page.h>
 #include <asm/thread_info.h>
@@ -37,9 +40,7 @@ SECTIONS
 #endif
 _etext = . ;
 }
-EXCEPTION_TABLE(16)
-NOTES
-RO_DATA_SECTION(4)
+RO_DATA(4)
 ROMEND = .;
 #if defined(CONFIG_ROMKERNEL)
 . = RAMTOP;
@@ -48,7 +49,7 @@ SECTIONS
 #endif
 _sdata = . ;
 __data_start = . ;
-RW_DATA_SECTION(0, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(0, PAGE_SIZE, THREAD_SIZE)
 #if defined(CONFIG_ROMKERNEL)
 #undef ADDR
 #endif


@@ -49,12 +49,11 @@ SECTIONS
 INIT_DATA_SECTION(PAGE_SIZE)
 _sdata = .;
-RW_DATA_SECTION(32,PAGE_SIZE,_THREAD_SIZE)
-RO_DATA_SECTION(PAGE_SIZE)
+RW_DATA(32,PAGE_SIZE,_THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
 _edata = .;
 EXCEPTION_TABLE(16)
-NOTES
 BSS_SECTION(_PAGE_SIZE, _PAGE_SIZE, _PAGE_SIZE)


@@ -5,6 +5,9 @@
 #include <asm/pgtable.h>
 #include <asm/thread_info.h>
+#define EMITS_PT_NOTE
+#define RO_EXCEPTION_TABLE_ALIGN 16
+
 #include <asm-generic/vmlinux.lds.h>
 OUTPUT_FORMAT("elf64-ia64-little")
@@ -13,7 +16,7 @@ ENTRY(phys_start)
 jiffies = jiffies_64;
 PHDRS {
-code PT_LOAD;
+text PT_LOAD;
 percpu PT_LOAD;
 data PT_LOAD;
 note PT_NOTE;
@@ -36,7 +39,7 @@ SECTIONS {
 phys_start = _start - LOAD_OFFSET;
 code : {
-} :code
+} :text
 . = KERNEL_START;
 _text = .;
@@ -68,11 +71,6 @@ SECTIONS {
 /*
 * Read-only data
 */
-NOTES :code :note /* put .notes in text and mark in PT_NOTE */
-code_continues : {
-} : code /* switch back to regular program... */
-EXCEPTION_TABLE(16)
 /* MCA table */
 . = ALIGN(16);
@@ -102,11 +100,11 @@ SECTIONS {
 __start_unwind = .;
 *(.IA_64.unwind*)
 __end_unwind = .;
-} :code :unwind
+} :text :unwind
 code_continues2 : {
-} : code
+} :text
-RODATA
+RO_DATA(4096)
 .opd : AT(ADDR(.opd) - LOAD_OFFSET) {
 __start_opd = .;
@@ -214,7 +212,7 @@ SECTIONS {
 _end = .;
 code : {
-} :code
+} :text
 STABS_DEBUG
 DWARF_DEBUG


@@ -60,8 +60,8 @@ SECTIONS {
 #endif
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
-RW_DATA_SECTION(16, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
+RW_DATA(16, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 EXCEPTION_TABLE(16)


@@ -31,9 +31,9 @@ SECTIONS
 _sdata = .; /* Start of data section */
-RODATA
-RW_DATA_SECTION(16, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(4096)
+RW_DATA(16, PAGE_SIZE, THREAD_SIZE)
 BSS_SECTION(0, 0, 0)


@@ -24,13 +24,13 @@ SECTIONS
 *(.fixup)
 *(.gnu.warning)
 } :text = 0x4e75
-RODATA
+RO_DATA(4096)
 _etext = .; /* End of text section */
 EXCEPTION_TABLE(16) :data
 _sdata = .; /* Start of rw data section */
-RW_DATA_SECTION(16, PAGE_SIZE, THREAD_SIZE) :data
+RW_DATA(16, PAGE_SIZE, THREAD_SIZE) :data
 /* End of data goes *here* so that freeing init code works properly. */
 _edata = .;
 NOTES


@@ -11,6 +11,8 @@
 OUTPUT_ARCH(microblaze)
 ENTRY(microblaze_start)
+#define RO_EXCEPTION_TABLE_ALIGN 16
+
 #include <asm/page.h>
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/thread_info.h>
@@ -51,9 +53,7 @@ SECTIONS {
 }
 . = ALIGN(16);
-RODATA
-EXCEPTION_TABLE(16)
-NOTES
+RO_DATA(4096)
 /*
 * sdata2 section can go anywhere, but must be word aligned
@@ -70,7 +70,7 @@ SECTIONS {
 }
 _sdata = . ;
-RW_DATA_SECTION(32, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(32, PAGE_SIZE, THREAD_SIZE)
 _edata = . ;
 /* Under the microblaze ABI, .sdata and .sbss must be contiguous */


@@ -10,6 +10,11 @@
 */
 #define BSS_FIRST_SECTIONS *(.bss..swapper_pg_dir)
+/* Cavium Octeon should not have a separate PT_NOTE Program Header. */
+#ifndef CONFIG_CAVIUM_OCTEON_SOC
+#define EMITS_PT_NOTE
+#endif
+
 #include <asm-generic/vmlinux.lds.h>
 #undef mips
@@ -76,16 +81,8 @@ SECTIONS
 __stop___dbe_table = .;
 }
-#ifdef CONFIG_CAVIUM_OCTEON_SOC
-#define NOTES_HEADER
-#else /* CONFIG_CAVIUM_OCTEON_SOC */
-#define NOTES_HEADER :note
-#endif /* CONFIG_CAVIUM_OCTEON_SOC */
-NOTES :text NOTES_HEADER
-.dummy : { *(.dummy) } :text
 _sdata = .; /* Start of data section */
-RODATA
+RO_DATA(4096)
 /* writeable */
 .data : { /* Data */


@@ -53,12 +53,11 @@ SECTIONS
 _etext = .; /* End of text and rodata section */
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 EXCEPTION_TABLE(16)
-NOTES
 BSS_SECTION(4, 4, 4)
 _end = .;


@@ -49,8 +49,8 @@ SECTIONS
 __init_end = .;
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 BSS_SECTION(0, 0, 0)
@@ -58,7 +58,6 @@ SECTIONS
 STABS_DEBUG
 DWARF_DEBUG
-NOTES
 DISCARDS
 }


@@ -67,19 +67,18 @@ SECTIONS
 _sdata = .;
-/* Page alignment required for RO_DATA_SECTION */
-RO_DATA_SECTION(PAGE_SIZE)
+/* Page alignment required for RO_DATA */
+RO_DATA(PAGE_SIZE)
 _e_kernel_ro = .;
 /* Whatever comes after _e_kernel_ro had better be page-aligend, too */
 /* 32 here is cacheline size... recheck this */
-RW_DATA_SECTION(32, PAGE_SIZE, PAGE_SIZE)
+RW_DATA(32, PAGE_SIZE, PAGE_SIZE)
 _edata = .;
 EXCEPTION_TABLE(4)
-NOTES
 /* Init code and data */
 . = ALIGN(PAGE_SIZE);


@@ -19,6 +19,7 @@
 *(.data..vm0.pte)
 #define CC_USING_PATCHABLE_FUNCTION_ENTRY
+#define RO_EXCEPTION_TABLE_ALIGN 8
 #include <asm-generic/vmlinux.lds.h>
@@ -109,7 +110,7 @@ SECTIONS
 _sdata = .;
 /* Architecturally we need to keep __gp below 0x1000000 and thus
- * in front of RO_DATA_SECTION() which stores lots of tracepoint
+ * in front of RO_DATA() which stores lots of tracepoint
 * and ftrace symbols. */
 #ifdef CONFIG_64BIT
 . = ALIGN(16);
@@ -127,11 +128,7 @@ SECTIONS
 }
 #endif
-RO_DATA_SECTION(8)
-/* RO because of BUILDTIME_EXTABLE_SORT */
-EXCEPTION_TABLE(8)
-NOTES
+RO_DATA(8)
 /* unwind info */
 .PARISC.unwind : {
@@ -149,7 +146,7 @@ SECTIONS
 data_start = .;
 /* Data */
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, PAGE_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, PAGE_SIZE)
 /* PA-RISC locks requires 16-byte alignment */
 . = ALIGN(16);


@@ -6,6 +6,8 @@
 #endif
 #define BSS_FIRST_SECTIONS *(.bss.prominit)
+#define EMITS_PT_NOTE
+#define RO_EXCEPTION_TABLE_ALIGN 0
+
 #include <asm/page.h>
 #include <asm-generic/vmlinux.lds.h>
@@ -18,22 +20,8 @@
 ENTRY(_stext)
 PHDRS {
-kernel PT_LOAD FLAGS(7); /* RWX */
-notes PT_NOTE FLAGS(0);
-dummy PT_NOTE FLAGS(0);
-/* binutils < 2.18 has a bug that makes it misbehave when taking an
-ELF file with all segments at load address 0 as input. This
-happens when running "strip" on vmlinux, because of the AT() magic
-in this linker script. People using GCC >= 4.2 won't run into
-this problem, because the "build-id" support will put some data
-into the "notes" segment (at a non-zero load address).
-To work around this, we force some data into both the "dummy"
-segment and the kernel segment, so the dummy segment will get a
-non-zero load address. It's not enough to always create the
-"notes" segment, since if nothing gets assigned to it, its load
-address will be zero. */
+text PT_LOAD FLAGS(7); /* RWX */
+note PT_NOTE FLAGS(0);
 }
 #ifdef CONFIG_PPC64
@@ -77,7 +65,7 @@ SECTIONS
 #else /* !CONFIG_PPC64 */
 HEAD_TEXT
 #endif
-} :kernel
+} :text
 __head_end = .;
@@ -126,7 +114,7 @@ SECTIONS
 __got2_end = .;
 #endif /* CONFIG_PPC32 */
-} :kernel
+} :text
 . = ALIGN(ETEXT_ALIGN_SIZE);
 _etext = .;
@@ -175,17 +163,6 @@ SECTIONS
 __stop__btb_flush_fixup = .;
 }
 #endif
-EXCEPTION_TABLE(0)
-NOTES :kernel :notes
-/* The dummy segment contents for the bug workaround mentioned above
-near PHDRS. */
-.dummy : AT(ADDR(.dummy) - LOAD_OFFSET) {
-LONG(0)
-LONG(0)
-LONG(0)
-} :kernel :dummy
 /*
 * Init sections discarded at runtime
@@ -200,7 +177,7 @@ SECTIONS
 #ifdef CONFIG_PPC64
 *(.tramp.ftrace.init);
 #endif
-} :kernel
+} :text
 /* .exit.text is discarded at runtime, not link time,
 * to deal with references from __bug_table


@@ -52,12 +52,12 @@ SECTIONS
 /* Start of data section */
 _sdata = .;
-RO_DATA_SECTION(L1_CACHE_BYTES)
+RO_DATA(L1_CACHE_BYTES)
 .srodata : {
 *(.srodata*)
 }
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 .sdata : {
 __global_pointer$ = . + 0x800;
 *(.sdata*)
@@ -69,7 +69,6 @@ SECTIONS
 BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
 EXCEPTION_TABLE(0x10)
-NOTES
 .rel.dyn : {
 *(.rel.dyn*)


@@ -15,6 +15,8 @@
 /* Handle ro_after_init data on our own. */
 #define RO_AFTER_INIT_DATA
+#define EMITS_PT_NOTE
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/vmlinux.lds.h>
@@ -50,11 +52,7 @@ SECTIONS
 _etext = .; /* End of text section */
 } :text = 0x0700
-NOTES :text :note
-.dummy : { *(.dummy) } :data
-RO_DATA_SECTION(PAGE_SIZE)
+RO_DATA(PAGE_SIZE)
 . = ALIGN(PAGE_SIZE);
 _sdata = .; /* Start of data section */
@@ -64,12 +62,12 @@ SECTIONS
 .data..ro_after_init : {
 *(.data..ro_after_init)
 JUMP_TABLE_DATA
-}
+} :data
 EXCEPTION_TABLE(16)
 . = ALIGN(PAGE_SIZE);
 __end_ro_after_init = .;
-RW_DATA_SECTION(0x100, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(0x100, PAGE_SIZE, THREAD_SIZE)
 BOOT_DATA_PRESERVED
 _edata = .; /* End of data section */


@@ -48,11 +48,10 @@ SECTIONS
 } = 0x0009
 EXCEPTION_TABLE(16)
-NOTES
 _sdata = .;
 RO_DATA(PAGE_SIZE)
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 DWARF_EH_FRAME


@@ -67,7 +67,7 @@ SECTIONS
 .data1 : {
 *(.data1)
 }
-RW_DATA_SECTION(SMP_CACHE_BYTES, 0, THREAD_SIZE)
+RW_DATA(SMP_CACHE_BYTES, 0, THREAD_SIZE)
 /* End of data section */
 _edata = .;
@@ -78,7 +78,6 @@ SECTIONS
 __stop___fixup = .;
 }
 EXCEPTION_TABLE(16)
-NOTES
 . = ALIGN(PAGE_SIZE);
 __init_begin = ALIGN(PAGE_SIZE);


@@ -9,14 +9,13 @@
 _sdata = .;
 PROVIDE (sdata = .);
-RODATA
+RO_DATA(4096)
 .unprotected : { *(.unprotected) }
 . = ALIGN(4096);
 PROVIDE (_unprotected_end = .);
 . = ALIGN(4096);
-NOTES
 EXCEPTION_TABLE(0)
 BUG_TABLE


@@ -43,12 +43,11 @@ SECTIONS
 _etext = .;
 _sdata = .;
-RO_DATA_SECTION(PAGE_SIZE)
-RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+RO_DATA(PAGE_SIZE)
+RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
 _edata = .;
 EXCEPTION_TABLE(L1_CACHE_BYTES)
-NOTES
 BSS_SECTION(0, 0, 0)
 _end = .;


@@ -67,6 +67,7 @@ clean-files += cpustr.h
 KBUILD_CFLAGS := $(REALMODE_CFLAGS) -D_SETUP
 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
+KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
 GCOV_PROFILE := n
 UBSAN_SANITIZE := n


@@ -38,6 +38,7 @@ KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
 KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
 KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
 KBUILD_CFLAGS += -Wno-pointer-sign
+KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
 GCOV_PROFILE := n


@@ -24,7 +24,7 @@
 */
 .text
-ENTRY(efi_call_phys)
+SYM_FUNC_START(efi_call_phys)
 /*
 * 0. The function can only be called in Linux kernel. So CS has been
 * set to 0x0010, DS and SS have been set to 0x0018. In EFI, I found
@@ -77,7 +77,7 @@ ENTRY(efi_call_phys)
 movl saved_return_addr(%edx), %ecx
 pushl %ecx
 ret
-ENDPROC(efi_call_phys)
+SYM_FUNC_END(efi_call_phys)
 .previous
 .data


@@ -23,7 +23,7 @@
 .code64
 .text
-ENTRY(efi64_thunk)
+SYM_FUNC_START(efi64_thunk)
 push %rbp
 push %rbx
@@ -97,14 +97,14 @@ ENTRY(efi64_thunk)
 pop %rbx
 pop %rbp
 ret
-ENDPROC(efi64_thunk)
+SYM_FUNC_END(efi64_thunk)
-ENTRY(efi_exit32)
+SYM_FUNC_START_LOCAL(efi_exit32)
 movq func_rt_ptr(%rip), %rax
 push %rax
 mov %rdi, %rax
 ret
-ENDPROC(efi_exit32)
+SYM_FUNC_END(efi_exit32)
 .code32
 /*
@@ -112,7 +112,7 @@ ENDPROC(efi_exit32)
 *
 * The stack should represent the 32-bit calling convention.
 */
-ENTRY(efi_enter32)
+SYM_FUNC_START_LOCAL(efi_enter32)
 movl $__KERNEL_DS, %eax
 movl %eax, %ds
 movl %eax, %es
@@ -172,20 +172,23 @@ ENTRY(efi_enter32)
 btsl $X86_CR0_PG_BIT, %eax
 movl %eax, %cr0
 lret
-ENDPROC(efi_enter32)
+SYM_FUNC_END(efi_enter32)
 .data
 .balign 8
-.global efi32_boot_gdt
-efi32_boot_gdt: .word 0
+SYM_DATA_START(efi32_boot_gdt)
+ .word 0
  .quad 0
+SYM_DATA_END(efi32_boot_gdt)
-save_gdt: .word 0
- .quad 0
-func_rt_ptr: .quad 0
-.global efi_gdt64
-efi_gdt64:
+SYM_DATA_START_LOCAL(save_gdt)
+ .word 0
+ .quad 0
+SYM_DATA_END(save_gdt)
+SYM_DATA_LOCAL(func_rt_ptr, .quad 0)
+SYM_DATA_START(efi_gdt64)
 .word efi_gdt64_end - efi_gdt64
 .long 0 /* Filled out by user */
 .word 0
@@ -194,4 +197,4 @@ efi_gdt64:
 .quad 0x00cf92000000ffff /* __KERNEL_DS */
 .quad 0x0080890000000000 /* TS descriptor */
 .quad 0x0000000000000000 /* TS continued */
-efi_gdt64_end:
+SYM_DATA_END_LABEL(efi_gdt64, SYM_L_LOCAL, efi_gdt64_end)


@@ -61,7 +61,7 @@
 .hidden _egot
 __HEAD
-ENTRY(startup_32)
+SYM_FUNC_START(startup_32)
 cld
 /*
 * Test KEEP_SEGMENTS flag to see if the bootloader is asking
@@ -142,14 +142,14 @@ ENTRY(startup_32)
 */
 leal .Lrelocated(%ebx), %eax
 jmp *%eax
-ENDPROC(startup_32)
+SYM_FUNC_END(startup_32)
 #ifdef CONFIG_EFI_STUB
 /*
 * We don't need the return address, so set up the stack so efi_main() can find
 * its arguments.
 */
-ENTRY(efi_pe_entry)
+SYM_FUNC_START(efi_pe_entry)
 add $0x4, %esp
 call 1f
@@ -174,9 +174,9 @@ ENTRY(efi_pe_entry)
 pushl %eax
 pushl %ecx
 jmp 2f /* Skip efi_config initialization */
-ENDPROC(efi_pe_entry)
+SYM_FUNC_END(efi_pe_entry)
-ENTRY(efi32_stub_entry)
+SYM_FUNC_START(efi32_stub_entry)
 add $0x4, %esp
 popl %ecx
 popl %edx
@@ -205,11 +205,11 @@ fail:
 movl BP_code32_start(%esi), %eax
 leal startup_32(%eax), %eax
 jmp *%eax
-ENDPROC(efi32_stub_entry)
+SYM_FUNC_END(efi32_stub_entry)
 #endif
 .text
-.Lrelocated:
+SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 /*
 * Clear BSS (stack is currently empty)
@@ -260,6 +260,7 @@ ENDPROC(efi32_stub_entry)
 */
 xorl %ebx, %ebx
 jmp *%eax
+SYM_FUNC_END(.Lrelocated)
 #ifdef CONFIG_EFI_STUB
 .data


@@ -45,7 +45,7 @@
 __HEAD
 .code32
-ENTRY(startup_32)
+SYM_FUNC_START(startup_32)
 /*
 * 32bit entry is 0 and it is ABI so immutable!
 * If we come here directly from a bootloader,
@@ -222,11 +222,11 @@ ENTRY(startup_32)
 /* Jump from 32bit compatibility mode into 64bit mode. */
 lret
-ENDPROC(startup_32)
+SYM_FUNC_END(startup_32)
 #ifdef CONFIG_EFI_MIXED
 .org 0x190
-ENTRY(efi32_stub_entry)
+SYM_FUNC_START(efi32_stub_entry)
 add $0x4, %esp /* Discard return address */
 popl %ecx
 popl %edx
@@ -245,12 +245,12 @@ ENTRY(efi32_stub_entry)
 movl %eax, efi_config(%ebp)
 jmp startup_32
-ENDPROC(efi32_stub_entry)
+SYM_FUNC_END(efi32_stub_entry)
 #endif
 .code64
 .org 0x200
-ENTRY(startup_64)
+SYM_CODE_START(startup_64)
 /*
 * 64bit entry is 0x200 and it is ABI so immutable!
 * We come here either from startup_32 or directly from a
@@ -442,11 +442,12 @@ trampoline_return:
 */
 leaq .Lrelocated(%rbx), %rax
 jmp *%rax
+SYM_CODE_END(startup_64)
 #ifdef CONFIG_EFI_STUB
 /* The entry point for the PE/COFF executable is efi_pe_entry. */
-ENTRY(efi_pe_entry)
+SYM_FUNC_START(efi_pe_entry)
 movq %rcx, efi64_config(%rip) /* Handle */
 movq %rdx, efi64_config+8(%rip) /* EFI System table pointer */
@@ -495,10 +496,10 @@ fail:
 movl BP_code32_start(%esi), %eax
 leaq startup_64(%rax), %rax
 jmp *%rax
-ENDPROC(efi_pe_entry)
+SYM_FUNC_END(efi_pe_entry)
 .org 0x390
-ENTRY(efi64_stub_entry)
+SYM_FUNC_START(efi64_stub_entry)
 movq %rdi, efi64_config(%rip) /* Handle */
 movq %rsi, efi64_config+8(%rip) /* EFI System table pointer */
@@ -507,11 +508,11 @@ ENTRY(efi64_stub_entry)
 movq %rdx, %rsi
 jmp handover_entry
-ENDPROC(efi64_stub_entry)
+SYM_FUNC_END(efi64_stub_entry)
 #endif
 .text
-.Lrelocated:
+SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 /*
 * Clear BSS (stack is currently empty)
@@ -540,6 +541,7 @@ ENDPROC(efi64_stub_entry)
 * Jump to the decompressed kernel.
 */
 jmp *%rax
+SYM_FUNC_END(.Lrelocated)
 /*
 * Adjust the global offset table
@@ -570,7 +572,7 @@ ENDPROC(efi64_stub_entry)
 * ECX contains the base address of the trampoline memory.
 * Non zero RDX means trampoline needs to enable 5-level paging.
 */
-ENTRY(trampoline_32bit_src)
+SYM_CODE_START(trampoline_32bit_src)
 /* Set up data and stack segments */
 movl $__KERNEL_DS, %eax
 movl %eax, %ds
@@ -633,11 +635,13 @@ ENTRY(trampoline_32bit_src)
 movl %eax, %cr0
 lret
+SYM_CODE_END(trampoline_32bit_src)
 .code64
-.Lpaging_enabled:
+SYM_FUNC_START_LOCAL_NOALIGN(.Lpaging_enabled)
 /* Return from the trampoline */
 jmp *%rdi
+SYM_FUNC_END(.Lpaging_enabled)
 /*
 * The trampoline code has a size limit.
@@ -647,20 +651,22 @@ ENTRY(trampoline_32bit_src)
 .org trampoline_32bit_src + TRAMPOLINE_32BIT_CODE_SIZE
 .code32
-.Lno_longmode:
+SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmode)
 /* This isn't an x86-64 CPU, so hang intentionally, we cannot continue */
 1:
 hlt
 jmp 1b
+SYM_FUNC_END(.Lno_longmode)
 #include "../../kernel/verify_cpu.S"
 .data
-gdt64:
+SYM_DATA_START_LOCAL(gdt64)
 .word gdt_end - gdt
 .quad 0
+SYM_DATA_END(gdt64)
 .balign 8
-gdt:
+SYM_DATA_START_LOCAL(gdt)
 .word gdt_end - gdt
 .long gdt
 .word 0
@@ -669,25 +675,24 @@ gdt:
 .quad 0x00cf92000000ffff /* __KERNEL_DS */
 .quad 0x0080890000000000 /* TS descriptor */
 .quad 0x0000000000000000 /* TS continued */
-gdt_end:
+SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)
 #ifdef CONFIG_EFI_STUB
-efi_config:
-.quad 0
+SYM_DATA_LOCAL(efi_config, .quad 0)
 #ifdef CONFIG_EFI_MIXED
-.global efi32_config
-efi32_config:
+SYM_DATA_START(efi32_config)
 .fill 5,8,0
 .quad efi64_thunk
 .byte 0
+SYM_DATA_END(efi32_config)
 #endif
-.global efi64_config
-efi64_config:
+SYM_DATA_START(efi64_config)
 .fill 5,8,0
 .quad efi_call
 .byte 1
+SYM_DATA_END(efi64_config)
 #endif /* CONFIG_EFI_STUB */
 /*
@@ -695,23 +700,21 @@ efi64_config:
 */
 .bss
 .balign 4
-boot_heap:
-.fill BOOT_HEAP_SIZE, 1, 0
+SYM_DATA_LOCAL(boot_heap, .fill BOOT_HEAP_SIZE, 1, 0)
-boot_stack:
+SYM_DATA_START_LOCAL(boot_stack)
 .fill BOOT_STACK_SIZE, 1, 0
-boot_stack_end:
+SYM_DATA_END_LABEL(boot_stack, SYM_L_LOCAL, boot_stack_end)
 /*
 * Space for page tables (not in .bss so not zeroed)
 */
 .section ".pgtable","a",@nobits
 .balign 4096
-pgtable:
-.fill BOOT_PGT_SIZE, 1, 0
+SYM_DATA_LOCAL(pgtable, .fill BOOT_PGT_SIZE, 1, 0)
 /*
 * The page table is going to be used instead of page table in the trampoline
 * memory.
 */
-top_pgtable:
-.fill PAGE_SIZE, 1, 0
+SYM_DATA_LOCAL(top_pgtable, .fill PAGE_SIZE, 1, 0)


@@ -15,7 +15,7 @@
 .text
 .code32
-ENTRY(get_sev_encryption_bit)
+SYM_FUNC_START(get_sev_encryption_bit)
 xor %eax, %eax
 #ifdef CONFIG_AMD_MEM_ENCRYPT
@@ -65,10 +65,10 @@ ENTRY(get_sev_encryption_bit)
 #endif /* CONFIG_AMD_MEM_ENCRYPT */
 ret
-ENDPROC(get_sev_encryption_bit)
+SYM_FUNC_END(get_sev_encryption_bit)
 .code64
-ENTRY(set_sev_encryption_mask)
+SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 push %rbp
 push %rdx
@@ -90,12 +90,11 @@ ENTRY(set_sev_encryption_mask)
 xor %rax, %rax
 ret
-ENDPROC(set_sev_encryption_mask)
+SYM_FUNC_END(set_sev_encryption_mask)
 .data
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 .balign 8
-GLOBAL(sme_me_mask)
-.quad 0
+SYM_DATA(sme_me_mask, .quad 0)
 #endif


@@ -15,7 +15,7 @@
 .code16
 .text
-GLOBAL(memcpy)
+SYM_FUNC_START_NOALIGN(memcpy)
 pushw %si
 pushw %di
 movw %ax, %di
@@ -29,9 +29,9 @@ GLOBAL(memcpy)
 popw %di
 popw %si
 retl
-ENDPROC(memcpy)
+SYM_FUNC_END(memcpy)
-GLOBAL(memset)
+SYM_FUNC_START_NOALIGN(memset)
 pushw %di
 movw %ax, %di
 movzbl %dl, %eax
@@ -44,22 +44,22 @@ GLOBAL(memset)
 rep; stosb
 popw %di
 retl
-ENDPROC(memset)
+SYM_FUNC_END(memset)
-GLOBAL(copy_from_fs)
+SYM_FUNC_START_NOALIGN(copy_from_fs)
 pushw %ds
 pushw %fs
 popw %ds
 calll memcpy
 popw %ds
 retl
-ENDPROC(copy_from_fs)
+SYM_FUNC_END(copy_from_fs)
-GLOBAL(copy_to_fs)
+SYM_FUNC_START_NOALIGN(copy_to_fs)
 pushw %es
 pushw %fs
 popw %es
 calll memcpy
 popw %es
 retl
-ENDPROC(copy_to_fs)
+SYM_FUNC_END(copy_to_fs)


@@ -21,7 +21,7 @@
 /*
 * void protected_mode_jump(u32 entrypoint, u32 bootparams);
 */
-GLOBAL(protected_mode_jump)
+SYM_FUNC_START_NOALIGN(protected_mode_jump)
 movl %edx, %esi # Pointer to boot_params table
 xorl %ebx, %ebx
@@ -40,13 +40,13 @@ GLOBAL(protected_mode_jump)
 # Transition to 32-bit mode
 .byte 0x66, 0xea # ljmpl opcode
-2: .long in_pm32 # offset
+2: .long .Lin_pm32 # offset
 .word __BOOT_CS # segment
-ENDPROC(protected_mode_jump)
+SYM_FUNC_END(protected_mode_jump)
 .code32
 .section ".text32","ax"
-GLOBAL(in_pm32)
+SYM_FUNC_START_LOCAL_NOALIGN(.Lin_pm32)
 # Set up data segments for flat 32-bit mode
 movl %ecx, %ds
 movl %ecx, %es
@@ -72,4 +72,4 @@ GLOBAL(in_pm32)
 lldt %cx
 jmpl *%eax # Jump to the 32-bit entrypoint
-ENDPROC(in_pm32)
+SYM_FUNC_END(.Lin_pm32)


@@ -71,7 +71,7 @@
 * %r8
 * %r9
 */
-__load_partial:
+SYM_FUNC_START_LOCAL(__load_partial)
 xor %r9d, %r9d
 pxor MSG, MSG
@@ -123,7 +123,7 @@ __load_partial:
 .Lld_partial_8:
 ret
-ENDPROC(__load_partial)
+SYM_FUNC_END(__load_partial)
 /*
 * __store_partial: internal ABI
@@ -137,7 +137,7 @@ ENDPROC(__load_partial)
 * %r9
 * %r10
 */
-__store_partial:
+SYM_FUNC_START_LOCAL(__store_partial)
 mov LEN, %r8
 mov DST, %r9
@@ -181,12 +181,12 @@ __store_partial:
 .Lst_partial_1:
 ret
-ENDPROC(__store_partial)
+SYM_FUNC_END(__store_partial)
 /*
 * void crypto_aegis128_aesni_init(void *state, const void *key, const void *iv);
 */
-ENTRY(crypto_aegis128_aesni_init)
+SYM_FUNC_START(crypto_aegis128_aesni_init)
 FRAME_BEGIN
 /* load IV: */
@@ -226,13 +226,13 @@ ENTRY(crypto_aegis128_aesni_init)
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_init)
+SYM_FUNC_END(crypto_aegis128_aesni_init)
 /*
 * void crypto_aegis128_aesni_ad(void *state, unsigned int length,
 * const void *data);
 */
-ENTRY(crypto_aegis128_aesni_ad)
+SYM_FUNC_START(crypto_aegis128_aesni_ad)
 FRAME_BEGIN
 cmp $0x10, LEN
@@ -378,7 +378,7 @@ ENTRY(crypto_aegis128_aesni_ad)
 .Lad_out:
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_ad)
+SYM_FUNC_END(crypto_aegis128_aesni_ad)
 .macro encrypt_block a s0 s1 s2 s3 s4 i
 movdq\a (\i * 0x10)(SRC), MSG
@@ -402,7 +402,7 @@ ENDPROC(crypto_aegis128_aesni_ad)
 * void crypto_aegis128_aesni_enc(void *state, unsigned int length,
 * const void *src, void *dst);
 */
-ENTRY(crypto_aegis128_aesni_enc)
+SYM_FUNC_START(crypto_aegis128_aesni_enc)
 FRAME_BEGIN
 cmp $0x10, LEN
@@ -493,13 +493,13 @@ ENTRY(crypto_aegis128_aesni_enc)
 .Lenc_out:
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_enc)
+SYM_FUNC_END(crypto_aegis128_aesni_enc)
 /*
 * void crypto_aegis128_aesni_enc_tail(void *state, unsigned int length,
 * const void *src, void *dst);
 */
-ENTRY(crypto_aegis128_aesni_enc_tail)
+SYM_FUNC_START(crypto_aegis128_aesni_enc_tail)
 FRAME_BEGIN
 /* load the state: */
@@ -533,7 +533,7 @@ ENTRY(crypto_aegis128_aesni_enc_tail)
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_enc_tail)
+SYM_FUNC_END(crypto_aegis128_aesni_enc_tail)
 .macro decrypt_block a s0 s1 s2 s3 s4 i
 movdq\a (\i * 0x10)(SRC), MSG
@@ -556,7 +556,7 @@ ENDPROC(crypto_aegis128_aesni_enc_tail)
 * void crypto_aegis128_aesni_dec(void *state, unsigned int length,
 * const void *src, void *dst);
 */
-ENTRY(crypto_aegis128_aesni_dec)
+SYM_FUNC_START(crypto_aegis128_aesni_dec)
 FRAME_BEGIN
 cmp $0x10, LEN
@@ -647,13 +647,13 @@ ENTRY(crypto_aegis128_aesni_dec)
 .Ldec_out:
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_dec)
+SYM_FUNC_END(crypto_aegis128_aesni_dec)
 /*
 * void crypto_aegis128_aesni_dec_tail(void *state, unsigned int length,
 * const void *src, void *dst);
 */
-ENTRY(crypto_aegis128_aesni_dec_tail)
+SYM_FUNC_START(crypto_aegis128_aesni_dec_tail)
 FRAME_BEGIN
 /* load the state: */
@@ -697,13 +697,13 @@ ENTRY(crypto_aegis128_aesni_dec_tail)
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_dec_tail)
+SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
 /*
 * void crypto_aegis128_aesni_final(void *state, void *tag_xor,
 * u64 assoclen, u64 cryptlen);
 */
-ENTRY(crypto_aegis128_aesni_final)
+SYM_FUNC_START(crypto_aegis128_aesni_final)
 FRAME_BEGIN
 /* load the state: */
@@ -744,4 +744,4 @@ ENTRY(crypto_aegis128_aesni_final)
 FRAME_END
 ret
-ENDPROC(crypto_aegis128_aesni_final)
+SYM_FUNC_END(crypto_aegis128_aesni_final)


@@ -544,11 +544,11 @@ ddq_add_8:
 * aes_ctr_enc_128_avx_by8(void *in, void *iv, void *keys, void *out,
 * unsigned int num_bytes)
 */
-ENTRY(aes_ctr_enc_128_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_128_avx_by8)
 /* call the aes main loop */
 do_aes_ctrmain KEY_128
-ENDPROC(aes_ctr_enc_128_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_128_avx_by8)
 /*
 * routine to do AES192 CTR enc/decrypt "by8"
@@ -557,11 +557,11 @@ ENDPROC(aes_ctr_enc_128_avx_by8)
 * aes_ctr_enc_192_avx_by8(void *in, void *iv, void *keys, void *out,
 * unsigned int num_bytes)
 */
-ENTRY(aes_ctr_enc_192_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_192_avx_by8)
 /* call the aes main loop */
 do_aes_ctrmain KEY_192
-ENDPROC(aes_ctr_enc_192_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_192_avx_by8)
 /*
 * routine to do AES256 CTR enc/decrypt "by8"
@@ -570,8 +570,8 @@ ENDPROC(aes_ctr_enc_192_avx_by8)
 * aes_ctr_enc_256_avx_by8(void *in, void *iv, void *keys, void *out,
 * unsigned int num_bytes)
 */
-ENTRY(aes_ctr_enc_256_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_256_avx_by8)
 /* call the aes main loop */
 do_aes_ctrmain KEY_256
-ENDPROC(aes_ctr_enc_256_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_256_avx_by8)


@ -1592,7 +1592,7 @@ _esb_loop_\@:
* poly = x^128 + x^127 + x^126 + x^121 + 1 * poly = x^128 + x^127 + x^126 + x^121 + 1
* *
*****************************************************************************/ *****************************************************************************/
ENTRY(aesni_gcm_dec) SYM_FUNC_START(aesni_gcm_dec)
FUNC_SAVE FUNC_SAVE
GCM_INIT %arg6, arg7, arg8, arg9 GCM_INIT %arg6, arg7, arg8, arg9
@ -1600,7 +1600,7 @@ ENTRY(aesni_gcm_dec)
GCM_COMPLETE arg10, arg11 GCM_COMPLETE arg10, arg11
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_dec) SYM_FUNC_END(aesni_gcm_dec)
/***************************************************************************** /*****************************************************************************
@ -1680,7 +1680,7 @@ ENDPROC(aesni_gcm_dec)
* *
* poly = x^128 + x^127 + x^126 + x^121 + 1 * poly = x^128 + x^127 + x^126 + x^121 + 1
***************************************************************************/ ***************************************************************************/
ENTRY(aesni_gcm_enc) SYM_FUNC_START(aesni_gcm_enc)
FUNC_SAVE FUNC_SAVE
GCM_INIT %arg6, arg7, arg8, arg9 GCM_INIT %arg6, arg7, arg8, arg9
@ -1689,7 +1689,7 @@ ENTRY(aesni_gcm_enc)
GCM_COMPLETE arg10, arg11 GCM_COMPLETE arg10, arg11
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_enc) SYM_FUNC_END(aesni_gcm_enc)
/***************************************************************************** /*****************************************************************************
* void aesni_gcm_init(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary. * void aesni_gcm_init(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@ -1702,12 +1702,12 @@ ENDPROC(aesni_gcm_enc)
* const u8 *aad, // Additional Authentication Data (AAD) * const u8 *aad, // Additional Authentication Data (AAD)
* u64 aad_len) // Length of AAD in bytes. * u64 aad_len) // Length of AAD in bytes.
*/ */
ENTRY(aesni_gcm_init) SYM_FUNC_START(aesni_gcm_init)
FUNC_SAVE FUNC_SAVE
GCM_INIT %arg3, %arg4,%arg5, %arg6 GCM_INIT %arg3, %arg4,%arg5, %arg6
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_init) SYM_FUNC_END(aesni_gcm_init)
/***************************************************************************** /*****************************************************************************
* void aesni_gcm_enc_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary. * void aesni_gcm_enc_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@ -1717,12 +1717,12 @@ ENDPROC(aesni_gcm_init)
* const u8 *in, // Plaintext input * const u8 *in, // Plaintext input
* u64 plaintext_len, // Length of data in bytes for encryption. * u64 plaintext_len, // Length of data in bytes for encryption.
*/ */
ENTRY(aesni_gcm_enc_update) SYM_FUNC_START(aesni_gcm_enc_update)
FUNC_SAVE FUNC_SAVE
GCM_ENC_DEC enc GCM_ENC_DEC enc
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_enc_update) SYM_FUNC_END(aesni_gcm_enc_update)
/***************************************************************************** /*****************************************************************************
* void aesni_gcm_dec_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary. * void aesni_gcm_dec_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@ -1732,12 +1732,12 @@ ENDPROC(aesni_gcm_enc_update)
* const u8 *in, // Plaintext input * const u8 *in, // Plaintext input
* u64 plaintext_len, // Length of data in bytes for encryption. * u64 plaintext_len, // Length of data in bytes for encryption.
*/ */
ENTRY(aesni_gcm_dec_update) SYM_FUNC_START(aesni_gcm_dec_update)
FUNC_SAVE FUNC_SAVE
GCM_ENC_DEC dec GCM_ENC_DEC dec
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_dec_update) SYM_FUNC_END(aesni_gcm_dec_update)
/***************************************************************************** /*****************************************************************************
* void aesni_gcm_finalize(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary. * void aesni_gcm_finalize(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@ -1747,19 +1747,18 @@ ENDPROC(aesni_gcm_dec_update)
* u64 auth_tag_len); // Authenticated Tag Length in bytes. Valid values are 16 (most likely), * u64 auth_tag_len); // Authenticated Tag Length in bytes. Valid values are 16 (most likely),
* // 12 or 8. * // 12 or 8.
*/ */
ENTRY(aesni_gcm_finalize) SYM_FUNC_START(aesni_gcm_finalize)
FUNC_SAVE FUNC_SAVE
GCM_COMPLETE %arg3 %arg4 GCM_COMPLETE %arg3 %arg4
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_finalize) SYM_FUNC_END(aesni_gcm_finalize)
#endif #endif
-	.align 4
-_key_expansion_128:
-_key_expansion_256a:
+SYM_FUNC_START_LOCAL_ALIAS(_key_expansion_128)
+SYM_FUNC_START_LOCAL(_key_expansion_256a)
pshufd $0b11111111, %xmm1, %xmm1 pshufd $0b11111111, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4 shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0 pxor %xmm4, %xmm0
@@ -1769,11 +1768,10 @@ _key_expansion_256a:
movaps %xmm0, (TKEYP) movaps %xmm0, (TKEYP)
add $0x10, TKEYP add $0x10, TKEYP
ret ret
-ENDPROC(_key_expansion_128)
-ENDPROC(_key_expansion_256a)
+SYM_FUNC_END(_key_expansion_256a)
+SYM_FUNC_END_ALIAS(_key_expansion_128)
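_key_expansion_128 and _key_expansion_256a share one body under two names; the old code expressed that by stacking two bare labels, and the new *_ALIAS variants keep the arrangement while giving both symbols a proper type and size. The alias is opened first and closed last. The general shape, with placeholder names rather than the real ones:

#include <linux/linkage.h>

SYM_FUNC_START_LOCAL_ALIAS(second_name)	/* opened first ... */
SYM_FUNC_START_LOCAL(primary_name)
	ret				/* one body, reachable under both names */
SYM_FUNC_END(primary_name)
SYM_FUNC_END_ALIAS(second_name)		/* ... closed last, so both cover the body */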
-	.align 4
-_key_expansion_192a:
+SYM_FUNC_START_LOCAL(_key_expansion_192a)
pshufd $0b01010101, %xmm1, %xmm1 pshufd $0b01010101, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4 shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0 pxor %xmm4, %xmm0
@@ -1795,10 +1793,9 @@ _key_expansion_192a:
movaps %xmm1, 0x10(TKEYP) movaps %xmm1, 0x10(TKEYP)
add $0x20, TKEYP add $0x20, TKEYP
ret ret
ENDPROC(_key_expansion_192a) SYM_FUNC_END(_key_expansion_192a)
-	.align 4
-_key_expansion_192b:
+SYM_FUNC_START_LOCAL(_key_expansion_192b)
pshufd $0b01010101, %xmm1, %xmm1 pshufd $0b01010101, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4 shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0 pxor %xmm4, %xmm0
@@ -1815,10 +1812,9 @@ _key_expansion_192b:
movaps %xmm0, (TKEYP) movaps %xmm0, (TKEYP)
add $0x10, TKEYP add $0x10, TKEYP
ret ret
ENDPROC(_key_expansion_192b) SYM_FUNC_END(_key_expansion_192b)
-	.align 4
-_key_expansion_256b:
+SYM_FUNC_START_LOCAL(_key_expansion_256b)
pshufd $0b10101010, %xmm1, %xmm1 pshufd $0b10101010, %xmm1, %xmm1
shufps $0b00010000, %xmm2, %xmm4 shufps $0b00010000, %xmm2, %xmm4
pxor %xmm4, %xmm2 pxor %xmm4, %xmm2
@@ -1828,13 +1824,13 @@ _key_expansion_256b:
movaps %xmm2, (TKEYP) movaps %xmm2, (TKEYP)
add $0x10, TKEYP add $0x10, TKEYP
ret ret
ENDPROC(_key_expansion_256b) SYM_FUNC_END(_key_expansion_256b)
/* /*
* int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, * int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
* unsigned int key_len) * unsigned int key_len)
*/ */
ENTRY(aesni_set_key) SYM_FUNC_START(aesni_set_key)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl KEYP pushl KEYP
@@ -1943,12 +1939,12 @@ ENTRY(aesni_set_key)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_set_key) SYM_FUNC_END(aesni_set_key)
/* /*
* void aesni_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) * void aesni_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
*/ */
ENTRY(aesni_enc) SYM_FUNC_START(aesni_enc)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl KEYP pushl KEYP
@@ -1967,7 +1963,7 @@ ENTRY(aesni_enc)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_enc) SYM_FUNC_END(aesni_enc)
/* /*
* _aesni_enc1: internal ABI * _aesni_enc1: internal ABI
@@ -1981,8 +1977,7 @@ ENDPROC(aesni_enc)
* KEY * KEY
* TKEYP (T1) * TKEYP (T1)
*/ */
-	.align 4
-_aesni_enc1:
+SYM_FUNC_START_LOCAL(_aesni_enc1)
movaps (KEYP), KEY # key movaps (KEYP), KEY # key
mov KEYP, TKEYP mov KEYP, TKEYP
pxor KEY, STATE # round 0 pxor KEY, STATE # round 0
@@ -2025,7 +2020,7 @@ _aesni_enc1:
movaps 0x70(TKEYP), KEY movaps 0x70(TKEYP), KEY
AESENCLAST KEY STATE AESENCLAST KEY STATE
ret ret
ENDPROC(_aesni_enc1) SYM_FUNC_END(_aesni_enc1)
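_aesni_enc1 is an internal-ABI helper that is only called from within this file. The old ".align 4" plus a bare label yielded a symbol with no ELF type or size; SYM_FUNC_START_LOCAL() provides the alignment and the annotations while keeping the symbol file-local. A sketch with hypothetical names:

#include <linux/linkage.h>

SYM_FUNC_START_LOCAL(_helper)		/* replaces ".align 4" plus a bare "_helper:" */
	ret
SYM_FUNC_END(_helper)			/* new: the local symbol now has .type/.size too */

SYM_FUNC_START(public_op)
	call	_helper			/* same-file callers need no change */
	ret
SYM_FUNC_END(public_op)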
/* /*
* _aesni_enc4: internal ABI * _aesni_enc4: internal ABI
@@ -2045,8 +2040,7 @@ ENDPROC(_aesni_enc1)
* KEY * KEY
* TKEYP (T1) * TKEYP (T1)
*/ */
-	.align 4
-_aesni_enc4:
+SYM_FUNC_START_LOCAL(_aesni_enc4)
movaps (KEYP), KEY # key movaps (KEYP), KEY # key
mov KEYP, TKEYP mov KEYP, TKEYP
pxor KEY, STATE1 # round 0 pxor KEY, STATE1 # round 0
@@ -2134,12 +2128,12 @@ _aesni_enc4:
AESENCLAST KEY STATE3 AESENCLAST KEY STATE3
AESENCLAST KEY STATE4 AESENCLAST KEY STATE4
ret ret
ENDPROC(_aesni_enc4) SYM_FUNC_END(_aesni_enc4)
/* /*
* void aesni_dec (struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) * void aesni_dec (struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
*/ */
ENTRY(aesni_dec) SYM_FUNC_START(aesni_dec)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl KEYP pushl KEYP
@@ -2159,7 +2153,7 @@ ENTRY(aesni_dec)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_dec) SYM_FUNC_END(aesni_dec)
/* /*
* _aesni_dec1: internal ABI * _aesni_dec1: internal ABI
@@ -2173,8 +2167,7 @@ ENDPROC(aesni_dec)
* KEY * KEY
* TKEYP (T1) * TKEYP (T1)
*/ */
-	.align 4
-_aesni_dec1:
+SYM_FUNC_START_LOCAL(_aesni_dec1)
movaps (KEYP), KEY # key movaps (KEYP), KEY # key
mov KEYP, TKEYP mov KEYP, TKEYP
pxor KEY, STATE # round 0 pxor KEY, STATE # round 0
@@ -2217,7 +2210,7 @@ _aesni_dec1:
movaps 0x70(TKEYP), KEY movaps 0x70(TKEYP), KEY
AESDECLAST KEY STATE AESDECLAST KEY STATE
ret ret
ENDPROC(_aesni_dec1) SYM_FUNC_END(_aesni_dec1)
/* /*
* _aesni_dec4: internal ABI * _aesni_dec4: internal ABI
@@ -2237,8 +2230,7 @@ ENDPROC(_aesni_dec1)
* KEY * KEY
* TKEYP (T1) * TKEYP (T1)
*/ */
-	.align 4
-_aesni_dec4:
+SYM_FUNC_START_LOCAL(_aesni_dec4)
movaps (KEYP), KEY # key movaps (KEYP), KEY # key
mov KEYP, TKEYP mov KEYP, TKEYP
pxor KEY, STATE1 # round 0 pxor KEY, STATE1 # round 0
@@ -2326,13 +2318,13 @@ _aesni_dec4:
AESDECLAST KEY STATE3 AESDECLAST KEY STATE3
AESDECLAST KEY STATE4 AESDECLAST KEY STATE4
ret ret
ENDPROC(_aesni_dec4) SYM_FUNC_END(_aesni_dec4)
/* /*
* void aesni_ecb_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_ecb_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len) * size_t len)
*/ */
ENTRY(aesni_ecb_enc) SYM_FUNC_START(aesni_ecb_enc)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl LEN pushl LEN
@@ -2386,13 +2378,13 @@ ENTRY(aesni_ecb_enc)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_ecb_enc) SYM_FUNC_END(aesni_ecb_enc)
/* /*
* void aesni_ecb_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_ecb_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len); * size_t len);
*/ */
ENTRY(aesni_ecb_dec) SYM_FUNC_START(aesni_ecb_dec)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl LEN pushl LEN
@@ -2447,13 +2439,13 @@ ENTRY(aesni_ecb_dec)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_ecb_dec) SYM_FUNC_END(aesni_ecb_dec)
/* /*
* void aesni_cbc_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_cbc_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv) * size_t len, u8 *iv)
*/ */
ENTRY(aesni_cbc_enc) SYM_FUNC_START(aesni_cbc_enc)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl IVP pushl IVP
@@ -2491,13 +2483,13 @@ ENTRY(aesni_cbc_enc)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_cbc_enc) SYM_FUNC_END(aesni_cbc_enc)
/* /*
* void aesni_cbc_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_cbc_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv) * size_t len, u8 *iv)
*/ */
ENTRY(aesni_cbc_dec) SYM_FUNC_START(aesni_cbc_dec)
FRAME_BEGIN FRAME_BEGIN
#ifndef __x86_64__ #ifndef __x86_64__
pushl IVP pushl IVP
@@ -2584,7 +2576,7 @@ ENTRY(aesni_cbc_dec)
#endif #endif
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_cbc_dec) SYM_FUNC_END(aesni_cbc_dec)
#ifdef __x86_64__ #ifdef __x86_64__
.pushsection .rodata .pushsection .rodata
@@ -2604,8 +2596,7 @@ ENDPROC(aesni_cbc_dec)
* INC: == 1, in little endian * INC: == 1, in little endian
* BSWAP_MASK == endian swapping mask * BSWAP_MASK == endian swapping mask
*/ */
-	.align 4
-_aesni_inc_init:
+SYM_FUNC_START_LOCAL(_aesni_inc_init)
movaps .Lbswap_mask, BSWAP_MASK movaps .Lbswap_mask, BSWAP_MASK
movaps IV, CTR movaps IV, CTR
PSHUFB_XMM BSWAP_MASK CTR PSHUFB_XMM BSWAP_MASK CTR
@@ -2613,7 +2604,7 @@ _aesni_inc_init:
MOVQ_R64_XMM TCTR_LOW INC MOVQ_R64_XMM TCTR_LOW INC
MOVQ_R64_XMM CTR TCTR_LOW MOVQ_R64_XMM CTR TCTR_LOW
ret ret
ENDPROC(_aesni_inc_init) SYM_FUNC_END(_aesni_inc_init)
/* /*
* _aesni_inc: internal ABI * _aesni_inc: internal ABI
@@ -2630,8 +2621,7 @@ ENDPROC(_aesni_inc_init)
* CTR: == output IV, in little endian * CTR: == output IV, in little endian
* TCTR_LOW: == lower qword of CTR * TCTR_LOW: == lower qword of CTR
*/ */
-	.align 4
-_aesni_inc:
+SYM_FUNC_START_LOCAL(_aesni_inc)
paddq INC, CTR paddq INC, CTR
add $1, TCTR_LOW add $1, TCTR_LOW
jnc .Linc_low jnc .Linc_low
@@ -2642,13 +2632,13 @@ _aesni_inc:
movaps CTR, IV movaps CTR, IV
PSHUFB_XMM BSWAP_MASK IV PSHUFB_XMM BSWAP_MASK IV
ret ret
ENDPROC(_aesni_inc) SYM_FUNC_END(_aesni_inc)
/* /*
* void aesni_ctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_ctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv) * size_t len, u8 *iv)
*/ */
ENTRY(aesni_ctr_enc) SYM_FUNC_START(aesni_ctr_enc)
FRAME_BEGIN FRAME_BEGIN
cmp $16, LEN cmp $16, LEN
jb .Lctr_enc_just_ret jb .Lctr_enc_just_ret
@@ -2705,7 +2695,7 @@ ENTRY(aesni_ctr_enc)
.Lctr_enc_just_ret: .Lctr_enc_just_ret:
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_ctr_enc) SYM_FUNC_END(aesni_ctr_enc)
/* /*
* _aesni_gf128mul_x_ble: internal ABI * _aesni_gf128mul_x_ble: internal ABI
@@ -2729,7 +2719,7 @@ ENDPROC(aesni_ctr_enc)
* void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, * void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* bool enc, u8 *iv) * bool enc, u8 *iv)
*/ */
ENTRY(aesni_xts_crypt8) SYM_FUNC_START(aesni_xts_crypt8)
FRAME_BEGIN FRAME_BEGIN
cmpb $0, %cl cmpb $0, %cl
movl $0, %ecx movl $0, %ecx
@@ -2833,6 +2823,6 @@ ENTRY(aesni_xts_crypt8)
FRAME_END FRAME_END
ret ret
ENDPROC(aesni_xts_crypt8) SYM_FUNC_END(aesni_xts_crypt8)
#endif #endif

@@ -1775,12 +1775,12 @@ _initial_blocks_done\@:
# const u8 *aad, /* Additional Authentication Data (AAD)*/ # const u8 *aad, /* Additional Authentication Data (AAD)*/
# u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ # u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */
############################################################# #############################################################
ENTRY(aesni_gcm_init_avx_gen2) SYM_FUNC_START(aesni_gcm_init_avx_gen2)
FUNC_SAVE FUNC_SAVE
INIT GHASH_MUL_AVX, PRECOMPUTE_AVX INIT GHASH_MUL_AVX, PRECOMPUTE_AVX
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_init_avx_gen2) SYM_FUNC_END(aesni_gcm_init_avx_gen2)
############################################################################### ###############################################################################
#void aesni_gcm_enc_update_avx_gen2( #void aesni_gcm_enc_update_avx_gen2(
@@ -1790,7 +1790,7 @@ ENDPROC(aesni_gcm_init_avx_gen2)
# const u8 *in, /* Plaintext input */ # const u8 *in, /* Plaintext input */
# u64 plaintext_len) /* Length of data in Bytes for encryption. */ # u64 plaintext_len) /* Length of data in Bytes for encryption. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_enc_update_avx_gen2) SYM_FUNC_START(aesni_gcm_enc_update_avx_gen2)
FUNC_SAVE FUNC_SAVE
mov keysize, %eax mov keysize, %eax
cmp $32, %eax cmp $32, %eax
@@ -1809,7 +1809,7 @@ key_256_enc_update:
GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 13 GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 13
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_enc_update_avx_gen2) SYM_FUNC_END(aesni_gcm_enc_update_avx_gen2)
############################################################################### ###############################################################################
#void aesni_gcm_dec_update_avx_gen2( #void aesni_gcm_dec_update_avx_gen2(
@@ -1819,7 +1819,7 @@ ENDPROC(aesni_gcm_enc_update_avx_gen2)
# const u8 *in, /* Ciphertext input */ # const u8 *in, /* Ciphertext input */
# u64 plaintext_len) /* Length of data in Bytes for encryption. */ # u64 plaintext_len) /* Length of data in Bytes for encryption. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_dec_update_avx_gen2) SYM_FUNC_START(aesni_gcm_dec_update_avx_gen2)
FUNC_SAVE FUNC_SAVE
mov keysize,%eax mov keysize,%eax
cmp $32, %eax cmp $32, %eax
@@ -1838,7 +1838,7 @@ key_256_dec_update:
GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 13 GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 13
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_dec_update_avx_gen2) SYM_FUNC_END(aesni_gcm_dec_update_avx_gen2)
############################################################################### ###############################################################################
#void aesni_gcm_finalize_avx_gen2( #void aesni_gcm_finalize_avx_gen2(
@@ -1848,7 +1848,7 @@ ENDPROC(aesni_gcm_dec_update_avx_gen2)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes. # u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */ # Valid values are 16 (most likely), 12 or 8. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_finalize_avx_gen2) SYM_FUNC_START(aesni_gcm_finalize_avx_gen2)
FUNC_SAVE FUNC_SAVE
mov keysize,%eax mov keysize,%eax
cmp $32, %eax cmp $32, %eax
@@ -1867,7 +1867,7 @@ key_256_finalize:
GCM_COMPLETE GHASH_MUL_AVX, 13, arg3, arg4 GCM_COMPLETE GHASH_MUL_AVX, 13, arg3, arg4
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_finalize_avx_gen2) SYM_FUNC_END(aesni_gcm_finalize_avx_gen2)
#endif /* CONFIG_AS_AVX */ #endif /* CONFIG_AS_AVX */
@@ -2746,12 +2746,12 @@ _initial_blocks_done\@:
# const u8 *aad, /* Additional Authentication Data (AAD)*/ # const u8 *aad, /* Additional Authentication Data (AAD)*/
# u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ # u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */
############################################################# #############################################################
ENTRY(aesni_gcm_init_avx_gen4) SYM_FUNC_START(aesni_gcm_init_avx_gen4)
FUNC_SAVE FUNC_SAVE
INIT GHASH_MUL_AVX2, PRECOMPUTE_AVX2 INIT GHASH_MUL_AVX2, PRECOMPUTE_AVX2
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_init_avx_gen4) SYM_FUNC_END(aesni_gcm_init_avx_gen4)
############################################################################### ###############################################################################
#void aesni_gcm_enc_avx_gen4( #void aesni_gcm_enc_avx_gen4(
@@ -2761,7 +2761,7 @@ ENDPROC(aesni_gcm_init_avx_gen4)
# const u8 *in, /* Plaintext input */ # const u8 *in, /* Plaintext input */
# u64 plaintext_len) /* Length of data in Bytes for encryption. */ # u64 plaintext_len) /* Length of data in Bytes for encryption. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_enc_update_avx_gen4) SYM_FUNC_START(aesni_gcm_enc_update_avx_gen4)
FUNC_SAVE FUNC_SAVE
mov keysize,%eax mov keysize,%eax
cmp $32, %eax cmp $32, %eax
@@ -2780,7 +2780,7 @@ key_256_enc_update4:
GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 13 GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 13
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_enc_update_avx_gen4) SYM_FUNC_END(aesni_gcm_enc_update_avx_gen4)
############################################################################### ###############################################################################
#void aesni_gcm_dec_update_avx_gen4( #void aesni_gcm_dec_update_avx_gen4(
@@ -2790,7 +2790,7 @@ ENDPROC(aesni_gcm_enc_update_avx_gen4)
# const u8 *in, /* Ciphertext input */ # const u8 *in, /* Ciphertext input */
# u64 plaintext_len) /* Length of data in Bytes for encryption. */ # u64 plaintext_len) /* Length of data in Bytes for encryption. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_dec_update_avx_gen4) SYM_FUNC_START(aesni_gcm_dec_update_avx_gen4)
FUNC_SAVE FUNC_SAVE
mov keysize,%eax mov keysize,%eax
cmp $32, %eax cmp $32, %eax
@@ -2809,7 +2809,7 @@ key_256_dec_update4:
GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 13 GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 13
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_dec_update_avx_gen4) SYM_FUNC_END(aesni_gcm_dec_update_avx_gen4)
############################################################################### ###############################################################################
#void aesni_gcm_finalize_avx_gen4( #void aesni_gcm_finalize_avx_gen4(
@@ -2819,7 +2819,7 @@ ENDPROC(aesni_gcm_dec_update_avx_gen4)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes. # u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */ # Valid values are 16 (most likely), 12 or 8. */
############################################################################### ###############################################################################
ENTRY(aesni_gcm_finalize_avx_gen4) SYM_FUNC_START(aesni_gcm_finalize_avx_gen4)
FUNC_SAVE FUNC_SAVE
mov keysize,%eax mov keysize,%eax
cmp $32, %eax cmp $32, %eax
@@ -2838,6 +2838,6 @@ key_256_finalize4:
GCM_COMPLETE GHASH_MUL_AVX2, 13, arg3, arg4 GCM_COMPLETE GHASH_MUL_AVX2, 13, arg3, arg4
FUNC_RESTORE FUNC_RESTORE
ret ret
ENDPROC(aesni_gcm_finalize_avx_gen4) SYM_FUNC_END(aesni_gcm_finalize_avx_gen4)
#endif /* CONFIG_AS_AVX2 */ #endif /* CONFIG_AS_AVX2 */
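Worth noting is what the conversion leaves alone: branch targets such as the key_256_* labels above, and the assembler-local .L labels used throughout these files, are not functions and get no SYM_* annotation. A rough sketch of the distinction (hypothetical names):

#include <linux/linkage.h>

SYM_FUNC_START(dispatch_by_keysize)
	cmpl	$32, %eax
	je	big_key			/* plain label: a branch target, not a function */
	jmp	.Ldone			/* .L label: assembler-local, never a symbol */
big_key:
.Ldone:
	ret
SYM_FUNC_END(dispatch_by_keysize)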

@@ -47,7 +47,7 @@ SIGMA2:
.text .text
#ifdef CONFIG_AS_SSSE3 #ifdef CONFIG_AS_SSSE3
ENTRY(blake2s_compress_ssse3) SYM_FUNC_START(blake2s_compress_ssse3)
testq %rdx,%rdx testq %rdx,%rdx
je .Lendofloop je .Lendofloop
movdqu (%rdi),%xmm0 movdqu (%rdi),%xmm0
@@ -173,11 +173,11 @@ ENTRY(blake2s_compress_ssse3)
movdqu %xmm14,0x20(%rdi) movdqu %xmm14,0x20(%rdi)
.Lendofloop: .Lendofloop:
ret ret
ENDPROC(blake2s_compress_ssse3) SYM_FUNC_END(blake2s_compress_ssse3)
#endif /* CONFIG_AS_SSSE3 */ #endif /* CONFIG_AS_SSSE3 */
#ifdef CONFIG_AS_AVX512 #ifdef CONFIG_AS_AVX512
ENTRY(blake2s_compress_avx512) SYM_FUNC_START(blake2s_compress_avx512)
vmovdqu (%rdi),%xmm0 vmovdqu (%rdi),%xmm0
vmovdqu 0x10(%rdi),%xmm1 vmovdqu 0x10(%rdi),%xmm1
vmovdqu 0x20(%rdi),%xmm4 vmovdqu 0x20(%rdi),%xmm4
@@ -254,5 +254,5 @@ ENTRY(blake2s_compress_avx512)
vmovdqu %xmm4,0x20(%rdi) vmovdqu %xmm4,0x20(%rdi)
vzeroupper vzeroupper
retq retq
ENDPROC(blake2s_compress_avx512) SYM_FUNC_END(blake2s_compress_avx512)
#endif /* CONFIG_AS_AVX512 */ #endif /* CONFIG_AS_AVX512 */

@@ -103,7 +103,7 @@
bswapq RX0; \ bswapq RX0; \
xorq RX0, (RIO); xorq RX0, (RIO);
ENTRY(__blowfish_enc_blk) SYM_FUNC_START(__blowfish_enc_blk)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -139,9 +139,9 @@ ENTRY(__blowfish_enc_blk)
.L__enc_xor: .L__enc_xor:
xor_block(); xor_block();
ret; ret;
ENDPROC(__blowfish_enc_blk) SYM_FUNC_END(__blowfish_enc_blk)
ENTRY(blowfish_dec_blk) SYM_FUNC_START(blowfish_dec_blk)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -171,7 +171,7 @@ ENTRY(blowfish_dec_blk)
movq %r11, %r12; movq %r11, %r12;
ret; ret;
ENDPROC(blowfish_dec_blk) SYM_FUNC_END(blowfish_dec_blk)
/********************************************************************** /**********************************************************************
4-way blowfish, four blocks parallel 4-way blowfish, four blocks parallel
@@ -283,7 +283,7 @@ ENDPROC(blowfish_dec_blk)
bswapq RX3; \ bswapq RX3; \
xorq RX3, 24(RIO); xorq RX3, 24(RIO);
ENTRY(__blowfish_enc_blk_4way) SYM_FUNC_START(__blowfish_enc_blk_4way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -330,9 +330,9 @@ ENTRY(__blowfish_enc_blk_4way)
popq %rbx; popq %rbx;
popq %r12; popq %r12;
ret; ret;
ENDPROC(__blowfish_enc_blk_4way) SYM_FUNC_END(__blowfish_enc_blk_4way)
ENTRY(blowfish_dec_blk_4way) SYM_FUNC_START(blowfish_dec_blk_4way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -365,4 +365,4 @@ ENTRY(blowfish_dec_blk_4way)
popq %r12; popq %r12;
ret; ret;
ENDPROC(blowfish_dec_blk_4way) SYM_FUNC_END(blowfish_dec_blk_4way)

@@ -189,20 +189,20 @@
* larger and would only be 0.5% faster (on sandy-bridge). * larger and would only be 0.5% faster (on sandy-bridge).
*/ */
.align 8 .align 8
roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd: SYM_FUNC_START_LOCAL(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
roundsm16(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, roundsm16(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
%xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15,
%rcx, (%r9)); %rcx, (%r9));
ret; ret;
ENDPROC(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd) SYM_FUNC_END(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
.align 8 .align 8
roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab: SYM_FUNC_START_LOCAL(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
roundsm16(%xmm4, %xmm5, %xmm6, %xmm7, %xmm0, %xmm1, %xmm2, %xmm3, roundsm16(%xmm4, %xmm5, %xmm6, %xmm7, %xmm0, %xmm1, %xmm2, %xmm3,
%xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11,
%rax, (%r9)); %rax, (%r9));
ret; ret;
ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
/* /*
* IN/OUT: * IN/OUT:
@@ -722,7 +722,7 @@ ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
.text .text
.align 8 .align 8
__camellia_enc_blk16: SYM_FUNC_START_LOCAL(__camellia_enc_blk16)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rax: temporary storage, 256 bytes * %rax: temporary storage, 256 bytes
@@ -806,10 +806,10 @@ __camellia_enc_blk16:
%xmm15, %rax, %rcx, 24); %xmm15, %rax, %rcx, 24);
jmp .Lenc_done; jmp .Lenc_done;
ENDPROC(__camellia_enc_blk16) SYM_FUNC_END(__camellia_enc_blk16)
.align 8 .align 8
__camellia_dec_blk16: SYM_FUNC_START_LOCAL(__camellia_dec_blk16)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rax: temporary storage, 256 bytes * %rax: temporary storage, 256 bytes
@@ -891,9 +891,9 @@ __camellia_dec_blk16:
((key_table + (24) * 8) + 4)(CTX)); ((key_table + (24) * 8) + 4)(CTX));
jmp .Ldec_max24; jmp .Ldec_max24;
ENDPROC(__camellia_dec_blk16) SYM_FUNC_END(__camellia_dec_blk16)
ENTRY(camellia_ecb_enc_16way) SYM_FUNC_START(camellia_ecb_enc_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -916,9 +916,9 @@ ENTRY(camellia_ecb_enc_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ecb_enc_16way) SYM_FUNC_END(camellia_ecb_enc_16way)
ENTRY(camellia_ecb_dec_16way) SYM_FUNC_START(camellia_ecb_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -946,9 +946,9 @@ ENTRY(camellia_ecb_dec_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ecb_dec_16way) SYM_FUNC_END(camellia_ecb_dec_16way)
ENTRY(camellia_cbc_dec_16way) SYM_FUNC_START(camellia_cbc_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -997,7 +997,7 @@ ENTRY(camellia_cbc_dec_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_cbc_dec_16way) SYM_FUNC_END(camellia_cbc_dec_16way)
#define inc_le128(x, minus_one, tmp) \ #define inc_le128(x, minus_one, tmp) \
vpcmpeqq minus_one, x, tmp; \ vpcmpeqq minus_one, x, tmp; \
@@ -1005,7 +1005,7 @@ ENDPROC(camellia_cbc_dec_16way)
vpslldq $8, tmp, tmp; \ vpslldq $8, tmp, tmp; \
vpsubq tmp, x, x; vpsubq tmp, x, x;
ENTRY(camellia_ctr_16way) SYM_FUNC_START(camellia_ctr_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -1110,7 +1110,7 @@ ENTRY(camellia_ctr_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ctr_16way) SYM_FUNC_END(camellia_ctr_16way)
#define gf128mul_x_ble(iv, mask, tmp) \ #define gf128mul_x_ble(iv, mask, tmp) \
vpsrad $31, iv, tmp; \ vpsrad $31, iv, tmp; \
@@ -1120,7 +1120,7 @@ ENDPROC(camellia_ctr_16way)
vpxor tmp, iv, iv; vpxor tmp, iv, iv;
.align 8 .align 8
camellia_xts_crypt_16way: SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -1254,9 +1254,9 @@ camellia_xts_crypt_16way:
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_xts_crypt_16way) SYM_FUNC_END(camellia_xts_crypt_16way)
ENTRY(camellia_xts_enc_16way) SYM_FUNC_START(camellia_xts_enc_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -1268,9 +1268,9 @@ ENTRY(camellia_xts_enc_16way)
leaq __camellia_enc_blk16, %r9; leaq __camellia_enc_blk16, %r9;
jmp camellia_xts_crypt_16way; jmp camellia_xts_crypt_16way;
ENDPROC(camellia_xts_enc_16way) SYM_FUNC_END(camellia_xts_enc_16way)
ENTRY(camellia_xts_dec_16way) SYM_FUNC_START(camellia_xts_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -1286,4 +1286,4 @@ ENTRY(camellia_xts_dec_16way)
leaq __camellia_dec_blk16, %r9; leaq __camellia_dec_blk16, %r9;
jmp camellia_xts_crypt_16way; jmp camellia_xts_crypt_16way;
ENDPROC(camellia_xts_dec_16way) SYM_FUNC_END(camellia_xts_dec_16way)
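camellia_xts_enc_16way hands the file-local __camellia_enc_blk16 around by address (leaq into %r9) before jumping to the shared XTS body, and that still works under SYM_FUNC_START_LOCAL(): local linkage only hides the symbol from other translation units, it does not forbid taking its address within the file. A reduced sketch of the same dispatch idea, with hypothetical names:

#include <linux/linkage.h>

SYM_FUNC_START_LOCAL(enc_core)			/* stand-in for __camellia_enc_blk16 */
	ret
SYM_FUNC_END(enc_core)

SYM_FUNC_START_LOCAL(xts_crypt_common)		/* stand-in for camellia_xts_crypt_16way */
	call	*%r9				/* real kernel code uses CALL_NOSPEC here */
	ret
SYM_FUNC_END(xts_crypt_common)

SYM_FUNC_START(xts_enc)
	leaq	enc_core, %r9			/* taking a local symbol's address is fine */
	jmp	xts_crypt_common
SYM_FUNC_END(xts_enc)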

@@ -223,20 +223,20 @@
* larger and would only be marginally faster. * larger and would only be marginally faster.
*/ */
.align 8 .align 8
roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd: SYM_FUNC_START_LOCAL(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
roundsm32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, roundsm32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7,
%ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15,
%rcx, (%r9)); %rcx, (%r9));
ret; ret;
ENDPROC(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd) SYM_FUNC_END(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
.align 8 .align 8
roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab: SYM_FUNC_START_LOCAL(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
roundsm32(%ymm4, %ymm5, %ymm6, %ymm7, %ymm0, %ymm1, %ymm2, %ymm3, roundsm32(%ymm4, %ymm5, %ymm6, %ymm7, %ymm0, %ymm1, %ymm2, %ymm3,
%ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11,
%rax, (%r9)); %rax, (%r9));
ret; ret;
ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
/* /*
* IN/OUT: * IN/OUT:
@@ -760,7 +760,7 @@ ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
.text .text
.align 8 .align 8
__camellia_enc_blk32: SYM_FUNC_START_LOCAL(__camellia_enc_blk32)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rax: temporary storage, 512 bytes * %rax: temporary storage, 512 bytes
@@ -844,10 +844,10 @@ __camellia_enc_blk32:
%ymm15, %rax, %rcx, 24); %ymm15, %rax, %rcx, 24);
jmp .Lenc_done; jmp .Lenc_done;
ENDPROC(__camellia_enc_blk32) SYM_FUNC_END(__camellia_enc_blk32)
.align 8 .align 8
__camellia_dec_blk32: SYM_FUNC_START_LOCAL(__camellia_dec_blk32)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rax: temporary storage, 512 bytes * %rax: temporary storage, 512 bytes
@@ -929,9 +929,9 @@ __camellia_dec_blk32:
((key_table + (24) * 8) + 4)(CTX)); ((key_table + (24) * 8) + 4)(CTX));
jmp .Ldec_max24; jmp .Ldec_max24;
ENDPROC(__camellia_dec_blk32) SYM_FUNC_END(__camellia_dec_blk32)
ENTRY(camellia_ecb_enc_32way) SYM_FUNC_START(camellia_ecb_enc_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -958,9 +958,9 @@ ENTRY(camellia_ecb_enc_32way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ecb_enc_32way) SYM_FUNC_END(camellia_ecb_enc_32way)
ENTRY(camellia_ecb_dec_32way) SYM_FUNC_START(camellia_ecb_dec_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -992,9 +992,9 @@ ENTRY(camellia_ecb_dec_32way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ecb_dec_32way) SYM_FUNC_END(camellia_ecb_dec_32way)
ENTRY(camellia_cbc_dec_32way) SYM_FUNC_START(camellia_cbc_dec_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -1060,7 +1060,7 @@ ENTRY(camellia_cbc_dec_32way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_cbc_dec_32way) SYM_FUNC_END(camellia_cbc_dec_32way)
#define inc_le128(x, minus_one, tmp) \ #define inc_le128(x, minus_one, tmp) \
vpcmpeqq minus_one, x, tmp; \ vpcmpeqq minus_one, x, tmp; \
@@ -1076,7 +1076,7 @@ ENDPROC(camellia_cbc_dec_32way)
vpslldq $8, tmp1, tmp1; \ vpslldq $8, tmp1, tmp1; \
vpsubq tmp1, x, x; vpsubq tmp1, x, x;
ENTRY(camellia_ctr_32way) SYM_FUNC_START(camellia_ctr_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -1200,7 +1200,7 @@ ENTRY(camellia_ctr_32way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_ctr_32way) SYM_FUNC_END(camellia_ctr_32way)
#define gf128mul_x_ble(iv, mask, tmp) \ #define gf128mul_x_ble(iv, mask, tmp) \
vpsrad $31, iv, tmp; \ vpsrad $31, iv, tmp; \
@@ -1222,7 +1222,7 @@ ENDPROC(camellia_ctr_32way)
vpxor tmp1, iv, iv; vpxor tmp1, iv, iv;
.align 8 .align 8
camellia_xts_crypt_32way: SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -1367,9 +1367,9 @@ camellia_xts_crypt_32way:
FRAME_END FRAME_END
ret; ret;
ENDPROC(camellia_xts_crypt_32way) SYM_FUNC_END(camellia_xts_crypt_32way)
ENTRY(camellia_xts_enc_32way) SYM_FUNC_START(camellia_xts_enc_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -1382,9 +1382,9 @@ ENTRY(camellia_xts_enc_32way)
leaq __camellia_enc_blk32, %r9; leaq __camellia_enc_blk32, %r9;
jmp camellia_xts_crypt_32way; jmp camellia_xts_crypt_32way;
ENDPROC(camellia_xts_enc_32way) SYM_FUNC_END(camellia_xts_enc_32way)
ENTRY(camellia_xts_dec_32way) SYM_FUNC_START(camellia_xts_dec_32way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (32 blocks) * %rsi: dst (32 blocks)
@@ -1400,4 +1400,4 @@ ENTRY(camellia_xts_dec_32way)
leaq __camellia_dec_blk32, %r9; leaq __camellia_dec_blk32, %r9;
jmp camellia_xts_crypt_32way; jmp camellia_xts_crypt_32way;
ENDPROC(camellia_xts_dec_32way) SYM_FUNC_END(camellia_xts_dec_32way)

@@ -175,7 +175,7 @@
bswapq RAB0; \ bswapq RAB0; \
movq RAB0, 4*2(RIO); movq RAB0, 4*2(RIO);
ENTRY(__camellia_enc_blk) SYM_FUNC_START(__camellia_enc_blk)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -220,9 +220,9 @@ ENTRY(__camellia_enc_blk)
movq RR12, %r12; movq RR12, %r12;
ret; ret;
ENDPROC(__camellia_enc_blk) SYM_FUNC_END(__camellia_enc_blk)
ENTRY(camellia_dec_blk) SYM_FUNC_START(camellia_dec_blk)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -258,7 +258,7 @@ ENTRY(camellia_dec_blk)
movq RR12, %r12; movq RR12, %r12;
ret; ret;
ENDPROC(camellia_dec_blk) SYM_FUNC_END(camellia_dec_blk)
/********************************************************************** /**********************************************************************
2-way camellia 2-way camellia
@@ -409,7 +409,7 @@ ENDPROC(camellia_dec_blk)
bswapq RAB1; \ bswapq RAB1; \
movq RAB1, 12*2(RIO); movq RAB1, 12*2(RIO);
ENTRY(__camellia_enc_blk_2way) SYM_FUNC_START(__camellia_enc_blk_2way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -456,9 +456,9 @@ ENTRY(__camellia_enc_blk_2way)
movq RR12, %r12; movq RR12, %r12;
popq %rbx; popq %rbx;
ret; ret;
ENDPROC(__camellia_enc_blk_2way) SYM_FUNC_END(__camellia_enc_blk_2way)
ENTRY(camellia_dec_blk_2way) SYM_FUNC_START(camellia_dec_blk_2way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -496,4 +496,4 @@ ENTRY(camellia_dec_blk_2way)
movq RR12, %r12; movq RR12, %r12;
movq RXOR, %rbx; movq RXOR, %rbx;
ret; ret;
ENDPROC(camellia_dec_blk_2way) SYM_FUNC_END(camellia_dec_blk_2way)

@@ -209,7 +209,7 @@
.text .text
.align 16 .align 16
__cast5_enc_blk16: SYM_FUNC_START_LOCAL(__cast5_enc_blk16)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* RL1: blocks 1 and 2 * RL1: blocks 1 and 2
@@ -280,10 +280,10 @@ __cast5_enc_blk16:
outunpack_blocks(RR4, RL4, RTMP, RX, RKM); outunpack_blocks(RR4, RL4, RTMP, RX, RKM);
ret; ret;
ENDPROC(__cast5_enc_blk16) SYM_FUNC_END(__cast5_enc_blk16)
.align 16 .align 16
__cast5_dec_blk16: SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* RL1: encrypted blocks 1 and 2 * RL1: encrypted blocks 1 and 2
@@ -357,9 +357,9 @@ __cast5_dec_blk16:
.L__skip_dec: .L__skip_dec:
vpsrldq $4, RKR, RKR; vpsrldq $4, RKR, RKR;
jmp .L__dec_tail; jmp .L__dec_tail;
ENDPROC(__cast5_dec_blk16) SYM_FUNC_END(__cast5_dec_blk16)
ENTRY(cast5_ecb_enc_16way) SYM_FUNC_START(cast5_ecb_enc_16way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -394,9 +394,9 @@ ENTRY(cast5_ecb_enc_16way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast5_ecb_enc_16way) SYM_FUNC_END(cast5_ecb_enc_16way)
ENTRY(cast5_ecb_dec_16way) SYM_FUNC_START(cast5_ecb_dec_16way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -432,9 +432,9 @@ ENTRY(cast5_ecb_dec_16way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast5_ecb_dec_16way) SYM_FUNC_END(cast5_ecb_dec_16way)
ENTRY(cast5_cbc_dec_16way) SYM_FUNC_START(cast5_cbc_dec_16way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -484,9 +484,9 @@ ENTRY(cast5_cbc_dec_16way)
popq %r12; popq %r12;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast5_cbc_dec_16way) SYM_FUNC_END(cast5_cbc_dec_16way)
ENTRY(cast5_ctr_16way) SYM_FUNC_START(cast5_ctr_16way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -560,4 +560,4 @@ ENTRY(cast5_ctr_16way)
popq %r12; popq %r12;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast5_ctr_16way) SYM_FUNC_END(cast5_ctr_16way)

@@ -247,7 +247,7 @@
.text .text
.align 8 .align 8
__cast6_enc_blk8: SYM_FUNC_START_LOCAL(__cast6_enc_blk8)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@@ -292,10 +292,10 @@ __cast6_enc_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
ret; ret;
ENDPROC(__cast6_enc_blk8) SYM_FUNC_END(__cast6_enc_blk8)
.align 8 .align 8
__cast6_dec_blk8: SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
@@ -339,9 +339,9 @@ __cast6_dec_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
ret; ret;
ENDPROC(__cast6_dec_blk8) SYM_FUNC_END(__cast6_dec_blk8)
ENTRY(cast6_ecb_enc_8way) SYM_FUNC_START(cast6_ecb_enc_8way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -362,9 +362,9 @@ ENTRY(cast6_ecb_enc_8way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_ecb_enc_8way) SYM_FUNC_END(cast6_ecb_enc_8way)
ENTRY(cast6_ecb_dec_8way) SYM_FUNC_START(cast6_ecb_dec_8way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -385,9 +385,9 @@ ENTRY(cast6_ecb_dec_8way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_ecb_dec_8way) SYM_FUNC_END(cast6_ecb_dec_8way)
ENTRY(cast6_cbc_dec_8way) SYM_FUNC_START(cast6_cbc_dec_8way)
/* input: /* input:
* %rdi: ctx * %rdi: ctx
* %rsi: dst * %rsi: dst
@@ -411,9 +411,9 @@ ENTRY(cast6_cbc_dec_8way)
popq %r12; popq %r12;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_cbc_dec_8way) SYM_FUNC_END(cast6_cbc_dec_8way)
ENTRY(cast6_ctr_8way) SYM_FUNC_START(cast6_ctr_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -439,9 +439,9 @@ ENTRY(cast6_ctr_8way)
popq %r12; popq %r12;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_ctr_8way) SYM_FUNC_END(cast6_ctr_8way)
ENTRY(cast6_xts_enc_8way) SYM_FUNC_START(cast6_xts_enc_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -466,9 +466,9 @@ ENTRY(cast6_xts_enc_8way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_xts_enc_8way) SYM_FUNC_END(cast6_xts_enc_8way)
ENTRY(cast6_xts_dec_8way) SYM_FUNC_START(cast6_xts_dec_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -493,4 +493,4 @@ ENTRY(cast6_xts_dec_8way)
popq %r15; popq %r15;
FRAME_END FRAME_END
ret; ret;
ENDPROC(cast6_xts_dec_8way) SYM_FUNC_END(cast6_xts_dec_8way)

@@ -34,7 +34,7 @@ CTR4BL:	.octa 0x00000000000000000000000000000002
.text .text
ENTRY(chacha_2block_xor_avx2) SYM_FUNC_START(chacha_2block_xor_avx2)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 2 data blocks output, o # %rsi: up to 2 data blocks output, o
# %rdx: up to 2 data blocks input, i # %rdx: up to 2 data blocks input, i
@@ -224,9 +224,9 @@ ENTRY(chacha_2block_xor_avx2)
lea -8(%r10),%rsp lea -8(%r10),%rsp
jmp .Ldone2 jmp .Ldone2
ENDPROC(chacha_2block_xor_avx2) SYM_FUNC_END(chacha_2block_xor_avx2)
ENTRY(chacha_4block_xor_avx2) SYM_FUNC_START(chacha_4block_xor_avx2)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 4 data blocks output, o # %rsi: up to 4 data blocks output, o
# %rdx: up to 4 data blocks input, i # %rdx: up to 4 data blocks input, i
@@ -529,9 +529,9 @@ ENTRY(chacha_4block_xor_avx2)
lea -8(%r10),%rsp lea -8(%r10),%rsp
jmp .Ldone4 jmp .Ldone4
ENDPROC(chacha_4block_xor_avx2) SYM_FUNC_END(chacha_4block_xor_avx2)
ENTRY(chacha_8block_xor_avx2) SYM_FUNC_START(chacha_8block_xor_avx2)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 8 data blocks output, o # %rsi: up to 8 data blocks output, o
# %rdx: up to 8 data blocks input, i # %rdx: up to 8 data blocks input, i
@@ -1018,4 +1018,4 @@ ENTRY(chacha_8block_xor_avx2)
jmp .Ldone8 jmp .Ldone8
ENDPROC(chacha_8block_xor_avx2) SYM_FUNC_END(chacha_8block_xor_avx2)

@@ -24,7 +24,7 @@ CTR8BL:	.octa 0x00000003000000020000000100000000
.text .text
ENTRY(chacha_2block_xor_avx512vl) SYM_FUNC_START(chacha_2block_xor_avx512vl)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 2 data blocks output, o # %rsi: up to 2 data blocks output, o
# %rdx: up to 2 data blocks input, i # %rdx: up to 2 data blocks input, i
@@ -187,9 +187,9 @@ ENTRY(chacha_2block_xor_avx512vl)
jmp .Ldone2 jmp .Ldone2
ENDPROC(chacha_2block_xor_avx512vl) SYM_FUNC_END(chacha_2block_xor_avx512vl)
ENTRY(chacha_4block_xor_avx512vl) SYM_FUNC_START(chacha_4block_xor_avx512vl)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 4 data blocks output, o # %rsi: up to 4 data blocks output, o
# %rdx: up to 4 data blocks input, i # %rdx: up to 4 data blocks input, i
@@ -453,9 +453,9 @@ ENTRY(chacha_4block_xor_avx512vl)
jmp .Ldone4 jmp .Ldone4
ENDPROC(chacha_4block_xor_avx512vl) SYM_FUNC_END(chacha_4block_xor_avx512vl)
ENTRY(chacha_8block_xor_avx512vl) SYM_FUNC_START(chacha_8block_xor_avx512vl)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 8 data blocks output, o # %rsi: up to 8 data blocks output, o
# %rdx: up to 8 data blocks input, i # %rdx: up to 8 data blocks input, i
@@ -833,4 +833,4 @@ ENTRY(chacha_8block_xor_avx512vl)
jmp .Ldone8 jmp .Ldone8
ENDPROC(chacha_8block_xor_avx512vl) SYM_FUNC_END(chacha_8block_xor_avx512vl)

@@ -33,7 +33,7 @@ CTRINC:	.octa 0x00000003000000020000000100000000
* *
* Clobbers: %r8d, %xmm4-%xmm7 * Clobbers: %r8d, %xmm4-%xmm7
*/ */
chacha_permute: SYM_FUNC_START_LOCAL(chacha_permute)
movdqa ROT8(%rip),%xmm4 movdqa ROT8(%rip),%xmm4
movdqa ROT16(%rip),%xmm5 movdqa ROT16(%rip),%xmm5
@@ -109,9 +109,9 @@ chacha_permute:
jnz .Ldoubleround jnz .Ldoubleround
ret ret
ENDPROC(chacha_permute) SYM_FUNC_END(chacha_permute)
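chacha_permute is a subroutine shared by the entry points below, and what SYM_FUNC_START_LOCAL()/SYM_FUNC_END() buy over the old bare label is mainly the ELF bookkeeping. From a reading of <linux/linkage.h>, the pair expands to approximately the following (simplified; the real macros route through SYM_START()/SYM_END() and take alignment and linkage parameters):

	.globl	my_permute			/* SYM_FUNC_START(my_permute): roughly this; */
	.p2align 4, 0x90			/* the _LOCAL variant simply skips the .globl */
my_permute:
	ret					/* function body */
	.type	my_permute, @function		/* SYM_FUNC_END(my_permute): roughly this, */
	.size	my_permute, . - my_permute	/* so debuggers and objtool see the extent */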
ENTRY(chacha_block_xor_ssse3) SYM_FUNC_START(chacha_block_xor_ssse3)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 1 data block output, o # %rsi: up to 1 data block output, o
# %rdx: up to 1 data block input, i # %rdx: up to 1 data block input, i
@@ -197,9 +197,9 @@ ENTRY(chacha_block_xor_ssse3)
lea -8(%r10),%rsp lea -8(%r10),%rsp
jmp .Ldone jmp .Ldone
ENDPROC(chacha_block_xor_ssse3) SYM_FUNC_END(chacha_block_xor_ssse3)
ENTRY(hchacha_block_ssse3) SYM_FUNC_START(hchacha_block_ssse3)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: output (8 32-bit words) # %rsi: output (8 32-bit words)
# %edx: nrounds # %edx: nrounds
@@ -218,9 +218,9 @@ ENTRY(hchacha_block_ssse3)
FRAME_END FRAME_END
ret ret
ENDPROC(hchacha_block_ssse3) SYM_FUNC_END(hchacha_block_ssse3)
ENTRY(chacha_4block_xor_ssse3) SYM_FUNC_START(chacha_4block_xor_ssse3)
# %rdi: Input state matrix, s # %rdi: Input state matrix, s
# %rsi: up to 4 data blocks output, o # %rsi: up to 4 data blocks output, o
# %rdx: up to 4 data blocks input, i # %rdx: up to 4 data blocks input, i
@@ -788,4 +788,4 @@ ENTRY(chacha_4block_xor_ssse3)
jmp .Ldone4 jmp .Ldone4
ENDPROC(chacha_4block_xor_ssse3) SYM_FUNC_END(chacha_4block_xor_ssse3)

@@ -103,7 +103,7 @@
* size_t len, uint crc32) * size_t len, uint crc32)
*/ */
ENTRY(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */ SYM_FUNC_START(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */
movdqa (BUF), %xmm1 movdqa (BUF), %xmm1
movdqa 0x10(BUF), %xmm2 movdqa 0x10(BUF), %xmm2
movdqa 0x20(BUF), %xmm3 movdqa 0x20(BUF), %xmm3
@@ -238,4 +238,4 @@ fold_64:
PEXTRD 0x01, %xmm1, %eax PEXTRD 0x01, %xmm1, %eax
ret ret
ENDPROC(crc32_pclmul_le_16) SYM_FUNC_END(crc32_pclmul_le_16)

@@ -74,7 +74,7 @@
# unsigned int crc_pcl(u8 *buffer, int len, unsigned int crc_init); # unsigned int crc_pcl(u8 *buffer, int len, unsigned int crc_init);
.text .text
ENTRY(crc_pcl) SYM_FUNC_START(crc_pcl)
#define bufp %rdi #define bufp %rdi
#define bufp_dw %edi #define bufp_dw %edi
#define bufp_w %di #define bufp_w %di
@@ -311,7 +311,7 @@ do_return:
popq %rdi popq %rdi
popq %rbx popq %rbx
ret ret
ENDPROC(crc_pcl) SYM_FUNC_END(crc_pcl)
.section .rodata, "a", @progbits .section .rodata, "a", @progbits
################################################################ ################################################################

@@ -95,7 +95,7 @@
# Assumes len >= 16. # Assumes len >= 16.
# #
.align 16 .align 16
ENTRY(crc_t10dif_pcl) SYM_FUNC_START(crc_t10dif_pcl)
movdqa .Lbswap_mask(%rip), BSWAP_MASK movdqa .Lbswap_mask(%rip), BSWAP_MASK
@@ -280,7 +280,7 @@ ENTRY(crc_t10dif_pcl)
jge .Lfold_16_bytes_loop # 32 <= len <= 255 jge .Lfold_16_bytes_loop # 32 <= len <= 255
add $16, len add $16, len
jmp .Lhandle_partial_segment # 17 <= len <= 31 jmp .Lhandle_partial_segment # 17 <= len <= 31
ENDPROC(crc_t10dif_pcl) SYM_FUNC_END(crc_t10dif_pcl)
.section .rodata, "a", @progbits .section .rodata, "a", @progbits
.align 16 .align 16

@@ -162,7 +162,7 @@
movl left##d, (io); \ movl left##d, (io); \
movl right##d, 4(io); movl right##d, 4(io);
ENTRY(des3_ede_x86_64_crypt_blk) SYM_FUNC_START(des3_ede_x86_64_crypt_blk)
/* input: /* input:
* %rdi: round keys, CTX * %rdi: round keys, CTX
* %rsi: dst * %rsi: dst
@@ -244,7 +244,7 @@ ENTRY(des3_ede_x86_64_crypt_blk)
popq %rbx; popq %rbx;
ret; ret;
ENDPROC(des3_ede_x86_64_crypt_blk) SYM_FUNC_END(des3_ede_x86_64_crypt_blk)
/*********************************************************************** /***********************************************************************
* 3-way 3DES * 3-way 3DES
@@ -418,7 +418,7 @@ ENDPROC(des3_ede_x86_64_crypt_blk)
#define __movq(src, dst) \ #define __movq(src, dst) \
movq src, dst; movq src, dst;
ENTRY(des3_ede_x86_64_crypt_blk_3way) SYM_FUNC_START(des3_ede_x86_64_crypt_blk_3way)
/* input: /* input:
* %rdi: ctx, round keys * %rdi: ctx, round keys
* %rsi: dst (3 blocks) * %rsi: dst (3 blocks)
@@ -529,7 +529,7 @@ ENTRY(des3_ede_x86_64_crypt_blk_3way)
popq %rbx; popq %rbx;
ret; ret;
ENDPROC(des3_ede_x86_64_crypt_blk_3way) SYM_FUNC_END(des3_ede_x86_64_crypt_blk_3way)
.section .rodata, "a", @progbits .section .rodata, "a", @progbits
.align 16 .align 16

@@ -44,7 +44,7 @@
* T2 * T2
* T3 * T3
*/ */
__clmul_gf128mul_ble: SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble)
movaps DATA, T1 movaps DATA, T1
pshufd $0b01001110, DATA, T2 pshufd $0b01001110, DATA, T2
pshufd $0b01001110, SHASH, T3 pshufd $0b01001110, SHASH, T3
@@ -87,10 +87,10 @@ __clmul_gf128mul_ble:
pxor T2, T1 pxor T2, T1
pxor T1, DATA pxor T1, DATA
ret ret
ENDPROC(__clmul_gf128mul_ble) SYM_FUNC_END(__clmul_gf128mul_ble)
/* void clmul_ghash_mul(char *dst, const u128 *shash) */ /* void clmul_ghash_mul(char *dst, const u128 *shash) */
ENTRY(clmul_ghash_mul) SYM_FUNC_START(clmul_ghash_mul)
FRAME_BEGIN FRAME_BEGIN
movups (%rdi), DATA movups (%rdi), DATA
movups (%rsi), SHASH movups (%rsi), SHASH
@@ -101,13 +101,13 @@ ENTRY(clmul_ghash_mul)
movups DATA, (%rdi) movups DATA, (%rdi)
FRAME_END FRAME_END
ret ret
ENDPROC(clmul_ghash_mul) SYM_FUNC_END(clmul_ghash_mul)
/* /*
* void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, * void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
* const u128 *shash); * const u128 *shash);
*/ */
ENTRY(clmul_ghash_update) SYM_FUNC_START(clmul_ghash_update)
FRAME_BEGIN FRAME_BEGIN
cmp $16, %rdx cmp $16, %rdx
jb .Lupdate_just_ret # check length jb .Lupdate_just_ret # check length
@@ -130,4 +130,4 @@ ENTRY(clmul_ghash_update)
.Lupdate_just_ret: .Lupdate_just_ret:
FRAME_END FRAME_END
ret ret
ENDPROC(clmul_ghash_update) SYM_FUNC_END(clmul_ghash_update)

@@ -69,7 +69,7 @@
* *
* It's guaranteed that message_len % 16 == 0. * It's guaranteed that message_len % 16 == 0.
*/ */
ENTRY(nh_avx2) SYM_FUNC_START(nh_avx2)
vmovdqu 0x00(KEY), K0 vmovdqu 0x00(KEY), K0
vmovdqu 0x10(KEY), K1 vmovdqu 0x10(KEY), K1
@@ -154,4 +154,4 @@ ENTRY(nh_avx2)
vpaddq T4, T0, T0 vpaddq T4, T0, T0
vmovdqu T0, (HASH) vmovdqu T0, (HASH)
ret ret
ENDPROC(nh_avx2) SYM_FUNC_END(nh_avx2)

@@ -71,7 +71,7 @@
* *
* It's guaranteed that message_len % 16 == 0. * It's guaranteed that message_len % 16 == 0.
*/ */
ENTRY(nh_sse2) SYM_FUNC_START(nh_sse2)
movdqu 0x00(KEY), K0 movdqu 0x00(KEY), K0
movdqu 0x10(KEY), K1 movdqu 0x10(KEY), K1
@@ -120,4 +120,4 @@ ENTRY(nh_sse2)
movdqu T0, 0x00(HASH) movdqu T0, 0x00(HASH)
movdqu T1, 0x10(HASH) movdqu T1, 0x10(HASH)
ret ret
ENDPROC(nh_sse2) SYM_FUNC_END(nh_sse2)

@@ -79,7 +79,7 @@ ORMASK: .octa 0x00000000010000000000000001000000
#define d3 %r12 #define d3 %r12
#define d4 %r13 #define d4 %r13
ENTRY(poly1305_4block_avx2) SYM_FUNC_START(poly1305_4block_avx2)
# %rdi: Accumulator h[5] # %rdi: Accumulator h[5]
# %rsi: 64 byte input block m # %rsi: 64 byte input block m
# %rdx: Poly1305 key r[5] # %rdx: Poly1305 key r[5]
@@ -387,4 +387,4 @@ ENTRY(poly1305_4block_avx2)
pop %r12 pop %r12
pop %rbx pop %rbx
ret ret
ENDPROC(poly1305_4block_avx2) SYM_FUNC_END(poly1305_4block_avx2)

@@ -46,7 +46,7 @@ ORMASK: .octa 0x00000000010000000000000001000000
#define d3 %r11 #define d3 %r11
#define d4 %r12 #define d4 %r12
ENTRY(poly1305_block_sse2) SYM_FUNC_START(poly1305_block_sse2)
# %rdi: Accumulator h[5] # %rdi: Accumulator h[5]
# %rsi: 16 byte input block m # %rsi: 16 byte input block m
# %rdx: Poly1305 key r[5] # %rdx: Poly1305 key r[5]
@@ -276,7 +276,7 @@ ENTRY(poly1305_block_sse2)
pop %r12 pop %r12
pop %rbx pop %rbx
ret ret
ENDPROC(poly1305_block_sse2) SYM_FUNC_END(poly1305_block_sse2)
#define u0 0x00(%r8) #define u0 0x00(%r8)
@@ -301,7 +301,7 @@ ENDPROC(poly1305_block_sse2)
#undef d0 #undef d0
#define d0 %r13 #define d0 %r13
ENTRY(poly1305_2block_sse2) SYM_FUNC_START(poly1305_2block_sse2)
# %rdi: Accumulator h[5] # %rdi: Accumulator h[5]
# %rsi: 16 byte input block m # %rsi: 16 byte input block m
# %rdx: Poly1305 key r[5] # %rdx: Poly1305 key r[5]
@@ -587,4 +587,4 @@ ENTRY(poly1305_2block_sse2)
pop %r12 pop %r12
pop %rbx pop %rbx
ret ret
ENDPROC(poly1305_2block_sse2) SYM_FUNC_END(poly1305_2block_sse2)

@@ -555,7 +555,7 @@
transpose_4x4(x0, x1, x2, x3, t0, t1, t2) transpose_4x4(x0, x1, x2, x3, t0, t1, t2)
.align 8 .align 8
__serpent_enc_blk8_avx: SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@@ -606,10 +606,10 @@ __serpent_enc_blk8_avx:
write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2); write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);
ret; ret;
ENDPROC(__serpent_enc_blk8_avx) SYM_FUNC_END(__serpent_enc_blk8_avx)
.align 8 .align 8
__serpent_dec_blk8_avx: SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
@@ -660,9 +660,9 @@ __serpent_dec_blk8_avx:
write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2); write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);
ret; ret;
ENDPROC(__serpent_dec_blk8_avx) SYM_FUNC_END(__serpent_dec_blk8_avx)
ENTRY(serpent_ecb_enc_8way_avx) SYM_FUNC_START(serpent_ecb_enc_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -678,9 +678,9 @@ ENTRY(serpent_ecb_enc_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ecb_enc_8way_avx) SYM_FUNC_END(serpent_ecb_enc_8way_avx)
ENTRY(serpent_ecb_dec_8way_avx) SYM_FUNC_START(serpent_ecb_dec_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -696,9 +696,9 @@ ENTRY(serpent_ecb_dec_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ecb_dec_8way_avx) SYM_FUNC_END(serpent_ecb_dec_8way_avx)
ENTRY(serpent_cbc_dec_8way_avx) SYM_FUNC_START(serpent_cbc_dec_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -714,9 +714,9 @@ ENTRY(serpent_cbc_dec_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_cbc_dec_8way_avx) SYM_FUNC_END(serpent_cbc_dec_8way_avx)
ENTRY(serpent_ctr_8way_avx) SYM_FUNC_START(serpent_ctr_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -734,9 +734,9 @@ ENTRY(serpent_ctr_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ctr_8way_avx) SYM_FUNC_END(serpent_ctr_8way_avx)
ENTRY(serpent_xts_enc_8way_avx) SYM_FUNC_START(serpent_xts_enc_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -756,9 +756,9 @@ ENTRY(serpent_xts_enc_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_xts_enc_8way_avx) SYM_FUNC_END(serpent_xts_enc_8way_avx)
ENTRY(serpent_xts_dec_8way_avx) SYM_FUNC_START(serpent_xts_dec_8way_avx)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -778,4 +778,4 @@ ENTRY(serpent_xts_dec_8way_avx)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_xts_dec_8way_avx) SYM_FUNC_END(serpent_xts_dec_8way_avx)

@@ -561,7 +561,7 @@
transpose_4x4(x0, x1, x2, x3, t0, t1, t2) transpose_4x4(x0, x1, x2, x3, t0, t1, t2)
.align 8 .align 8
__serpent_enc_blk16: SYM_FUNC_START_LOCAL(__serpent_enc_blk16)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: plaintext * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: plaintext
@@ -612,10 +612,10 @@ __serpent_enc_blk16:
write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2); write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);
ret; ret;
ENDPROC(__serpent_enc_blk16) SYM_FUNC_END(__serpent_enc_blk16)
.align 8 .align 8
__serpent_dec_blk16: SYM_FUNC_START_LOCAL(__serpent_dec_blk16)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: ciphertext * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: ciphertext
@@ -666,9 +666,9 @@ __serpent_dec_blk16:
write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2); write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);
ret; ret;
ENDPROC(__serpent_dec_blk16) SYM_FUNC_END(__serpent_dec_blk16)
ENTRY(serpent_ecb_enc_16way) SYM_FUNC_START(serpent_ecb_enc_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -688,9 +688,9 @@ ENTRY(serpent_ecb_enc_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ecb_enc_16way) SYM_FUNC_END(serpent_ecb_enc_16way)
ENTRY(serpent_ecb_dec_16way) SYM_FUNC_START(serpent_ecb_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -710,9 +710,9 @@ ENTRY(serpent_ecb_dec_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ecb_dec_16way) SYM_FUNC_END(serpent_ecb_dec_16way)
ENTRY(serpent_cbc_dec_16way) SYM_FUNC_START(serpent_cbc_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -733,9 +733,9 @@ ENTRY(serpent_cbc_dec_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_cbc_dec_16way) SYM_FUNC_END(serpent_cbc_dec_16way)
ENTRY(serpent_ctr_16way) SYM_FUNC_START(serpent_ctr_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -758,9 +758,9 @@ ENTRY(serpent_ctr_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_ctr_16way) SYM_FUNC_END(serpent_ctr_16way)
ENTRY(serpent_xts_enc_16way) SYM_FUNC_START(serpent_xts_enc_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -784,9 +784,9 @@ ENTRY(serpent_xts_enc_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_xts_enc_16way) SYM_FUNC_END(serpent_xts_enc_16way)
ENTRY(serpent_xts_dec_16way) SYM_FUNC_START(serpent_xts_dec_16way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst (16 blocks) * %rsi: dst (16 blocks)
@@ -810,4 +810,4 @@ ENTRY(serpent_xts_dec_16way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(serpent_xts_dec_16way) SYM_FUNC_END(serpent_xts_dec_16way)

@@ -497,7 +497,7 @@
pxor t0, x3; \ pxor t0, x3; \
movdqu x3, (3*4*4)(out); movdqu x3, (3*4*4)(out);
ENTRY(__serpent_enc_blk_4way) SYM_FUNC_START(__serpent_enc_blk_4way)
/* input: /* input:
* arg_ctx(%esp): ctx, CTX * arg_ctx(%esp): ctx, CTX
* arg_dst(%esp): dst * arg_dst(%esp): dst
@@ -559,9 +559,9 @@ ENTRY(__serpent_enc_blk_4way)
xor_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE); xor_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE);
ret; ret;
ENDPROC(__serpent_enc_blk_4way) SYM_FUNC_END(__serpent_enc_blk_4way)
ENTRY(serpent_dec_blk_4way) SYM_FUNC_START(serpent_dec_blk_4way)
/* input: /* input:
* arg_ctx(%esp): ctx, CTX * arg_ctx(%esp): ctx, CTX
* arg_dst(%esp): dst * arg_dst(%esp): dst
@@ -613,4 +613,4 @@ ENTRY(serpent_dec_blk_4way)
write_blocks(%eax, RC, RD, RB, RE, RT0, RT1, RA); write_blocks(%eax, RC, RD, RB, RE, RT0, RT1, RA);
ret; ret;
ENDPROC(serpent_dec_blk_4way) SYM_FUNC_END(serpent_dec_blk_4way)

@@ -619,7 +619,7 @@
pxor t0, x3; \ pxor t0, x3; \
movdqu x3, (3*4*4)(out); movdqu x3, (3*4*4)(out);
ENTRY(__serpent_enc_blk_8way) SYM_FUNC_START(__serpent_enc_blk_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -682,9 +682,9 @@ ENTRY(__serpent_enc_blk_8way)
xor_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2); xor_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2);
ret; ret;
ENDPROC(__serpent_enc_blk_8way) SYM_FUNC_END(__serpent_enc_blk_8way)
ENTRY(serpent_dec_blk_8way) SYM_FUNC_START(serpent_dec_blk_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@@ -736,4 +736,4 @@ ENTRY(serpent_dec_blk_8way)
write_blocks(%rax, RC2, RD2, RB2, RE2, RK0, RK1, RK2); write_blocks(%rax, RC2, RD2, RB2, RE2, RK0, RK1, RK2);
ret; ret;
ENDPROC(serpent_dec_blk_8way) SYM_FUNC_END(serpent_dec_blk_8way)

@@ -634,7 +634,7 @@ _loop3:
* param: function's name * param: function's name
*/ */
.macro SHA1_VECTOR_ASM name .macro SHA1_VECTOR_ASM name
ENTRY(\name) SYM_FUNC_START(\name)
push %rbx push %rbx
push %r12 push %r12
@@ -676,7 +676,7 @@
ret ret
ENDPROC(\name) SYM_FUNC_END(\name)
.endm .endm
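The SHA transforms are stamped out by assembler macros, so here the annotation takes the function name as a macro argument and SYM_FUNC_START(\name) expands once per instantiation. A toy version of the same trick, with hypothetical names:

#include <linux/linkage.h>

.macro CRYPTO_XFORM_STUB name
SYM_FUNC_START(\name)
	ret
SYM_FUNC_END(\name)
.endm

CRYPTO_XFORM_STUB sha_stub_avx2		/* each line emits a fully annotated function */
CRYPTO_XFORM_STUB sha_stub_ssse3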
.section .rodata .section .rodata

@@ -95,7 +95,7 @@
*/ */
.text .text
.align 32 .align 32
ENTRY(sha1_ni_transform) SYM_FUNC_START(sha1_ni_transform)
mov %rsp, RSPSAVE mov %rsp, RSPSAVE
sub $FRAME_SIZE, %rsp sub $FRAME_SIZE, %rsp
and $~0xF, %rsp and $~0xF, %rsp
@@ -291,7 +291,7 @@ ENTRY(sha1_ni_transform)
mov RSPSAVE, %rsp mov RSPSAVE, %rsp
ret ret
ENDPROC(sha1_ni_transform) SYM_FUNC_END(sha1_ni_transform)
.section .rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16 .section .rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16
.align 16 .align 16


@ -67,7 +67,7 @@
* param: function's name * param: function's name
*/ */
.macro SHA1_VECTOR_ASM name .macro SHA1_VECTOR_ASM name
ENTRY(\name) SYM_FUNC_START(\name)
push %rbx push %rbx
push %r12 push %r12
@ -101,7 +101,7 @@
pop %rbx pop %rbx
ret ret
ENDPROC(\name) SYM_FUNC_END(\name)
.endm .endm
/* /*


@ -347,7 +347,7 @@ a = TMP_
## arg 3 : Num blocks ## arg 3 : Num blocks
######################################################################## ########################################################################
.text .text
ENTRY(sha256_transform_avx) SYM_FUNC_START(sha256_transform_avx)
.align 32 .align 32
pushq %rbx pushq %rbx
pushq %r12 pushq %r12
@ -460,7 +460,7 @@ done_hash:
popq %r12 popq %r12
popq %rbx popq %rbx
ret ret
ENDPROC(sha256_transform_avx) SYM_FUNC_END(sha256_transform_avx)
.section .rodata.cst256.K256, "aM", @progbits, 256 .section .rodata.cst256.K256, "aM", @progbits, 256
.align 64 .align 64


@ -526,7 +526,7 @@ STACK_SIZE = _RSP + _RSP_SIZE
## arg 3 : Num blocks ## arg 3 : Num blocks
######################################################################## ########################################################################
.text .text
ENTRY(sha256_transform_rorx) SYM_FUNC_START(sha256_transform_rorx)
.align 32 .align 32
pushq %rbx pushq %rbx
pushq %r12 pushq %r12
@ -713,7 +713,7 @@ done_hash:
popq %r12 popq %r12
popq %rbx popq %rbx
ret ret
ENDPROC(sha256_transform_rorx) SYM_FUNC_END(sha256_transform_rorx)
.section .rodata.cst512.K256, "aM", @progbits, 512 .section .rodata.cst512.K256, "aM", @progbits, 512
.align 64 .align 64


@ -353,7 +353,7 @@ a = TMP_
## arg 3 : Num blocks ## arg 3 : Num blocks
######################################################################## ########################################################################
.text .text
ENTRY(sha256_transform_ssse3) SYM_FUNC_START(sha256_transform_ssse3)
.align 32 .align 32
pushq %rbx pushq %rbx
pushq %r12 pushq %r12
@ -471,7 +471,7 @@ done_hash:
popq %rbx popq %rbx
ret ret
ENDPROC(sha256_transform_ssse3) SYM_FUNC_END(sha256_transform_ssse3)
.section .rodata.cst256.K256, "aM", @progbits, 256 .section .rodata.cst256.K256, "aM", @progbits, 256
.align 64 .align 64


@ -97,7 +97,7 @@
.text .text
.align 32 .align 32
ENTRY(sha256_ni_transform) SYM_FUNC_START(sha256_ni_transform)
shl $6, NUM_BLKS /* convert to bytes */ shl $6, NUM_BLKS /* convert to bytes */
jz .Ldone_hash jz .Ldone_hash
@ -327,7 +327,7 @@ ENTRY(sha256_ni_transform)
.Ldone_hash: .Ldone_hash:
ret ret
ENDPROC(sha256_ni_transform) SYM_FUNC_END(sha256_ni_transform)
.section .rodata.cst256.K256, "aM", @progbits, 256 .section .rodata.cst256.K256, "aM", @progbits, 256
.align 64 .align 64


@ -277,7 +277,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks. # message blocks.
# L is the message length in SHA512 blocks # L is the message length in SHA512 blocks
######################################################################## ########################################################################
ENTRY(sha512_transform_avx) SYM_FUNC_START(sha512_transform_avx)
cmp $0, msglen cmp $0, msglen
je nowork je nowork
@ -365,7 +365,7 @@ updateblock:
nowork: nowork:
ret ret
ENDPROC(sha512_transform_avx) SYM_FUNC_END(sha512_transform_avx)
######################################################################## ########################################################################
### Binary Data ### Binary Data


@ -569,7 +569,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks. # message blocks.
# L is the message length in SHA512 blocks # L is the message length in SHA512 blocks
######################################################################## ########################################################################
ENTRY(sha512_transform_rorx) SYM_FUNC_START(sha512_transform_rorx)
# Allocate Stack Space # Allocate Stack Space
mov %rsp, %rax mov %rsp, %rax
sub $frame_size, %rsp sub $frame_size, %rsp
@ -682,7 +682,7 @@ done_hash:
# Restore Stack Pointer # Restore Stack Pointer
mov frame_RSPSAVE(%rsp), %rsp mov frame_RSPSAVE(%rsp), %rsp
ret ret
ENDPROC(sha512_transform_rorx) SYM_FUNC_END(sha512_transform_rorx)
######################################################################## ########################################################################
### Binary Data ### Binary Data


@ -275,7 +275,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks. # message blocks.
# L is the message length in SHA512 blocks. # L is the message length in SHA512 blocks.
######################################################################## ########################################################################
ENTRY(sha512_transform_ssse3) SYM_FUNC_START(sha512_transform_ssse3)
cmp $0, msglen cmp $0, msglen
je nowork je nowork
@ -364,7 +364,7 @@ updateblock:
nowork: nowork:
ret ret
ENDPROC(sha512_transform_ssse3) SYM_FUNC_END(sha512_transform_ssse3)
######################################################################## ########################################################################
### Binary Data ### Binary Data


@ -234,7 +234,7 @@
vpxor x3, wkey, x3; vpxor x3, wkey, x3;
.align 8 .align 8
__twofish_enc_blk8: SYM_FUNC_START_LOCAL(__twofish_enc_blk8)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks * RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@ -273,10 +273,10 @@ __twofish_enc_blk8:
outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2); outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2);
ret; ret;
ENDPROC(__twofish_enc_blk8) SYM_FUNC_END(__twofish_enc_blk8)
.align 8 .align 8
__twofish_dec_blk8: SYM_FUNC_START_LOCAL(__twofish_dec_blk8)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2: encrypted blocks * RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2: encrypted blocks
@ -313,9 +313,9 @@ __twofish_dec_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2); outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2);
ret; ret;
ENDPROC(__twofish_dec_blk8) SYM_FUNC_END(__twofish_dec_blk8)
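Note the _LOCAL variant above: __twofish_enc_blk8 and __twofish_dec_blk8 previously began as plain labels (only ENDPROC marked their ends), so they are not meant to be visible outside this file. SYM_FUNC_START_LOCAL keeps them that way; a rough sketch of the difference, based on the SYM_* definitions in include/linux/linkage.h:

        /* SYM_FUNC_START(name):       .globl name ; .p2align 4, 0x90 ; name: */
        /* SYM_FUNC_START_LOCAL(name):              .p2align 4, 0x90 ; name: */

Both are closed by SYM_FUNC_END(name), which attaches function type and size to the symbol.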
ENTRY(twofish_ecb_enc_8way) SYM_FUNC_START(twofish_ecb_enc_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -333,9 +333,9 @@ ENTRY(twofish_ecb_enc_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_ecb_enc_8way) SYM_FUNC_END(twofish_ecb_enc_8way)
ENTRY(twofish_ecb_dec_8way) SYM_FUNC_START(twofish_ecb_dec_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -353,9 +353,9 @@ ENTRY(twofish_ecb_dec_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_ecb_dec_8way) SYM_FUNC_END(twofish_ecb_dec_8way)
ENTRY(twofish_cbc_dec_8way) SYM_FUNC_START(twofish_cbc_dec_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -378,9 +378,9 @@ ENTRY(twofish_cbc_dec_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_cbc_dec_8way) SYM_FUNC_END(twofish_cbc_dec_8way)
ENTRY(twofish_ctr_8way) SYM_FUNC_START(twofish_ctr_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -405,9 +405,9 @@ ENTRY(twofish_ctr_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_ctr_8way) SYM_FUNC_END(twofish_ctr_8way)
ENTRY(twofish_xts_enc_8way) SYM_FUNC_START(twofish_xts_enc_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -429,9 +429,9 @@ ENTRY(twofish_xts_enc_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_xts_enc_8way) SYM_FUNC_END(twofish_xts_enc_8way)
ENTRY(twofish_xts_dec_8way) SYM_FUNC_START(twofish_xts_dec_8way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -453,4 +453,4 @@ ENTRY(twofish_xts_dec_8way)
FRAME_END FRAME_END
ret; ret;
ENDPROC(twofish_xts_dec_8way) SYM_FUNC_END(twofish_xts_dec_8way)


@ -207,7 +207,7 @@
xor %esi, d ## D;\ xor %esi, d ## D;\
ror $1, d ## D; ror $1, d ## D;
ENTRY(twofish_enc_blk) SYM_FUNC_START(twofish_enc_blk)
push %ebp /* save registers according to calling convention*/ push %ebp /* save registers according to calling convention*/
push %ebx push %ebx
push %esi push %esi
@ -261,9 +261,9 @@ ENTRY(twofish_enc_blk)
pop %ebp pop %ebp
mov $1, %eax mov $1, %eax
ret ret
ENDPROC(twofish_enc_blk) SYM_FUNC_END(twofish_enc_blk)
ENTRY(twofish_dec_blk) SYM_FUNC_START(twofish_dec_blk)
push %ebp /* save registers according to calling convention*/ push %ebp /* save registers according to calling convention*/
push %ebx push %ebx
push %esi push %esi
@ -318,4 +318,4 @@ ENTRY(twofish_dec_blk)
pop %ebp pop %ebp
mov $1, %eax mov $1, %eax
ret ret
ENDPROC(twofish_dec_blk) SYM_FUNC_END(twofish_dec_blk)


@ -220,7 +220,7 @@
rorq $32, RAB2; \ rorq $32, RAB2; \
outunpack3(mov, RIO, 2, RAB, 2); outunpack3(mov, RIO, 2, RAB, 2);
ENTRY(__twofish_enc_blk_3way) SYM_FUNC_START(__twofish_enc_blk_3way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -267,9 +267,9 @@ ENTRY(__twofish_enc_blk_3way)
popq %r12; popq %r12;
popq %r13; popq %r13;
ret; ret;
ENDPROC(__twofish_enc_blk_3way) SYM_FUNC_END(__twofish_enc_blk_3way)
ENTRY(twofish_dec_blk_3way) SYM_FUNC_START(twofish_dec_blk_3way)
/* input: /* input:
* %rdi: ctx, CTX * %rdi: ctx, CTX
* %rsi: dst * %rsi: dst
@ -302,4 +302,4 @@ ENTRY(twofish_dec_blk_3way)
popq %r12; popq %r12;
popq %r13; popq %r13;
ret; ret;
ENDPROC(twofish_dec_blk_3way) SYM_FUNC_END(twofish_dec_blk_3way)


@ -202,7 +202,7 @@
xor %r8d, d ## D;\ xor %r8d, d ## D;\
ror $1, d ## D; ror $1, d ## D;
ENTRY(twofish_enc_blk) SYM_FUNC_START(twofish_enc_blk)
pushq R1 pushq R1
/* %rdi contains the ctx address */ /* %rdi contains the ctx address */
@ -253,9 +253,9 @@ ENTRY(twofish_enc_blk)
popq R1 popq R1
movl $1,%eax movl $1,%eax
ret ret
ENDPROC(twofish_enc_blk) SYM_FUNC_END(twofish_enc_blk)
ENTRY(twofish_dec_blk) SYM_FUNC_START(twofish_dec_blk)
pushq R1 pushq R1
/* %rdi contains the ctx address */ /* %rdi contains the ctx address */
@ -305,4 +305,4 @@ ENTRY(twofish_dec_blk)
popq R1 popq R1
movl $1,%eax movl $1,%eax
ret ret
ENDPROC(twofish_dec_blk) SYM_FUNC_END(twofish_dec_blk)


@ -730,7 +730,7 @@
* %eax: prev task * %eax: prev task
* %edx: next task * %edx: next task
*/ */
ENTRY(__switch_to_asm) SYM_CODE_START(__switch_to_asm)
/* /*
* Save callee-saved registers * Save callee-saved registers
* This must match the order in struct inactive_task_frame * This must match the order in struct inactive_task_frame
@ -769,7 +769,7 @@ ENTRY(__switch_to_asm)
popl %ebp popl %ebp
jmp __switch_to jmp __switch_to
END(__switch_to_asm) SYM_CODE_END(__switch_to_asm)
/* /*
* The unwinder expects the last frame on the stack to always be at the same * The unwinder expects the last frame on the stack to always be at the same
@ -778,7 +778,7 @@ END(__switch_to_asm)
* asmlinkage function so its argument has to be pushed on the stack. This * asmlinkage function so its argument has to be pushed on the stack. This
* wrapper creates a proper "end of stack" frame header before the call. * wrapper creates a proper "end of stack" frame header before the call.
*/ */
ENTRY(schedule_tail_wrapper) SYM_FUNC_START(schedule_tail_wrapper)
FRAME_BEGIN FRAME_BEGIN
pushl %eax pushl %eax
@ -787,7 +787,7 @@ ENTRY(schedule_tail_wrapper)
FRAME_END FRAME_END
ret ret
ENDPROC(schedule_tail_wrapper) SYM_FUNC_END(schedule_tail_wrapper)
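The shape of such a wrapper, sketched in full (not the exact kernel code; the middle of the hunk is elided above and reconstructed here only for illustration):

        SYM_FUNC_START(schedule_tail_wrapper)
        	FRAME_BEGIN		/* build a real frame so the unwinder finds a stable end of stack */
        	pushl	%eax		/* asmlinkage: the 'prev' task argument travels on the stack */
        	call	schedule_tail
        	popl	%eax
        	FRAME_END
        	ret
        SYM_FUNC_END(schedule_tail_wrapper)

Because the wrapper is a real C-callable function, the SYM_FUNC_START/SYM_FUNC_END pair (rather than SYM_CODE_*) is the right choice here.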
/* /*
* A newly forked process directly context switches into this address. * A newly forked process directly context switches into this address.
* *
@ -795,7 +795,7 @@ ENDPROC(schedule_tail_wrapper)
* ebx: kernel thread func (NULL for user thread) * ebx: kernel thread func (NULL for user thread)
* edi: kernel thread arg * edi: kernel thread arg
*/ */
ENTRY(ret_from_fork) SYM_CODE_START(ret_from_fork)
call schedule_tail_wrapper call schedule_tail_wrapper
testl %ebx, %ebx testl %ebx, %ebx
@ -818,7 +818,7 @@ ENTRY(ret_from_fork)
*/ */
movl $0, PT_EAX(%esp) movl $0, PT_EAX(%esp)
jmp 2b jmp 2b
END(ret_from_fork) SYM_CODE_END(ret_from_fork)
/* /*
* Return to user mode is not as complex as all this looks, * Return to user mode is not as complex as all this looks,
@ -828,8 +828,7 @@ END(ret_from_fork)
*/ */
# userspace resumption stub bypassing syscall exit tracing # userspace resumption stub bypassing syscall exit tracing
ALIGN SYM_CODE_START_LOCAL(ret_from_exception)
ret_from_exception:
preempt_stop(CLBR_ANY) preempt_stop(CLBR_ANY)
ret_from_intr: ret_from_intr:
#ifdef CONFIG_VM86 #ifdef CONFIG_VM86
@ -846,15 +845,14 @@ ret_from_intr:
cmpl $USER_RPL, %eax cmpl $USER_RPL, %eax
jb restore_all_kernel # not returning to v8086 or userspace jb restore_all_kernel # not returning to v8086 or userspace
ENTRY(resume_userspace)
DISABLE_INTERRUPTS(CLBR_ANY) DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF TRACE_IRQS_OFF
movl %esp, %eax movl %esp, %eax
call prepare_exit_to_usermode call prepare_exit_to_usermode
jmp restore_all jmp restore_all
END(ret_from_exception) SYM_CODE_END(ret_from_exception)
GLOBAL(__begin_SYSENTER_singlestep_region) SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
/* /*
* All code from here through __end_SYSENTER_singlestep_region is subject * All code from here through __end_SYSENTER_singlestep_region is subject
* to being single-stepped if a user program sets TF and executes SYSENTER. * to being single-stepped if a user program sets TF and executes SYSENTER.
@ -869,9 +867,10 @@ GLOBAL(__begin_SYSENTER_singlestep_region)
* Xen doesn't set %esp to be precisely what the normal SYSENTER * Xen doesn't set %esp to be precisely what the normal SYSENTER
* entry point expects, so fix it up before using the normal path. * entry point expects, so fix it up before using the normal path.
*/ */
ENTRY(xen_sysenter_target) SYM_CODE_START(xen_sysenter_target)
addl $5*4, %esp /* remove xen-provided frame */ addl $5*4, %esp /* remove xen-provided frame */
jmp .Lsysenter_past_esp jmp .Lsysenter_past_esp
SYM_CODE_END(xen_sysenter_target)
#endif #endif
/* /*
@ -906,7 +905,7 @@ ENTRY(xen_sysenter_target)
* ebp user stack * ebp user stack
* 0(%ebp) arg6 * 0(%ebp) arg6
*/ */
ENTRY(entry_SYSENTER_32) SYM_FUNC_START(entry_SYSENTER_32)
/* /*
* On entry-stack with all userspace-regs live - save and * On entry-stack with all userspace-regs live - save and
* restore eflags and %eax to use it as scratch-reg for the cr3 * restore eflags and %eax to use it as scratch-reg for the cr3
@ -1033,8 +1032,8 @@ ENTRY(entry_SYSENTER_32)
pushl $X86_EFLAGS_FIXED pushl $X86_EFLAGS_FIXED
popfl popfl
jmp .Lsysenter_flags_fixed jmp .Lsysenter_flags_fixed
GLOBAL(__end_SYSENTER_singlestep_region) SYM_ENTRY(__end_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
ENDPROC(entry_SYSENTER_32) SYM_FUNC_END(entry_SYSENTER_32)
/* /*
* 32-bit legacy system call entry. * 32-bit legacy system call entry.
@ -1064,7 +1063,7 @@ ENDPROC(entry_SYSENTER_32)
* edi arg5 * edi arg5
* ebp arg6 * ebp arg6
*/ */
ENTRY(entry_INT80_32) SYM_FUNC_START(entry_INT80_32)
ASM_CLAC ASM_CLAC
pushl %eax /* pt_regs->orig_ax */ pushl %eax /* pt_regs->orig_ax */
@ -1120,7 +1119,7 @@ restore_all_kernel:
jmp .Lirq_return jmp .Lirq_return
.section .fixup, "ax" .section .fixup, "ax"
ENTRY(iret_exc ) SYM_CODE_START(iret_exc)
pushl $0 # no error code pushl $0 # no error code
pushl $do_iret_error pushl $do_iret_error
@ -1137,9 +1136,10 @@ ENTRY(iret_exc )
#endif #endif
jmp common_exception jmp common_exception
SYM_CODE_END(iret_exc)
.previous .previous
_ASM_EXTABLE(.Lirq_return, iret_exc) _ASM_EXTABLE(.Lirq_return, iret_exc)
ENDPROC(entry_INT80_32) SYM_FUNC_END(entry_INT80_32)
.macro FIXUP_ESPFIX_STACK .macro FIXUP_ESPFIX_STACK
/* /*
@ -1193,7 +1193,7 @@ ENDPROC(entry_INT80_32)
* We pack 1 stub into every 8-byte block. * We pack 1 stub into every 8-byte block.
*/ */
.align 8 .align 8
ENTRY(irq_entries_start) SYM_CODE_START(irq_entries_start)
vector=FIRST_EXTERNAL_VECTOR vector=FIRST_EXTERNAL_VECTOR
.rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR) .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
pushl $(~vector+0x80) /* Note: always in signed byte range */ pushl $(~vector+0x80) /* Note: always in signed byte range */
@ -1201,11 +1201,11 @@ ENTRY(irq_entries_start)
jmp common_interrupt jmp common_interrupt
.align 8 .align 8
.endr .endr
END(irq_entries_start) SYM_CODE_END(irq_entries_start)
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
.align 8 .align 8
ENTRY(spurious_entries_start) SYM_CODE_START(spurious_entries_start)
vector=FIRST_SYSTEM_VECTOR vector=FIRST_SYSTEM_VECTOR
.rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
pushl $(~vector+0x80) /* Note: always in signed byte range */ pushl $(~vector+0x80) /* Note: always in signed byte range */
@ -1213,9 +1213,9 @@ ENTRY(spurious_entries_start)
jmp common_spurious jmp common_spurious
.align 8 .align 8
.endr .endr
END(spurious_entries_start) SYM_CODE_END(spurious_entries_start)
common_spurious: SYM_CODE_START_LOCAL(common_spurious)
ASM_CLAC ASM_CLAC
addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */ addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */
SAVE_ALL switch_stacks=1 SAVE_ALL switch_stacks=1
@ -1224,7 +1224,7 @@ common_spurious:
movl %esp, %eax movl %esp, %eax
call smp_spurious_interrupt call smp_spurious_interrupt
jmp ret_from_intr jmp ret_from_intr
ENDPROC(common_spurious) SYM_CODE_END(common_spurious)
#endif #endif
/* /*
@ -1232,7 +1232,7 @@ ENDPROC(common_spurious)
* so IRQ-flags tracing has to follow that: * so IRQ-flags tracing has to follow that:
*/ */
.p2align CONFIG_X86_L1_CACHE_SHIFT .p2align CONFIG_X86_L1_CACHE_SHIFT
common_interrupt: SYM_CODE_START_LOCAL(common_interrupt)
ASM_CLAC ASM_CLAC
addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */ addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */
@ -1242,10 +1242,10 @@ common_interrupt:
movl %esp, %eax movl %esp, %eax
call do_IRQ call do_IRQ
jmp ret_from_intr jmp ret_from_intr
ENDPROC(common_interrupt) SYM_CODE_END(common_interrupt)
#define BUILD_INTERRUPT3(name, nr, fn) \ #define BUILD_INTERRUPT3(name, nr, fn) \
ENTRY(name) \ SYM_FUNC_START(name) \
ASM_CLAC; \ ASM_CLAC; \
pushl $~(nr); \ pushl $~(nr); \
SAVE_ALL switch_stacks=1; \ SAVE_ALL switch_stacks=1; \
@ -1254,7 +1254,7 @@ ENTRY(name) \
movl %esp, %eax; \ movl %esp, %eax; \
call fn; \ call fn; \
jmp ret_from_intr; \ jmp ret_from_intr; \
ENDPROC(name) SYM_FUNC_END(name)
#define BUILD_INTERRUPT(name, nr) \ #define BUILD_INTERRUPT(name, nr) \
BUILD_INTERRUPT3(name, nr, smp_##name); \ BUILD_INTERRUPT3(name, nr, smp_##name); \
@ -1262,14 +1262,14 @@ ENDPROC(name)
/* The include is where all of the SMP etc. interrupts come from */ /* The include is where all of the SMP etc. interrupts come from */
#include <asm/entry_arch.h> #include <asm/entry_arch.h>
ENTRY(coprocessor_error) SYM_CODE_START(coprocessor_error)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_coprocessor_error pushl $do_coprocessor_error
jmp common_exception jmp common_exception
END(coprocessor_error) SYM_CODE_END(coprocessor_error)
ENTRY(simd_coprocessor_error) SYM_CODE_START(simd_coprocessor_error)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
#ifdef CONFIG_X86_INVD_BUG #ifdef CONFIG_X86_INVD_BUG
@ -1281,99 +1281,99 @@ ENTRY(simd_coprocessor_error)
pushl $do_simd_coprocessor_error pushl $do_simd_coprocessor_error
#endif #endif
jmp common_exception jmp common_exception
END(simd_coprocessor_error) SYM_CODE_END(simd_coprocessor_error)
ENTRY(device_not_available) SYM_CODE_START(device_not_available)
ASM_CLAC ASM_CLAC
pushl $-1 # mark this as an int pushl $-1 # mark this as an int
pushl $do_device_not_available pushl $do_device_not_available
jmp common_exception jmp common_exception
END(device_not_available) SYM_CODE_END(device_not_available)
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
ENTRY(native_iret) SYM_CODE_START(native_iret)
iret iret
_ASM_EXTABLE(native_iret, iret_exc) _ASM_EXTABLE(native_iret, iret_exc)
END(native_iret) SYM_CODE_END(native_iret)
#endif #endif
ENTRY(overflow) SYM_CODE_START(overflow)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_overflow pushl $do_overflow
jmp common_exception jmp common_exception
END(overflow) SYM_CODE_END(overflow)
ENTRY(bounds) SYM_CODE_START(bounds)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_bounds pushl $do_bounds
jmp common_exception jmp common_exception
END(bounds) SYM_CODE_END(bounds)
ENTRY(invalid_op) SYM_CODE_START(invalid_op)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_invalid_op pushl $do_invalid_op
jmp common_exception jmp common_exception
END(invalid_op) SYM_CODE_END(invalid_op)
ENTRY(coprocessor_segment_overrun) SYM_CODE_START(coprocessor_segment_overrun)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_coprocessor_segment_overrun pushl $do_coprocessor_segment_overrun
jmp common_exception jmp common_exception
END(coprocessor_segment_overrun) SYM_CODE_END(coprocessor_segment_overrun)
ENTRY(invalid_TSS) SYM_CODE_START(invalid_TSS)
ASM_CLAC ASM_CLAC
pushl $do_invalid_TSS pushl $do_invalid_TSS
jmp common_exception jmp common_exception
END(invalid_TSS) SYM_CODE_END(invalid_TSS)
ENTRY(segment_not_present) SYM_CODE_START(segment_not_present)
ASM_CLAC ASM_CLAC
pushl $do_segment_not_present pushl $do_segment_not_present
jmp common_exception jmp common_exception
END(segment_not_present) SYM_CODE_END(segment_not_present)
ENTRY(stack_segment) SYM_CODE_START(stack_segment)
ASM_CLAC ASM_CLAC
pushl $do_stack_segment pushl $do_stack_segment
jmp common_exception jmp common_exception
END(stack_segment) SYM_CODE_END(stack_segment)
ENTRY(alignment_check) SYM_CODE_START(alignment_check)
ASM_CLAC ASM_CLAC
pushl $do_alignment_check pushl $do_alignment_check
jmp common_exception jmp common_exception
END(alignment_check) SYM_CODE_END(alignment_check)
ENTRY(divide_error) SYM_CODE_START(divide_error)
ASM_CLAC ASM_CLAC
pushl $0 # no error code pushl $0 # no error code
pushl $do_divide_error pushl $do_divide_error
jmp common_exception jmp common_exception
END(divide_error) SYM_CODE_END(divide_error)
#ifdef CONFIG_X86_MCE #ifdef CONFIG_X86_MCE
ENTRY(machine_check) SYM_CODE_START(machine_check)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl machine_check_vector pushl machine_check_vector
jmp common_exception jmp common_exception
END(machine_check) SYM_CODE_END(machine_check)
#endif #endif
ENTRY(spurious_interrupt_bug) SYM_CODE_START(spurious_interrupt_bug)
ASM_CLAC ASM_CLAC
pushl $0 pushl $0
pushl $do_spurious_interrupt_bug pushl $do_spurious_interrupt_bug
jmp common_exception jmp common_exception
END(spurious_interrupt_bug) SYM_CODE_END(spurious_interrupt_bug)
#ifdef CONFIG_XEN_PV #ifdef CONFIG_XEN_PV
ENTRY(xen_hypervisor_callback) SYM_FUNC_START(xen_hypervisor_callback)
/* /*
* Check to see if we got the event in the critical * Check to see if we got the event in the critical
* region in xen_iret_direct, after we've reenabled * region in xen_iret_direct, after we've reenabled
@ -1397,7 +1397,7 @@ ENTRY(xen_hypervisor_callback)
call xen_maybe_preempt_hcall call xen_maybe_preempt_hcall
#endif #endif
jmp ret_from_intr jmp ret_from_intr
ENDPROC(xen_hypervisor_callback) SYM_FUNC_END(xen_hypervisor_callback)
/* /*
* Hypervisor uses this for application faults while it executes. * Hypervisor uses this for application faults while it executes.
@ -1411,7 +1411,7 @@ ENDPROC(xen_hypervisor_callback)
* to pop the stack frame we end up in an infinite loop of failsafe callbacks. * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
* We distinguish between categories by maintaining a status value in EAX. * We distinguish between categories by maintaining a status value in EAX.
*/ */
ENTRY(xen_failsafe_callback) SYM_FUNC_START(xen_failsafe_callback)
pushl %eax pushl %eax
movl $1, %eax movl $1, %eax
1: mov 4(%esp), %ds 1: mov 4(%esp), %ds
@ -1448,7 +1448,7 @@ ENTRY(xen_failsafe_callback)
_ASM_EXTABLE(2b, 7b) _ASM_EXTABLE(2b, 7b)
_ASM_EXTABLE(3b, 8b) _ASM_EXTABLE(3b, 8b)
_ASM_EXTABLE(4b, 9b) _ASM_EXTABLE(4b, 9b)
ENDPROC(xen_failsafe_callback) SYM_FUNC_END(xen_failsafe_callback)
#endif /* CONFIG_XEN_PV */ #endif /* CONFIG_XEN_PV */
#ifdef CONFIG_XEN_PVHVM #ifdef CONFIG_XEN_PVHVM
@ -1470,13 +1470,13 @@ BUILD_INTERRUPT3(hv_stimer0_callback_vector, HYPERV_STIMER0_VECTOR,
#endif /* CONFIG_HYPERV */ #endif /* CONFIG_HYPERV */
ENTRY(page_fault) SYM_CODE_START(page_fault)
ASM_CLAC ASM_CLAC
pushl $do_page_fault pushl $do_page_fault
jmp common_exception_read_cr2 jmp common_exception_read_cr2
END(page_fault) SYM_CODE_END(page_fault)
common_exception_read_cr2: SYM_CODE_START_LOCAL_NOALIGN(common_exception_read_cr2)
/* the function address is in %gs's slot on the stack */ /* the function address is in %gs's slot on the stack */
SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1 SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
@ -1498,9 +1498,9 @@ common_exception_read_cr2:
movl %esp, %eax # pt_regs pointer movl %esp, %eax # pt_regs pointer
CALL_NOSPEC %edi CALL_NOSPEC %edi
jmp ret_from_exception jmp ret_from_exception
END(common_exception_read_cr2) SYM_CODE_END(common_exception_read_cr2)
common_exception: SYM_CODE_START_LOCAL_NOALIGN(common_exception)
/* the function address is in %gs's slot on the stack */ /* the function address is in %gs's slot on the stack */
SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1 SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
ENCODE_FRAME_POINTER ENCODE_FRAME_POINTER
@ -1519,9 +1519,9 @@ common_exception:
movl %esp, %eax # pt_regs pointer movl %esp, %eax # pt_regs pointer
CALL_NOSPEC %edi CALL_NOSPEC %edi
jmp ret_from_exception jmp ret_from_exception
END(common_exception) SYM_CODE_END(common_exception)
ENTRY(debug) SYM_CODE_START(debug)
/* /*
* Entry from sysenter is now handled in common_exception * Entry from sysenter is now handled in common_exception
*/ */
@ -1529,7 +1529,7 @@ ENTRY(debug)
pushl $-1 # mark this as an int pushl $-1 # mark this as an int
pushl $do_debug pushl $do_debug
jmp common_exception jmp common_exception
END(debug) SYM_CODE_END(debug)
/* /*
* NMI is doubly nasty. It can happen on the first instruction of * NMI is doubly nasty. It can happen on the first instruction of
@ -1538,7 +1538,7 @@ END(debug)
* switched stacks. We handle both conditions by simply checking whether we * switched stacks. We handle both conditions by simply checking whether we
* interrupted kernel code running on the SYSENTER stack. * interrupted kernel code running on the SYSENTER stack.
*/ */
ENTRY(nmi) SYM_CODE_START(nmi)
ASM_CLAC ASM_CLAC
#ifdef CONFIG_X86_ESPFIX32 #ifdef CONFIG_X86_ESPFIX32
@ -1631,9 +1631,9 @@ ENTRY(nmi)
lss (1+5+6)*4(%esp), %esp # back to espfix stack lss (1+5+6)*4(%esp), %esp # back to espfix stack
jmp .Lirq_return jmp .Lirq_return
#endif #endif
END(nmi) SYM_CODE_END(nmi)
ENTRY(int3) SYM_CODE_START(int3)
ASM_CLAC ASM_CLAC
pushl $-1 # mark this as an int pushl $-1 # mark this as an int
@ -1644,22 +1644,22 @@ ENTRY(int3)
movl %esp, %eax # pt_regs pointer movl %esp, %eax # pt_regs pointer
call do_int3 call do_int3
jmp ret_from_exception jmp ret_from_exception
END(int3) SYM_CODE_END(int3)
ENTRY(general_protection) SYM_CODE_START(general_protection)
pushl $do_general_protection pushl $do_general_protection
jmp common_exception jmp common_exception
END(general_protection) SYM_CODE_END(general_protection)
#ifdef CONFIG_KVM_GUEST #ifdef CONFIG_KVM_GUEST
ENTRY(async_page_fault) SYM_CODE_START(async_page_fault)
ASM_CLAC ASM_CLAC
pushl $do_async_page_fault pushl $do_async_page_fault
jmp common_exception_read_cr2 jmp common_exception_read_cr2
END(async_page_fault) SYM_CODE_END(async_page_fault)
#endif #endif
ENTRY(rewind_stack_do_exit) SYM_CODE_START(rewind_stack_do_exit)
/* Prevent any naive code from trying to unwind to our caller. */ /* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp xorl %ebp, %ebp
@ -1668,4 +1668,4 @@ ENTRY(rewind_stack_do_exit)
call do_exit call do_exit
1: jmp 1b 1: jmp 1b
END(rewind_stack_do_exit) SYM_CODE_END(rewind_stack_do_exit)


@ -15,7 +15,7 @@
* at the top of the kernel process stack. * at the top of the kernel process stack.
* *
* Some macro usage: * Some macro usage:
* - ENTRY/END: Define functions in the symbol table. * - SYM_FUNC_START/END:Define functions in the symbol table.
* - TRACE_IRQ_*: Trace hardirq state for lock debugging. * - TRACE_IRQ_*: Trace hardirq state for lock debugging.
* - idtentry: Define exception entry points. * - idtentry: Define exception entry points.
*/ */
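As a concrete illustration of the first item in the list above, a hypothetical function under the new scheme looks like this (a minimal sketch; my_func is not a real kernel symbol):

        #include <linux/linkage.h>

        SYM_FUNC_START(my_func)		/* was: ENTRY(my_func) */
        	movl	$1, %eax
        	ret
        SYM_FUNC_END(my_func)		/* was: ENDPROC(my_func) */

Per include/linux/linkage.h, SYM_FUNC_START roughly expands to '.globl my_func ; .p2align 4, 0x90 ; my_func:', and SYM_FUNC_END to '.type my_func STT_FUNC ; .size my_func, . - my_func', so objtool and debuggers see a typed symbol with a known length.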
@ -46,11 +46,11 @@
.section .entry.text, "ax" .section .entry.text, "ax"
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
ENTRY(native_usergs_sysret64) SYM_CODE_START(native_usergs_sysret64)
UNWIND_HINT_EMPTY UNWIND_HINT_EMPTY
swapgs swapgs
sysretq sysretq
END(native_usergs_sysret64) SYM_CODE_END(native_usergs_sysret64)
#endif /* CONFIG_PARAVIRT */ #endif /* CONFIG_PARAVIRT */
.macro TRACE_IRQS_FLAGS flags:req .macro TRACE_IRQS_FLAGS flags:req
@ -142,7 +142,7 @@ END(native_usergs_sysret64)
* with them due to bugs in both AMD and Intel CPUs. * with them due to bugs in both AMD and Intel CPUs.
*/ */
ENTRY(entry_SYSCALL_64) SYM_CODE_START(entry_SYSCALL_64)
UNWIND_HINT_EMPTY UNWIND_HINT_EMPTY
/* /*
* Interrupts are off on entry. * Interrupts are off on entry.
@ -162,7 +162,7 @@ ENTRY(entry_SYSCALL_64)
pushq %r11 /* pt_regs->flags */ pushq %r11 /* pt_regs->flags */
pushq $__USER_CS /* pt_regs->cs */ pushq $__USER_CS /* pt_regs->cs */
pushq %rcx /* pt_regs->ip */ pushq %rcx /* pt_regs->ip */
GLOBAL(entry_SYSCALL_64_after_hwframe) SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
pushq %rax /* pt_regs->orig_ax */ pushq %rax /* pt_regs->orig_ax */
PUSH_AND_CLEAR_REGS rax=$-ENOSYS PUSH_AND_CLEAR_REGS rax=$-ENOSYS
@ -273,13 +273,13 @@ syscall_return_via_sysret:
popq %rdi popq %rdi
popq %rsp popq %rsp
USERGS_SYSRET64 USERGS_SYSRET64
END(entry_SYSCALL_64) SYM_CODE_END(entry_SYSCALL_64)
/* /*
* %rdi: prev task * %rdi: prev task
* %rsi: next task * %rsi: next task
*/ */
ENTRY(__switch_to_asm) SYM_CODE_START(__switch_to_asm)
UNWIND_HINT_FUNC UNWIND_HINT_FUNC
/* /*
* Save callee-saved registers * Save callee-saved registers
@ -321,7 +321,7 @@ ENTRY(__switch_to_asm)
popq %rbp popq %rbp
jmp __switch_to jmp __switch_to
END(__switch_to_asm) SYM_CODE_END(__switch_to_asm)
/* /*
* A newly forked process directly context switches into this address. * A newly forked process directly context switches into this address.
@ -330,7 +330,7 @@ END(__switch_to_asm)
* rbx: kernel thread func (NULL for user thread) * rbx: kernel thread func (NULL for user thread)
* r12: kernel thread arg * r12: kernel thread arg
*/ */
ENTRY(ret_from_fork) SYM_CODE_START(ret_from_fork)
UNWIND_HINT_EMPTY UNWIND_HINT_EMPTY
movq %rax, %rdi movq %rax, %rdi
call schedule_tail /* rdi: 'prev' task parameter */ call schedule_tail /* rdi: 'prev' task parameter */
@ -357,14 +357,14 @@ ENTRY(ret_from_fork)
*/ */
movq $0, RAX(%rsp) movq $0, RAX(%rsp)
jmp 2b jmp 2b
END(ret_from_fork) SYM_CODE_END(ret_from_fork)
/* /*
* Build the entry stubs with some assembler magic. * Build the entry stubs with some assembler magic.
* We pack 1 stub into every 8-byte block. * We pack 1 stub into every 8-byte block.
*/ */
.align 8 .align 8
ENTRY(irq_entries_start) SYM_CODE_START(irq_entries_start)
vector=FIRST_EXTERNAL_VECTOR vector=FIRST_EXTERNAL_VECTOR
.rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR) .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
UNWIND_HINT_IRET_REGS UNWIND_HINT_IRET_REGS
@ -373,10 +373,10 @@ ENTRY(irq_entries_start)
.align 8 .align 8
vector=vector+1 vector=vector+1
.endr .endr
END(irq_entries_start) SYM_CODE_END(irq_entries_start)
.align 8 .align 8
ENTRY(spurious_entries_start) SYM_CODE_START(spurious_entries_start)
vector=FIRST_SYSTEM_VECTOR vector=FIRST_SYSTEM_VECTOR
.rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
UNWIND_HINT_IRET_REGS UNWIND_HINT_IRET_REGS
@ -385,7 +385,7 @@ ENTRY(spurious_entries_start)
.align 8 .align 8
vector=vector+1 vector=vector+1
.endr .endr
END(spurious_entries_start) SYM_CODE_END(spurious_entries_start)
.macro DEBUG_ENTRY_ASSERT_IRQS_OFF .macro DEBUG_ENTRY_ASSERT_IRQS_OFF
#ifdef CONFIG_DEBUG_ENTRY #ifdef CONFIG_DEBUG_ENTRY
@ -511,7 +511,7 @@ END(spurious_entries_start)
* | return address | * | return address |
* +----------------------------------------------------+ * +----------------------------------------------------+
*/ */
ENTRY(interrupt_entry) SYM_CODE_START(interrupt_entry)
UNWIND_HINT_FUNC UNWIND_HINT_FUNC
ASM_CLAC ASM_CLAC
cld cld
@ -579,7 +579,7 @@ ENTRY(interrupt_entry)
TRACE_IRQS_OFF TRACE_IRQS_OFF
ret ret
END(interrupt_entry) SYM_CODE_END(interrupt_entry)
_ASM_NOKPROBE(interrupt_entry) _ASM_NOKPROBE(interrupt_entry)
@ -589,18 +589,18 @@ _ASM_NOKPROBE(interrupt_entry)
* The interrupt stubs push (~vector+0x80) onto the stack and * The interrupt stubs push (~vector+0x80) onto the stack and
* then jump to common_spurious/interrupt. * then jump to common_spurious/interrupt.
*/ */
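The constants are easier to follow with one worked case (illustrative arithmetic only): for vector 0x80, the stub pushes ~0x80 + 0x80 = 0x7f - 0x80 = -1, and in general 0x7f - vector, which always fits in a signed byte, so each push stays short enough for a stub to fit in its 8-byte slot. The 'addq $-0x80, (%rsp)' below then yields (0x7f - vector) - 0x80 = -(vector + 1), i.e. a value in [-256, -1] from which the C handler recovers the vector by complementing it.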
common_spurious: SYM_CODE_START_LOCAL(common_spurious)
addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */
call interrupt_entry call interrupt_entry
UNWIND_HINT_REGS indirect=1 UNWIND_HINT_REGS indirect=1
call smp_spurious_interrupt /* rdi points to pt_regs */ call smp_spurious_interrupt /* rdi points to pt_regs */
jmp ret_from_intr jmp ret_from_intr
END(common_spurious) SYM_CODE_END(common_spurious)
_ASM_NOKPROBE(common_spurious) _ASM_NOKPROBE(common_spurious)
/* common_interrupt is a hotpath. Align it */ /* common_interrupt is a hotpath. Align it */
.p2align CONFIG_X86_L1_CACHE_SHIFT .p2align CONFIG_X86_L1_CACHE_SHIFT
common_interrupt: SYM_CODE_START_LOCAL(common_interrupt)
addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */
call interrupt_entry call interrupt_entry
UNWIND_HINT_REGS indirect=1 UNWIND_HINT_REGS indirect=1
@ -616,12 +616,12 @@ ret_from_intr:
jz retint_kernel jz retint_kernel
/* Interrupt came from user space */ /* Interrupt came from user space */
GLOBAL(retint_user) .Lretint_user:
mov %rsp,%rdi mov %rsp,%rdi
call prepare_exit_to_usermode call prepare_exit_to_usermode
TRACE_IRQS_IRETQ TRACE_IRQS_IRETQ
GLOBAL(swapgs_restore_regs_and_return_to_usermode) SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
#ifdef CONFIG_DEBUG_ENTRY #ifdef CONFIG_DEBUG_ENTRY
/* Assert that pt_regs indicates user mode. */ /* Assert that pt_regs indicates user mode. */
testb $3, CS(%rsp) testb $3, CS(%rsp)
@ -679,7 +679,7 @@ retint_kernel:
*/ */
TRACE_IRQS_IRETQ TRACE_IRQS_IRETQ
GLOBAL(restore_regs_and_return_to_kernel) SYM_INNER_LABEL(restore_regs_and_return_to_kernel, SYM_L_GLOBAL)
#ifdef CONFIG_DEBUG_ENTRY #ifdef CONFIG_DEBUG_ENTRY
/* Assert that pt_regs indicates kernel mode. */ /* Assert that pt_regs indicates kernel mode. */
testb $3, CS(%rsp) testb $3, CS(%rsp)
@ -695,7 +695,7 @@ GLOBAL(restore_regs_and_return_to_kernel)
*/ */
INTERRUPT_RETURN INTERRUPT_RETURN
ENTRY(native_iret) SYM_INNER_LABEL_ALIGN(native_iret, SYM_L_GLOBAL)
UNWIND_HINT_IRET_REGS UNWIND_HINT_IRET_REGS
/* /*
* Are we returning to a stack segment from the LDT? Note: in * Are we returning to a stack segment from the LDT? Note: in
@ -706,8 +706,7 @@ ENTRY(native_iret)
jnz native_irq_return_ldt jnz native_irq_return_ldt
#endif #endif
.global native_irq_return_iret SYM_INNER_LABEL(native_irq_return_iret, SYM_L_GLOBAL)
native_irq_return_iret:
/* /*
* This may fault. Non-paranoid faults on return to userspace are * This may fault. Non-paranoid faults on return to userspace are
* handled by fixup_bad_iret. These include #SS, #GP, and #NP. * handled by fixup_bad_iret. These include #SS, #GP, and #NP.
@ -789,14 +788,14 @@ native_irq_return_ldt:
*/ */
jmp native_irq_return_iret jmp native_irq_return_iret
#endif #endif
END(common_interrupt) SYM_CODE_END(common_interrupt)
_ASM_NOKPROBE(common_interrupt) _ASM_NOKPROBE(common_interrupt)
/* /*
* APIC interrupts. * APIC interrupts.
*/ */
.macro apicinterrupt3 num sym do_sym .macro apicinterrupt3 num sym do_sym
ENTRY(\sym) SYM_CODE_START(\sym)
UNWIND_HINT_IRET_REGS UNWIND_HINT_IRET_REGS
pushq $~(\num) pushq $~(\num)
.Lcommon_\sym: .Lcommon_\sym:
@ -804,7 +803,7 @@ ENTRY(\sym)
UNWIND_HINT_REGS indirect=1 UNWIND_HINT_REGS indirect=1
call \do_sym /* rdi points to pt_regs */ call \do_sym /* rdi points to pt_regs */
jmp ret_from_intr jmp ret_from_intr
END(\sym) SYM_CODE_END(\sym)
_ASM_NOKPROBE(\sym) _ASM_NOKPROBE(\sym)
.endm .endm
@ -969,7 +968,7 @@ apicinterrupt IRQ_WORK_VECTOR irq_work_interrupt smp_irq_work_interrupt
* #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS. * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
*/ */
.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0 create_gap=0 read_cr2=0 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0 create_gap=0 read_cr2=0
ENTRY(\sym) SYM_CODE_START(\sym)
UNWIND_HINT_IRET_REGS offset=\has_error_code*8 UNWIND_HINT_IRET_REGS offset=\has_error_code*8
/* Sanity check */ /* Sanity check */
@ -1019,7 +1018,7 @@ ENTRY(\sym)
.endif .endif
_ASM_NOKPROBE(\sym) _ASM_NOKPROBE(\sym)
END(\sym) SYM_CODE_END(\sym)
.endm .endm
idtentry divide_error do_divide_error has_error_code=0 idtentry divide_error do_divide_error has_error_code=0
@ -1041,7 +1040,7 @@ idtentry simd_coprocessor_error do_simd_coprocessor_error has_error_code=0
* Reload gs selector with exception handling * Reload gs selector with exception handling
* edi: new selector * edi: new selector
*/ */
ENTRY(native_load_gs_index) SYM_FUNC_START(native_load_gs_index)
FRAME_BEGIN FRAME_BEGIN
pushfq pushfq
DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI) DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI)
@ -1055,13 +1054,13 @@ ENTRY(native_load_gs_index)
popfq popfq
FRAME_END FRAME_END
ret ret
ENDPROC(native_load_gs_index) SYM_FUNC_END(native_load_gs_index)
EXPORT_SYMBOL(native_load_gs_index) EXPORT_SYMBOL(native_load_gs_index)
_ASM_EXTABLE(.Lgs_change, .Lbad_gs) _ASM_EXTABLE(.Lgs_change, .Lbad_gs)
.section .fixup, "ax" .section .fixup, "ax"
/* running with kernelgs */ /* running with kernelgs */
.Lbad_gs: SYM_CODE_START_LOCAL_NOALIGN(.Lbad_gs)
SWAPGS /* switch back to user gs */ SWAPGS /* switch back to user gs */
.macro ZAP_GS .macro ZAP_GS
/* This can't be a string because the preprocessor needs to see it. */ /* This can't be a string because the preprocessor needs to see it. */
@ -1072,10 +1071,11 @@ EXPORT_SYMBOL(native_load_gs_index)
xorl %eax, %eax xorl %eax, %eax
movl %eax, %gs movl %eax, %gs
jmp 2b jmp 2b
SYM_CODE_END(.Lbad_gs)
.previous .previous
/* Call softirq on interrupt stack. Interrupts are off. */ /* Call softirq on interrupt stack. Interrupts are off. */
ENTRY(do_softirq_own_stack) SYM_FUNC_START(do_softirq_own_stack)
pushq %rbp pushq %rbp
mov %rsp, %rbp mov %rsp, %rbp
ENTER_IRQ_STACK regs=0 old_rsp=%r11 ENTER_IRQ_STACK regs=0 old_rsp=%r11
@ -1083,7 +1083,7 @@ ENTRY(do_softirq_own_stack)
LEAVE_IRQ_STACK regs=0 LEAVE_IRQ_STACK regs=0
leaveq leaveq
ret ret
ENDPROC(do_softirq_own_stack) SYM_FUNC_END(do_softirq_own_stack)
#ifdef CONFIG_XEN_PV #ifdef CONFIG_XEN_PV
idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0 idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
@ -1101,7 +1101,8 @@ idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
* existing activation in its critical region -- if so, we pop the current * existing activation in its critical region -- if so, we pop the current
* activation and restart the handler using the previous one. * activation and restart the handler using the previous one.
*/ */
ENTRY(xen_do_hypervisor_callback) /* do_hypervisor_callback(struct *pt_regs) */ /* do_hypervisor_callback(struct *pt_regs) */
SYM_CODE_START_LOCAL(xen_do_hypervisor_callback)
/* /*
* Since we don't modify %rdi, evtchn_do_upall(struct *pt_regs) will * Since we don't modify %rdi, evtchn_do_upall(struct *pt_regs) will
@ -1119,7 +1120,7 @@ ENTRY(xen_do_hypervisor_callback) /* do_hypervisor_callback(struct *pt_regs) */
call xen_maybe_preempt_hcall call xen_maybe_preempt_hcall
#endif #endif
jmp error_exit jmp error_exit
END(xen_do_hypervisor_callback) SYM_CODE_END(xen_do_hypervisor_callback)
/* /*
* Hypervisor uses this for application faults while it executes. * Hypervisor uses this for application faults while it executes.
@ -1134,7 +1135,7 @@ END(xen_do_hypervisor_callback)
* We distinguish between categories by comparing each saved segment register * We distinguish between categories by comparing each saved segment register
* with its current contents: any discrepancy means we are in category 1. * with its current contents: any discrepancy means we are in category 1.
*/ */
ENTRY(xen_failsafe_callback) SYM_CODE_START(xen_failsafe_callback)
UNWIND_HINT_EMPTY UNWIND_HINT_EMPTY
movl %ds, %ecx movl %ds, %ecx
cmpw %cx, 0x10(%rsp) cmpw %cx, 0x10(%rsp)
@ -1164,7 +1165,7 @@ ENTRY(xen_failsafe_callback)
PUSH_AND_CLEAR_REGS PUSH_AND_CLEAR_REGS
ENCODE_FRAME_POINTER ENCODE_FRAME_POINTER
jmp error_exit jmp error_exit
END(xen_failsafe_callback) SYM_CODE_END(xen_failsafe_callback)
#endif /* CONFIG_XEN_PV */ #endif /* CONFIG_XEN_PV */
#ifdef CONFIG_XEN_PVHVM #ifdef CONFIG_XEN_PVHVM
@ -1214,7 +1215,7 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
* Use slow, but surefire "are we in kernel?" check. * Use slow, but surefire "are we in kernel?" check.
* Return: ebx=0: need swapgs on exit, ebx=1: otherwise * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
*/ */
ENTRY(paranoid_entry) SYM_CODE_START_LOCAL(paranoid_entry)
UNWIND_HINT_FUNC UNWIND_HINT_FUNC
cld cld
PUSH_AND_CLEAR_REGS save_ret=1 PUSH_AND_CLEAR_REGS save_ret=1
@ -1248,7 +1249,7 @@ ENTRY(paranoid_entry)
FENCE_SWAPGS_KERNEL_ENTRY FENCE_SWAPGS_KERNEL_ENTRY
ret ret
END(paranoid_entry) SYM_CODE_END(paranoid_entry)
/* /*
* "Paranoid" exit path from exception stack. This is invoked * "Paranoid" exit path from exception stack. This is invoked
@ -1262,7 +1263,7 @@ END(paranoid_entry)
* *
* On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it) * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
*/ */
ENTRY(paranoid_exit) SYM_CODE_START_LOCAL(paranoid_exit)
UNWIND_HINT_REGS UNWIND_HINT_REGS
DISABLE_INTERRUPTS(CLBR_ANY) DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF_DEBUG TRACE_IRQS_OFF_DEBUG
@ -1272,19 +1273,18 @@ ENTRY(paranoid_exit)
/* Always restore stashed CR3 value (see paranoid_entry) */ /* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14 RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
SWAPGS_UNSAFE_STACK SWAPGS_UNSAFE_STACK
jmp .Lparanoid_exit_restore jmp restore_regs_and_return_to_kernel
.Lparanoid_exit_no_swapgs: .Lparanoid_exit_no_swapgs:
TRACE_IRQS_IRETQ_DEBUG TRACE_IRQS_IRETQ_DEBUG
/* Always restore stashed CR3 value (see paranoid_entry) */ /* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14 RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
.Lparanoid_exit_restore:
jmp restore_regs_and_return_to_kernel jmp restore_regs_and_return_to_kernel
END(paranoid_exit) SYM_CODE_END(paranoid_exit)
/* /*
* Save all registers in pt_regs, and switch GS if needed. * Save all registers in pt_regs, and switch GS if needed.
*/ */
ENTRY(error_entry) SYM_CODE_START_LOCAL(error_entry)
UNWIND_HINT_FUNC UNWIND_HINT_FUNC
cld cld
PUSH_AND_CLEAR_REGS save_ret=1 PUSH_AND_CLEAR_REGS save_ret=1
@ -1364,16 +1364,16 @@ ENTRY(error_entry)
call fixup_bad_iret call fixup_bad_iret
mov %rax, %rsp mov %rax, %rsp
jmp .Lerror_entry_from_usermode_after_swapgs jmp .Lerror_entry_from_usermode_after_swapgs
END(error_entry) SYM_CODE_END(error_entry)
ENTRY(error_exit) SYM_CODE_START_LOCAL(error_exit)
UNWIND_HINT_REGS UNWIND_HINT_REGS
DISABLE_INTERRUPTS(CLBR_ANY) DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF TRACE_IRQS_OFF
testb $3, CS(%rsp) testb $3, CS(%rsp)
jz retint_kernel jz retint_kernel
jmp retint_user jmp .Lretint_user
END(error_exit) SYM_CODE_END(error_exit)
/* /*
* Runs on exception stack. Xen PV does not go through this path at all, * Runs on exception stack. Xen PV does not go through this path at all,
@ -1383,7 +1383,7 @@ END(error_exit)
* %r14: Used to save/restore the CR3 of the interrupted context * %r14: Used to save/restore the CR3 of the interrupted context
* when PAGE_TABLE_ISOLATION is in use. Do not clobber. * when PAGE_TABLE_ISOLATION is in use. Do not clobber.
*/ */
ENTRY(nmi) SYM_CODE_START(nmi)
UNWIND_HINT_IRET_REGS UNWIND_HINT_IRET_REGS
/* /*
@ -1718,21 +1718,21 @@ nmi_restore:
* about espfix64 on the way back to kernel mode. * about espfix64 on the way back to kernel mode.
*/ */
iretq iretq
END(nmi) SYM_CODE_END(nmi)
#ifndef CONFIG_IA32_EMULATION #ifndef CONFIG_IA32_EMULATION
/* /*
* This handles SYSCALL from 32-bit code. There is no way to program * This handles SYSCALL from 32-bit code. There is no way to program
* MSRs to fully disable 32-bit SYSCALL. * MSRs to fully disable 32-bit SYSCALL.
*/ */
ENTRY(ignore_sysret) SYM_CODE_START(ignore_sysret)
UNWIND_HINT_EMPTY UNWIND_HINT_EMPTY
mov $-ENOSYS, %eax mov $-ENOSYS, %eax
sysret sysret
END(ignore_sysret) SYM_CODE_END(ignore_sysret)
#endif #endif
ENTRY(rewind_stack_do_exit) SYM_CODE_START(rewind_stack_do_exit)
UNWIND_HINT_FUNC UNWIND_HINT_FUNC
/* Prevent any naive code from trying to unwind to our caller. */ /* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp xorl %ebp, %ebp
@ -1742,4 +1742,4 @@ ENTRY(rewind_stack_do_exit)
UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE
call do_exit call do_exit
END(rewind_stack_do_exit) SYM_CODE_END(rewind_stack_do_exit)


@ -46,7 +46,7 @@
* ebp user stack * ebp user stack
* 0(%ebp) arg6 * 0(%ebp) arg6
*/ */
ENTRY(entry_SYSENTER_compat) SYM_FUNC_START(entry_SYSENTER_compat)
/* Interrupts are off on entry. */ /* Interrupts are off on entry. */
SWAPGS SWAPGS
@ -146,8 +146,8 @@ ENTRY(entry_SYSENTER_compat)
pushq $X86_EFLAGS_FIXED pushq $X86_EFLAGS_FIXED
popfq popfq
jmp .Lsysenter_flags_fixed jmp .Lsysenter_flags_fixed
GLOBAL(__end_entry_SYSENTER_compat) SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL)
ENDPROC(entry_SYSENTER_compat) SYM_FUNC_END(entry_SYSENTER_compat)
/* /*
* 32-bit SYSCALL entry. * 32-bit SYSCALL entry.
@ -196,7 +196,7 @@ ENDPROC(entry_SYSENTER_compat)
* esp user stack * esp user stack
* 0(%esp) arg6 * 0(%esp) arg6
*/ */
ENTRY(entry_SYSCALL_compat) SYM_CODE_START(entry_SYSCALL_compat)
/* Interrupts are off on entry. */ /* Interrupts are off on entry. */
swapgs swapgs
@ -215,7 +215,7 @@ ENTRY(entry_SYSCALL_compat)
pushq %r11 /* pt_regs->flags */ pushq %r11 /* pt_regs->flags */
pushq $__USER32_CS /* pt_regs->cs */ pushq $__USER32_CS /* pt_regs->cs */
pushq %rcx /* pt_regs->ip */ pushq %rcx /* pt_regs->ip */
GLOBAL(entry_SYSCALL_compat_after_hwframe) SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL)
movl %eax, %eax /* discard orig_ax high bits */ movl %eax, %eax /* discard orig_ax high bits */
pushq %rax /* pt_regs->orig_ax */ pushq %rax /* pt_regs->orig_ax */
pushq %rdi /* pt_regs->di */ pushq %rdi /* pt_regs->di */
@ -311,7 +311,7 @@ sysret32_from_system_call:
xorl %r10d, %r10d xorl %r10d, %r10d
swapgs swapgs
sysretl sysretl
END(entry_SYSCALL_compat) SYM_CODE_END(entry_SYSCALL_compat)
/* /*
* 32-bit legacy system call entry. * 32-bit legacy system call entry.
@ -339,7 +339,7 @@ END(entry_SYSCALL_compat)
* edi arg5 * edi arg5
* ebp arg6 * ebp arg6
*/ */
ENTRY(entry_INT80_compat) SYM_CODE_START(entry_INT80_compat)
/* /*
* Interrupts are off on entry. * Interrupts are off on entry.
*/ */
@ -416,4 +416,4 @@ ENTRY(entry_INT80_compat)
/* Go back to user mode. */ /* Go back to user mode. */
TRACE_IRQS_ON TRACE_IRQS_ON
jmp swapgs_restore_regs_and_return_to_usermode jmp swapgs_restore_regs_and_return_to_usermode
END(entry_INT80_compat) SYM_CODE_END(entry_INT80_compat)


@ -10,8 +10,7 @@
/* put return address in eax (arg1) */ /* put return address in eax (arg1) */
.macro THUNK name, func, put_ret_addr_in_eax=0 .macro THUNK name, func, put_ret_addr_in_eax=0
.globl \name SYM_CODE_START_NOALIGN(\name)
\name:
pushl %eax pushl %eax
pushl %ecx pushl %ecx
pushl %edx pushl %edx
@ -27,6 +26,7 @@
popl %eax popl %eax
ret ret
_ASM_NOKPROBE(\name) _ASM_NOKPROBE(\name)
SYM_CODE_END(\name)
.endm .endm
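For reference, the thunks are instantiated along these lines (names as used by the irq-flags tracing code; treat the exact list as an assumption, since the instantiations sit outside this hunk):

        THUNK trace_hardirqs_on_thunk, trace_hardirqs_on, 1
        THUNK trace_hardirqs_off_thunk, trace_hardirqs_off, 1

Each expansion now opens with SYM_CODE_START_NOALIGN(\name) and closes with SYM_CODE_END(\name), so every generated thunk is a delimited, typed symbol rather than a bare '.globl \name ; \name:' pair.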
#ifdef CONFIG_TRACE_IRQFLAGS #ifdef CONFIG_TRACE_IRQFLAGS


@ -12,7 +12,7 @@
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */ /* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func, put_ret_addr_in_rdi=0 .macro THUNK name, func, put_ret_addr_in_rdi=0
ENTRY(\name) SYM_FUNC_START_NOALIGN(\name)
pushq %rbp pushq %rbp
movq %rsp, %rbp movq %rsp, %rbp
@ -33,7 +33,7 @@
call \func call \func
jmp .L_restore jmp .L_restore
ENDPROC(\name) SYM_FUNC_END(\name)
_ASM_NOKPROBE(\name) _ASM_NOKPROBE(\name)
.endm .endm
@ -56,7 +56,7 @@
#if defined(CONFIG_TRACE_IRQFLAGS) \ #if defined(CONFIG_TRACE_IRQFLAGS) \
|| defined(CONFIG_DEBUG_LOCK_ALLOC) \ || defined(CONFIG_DEBUG_LOCK_ALLOC) \
|| defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPTION)
.L_restore: SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
popq %r11 popq %r11
popq %r10 popq %r10
popq %r9 popq %r9
@ -69,4 +69,5 @@
popq %rbp popq %rbp
ret ret
_ASM_NOKPROBE(.L_restore) _ASM_NOKPROBE(.L_restore)
SYM_CODE_END(.L_restore)
#endif #endif


@ -87,11 +87,9 @@ $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS
# #
# vDSO code runs in userspace and -pg doesn't help with profiling anyway. # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
# #
CFLAGS_REMOVE_vdso-note.o = -pg
CFLAGS_REMOVE_vclock_gettime.o = -pg CFLAGS_REMOVE_vclock_gettime.o = -pg
CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
CFLAGS_REMOVE_vgetcpu.o = -pg CFLAGS_REMOVE_vgetcpu.o = -pg
CFLAGS_REMOVE_vvar.o = -pg
# #
# X32 processes use x32 vDSO to access 64bit kernel data. # X32 processes use x32 vDSO to access 64bit kernel data.


@ -62,7 +62,7 @@ __kernel_vsyscall:
/* Enter using int $0x80 */ /* Enter using int $0x80 */
int $0x80 int $0x80
GLOBAL(int80_landing_pad) SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)
/* /*
* Restore EDX and ECX in case they were clobbered. EBP is not * Restore EDX and ECX in case they were clobbered. EBP is not


@ -13,10 +13,6 @@
#ifdef __ASSEMBLY__ #ifdef __ASSEMBLY__
#define GLOBAL(name) \
.globl name; \
name:
#if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16) #if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
#define __ALIGN .p2align 4, 0x90 #define __ALIGN .p2align 4, 0x90
#define __ALIGN_STR __stringify(__ALIGN) #define __ALIGN_STR __stringify(__ALIGN)
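The deleted GLOBAL() helper has no direct replacement here; its users now pick the semantically matching SYM_* macro instead. For global labels that are jump targets inside another piece of code, that is SYM_INNER_LABEL(name, SYM_L_GLOBAL), which per include/linux/linkage.h expands approximately to:

        .type name SYM_T_NONE	/* STT_NOTYPE: a label, not a function */
        .globl name
        name:

The GLOBAL() conversions throughout this merge (entry_SYSCALL_64_after_hwframe, int80_landing_pad, and so on) all follow this pattern.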


@ -966,7 +966,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
extern unsigned long arch_align_stack(unsigned long sp); extern unsigned long arch_align_stack(unsigned long sp);
void free_init_pages(const char *what, unsigned long begin, unsigned long end); void free_init_pages(const char *what, unsigned long begin, unsigned long end);
extern void free_kernel_image_pages(void *begin, void *end); extern void free_kernel_image_pages(const char *what, void *begin, void *end);
void default_idle(void); void default_idle(void);
#ifdef CONFIG_XEN #ifdef CONFIG_XEN


@ -6,7 +6,6 @@
#include <asm/extable.h> #include <asm/extable.h>
extern char __brk_base[], __brk_limit[]; extern char __brk_base[], __brk_limit[];
extern struct exception_table_entry __stop___ex_table[];
extern char __end_rodata_aligned[]; extern char __end_rodata_aligned[];
#if defined(CONFIG_X86_64) #if defined(CONFIG_X86_64)


@ -9,8 +9,7 @@
.code32 .code32
ALIGN ALIGN
ENTRY(wakeup_pmode_return) SYM_CODE_START(wakeup_pmode_return)
wakeup_pmode_return:
movw $__KERNEL_DS, %ax movw $__KERNEL_DS, %ax
movw %ax, %ss movw %ax, %ss
movw %ax, %fs movw %ax, %fs
@ -39,6 +38,7 @@ wakeup_pmode_return:
# jump to place where we left off # jump to place where we left off
movl saved_eip, %eax movl saved_eip, %eax
jmp *%eax jmp *%eax
SYM_CODE_END(wakeup_pmode_return)
bogus_magic: bogus_magic:
jmp bogus_magic jmp bogus_magic
@ -72,7 +72,7 @@ restore_registers:
popfl popfl
ret ret
ENTRY(do_suspend_lowlevel) SYM_CODE_START(do_suspend_lowlevel)
call save_processor_state call save_processor_state
call save_registers call save_registers
pushl $3 pushl $3
@ -87,10 +87,11 @@ ret_point:
call restore_registers call restore_registers
call restore_processor_state call restore_processor_state
ret ret
SYM_CODE_END(do_suspend_lowlevel)
.data .data
ALIGN ALIGN
ENTRY(saved_magic) .long 0 SYM_DATA(saved_magic, .long 0)
saved_eip: .long 0 saved_eip: .long 0
# saved registers # saved registers


@ -14,7 +14,7 @@
/* /*
* Hooray, we are in Long 64-bit mode (but still running in low memory) * Hooray, we are in Long 64-bit mode (but still running in low memory)
*/ */
ENTRY(wakeup_long64) SYM_FUNC_START(wakeup_long64)
movq saved_magic, %rax movq saved_magic, %rax
movq $0x123456789abcdef0, %rdx movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax cmpq %rdx, %rax
@ -40,9 +40,9 @@ ENTRY(wakeup_long64)
movq saved_rip, %rax movq saved_rip, %rax
jmp *%rax jmp *%rax
ENDPROC(wakeup_long64) SYM_FUNC_END(wakeup_long64)
ENTRY(do_suspend_lowlevel) SYM_FUNC_START(do_suspend_lowlevel)
FRAME_BEGIN FRAME_BEGIN
subq $8, %rsp subq $8, %rsp
xorl %eax, %eax xorl %eax, %eax
@ -125,7 +125,7 @@ ENTRY(do_suspend_lowlevel)
addq $8, %rsp addq $8, %rsp
FRAME_END FRAME_END
jmp restore_processor_state jmp restore_processor_state
ENDPROC(do_suspend_lowlevel) SYM_FUNC_END(do_suspend_lowlevel)
.data .data
saved_rbp: .quad 0 saved_rbp: .quad 0
@ -136,4 +136,4 @@ saved_rbx: .quad 0
saved_rip: .quad 0 saved_rip: .quad 0
saved_rsp: .quad 0 saved_rsp: .quad 0
ENTRY(saved_magic) .quad 0 SYM_DATA(saved_magic, .quad 0)
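SYM_DATA bundles the begin/end annotations for simple objects. Based on include/linux/linkage.h, the line above expands approximately to:

        .globl saved_magic
        saved_magic:
        	.quad 0
        .type saved_magic STT_OBJECT
        .size saved_magic, . - saved_magic

which gives tools the object's size, information the old 'ENTRY(saved_magic) .quad 0' never carried (ENTRY only emitted an aligned global label).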


@ -12,20 +12,18 @@
#include <asm/frame.h> #include <asm/frame.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
# define function_hook __fentry__
EXPORT_SYMBOL(__fentry__)
#ifdef CONFIG_FRAME_POINTER #ifdef CONFIG_FRAME_POINTER
# define MCOUNT_FRAME 1 /* using frame = true */ # define MCOUNT_FRAME 1 /* using frame = true */
#else #else
# define MCOUNT_FRAME 0 /* using frame = false */ # define MCOUNT_FRAME 0 /* using frame = false */
#endif #endif
ENTRY(function_hook) SYM_FUNC_START(__fentry__)
ret ret
END(function_hook) SYM_FUNC_END(__fentry__)
EXPORT_SYMBOL(__fentry__)
ENTRY(ftrace_caller) SYM_CODE_START(ftrace_caller)
#ifdef CONFIG_FRAME_POINTER #ifdef CONFIG_FRAME_POINTER
/* /*
@ -85,11 +83,11 @@ ftrace_graph_call:
#endif #endif
/* This is weak to keep gas from relaxing the jumps */ /* This is weak to keep gas from relaxing the jumps */
WEAK(ftrace_stub) SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
ret ret
END(ftrace_caller) SYM_CODE_END(ftrace_caller)
ENTRY(ftrace_regs_caller) SYM_CODE_START(ftrace_regs_caller)
/* /*
* We're here from an mcount/fentry CALL, and the stack frame looks like: * We're here from an mcount/fentry CALL, and the stack frame looks like:
* *
@ -138,7 +136,7 @@ ENTRY(ftrace_regs_caller)
movl function_trace_op, %ecx # 3rd argument: ftrace_pos movl function_trace_op, %ecx # 3rd argument: ftrace_pos
pushl %esp # 4th argument: pt_regs pushl %esp # 4th argument: pt_regs
GLOBAL(ftrace_regs_call) SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
call ftrace_stub call ftrace_stub
addl $4, %esp # skip 4th argument addl $4, %esp # skip 4th argument
@ -163,9 +161,10 @@ GLOBAL(ftrace_regs_call)
popl %eax popl %eax
jmp .Lftrace_ret jmp .Lftrace_ret
SYM_CODE_END(ftrace_regs_caller)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
ENTRY(ftrace_graph_caller) SYM_CODE_START(ftrace_graph_caller)
pushl %eax pushl %eax
pushl %ecx pushl %ecx
pushl %edx pushl %edx
@ -179,7 +178,7 @@ ENTRY(ftrace_graph_caller)
popl %ecx popl %ecx
popl %eax popl %eax
ret ret
END(ftrace_graph_caller) SYM_CODE_END(ftrace_graph_caller)
.globl return_to_handler .globl return_to_handler
return_to_handler: return_to_handler:


@@ -14,9 +14,6 @@
 	.code64
 	.section .entry.text, "ax"

-# define function_hook	__fentry__
-EXPORT_SYMBOL(__fentry__)
-
 #ifdef CONFIG_FRAME_POINTER
 /* Save parent and function stack frames (rip and rbp) */
 #  define MCOUNT_FRAME_SIZE	(8+16*2)
@@ -132,22 +129,23 @@ EXPORT_SYMBOL(__fentry__)
 #ifdef CONFIG_DYNAMIC_FTRACE

-ENTRY(function_hook)
+SYM_FUNC_START(__fentry__)
 	retq
-ENDPROC(function_hook)
+SYM_FUNC_END(__fentry__)
+EXPORT_SYMBOL(__fentry__)

-ENTRY(ftrace_caller)
+SYM_FUNC_START(ftrace_caller)
 	/* save_mcount_regs fills in first two parameters */
 	save_mcount_regs

-GLOBAL(ftrace_caller_op_ptr)
+SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
 	/* Load the ftrace_ops into the 3rd parameter */
 	movq	function_trace_op(%rip), %rdx

 	/* regs go into 4th parameter (but make it NULL) */
 	movq	$0, %rcx

-GLOBAL(ftrace_call)
+SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
 	call	ftrace_stub

 	restore_mcount_regs
@@ -157,10 +155,10 @@ GLOBAL(ftrace_call)
 	 * think twice before adding any new code or changing the
 	 * layout here.
 	 */
-GLOBAL(ftrace_epilogue)
+SYM_INNER_LABEL(ftrace_epilogue, SYM_L_GLOBAL)

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-GLOBAL(ftrace_graph_call)
+SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
 	jmp	ftrace_stub
 #endif
@@ -168,11 +166,11 @@ GLOBAL(ftrace_graph_call)
 	 * This is weak to keep gas from relaxing the jumps.
 	 * It is also used to copy the retq for trampolines.
 	 */
-WEAK(ftrace_stub)
+SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
 	retq
-ENDPROC(ftrace_caller)
+SYM_FUNC_END(ftrace_caller)

-ENTRY(ftrace_regs_caller)
+SYM_FUNC_START(ftrace_regs_caller)
 	/* Save the current flags before any operations that can change them */
 	pushfq
@@ -180,7 +178,7 @@ ENTRY(ftrace_regs_caller)
 	save_mcount_regs 8
 	/* save_mcount_regs fills in first two parameters */

-GLOBAL(ftrace_regs_caller_op_ptr)
+SYM_INNER_LABEL(ftrace_regs_caller_op_ptr, SYM_L_GLOBAL)
 	/* Load the ftrace_ops into the 3rd parameter */
 	movq	function_trace_op(%rip), %rdx
@@ -209,7 +207,7 @@ GLOBAL(ftrace_regs_caller_op_ptr)
 	/* regs go into 4th parameter */
 	leaq	(%rsp), %rcx

-GLOBAL(ftrace_regs_call)
+SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
 	call	ftrace_stub

 	/* Copy flags back to SS, to restore them */
@@ -239,16 +237,16 @@ GLOBAL(ftrace_regs_call)
 	 * The trampoline will add the code to jump
 	 * to the return.
 	 */
-GLOBAL(ftrace_regs_caller_end)
+SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)

 	jmp	ftrace_epilogue

-ENDPROC(ftrace_regs_caller)
+SYM_FUNC_END(ftrace_regs_caller)

 #else /* ! CONFIG_DYNAMIC_FTRACE */

-ENTRY(function_hook)
+SYM_FUNC_START(__fentry__)
 	cmpq	$ftrace_stub, ftrace_trace_function
 	jnz	trace
@@ -261,7 +259,7 @@ fgraph_trace:
 	jnz	ftrace_graph_caller
 #endif

-GLOBAL(ftrace_stub)
+SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
 	retq

 trace:
@@ -279,11 +277,12 @@ trace:
 	restore_mcount_regs

 	jmp	fgraph_trace

-ENDPROC(function_hook)
+SYM_FUNC_END(__fentry__)
+EXPORT_SYMBOL(__fentry__)
 #endif /* CONFIG_DYNAMIC_FTRACE */

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-ENTRY(ftrace_graph_caller)
+SYM_FUNC_START(ftrace_graph_caller)
 	/* Saves rbp into %rdx and fills first parameter */
 	save_mcount_regs
@@ -294,9 +293,9 @@ ENTRY(ftrace_graph_caller)
 	restore_mcount_regs

 	retq
-ENDPROC(ftrace_graph_caller)
+SYM_FUNC_END(ftrace_graph_caller)

-ENTRY(return_to_handler)
+SYM_CODE_START(return_to_handler)
 	UNWIND_HINT_EMPTY
 	subq	$24, %rsp
@@ -312,5 +311,5 @@ ENTRY(return_to_handler)
 	movq	(%rsp), %rax
 	addq	$24, %rsp
 	JMP_NOSPEC %rdi
-END(return_to_handler)
+SYM_CODE_END(return_to_handler)
 #endif

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
@@ -64,7 +64,7 @@ RESERVE_BRK(pagetables, INIT_MAP_SIZE)
  * can.
  */
 __HEAD
-ENTRY(startup_32)
+SYM_CODE_START(startup_32)
 	movl pa(initial_stack),%ecx

 	/* test KEEP_SEGMENTS flag to see if the bootloader is asking
@@ -156,7 +156,7 @@ ENTRY(startup_32)
 	jmp *%eax

 .Lbad_subarch:
-WEAK(xen_entry)
+SYM_INNER_LABEL_ALIGN(xen_entry, SYM_L_WEAK)
 	/* Unknown implementation; there's really
 	   nothing we can do at this point. */
 	ud2a
@@ -172,6 +172,7 @@ num_subarch_entries = (. - subarch_entries) / 4
 #else
 	jmp .Ldefault_entry
 #endif /* CONFIG_PARAVIRT */
+SYM_CODE_END(startup_32)

 #ifdef CONFIG_HOTPLUG_CPU
 /*
@@ -179,12 +180,12 @@ num_subarch_entries = (. - subarch_entries) / 4
  * up already except stack. We just set up stack here. Then call
  * start_secondary().
  */
-ENTRY(start_cpu0)
+SYM_FUNC_START(start_cpu0)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
 	call *(initial_code)
 1:	jmp 1b
-ENDPROC(start_cpu0)
+SYM_FUNC_END(start_cpu0)
 #endif

 /*
@@ -195,7 +196,7 @@ ENDPROC(start_cpu0)
  * If cpu hotplug is not supported then this code can go in init section
  * which will be freed later
  */
-ENTRY(startup_32_smp)
+SYM_FUNC_START(startup_32_smp)
 	cld
 	movl $(__BOOT_DS),%eax
 	movl %eax,%ds
@@ -362,7 +363,7 @@ ENTRY(startup_32_smp)
 	call *(initial_code)
 1:	jmp 1b
-ENDPROC(startup_32_smp)
+SYM_FUNC_END(startup_32_smp)

 #include "verify_cpu.S"
@@ -392,7 +393,7 @@ setup_once:
 	andl $0,setup_once_ref	/* Once is enough, thanks */
 	ret

-ENTRY(early_idt_handler_array)
+SYM_FUNC_START(early_idt_handler_array)
 	# 36(%esp) %eflags
 	# 32(%esp) %cs
 	# 28(%esp) %eip
@@ -407,9 +408,9 @@ ENTRY(early_idt_handler_array)
 	i = i + 1
 	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
 	.endr
-ENDPROC(early_idt_handler_array)
+SYM_FUNC_END(early_idt_handler_array)

-early_idt_handler_common:
+SYM_CODE_START_LOCAL(early_idt_handler_common)
 	/*
 	 * The stack is the hardware frame, an error code or zero, and the
 	 * vector number.
@@ -460,10 +461,10 @@ early_idt_handler_common:
 	decl %ss:early_recursion_flag
 	addl $4, %esp	/* pop pt_regs->orig_ax */
 	iret
-ENDPROC(early_idt_handler_common)
+SYM_CODE_END(early_idt_handler_common)

 /* This is the default interrupt "handler" :-) */
-ENTRY(early_ignore_irq)
+SYM_FUNC_START(early_ignore_irq)
 	cld
 #ifdef CONFIG_PRINTK
 	pushl %eax
@@ -498,19 +499,16 @@ ENTRY(early_ignore_irq)
 hlt_loop:
 	hlt
 	jmp hlt_loop
-ENDPROC(early_ignore_irq)
+SYM_FUNC_END(early_ignore_irq)

 	__INITDATA
 	.align 4
-GLOBAL(early_recursion_flag)
-	.long 0
+SYM_DATA(early_recursion_flag, .long 0)

 	__REFDATA
 	.align 4
-ENTRY(initial_code)
-	.long i386_start_kernel
-ENTRY(setup_once_ref)
-	.long setup_once
+SYM_DATA(initial_code,		.long i386_start_kernel)
+SYM_DATA(setup_once_ref,	.long setup_once)

 #ifdef CONFIG_PAGE_TABLE_ISOLATION
 #define	PGD_ALIGN	(2 * PAGE_SIZE)
@@ -553,7 +551,7 @@ EXPORT_SYMBOL(empty_zero_page)
 __PAGE_ALIGNED_DATA
 /* Page-aligned for the benefit of paravirt? */
 	.align PGD_ALIGN
-ENTRY(initial_page_table)
+SYM_DATA_START(initial_page_table)
 	.long	pa(initial_pg_pmd+PGD_IDENT_ATTR),0	/* low identity map */
 # if KPMDS == 3
 	.long	pa(initial_pg_pmd+PGD_IDENT_ATTR),0
@@ -581,17 +579,18 @@ ENTRY(initial_page_table)
 	.fill 1024, 4, 0
 #endif
+SYM_DATA_END(initial_page_table)
 #endif

 .data
 .balign 4
-ENTRY(initial_stack)
-	/*
-	 * The SIZEOF_PTREGS gap is a convention which helps the in-kernel
-	 * unwinder reliably detect the end of the stack.
-	 */
-	.long init_thread_union + THREAD_SIZE - SIZEOF_PTREGS - \
-	      TOP_OF_KERNEL_STACK_PADDING;
+/*
+ * The SIZEOF_PTREGS gap is a convention which helps the in-kernel unwinder
+ * reliably detect the end of the stack.
+ */
+SYM_DATA(initial_stack,
+	 .long init_thread_union + THREAD_SIZE -
+	 SIZEOF_PTREGS - TOP_OF_KERNEL_STACK_PADDING)

 __INITRODATA
 int_msg:
@@ -607,27 +606,28 @@ int_msg:
  */
 	.data
-	.globl boot_gdt_descr
-
 	ALIGN
 # early boot GDT descriptor (must use 1:1 address mapping)
 	.word 0				# 32 bit align gdt_desc.address
-boot_gdt_descr:
+SYM_DATA_START_LOCAL(boot_gdt_descr)
 	.word __BOOT_DS+7
 	.long boot_gdt - __PAGE_OFFSET
+SYM_DATA_END(boot_gdt_descr)

 # boot GDT descriptor (later on used by CPU#0):
 	.word 0				# 32 bit align gdt_desc.address
-ENTRY(early_gdt_descr)
+SYM_DATA_START(early_gdt_descr)
 	.word GDT_ENTRIES*8-1
 	.long gdt_page			/* Overwritten for secondary CPUs */
+SYM_DATA_END(early_gdt_descr)

 /*
  * The boot_gdt must mirror the equivalent in setup.S and is
  * used only for booting.
  */
 	.align L1_CACHE_BYTES
-ENTRY(boot_gdt)
+SYM_DATA_START(boot_gdt)
 	.fill GDT_ENTRY_BOOT_CS,8,0
 	.quad 0x00cf9a000000ffff	/* kernel 4GB code at 0x00000000 */
 	.quad 0x00cf92000000ffff	/* kernel 4GB data at 0x00000000 */
+SYM_DATA_END(boot_gdt)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
@@ -49,8 +49,7 @@ L3_START_KERNEL = pud_index(__START_KERNEL_map)
 	.text
 	__HEAD
 	.code64
-	.globl startup_64
-startup_64:
+SYM_CODE_START_NOALIGN(startup_64)
 	UNWIND_HINT_EMPTY
 	/*
 	 * At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
@@ -90,7 +89,9 @@ startup_64:
 	/* Form the CR3 value being sure to include the CR3 modifier */
 	addq	$(early_top_pgt - __START_KERNEL_map), %rax
 	jmp 1f
-ENTRY(secondary_startup_64)
+SYM_CODE_END(startup_64)
+
+SYM_CODE_START(secondary_startup_64)
 	UNWIND_HINT_EMPTY
 	/*
 	 * At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
@@ -240,7 +241,7 @@ ENTRY(secondary_startup_64)
 	pushq	%rax		# target address in negative space
 	lretq
 .Lafter_lret:
-END(secondary_startup_64)
+SYM_CODE_END(secondary_startup_64)

 #include "verify_cpu.S"
@@ -250,30 +251,28 @@ END(secondary_startup_64)
  * up already except stack. We just set up stack here. Then call
  * start_secondary() via .Ljump_to_C_code.
  */
-ENTRY(start_cpu0)
+SYM_CODE_START(start_cpu0)
 	UNWIND_HINT_EMPTY
 	movq	initial_stack(%rip), %rsp
 	jmp	.Ljump_to_C_code
-END(start_cpu0)
+SYM_CODE_END(start_cpu0)
 #endif

 	/* Both SMP bootup and ACPI suspend change these variables */
 	__REFDATA
 	.balign	8
-GLOBAL(initial_code)
-	.quad	x86_64_start_kernel
-GLOBAL(initial_gs)
-	.quad	INIT_PER_CPU_VAR(fixed_percpu_data)
-GLOBAL(initial_stack)
-	/*
-	 * The SIZEOF_PTREGS gap is a convention which helps the in-kernel
-	 * unwinder reliably detect the end of the stack.
-	 */
-	.quad  init_thread_union + THREAD_SIZE - SIZEOF_PTREGS
+SYM_DATA(initial_code,	.quad x86_64_start_kernel)
+SYM_DATA(initial_gs,	.quad INIT_PER_CPU_VAR(fixed_percpu_data))
+
+/*
+ * The SIZEOF_PTREGS gap is a convention which helps the in-kernel unwinder
+ * reliably detect the end of the stack.
+ */
+SYM_DATA(initial_stack, .quad init_thread_union + THREAD_SIZE - SIZEOF_PTREGS)

 	__FINITDATA

 	__INIT
-ENTRY(early_idt_handler_array)
+SYM_CODE_START(early_idt_handler_array)
 	i = 0
 	.rept NUM_EXCEPTION_VECTORS
 	.if ((EXCEPTION_ERRCODE_MASK >> i) & 1) == 0
@@ -289,9 +288,9 @@ ENTRY(early_idt_handler_array)
 	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
 	.endr
 	UNWIND_HINT_IRET_REGS offset=16
-END(early_idt_handler_array)
+SYM_CODE_END(early_idt_handler_array)

-early_idt_handler_common:
+SYM_CODE_START_LOCAL(early_idt_handler_common)
 	/*
 	 * The stack is the hardware frame, an error code or zero, and the
 	 * vector number.
@@ -333,17 +332,11 @@ early_idt_handler_common:
 20:
 	decl early_recursion_flag(%rip)
 	jmp restore_regs_and_return_to_kernel
-END(early_idt_handler_common)
+SYM_CODE_END(early_idt_handler_common)

-	__INITDATA
-
-	.balign 4
-GLOBAL(early_recursion_flag)
-	.long 0
-
-#define NEXT_PAGE(name) \
-	.balign	PAGE_SIZE; \
-GLOBAL(name)
+#define SYM_DATA_START_PAGE_ALIGNED(name)			\
+	SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE)

 #ifdef CONFIG_PAGE_TABLE_ISOLATION
 /*
@@ -358,11 +351,11 @@ GLOBAL(name)
  */
 #define PTI_USER_PGD_FILL	512
 /* This ensures they are 8k-aligned: */
-#define NEXT_PGD_PAGE(name) \
-	.balign 2 * PAGE_SIZE; \
-GLOBAL(name)
+#define SYM_DATA_START_PTI_ALIGNED(name) \
+	SYM_START(name, SYM_L_GLOBAL, .balign 2 * PAGE_SIZE)
 #else
-#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define SYM_DATA_START_PTI_ALIGNED(name) \
+	SYM_DATA_START_PAGE_ALIGNED(name)
 #define PTI_USER_PGD_FILL	0
 #endif
@@ -375,17 +368,23 @@ GLOBAL(name)
 	.endr

 	__INITDATA
-NEXT_PGD_PAGE(early_top_pgt)
+	.balign 4
+
+SYM_DATA_START_PTI_ALIGNED(early_top_pgt)
 	.fill	512,8,0
 	.fill	PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(early_top_pgt)

-NEXT_PAGE(early_dynamic_pgts)
+SYM_DATA_START_PAGE_ALIGNED(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
+SYM_DATA_END(early_dynamic_pgts)
+
+SYM_DATA(early_recursion_flag, .long 0)

 	.data

 #if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH)
-NEXT_PGD_PAGE(init_top_pgt)
+SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.org    init_top_pgt + L4_PAGE_OFFSET*8, 0
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -393,11 +392,13 @@ NEXT_PGD_PAGE(init_top_pgt)
 	/*   (2^48-(2*1024*1024*1024))/(2^39) = 511 */
 	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 	.fill	PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(init_top_pgt)

-NEXT_PAGE(level3_ident_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)
 	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.fill	511, 8, 0
-NEXT_PAGE(level2_ident_pgt)
+SYM_DATA_END(level3_ident_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)
 	/*
 	 * Since I easily can, map the first 1G.
 	 * Don't set NX because code runs from these pages.
@@ -407,25 +408,29 @@ NEXT_PAGE(level2_ident_pgt)
 	 * the CPU should ignore the bit.
 	 */
 	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
+SYM_DATA_END(level2_ident_pgt)
 #else
-NEXT_PGD_PAGE(init_top_pgt)
+SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
 	.fill	512,8,0
 	.fill	PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(init_top_pgt)
 #endif

 #ifdef CONFIG_X86_5LEVEL
-NEXT_PAGE(level4_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)
 	.fill	511,8,0
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+SYM_DATA_END(level4_kernel_pgt)
 #endif

-NEXT_PAGE(level3_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
 	.quad	level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
 	.quad	level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+SYM_DATA_END(level3_kernel_pgt)

-NEXT_PAGE(level2_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_kernel_pgt)
 	/*
 	 * 512 MB kernel mapping. We spend a full page on this pagetable
 	 * anyway.
@@ -442,8 +447,9 @@ NEXT_PAGE(level2_kernel_pgt)
 	 */
 	PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
 		KERNEL_IMAGE_SIZE/PMD_SIZE)
+SYM_DATA_END(level2_kernel_pgt)

-NEXT_PAGE(level2_fixmap_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_fixmap_pgt)
 	.fill	(512 - 4 - FIXMAP_PMD_NUM),8,0
 	pgtno = 0
 	.rept (FIXMAP_PMD_NUM)
@@ -453,31 +459,32 @@ NEXT_PAGE(level2_fixmap_pgt)
 	.endr
 	/* 6 MB reserved space + a 2MB hole */
 	.fill	4,8,0
+SYM_DATA_END(level2_fixmap_pgt)

-NEXT_PAGE(level1_fixmap_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level1_fixmap_pgt)
 	.rept (FIXMAP_PMD_NUM)
 	.fill	512,8,0
 	.endr
+SYM_DATA_END(level1_fixmap_pgt)

 #undef PMDS

 	.data
 	.align 16
-	.globl early_gdt_descr
-early_gdt_descr:
-	.word	GDT_ENTRIES*8-1
-early_gdt_descr_base:
-	.quad	INIT_PER_CPU_VAR(gdt_page)
-
-ENTRY(phys_base)
-	/* This must match the first entry in level2_kernel_pgt */
-	.quad   0x0000000000000000
+
+SYM_DATA(early_gdt_descr,		.word GDT_ENTRIES*8-1)
+SYM_DATA_LOCAL(early_gdt_descr_base,	.quad INIT_PER_CPU_VAR(gdt_page))
+
+	.align 16
+/* This must match the first entry in level2_kernel_pgt */
+SYM_DATA(phys_base, .quad 0x0)
 EXPORT_SYMBOL(phys_base)

 #include "../../x86/xen/xen-head.S"

 	__PAGE_ALIGNED_BSS
-NEXT_PAGE(empty_zero_page)
+SYM_DATA_START_PAGE_ALIGNED(empty_zero_page)
 	.skip PAGE_SIZE
+SYM_DATA_END(empty_zero_page)
 EXPORT_SYMBOL(empty_zero_page)

diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S
@@ -7,20 +7,20 @@
 /*
  * unsigned long native_save_fl(void)
  */
-ENTRY(native_save_fl)
+SYM_FUNC_START(native_save_fl)
 	pushf
 	pop %_ASM_AX
 	ret
-ENDPROC(native_save_fl)
+SYM_FUNC_END(native_save_fl)
 EXPORT_SYMBOL(native_save_fl)

 /*
  * void native_restore_fl(unsigned long flags)
  * %eax/%rdi: flags
  */
-ENTRY(native_restore_fl)
+SYM_FUNC_START(native_restore_fl)
 	push %_ASM_ARG1
 	popf
 	ret
-ENDPROC(native_restore_fl)
+SYM_FUNC_END(native_restore_fl)
 EXPORT_SYMBOL(native_restore_fl)

diff --git a/arch/x86/kernel/relocate_kernel_32.S b/arch/x86/kernel/relocate_kernel_32.S
@@ -35,8 +35,7 @@
 #define CP_PA_BACKUP_PAGES_MAP	DATA(0x1c)

 	.text
-	.globl relocate_kernel
-relocate_kernel:
+SYM_CODE_START_NOALIGN(relocate_kernel)
 	/* Save the CPU context, used for jumping back */
 	pushl	%ebx
@@ -93,8 +92,9 @@ relocate_kernel:
 	addl	$(identity_mapped - relocate_kernel), %eax
 	pushl	%eax
 	ret
+SYM_CODE_END(relocate_kernel)

-identity_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	/* set return address to 0 if not preserving context */
 	pushl	$0
 	/* store the start address on the stack */
@@ -191,8 +191,9 @@ identity_mapped:
 	addl	$(virtual_mapped - relocate_kernel), %eax
 	pushl	%eax
 	ret
+SYM_CODE_END(identity_mapped)

-virtual_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
 	movl	CR4(%edi), %eax
 	movl	%eax, %cr4
 	movl	CR3(%edi), %eax
@@ -208,9 +209,10 @@ virtual_mapped:
 	popl	%esi
 	popl	%ebx
 	ret
+SYM_CODE_END(virtual_mapped)

 	/* Do the copies */
-swap_pages:
+SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	movl	8(%esp), %edx
 	movl	4(%esp), %ecx
 	pushl	%ebp
@@ -270,6 +272,7 @@ swap_pages:
 	popl	%ebx
 	popl	%ebp
 	ret
+SYM_CODE_END(swap_pages)

 	.globl kexec_control_code_size
 .set kexec_control_code_size, . - relocate_kernel

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
@@ -38,8 +38,7 @@
 	.text
 	.align PAGE_SIZE
 	.code64
-	.globl relocate_kernel
-relocate_kernel:
+SYM_CODE_START_NOALIGN(relocate_kernel)
 	/*
 	 * %rdi indirection_page
 	 * %rsi page_list
@@ -103,8 +102,9 @@ relocate_kernel:
 	addq	$(identity_mapped - relocate_kernel), %r8
 	pushq	%r8
 	ret
+SYM_CODE_END(relocate_kernel)

-identity_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	/* set return address to 0 if not preserving context */
 	pushq	$0
 	/* store the start address on the stack */
@@ -209,8 +209,9 @@ identity_mapped:
 	movq	$virtual_mapped, %rax
 	pushq	%rax
 	ret
+SYM_CODE_END(identity_mapped)

-virtual_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
 	movq	RSP(%r8), %rsp
 	movq	CR4(%r8), %rax
 	movq	%rax, %cr4
@@ -228,9 +229,10 @@ virtual_mapped:
 	popq	%rbp
 	popq	%rbx
 	ret
+SYM_CODE_END(virtual_mapped)

 	/* Do the copies */
-swap_pages:
+SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	movq	%rdi, %rcx	/* Put the page_list in %rcx */
 	xorl	%edi, %edi
 	xorl	%esi, %esi
@@ -283,6 +285,7 @@ swap_pages:
 	jmp	0b
 3:
 	ret
+SYM_CODE_END(swap_pages)

 	.globl kexec_control_code_size
 .set kexec_control_code_size, . - relocate_kernel

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
@@ -143,6 +143,13 @@ struct boot_params boot_params;
 /*
  * Machine setup..
  */
+static struct resource rodata_resource = {
+	.name	= "Kernel rodata",
+	.start	= 0,
+	.end	= 0,
+	.flags	= IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM
+};
+
 static struct resource data_resource = {
 	.name	= "Kernel data",
 	.start	= 0,
@@ -957,7 +964,9 @@ void __init setup_arch(char **cmdline_p)

 	code_resource.start = __pa_symbol(_text);
 	code_resource.end = __pa_symbol(_etext)-1;
-	data_resource.start = __pa_symbol(_etext);
+	rodata_resource.start = __pa_symbol(__start_rodata);
+	rodata_resource.end = __pa_symbol(__end_rodata)-1;
+	data_resource.start = __pa_symbol(_sdata);
 	data_resource.end = __pa_symbol(_edata)-1;
 	bss_resource.start = __pa_symbol(__bss_start);
 	bss_resource.end = __pa_symbol(__bss_stop)-1;
@@ -1046,6 +1055,7 @@ void __init setup_arch(char **cmdline_p)

 	/* after parse_early_param, so could debug it */
 	insert_resource(&iomem_resource, &code_resource);
+	insert_resource(&iomem_resource, &rodata_resource);
 	insert_resource(&iomem_resource, &data_resource);
 	insert_resource(&iomem_resource, &bss_resource);

diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S
@@ -31,7 +31,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>

-ENTRY(verify_cpu)
+SYM_FUNC_START_LOCAL(verify_cpu)
 	pushf				# Save caller passed flags
 	push	$0			# Kill any dangerous flags
 	popf
@@ -137,4 +137,4 @@ ENTRY(verify_cpu)
 	popf				# Restore caller passed flags
 	xorl	%eax, %eax
 	ret
-ENDPROC(verify_cpu)
+SYM_FUNC_END(verify_cpu)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
@@ -21,6 +21,9 @@
 #define LOAD_OFFSET __START_KERNEL_map
 #endif

+#define EMITS_PT_NOTE
+#define RO_EXCEPTION_TABLE_ALIGN	16
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/asm-offsets.h>
 #include <asm/thread_info.h>
@@ -141,17 +144,12 @@ SECTIONS
 		*(.text.__x86.indirect_thunk)
 		__indirect_thunk_end = .;
 #endif
+	} :text =0xcccc

-		/* End of text section */
-		_etext = .;
-	} :text = 0x9090
-
-	NOTES :text :note
-
-	EXCEPTION_TABLE(16) :text = 0x9090
-
-	/* .text should occupy whole number of pages */
+	/* End of text section, which should occupy whole number of pages */
+	_etext = .;
 	. = ALIGN(PAGE_SIZE);
+
 	X86_ALIGN_RODATA_BEGIN
 	RO_DATA(PAGE_SIZE)
 	X86_ALIGN_RODATA_END

Some files were not shown because too many files have changed in this diff.