This patch adds the errata management firmware interface for lower ELs
to discover details about CPU errata. Based on a CPU erratum
identifier, the interface enables the OS to discover the mitigation
status of that erratum in EL3.
The ABI can only be present in a system that is compliant with SMCCCv1.1
or higher. This implements v1.0 of the errata ABI spec.
For details on all possible return values, refer to the design
documentation below:
ABI design documentation:
https://developer.arm.com/documentation/den0100/1-0?lang=en
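As a rough illustration only (the function ID below is a placeholder and
the return code name follows the spec's concepts; neither is copied from
the spec), an OS-side probe could look like:

  #include <stdbool.h>
  #include <stdint.h>

  /* Placeholder values for illustration only; the real function ID and
   * return codes are defined in the errata ABI spec (DEN0100). */
  #define EM_CPU_ERRATUM_FEATURES_FID   0x840000F2U
  #define EM_HIGHER_EL_MITIGATION       3

  /* Assumed SMCCC conduit provided by the OS (hypothetical signature). */
  extern int64_t smccc_call(uint32_t fid, uint64_t arg1);

  static bool erratum_mitigated_in_el3(uint32_t erratum_id)
  {
      return smccc_call(EM_CPU_ERRATUM_FEATURES_FID, erratum_id) ==
             EM_HIGHER_EL_MITIGATION;
  }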
Signed-off-by: Sona Mathew <SonaRebecca.Mathew@arm.com>
Change-Id: I70f0e2569cf92e6e02ad82e3e77874546232b89a
At the moment we only support FEAT_DIT to be either unconditionally
compiled in, or to be not supported at all.
Add support for runtime detection (ENABLE_DIT=2), by splitting
is_armv8_4_dit_present() into an ID register reading function and a
second function to report the support status. That function considers
both build time settings and runtime information (if needed).
We use ENABLE_DIT on two occasions in assembly code, where we just set
the DIT bit in the DIT system register.
Protect those two cases by reading the CPU ID register when ENABLE_DIT
is set to 2.
Change the FVP platform default to the now supported dynamic
option (=2), so the right decision can be made by the code at runtime.
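The resulting split looks roughly like this (a sketch in the
arch_features.h style; function names are illustrative rather than the
exact patch):

  static inline bool is_feat_dit_present(void)
  {
      /* Read the DIT field of ID_AA64PFR0_EL1. */
      return ((read_id_aa64pfr0_el1() >> ID_AA64PFR0_DIT_SHIFT) &
              ID_AA64PFR0_DIT_MASK) != 0U;
  }

  static inline bool is_feat_dit_supported(void)
  {
      if (ENABLE_DIT == 0) {
          return false;              /* feature compiled out */
      }
      if (ENABLE_DIT == 1) {
          return true;               /* unconditionally compiled in */
      }
      return is_feat_dit_present();  /* ENABLE_DIT == 2: ask the hardware */
  }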
Change-Id: I506d352f18e23c60db8cdf08edb449f60adbe098
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
This change forces LD to sort all input sections by alignment when
allocating them within an output section. This is done in some places
explicitly in the linker scripts today, but this makes sure we don't
miss any easy targets.
Change-Id: Id702a2a572f2b43c77d53634ddc64b0220d2560b
Signed-off-by: Chris Kay <chris.kay@arm.com>
This change forces LD to sort all input sections by alignment when
allocating them within an output section. This is done in some places
explicitly in the linker scripts today, but this makes sure we don't
miss any easy targets.
Change-Id: I33d5044e4d34a9d1187d0935ffc03d1f1177e340
Signed-off-by: Chris Kay <chris.kay@arm.com>
So far we have the ENABLE_AMU build option to include AMU register
handling code for enabling and context switch. There is also an
ENABLE_FEAT_AMUv1 option, solely to protect the HAFGRTR_EL2 system
register handling. The latter needs some alignment with the new feature
scheme, but it conceptually overlaps with the ENABLE_AMU option.
Since there is no real need for two separate options, unify both into a
new ENABLE_FEAT_AMU name as a first step. This is mostly just renaming at
this point, a subsequent patch will make use of the new feature handling
scheme.
Change-Id: I97d8a55bdee2ed1e1509fa9f2b09fd0bdd82736e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
At the moment we only support access to the trace unit by system
registers (SYS_REG_TRACE) to be either unconditionally compiled in, or
to be not supported at all.
Add support for runtime detection (ENABLE_SYS_REG_TRACE_FOR_NS=2), by
adding is_feat_sys_reg_trace_supported(). That function considers both
build time settings and runtime information (if needed), and is used
before we access SYS_REG_TRACE related registers.
The FVP platform decided to compile in support unconditionally (=1),
even though this is an optional feature and thus not available with the
FVP model's default command line.
Change that to the now supported dynamic option (=2), so the right
decision can be made by the code at runtime.
Change-Id: I450a574a4f6bd9fc269887037049c94c906f54b2
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
At the moment we only support FEAT_TRF to be either unconditionally
compiled in, or to be not supported at all.
Add support for runtime detection (ENABLE_TRF_FOR_NS=2), by splitting
is_feat_trf_present() into an ID register reading function and a second
function to report the support status. That function considers both
build time settings and runtime information (if needed), and is used
before we access TRF related registers.
Also move the context saving code from assembly to C, and use the new
is_feat_trf_supported() function to guard its execution.
The FVP platform decided to compile in support unconditionally (=1),
even though FEAT_TRF is an ARMv8.4 feature, so it is not available with the
FVP model's default command line.
Change that to the now supported dynamic option (=2), so the right
decision can be made by the code at runtime.
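The C context handling then takes roughly this shape (a sketch using the
context framework helpers; the register list and function names are
illustrative, the guard is the point):

  static void el2_sysregs_context_save_trf(el2_sysregs_t *ctx)
  {
      write_ctx_reg(ctx, CTX_TRFCR_EL2, read_trfcr_el2());
  }

  void el2_sysregs_context_save(el2_sysregs_t *ctx)
  {
      /* ... other EL2 registers ... */
      if (is_feat_trf_supported()) {
          el2_sysregs_context_save_trf(ctx);
      }
  }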
Change-Id: Ia97b01adbe24970a4d837afd463dc5506b7295a3
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Some of our specialized sections are not prefixed with the conventional
period. The compiler uses input section names to derive certain other
section names (e.g. `.rela.text`, `.relacpu_ops`), and these can be
difficult to select in linker scripts when there is no delimiter.
This change introduces the period prefix to all specialized section
names.
BREAKING-CHANGE: All input and output linker section names have been
prefixed with the period character, e.g. `cpu_ops` -> `.cpu_ops`.
Change-Id: I51c13c5266d5975fbd944ef4961328e72f82fc1c
Signed-off-by: Chris Kay <chris.kay@arm.com>
The following build system variables have been renamed:
- `LINKERFILE` -> `DEFAULT_LINKER_SCRIPT`
- `BL_LINKERFILE` -> `DEFAULT_LINKER_SCRIPT_SOURCE`
- `<IMAGE>_LINKERFILE` -> `<IMAGE>_DEFAULT_LINKER_SCRIPT_SOURCE`
These new names better reflect how each variable is used:
1. the default linker script is passed via `-dT` instead of `-T`
2. linker script source files are first preprocessed
Additionally, linker scripts are now placed in the build directory
relative to where they exist in the source directory. For example,
`bl32/sp_min/sp_min.ld.S` now preprocesses to
`sp_min/sp_min.ld` instead of just `bl32.ld`.
BREAKING-CHANGE: The `LINKERFILE`, `BL_LINKERFILE` and
`<IMAGE>_LINKERFILE` build system variables have been renamed. See the
commit message for more information.
Change-Id: If8cef65dcb8820e8993736702c8741e97a66e6cc
Signed-off-by: Chris Kay <chris.kay@arm.com>
There are a variety of code styles used by the various linker scripts
around the code-base. This change brings them in line with one another
and attempts to make the scripts more friendly for skim-readers.
Change-Id: Ibee2afad0d543129c9ba5a8a22e3ec17d77e36ea
Signed-off-by: Chris Kay <chris.kay@arm.com>
In test_memory_send the variable i is of unsigned type, so
it is never negative. If i is 0, decrementing it wraps around to
4294967295, and accessing
composite->address_range_array[4294967295] is then out of bounds.
Made i a signed integer.
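For illustration, the pattern being fixed (simplified, not the exact
TSP code; count and process() are stand-ins):

  /* Before: i is unsigned, so "i >= 0" is always true. When i reaches 0,
   * i-- wraps around to 4294967295 and the next array access is out of
   * bounds. */
  for (uint32_t i = count - 1; i >= 0; i--) {
      process(composite->address_range_array[i]);
  }

  /* After: a signed index lets the loop terminate as intended. */
  for (int32_t i = (int32_t)count - 1; i >= 0; i--) {
      process(composite->address_range_array[i]);
  }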
Signed-off-by: Thomas Viehweger <Thomas.Viehweger@rohde-schwarz.com>
Change-Id: I8b4e532749b5e86e4b5acd238e72c3f88e309ff2
New helper functions are created to handle data & prefetch aborts
in AARCH32. They call platform functions, just like
report_exception does.
As extended MSR/MRS instructions (to access lr_abt in monitor mode)
are only available if CPU (Armv7) has virtualization extension,
the functions branch to original report_exception handlers if this is
not the case.
Those new helpers are created mainly to distinguish data and prefetch
aborts, as they both share the same mode.
This adds 40 bytes of code.
Change-Id: I5dd31930344ad4e3a658f8a9d366a87a300aeb67
Signed-off-by: Yann Gautier <yann.gautier@st.com>
Introduce initial test cases to the TSP which are
designed to be exercised by the FF-A Test Driver
in the Normal World. These have been designed to
test basic functionality of the EL3 SPMC.
These tests currently ensure the following functionality:
- Partition discovery.
- Direct messaging.
- Communication with a Logical SP.
- Memory Sharing and Lending ABIs.
- Sharing of contiguous and non-contiguous memory regions.
- Memory region descriptors spread over multiple
invocations.
Signed-off-by: Marc Bonnici <marc.bonnici@arm.com>
Signed-off-by: Shruti Gupta <shruti.gupta@arm.com>
Change-Id: Iaee4180aa18d6b7ac7b53685c6589f0ab306e876
Include ffa_helpers originally taken from the TF-A Tests repo
to provide support for additional FF-A functionality.
Change-Id: Iacc3ee270d5e3903f86f8078ed915d1e791c1298
Signed-off-by: Marc Bonnici <marc.bonnici@arm.com>
Signed-off-by: Shruti Gupta <shruti.gupta@arm.com>
This patch adds the FF-A programming model in the test
secure payload to ensure that it can be used to test
the following spec features.
1. SP initialisation on the primary and secondary CPUs.
2. An event loop to receive direct requests and respond
with direct responses (see the sketch after this list).
3. Ability to receive messages that indicate power on
and off of a CPU.
4. Ability to handle a secure interrupt.
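A hypothetical shape of that event loop (helper names follow the
ffa_helpers style but are illustrative, as are tsp_id and the response
values; this is not the exact TSP code):

  smc_args_t args = ffa_msg_wait();   /* tell the SPMC we are ready */

  while (1) {
      if (ffa_func_id(args) == FFA_MSG_SEND_DIRECT_REQ_SMC32) {
          uint16_t sender = ffa_sender(args);

          /* ... service the request: discovery echo, power on/off
           * notification, memory sharing test, etc. ... */

          args = ffa_msg_send_direct_resp(tsp_id, sender, resp0, resp1);
      }
  }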
Signed-off-by: Achin Gupta <achin.gupta@arm.com>
Signed-off-by: Marc Bonnici <marc.bonnici@arm.com>
Signed-off-by: Shruti <shruti.gupta@arm.com>
Change-Id: I81cf744904d5cdc0b27862b5e4bc6f2cfe58a13a
Cortex-A15 does not support FEAT_CSV2, so the existing workaround for
Spectre V2 is sufficient to mitigate against Spectre BHB attacks;
however, the code needed to be updated to work with the new build flag.
Also, some code was refactored several years ago but not updated in
the Cortex-A15 library file, so this patch fixes that as well.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I768c88a38c561c91019b038ac6c22b291955f18e
Add a TSP service to check that the value of the PSTATE DIT bit is as
expected and toggle its value. This is used to ensure that
the DIT bit is maintained during a switch from the Normal to
Secure worlds and back.
Change-Id: I4e8bdfa6530e5e75925c0079d4fa2795133c5105
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
Commit 4ce3e99a3 introduced printf format specifiers for fixed-width
types, which use PRI*64 instead of "ll" for 64-bit values.
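For example (standard C, nothing TF-A specific):

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t base = UINT64_C(0x80000000);

      /* "%llx" assumes uint64_t is long long; PRIx64 expands to the
       * conversion matching the platform's actual 64-bit type. */
      printf("base = 0x%" PRIx64 "\n", base);
      return 0;
  }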
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ic6811cc1788c698adde0807e5f8ab5290a900a26
Use long instead of long long on aarch64 for 64-bit stdint types.
Introduce inttypes.h to properly support printf format specifiers for
fixed-width types with this change.
Change-Id: I0bca594687a996fde0a9702d7a383055b99f10a1
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
This change introduces the `amu.mk` Makefile, used to remove the need
to manually include AMU sources into the various build images.
Makefiles requiring the list of AMU sources are expected to include
this file and use `${AMU_SOURCES}` to retrieve them.
Change-Id: I3d174033ecdce6439a110d776f0c064c67abcfe0
Signed-off-by: Chris Kay <chris.kay@arm.com>
Currently on image entry, the data cache in the RW address range is
invalidated before MMU is enabled to safeguard against potential
stale data from the previous firmware stage. If PIE is enabled, however,
RO sections including the GOT may also be modified during PIE fixup.
Therefore, to be on the safe side, invalidate the entire image
region if PIE is enabled.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I7ee2a324fe4377b026e32f9ab842617ad4e09d89
Introduced a build flag 'ENABLE_TRF_FOR_NS' to enable trace filter
control register access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: If3f53b8173a5573424b9a405a4bd8c206ffdeb8c
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Introduced a build flag 'ENABLE_SYS_REG_TRACE_FOR_NS' to enable trace
system register access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: Idc1acede4186e101758cbf7bed5af7b634d7d18d
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Only BL32 (SP_min) is supported at the moment; BL1 and BL2_AT_EL3 are just
stubbed with _pie_fixup_size=0.
The changes are an adaptation for AARCH32 on what has been done for
PIE support on AARCH64.
The RELA_SECTION is redefined for AARCH32, as the created section is
.rel.dyn and the symbols are .rel*.
Change-Id: I92bafe70e6b77735f6f890f32f2b637b98cf01b9
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The use of end addresses is preferred over the size of sections.
This was done for some AARCH64 files for PIE with commit [1],
and some extra explanations can be found in its commit message.
Align the missing AARCH64 files.
For AARCH32 files, this is required to prepare PIE support introduction.
[1] f1722b693d ("PIE: Use PC relative adrp/adr for symbol reference")
Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The Trusted Random Number Generator service is using the standard SMC
service dispatcher, running in BL31. For that reason we list the files
implementing the service in bl31.mk.
However when building for a 32-bit TF-A runtime, sp_min.mk is the
Makefile snippet used, so we have to add the files into there as well.
This fixes 32-bit builds of platforms that provide the TRNG service.
Change-Id: I8be61522300d36477a9ee0a9ce159a140390b254
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
as well, it is possible to control whether PMU counters take into account
events happening on other threads.
If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
leaving it at an effective state of 0 regardless of any write to it.
This patch introduces the DISABLE_MTPMU flag, which allows disabling
multithreaded event counting from EL3 (or EL2). The flag is disabled
by default so the behavior is consistent with those architectures
that do not implement FEAT_MTPMU.
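Conceptually the EL3 override amounts to something like this (a sketch;
the architectural bit is MDCR_EL3.MTPME, while the helper and macro
names below are illustrative):

  static void mtpmu_disable(void)
  {
      /* Only meaningful if FEAT_MTPMU is implemented. Clearing MTPME
       * forces the effective MT bit to 0 at lower ELs. */
      if (is_feat_mtpmu_present()) {
          write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
      }
  }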
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
This patch fixes the following compilation error
reported by aarch64-none-elf-gcc 11.0.0:
bl32/tsp/tsp_main.c: In function 'tsp_smc_handler':
bl32/tsp/tsp_main.c:393:9: error: 'tsp_get_magic'
accessing 32 bytes in a region of size 16
[-Werror=stringop-overflow=]
393 | tsp_get_magic(service_args);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
bl32/tsp/tsp_main.c:393:9: note: referencing argument 1
of type 'uint64_t *' {aka 'long long unsigned int *'}
In file included from bl32/tsp/tsp_main.c:19:
bl32/tsp/tsp_private.h:64:6: note: in a call to function 'tsp_get_magic'
64 | void tsp_get_magic(uint64_t args[4]);
| ^~~~~~~~~~~~~
by changing declaration of tsp_get_magic function from
void tsp_get_magic(uint64_t args[4]);
to
uint128_t tsp_get_magic(void);
which returns arguments directly in x0 and x1 registers.
In bl32/tsp/tsp_main.c the current tsp_smc_handler()
implementation calls tsp_get_magic(service_args),
where the service_args array is declared as
uint64_t service_args[2];
and tsp_get_magic() in bl32/tsp/aarch64/tsp_request.S
copies only 2 registers into the output buffer:
/* Store returned arguments to the array */
stp x0, x1, [x4, #0]
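With the new prototype the two words come back in x0/x1, and the caller
can unpack them like this (a sketch, not the exact patch hunk):

  uint128_t magic = tsp_get_magic();
  uint64_t arg0 = (uint64_t)magic;          /* returned in x0 */
  uint64_t arg1 = (uint64_t)(magic >> 64);  /* returned in x1 */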
Change-Id: Ib34759fc5d7bb803e6c734540d91ea278270b330
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you either not do this, or be very explicit
about it.
This resolves the following required rule:
bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
0x3c0U" (32 bits) is less that the right hand operand
"18446744073709547519ULL" (64 bits).
This also resolves MISRA defects such as:
bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
In the expression "3U << 20", shifting more than 7 bits, the number
of bits in the essential type of the left expression, "3U", is
not allowed.
Further, MISRA requires that all shifts don't overflow. The definition of
PAGE_SIZE was (1U << 12), and the essential type of 1U is only 8 bits
wide. This caused about 50 issues. This patch fixes the violation by
changing the definition to 1UL << 12. Since 1UL is 32 bits wide on
aarch32, this should not create any issues there.
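Concretely, the before/after looks like this:

  /* Before: flagged by MISRA because the shift exceeds the width of the
   * essential type of 1U. */
  #define PAGE_SIZE   (1U << 12)

  /* After: the long suffix gives the constant enough width on both
   * aarch32 and aarch64. */
  #define PAGE_SIZE   (1UL << 12)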
This patch also contains a fix for a build failure in the sun50i_a64
platform. Specifically, these MISRA fixes removed a single `and`
instruction,
92407e73 and x19, x19, #0xffffffff
from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased the
size of the .text section and caused an alignment later on to go over a
page boundary and round up to the end of RAM before placing the .data
section. This section is of non-zero size and therefore causes a link
error.
The fix included here reorders the functions at link time
without changing their ordering with respect to alignment.
Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
This assert is present in all other linker scripts. It checks that the
size of BL32 doesn't exceed its defined limit.
Change-Id: I0005959b5591d3eebd870045adafe437108bc9e1
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The macro SORT_BY_ALIGNMENT is used for .text* and .rodata*. This allows
reducing the space lost to object alignment. This aligns the script
with the following patch:
ebd6efae67
Some comments are also aligned with other linker scripts.
Change-Id: I2ea59edb445af0ed8c08fd883ffbf56852570d0c
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP.
Move it to the common header file.
I slightly changed the definition so that we can do "RELA_SECTION >RAM".
It still produced equivalent elf images.
Please note I got rid of '.' from the VMA field. Otherwise, if the end
of the previous .data section is not 8-byte aligned, it fails to link.
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1
Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Move the data section to the common header.
I slightly tweaked some scripts as follows:
[1] bl1.ld.S has ALIGN(16). I added DATA_ALIGN macro, which is 1
by default, but overridden by bl1.ld.S. Currently, ALIGN(16)
of the .data section is redundant because commit 4128659076
("Fix boot failures on some builds linked with ld.lld.") padded
out the previous section to work around the issue of LLD version
<= 10.0. This will be fixed in a future release of LLVM, so
I am keeping the proper way to align the LMA.
[2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead
of __DATA_{START,END}__. I put them out of the .data section.
[3] SORT_BY_ALIGNMENT() is missing in tsp.ld.S, sp_min.ld.S, and
mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT()
for all images, so the symbol order in those three will change,
but I do not think it is a big deal.
Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The stacks section is the same for all BL linker scripts.
Move it to the common header file.
Change-Id: Ibd253488667ab4f69702d56ff9e9929376704f6c
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL
and PMF_TIMESTAMP, which previously existed only in BL31. This is not
a big deal because unused data should not be compiled in the first
place. I believe this should be controlled by BL*_SOURCES in Makefiles,
not by linker scripts.
I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code addition.
The bss section has bigger alignment. I added BSS_ALIGN for this.
Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this
change, the BSS symbols in SP_MIN will be sorted by the alignment.
This is not a big deal (or, even better in terms of the image size).
Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The common section definitions are repeated in many linker scripts
(often twice in each script to support SEPARATE_CODE_AND_RODATA). When
you add a new read-only data section, you end up touching lots of
places.
After this commit, you will only need to touch bl_common.ld.h when
you add a new section to RODATA_COMMON.
Replace a series of RO sections with RODATA_COMMON, which contains
6 sections, some of which did not exist before.
This is not a big deal because unneeded data should not be compiled
in the first place. I believe this should be controlled by BL*_SOURCES
in Makefiles, not by linker scripts.
When I was working on this commit, the BL1 image size increased
due to the fconf_populator. Commit c452ba159c ("fconf: exclude
fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.
I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code addition.
Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
These are mostly used to collect data from special structures,
and repeated in many linker scripts.
To differentiate the alignment size between aarch32/aarch64, I added
a new macro STRUCT_ALIGN.
While I moved the PMF_SVC_DESCS, I dropped the #if ENABLE_PMF conditional.
As you can see in include/lib/pmf/pmf_helpers.h, PMF_REGISTER_SERVICE*
are no-ops when ENABLE_PMF=0. So, pmf_svc_descs and pmf_timestamp_array
data are not populated.
Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
sp_min.ld.S is used for aarch32.
ALIGN(4) is used for alignment of the other structures. I do not think
struct fconf_populator is a special case. Let's use ALIGN(4) here too.
Perhaps this is just a copy-paste mistake from commit 26d1e0c330
("fconf: necessary modifications to support fconf in BL31 & SP_MIN").
Change-Id: I29f4c68680842c1b5ef913934b4ccf378e9bfcfb
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
CPUs use the console to print debug/info messages. This critical section
must be guarded by locks to avoid overlaps in messages from multiple
CPUs.
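A sketch of the idea using TF-A's spinlock primitives (the actual lock
lives in the console core code, not in callers; console_log() and the
puts() stand-in are illustrative):

  #include <stdio.h>
  #include <lib/spinlock.h>

  static spinlock_t console_lock;

  void console_log(const char *msg)
  {
      /* Serialise output so messages from different CPUs do not
       * interleave. */
      spin_lock(&console_lock);
      puts(msg);               /* stand-in for the console putc loop */
      spin_unlock(&console_lock);
  }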
Change-Id: I786bf90072c1ed73c4f53d8c950979d95255e67e
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Necessary infrastructure added to integrate the fconf framework in BL31
& SP_MIN. Created a few populator() functions which parse the HW_CONFIG
device tree and registered them with the fconf framework. Many of the
changes are only applicable to the fvp platform.
This patch:
1. Adds necessary symbols and sections in BL31, SP_MIN linker script
2. Adds necessary memory map entry for translation in BL31, SP_MIN
3. Creates an abstraction layer for hardware configuration based on
fconf framework
4. Adds necessary changes to build flow (makefiles)
5. Adds a minimal callback to read the hw_config dtb for capturing
properties related to the GIC (interrupt-controller node); see the
sketch after this list
6. Updates the fconf documentation
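A hypothetical populator in the shape this patch introduces (node names,
parsed fields and the registration macro arguments are illustrative, not
the exact API):

  static int fconf_populate_gic_props(uintptr_t hw_config_dtb)
  {
      const void *dtb = (const void *)hw_config_dtb;

      /* Find the interrupt-controller node and cache what we need. */
      int node = fdt_node_offset_by_compatible(dtb, -1, "arm,gic-v3");

      if (node < 0) {
          return node;
      }
      /* ... read reg/interrupt properties into a C struct here ... */
      return 0;
  }

  FCONF_REGISTER_POPULATOR(hw_config, gic_props, fconf_populate_gic_props);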
Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
TF-A has so many linker scripts, at least one linker script for each BL
image, and some platforms have their own. They duplicate quite
similar code (and comments).
When we add some changes to linker scripts, we end up touching
so many files. This is not nice from a maintainability perspective.
When you look at the Linux kernel, the common code is macrofied in
include/asm-generic/vmlinux.lds.h, which is included from each arch
linker script, arch/*/kernel/vmlinux.lds.S
TF-A can follow this approach. Let's factor out the common code into
include/common/bl_common.ld.h
As a starting point, this commit factors out the xlat_table section.
Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
In the CPU resume function, the CPU suspend count was printed instead
of the CPU resume count.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I0c081dc03a4ccfb2129687f690667c5ceed00a5f
This implementation simply mimics that of BL31.
Change-Id: Ibbaa4ca012d38ac211c52b0b3e97449947160e07
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Even though ERET always causes a jump to another address, aarch64 CPUs
speculatively execute following instructions as if the ERET
instruction was not a jump instruction.
The speculative execution does not cross privilege-levels (to the jump
target as one would expect), but it continues on the kernel privilege
level as if the ERET instruction did not change the control flow -
thus executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded; however, they can leak data using
microarchitectural side channels. This speculative execution is very
reliable (seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and Optee OS:
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
Ported the pmf asm macros and the asm code in the bl31 entrypoint
necessary for the instrumentation to AArch32.
Since smc dispatch is handled by the bl32 payload on AArch32, we
provide this service only if AARCH32_SP=sp_min is set.
Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com>
Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
This patch changes the implementation for disabling the Secure Cycle
Counter. For ARMv8.5 the counter gets disabled by setting the
SDCR.SCCD bit on CPU cold/warm boot. For earlier
architectures the PMCR register is saved/restored on Secure
world entry/exit from/to the Non-secure state, and cycle counting
gets disabled by setting the PMCR.DP bit.
New ARMv8.5-PMU related definitions were added to the
'include/aarch32/arch.h' header file.
Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`
which returns a 128-bit value and uses the Generic Timer physical
counter value to increase the randomness of the generated key.
The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
`pauth_disable_el1()` and `pauth_disable_el3()` functions disable
PAuth for EL1 and EL3 respectively;
`pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
cpu-data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
`save_gp_registers()` and `pauth_context_save()`;
`restore_gp_pauth_registers()` replaces `pauth_context_restore()`
and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
for 12 uint64_t PAuth registers instead of 10 by removing the macro
CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
in `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support for the new Memory Tagging Extension arriving in
ARMv8.5. MTE support is now enabled by default on systems that support
at EL0. To enable it at ELx for both the non-secure and the secure
world, the compiler flag CTX_INCLUDE_MTE_REGS includes register saving
and restoring when necessary in order to prevent register leakage
between the worlds.
Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>