MPMM - the Maximum Power Mitigation Mechanism - is an optional
microarchitectural feature present on some Armv9-A cores, introduced
with the Cortex-X2, Cortex-A710 and Cortex-A510 cores.
MPMM allows the SoC firmware to detect and limit high activity events
to assist in SoC processor power domain dynamic power budgeting and
limit the triggering of whole-rail (i.e. clock chopping) responses to
overcurrent conditions.
This feature is enabled via the `ENABLE_MPMM` build option.
Configuration can be done via FCONF by enabling `ENABLE_MPMM_FCONF`, or
via the platform-implemented `plat_mpmm_topology` function.
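As a rough illustration of the platform hook route, here is a minimal
sketch; the topology structure layout below is an assumption for the
example, not the definition used by the library:

  #include <stdbool.h>

  /* Assumed shape of the topology description; the real definition in
   * the MPMM headers may differ. */
  struct mpmm_core_topology {
          bool supported;
  };

  struct mpmm_topology {
          struct mpmm_core_topology cores[3]; /* one entry per core */
  };

  static const struct mpmm_topology plat_topology = {
          .cores = {
                  [0] = { .supported = true },  /* e.g. a Cortex-X2 */
                  [1] = { .supported = true },  /* e.g. a Cortex-A710 */
                  [2] = { .supported = false }, /* a core without MPMM */
          },
  };

  const struct mpmm_topology *plat_mpmm_topology(void)
  {
          return &plat_topology;
  }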
Change-Id: I77da82808ad4744ece8263f0bf215c5a091c3167
Signed-off-by: Chris Kay <chris.kay@arm.com>
This change introduces the `amu.mk` Makefile, used to remove the need
to manually include AMU sources into the various build images.
Makefiles requiring the list of AMU sources are expected to include
this file and use `${AMU_SOURCES}` to retrieve them.
Change-Id: I3d174033ecdce6439a110d776f0c064c67abcfe0
Signed-off-by: Chris Kay <chris.kay@arm.com>
This patch updates and refactors the GPT library and fixes bugs.
- Support all combinations of PGS, PPS, and L0GPTSZ parameters.
- PPS and PGS are set at runtime, L0GPTSZ is read from GPCCR_EL3.
- Use compiler definitions to simplify code.
- Renamed functions to better suit their intended uses.
- The MMU must now be enabled before the GPT APIs are called.
- Added comments to clarify function usage in the GPT library.
- Added _rme suffix to file names to differentiate better from the
GPT file system code.
- Renamed gpt_defs.h to gpt_rme_private.h to better separate private
and public code.
- Renamed gpt_core.c to gpt_rme.c to better conform to TF-A precedent.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I4cbb23b0f81e697baa9fb23ba458aa3f7d1ed919
This patch introduces the Granule Protection Table (GPT)
library code. This implementation will be updated later to
be more flexible, as the current implementation is very rigid.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I3af824a28c6e9a5d36459c0c51d2d9bebfba1505
The changes include:
- A new build option (ENABLE_RME) to enable FEAT_RME
- A new image called RMM. RMM is R-EL2 firmware that manages Realms.
When building TF-A, a path to the RMM image can be specified using
the "RMM" build flag. If an RMM image is not provided, the TRP is
built by default and used as the RMM image.
- Support for RMM image in fiptool
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I017c23ef02e465a5198baafd665a60858ecd1b25
This patch adds a new context for realm world and realm world
awareness in context management.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Signed-off-by: Subhasish Ghosh <subhasish.ghosh@arm.com>
Change-Id: Ic17469393603e789d7adc025880346bc3d6233d7
This patch introduces the RMM dispatcher into BL31. This
will be the mechanism that will enable communication to
take place between the Realm and non-secure world. Currently it
provides the capability for granules to be transitioned from the
non-secure type to realm and vice versa.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Signed-off-by: Subhasish Ghosh <subhasish.ghosh@arm.com>
Change-Id: I1fdc99a4bdd42bc14911aa0c6954b131de309511
FEAT_RME introduces two additional security states,
Root and Realm security states. This patch adds Realm
security state awareness to SMCCC helpers and entry point info
structure.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I9cdefcc1aa71259b2de46e5fb62b28d658fa59bd
FEAT_HCX adds the extended hypervisor configuration register (HCRX_EL2)
and access to this register must be explicitly enabled through the
SCR_EL3.HXEn bit. This patch adds a new build flag ENABLE_FEAT_HCX to
allow the register to be accessed from EL2.
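As a hedged sketch of what the flag controls (the constant and helper
below are local to the example, not the names used in the tree), the
context code sets SCR_EL3.HXEn, bit 38, for lower-EL contexts:

  #include <stdbool.h>
  #include <stdint.h>

  /* SCR_EL3.HXEn is bit 38; defined locally for this sketch. */
  #define SCR_HXEN_BIT    (UINT64_C(1) << 38)

  /* When the build flag is enabled, set HXEn in the SCR_EL3 value
   * programmed for a lower-EL context so that EL2 can access HCRX_EL2
   * without trapping to EL3. */
  static uint64_t setup_scr_el3(uint64_t scr_el3, bool enable_feat_hcx)
  {
          if (enable_feat_hcx) {
                  scr_el3 |= SCR_HXEN_BIT;
          }
          return scr_el3;
  }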
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: Ibb36ad90622f1dc857adab4b0d4d7a89456a522b
Introduced a build flag 'ENABLE_TRF_FOR_NS' to enable trace filter
control registers access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: If3f53b8173a5573424b9a405a4bd8c206ffdeb8c
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Introduced a build flag 'ENABLE_SYS_REG_TRACE_FOR_NS' to enable trace
system registers access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: Idc1acede4186e101758cbf7bed5af7b634d7d18d
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Introduced a build flag 'ENABLE_TRBE_FOR_NS' to enable trace buffer
control registers access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: I285a672ccd395eebd377714c992bb21062a729cc
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Add SMC wrappers that advertise the existence of, and perform basic
parameter validation for, the SMCCC/PCI API. The actual
read/write/segment validation is implemented by a given platform,
which enables the API by defining SMC_PCI_SUPPORT.
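A hypothetical sketch of the platform side; all names and the address
encoding below are illustrative only, not the interface added by this
patch:

  #include <stdint.h>

  /* The generic SMC layer checks the function ID and argument ranges,
   * then defers the real config-space access and segment/BDF validation
   * to a platform hook along these lines. */
  static uint32_t plat_pci_read_config_sketch(uint32_t addr, uint32_t size,
                                              uint32_t *value)
  {
          (void)addr; /* encodes segment/bus/device/function/offset */
          (void)size; /* access width: 1, 2 or 4 bytes */

          *value = 0xffffffffU; /* "no device present" in this sketch */
          return 0U;            /* success */
  }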
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Change-Id: I4485ad0fe6003cec6f5eedef688914d100513c21
This adds the TRNG Firmware Interface Service to the standard
service dispatcher. This includes a method for dispatching entropy
requests to platforms and includes an entropy pool implementation to
avoid dropping any entropy requested from the platform.
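A simplified sketch of the pooling idea, with hypothetical names; the
actual implementation also handles locking and multi-word requests:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical platform hook returning 64 bits of entropy. */
  bool plat_get_entropy(uint64_t *out);

  static uint64_t pool_bits;      /* buffered entropy bits */
  static unsigned int pool_count; /* number of valid bits in pool_bits */

  /* Serve nbits (<= 64) bits, refilling from the platform only when the
   * buffer runs dry, so leftover bits are never thrown away between
   * requests. */
  static bool entropy_pool_get(unsigned int nbits, uint64_t *out)
  {
          uint64_t value = 0U;
          unsigned int got = 0U;

          while (got < nbits) {
                  if (pool_count == 0U) {
                          if (!plat_get_entropy(&pool_bits)) {
                                  return false;
                          }
                          pool_count = 64U;
                  }
                  value |= (pool_bits & 1U) << got;
                  pool_bits >>= 1;
                  pool_count--;
                  got++;
          }
          *out = value;
          return true;
  }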
Change-Id: I71cadb3cb377a507652eca9e0d68714c973026e9
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
For SoCs which do not implement RAS, use DSB as a barrier to
synchronize pending external aborts at the entry and exit of
exception handlers. This is needed to isolate the SErrors to
appropriate context.
However, this introduces an unintended side effect, as discussed in
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
A summary of the side effect and a quick workaround is provided as
part of this patch and summarized here:
The explicit DSB at the entry of various exception vectors in BL31
for handling exceptions from lower ELs can inadvertently trigger an
SError exception in EL3 due to pending asynchronous aborts in lower
ELs. This will end up being handled by serror_sp_elx in EL3 which will
ultimately panic and die.
The workaround is to update a flag to indicate whether the exception
truly came from EL3. This flag is allocated in the cpu_context
structure. This is not a bullet-proof solution to the problem at hand
because we assume the instructions following "isb" that help to update
the flag (lines 100-102 & 139-141) execute without causing further
exceptions.
Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
as well, it is possible to control whether PMU counters take into account
events happening on other threads.
If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
forcing its effective state to 0 regardless of any write to it.
This patch introduces the DISABLE_MTPMU flag, which allows multithreaded
event counting to be disabled from EL3 (or EL2). The flag is disabled
by default so the behavior is consistent with those architectures
that do not implement FEAT_MTPMU.
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or be very explicit about this.
This resolves the following required rule:
bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
0x3c0U" (32 bits) is less that the right hand operand
"18446744073709547519ULL" (64 bits).
This also resolves MISRA defects such as:
bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
In the expression "3U << 20", shifting more than 7 bits, the number
of bits in the essential type of the left expression, "3U", is
not allowed.
Further, MISRA requires that all shifts don't overflow. The definition of
PAGE_SIZE was (1U << 12), and 1U is 8 bits. This caused about 50 issues.
This fixes the violation by changing the definition to 1UL << 12. Since
this uses 32 bits, it should not create any issues for aarch32.
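For illustration, the macro change looks like this:

  /* Before: the essential type of 1U is 8 bits wide in the MISRA model,
   * so shifting it by 12 overflows the essential type.
   *
   *     #define PAGE_SIZE  (1U << 12)
   */

  /* After: 1UL is at least 32 bits on both AArch32 and AArch64, so the
   * shift stays within the essential type. */
  #define PAGE_SIZE        (1UL << 12)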
This patch also contains a fix for a build failure on the sun50i_a64
platform. Specifically, these MISRA fixes removed a single "and"
instruction,
92407e73 and x19, x19, #0xffffffff
from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased the
size of the .text section and caused an alignment later on to go over a
page boundary and round up to the end of RAM before placing the .data
section. This section is of non-zero size and therefore causes a link
error.
The fix included in this patch reorders the functions at link time
without changing their ordering with respect to alignment.
Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
This patch allows the system to fallback to a default CPU library
in case the MPID does not match with any of the supported ones.
This feature can be enabled by setting SUPPORT_UNKNOWN_MPID build
option to 1 (enabled by default only on arm_fpga platform).
This feature can be very dangerous on a production image and
therefore it MUST be disabled for Release images.
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: I0df7ef2b012d7d60a4fd5de44dea1fbbb46881ba
Loop macros make it easier for developers to include new variables to
assert or define and also help code readability in makefiles.
Change-Id: I0d21d6e67b3eca8976c4d856ac8ccc02c8bb5ffa
Signed-off-by: Leonardo Sandoval <leonardo.sandoval@linaro.org>
As per the latest mailing list communication [1], we decided to
update the AT speculative workaround implementation in order to
disable page table walks for lower ELs (EL1 or EL0) immediately
after context switching to EL3 from lower ELs.
The previous implementation of the AT speculative workaround is
available here: 45aecff00
AT speculative workaround is updated as below:
1. Avoid saving and restoring the SCTLR and TCR registers for EL1
in the context save and restore routines respectively.
2. On EL3 entry, save the SCTLR and TCR registers for EL1.
3. On EL3 entry, update the EL1 system registers to disable stage 1
page table walks for lower ELs (EL1 and EL0) and enable the EL1
MMU, as sketched below.
4. On EL3 exit, restore the SCTLR and TCR registers for EL1 which
were saved in step 2.
[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
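A simplified C-style sketch of steps 2 and 3; the accessor prototypes
stand in for the usual system register helpers, and the real code does
this in assembly, saving the values into the EL1 context area:

  #include <stdint.h>

  /* Bit positions per the Arm ARM. */
  #define TCR_EPD0_BIT    (UINT64_C(1) << 7)
  #define TCR_EPD1_BIT    (UINT64_C(1) << 23)
  #define SCTLR_M_BIT     (UINT64_C(1) << 0)

  uint64_t read_sctlr_el1(void);
  uint64_t read_tcr_el1(void);
  void write_sctlr_el1(uint64_t val);
  void write_tcr_el1(uint64_t val);

  static void save_and_disable_ptw(uint64_t *saved_sctlr, uint64_t *saved_tcr)
  {
          /* Step 2: save the EL1 translation controls. */
          *saved_sctlr = read_sctlr_el1();
          *saved_tcr = read_tcr_el1();

          /* Step 3: disable stage 1 walks for EL1/EL0 and enable the EL1
           * MMU so a speculated AT cannot start a walk. An ISB is needed
           * before relying on the new values. */
          write_tcr_el1(*saved_tcr | TCR_EPD0_BIT | TCR_EPD1_BIT);
          write_sctlr_el1(*saved_sctlr | SCTLR_M_BIT);
  }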
Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP.
Move it to the common header file.
I slightly changed the definition so that we can do "RELA_SECTION >RAM".
It still produced equivalent elf images.
Please note I got rid of '.' from the VMA field. Otherwise, if the end
of previous .data section is not 8-byte aligned, it fails to link.
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1
Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Move the data section to the common header.
I slightly tweaked some scripts as follows:
[1] bl1.ld.S has ALIGN(16). I added DATA_ALIGN macro, which is 1
by default, but overridden by bl1.ld.S. Currently, ALIGN(16)
of the .data section is redundant because commit 4128659076
("Fix boot failures on some builds linked with ld.lld.") padded
out the previous section to work around the issue of LLD version
<= 10.0. This will be fixed in the future release of LLVM, so
I am keeping the proper way to align LMA.
[2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead
of __DATA_{START,END}__. I put them out of the .data section.
[3] SORT_BY_ALIGNMENT() is missing in tsp.ld.S, sp_min.ld.S, and
mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT()
for all images, so the symbol order in those three will change,
but I do not think it is a big deal.
Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The stacks section is the same for all BL linker scripts.
Move it to the common header file.
Change-Id: Ibd253488667ab4f69702d56ff9e9929376704f6c
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL
and PMF_TIMESTAMP, which previously existed only in BL31. This is not
a big deal because unused data should not be compiled in the first
place. I believe this should be controlled by BL*_SOURCES in Makefiles,
not by linker scripts.
I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code addition.
The bss section has bigger alignment. I added BSS_ALIGN for this.
Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this
change, the BSS symbols in SP_MIN will be sorted by the alignment.
This is not a big deal (or, even better in terms of the image size).
Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The common section data are repeated in many linker scripts (often
twice in each script to support SEPARATE_CODE_AND_RODATA). When you
add a new read-only data section, you end up with touching lots of
places.
After this commit, you will only need to touch bl_common.ld.h when
you add a new section to RODATA_COMMON.
Replace a series of RO sections with RODATA_COMMON, which contains
6 sections, some of which did not exist before.
This is not a big deal because unneeded data should not be compiled
in the first place. I believe this should be controlled by BL*_SOURCES
in Makefiles, not by linker scripts.
When I was working on this commit, the BL1 image size increased
due to the fconf_populator. Commit c452ba159c ("fconf: exclude
fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.
I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code addition.
Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
These are mostly used to collect data from special structures, and
are repeated in many linker scripts.
To differentiate the alignment size between aarch32/aarch64, I added
a new macro STRUCT_ALIGN.
While I moved the PMF_SVC_DESCS, I dropped #if ENABLE_PMF conditional.
As you can see in include/lib/pmf/pmf_helpers.h, PMF_REGISTER_SERVICE*
are no-op when ENABLE_PMF=0. So, pmf_svc_descs and pmf_timestamp_array
data are not populated.
Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Added the necessary infrastructure to integrate the fconf framework in
BL31 & SP_MIN.
Created a few populator() functions which parse the HW_CONFIG device
tree and registered them with the fconf framework. Many of the changes
are only applicable to the fvp platform.
This patch:
1. Adds necessary symbols and sections in BL31, SP_MIN linker script
2. Adds necessary memory map entry for translation in BL31, SP_MIN
3. Creates an abstraction layer for hardware configuration based on
fconf framework
4. Adds necessary changes to build flow (makefiles)
5. Adds a minimal callback to read the hw_config dtb and capture
properties related to the GIC (interrupt-controller node), as
sketched below
6. Updates the fconf documentation
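An illustrative populator, using libfdt directly; the names below are
examples, and the registration with the fconf framework (via its
registration macro) is elided:

  #include <libfdt.h>
  #include <stdint.h>

  /* Locate the interrupt-controller node in the hw_config DTB; a real
   * populator would go on to read and cache the properties of interest. */
  static int fconf_populate_gic_sketch(uintptr_t config)
  {
          const void *dtb = (const void *)config;
          int node = fdt_node_offset_by_compatible(dtb, -1, "arm,gic-v3");

          if (node < 0) {
                  return node; /* node not found or DTB malformed */
          }

          return 0;
  }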
Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
TF-A has so many linker scripts, at least one linker script for each BL
image, and some platforms have their own ones. They duplicate quite
similar code (and comments).
When we add some changes to linker scripts, we end up touching
so many files. This is not nice from a maintainability perspective.
When you look at Linux kernel, the common code is macrofied in
include/asm-generic/vmlinux.lds.h, which is included from each arch
linker script, arch/*/kernel/vmlinux.lds.S
TF-A can follow this approach. Let's factor out the common code into
include/common/bl_common.ld.h
As a start point, this commit factors out the xlat_table section.
Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
This patch provides a fix for incorrect crash dump data for
lower EL when TF-A is built with HANDLE_EA_EL3_FIRST=1 option
which enables routing of External Aborts and SErrors to EL3.
Change-Id: I9d5e6775e6aad21db5b78362da6c3a3d897df977
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support to the build system for including the SPM
dispatcher when the SPD configuration option is spmd.
Signed-off-by: Achin Gupta <achin.gupta@arm.com>
Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
Change-Id: Ic1ae50ecd7403fcbcf1d318abdbd6ebdc642f732
At the moment, address demangling is only used by the backtrace
functionality. However, at some point, other parts of the TF-A
codebase may want to use it.
The 'demangle_address' function is replaced with a single XPACI
instruction which is also added in 'do_crash_reporting()'.
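For reference, the effect of the instruction can be sketched in C with
inline assembly (requires an ARMv8.3-A capable toolchain, e.g.
-march=armv8.3-a):

  #include <stdint.h>

  /* Strip the pointer authentication code from an address. */
  static inline uint64_t strip_pac(uint64_t addr)
  {
          __asm__ volatile ("xpaci %0" : "+r" (addr));
          return addr;
  }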
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Change-Id: I4424dcd54d5bf0a5f9b2a0a84c4e565eec7329ec
The Secure Configuration Register is 64-bits in AArch64 and 32-bits in
AArch32. Use u_register_t instead of unsigned int to reflect this.
Change-Id: I51b69467baba36bf0cfaec2595dc8837b1566934
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits position-independent
addressing of any location within a 4GB range of the PC.
2. Since BL31 _RW_END_ marks the end of BL31 image, care must be
taken that it is aligned to page size since we map this memory
region in BL31 using xlat_v2 lib utils which mandate alignment of
image size to page granularity.
Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
When I tried ENABLE_PIE for my PLAT=uniphier platform, BL31 crashed
at its entry. When it is built with ENABLE_PIE=1, some sections are
inserted before the executable code.
$ make PLAT=uniphier CROSS_COMPILE=aarch64-linux-gnu- ENABLE_PIE=1 bl31
$ aarch64-linux-gnu-objdump -h build/uniphier/release/bl31/bl31.elf | head -n 13
build/uniphier/release/bl31/bl31.elf: file format elf64-littleaarch64
Sections:
Idx Name Size VMA LMA File off Algn
0 .dynsym 000002a0 0000000081000000 0000000081000000 00010000 2**3
CONTENTS, ALLOC, LOAD, READONLY, DATA
1 .dynstr 000002a0 00000000810002a0 00000000810002a0 000102a0 2**0
CONTENTS, ALLOC, LOAD, READONLY, DATA
2 .hash 00000124 0000000081000540 0000000081000540 00010540 2**3
CONTENTS, ALLOC, LOAD, READONLY, DATA
3 ro 0000699c 0000000081000664 0000000081000664 00010664 2**11
CONTENTS, ALLOC, LOAD, CODE
The previous stage loader generally jumps over to the base address of
BL31, where no valid instruction exists.
I checked the linker script of Linux (arch/arm64/kernel/vmlinux.lds.S)
and U-Boot (arch/arm/cpu/armv8/u-boot.lds), both of which support
relocation. They simply discard those sections.
Do similar in TF-A too.
Change-Id: I6c33e9143856765d4ffa24f3924b0ab51a17cde9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Even though ERET always causes a jump to another address, aarch64 CPUs
speculatively execute following instructions as if the ERET
instruction was not a jump instruction.
The speculative execution does not cross privilege-levels (to the jump
target as one would expect), but it continues on the kernel privilege
level as if the ERET instruction did not change the control flow -
thus executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded, however they can leak data using
microarchitectural side channels. This speculative execution is very
reliable (seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and Optee OS:
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits position-independent
addressing of any location within a 4GB range of the PC.
2. Since BL31 _RW_END_ marks the end of BL31 image, care must be
taken that it is aligned to page size since we map this memory
region in BL31 using xlat_v2 lib utils which mandate alignment of
image size to page granularity.
Change-Id: Ic745c5a130fe4239fa2742142d083b2bdc4e8b85
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Some platforms are extremely memory constrained and must split BL31
between multiple non-contiguous areas in SRAM. Allow the NOBITS
sections (.bss, stacks, page tables, and coherent memory) to be placed
in a separate region of RAM from the loaded firmware image.
Because the NOBITS region may be at a lower address than the rest of
BL31, __RW_{START,END}__ and __BL31_{START,END}__ cannot include this
region, or el3_entrypoint_common would attempt to invalidate the dcache
for the entire address space. New symbols __NOBITS_{START,END}__ are
added when SEPARATE_NOBITS_REGION is enabled, and the dcache for the
NOBITS region is invalidated separately.
Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: Idedfec5e4dbee77e94f2fdd356e6ae6f4dc79d37
The Secure Partition Manager (SPM) prototype implementation is
being removed. This is preparatory work for putting in place a
dispatcher component that, in turn, enables partition managers
at S-EL2 / S-EL1.
This patch removes:
- The core service files (std_svc/spm)
- The Resource Descriptor headers (include/services)
- SPRT protocol support and service definitions
- SPCI protocol support and service definitions
Change-Id: Iaade6f6422eaf9a71187b1e2a4dffd7fb8766426
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
There are two different implementations of Secure Partition
management in TF-A. One is based on the "Management Mode" (MM)
design, the other is based on the Secure Partition Client Interface
(SPCI) specification. Currently there is a dependency between their
build flags that shouldn't exist, making further development
harder than it should be. This patch removes that
dependency, making the two flags function independently.
Before: ENABLE_SPM=1 is required for using either implementation.
By default, the SPCI-based implementation is enabled and
this is overridden if SPM_MM=1.
After: ENABLE_SPM=1 enables the SPCI-based implementation.
SPM_MM=1 enables the MM-based implementation.
The two build flags are mutually exclusive.
Note that the name of the ENABLE_SPM flag remains a bit
ambiguous - this will be improved in a subsequent patch. For this
patch the intention was to leave the name as-is so that it is
easier to track the changes that were made.
Change-Id: I8e64ee545d811c7000f27e8dc8ebb977d670608a
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
EA handlers for exceptions taken from lower ELs invoke the el3_exit
function at the end. However, there was a bug with SP maintenance which
resulted in el3_exit setting the runtime stack to the context pointer.
This in turn caused memory corruption on consecutive EL3 entries.
Signed-off-by: Jan Dabros <jsd@semihalf.com>
Change-Id: I0424245c27c369c864506f4baa719968890ce659
Ported the pmf asm macros and the asm code in the bl31 entrypoint
necessary for the instrumentation to AArch32.
Since smc dispatch is handled by the bl32 payload on AArch32, we
provide this service only if AARCH32_SP=sp_min is set.
Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com>
Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
The 9p interface provides abstraction layers allowing the software
that uses devices to be independent from the hardware.
This patch provides a file system abstraction to link drivers to their
devices and propose a common interface to expose driver operations to
higher layers. This file system can be used to access and configure a
device by doing read/write operations.
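A hypothetical sketch of the shape of such an abstraction; the real
interface in the tree may differ:

  #include <stddef.h>

  /* A driver exposes a common set of operations... */
  typedef struct dev_ops {
          int (*open)(const char *path);
          int (*read)(int fd, void *buf, size_t len);
          int (*write)(int fd, const void *buf, size_t len);
          int (*close)(int fd);
  } dev_ops_t;

  /* ...and a device entry binds a path to a driver, so higher layers can
   * access and configure the device through read/write calls. */
  typedef struct device {
          const char *name;     /* e.g. "/fip" */
          const dev_ops_t *ops; /* driver operations */
  } device_t;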
Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: Ia9662393baf489855dc0c8f389fe4a0afbc9c255
When a firmware is compiled as a Position Independent Executable it
needs to request GDT fixup by passing the size of the memory region to
the el3_entrypoint_common macro.
The Global Descriptor Table fixup will be done early during the cold
boot process of the primary core.
Currently only BL31 supports PIE, but in future, when BL2_AT_EL3 is
compiled as PIE, it can simply pass the fixup size to the common EL3
entrypoint macro to fix up the GDT.
The reason for this patch is to overcome the bug introduced by SHA
330ead806, which called the fixup routine for each core, re-initializing
global pointers and thus overwriting any changes done by the previous
core.
Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Currently, sections within .text/.rodata/.data/.bss are emitted in the
order they are seen by the linker. This leads to wasted space, when a
section with a larger alignment follows one with a smaller alignment.
We can avoid this wasted space by sorting the sections.
To take full advantage of this, we must disable generation of common
symbols, so "common" data can be sorted along with the rest of .bss.
An example of the improvement, from `make DEBUG=1 PLAT=sun50i_a64 bl31`:
.text => no change
.rodata => 16 bytes saved
.data => 11 bytes saved
.bss => 576 bytes saved
As a side effect, the addition of `-fno-common` in TF_CFLAGS makes it
easier to spot bugs in header files.
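A small example of the kind of header bug this exposes (the header name
is hypothetical):

  /* plat_setup.h: a variable defined, not just declared, in a header. */
  int plat_boot_count;

  /* With -fcommon every translation unit including this header emits a
   * "common" symbol and the linker silently merges them. With
   * -fno-common each inclusion becomes a real definition and the link
   * fails with a multiple-definition error, exposing the missing
   * 'extern'. */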
Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: I073630a9b0b84e7302a7a500d4bb4b547be01d51
The coherent I-cache is causing a prefetch violation: when the core
executes an instruction that has recently been modified, it might
fetch a stale instruction, which violates the ordering of instruction
fetches.
The workaround includes an instruction sequence writing to
implementation-defined registers to trap all EL0 IC IVAU instructions to
EL3, and a trap handler that executes a TLB inner-shareable invalidation
to an arbitrary address followed by a DSB.
Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
which returns a 128-bit value and uses the Generic Timer physical
counter value to increase the randomness of the generated key
(sketched below).
The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
`pauth_disable_el1()` and `pauth_disable_el3()` functions disable
PAuth for EL1 and EL3 respectively;
`pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
cpu-data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
`save_gp_registers()` and `pauth_context_save()`;
`restore_gp_pauth_registers()` replaces `pauth_context_restore()`
and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated
space for 12 uint64_t PAuth registers instead of 10, by removing the
macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
in `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
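A hedged sketch of the key generation idea only; the helper name and
the way the counter is folded in are illustrative, and a production
implementation should draw on a proper entropy source:

  #include <stdint.h>

  /* Stand-in for the usual CNTPCT_EL0 accessor. */
  uint64_t read_cntpct_el0(void);

  static uint64_t apkey[2];

  /* Fold the Generic Timer physical counter into a 128-bit key so that
   * each warm boot or CPU-on event produces a different value. */
  static uint64_t *init_apkey_sketch(void)
  {
          uint64_t cnt = read_cntpct_el0();

          apkey[0] ^= cnt;
          apkey[1] ^= ~cnt;

          return apkey;
  }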
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support for the Undefined Behaviour sanitizer. There are
two types of support offered - minimalistic trapping support which
essentially immediately crashes on undefined behaviour and full support
with full debug messages.
The full support relies on ubsan.c which has been adapted from code used
by OPTEE.
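A small example of the kind of defect the sanitizer catches at runtime:

  #include <limits.h>

  /* Signed integer overflow is undefined behaviour: with the minimal
   * trapping support this traps immediately when c == INT_MAX; with the
   * full support the ubsan.c handlers print a diagnostic first. */
  static int next_counter(int c)
  {
          return c + 1; /* overflows when c == INT_MAX */
  }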
Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>