As per the SMCCC spec, Table 2.1, bits 23:17 must be zero (MBZ)
for all Fast Calls (i.e. when bit[31] == 1).
Add this check to ensure that SMC FIDs reaching the SMC handler
have bits 23:17 cleared; if not, capture and report them
as unknown SMCs at the core.
Also, the C runtime stack is copied to the stack pointer well in
advance, to leverage the existing el3_exit routine for unknown SMCs.
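A minimal illustration of the check (standalone C sketch; the mask
follows Table 2.1, but the helper name is hypothetical, not TF-A's
actual implementation):

  #include <stdbool.h>
  #include <stdint.h>

  #define FID_FAST_CALL (UINT32_C(1) << 31)    /* bit[31]: Fast Call */
  #define FID_MBZ_MASK  (UINT32_C(0x7f) << 17) /* bits[23:17]: MBZ */

  /* Hypothetical helper: true if the FID must be reported as an
   * unknown SMC because its MBZ bits are set. */
  static bool smc_fid_mbz_violated(uint32_t fid)
  {
          return (fid & FID_FAST_CALL) && (fid & FID_MBZ_MASK);
  }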
Change-Id: I9972216db5ac164815011177945fb34dadc871b0
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
When we reach sysreg_handler64 from any trap handling, we enter
this path from a lower EL, and thus we should call the lower_el_panic
reporting mechanism to print the panic report.
Make report_elx_panic available through the assembly function
elx_panic, which can be used for reporting any lower_el_panic.
Change-Id: Ieb260cf20ea327a59db84198b2c6a6bfc9ca9537
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Currently a panic call invokes do_panic, which calls el3_panic, but now
panic handles only panics from EL3, with a clear separation to use
lower_el_panic(), which handles panics from lower ELs.
So now we can remove do_panic and just call el3_panic for all panics.
Change-Id: I739c69271b9fb15c1176050877a9b0c0394dc739
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Clean up weak links to el3_panic and restrict crash_reporting usage
to BL31.
Crash reporting is not used in BL1 and BL2, where weak linkage to
el3_panic is used instead; this can cause ambiguity in understanding
the code. Remove this weak linkage and introduce functions that should
be used when we have crash reporting for EL3 panics.
Change-Id: Ic5c711143ba36898ef9574a078b8fa02effceb12
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Currently we call el3_panic for panics from EL3 and elx_panic for
panics from lower ELs.
Once we boot into a rich OS environment, we interact with BL31 using
SMC/ABI calls, and we can also decide to handle any lower EL panics in
EL3. A panic can occur in a lower EL from the rich OS, or during
SMC/ABI calls after the context switch to EL3.
After booting into any rich OS we may thus land in a panic either from
the rich OS or while servicing an SMC call. Here the logic to choose
el3_panic or elx_panic is flawed, as spsr_el3[3:0] is always EL3h,
and we end up in elx_panic even if the panic occurred in EL3 during
SMC handling.
We decouple elx_panic usage for its intended purpose and
introduce lower_el_panic, which calls elx_panic. Currently
lower_el_panic is called from the default platform_ea_handle, which
is invoked on a panic from any of the lower ELs.
Also remove the weak linkage for elx_panic and rename it to
report_elx_panic, which can be used with lower_el_panic.
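For illustration, the mode field this routing decision inspects can be
read like this (standalone sketch; the values are the architectural
SPSR_EL3.M[3:0] encodings, the helper name is hypothetical):

  #include <stdint.h>

  #define SPSR_M_MASK UINT64_C(0xf) /* SPSR_EL3.M[3:0] */
  #define MODE_EL3H   UINT64_C(0xd) /* 0b1101: EL3 using SP_EL3 */

  /* Hypothetical helper: does the saved PSTATE claim the exception
   * was taken from EL3 itself? */
  static int spsr_says_el3(uint64_t spsr_el3)
  {
          return (spsr_el3 & SPSR_M_MASK) == MODE_EL3H;
  }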
Change-Id: I268bca89c01c60520d127ef6c7ba851460edc747
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
handle_lower_el_async_ea and enter_lower_el_async_ea are the same
except for saving the x30 register. With the previous patch, x30 is now
freed before calling these functions, so we don't need both of them.
This patch also unifies the naming convention; now we have 3 handlers:
- handle_lower_el_ea_esb
- handle_lower_el_sync_ea
- handle_lower_el_async_ea
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I63b584cf059bac80195aa334981d50fa6272cf49
Most of the macros/routines in vector entries need a free scratch
register. Introduce a macro "save_x30" and call it right at the
beginning of vector entries where x30 is used. It is more explicit and
less error prone.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I617f3d41a120739e5e3fe1c421c79ceb70c1188e
The following macros are removed:
- handle_async_ea: it duplicates the "check_and_unmask_ea"
  functionality.
- check_if_serror_from_EL3: this macro is small and called only once,
  so it is replaced with inline instructions at the caller.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Id7eec6263ec23cc8792139f491c563f616fd3618
At the moment we only handle SMC traps from lower ELs, but ignore any
other synchronous traps and just panic.
To cope with system register traps, which we might need to emulate,
introduce a C function to handle those traps, and wire it up in the
exception handler.
We provide a dispatcher function (in C) that will call a platform
specific implementation for certain (classes of) system registers.
For now this is empty.
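A minimal sketch of what such a dispatcher could look like (the ISS
field layout follows the Arm ARM encoding for EC 0x18, trapped
MSR/MRS/system instruction; the function and handler names are
illustrative, not the actual TF-A interface):

  #include <stdint.h>

  /* ESR_ELx ISS layout for EC 0x18 (trapped MSR/MRS/system instr.) */
  #define ISS_OP0(iss) (((iss) >> 20) & 0x3)
  #define ISS_OP2(iss) (((iss) >> 17) & 0x7)
  #define ISS_OP1(iss) (((iss) >> 14) & 0x7)
  #define ISS_CRN(iss) (((iss) >> 10) & 0xf)
  #define ISS_RT(iss)  (((iss) >>  5) & 0x1f)
  #define ISS_CRM(iss) (((iss) >>  1) & 0xf)
  #define ISS_DIR(iss) ((iss) & 0x1)  /* 1: read (MRS) */

  /* Illustrative platform hook: 0 if handled, < 0 otherwise. */
  int plat_handle_sysreg_trap(uint64_t esr_el3, void *ctx);

  /* Illustrative dispatcher: route on the encoding, default to the
   * platform hook. */
  static int handle_sysreg_trap(uint64_t esr_el3, void *ctx)
  {
          uint32_t iss = (uint32_t)(esr_el3 & 0x1ffffff);

          switch (ISS_OP0(iss)) {
          /* dispatch on (op0, op1, CRn, CRm, op2) classes here */
          default:
                  return plat_handle_sysreg_trap(esr_el3, ctx);
          }
  }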
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Change-Id: If147bcb49472eb02791498700300926afbcf75ff
The scr_el3 register cannot be modified in lower ELs, which means it
retains the value stored in the EL3 cpu context structure for the
given world. So we should not save the register when entering EL3
from a lower EL, as we already have a copy of it in the cpu context.
During EL3 execution the SCR_EL3 value can be modified in the
following cases:
1. Changes required for EL3 execution; these are temporary
   and do not need to be saved.
2. Changes which affect lower EL execution; these changes need to be
   written to the cpu context as well and will be retrieved when
   scr_el3 is restored as part of exiting EL3.
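Case 2 could be implemented along these lines (standalone sketch; the
helper and context field names are hypothetical, used only for
illustration):

  #include <stdint.h>

  struct cpu_context { uint64_t scr_el3; /* ... */ };

  /* Hypothetical helper for case 2: keep the live register and the
   * saved copy in sync so the value survives the EL3 exit path. */
  static void write_ctx_scr_el3(struct cpu_context *ctx, uint64_t val)
  {
          ctx->scr_el3 = val;                              /* saved copy */
          __asm__ volatile("msr scr_el3, %0" :: "r"(val)); /* live reg */
          __asm__ volatile("isb");
  }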
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I9cc984ddf50e27d09e361bd83b1b3c9f068cf2fd
SMCCCv1.3 introduces the SVE hint bit added to the SMC FID (bit 16)
denoting that the world issuing an SMC doesn't expect the callee to
preserve the SVE state (FFR, predicates, Zn vector bits greater than
127). Update the generic SMC handler to copy the SVE hint bit state
to SMC flags and mask out the bit by default for the services called
by the standard dispatcher. It is permitted by the SMCCC standard to
ignore the bit as long as the SVE state is preserved. In any case a
callee must preserve the NEON state (FPCR/FPSR, Vn 128-bit vectors)
whatever the SVE hint bit state.
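The bit manipulation amounts to something like this (standalone
sketch; SMC_FLAG_SVE_HINT is an illustrative flag name, not
necessarily the one used in the tree):

  #include <stdint.h>

  #define FID_SVE_HINT_BIT  (UINT32_C(1) << 16) /* SMCCCv1.3 SVE hint */
  #define SMC_FLAG_SVE_HINT UINT32_C(0x1)       /* illustrative flag */

  /* Copy the hint into the handler flags, then clear it from the FID
   * so standard services see a canonical function identifier. */
  static uint32_t extract_sve_hint(uint32_t *fid, uint32_t flags)
  {
          if (*fid & FID_SVE_HINT_BIT)
                  flags |= SMC_FLAG_SVE_HINT;
          *fid &= ~FID_SVE_HINT_BIT;
          return flags;
  }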
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: I2b163ed83dc311b8f81f96b23c942829ae9fa1b5
The following hardening is done around ESR_EL3 register usage:
- Panic if the exception is anything other than an SError.
- The AET field is only valid if DFSC is 0x11, so move the DFSC check
  before AET.
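In C terms the ordering constraint looks like this (standalone
sketch; field positions follow the Arm ARM ESR_ELx layout for SError,
helper names are illustrative):

  #include <stdint.h>

  extern void panic(void);

  #define ESR_EC(esr)   (((esr) >> 26) & 0x3f)
  #define EC_SERROR     UINT64_C(0x2f)
  #define ESR_DFSC(esr) ((esr) & 0x3f)
  #define DFSC_ASYNC    UINT64_C(0x11) /* AET only valid here */
  #define ESR_AET(esr)  (((esr) >> 10) & 0x7)

  static void handle_serror(uint64_t esr_el3)
  {
          if (ESR_EC(esr_el3) != EC_SERROR)
                  panic();
          /* Check DFSC first: AET is meaningless otherwise. */
          if (ESR_DFSC(esr_el3) == DFSC_ASYNC) {
                  uint64_t aet = ESR_AET(esr_el3);
                  /* ... classify severity using aet ... */
                  (void)aet;
          }
  }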
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ib15159920f6cad964332fd40f88943aee2bc73b4
During a synchronous exception, the 'enter_lower_el_sync_ea' handler
tests the ESR_EL3 EA bit and calls 'report_unhandled_exception' if
it is not set.
EA = 0 with IFSC = SEA seems to be a contradiction. EA provides further
classification of a synchronous abort, while a synchronous abort is
determined by the IFSC value on an instruction fetch synchronous abort.
As a result, EA will never be set to 1 on an instruction fetch
synchronous abort, and 'report_unhandled_exception' should not be
called.
This patch removes this behavior to allow the platform to handle the
exception.
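For reference, the fields involved sit in ESR_EL3 as follows (sketch
only; values per the Arm ARM):

  #include <stdint.h>

  #define ESR_EA_BIT    (UINT64_C(1) << 9) /* External Abort type */
  #define ESR_IFSC(esr) ((esr) & 0x3f)     /* fault status code */
  #define IFSC_SEA      UINT64_C(0x10)     /* sync external abort */

  /* The removed behaviour effectively rejected any exception whose
   * EA bit was clear, even a genuine SEA per the IFSC field. */
  static int was_rejected(uint64_t esr_el3)
  {
          return (esr_el3 & ESR_EA_BIT) == 0;
  }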
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: I3f004447ad4316d81649063e1ffb3ac644c83ede
It is not always the case that RESET_TO_BL31 enabled platforms don't
execute a bootloader before BL31.
For those use cases, being able to receive arguments from that first
loader (e.g. a DTB with TPM logs) might be a necessary feature.
This code has been validated on iMX8mm.
Signed-off-by: Jorge Ramirez-Ortiz <jorge@foundries.io>
Change-Id: Ibf00c3867cb1d1012b8b376e64ccaeca1c9d2bff
During a transition to a higher EL some of the PSTATE bits are not set
by hardware; this means that their state may be leaked from lower ELs.
This patch sets those bits to a default value upon entry to EL3.
This patch was tested using a debugger to check that the PSTATE values
are correctly set, as well as by adding a test in the next patch to
ensure the PSTATE in lower ELs is still maintained after this change.
Change-Id: Ie546acbca7b9aa3c86bd68185edded91b2a64ae5
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
In the next patch we add an extra step of setting the PSTATE
registers to a known state on EL3 entry. In this patch we create
the function prepare_el3_entry to wrap the steps needed before
EL3 entry. For now this is only save_gp_pmcr_pauth_regs.
Change-Id: Ie26dc8d89bfaec308769165d2649e84d41be196c
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
This patch updates and refactors the GPT library and fixes bugs.
- Support all combinations of PGS, PPS, and L0GPTSZ parameters.
- PPS and PGS are set at runtime; L0GPTSZ is read from GPCCR_EL3.
- Use compiler definitions to simplify code.
- Rename functions to better suit their intended uses.
- Require the MMU to be enabled before GPT APIs are called.
- Add comments to make function usage clearer in the GPT library.
- Add an _rme suffix to file names to better differentiate from the
  GPT file system code.
- Rename gpt_defs.h to gpt_rme_private.h to better separate private
  and public code.
- Rename gpt_core.c to gpt_rme.c to better conform to TF-A precedent.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I4cbb23b0f81e697baa9fb23ba458aa3f7d1ed919
This patch introduces the Granule Protection Table (GPT)
library code. This implementation will be updated later to
be more flexible, as the current implementation is very rigid.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I3af824a28c6e9a5d36459c0c51d2d9bebfba1505
FEAT_RME introduces two additional security states: the Root and
Realm security states. This patch adds Realm security state awareness
to the SMCCC helpers and the entry point info structure.
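As a sketch, the four states and the SCR_EL3.{NSE, NS} encoding that
distinguishes the lower-EL ones could be modelled like this (enum and
helper names are illustrative; the encoding follows the FEAT_RME
definition):

  #include <stdint.h>

  typedef enum {
          SEC_STATE_SECURE,     /* NSE=0, NS=0 */
          SEC_STATE_NON_SECURE, /* NSE=0, NS=1 */
          SEC_STATE_REALM,      /* NSE=1, NS=1 */
          SEC_STATE_ROOT        /* EL3 itself */
  } sec_state_t;

  #define SCR_NS_BIT  (UINT64_C(1) << 0)
  #define SCR_NSE_BIT (UINT64_C(1) << 62)

  static sec_state_t lower_el_state(uint64_t scr_el3)
  {
          int ns = (scr_el3 & SCR_NS_BIT) != 0;

          if (scr_el3 & SCR_NSE_BIT)
                  return SEC_STATE_REALM; /* NSE=1, NS=0 is reserved */
          return ns ? SEC_STATE_NON_SECURE : SEC_STATE_SECURE;
  }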
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I9cdefcc1aa71259b2de46e5fb62b28d658fa59bd
For SoCs which do not implement RAS, use DSB as a barrier to
synchronize pending external aborts at the entry and exit of
exception handlers. This is needed to isolate the SErrors to
appropriate context.
However, this introduces an unintended side effect as discussed
in https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
The side effect and a quick workaround are provided as part of this
patch and summarized here:
The explicit DSB at the entry of various exception vectors in BL31
for handling exceptions from lower ELs can inadvertently trigger an
SError exception in EL3 due to pending asynchronous aborts in lower
ELs. This will end up being handled by serror_sp_elx in EL3, which will
ultimately panic and die.
The workaround is to update a flag to indicate if the exception
truly came from EL3. This flag is allocated in the cpu_context
structure. This is not a bullet-proof solution to the problem at hand
because we assume the instructions following "isb" that help to update
the flag (lines 100-102 & 139-141) execute without causing further
exceptions.
Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
As per the latest mailing list communication [1], we decided to
update the AT speculative workaround implementation in order to
disable page table walks for lower ELs (EL1 or EL0) immediately
after context switching to EL3 from lower ELs.
The previous implementation of the AT speculative workaround is
available here: 45aecff00
The AT speculative workaround is updated as below:
1. Avoid saving and restoring of SCTLR and TCR registers for EL1
in context save and restore routine respectively.
2. On EL3 entry, save SCTLR and TCR registers for EL1.
3. On EL3 entry, update EL1 system registers to disable stage 1
page table walk for lower ELs (EL1 and EL0) and enable EL1
MMU.
4. On EL3 exit, restore SCTLR and TCR registers for EL1 which
are saved in step 2.
[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
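Steps 2 and 3 amount to something like the following (standalone
sketch; the bit positions are the architectural TCR_EL1.EPD{0,1} and
SCTLR_EL1.M fields, the helper and context names are illustrative):

  #include <stdint.h>

  #define SCTLR_M_BIT  (UINT64_C(1) << 0)  /* stage 1 MMU enable */
  #define TCR_EPD0_BIT (UINT64_C(1) << 7)  /* disable TTBR0 walks */
  #define TCR_EPD1_BIT (UINT64_C(1) << 23) /* disable TTBR1 walks */

  struct el1_ctx { uint64_t sctlr_el1, tcr_el1; };

  static void at_workaround_on_el3_entry(struct el1_ctx *ctx)
  {
          uint64_t sctlr, tcr;

          /* Step 2: save the real EL1 values into the context. */
          __asm__ volatile("mrs %0, sctlr_el1" : "=r"(sctlr));
          __asm__ volatile("mrs %0, tcr_el1" : "=r"(tcr));
          ctx->sctlr_el1 = sctlr;
          ctx->tcr_el1 = tcr;

          /* Step 3: disable stage 1 walks, keep the EL1 MMU on. */
          tcr |= TCR_EPD0_BIT | TCR_EPD1_BIT;
          __asm__ volatile("msr tcr_el1, %0" :: "r"(tcr));
          __asm__ volatile("msr sctlr_el1, %0" :: "r"(sctlr | SCTLR_M_BIT));
          __asm__ volatile("isb");
  }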
Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
This patch provides a fix for incorrect crash dump data for a
lower EL when TF-A is built with the HANDLE_EA_EL3_FIRST=1 option,
which enables routing of External Aborts and SErrors to EL3.
Change-Id: I9d5e6775e6aad21db5b78362da6c3a3d897df977
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
At the moment, address demangling is only used by the backtrace
functionality. However, at some point, other parts of the TF-A
codebase may want to use it.
The 'demangle_address' function is replaced with a single XPACI
instruction, which is also added in 'do_crash_reporting()'.
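XPACI strips the Pointer Authentication Code from an instruction
address in place; from C it can be reached with inline assembly
(sketch; requires a compiler targeting armv8.3-a or later):

  #include <stdint.h>

  /* Strip the PAC bits from an instruction address (Armv8.3 XPACI). */
  static inline uint64_t strip_pac(uint64_t addr)
  {
          __asm__("xpaci %0" : "+r"(addr));
          return addr;
  }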
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Change-Id: I4424dcd54d5bf0a5f9b2a0a84c4e565eec7329ec
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
   of a location within a 1MB range of the PC. However, the adrp
   instruction together with an add permits position independent
   addressing of any location within a 4GB range of the PC.
2. Since BL31 _RW_END_ marks the end of the BL31 image, care must be
   taken that it is aligned to page size, since we map this memory
   region in BL31 using xlat_v2 lib utils, which mandate alignment of
   the image size to page granularity.
Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Even though ERET always causes a jump to another address, AArch64 CPUs
speculatively execute the following instructions as if the ERET
instruction was not a jump instruction.
The speculative execution does not cross privilege levels (to the jump
target as one would expect), but it continues on the kernel privilege
level as if the ERET instruction did not change the control flow,
thus executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded; however, they can leak data through
microarchitectural side channels. This speculative execution is very
reliable (it seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
679db7080129fb48ace43a08873eceabfd092aa1
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
   of a location within a 1MB range of the PC. However, the adrp
   instruction together with an add permits position independent
   addressing of any location within a 4GB range of the PC.
2. Since BL31 _RW_END_ marks the end of the BL31 image, care must be
   taken that it is aligned to page size, since we map this memory
   region in BL31 using xlat_v2 lib utils, which mandate alignment of
   the image size to page granularity.
Change-Id: Ic745c5a130fe4239fa2742142d083b2bdc4e8b85
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
EA handlers for exceptions taken from lower ELs invoke the el3_exit
function at the end. However, there was a bug with SP maintenance which
resulted in el3_exit setting the runtime stack to the context. This in
turn caused memory corruption on consecutive EL3 entries.
Signed-off-by: Jan Dabros <jsd@semihalf.com>
Change-Id: I0424245c27c369c864506f4baa719968890ce659
Ported the PMF asm macros and the asm code in the BL31 entrypoint
necessary for the instrumentation to AArch32.
Since SMC dispatch is handled by the BL32 payload on AArch32, we
provide this service only if AARCH32_SP=sp_min is set.
Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com>
Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
When a firmware is compiled as a Position Independent Executable it
needs to request GDT fixup by passing the size of the memory region to
the el3_entrypoint_common macro.
The global descriptor table fixup will be done early during the cold
boot process of the primary core.
Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is
compiled as PIE, it can simply pass the fixup size to the common el3
entrypoint macro to fix up the GDT.
The reason for this patch was to overcome the bug introduced by SHA
330ead806, which called the fixup routine for each core, causing
re-initialization of global pointers and thus overwriting any changes
done by the previous core.
Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
A coherent I-cache can cause a prefetch violation: when the core
executes an instruction that has recently been modified, the core might
fetch a stale instruction, which violates the ordering of instruction
fetches.
The workaround includes an instruction sequence, using implementation
defined registers, to trap all EL0 IC IVAU instructions to EL3, and a
trap handler that executes a TLB inner-shareable invalidation to an
arbitrary address followed by a DSB.
Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
  which returns a 128-bit value and uses the generic timer physical
  counter value to increase the randomness of the generated key.
  The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
  generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
  `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
  PAuth for EL1 and EL3 respectively;
  `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
  the cpu-data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
`save_gp_registers()` and `pauth_context_save()`;
`restore_gp_pauth_registers()` replaces `pauth_context_restore()`
and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated
  space for 12 uint64_t PAuth registers instead of 10, by removing the
  macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
  and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
in `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support for the Undefined Behaviour sanitizer. There
are two types of support offered: minimalistic trapping support, which
essentially crashes immediately on undefined behaviour, and full
support with full debug messages.
The full support relies on ubsan.c, which has been adapted from code
used by OP-TEE.
Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
Printing a newline is a relatively common thing for code to want
to do. Therefore, this patch moves this function into a common part
of the code that anyone can use.
Change-Id: I2cad699fde00ef8d2aabf8bf35742ddd88d090ba
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
This patch fixes an issue where secure world timing information
can be leaked because the Secure Cycle Counter is not disabled.
For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to the Non-secure state, and cycle
counting gets disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.
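The two mechanisms could be expressed like this (standalone sketch;
the bit positions are the architectural MDCR_EL3.SCCD and PMCR_EL0.DP
fields, the helper names are illustrative):

  #include <stdint.h>

  #define MDCR_SCCD_BIT (UINT64_C(1) << 23) /* v8.5 Secure cycle count disable */
  #define PMCR_DP_BIT   (UINT64_C(1) << 5)  /* disable CCNT when prohibited */

  /* Armv8.5 path: disable secure cycle counting outright. */
  static void disable_secure_cycle_counting_v85(void)
  {
          uint64_t mdcr;
          __asm__ volatile("mrs %0, mdcr_el3" : "=r"(mdcr));
          __asm__ volatile("msr mdcr_el3, %0" :: "r"(mdcr | MDCR_SCCD_BIT));
  }

  /* Pre-Armv8.5 path: set PMCR_EL0.DP on secure world entry. */
  static void disable_cycle_counting_pre_v85(void)
  {
          uint64_t pmcr;
          __asm__ volatile("mrs %0, pmcr_el0" : "=r"(pmcr));
          __asm__ volatile("msr pmcr_el0, %0" :: "r"(pmcr | PMCR_DP_BIT));
  }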
Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch modifies crash reporting for AArch64 to provide
aligned output of register dump and GIC registers.
Change-Id: I8743bf1d2d6d56086e735df43785ef28051c5fc3
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The AArch32 system registers are not listed if the platform supports
AArch64 only.
Change-Id: I087a10ae6e7cad1bb52775a344635dbac1f12679
Signed-off-by: Imre Kis <imre.kis@arm.com>
This patch adds the functionality needed for platforms to provide
the Branch Target Identification (BTI) extension, introduced to AArch64
in Armv8.5-A, which adds the BTI instruction used to mark valid targets
for indirect branches. The patch sets the new GP bit [50] in the stage 1
Translation Table Block and Page entries to denote guarded EL3 code
pages, which will cause the processor to trap instructions in protected
pages trying to perform an indirect branch to any instruction other
than BTI.
The BTI feature is selected by the BRANCH_PROTECTION option, which
supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
Authentication, and is disabled by default. Enabling BTI requires
compiler support and was tested with GCC versions 9.0.0, 9.0.1 and
10.0.0.
The assembly macros and helpers are modified to accommodate the BTI
instruction.
This is an experimental feature.
Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3
is now an internal flag, and the BRANCH_PROTECTION flag should be
used instead to enable Pointer Authentication.
Note: the USE_ROMLIB=1 option is currently not supported.
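Setting the guarded-page attribute is a one-bit change in the
descriptor (sketch; bit 50 is the architectural GP bit, the macro and
function names are illustrative):

  #include <stdint.h>

  #define TTE_GP_BIT (UINT64_C(1) << 50) /* stage 1 guarded page (BTI) */

  /* Mark an EL3 code page descriptor as guarded, so indirect branches
   * into it must land on a BTI instruction. */
  static uint64_t mark_guarded(uint64_t page_desc)
  {
          return page_desc | TTE_GP_BIT;
  }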
Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The intention of this patch is to leverage the existing el3_exit()
return routine for the smc_unknown return path rather than a custom set
of instructions. In order to leverage el3_exit(), the necessary
preparation (i.e., saving the system registers in addition to the GP
registers) must be performed. Hence the series of instructions which
save system registers (like SPSR_EL3, SCR_EL3, etc.) to the stack is
moved above the group of instructions which decode the OEN from the SMC
function identifier and obtain the specific service handler in
rt_svc_descs_array. This ensures that the control flow for both known
and unknown SMC calls is similar.
Change-Id: I67f94cfcba176bf8aee1a446fb58a4e383905a87
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Replace call to pauth_context_save() with pauth_context_restore()
in case of unknown SMC call.
Change-Id: Ib863d979faa7831052b33e8ac73913e2f661f9a0
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The relocation fixup code must be called at the beginning of bl31
entrypoint to ensure that CPU specific reset handlers are fixed up for
relocations.
Change-Id: Icb04eacb2d4c26c26b08b768d871d2c82777babb
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
In the current Pointer Authentication support, added in
commit b86048c40c,
PAuth gets enabled in the BL31 cold boot entrypoint only
(see bl31_entrypoint() in bl31/aarch64/bl31_entrypoint.S),
but not in bl31_warm_entrypoint().
This results in the EnIA bit [31] in SCTLR_EL3 not being set
and pointer authentication being disabled after a CPU wake-up event.
Fixes ARM-software/tf-issues#684
Change-Id: I27a67804764dfba2a6d72ca119ca2bcff4f536d6
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The previous commit added the infrastructure to load and save
ARMv8.3-PAuth registers during Non-secure <-> Secure world switches, but
didn't actually enable pointer authentication in the firmware.
This patch adds the functionality needed for platforms to provide
authentication keys for the firmware, and a new option (ENABLE_PAUTH) to
enable pointer authentication in the firmware itself. This option is
disabled by default, and it requires CTX_INCLUDE_PAUTH_REGS to be
enabled.
Change-Id: I35127ec271e1198d43209044de39fa712ef202a5
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
ARMv8.3-PAuth adds functionality that supports address authentication of
the contents of a register before that register is used as the target of
an indirect branch, or as a load.
This feature is supported only in AArch64 state.
This feature is mandatory in ARMv8.3 implementations.
This feature adds several registers to EL1. A new option called
CTX_INCLUDE_PAUTH_REGS has been added to select whether TF needs to
save them during Non-secure <-> Secure world switches. This option must
be enabled if the hardware has the registers, or the values will be
leaked during world switches.
To prevent leaks, this patch also disables pointer authentication in
the Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it
will be trapped in EL3.
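The trap mentioned here follows from leaving the PAuth enable bits in
SCR_EL3 clear (sketch; SCR_EL3.APK and SCR_EL3.API are the
architectural bits, the helper name is illustrative):

  #include <stdint.h>

  #define SCR_APK_BIT (UINT64_C(1) << 16) /* 0: trap key reg accesses */
  #define SCR_API_BIT (UINT64_C(1) << 17) /* 0: trap PAuth instructions */

  /* With CTX_INCLUDE_PAUTH_REGS=0, leave APK/API clear for the Secure
   * world so any PAuth use below EL3 traps to EL3. */
  static uint64_t scr_for_secure_world(uint64_t scr_el3)
  {
          return scr_el3 & ~(SCR_APK_BIT | SCR_API_BIT);
  }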
Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
This reverts commit 2f37046524 ("Add support for the SMC Calling
Convention 2.0").
SMCCC v2.0 is no longer required for SPM, and won't be needed in the
future. Removing it makes the SMC handling code less complicated.
The SPM implementation based on SPCI and SPRT was using it, but it has
been adapted to SMCCC v1.0.
Change-Id: I36795b91857b2b9c00437cfbfed04b3c1627f578
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).
For example, this patch had to be created because two headers had
the same name: e0ea0928d5 ("Fix gpio includes of mt8173 platform
to avoid collision."). More recently, this patch had similar
problems: 46f9b2c3a2 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.
Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Use the helper function `save_gp_registers` to save the register
state to cpu_context on entry to EL3 in the SMC handler. This has the
effect of saving x0 - x3 as well into the cpu_context, which was
not done previously, and it unifies the register save sequence
in BL31.
Change-Id: I5753c942263a5f9178deda3dba896e3220f3dd83
Signed-off-by: Soby Mathew <soby.mathew@arm.com>