This patch updates and refactors the GPT library and fixes bugs.
- Support all combinations of PGS, PPS, and L0GPTSZ parameters.
- PPS and PGS are set at runtime, L0GPTSZ is read from GPCCR_EL3.
- Use compiler definitions to simplify code.
- Renamed functions to better suit their intended uses.
- The MMU must be enabled before the GPT APIs are called.
- Added comments to make function usage in the GPT library clearer.
- Added _rme suffix to file names to differentiate better from the
GPT file system code.
- Renamed gpt_defs.h to gpt_rme_private.h to better separate private
and public code.
- Renamed gpt_core.c to gpt_rme.c to better conform to TF-A precedent.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I4cbb23b0f81e697baa9fb23ba458aa3f7d1ed919
This patch introduces the Granule Protection Table (GPT)
library code. This implementation will be updated later to
be more flexible, as the current implementation is very rigid.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I3af824a28c6e9a5d36459c0c51d2d9bebfba1505
FEAT_RME introduces two additional security states: Root and
Realm. This patch adds Realm security state awareness to the
SMCCC helpers and the entry point info structure.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I9cdefcc1aa71259b2de46e5fb62b28d658fa59bd
For SoCs which do not implement RAS, use DSB as a barrier to
synchronize pending external aborts at the entry and exit of
exception handlers. This is needed to isolate the SErrors to
appropriate context.
However, this introduces an unintended side effect as discussed in
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
A summary of the side effect and a quick workaround are provided as
part of this patch and summarized here:
The explicit DSB at the entry of various exception vectors in BL31
for handling exceptions from lower ELs can inadvertently trigger an
SError exception in EL3 due to pending asynchronous aborts in lower
ELs. This will end up being handled by serror_sp_elx in EL3 which will
ultimately panic and die.
The workaround is to update a flag to indicate whether the exception
truly came from EL3. This flag is allocated in the cpu_context
structure. This is not a bulletproof solution to the problem at hand
because we assume the instructions following "isb" that help to update
the flag (lines 100-102 & 139-141) execute without causing further
exceptions.
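A purely illustrative sketch of the check the EL3 SError handler could
make; the flag name is hypothetical, since the commit only states that
the flag lives in cpu_context:

    #include <stdbool.h>
    #include <stdint.h>

    static bool serror_belongs_to_lower_el(const uint64_t *nested_ea_flag)
    {
        /*
         * A non-zero flag means the lower-EL vector entry path has marked
         * the context, so the abort surfaced by the DSB was pending in the
         * lower EL and must not be treated as an error raised by EL3 code.
         */
        return *nested_ea_flag != 0U;
    }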
Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
As per the latest mailing list communication [1], we decided to
update the AT speculative workaround implementation in order to
disable page table walks for lower ELs (EL1 or EL0) immediately
after context switching to EL3 from lower ELs.
The previous implementation of the AT speculative workaround is
available here: 45aecff00
The AT speculative workaround is updated as below (steps 2 and 3 are
sketched after the list):
1. Avoid saving and restoring the SCTLR and TCR registers for EL1
in the context save and restore routines respectively.
2. On EL3 entry, save the SCTLR and TCR registers for EL1.
3. On EL3 entry, update the EL1 system registers to disable stage 1
page table walks for lower ELs (EL1 and EL0) and enable the EL1
MMU.
4. On EL3 exit, restore the SCTLR and TCR registers for EL1 which
were saved in step 2.
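A minimal C sketch of steps 2 and 3, assuming the usual TF-A sysreg
accessor helpers and the TCR_EPD0_BIT/TCR_EPD1_BIT and SCTLR_M_BIT
definitions from arch.h; the context structure shown is illustrative:

    #include <arch_helpers.h>

    struct el1_at_regs {
        u_register_t sctlr_el1;
        u_register_t tcr_el1;
    };

    static void save_and_disable_lower_el_ptw(struct el1_at_regs *regs)
    {
        /* Step 2: save the live EL1 translation controls. */
        regs->sctlr_el1 = read_sctlr_el1();
        regs->tcr_el1 = read_tcr_el1();

        /*
         * Step 3: stop stage 1 page table walks for EL1&0 and keep the
         * EL1 MMU enabled, so speculative AT lookups cannot walk stale
         * lower-EL translation tables.
         */
        write_tcr_el1(regs->tcr_el1 | TCR_EPD0_BIT | TCR_EPD1_BIT);
        isb();
        write_sctlr_el1(regs->sctlr_el1 | SCTLR_M_BIT);
        isb();
    }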
[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
This patch provides a fix for incorrect crash dump data for a
lower EL when TF-A is built with the HANDLE_EA_EL3_FIRST=1 option,
which enables routing of External Aborts and SErrors to EL3.
Change-Id: I9d5e6775e6aad21db5b78362da6c3a3d897df977
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
At the moment, address demangling is only used by the backtrace
functionality. However, at some point, other parts of the TF-A
codebase may want to use it.
The 'demangle_address' function is replaced with a single XPACI
instruction which is also added in 'do_crash_reporting()'.
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Change-Id: I4424dcd54d5bf0a5f9b2a0a84c4e565eec7329ec
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits a position independent
address of any location within a 4GB range of the PC (sketched below).
2. Since BL31 __RW_END__ marks the end of the BL31 image, care must be
taken that it is aligned to page size, since we map this memory
region in BL31 using xlat_v2 lib utils which mandate alignment of
the image size to page granularity.
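A hedged illustration of point 1: computing a position-independent
address with adrp+add from C inline assembly. The helper itself is
illustrative and assumes __RW_END__ is exported by the BL31 linker
script:

    #include <stdint.h>

    static inline uintptr_t bl31_rw_end(void)
    {
        uintptr_t addr;

        /* adr reaches only +/-1MB of the PC; adrp+add covers +/-4GB. */
        __asm__ volatile(
            "adrp   %0, __RW_END__\n\t"
            "add    %0, %0, :lo12:__RW_END__"
            : "=r"(addr));

        return addr;
    }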
Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Even though ERET always causes a jump to another address, AArch64 CPUs
speculatively execute following instructions as if the ERET
instruction was not a jump instruction.
The speculative execution does not cross privilege levels (to the jump
target as one would expect), but it continues at the same privilege
level as if the ERET instruction did not change the control flow -
thus executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded, however they can leak data using
microarchitectural side channels. This speculative execution is very
reliable (it seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
Since BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support it completely.
1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits a position independent
address of any location within a 4GB range of the PC.
2. Since BL31 __RW_END__ marks the end of the BL31 image, care must be
taken that it is aligned to page size, since we map this memory
region in BL31 using xlat_v2 lib utils which mandate alignment of
the image size to page granularity.
Change-Id: Ic745c5a130fe4239fa2742142d083b2bdc4e8b85
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
EA handlers for exceptions taken from lower ELs invoke the el3_exit
function at the end. However, there was a bug with SP maintenance which
resulted in el3_exit setting the runtime stack as the context pointer.
This in turn caused memory corruption on consecutive EL3 entries.
Signed-off-by: Jan Dabros <jsd@semihalf.com>
Change-Id: I0424245c27c369c864506f4baa719968890ce659
Ported the pmf asm macros and the asm code in the bl31 entrypoint
necessary for the instrumentation to AArch32.
Since smc dispatch is handled by the bl32 payload on AArch32, we
provide this service only if AARCH32_SP=sp_min is set.
Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com>
Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
When a firmware image is compiled as a Position Independent Executable
it needs to request GDT fixup by passing the size of the memory region
to the el3_entrypoint_common macro.
The Global Descriptor Table fixup will be done early during the cold
boot process of the primary core.
Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is
compiled as PIE, it can simply pass the fixup size to the common el3
entrypoint macro to fix up the GDT.
The reason for this patch is to overcome the bug introduced by SHA
330ead806, which called the fixup routine for each core, re-initializing
global pointers and thus overwriting any changes done by the previous
core.
Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
The coherent I-cache is causing a prefetch violation where, when the
core executes an instruction that has recently been modified, the core
might fetch a stale instruction, which violates the ordering of
instruction fetches.
The workaround includes an instruction sequence written to
implementation defined registers to trap all EL0 IC IVAU instructions
to EL3, and a trap handler that executes a TLB inner-shareable
invalidation to an arbitrary address followed by a DSB.
Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`
which returns a 128-bit value and uses the Generic Timer physical
counter value to increase the randomness of the generated key.
The new function can be used for generation of all ARMv8.3-PAuth keys
(a possible platform implementation is sketched after this list).
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
`pauth_disable_el1()` and `pauth_disable_el3()` functions disable
PAuth for EL1 and EL3 respectively;
`pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
`save_gp_registers()` and `pauth_context_save()`;
`restore_gp_pauth_registers()` replaces `pauth_context_restore()`
and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
code placed in `el3_exit()`.
- Fixed the issue when `pauth_t pauth_ctx` structure allocated space
for 12 uint64_t PAuth registers instead of 10 by removal of macro
CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
in `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
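A hedged sketch of what a platform implementation of plat_init_apkey()
might look like, mixing the Generic Timer physical count into a 128-bit
value; the mixing shown is illustrative only and is not a
cryptographically strong key generator:

    #include <arch_helpers.h>
    #include <stdint.h>

    __uint128_t plat_init_apkey(void)
    {
        uint64_t cnt = read_cntpct_el0();  /* Generic Timer physical count */
        uint64_t seed = (uint64_t)(uintptr_t)&plat_init_apkey;

        /* Fold the counter into both halves of the key. */
        uint64_t key_lo = seed ^ (cnt << 32) ^ (cnt >> 32);
        uint64_t key_hi = ~seed ^ cnt;

        return ((__uint128_t)key_hi << 64) | key_lo;
    }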
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support for the Undefined Behaviour sanitizer. There are
two types of support offered - minimalistic trapping support which
essentially immediately crashes on undefined behaviour and full support
with full debug messages.
The full support relies on ubsan.c which has been adapted from code used
by OPTEE.
Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
Printing a newline is a relatively common operation. Therefore, this
patch moves this function into a common part of the code that anyone
can use.
Change-Id: I2cad699fde00ef8d2aabf8bf35742ddd88d090ba
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
This patch fixes an issue where secure world timing information
can be leaked because the Secure Cycle Counter is not disabled.
For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For the earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to Non-secure state, and cycle
counting gets disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.
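A hedged C sketch of the two cases described above, assuming the usual
sysreg accessor helpers and the MDCR_SCCD_BIT / PMCR_EL0_DP_BIT
definitions from arch.h:

    #include <arch_helpers.h>
    #include <stdbool.h>

    static void disable_secure_cycle_counting(bool have_armv8_5_pmu)
    {
        if (have_armv8_5_pmu) {
            /* Armv8.5: stop cycle counting in Secure state altogether. */
            write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
        } else {
            /*
             * Earlier architectures: PMCR_EL0.DP disables the cycle
             * counter when event counting is prohibited; the full
             * PMCR_EL0 value is saved/restored across world switches.
             */
            write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
        }
        isb();
    }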
Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch modifies crash reporting for AArch64 to provide
aligned output of register dump and GIC registers.
Change-Id: I8743bf1d2d6d56086e735df43785ef28051c5fc3
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The AArch32 system registers are not listed if the platform supports
AArch64 only.
Change-Id: I087a10ae6e7cad1bb52775a344635dbac1f12679
Signed-off-by: Imre Kis <imre.kis@arm.com>
This patch adds the functionality needed for platforms to provide
Branch Target Identification (BTI) extension, introduced to AArch64
in Armv8.5-A by adding the BTI instruction used to mark valid targets
for indirect branches. The patch sets the new GP bit [50] in the stage 1
Translation Table Block and Page entries to denote guarded EL3 code
pages, which causes the processor to trap instructions in protected
pages that try to perform an indirect branch to any instruction other
than BTI.
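A hedged sketch of how a level-3 page descriptor for EL3 code could be
marked as guarded; the bit positions follow the Armv8.5-A VMSA, but the
macro names here are illustrative rather than the exact xlat_tables_v2
definitions:

    #include <stdint.h>

    #define PAGE_DESC  UINT64_C(0x3)        /* level-3 page descriptor */
    #define GP_BIT     (UINT64_C(1) << 50)  /* Guarded Page (GP) bit   */

    static uint64_t make_guarded_code_desc(uint64_t pa, uint64_t attrs)
    {
        uint64_t desc = pa | attrs | PAGE_DESC;

        /*
         * Guarded pages require indirect-branch targets to be BTI
         * landing pads; any other target instruction causes a trap.
         */
        desc |= GP_BIT;
        return desc;
    }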
BTI feature is selected by BRANCH_PROTECTION option which supersedes
the previous ENABLE_PAUTH used for Armv8.3-A Pointer Authentication
and is disabled by default. Enabling BTI requires compiler support
and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0.
The assembly macros and helpers are modified to accommodate the BTI
instruction.
This is an experimental feature.
Note: The previous ENABLE_PAUTH build option to enable PAuth in EL3
is now an internal flag and the BRANCH_PROTECTION flag should be
used instead to enable Pointer Authentication.
Note: The USE_LIBROM=1 option is currently not supported.
Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The intention of this patch is to leverage the existing el3_exit() return
routine for the smc_unknown return path rather than a custom set of
instructions.
In order to leverage el3_exit(), the necessary counteraction (i.e., saving
the system registers apart from the GP registers) must be performed. Hence
the series of instructions which save system registers (like SPSR_EL3,
SCR_EL3 etc.) to the stack is moved to the top of the group of instructions
which essentially decode the OEN from the SMC function identifier and
obtain the specific service handler in rt_svc_descs_array. This ensures
that the control flow for both known and unknown SMC calls will be
similar.
Change-Id: I67f94cfcba176bf8aee1a446fb58a4e383905a87
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Replace call to pauth_context_save() with pauth_context_restore()
in case of unknown SMC call.
Change-Id: Ib863d979faa7831052b33e8ac73913e2f661f9a0
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The relocation fixup code must be called at the beginning of bl31
entrypoint to ensure that CPU specific reset handlers are fixed up for
relocations.
Change-Id: Icb04eacb2d4c26c26b08b768d871d2c82777babb
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
In the current Pointer Authentication support added in
commit b86048c40c,
PAuth gets enabled in the BL31 cold boot entrypoint only
(see bl31_entrypoint() in bl31/aarch64/bl31_entrypoint.S)
but not in bl31_warm_entrypoint().
This results in the EnIA bit [31] in SCTLR_EL3 not being set
and pointer authentication being disabled after a CPU wake-up event.
Fixes ARM-software/tf-issues#684
Change-Id: I27a67804764dfba2a6d72ca119ca2bcff4f536d6
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The previous commit added the infrastructure to load and save
ARMv8.3-PAuth registers during Non-secure <-> Secure world switches, but
didn't actually enable pointer authentication in the firmware.
This patch adds the functionality needed for platforms to provide
authentication keys for the firmware, and a new option (ENABLE_PAUTH) to
enable pointer authentication in the firmware itself. This option is
disabled by default, and it requires CTX_INCLUDE_PAUTH_REGS to be
enabled.
Change-Id: I35127ec271e1198d43209044de39fa712ef202a5
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
ARMv8.3-PAuth adds functionality that supports address authentication of
the contents of a register before that register is used as the target of
an indirect branch, or as a load.
This feature is supported only in AArch64 state.
This feature is mandatory in ARMv8.3 implementations.
This feature adds several registers to EL1. A new option called
CTX_INCLUDE_PAUTH_REGS has been added to select if the TF needs to save
them during Non-secure <-> Secure world switches. This option must be
enabled if the hardware has the registers or the values will be leaked
during world switches.
To prevent leaks, this patch also disables pointer authentication in the
Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will
be trapped in EL3.
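A hedged sketch of how such a trap can be arranged: leaving SCR_EL3.API
and SCR_EL3.APK clear makes lower-EL PAuth instruction and key-register
accesses trap to EL3. This shows a plausible shape of the check, not the
exact TF-A context-management code, and the split between worlds should
be verified against that code:

    #include <arch.h>
    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t setup_scr_pauth_bits(uint64_t scr_el3, bool secure_world)
    {
    #if CTX_INCLUDE_PAUTH_REGS
        /* Keys are context-switched, so PAuth is safe in both worlds. */
        scr_el3 |= SCR_API_BIT | SCR_APK_BIT;
    #else
        /*
         * Keys are not saved/restored: keep PAuth trapped in the Secure
         * world so key values cannot leak across world switches.
         */
        if (!secure_world) {
            scr_el3 |= SCR_API_BIT | SCR_APK_BIT;
        }
    #endif
        return scr_el3;
    }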
Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
This reverts commit 2f37046524 ("Add support for the SMC Calling
Convention 2.0").
SMCCC v2.0 is no longer required for SPM, and won't be needed in the
future. Removing it makes the SMC handling code less complicated.
The SPM implementation based on SPCI and SPRT was using it, but it has
been adapted to SMCCC v1.0.
Change-Id: I36795b91857b2b9c00437cfbfed04b3c1627f578
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).
For example, this patch had to be created because two headers were
called the same way: e0ea0928d5 ("Fix gpio includes of mt8173 platform
to avoid collision."). More recently, this patch has had similar
problems: 46f9b2c3a2 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.
Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Use the helper function `save_gp_registers` to save the register
state to cpu_context on entry to EL3 in SMC handler. This has the
effect of saving x0 - x3 as well into the cpu_context which was
not done previously but it unifies the register save sequence
in BL31.
Change-Id: I5753c942263a5f9178deda3dba896e3220f3dd83
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
This patch introduces Position Independent Executable (PIE) support
in TF-A. As an initial prototype, only BL31 can support PIE. A trivial
dynamic linker is implemented which supports fixing up the Global Offset
Table (GOT) and dynamic relocations (.rela.dyn). The fixup_gdt_reloc()
helper function implements this linker and needs to be called
early in the boot sequence prior to invoking C functions. The GOT is
placed in the RO section of the BL31 binary for improved security and the
BL31 linker script is modified to export the appropriate symbols
required for the dynamic linker.
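A hedged sketch of the kind of .rela.dyn walk such a trivial dynamic
linker performs: every R_AARCH64_RELATIVE entry has the load offset
(run-time base minus link-time base) added to its addend. The type and
symbol names here are illustrative, not the fixup_gdt_reloc()
implementation itself, which is written in assembly:

    #include <stddef.h>
    #include <stdint.h>

    #define R_AARCH64_RELATIVE 1027U

    typedef struct {
        uint64_t r_offset;
        uint64_t r_info;
        int64_t  r_addend;
    } elf64_rela_t;

    static void fixup_rela_dyn(elf64_rela_t *rela, size_t count,
                               uint64_t delta)
    {
        for (size_t i = 0U; i < count; i++) {
            if ((rela[i].r_info & 0xffffffffU) != R_AARCH64_RELATIVE) {
                continue;
            }
            /* Patch the word at (link-time offset + delta) so it points
             * to the relocated address of its target. */
            uint64_t *where = (uint64_t *)(rela[i].r_offset + delta);
            *where = (uint64_t)rela[i].r_addend + delta;
        }
    }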
The C compiler always generates PC-relative addresses for linker symbols,
and hence referencing symbols that export constants is a problem when
relocating the binary. Hence the reference to the
`__PERCPU_TIMESTAMP_SIZE__` symbol in PMF is removed and the value is now
calculated at runtime based on start and end addresses.
Change-Id: I1228583ff92cf432963b7cef052e95d995cca93d
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
External Aborts received while executing in EL3 are fatal in nature. This
patch allows the platform to define a handler for External Aborts received
while executing in EL3. A default implementation is added which falls
back to the platform unhandled exception handler.
Change-Id: I466f2c8113a33870f2c7d2d8f2bf20437d9fd354
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Double fault is when the PE receives another error whilst one is being
handled. To detect double fault condition, a per-CPU flag is introduced
to track the status of error handling. The flag is checked/modified
while temporarily masking external aborts on the PE.
This patch routes double faults to a separate platform-defined handler.
Change-Id: I70e9b7ba4c817273c55a0af978d9755ff32cc702
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Uncontainable errors are the most severe form of errors, which typically
mean that the system state can't be trusted any more. This further means
that normal error recovery process can't be followed, and an orderly
shutdown of the system is often desirable.
This patch allows for the platform to define a handler for Uncontainable
errors received. Due to the nature of Uncontainable error, the handler
is expected to initiate an orderly shutdown of the system, and therefore
is not expected to return. A default implementation is added which falls
back to platform unhandled exception.
Also fix ras_arch.h header guards.
Change-Id: I072e336a391a0b382e77e627eb9e40729d488b55
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
RAS handling flows might involve using the setjmp/longjmp calls introduced
in earlier patches; therefore, it'd be good to make sure the stack has
been unwound completely after the handling.
This patch inserts ASM assertions on the RAS handling path to validate
the stack.
Change-Id: I59d40d3122010c977cdeab3cce3160f3909e7e69
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
A new file ea_delegate.S is introduced, and all EA-related functions are
moved into it. This makes runtime_exceptions.S less crowded and reads
better.
No functional changes.
Change-Id: I64b653b3931984cffd420563f8e8d1ba263f329f
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
check_vector_size checks whether the size of the vector fits
in the space reserved for it. This check creates problems with
the Clang assembler. A new macro, end_vector_entry, is added
and check_vector_size is deprecated.
This new macro fills the current exception vector up to the next
exception vector. If the size of the current vector is bigger
than 32 instructions then it gives an error.
Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
Having an active stack while enabling MMU has shown coherency problems.
This patch builds on top of translation library changes that introduces
MMU-enabling without using stacks.
Previously, with HW_ASSISTED_COHERENCY, data caches were disabled while
enabling the MMU only because of the active stack. Now that we can enable
the MMU without using the stack, we can enable both the MMU and the data
caches at the same time.
NOTE: Since this feature depends on using translation table library v2,
using translation table library v1 with HW_ASSISTED_COHERENCY is
disallowed.
Fixes ARM-software/tf-issues#566
Change-Id: Ie55aba0c23ee9c5109eb3454cb8fa45d74f8bbb2
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
RAS extensions are mandatory for ARMv8.2 CPUs, but are also optional
extensions to base ARMv8.0 architecture.
This patch adds build system support to enable RAS features in ARM
Trusted Firmware. A boolean build option RAS_EXTENSION is introduced for
this.
With RAS_EXTENSION, an Error Synchronization Barrier (ESB) is
inserted at all EL3 vector entries and exits. ESBs will synchronize pending
external aborts before entering EL3, and therefore will contain and
attribute errors to lower EL execution. Any errors thus synchronized are
detected via the DISR_EL1 register.
When RAS_EXTENSION is set to 1, HANDLE_EA_EL3_FIRST must also be set to 1.
Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
At present, any External Abort routed to EL3 is reported as an unhandled
exception and causes a panic. This patch enables ARM Trusted Firmware to
handle External Aborts routed to EL3.
With this patch, when an External Abort is received at EL3, its handling
is delegated to plat_ea_handler() function. Platforms can provide their
own implementation of this function. This patch adds a weak definition
of the said function that prints out a message and just panics.
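A hedged sketch of a platform override of that weak handler; the
parameter list follows the EA handling interface added by this series,
but the exact prototype and header paths should be taken from the
platform API headers:

    #include <debug.h>
    #include <stdint.h>

    void plat_ea_handler(unsigned int ea_reason, uint64_t syndrome,
                         void *cookie, void *handle, uint64_t flags)
    {
        ERROR("Unhandled External Abort (reason %u, syndrome 0x%llx)\n",
              ea_reason, (unsigned long long)syndrome);

        /* No way to recover here: treat it as an unhandled exception. */
        panic();
    }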
In order to support handling External Aborts at EL3, the build option
HANDLE_EA_EL3_FIRST must be set to 1.
Before this patch, HANDLE_EA_EL3_FIRST wasn't passed down to
compilation; this patch fixes that too.
Change-Id: I4d07b7e65eb191ff72d63b909ae9512478cd01a1
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
At present, the function that restores general purpose registers also
does ERET. Refactor the restore code to restore general purpose
registers without ERET to complement the save function.
The macro save_x18_to_x29_sp_el0 was used only once, and is therefore
removed, and its contents expanded inline for readability.
No functional changes, but with this patch:
- The SMC return path will incur a branch-return and an additional
register load.
- The unknown SMC path restores registers x0 to x3.
Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Due to differences in the bitfields of the SMC IDs, it is not possible
to support SMCCC 1.X and 2.0 at the same time.
The behaviour of `SMCCC_MAJOR_VERSION` has changed. Now, it is a build
option that specifies the major version of the SMCCC that the Trusted
Firmware supports. The only two allowed values are 1 and 2, and it
defaults to 1. The value of `SMCCC_MINOR_VERSION` is derived from it.
Note: Support for SMCCC v2.0 is an experimental feature to enable
prototyping of secure partition specifications. Support for this
convention is disabled by default and could be removed without notice.
Change-Id: I88abf9ccf08e9c66a13ce55c890edea54d9f16a7
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
When the MMU is enabled and the translation tables are mapped, data
read/writes to the translation tables are made using the attributes
specified in the translation tables themselves. However, the MMU
performs table walks with the attributes specified in TCR_ELx. They are
completely independent, so special care has to be taken to make sure
that they are the same.
This has to be done manually because it is not practical to have a test
in the code. Such a test would need to know the virtual memory region
that contains the translation tables and check that for all of the
tables the attributes match the ones in TCR_ELx. As the tables may not
even be mapped at all, this isn't a test that can be made generic.
The flags used by enable_mmu_xxx() have been moved to the same header
where the functions are.
Also, some comments in the linker scripts related to the translation
tables have been fixed.
Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
This patch introduces a new BL handover interface. It essentially allows
passing 4 arguments between the different BL stages. Effort has been made
so as to be compatible with the previous handover interface. The previous
blx_early_platform_setup() platform API is now deprecated and the new
blx_early_platform_setup2() variant is introduced. The weak compatibility
implementation for the new API is done in the `plat_bl_common.c` file.
Some of the new arguments in the new API will be reserved for generic
code use when dynamic configuration support is implemented. Otherwise
the other registers are available for platform use.
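As an example, the BL31 variant of the new interface takes four
u_register_t arguments; the body below is a hedged sketch that treats
the meaning of each argument as platform-defined:

    #include <platform.h>

    void bl31_early_platform_setup2(u_register_t arg0, u_register_t arg1,
                                    u_register_t arg2, u_register_t arg3)
    {
        /*
         * On many platforms arg0 carries handoff information from BL2;
         * the remaining arguments are free for platform or future
         * generic (dynamic configuration) use.
         */
        void *params_from_bl2 = (void *)arg0;

        (void)params_from_bl2;
        (void)arg1;
        (void)arg2;
        (void)arg3;
    }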
Change-Id: Ifddfe2ea8e32497fe1beb565cac155ad9d50d404
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
According to the SMC Calling Convention (ARM DEN0028B):
The Unknown SMC Function Identifier is a sign-extended value of
(-1) that is returned in R0, W0 or X0 register.
The value wasn't sign-extended because it was defined as a 32-bit
unsigned value (0xFFFFFFFF).
SMC_PREEMPT has been redefined as -2 for the same reason.
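A small hedged illustration of the point: a 32-bit unsigned constant
zero-extends when placed in X0, whereas the convention requires the
sign-extended value of -1 (macro names follow this commit message):

    /* Old definition: X0 reads back as 0x00000000FFFFFFFF (wrong). */
    #define SMC_UNK_OLD   0xFFFFFFFFU

    /* New definitions: converted to a 64-bit register value these become
     * 0xFFFFFFFFFFFFFFFF and 0xFFFFFFFFFFFFFFFE respectively. */
    #define SMC_UNK       (-1)
    #define SMC_PREEMPT   (-2)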
NOTE: This might be a compatibility break for some AArch64 platforms
that don't follow the previous version of the SMCCC (ARM DEN0028A)
correctly. That document specifies that only the bottom 32 bits of the
returned value must be checked. If a platform relies on the top 32 bits
of the result being 0 (so that SMC_UNK is 0x00000000FFFFFFFF), it will
have to fix its code to comply with the SMCCC.
Change-Id: I7f7b109f6b30c114fe570aa0ead3c335383cb54d
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>