Building a Hardware Root of Trust: From Secure Boot to TEE

Mar 30, 2026 · James Hyunmin Kim · 8 min read

Every secure system needs an anchor — a component that is trusted unconditionally because everything else’s trustworthiness depends on it. In modern SoC design, this anchor is the Hardware Root of Trust (HRoT). It’s the first thing that executes when power is applied, and its integrity determines whether anything that follows can be believed.

This article traces the complete trust chain in a modern secure SoC: from the immutable Boot ROM through Secure Boot verification, Measured Boot attestation, and finally into the Trusted Execution Environment where sensitive operations run in isolation.

What Makes a Root of Trust “Hardware”

A Root of Trust must satisfy three properties:

Immutability: The RoT code and configuration cannot be modified after manufacturing. This typically means mask ROM or one-time-programmable (OTP) fuses. If an attacker can modify the RoT, the entire trust chain collapses.

Isolation: The RoT must be architecturally isolated from the rest of the system. It has its own execution context, its own key storage, and its own access control. No software running on the application processor can read or modify RoT secrets.

Minimal Attack Surface: The RoT should be as small as possible. Every line of code and every hardware interface is a potential vulnerability. A well-designed Boot ROM is typically 16-64KB — small enough to be formally verified.

Software-based “Roots of Trust” violate at least one of these properties. A Root of Trust implemented in flash memory can be modified. One running in the same address space as the application can be read. One with a complex feature set has a large attack surface. True security requires hardware.

Stage 1: Boot ROM — The First Instruction

When a SoC powers on, the processor begins executing from a fixed address that points into the Boot ROM. This ROM contains the first-stage boot code and the public key (or hash of the public key) used to verify the next stage.

The Boot ROM performs exactly three functions:

Hardware initialization: Configure the minimum necessary peripherals — clock, SRAM controller, and the interface to the next-stage storage (typically SPI flash or eMMC).

Cryptographic verification: Load the first-stage bootloader (FSBL) from external storage, compute its cryptographic hash, and verify the FSBL's signature against the embedded public key (or compare the hash against the expected value).

Conditional execution transfer: If verification succeeds, transfer execution to the FSBL. If it fails, enter a recovery mode or halt.
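The three functions above can be sketched as a minimal simulation. This is an illustrative model, not real Boot ROM code: the embedded digest, image contents, and return values are all made up, and a simple hash comparison stands in for signature verification.

```python
import hashlib

# Hypothetical value burned into mask ROM: the digest of the trusted FSBL.
EMBEDDED_FSBL_DIGEST = hashlib.sha256(b"trusted FSBL image").digest()

def boot_rom_verify(fsbl_image: bytes) -> bool:
    """Hash the loaded FSBL and compare against the digest fixed in ROM."""
    measured = hashlib.sha256(fsbl_image).digest()
    return measured == EMBEDDED_FSBL_DIGEST

def boot_rom_main(fsbl_image: bytes) -> str:
    # 1. Hardware init would happen here (clocks, SRAM, flash controller).
    # 2. Cryptographic verification of the image loaded from storage.
    if boot_rom_verify(fsbl_image):
        return "transfer_to_fsbl"   # 3a. conditional execution transfer
    return "recovery_mode"          # 3b. fail safe: never run unverified code
```

The key property is that nothing executes before verification succeeds; on failure the only options are recovery or halt.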

The Boot ROM must be mask ROM — written during chip fabrication and physically impossible to modify. Even OTP-based Boot ROM configurations carry risk, as OTP programming interfaces can potentially be exploited.

Critical design consideration: The verification algorithm in the Boot ROM determines the system’s cryptographic agility. A Boot ROM that only supports RSA-2048 verification cannot be upgraded to post-quantum cryptography (PQC) without a silicon respin. This is why forward-looking designs should implement PQC verification (e.g., ML-DSA) in the Boot ROM from day one, with classical algorithms as a fallback.

Stage 2: Secure Boot Chain

Once the FSBL is verified and running, it takes over the chain-of-trust verification for subsequent stages:

Boot ROM (immutable, in silicon)
  └── verifies  First Stage Bootloader (FSBL)
        └── verifies  Second Stage Bootloader (U-Boot, UEFI)
              └── verifies  OS Kernel
                    └── verifies  System Services
                          └── verifies  Applications

Each stage verifies the next before transferring control. This creates an unbroken chain where trust propagates from the hardware root through every layer of the software stack.
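The chain above can be modeled as a loop in which each stage carries the expected digest of the stage it launches. A toy sketch, with made-up image contents and hash comparison standing in for full signature checks:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Illustrative stage images; in a real system these live in flash/eMMC.
images = {
    "fsbl": b"FSBL code",
    "u-boot": b"second stage bootloader",
    "kernel": b"OS kernel",
}

# Expected digests, mirroring the Boot ROM -> FSBL -> U-Boot -> kernel chain.
expected = {name: sha256(img) for name, img in images.items()}

def boot_chain(images: dict, expected: dict) -> list:
    """Verify each stage before 'running' it; stop at the first bad link."""
    booted = []
    for stage in ["fsbl", "u-boot", "kernel"]:
        if sha256(images[stage]) != expected[stage]:
            break  # verification failure: trust stops at this link
        booted.append(stage)
    return booted
```

Tampering with any image halts the chain at that link, so nothing downstream of a failed check ever executes.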

Key mechanisms in a robust Secure Boot implementation:

Anti-rollback protection: Each firmware component includes a version counter stored in OTP fuses or a monotonic counter in secure storage. The bootloader refuses to load any firmware with a version number lower than the currently stored value. This prevents attackers from downgrading to a known-vulnerable firmware version.

Key revocation: If a signing key is compromised, the system must support revoking that key and transitioning to a new one. This typically requires multiple key slots in OTP, with a revocation bitmask indicating which keys are still valid.

Authenticated encryption: Beyond signature verification, sensitive firmware components (such as those containing proprietary algorithms) should be encrypted. The decryption key is derived from a hardware unique key (HUK) that never leaves the SoC.

Failure modes: A well-designed Secure Boot chain has defined behavior for every failure scenario — signature mismatch, rollback attempt, corrupted storage, partially written update. Each failure mode must lead to a safe state, never to an insecure boot.
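The anti-rollback and key-revocation checks can be sketched together. Everything here is a toy model: the counter value, slot numbering, and field names are illustrative, not taken from any real boot header format.

```python
# Monotonic version counter burned into OTP fuses (illustrative value).
OTP_MIN_VERSION = 5

# Revocation bitmask over key slots: True means the slot is revoked.
KEY_REVOKED = {0: True, 1: False, 2: False}

def accept_firmware(version: int, key_slot: int) -> bool:
    """Reject downgrades and images signed with revoked (or unknown) keys."""
    if version < OTP_MIN_VERSION:
        return False   # rollback attempt: older than the fused counter
    if KEY_REVOKED.get(key_slot, True):
        return False   # revoked or unknown key slot: treat as invalid
    return True
```

Note the fail-closed default: an unknown key slot is rejected, matching the principle that every failure mode must lead to a safe state.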

Stage 3: Measured Boot and Remote Attestation

Secure Boot answers the question: “Is this firmware authentic?” Measured Boot answers a different question: “What exactly is running?”

The distinction matters. Secure Boot is binary — the firmware either passes verification or it doesn’t. Measured Boot creates a detailed log of every component that loaded, allowing a remote verifier to assess the exact configuration of the device.

The measurement process:

Each boot stage computes a hash (measurement) of the next stage before loading it. This measurement is extended into a Platform Configuration Register (PCR) using the operation:

PCR_new = Hash(PCR_old || measurement)

This chaining ensures that the final PCR value represents the entire sequence of loaded components. Any change to any component produces a different final value.
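The extend operation is simple enough to demonstrate directly. A minimal sketch with SHA-256 and illustrative stage names (real PCR banks, stage images, and measurement order are platform-specific):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR_new = Hash(PCR_old || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start at all zeros; each stage extends the hash of the next stage.
pcr = bytes(32)
for stage in [b"fsbl", b"u-boot", b"kernel"]:
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# 'pcr' now encodes the entire sequence; changing any one component,
# or reordering them, yields a different final value.
```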

Hardware support for Measured Boot:

TPM (Trusted Platform Module): The traditional approach. A discrete chip or firmware TPM that maintains PCRs and can generate attestation quotes signed with a device-unique key. TPM 2.0 is widely deployed in PC platforms.

DICE (Device Identifier Composition Engine): TCG’s lightweight alternative designed for embedded systems. DICE derives a unique device identity and per-layer keys using a hardware secret and measurements, without requiring a full TPM. This is particularly relevant for IoT and embedded devices where TPM integration is impractical.

Custom measurement engines: Some SoC vendors implement proprietary measurement schemes integrated into the boot flow, offering tighter integration but less interoperability.
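The DICE idea of binding identity to both a hardware secret and the loaded code can be sketched in a few lines. This is a simplified illustration, not the TCG-specified derivation: the secret value is made up, and HMAC-SHA256 stands in for the specified one-way function.

```python
import hashlib
import hmac

# UDS (Unique Device Secret): a per-device hardware secret. In real DICE
# the UDS is locked away before any mutable code runs.
UDS = b"per-device secret burned in fuses"

def derive_cdi(secret: bytes, code: bytes) -> bytes:
    """CDI for the next layer = HMAC(secret, H(next layer's code))."""
    measurement = hashlib.sha256(code).digest()
    return hmac.new(secret, measurement, hashlib.sha256).digest()

# Each layer's identity binds the device secret AND every layer below it:
cdi_l0 = derive_cdi(UDS, b"layer 0 firmware")
cdi_l1 = derive_cdi(cdi_l0, b"layer 1 firmware")
```

Because each CDI feeds the next derivation, changing any earlier layer's code changes every identity above it, which is what makes the scheme useful for attestation without a full TPM.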

Remote Attestation allows a server to verify the integrity of a remote device:

The device generates an attestation report containing its PCR values, signed by a hardware-protected attestation key. The server compares these values against known-good reference values. If they match, the server trusts the device; if not, it can refuse access or trigger a remediation workflow.
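The verifier's side of this exchange can be sketched as follows. For a runnable toy, HMAC with a shared key stands in for the device's asymmetric attestation signature, and the nonce models the server's freshness challenge; all names and values are illustrative.

```python
import hashlib
import hmac

# Toy stand-in for the hardware-protected attestation key. Real
# attestation uses an asymmetric key whose public half the server holds.
ATTESTATION_KEY = b"hardware-protected key"

def make_quote(pcr: bytes, nonce: bytes) -> bytes:
    """Device side: sign the PCR value plus the server's nonce."""
    return hmac.new(ATTESTATION_KEY, pcr + nonce, hashlib.sha256).digest()

def server_verify(pcr: bytes, nonce: bytes, quote: bytes,
                  golden_pcr: bytes) -> bool:
    """Server side: check the quote, then compare against reference values."""
    if not hmac.compare_digest(quote, make_quote(pcr, nonce)):
        return False          # bad signature: report not from this device
    return pcr == golden_pcr  # known-good reference comparison
```

A mismatch at either step (forged quote, or authentic quote over unexpected PCRs) leads to refusal or remediation rather than trust.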

Stage 4: Trusted Execution Environments

Once the system has booted securely, certain operations need ongoing protection during runtime — cryptographic key operations, biometric processing, DRM, secure payments. This is the role of the Trusted Execution Environment (TEE).

A TEE provides:

Isolation: TEE code and data are protected from the normal OS (called the “Rich Execution Environment” or REE). Even a fully compromised Linux kernel cannot read TEE memory.

Secure storage: Encryption keys, credentials, and sensitive data are stored in TEE-controlled encrypted storage, inaccessible to the REE.

Trusted I/O: Direct, secure paths between the TEE and specific peripherals (e.g., fingerprint sensor, secure display) that bypass the REE.

Major TEE architectures:

ARM TrustZone: The most widely deployed TEE technology. TrustZone divides the processor into “Secure World” and “Normal World” states, with hardware-enforced memory access controls. The Secure World runs a trusted OS (such as OP-TEE) that hosts security-critical applications (Trusted Applications, or TAs).

TrustZone’s strength is its hardware enforcement — the Normal World physically cannot access Secure World memory, as the access control is implemented in the bus interconnect (TZC-400, TZASC) and memory controller.
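The interconnect-level enforcement can be illustrated with a toy region check in the spirit of a TZASC/TZC-400 filter. The address range and policy here are entirely made up; real controllers configure multiple regions with per-master, per-direction permissions.

```python
# Illustrative secure-only region: [start, end) address ranges.
SECURE_REGIONS = [(0x0E00_0000, 0x0E10_0000)]

def bus_access_allowed(addr: int, secure_world: bool) -> bool:
    """Model of a bus filter: Normal World cannot touch secure regions."""
    in_secure = any(lo <= addr < hi for lo, hi in SECURE_REGIONS)
    # Secure World may access everything; Normal World only non-secure space.
    return secure_world or not in_secure
```

The point of the model is that the check sits in the fabric, outside the CPU: no Normal World software configuration can route around it.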

RISC-V Security Extensions: The RISC-V ecosystem offers multiple approaches — Physical Memory Protection (PMP) for basic isolation, the forthcoming WorldGuard extension for TrustZone-like world separation, and the Keystone framework for enclave-based TEEs.

Intel SGX / TDX: Enclave-based approaches that protect specific code and data within encrypted memory regions. While primarily used in server/cloud contexts, the architectural concepts are relevant to embedded design.

Connecting the Chain: RoT to TEE

The complete trust chain connects all four stages:

Hardware Root of Trust (Boot ROM + HUK + OTP)
  
  ├── Secure Boot: Authenticates every firmware layer
  
  ├── Measured Boot: Records what loaded into PCRs/DICE
  
  └── TEE: Provides runtime isolation for secrets
        
        └── Secure Services: Key management, attestation,
            cryptographic operations, secure storage

Each stage depends on the one below it. A TEE running on a system without Secure Boot offers no real guarantee — the TEE firmware itself could be compromised. Secure Boot without a hardware RoT means the verification keys could be tampered with. The chain is only as strong as its weakest link.

Vulnerabilities and Emerging Challenges

Despite the maturity of these technologies, significant challenges remain:

Supply chain attacks: If an attacker can modify the Boot ROM mask or inject malicious logic during manufacturing, the entire trust chain is compromised from the factory. Hardware Trojans at the foundry level are an active research area.

Fault injection: Voltage glitching and laser fault injection can cause the Boot ROM to skip signature verification, bypassing Secure Boot entirely. Hardened designs require fault detection circuits and redundant verification paths.

Side-channel leakage from TEE: TrustZone’s memory isolation doesn’t prevent power analysis or cache-timing attacks. An attacker running code in the Normal World can observe TEE execution through shared microarchitectural state.

PQC integration: The entire trust chain must be migrated to post-quantum cryptography. This means PQC verification in the Boot ROM, PQC-signed attestation in Measured Boot, and PQC key management in the TEE. Each layer presents unique integration challenges.

Looking Forward

The next generation of Hardware Root of Trust architectures will need to address:

PQC-native design: Boot ROM verification using ML-DSA from day one, not as a retrofit.

Formal verification: Mathematically proving that the RoT implementation correctly enforces its security policy, eliminating implementation bugs.

Heterogeneous trust: Systems with multiple processing elements (CPU, GPU, NPU, DSP) each needing their own security boundary and attestation path.

Post-manufacturing provisioning: Secure, scalable methods for injecting device-unique keys and credentials after fabrication, supporting diverse deployment scenarios.

The Hardware Root of Trust is the foundation upon which all digital trust is built. As threats evolve — from quantum computing to AI-powered fault injection — this foundation must evolve with them. The design decisions made in silicon today will determine the security posture of billions of devices for the next decade.

James Hyunmin Kim
Senior SoC Architect & Hardware Security Expert
Ph.D. in Electrical Engineering from KU Leuven (imec-COSIC), with 15+ years of expertise in secure SoC architecture, hardware security, and cryptographic implementations. Specialized in ARM/RISC-V security subsystems, side-channel countermeasures, and post-quantum cryptography. 4 silicon tape-outs, CAVP-certified security IPs.