Independent, hardware-rooted verification for sovereign confidential computing and AI integrity.
Intel Trust Authority, initially introduced as Project Amber, is a suite of trust and security services that provides remote attestation of Trusted Execution Environments (TEEs). As of 2026, it serves as the backbone for Confidential AI, enabling enterprises to verify that their sensitive workloads are running on genuine, up-to-date hardware, such as Intel Xeon processors with SGX or TDX and Intel Gaudi AI accelerators.

Unlike traditional security models that rely on the cloud provider's integrity, Intel Trust Authority acts as a decoupled, third-party authority, providing a hardware-rooted 'seal of approval.' This architecture is critical for multi-party computation and federated learning, where data sovereignty and model IP protection are paramount. The service works across hybrid, edge, and multi-cloud environments, ensuring that security policies are enforced consistently regardless of the underlying infrastructure.

By providing cryptographically signed evidence (tokens) that a TEE is correctly configured and uncompromised, it enables a 'verify before you trust' paradigm essential for highly regulated sectors like finance, healthcare, and government defense.
Verification is performed by Intel independently of the cloud service provider, removing the CSP from the Trusted Computing Base (TCB).
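Intel Trust Authority delivers attestation results as signed JSON Web Tokens, so a relying party can validate them with standard JWT tooling before trusting the TEE. Below is a minimal Python sketch of that check; the JWKS endpoint, the signing algorithm, and the claim names are illustrative assumptions rather than documented Intel values, so consult the Trust Authority API reference for the real ones.

```python
# Minimal sketch of "verify before you trust": validate an attestation token
# before releasing any secrets to the workload that presented it.
# Assumptions (not documented Intel values): the JWKS URL, the PS384
# algorithm, and the claim names are placeholders for illustration.
import jwt                      # pip install "pyjwt[crypto]"
from jwt import PyJWKClient

JWKS_URL = "https://portal.trustauthority.intel.com/certs"   # assumed endpoint

def verify_attestation_token(token: str) -> dict:
    """Return the token's claims only if its signature verifies against
    the issuer's published signing keys and it carries a valid lifetime."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["PS384"],                    # assumed signing algorithm
        options={"require": ["exp", "iat"]},     # reject tokens without lifetimes
    )

claims = verify_attestation_token(open("attestation_token.jwt").read().strip())
# Enforce policy on the verified claims (claim names assumed for illustration).
assert claims.get("attester_type") in {"SGX", "TDX"}
assert claims.get("attester_tcb_status") == "UpToDate"
```

Because verification needs only the issuer's public keys, the cloud provider contributes nothing to this step, which is precisely what keeps the CSP out of the TCB.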
Key capabilities:
- Policy management: a centralized dashboard to define security requirements for diverse hardware across different regions and clouds.
- Workload identity: assigns identities to workloads based on their attested hardware state rather than just IP addresses or service names.
- AI model protection: ensures model weights are decrypted only within a verified enclave that has been attested by Intel.
- Audit logging: generates cryptographically verifiable logs of all attestation events for regulatory compliance.
- Intel TDX support: full support for Trust Domain Extensions, allowing whole-VM attestation without code changes.
- Multi-party attestation: allows multiple stakeholders to verify a shared enclave before any party contributes data (see the key-release sketch below).
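To make the model-protection and multi-party items concrete, here is a hedged sketch of the relying-party side: a key broker releases the weight-decryption key only when the verified token's measurement matches an enclave build it approves. The claim names (`sgx_mrenclave`, `tdx_mrtd`) and the key-handling details are assumptions for illustration, not Intel's documented token schema.

```python
# Hedged sketch of attested key release: a key broker compares measurement
# claims from an already *verified* token against the enclave builds it
# approves, and hands over the model key only on an exact match.
EXPECTED_MEASUREMENTS = {
    "SGX": "9f86d081884c7d65...",   # MRENCLAVE of the approved enclave build
    "TDX": "2c26b46b68ffc68f...",   # MRTD of the approved trust-domain image
}

def release_model_key(claims: dict, model_key: bytes) -> bytes:
    """Release the weight-decryption key only to an approved enclave."""
    tee = claims.get("attester_type")
    measurement = claims.get("sgx_mrenclave") or claims.get("tdx_mrtd")
    if EXPECTED_MEASUREMENTS.get(tee) != measurement:
        raise PermissionError("measurement mismatch: refusing key release")
    return model_key   # in practice, wrapped for a key held only inside the TEE
```

In a multi-party setting, each stakeholder can run this same check independently, against its own expected measurements, before contributing any data to the shared enclave.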
Representative use cases:
- Confidential AI training: preventing data scientists or cloud admins from accessing raw training data or model weights during processing.
- Sealed model output: training runs inside the attested enclave and emits only encrypted weights (see the sealing sketch after this list).
- Data sovereignty: meeting government requirements that data must not be accessible by foreign cloud providers.
- Federated healthcare analytics: aggregating insights from multiple hospitals without moving or revealing patient data.
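The "emits only encrypted weights" step could look like the following inside the attested enclave: the serialized weights are sealed with AES-GCM under a key that was only released after attestation, so cleartext weights never leave the TEE. This is a minimal sketch; the serialization format and key handling are assumptions, and a freshly generated key stands in for the attested-release key.

```python
# Sketch of the "output encrypted weights" step inside the attested enclave:
# seal serialized model weights with AES-GCM before they leave the TEE.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM   # pip install cryptography

def seal_weights(weights_blob: bytes, key: bytes) -> bytes:
    """Encrypt serialized model weights; returns nonce || ciphertext."""
    nonce = os.urandom(12)                     # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, weights_blob, None)

key = AESGCM.generate_key(bit_length=256)      # stand-in for the attested-release key
sealed = seal_weights(b"...serialized model weights...", key)
```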