Confidential Computing – a logical step in the development of data protection

The requirement to secure data is not new, but it has gathered pace in recent years and gained renewed importance for governments and organizations, especially since the introduction of the General Data Protection Regulation (GDPR) in 2018 and similar legislation around the world aimed at protecting the personal data handled by controllers and processors. Paul O’Neill from Intel’s Confidential Computing group explains the relevance of Confidential Computing in a world where data is becoming increasingly important for emerging workloads such as Machine Learning and AI.

About the Author: Paul O’Neill is a Senior Manager driving Strategic Business Development in Intel’s Confidential Computing group. He specializes in security, Confidential Computing, data protection, customer service delivery, cloud services, managed services, and customer success, combining astute strategic, business, and technology skills with a track record of more than 20 years of successfully delivering technology-driven solutions.

Existing legislation and judgments such as the GDPR, Schrems II, and the US CLOUD Act have put data protection under greater scrutiny. Data controllers and data processors must ensure compliance in an ever-changing, complex legislative environment. We can expect considerably more oversight of data protection, not just here in Europe but globally, as many countries adopt stronger legislation governing not only how data is protected but also how it is used.

Cloud Sovereignty is emerging as a key driver for European-based enterprises determined to protect data generated in Europe and ensure it delivers benefits for the countries within which it was generated.

We are also well into the era of AI and Machine Learning, where the demand for data for models to consume is becoming increasingly important as machine learning models become highly commoditized. Generative AI can change the way many enterprises work today but requires rich access to data sources and strong governance models to protect the privacy of individuals and Intellectual Property. Regulation on the use of AI and its data sources is set to become stronger over the coming years, with upcoming legislation like the EU AI Act expected to become law soon.

With the increased use of encryption by enterprises, Privacy Enhancing Technologies (PETs) designed to help protect data and intellectual property are becoming an important toolset, and Confidential Computing is fast emerging as one of the most important among them.

The three states of data

We can consider that data exists in three states: at rest, in transit, and in use. Data stored on a physical or logical medium is at rest. Data transmitted over a network or between logical devices is in transit. Data being processed, updated, accessed, or read by an application is in use.

Encrypting data at rest and in transit has become common in modern enterprise applications and processes. When data is in use during a computation, however, it is typically decrypted and therefore at risk during this time.

As threat vectors against network and storage devices are increasingly thwarted by the protections that apply to data at rest and in transit, malicious actors are increasingly targeting data in use through various vulnerabilities.

The increased use of encryption has driven demand for new Privacy Enhancing Technologies that let enterprises process sensitive data at scale without exposing it in the clear to the rest of the system.

Confidential Computing from the ground up

Confidential Computing has quickly emerged as a critical Privacy Enhancing Technology that ensures the confidentiality and security of sensitive data while it’s being processed.

It performs the sensitive computation in a hardware-isolated Trusted Execution Environment (TEE). The data inside the TEE is encrypted with keys managed by the processor and unavailable to all other hardware components, software components, and even administrators. Confidential Computing also allows data owners and model owners to verify the integrity of their data and code at runtime, ensuring that neither has been modified. This provides protection for data in use.

Confidential Computing also enables enterprises to verify that the expected code has been executed, that the environment is a genuine Intel environment, and that it is secured with the latest security patches. This process, which provides cryptographic proofs to data and model owners, is called remote attestation.
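The attestation flow described above can be sketched in miniature. The snippet below is a toy simulation only: the function names, the HMAC-based "signature", and the key are all hypothetical stand-ins for what, in a real deployment, would be a hardware-signed quote checked through Intel's attestation infrastructure. It shows the essential logic, though: the platform signs a measurement (hash) of the loaded code, and the data owner verifies both the signature and the expected measurement before trusting the environment.

```python
import hashlib
import hmac

# Toy simulation of remote attestation. All names and the signing scheme
# here are illustrative stand-ins, not the real Intel SGX/TDX quote format.
HW_KEY = b"simulated-processor-key"  # stands in for a hardware root of trust

def measure(code: bytes) -> str:
    # The TEE records a cryptographic hash ("measurement") of the loaded code.
    return hashlib.sha256(code).hexdigest()

def generate_quote(code: bytes) -> dict:
    # The platform signs the measurement so a remote party can verify it.
    m = measure(code)
    sig = hmac.new(HW_KEY, m.encode(), hashlib.sha256).hexdigest()
    return {"measurement": m, "signature": sig}

def verify_quote(quote: dict, expected_measurement: str) -> bool:
    # The data owner checks the signature AND the expected code hash
    # before releasing any sensitive data to the environment.
    expected_sig = hmac.new(
        HW_KEY, quote["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    sig_ok = hmac.compare_digest(quote["signature"], expected_sig)
    return sig_ok and quote["measurement"] == expected_measurement

app_code = b"def analyze(data): return sum(data)"
quote = generate_quote(app_code)
assert verify_quote(quote, measure(app_code))         # genuine, expected code
assert not verify_quote(quote, measure(b"tampered"))  # unexpected code rejected
```

The key design point survives the simplification: trust is established cryptographically, by comparing a signed measurement against a known-good value, rather than by taking the operator's word for what is running.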

The combination of these Confidential Computing features increases the security assurances for organizations that manage sensitive and regulated data - even in a public cloud infrastructure.

Confidential Computing boosts AI adoption

Confidential Computing is a key component in expanding the use of AI in SAS software. Protecting AI models with Confidential Computing not only improves data security but also helps reduce model bias and improve model governance. It can be used to guarantee the authenticity of the data used to train an AI solution and to ensure that models are used only with authorized data and by authorized users.

For a practical example, consider an application handling a business transaction that requires collaboration among multiple parties. Often, the data being shared is confidential: personal information, financial records, medical records, or private citizen data. Public and private organizations are legally required to protect such data from unauthorized access. Sometimes these organizations even want to protect data from internal users such as infrastructure administrators or engineers, security architects, business analysts, and data scientists. The data-protection-in-use and attestation features of Confidential Computing can enable the necessary trusted collaboration in such a case.

These solutions can significantly improve privacy and data security for customers in highly regulated industries such as financial services, healthcare, and government, and can encourage them to move more of their sensitive data and computing workloads to public cloud services, reducing costs.

Another good example is the use of AI in healthcare services, which continues to grow significantly as larger datasets and patient images captured by medical devices become accessible. Disease diagnostics and drug development will benefit from more data sources, because aggregated data analysis provides higher prediction accuracy. Hospitals and health institutes can collaborate by sharing patient medical records within a TEE, even in the public cloud. AI services running inside the same TEE can, for example, aggregate data, analyze it, and automate some decision-making. Data protection in use, combined with the attestation feature of Confidential Computing, can minimize the risk of disclosing patients’ private data. It can also protect the valuable IP of model owners.

Other use cases that require secure sharing and processing of data include anti-money laundering, fraud detection, and exposing human trafficking. In these cases, the parties agree on the processing to be performed. A corresponding application is deployed in a Confidential Computing environment; the parties verify that the expected application is running (using attestation) and then provide their input. No party – not even the one executing the analysis – can see another party's data or perform any processing that was not agreed upon.
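The agree-verify-contribute pattern above can be sketched as a short simulation. Everything here is hypothetical and illustrative: the `SimulatedTEE` class merely models the guarantees a real TEE enforces in hardware, namely that parties verify the deployed code by its hash before submitting data, that submitted inputs are never exposed to any party, and that only the result of the agreed computation leaves the environment.

```python
import hashlib

# Toy model of agreed multi-party processing. The class and names are
# hypothetical; a real TEE enforces these properties in hardware.
AGREED_APP = "def agreed_analysis(inputs): return sum(inputs) / len(inputs)"

def app_hash(src: str) -> str:
    return hashlib.sha256(src.encode()).hexdigest()

EXPECTED_HASH = app_hash(AGREED_APP)  # published to all parties in advance

class SimulatedTEE:
    def __init__(self, app_src: str):
        self.app_src = app_src
        self._inputs = []          # private: never exposed to any party

    def attest(self) -> str:
        # Models attestation: reports a hash of the code actually deployed.
        return app_hash(self.app_src)

    def submit(self, party_value: float) -> None:
        self._inputs.append(party_value)

    def run(self) -> float:
        env = {}
        exec(self.app_src, env)    # run only the agreed-upon code
        return env["agreed_analysis"](self._inputs)

tee = SimulatedTEE(AGREED_APP)
# Each party verifies the deployed code before contributing its data.
assert tee.attest() == EXPECTED_HASH
for value in (10.0, 20.0, 30.0):   # inputs from three separate parties
    tee.submit(value)
print(tee.run())                   # only the agreed aggregate is released: 20.0
```

If the operator swapped in different code, `attest()` would return a different hash and no party would submit its input, which is exactly the guarantee that makes collaboration between mutually distrusting parties possible.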

AI leaders, like SAS, have a number of tools in their portfolio that effectively use growing computing power and Confidential Computing techniques. The combination of these technologies is necessary to offer solutions for companies that want to take full advantage of the potential of artificial intelligence while mitigating the inherent risks.

Confidential Computing at Intel

With Intel® Trust Domain Extensions (Intel® TDX), launched in 2023, Intel now provides a Confidential Computing technology that offers isolation and confidentiality at the Virtual Machine (VM) level. Intel TDX isolates the guest OS and the applications inside it from the cloud host, hypervisor, and other VMs on the same platform, offering enhanced protection capabilities.

Confidential Computing powered by Intel® Software Guard Extensions (Intel® SGX) enables application-level isolation, providing an even more granular level of protection. With Intel TDX and Intel SGX, Intel’s portfolio of Confidential Computing technologies allows businesses to choose the level of isolation that best meets their needs.

Towards responsible innovation

Confidential Computing powered by Intel technologies allows data owners and enterprises to run computations on sensitive data and protect important IP with verifiable trust, supporting adherence to data regulations and compliance requirements. Data encryption is becoming the norm, and building privacy, confidentiality, and integrity into hybrid and multi-cloud sovereign architectures from the outset will make the complexity of data and model protection easier for organizations to navigate.

Starting in 2024, SAS and Intel will work together to bring confidential versions of SAS products to market with Intel TDX. Confidential Computing is helping to overcome some of the key challenges that limit the development and adoption of AI solutions and enables organizations to collaborate more effectively on data-driven projects. Combined with SAS solutions, it will help to continue the drive for secure and compliant innovation in the AI and ML space.

With Allied Market Research projecting that the Confidential Computing market could grow to $184.5 billion by 2032, it is clear that enterprises see Confidential Computing as a significant tool in their privacy arsenal.
