AI Model Data Poisoning – the most disconcerting new area of attack

— Since training an AI model can require billions of data samples, injecting intentionally malicious information into the process can be easier than expected. Much of the data used comes directly from the internet, and just a small amount of bad data can skew or otherwise bias a model while going undetected. The result is either bad data or, perhaps worse, an ‘invisible’ impact that goes unnoticed until it creates a larger problem — warns Philip King, Confidential Computing expert and Technical Solution Specialist at Intel® Americas.

Phil King is a 22-year Intel veteran and has 30 years of experience in Enterprise IT, ranging from PC migration/installation to solution engineering for very large enterprise deployments. More recently, Phil has made Security his focus, highlighting things like Confidential Computing and Platform Trust. He is also an expert on data center products and technologies, including but not limited to CPU, Memory, SSD, NIC, and other peripherals.

What do you think are the biggest challenges today that could hamper the potential of technology transformation?

Clearly, Artificial Intelligence is a technology transformation that is taking the world by storm. Compared to the AI of yesteryear, modern AI offers generative capabilities that can act as a co-pilot to the human user. At the same time, AI raises concerns about unintended side effects from things like data poisoning, model bias, or ‘hallucinations’ (also called confabulation or delusion). Hallucinations describe the scenario where the AI misinterprets (or over-interprets) the data it has been given and produces plausible-sounding but false or unjustified responses. In the AI practitioner community, this is recognized as a major problem in Large Language Model (LLM) technology (such as ChatGPT).

The broader challenge can generally be defined as one of scarcity. The key resources that enable AI to be leveraged are well-organized data, access to high-speed processing, and skilled hands to bring it all together into a functional solution. Each resource has its own constraints that present challenges for any entity working towards greater AI enablement.

In today's digital economy, all eyes are on the applications that deliver real-time insights, but what is often forgotten is the underlying hardware that makes it all possible.

It’s true. Beyond the legacy speeds and feeds, today’s hardware continues to integrate new features that deliver greater efficiency, density, and performance depending on how they are applied. One example is the set of integrated accelerators in the 4th Generation Intel® Xeon® Scalable processors, with specific features designed to:

● Off-load and accelerate database analytics (Intel® In-Memory Analytics Accelerator - IAA)
● Accelerate AI workloads (Intel® Advanced Matrix Extensions - AMX)
● Off-load security and compression operations (Intel® QuickAssist Technology - QAT)
● Deliver built-in, high-performance load balancing (Intel® Dynamic Load Balancer - DLB)
● Off-load and accelerate data streaming (Intel® Data Streaming Accelerator - DSA)

In addition, these CPUs feature Intel® Trust Domain Extensions (TDX), adding the ability to create a secure enclave on shared hardware that is protected from other tenants and administrators alike. I could also point out that the accelerators implement the required functionality in custom IP blocks that run at higher performance with lower power. They also free up CPU cores for other tasks. The result is increased efficiency and a powerful performance boost for enterprise tasks.
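As a rough illustration of how a practitioner might verify that one of these features is present, here is a minimal sketch, assuming a Linux host, that reads the CPU feature flags the kernel exposes and looks for the AMX-related entries (amx_tile, amx_bf16, and amx_int8 are the flag names Linux reports for 4th Gen Xeon parts):

```python
# Minimal sketch (Linux only): detect AMX support by reading /proc/cpuinfo.
# Flag names follow the Linux kernel's reporting for 4th Gen Intel Xeon
# Scalable CPUs: amx_tile, amx_bf16, amx_int8.

def read_cpu_flags(path: str = "/proc/cpuinfo") -> set:
    """Collect the CPU feature flags the kernel reports."""
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    flags = read_cpu_flags()
    for feature in ("amx_tile", "amx_bf16", "amx_int8"):
        state = "available" if feature in flags else "not reported"
        print(f"{feature}: {state}")
```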

The exponential growth of computing power led to the development of new applications and technologies that shape how we live and work today. What applications and processes need the highest level of computing power today, as opposed to needing it in the near future? Is the potential of computing power limitless, or can we expect a slowdown in its performance?

The AI phenomenon has gained traction and represents the most compute-heavy workload segment of interest today. Many AI algorithms use a brute-force approach to training, which requires many cycles to compute the model logic that gives AI its amazing capabilities. With the advent of the GP-GPU (General-Purpose Graphics Processing Unit), it is now possible to share the work across both the primary CPU(s) and any GPUs installed in the system. This can yield faster time-to-result for both training and inference.
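As a minimal illustration of that split, the PyTorch sketch below runs one toy training step on a GPU when one is installed and falls back to the CPU otherwise; the model and batch here are placeholders, not a real workload:

```python
# Minimal sketch: run the same training step on a GPU when available,
# otherwise on the CPU. The model and data are toy placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 10).to(device)              # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)            # toy batch
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"one training step ran on {device}, loss={loss.item():.4f}")
```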

While computing power may be perceived as limitless, the reality is that the performance a single server host can deliver is gated by the technology it leverages. The good news is that Intel and the broader industry continue to innovate, with Intel running at a torrid pace to bring the next generation of computing devices to market in ever-shrinking time windows. One example at Intel is delivering five new technology nodes in just four calendar years. These ever-shrinking silicon designs will be the foundation for future generations of processors that will continue to march computing forward. The roadmap for delivering gen-over-gen innovation is healthy, and we can foresee many years of continued improvement.

Data analytics is a key driver of digital transformation and greatly impacts business growth and competitive advantage. How are Intel solutions enabling data-driven strategies?

Data processing and management are among the key workloads that Intel tracks as part of our technology development. When designing a new CPU, key metrics are continually monitored to ensure ample benefit and no performance regressions. Most recently, Intel has added acceleration for data analytics with the Intel® In-Memory Analytics Accelerator that is built into the latest 4th Gen Intel Xeon Scalable CPUs. Features like this will continue to be integrated into the CPU going forward and will also receive updates over time.

Powerful processors drive AI innovations. What are the major benefits and risks of moving data analytics to the cloud?

Intel's CPUs available in the cloud market are excellent for AI inference workloads and offer some of the best performance per watt for inference, thanks to features like the Advanced Matrix Extensions, which accelerate the matrix operations heavily utilized in AI workloads. In cases where customers want discrete accelerators to augment their AI computing performance (especially for training), Intel offers both hardware and software to simplify the adoption and implementation of these resources. Examples include the Intel® Gaudi® line of AI accelerators and OpenVINO, which enables a write-once, run-anywhere model for AI workloads.
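To make the write-once, run-anywhere idea concrete, here is a minimal OpenVINO sketch; the model file name is a hypothetical placeholder, and the "AUTO" device plugin lets the runtime choose among whatever hardware it can see:

```python
# Minimal sketch of OpenVINO's write-once, run-anywhere model: the same
# script can target CPU, GPU, or other devices via the "AUTO" plugin.
# "model.xml" is a placeholder for a real OpenVINO IR model file.
from openvino.runtime import Core

core = Core()
print("devices visible to OpenVINO:", core.available_devices)

model = core.read_model("model.xml")            # hypothetical model file
compiled = core.compile_model(model, "AUTO")    # let the runtime pick hardware

# Inference then runs identically regardless of the device chosen, e.g.:
# result = compiled([input_tensor])
```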

In terms of risk, any migration to the cloud requires a careful, measured approach to maintain security. Possible threats include data theft or corruption, theft of models and/or the associated intellectual property, and contamination of the model during training. With the rapid evolution of confidential computing in the cloud, it is now possible to protect your data and models from theft while simultaneously removing the cloud administrator from the trust boundary.

Intel and SAS state: "Leaders in AI have found that it's far more practical to rely on a single, central, integrated analytics solution built to manage a wide range of AI and non-AI workloads." Why is a single analytics platform a better solution than multiple applications?

It is a well-known premise in Information Technology that complexity increases with every new platform, technology, or software deployed into your infrastructure. In general, sticking with a smaller subset of richer applications enables the IT practitioner to deliver the necessary functionality without incurring undue complexity.

Why is it an attractive opportunity for Intel to work with an AI company such as SAS?

As a leader in microprocessor technology, Intel has a wide array of tools and techniques to enable software to extract maximum performance from the hardware for the specific needs of the software. By partnering closely, the two companies can share the technical nuances of their individual solutions, thereby enabling each other to take the best advantage of the capabilities provided. AI is complex, and trying to dissect and comprehend another party’s technology or code can prove very challenging. Through collaboration, we can deliver a more optimized solution with faster time-to-market (TTM) when compared to offering individual solutions.

Key focus business areas for both companies are customer intelligence, fraud prevention, risk management, and others. What are the major impacts of reshaping the processes and operations in all these areas?

In order to provide fraud prevention and risk management, it is critical to implement a comprehensive security strategy that embraces modern practices like Zero Trust and Confidential Computing. With Zero Trust, the days of simple firewalls and castle-and-moat-style security are long gone. Instead, every transaction should be authenticated, authorized, and encrypted. In addition, sensitive workloads should be protected using Confidential Computing technologies like Intel® Software Guard Extensions or Intel® TDX to ensure the highest levels of assurance and maintain regulatory compliance. These Confidential Computing technologies are being made available by the major Cloud Service Providers (CSPs) in the second half of 2023 and will also be available for on-premises deployment in 2024. Together, Zero Trust and Confidential Computing can be used to create a highly secure infrastructure that is well-equipped to defend against today’s most common attack vectors.
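As one concrete building block of such a strategy, here is a minimal sketch of mutual TLS, where a server refuses any client that cannot present a certificate signed by a trusted CA; the certificate file names are hypothetical placeholders:

```python
# Minimal sketch of one Zero Trust building block: a TLS server that
# authenticates every client via mutual TLS. Certificate paths are
# hypothetical placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="trusted-clients-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED    # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()   # handshake enforces client auth
        print(f"authenticated connection from {addr}")
        conn.close()
```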

What are the other risks, challenges, or fresh threats to avoid that you see from the perspective of the security expert?

In addition to the myriad of legacy threats that face the average enterprise, there are some emerging areas of exploitation that every company needs to be aware of. Perhaps the most disconcerting new area of attack is AI Model Data Poisoning. Since training an AI model can require billions of data samples, injecting intentionally malicious information into the process can be easier than expected. Much of the data used comes directly from the internet, and just a small amount of bad data can skew or otherwise bias a model while going undetected. The result is either bad data (garbage in, garbage out) or, perhaps worse, an ‘invisible’ impact that goes unnoticed until it creates a larger problem.
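A toy experiment illustrates the point. In the sketch below, flipping the labels on just 3% of a synthetic training set measurably shifts the learned model while headline accuracy barely moves, which is exactly why poisoning can go undetected; real attacks are far more subtle than random label flipping:

```python
# Minimal sketch of label-flipping data poisoning on synthetic data.
# A small fraction of corrupted labels shifts the learned model, often
# while overall accuracy changes little enough to pass casual checks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
poisoned = y_tr.copy()
flip = rng.choice(len(poisoned), size=int(0.03 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]                # corrupt 3% of the labels

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned accuracy: {dirty.score(X_te, y_te):.3f}")
print(f"weight drift:      {np.linalg.norm(clean.coef_ - dirty.coef_):.3f}")
```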

Another emerging area of concern for security is Software Bill-of-Materials (SBOM) validation. After software supply chain attacks like the SolarWinds incident in 2020, which negatively impacted thousands of organizations, including the US Government, SBOM health has become a genuine concern. That attack used a novel approach of embedding itself inside a known and trusted software package, enabling hackers to access the data and networks of its victims. Known as a supply chain attack, this technique embeds malware into third-party software instead of attacking the target’s data or network directly.
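In practice, SBOM validation reduces to checking that what is on disk matches what the bill of materials says was shipped. The sketch below assumes a simplified JSON layout, a stand-in for a real SPDX or CycloneDX document, and verifies each component's SHA-256 checksum:

```python
# Minimal sketch of SBOM validation: confirm each artifact on disk matches
# the checksum recorded in the bill of materials. The JSON schema here is
# a simplified stand-in for a real SPDX or CycloneDX document.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large artifacts don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_sbom(sbom_path: str) -> bool:
    """Return True only if every listed component matches its recorded hash."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    ok = True
    for comp in sbom["components"]:                # hypothetical schema
        if sha256_of(comp["path"]) != comp["sha256"]:
            print(f"MISMATCH: {comp['name']} ({comp['path']})")
            ok = False
    return ok

# validate_sbom("sbom.json")                       # "sbom.json" is a placeholder
```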

Relentless patching is likely a part of every discussion on security. The two key tenets here are timeliness and coverage. To be effective, patches must be deployed as promptly as possible; leaving a known exploit unpatched is akin to leaving the door open for hackers to waltz in. IT practitioners should aspire to patch systems early and often, thereby mitigating this simple but real threat. Coverage is equally critical: even one unpatched system can provide the breeding ground for staging a larger attack. Therefore, it is important that patches are rolled out promptly and that measures are taken to reach the entire fleet of computing assets as quickly as possible.

Verizon's Data Breach Investigation Report identifies 5 types of insider attackers. They are the reckless worker, the internal agent, the disgruntled employee, the malicious whistleblower, and the careless third party. Which of these roles is the most dangerous from a data center security perspective?

While all variations of insider threats are worrisome, I’d posit that the disgruntled employee poses the greatest threat to an organization’s cybersecurity. This is partly because legacy IT practices often do not perceive this as the legitimate issue that it is. Another aspect worth considering is that disgruntled people are often impassioned about achieving a sense of retribution. Therefore, a disgruntled or departing employee might not think twice about knowingly stealing IP, sabotaging systems, or even tampering with virtual or physical assets. They might share their credentials with an outsider to launch an attack, commit fraud, or even go as far as espionage. Even deliberately clicking on known phishing links, while maintaining plausible deniability, can serve as the staging ground for a larger, more impactful attack.

Our interview will be published on a Data Science expert website, followed by business decision-makers and senior and young tech professionals interested in future career development. Could you share your experience on the competencies and specializations most wanted in the industry? Can we say that hardware education is as necessary as coding?

In order to make the best use of the hardware with your software, you need to comprehend the functionality offered and exploit it to the greatest extent possible. Knowing how to code is key, but coding without understanding the platform will likely result in inefficient code that does not perform well and creates resource contention. If the coder has a detailed understanding of the features and functions offered by the hardware, they are significantly more well-equipped to deliver clean, high-performance code without the need to go back and re-work it later to address gaps.
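A small example shows the gap in practice: the same dot product computed with an interpreter-bound Python loop versus NumPy, which dispatches to vectorized, cache-friendly native code (and SIMD instructions where the hardware offers them):

```python
# Minimal sketch: the same dot product via a pure-Python loop versus NumPy,
# which hands the work to optimized native code. Timings vary by machine.
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))    # element at a time, interpreter-bound
t1 = time.perf_counter()
fast = float(a @ b)                        # vectorized BLAS path
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f}s, numpy: {t2 - t1:.5f}s")
print(f"results agree: {bool(np.isclose(slow, fast))}")
```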
