Cybercriminals Target Companies’ AI Systems
Threat actors, hackers, cyber thieves: they go by many names, but they all share one trait. They hunt for the weakest links in the most vulnerable systems, exploit them and wreak havoc.
A red-hot target? The artificial intelligence (AI) programs more and more businesses are using. “AI brings unprecedented opportunity, but also can present opportunities for malicious activity,” according to a new National Security Agency (NSA) advisory. “[Threat or malicious] actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed AI systems and apply them to malicious ends.”
Forget the “may” part: hackers are already targeting the open doors companies’ AI systems leave behind. At least five groups came hard at OpenAI, creator of ChatGPT. The malicious actors attempted to hijack OpenAI’s large language models to eventually defraud individuals, businesses and government agencies. OpenAI worked in tandem with Microsoft to stop the threats and inform the public.
The NSA notes that deploying an AI program securely “requires careful setup and configuration that depends on the complexity of the AI system, the resources required … and the infrastructure used (i.e., on premises, cloud or hybrid).”
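As one way to picture that setup work, here is a minimal, hypothetical Python sketch of a pre-deployment check that confirms an AI system’s environment meets internal IT standards before go-live. The specific settings it checks are assumptions chosen for illustration, not items taken from the NSA advisory.

```python
# Hypothetical pre-deployment check for an AI system's environment.
# The required settings below are illustrative assumptions, not NSA items.

REQUIRED_ENVIRONMENTS = {"on_premises", "cloud", "hybrid"}
REQUIRED_FLAGS = ("tls_enabled", "network_isolated", "logging_enabled")


def check_deployment(config: dict) -> list[str]:
    """Return a list of failed checks; an empty list means the config passes."""
    failures = []
    if config.get("environment") not in REQUIRED_ENVIRONMENTS:
        failures.append("deployment environment not declared")
    for flag in REQUIRED_FLAGS:
        if config.get(flag) is not True:
            failures.append(f"{flag} must be enabled")
    return failures


if __name__ == "__main__":
    # A cloud deployment missing network isolation and logging fails two checks.
    print(check_deployment({"environment": "cloud", "tls_enabled": True}))
```

In practice, a check like this would run in a deployment pipeline so that a misconfigured environment blocks the rollout instead of surfacing after an incident.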
When business teams and individual staffers use AI without telling their employers — often in violation of policies — their companies may be at even greater risk of an attack. Keep in mind that a majority of IT pros believe their companies likely wouldn’t survive a significant theft of customer data and a systems shutdown at the hands of a cybercriminal group.
Best Security Practices to Ward Off Cybercriminals
Here are a handful of steps the NSA advises companies to take before and while implementing AI systems:
- “ensure that the person responsible and accountable for AI system cybersecurity is the same person” in charge of overall cybersecurity
- if using a third-party vendor to deploy an AI solution, “work with [that organization’s] IT service department to identify the deployment environment and confirm it meets [your] organization’s IT standards”
- demand that the third-party AI developer provide a threat model for your systems and IT infrastructure
- insist on “a collaborative culture for all parties involved, including the data science, infrastructure and cybersecurity teams in particular, to allow for [voicing] any risks or concerns,” and
- address “blind spots in boundary protections and other security-relevant areas” of the AI system. Consider using an “access control system for AI model weights” and limiting “access to a set of privileged users with two-person control and two-person integrity.” (A minimal sketch of one such gate follows this list.)
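To make that last recommendation concrete, here is a minimal Python sketch of a two-person control gate for model-weight access. It is an illustration only: the privileged-user allowlist, request IDs and file-based weight storage are assumptions, not details from the NSA advisory.

```python
# Hypothetical sketch of two-person control over AI model weights.
# Names and storage details are illustrative; a real system would sit
# in front of a vault, KMS or artifact store rather than a local file.

PRIVILEGED_USERS = {"alice", "bob", "carol"}  # assumed allowlist


class WeightAccessGate:
    """Releases model weights only after two distinct privileged
    users approve the same request (two-person control)."""

    def __init__(self) -> None:
        self._approvals: dict[str, set[str]] = {}  # request_id -> approvers

    def approve(self, request_id: str, user: str) -> None:
        if user not in PRIVILEGED_USERS:
            raise PermissionError(f"{user} is not a privileged user")
        self._approvals.setdefault(request_id, set()).add(user)

    def open_weights(self, request_id: str, path: str):
        approvers = self._approvals.get(request_id, set())
        if len(approvers) < 2:
            raise PermissionError(
                f"two-person control not met: {len(approvers)}/2 approvals"
            )
        # Two distinct privileged users approved; release a read-only handle.
        return open(path, "rb")


if __name__ == "__main__":
    gate = WeightAccessGate()
    gate.approve("req-42", "alice")
    gate.approve("req-42", "bob")  # second, distinct approver satisfies the gate
    # weights = gate.open_weights("req-42", "model.safetensors")
```

The storage backend can change, but the invariant is the point: no single insider, and no single compromised account, can pull the weights alone. Two-person integrity extends the same idea to verifying the weights haven’t been tampered with.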