A threat actor could use AI technologies to gain initial access to networks, move laterally, install malware, steal data, or poison organizations’ supply chains.

Malware authors can use ML tools to create malicious AI programs that can be used to attack enterprise networks.

These models, which are often publicly available, serve as new launching pads for a wide variety of cyber threats that also can harm organizations’ supply chains. Enterprises must be prepared.

Proof-of-Concept (PoC) Attack

Researchers from HiddenLayer’s SAI team have demonstrated an example of how a threat actor could use ML models – the decision-maker at the heart of most modern AI-powered solutions – to penetrate enterprise network security.

The research was conducted by HiddenLayer’s Tom Bonner, senior director of adversarial threats research; Marta Janus, principal adversarial threats researcher; and Eoin Wickens, senior adversarial threats researcher.

Report from CompTIA

According to a recent CompTIA survey, 86 percent of CEOs say their organizations are already employing artificial intelligence (AI) technologies.

Indeed, solutions as diverse as the following rely on ML to function:

  • self-driving cars
  • robots
  • medical equipment
  • missile-guidance systems
  • chatbots
  • digital assistants
  • facial-recognition systems
  • online recommendation systems

ML Models

Because of the complexity involved in deploying these kinds of AI systems, most organizations tend to rely on open-source software and pre-trained models from public repositories to train and deploy them. That, according to the research team, is where the problem lies.

“Such repositories often lack comprehensive security controls, which ultimately passes the risk on to the end user — and attackers are counting on it,” the researchers wrote.

If you use pre-trained machine learning models obtained from an untrusted source or public repository, you’re potentially vulnerable to attacks similar to those recently reported by security experts.

“Moreover, companies and individuals that rely on trusted third-party models can also be exposed to supply chain attacks, in which the supplied model has been hijacked,” they added.

An Advanced Attack Vector

Researchers showed how such attacks could be used against popular deep learning frameworks, including PyTorch, TensorFlow, scikit-learn, and Keras, and how the technique could be generalized to target other popular ML frameworks.

Ransomware

The researchers used a steganography-like technique to embed a malicious payload into the weight and bias values of a deep learning model.
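HiddenLayer has not published its exact encoding scheme, but the general idea can be sketched in a few lines. The Python snippet below hides a payload in the least significant mantissa bit of each float32 weight, which is why a model’s accuracy barely changes; the function names and the single-bit scheme are illustrative assumptions, not the team’s actual tooling.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant mantissa bit of float32 weights."""
    flat = weights.astype(np.float32).ravel().copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8)).astype(np.uint32)
    if bits.size > flat.size:
        raise ValueError("payload too large for this tensor")
    raw = flat.view(np.uint32)                     # reinterpret floats as raw bits
    raw[:bits.size] = (raw[:bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, nbytes: int) -> bytes:
    """Recover the hidden bytes from the low-order bits of the weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint32)
    return np.packbits((raw[:nbytes * 8] & 1).astype(np.uint8)).tobytes()
```

Flipping only the lowest mantissa bit perturbs each weight by roughly one part in 2^23, far below the noise floor of a typical trained model.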

PyTorch/pickle Serialization Format

To decode and execute the binary, they exploited the PyTorch/pickle serialization format, which allows arbitrary Python modules to be loaded and their code executed during deserialization.

They did this by injecting a small Python script at the beginning of one of the model’s files, preceded by an instruction for executing the script, Janus says.

“The script itself rebuilds the payload from the tensor and injects it into memory, without dropping it to the disk,” Janus says. “The hijacked model is still functional, and its accuracy is not visibly affected by any of these modifications.”
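The underlying pickle behavior is straightforward to demonstrate. The toy snippet below only prints a message and is an illustration of the mechanism, not the team’s payload: an object’s `__reduce__` method hands the unpickler a callable to invoke at load time, and because PyTorch’s `torch.load` builds on pickle, loading an untrusted checkpoint can execute code the same way.

```python
import pickle

class Exploit:
    def __reduce__(self):
        # Whatever (callable, args) tuple is returned here gets invoked
        # by the unpickler, so deserialization itself runs the code.
        return (exec, ("print('arbitrary code ran at load time')",))

blob = pickle.dumps(Exploit())
pickle.loads(blob)  # prints the message: loading the blob executed our code
```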

Weaponized Model

The result was a weaponized model that evaded detection by current anti-virus and EDR tools while suffering only a negligible loss of accuracy.

Indeed, today’s most popular anti-malware products offer no support for detecting malicious code hidden inside machine learning models.

The Risk for the Enterprise

To exploit ML systems for targeted attacks, attackers need to get their hands on the model they want to use. In the simplest cases, this means simply obtaining a copy of the model from a public source.

“In one possible scenario, an attacker could gain access to a public model repository (such as Hugging Face or TensorFlow Hub) and replace a legitimate benign model with a Trojanized version that will execute the embedded ransomware,” Janus says. “For as long as the breach remains undetected, everyone who downloads the Trojanized model and loads it on a local machine will get ransomed.”

An attacker could also use this method to conduct a supply chain attack by hijacking a service provider’s supply chain to distribute a Trojanized model to all service subscribers, she adds.

“The hijacked model could provide a foothold for further lateral movement and enable the adversaries to exfiltrate sensitive data or deploy further malware,” Janus says.

Business Implications

Depending on the type of attack, the consequences may include the initial compromise of a company’s networks and subsequent lateral movement to deploy ransomware, spyware, or other types of malicious software.

An attacker could steal data and intellectual property, launch DDoS attacks, or even compromise an entire supply chain, among other things.

Mitigations and Recommendations

The research is a warning to any organization that uses pre-trained ML models downloaded from the Internet or provided by a third party: treat them “just like any untrusted software,” Janus says.

Such models should be scanned for embedded malware before use. At present, however, there are no products that perform this kind of scanning as part of the development pipeline.
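Until such products exist, one stopgap is to audit the pickle stream inside a model file before loading it. The sketch below uses Python’s standard pickletools module to flag opcodes that can import names or invoke callables during unpickling; it is an illustrative check rather than a tool the researchers describe, and legitimate PyTorch checkpoints also use these opcodes, so any findings would need to be compared against an allow-list.

```python
import pickletools

# Opcodes that can import names or call functions during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def audit_pickle(path: str) -> list:
    """Report every potentially code-executing opcode in a pickle stream."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                findings.append((pos, opcode.name, arg))
    return findings
```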

Secure Storage Formats

Furthermore, everyone who builds machine learning systems should use secure storage formats (for example, formats that do not allow code execution on load) and cryptographically sign all the models they create, so that they cannot be tampered with without breaking the signature and rendering them unusable.

“Cryptographic signing can assure model integrity in the same way as it does for software,” Janus says.
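As a rough illustration of what model signing could look like, the sketch below uses Ed25519 keys from the third-party cryptography package: the publisher signs the raw bytes of the model file once, and consumers refuse to load any file whose signature fails to verify. The file name and the key-distribution details are assumptions made for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_model(path: str, key: Ed25519PrivateKey) -> bytes:
    """Sign the raw bytes of a model file with an Ed25519 private key."""
    with open(path, "rb") as f:
        return key.sign(f.read())

def verify_model(path: str, sig: bytes, pub: Ed25519PublicKey) -> bool:
    """Return True only if the file is byte-for-byte what was signed."""
    with open(path, "rb") as f:
        data = f.read()
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: sign once at publication, verify before torch.load().
# key = Ed25519PrivateKey.generate()
# sig = sign_model("model.pt", key)
# assert verify_model("model.pt", sig, key.public_key())
```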

Overall, the research team said taking steps to understand risks, address blind spots, and identify areas for improvement when deploying ML models within an organization can help prevent attacks from this vector.

Author

Victor is the Editor in Chief at Techtyche. He tests the performance and quality of new VR boxes, headsets, pedals, etc. He got promoted to the Senior Game Tester position in 2021. His past experience makes him very qualified to review gadgets, speakers, VR, games, Xbox, laptops, and more. Feel free to check out his posts.
