Minecraft Malware, Secure AI Framework, Updated breach notification rules, Automated Cold Boot Attacks - June 4 - June 9th F5 SIRT This Week in Security

Introduction

Tikka is your editor for F5 SIRT's This Week in Security, covering June 4th through June 9th, 2023.
THIS WEEK, TL;DR
  • Minecraft Malware and the future: The article discusses a new malware called 'Fractureiser' and a potential security risk associated with the use of AI models like ChatGPT in software development. Hackers have used 'Fractureiser' to infiltrate the popular Minecraft modding platforms Bukkit and CurseForge. The hackers compromised several accounts on these platforms and injected malicious code into plugins and mods, which were then incorporated into popular modpacks such as 'Better Minecraft,' which has over 4.6 million downloads.

  • Secure AI Framework from Google: The article announces the introduction of Google's Secure AI Framework (SAIF), a conceptual framework for secure AI systems. The framework is inspired by security best practices applied to software development, while also incorporating an understanding of security mega-trends and risks specific to AI systems.

  • Updated breach notification rules for Health data from FTC: The article discusses a proposal by the Federal Trade Commission (FTC) to amend its Health Breach Notification Rule. The amendment would require vendors of personal health records, including developers of health applications, to report data breaches.

  • Automated Cold Boot Attacks: The article discusses a new development in cold boot attacks, where memory chips are chilled to extract data, including encryption keys.

Minecraft Malware and the future

BleepingComputer reports that hackers have started using a new malware called 'Fractureiser' to infiltrate the popular Minecraft modding platforms Bukkit and CurseForge. The hackers compromised several accounts on these platforms and injected malicious code into plugins and mods, which were then incorporated into popular modpacks such as 'Better Minecraft,' which has over 4.6 million downloads. The malware has affected players who downloaded mods or plugins from CurseForge and dev.bukkit.org in the past three weeks.

The Fractureiser malware operates in four stages:

Stage 0: New mods are uploaded or legitimate mods are hijacked to include a new malicious function at the end of the main class for the project.
Stage 1: The malware connects to a URL and downloads a file called dl.jar, which is then executed as a new Utility class.
Stage 2: The malware connects to an IP address for the attacker's command and control server and downloads a file, which is then configured to automatically launch in Windows or Linux.
Stage 3: The malware downloads an additional payload called 'client.jar', a mix of Java and native Windows code that includes an information-stealing component named hook.dll.

The Fractureiser malware is capable of self-propagating to all .jar files on the filesystem, stealing cookies and account credentials stored on web browsers, replacing cryptocurrency wallet addresses copied in the system clipboard, and stealing Microsoft, Discord, and Minecraft account credentials from a variety of launchers.

Minecraft players should avoid using the CurseForge launcher or downloading anything from the CurseForge or Bukkit plugin repositories until the situation clears up. If you fear that you might have been infected, you can use the scanner scripts provided by the community to check for signs of infection on your system. If infected, it is recommended to clean the computer, ideally by reinstalling the operating system, and then change all account passwords to unique ones.
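The community scanner scripts essentially look for the files and persistence entries that Fractureiser is known to drop. As a rough illustration of that idea only, here is a minimal Python sketch; the indicator paths below are placeholders invented for the example, not the authoritative IOC list, so substitute the indicators published in the community's Fractureiser write-up before relying on it.

    import os
    from pathlib import Path

    # Placeholder indicators of compromise (IOCs) -- replace these with the
    # file paths published in the community Fractureiser documentation.
    SUSPECT_FILES = [
        Path(os.environ.get("LOCALAPPDATA", "")) / "PlaceholderDir" / "suspect.jar",  # hypothetical Windows path
        Path.home() / ".config" / ".placeholder" / "suspect.jar",                     # hypothetical Linux path
    ]

    def scan() -> None:
        # Report any indicator file that exists on this system.
        hits = [p for p in SUSPECT_FILES if p.is_file()]
        if hits:
            print("Possible Fractureiser infection, found:")
            for p in hits:
                print(f"  {p}")
        else:
            print("No known indicator files found (this does not prove the system is clean).")

    if __name__ == "__main__":
        scan()

A file check like this only catches known artifacts; if any indicator is present, treat the system as compromised and follow the cleanup advice above rather than simply deleting the files.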

This brings me to an unrelated but relevant topic: ChatGPT hallucinations and the potential security risk associated with the use of AI models like ChatGPT in software development. Researchers at Vulcan Cyber have found that attackers could exploit the tendency of ChatGPT to "hallucinate", or generate references to non-existent code libraries (packages), to spread malicious packages into developers' environments.

The process, termed "AI package hallucination," works as follows:

1. An attacker formulates a question asking ChatGPT for a package that will solve a coding problem.
2. ChatGPT responds with multiple packages, some of which may not exist.
3. The attacker finds a recommendation for an unpublished package and publishes their own malicious package in its place.
4. The next time a user asks a similar question, they may receive a recommendation from ChatGPT to use the now-existing malicious package.
 
This has happened to me on multiple occasions, when ChatGPT provided "functional" Python code that failed because the package names in the imports simply did not exist.

It is very important for developers to properly vet the libraries they use, especially when these are recommended by AI tools like ChatGPT. At a bare minimum, one should check the creation date, number of downloads, comments, and any attached notes of a library before installing it. If anything looks suspicious, developers should think twice before installing the package.
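As a practical sketch of that kind of vetting, the snippet below queries PyPI's public JSON API (https://pypi.org/pypi/<package>/json) to confirm that a package actually exists and to report how long ago it was first published, before you pip install it. The 90-day "very new" threshold is an arbitrary choice for illustration, and download counts are not part of this API, so they would have to come from a separate source such as pypistats.

    import json
    import sys
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    def vet_package(name: str, min_age_days: int = 90) -> None:
        """Check that a PyPI package exists and report how old it is."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                print(f"'{name}' does not exist on PyPI -- possibly a hallucinated or mistyped name.")
                return
            raise

        # The earliest upload time across all releases tells us how old the project is.
        uploads = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data["releases"].values()
            for f in files
        ]
        if not uploads:
            print(f"'{name}' exists but has no uploaded files -- treat with suspicion.")
            return

        age_days = (datetime.now(timezone.utc) - min(uploads)).days
        print(f"'{name}' was first published {age_days} days ago; summary: {data['info']['summary']!r}")
        if age_days < min_age_days:
            print("Package is very new -- review the project page and source before installing.")

    if __name__ == "__main__":
        vet_package(sys.argv[1] if len(sys.argv) > 1 else "requests")

Running it against a package name suggested by ChatGPT immediately shows whether the name resolves on PyPI at all, which is exactly the check that would have caught the non-existent imports mentioned above.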


Secure AI Framework from Google

Google has just announced the Secure AI Framework (SAIF), a conceptual framework for secure AI systems. The framework is inspired by security best practices applied to software development, while also incorporating an understanding of security mega-trends and risks specific to AI systems.

The motivation behind SAIF is to establish clear industry security standards for building and deploying AI technology responsibly. The framework is designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, injection of malicious inputs through prompt injection, and extraction of confidential information from the training data.

Google emphasizes the importance of a framework across the public and private sectors to ensure that responsible actors safeguard the technology supporting AI advancements. The goal is to ensure that when AI models are implemented, they are secure by default. As AI capabilities become increasingly integrated into products worldwide, adhering to a bold and responsible framework will be even more critical.

Six core elements of SAIF:
  1. Expand strong security foundations to the AI ecosystem
  2. Extend detection and response to bring AI into an organization’s threat universe
  3. Automate defenses to keep pace with existing and new threats
  4. Harmonize platform level controls to ensure consistent security across the organization
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
  6. Contextualize AI system risks in surrounding business processes


Updated breach notification rules for Health data from FTC


There is a new proposal by the Federal Trade Commission (FTC) to amend its Health Breach Notification Rule. The amendment would require vendors of personal health records, including developers of health applications, to report data breaches. The proposed changes aim to cover entities not covered by the Health Insurance Portability and Accountability Act (HIPAA) and would require them to notify the FTC, individuals, and the media in some cases of breaches of personally identifiable health data.

The proposed amendments would also clarify that a security breach includes data security breaches and unauthorized disclosures, revise the definition of a personal health record (PHR) related entity, clarify what it means for a PHR vendor to draw identifiable health information from multiple sources, modernize the method of notice, expand the content of the notice, and consolidate notice and timing requirements. The amendments also outline the penalties for not following the rules.

The implications for the privacy and security of users' health data are significant. If the amendments are adopted, they could lead to increased transparency and accountability in the event of data breaches involving personal health records. This could potentially enhance the protection of users' health data and ensure that they are promptly informed in the event of a breach. However, it could also impose additional regulatory burdens on vendors of personal health records and developers of health applications. The FTC is asking for public comment on the proposed rule changes.


Automated Cold Boot Attacks

There is a new development in cold boot attacks, where memory chips are chilled to extract data, including encryption keys. This type of attack has now been automated and improved in the form of a memory-pilfering machine that can be built for around $2K.

The machine, a Cryo-Mechanical RAM Content Extraction Robot, was developed by Ang Cui, founder and CEO of Red Balloon Security, and his colleagues. It is designed to collect decrypted data from DDR3 memory modules. The machine physically freezes one RAM chip on a device at a time, then pulls the physical memory off the device to read its content.

Cold boot attacks can be countered with physical memory encryption. Modern CPUs and game consoles already use fully encrypted memory, which would defeat this approach. But many critical infrastructure embedded systems that we depend on, such as programmable logic controllers (PLCs), do not currently address this kind of attack.

This technique could potentially be used to extract sensitive data from devices, including encrypted firmware binaries and runtime ARM TrustZone memory. The researchers demonstrated their robot on a Siemens SIMATIC S7-1500 PLC and a Cisco IP Phone 8800 series device, and they believe their technique could be applicable to more sophisticated DDR4 and DDR5 modules with a more expensive FPGA-based memory readout platform.




Published Jun 13, 2023
Version 1.0
