Curse the Rusty OSS supply chain

This Week in Security - F5 SIRT
September 10th to 16th, 2023

Aaron here as your editor this week with a round-up of the interesting or notable security news that caught my eye over the past week; keeping up to date with new technologies, techniques and information is an important part of our role in the F5 SIRT. The problem with security news is that it's an absolute fire-hose of information, so each week or so we try to distill the things we found interesting and pass them on to you in a curated form.

It's also important for us to keep up to date with the frequently changing behaviour of bad actors. Bad actors are a threat to your business, your reputation and your livelihood, which is why we take the security of your business seriously. When you're under attack we'll work quickly to effectively mitigate attacks and vulnerabilities and get you back up and running. So the next time you have a security emergency, please contact the F5 SIRT.

While there was no shortage of news in my week (September 10th through 16th), this week has already started off strong for Microsoft: 38TB of data, including private Teams messages and other AI research data, was leaked through a misconfigured SAS token, and that news was swiftly followed by word that the FTC v Microsoft court case filings had also leaked private information relating to Xbox and gaming plans. All of this comes hot on the heels of Microsoft's disclosure in July that they had been breached by Storm-0558, allowing the theft of an email signing key which in turn gave access to various email accounts.

At least two of these leaks can be added to Arvin's list of Unintentional Data Leaks from August, further proving that we are always one human error away from catastrophe. Of course, trying to eliminate human error entirely is an exercise in futility (as my attempts at proofreading will attest!), so I think it is up to us in Information Security to figure out how to build better technological guard-rails for when the human-administered processes fail. After all, that's what every other industry does, right? It's why organisations like OSHA (US) and the HSE (UK) exist. Perhaps we need an ISPA... the Information Safety and Process Administration?

For more on the Microsoft SAS leak, you can also see the original Wiz research here: https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers
 

First up, your regular reminder to patch your End User Devices

September 12th saw Google issue an out-of-band security patch for Chrome to address CVE-2023-4863, a vulnerability which could potentially allow code execution via maliciously crafted WebP images. As THN reported, this is the fourth zero-day vulnerability in Chrome since the start of the year; this time the vulnerability was disclosed to Google on September 6th and was already being exploited in the wild by the time the patch was made available.

Just a day later, Mozilla issued patches for the same issue, which was also found to affect Firefox, Firefox ESR and Thunderbird. Since the flaw actually lies in the underlying libwebp library, the same vulnerability is likely to be found in any other Chromium-based browser too, so it should come as no surprise that Microsoft also dropped a fix on Patch Tuesday along with fixes for 58 other vulnerabilities.

Microsoft's patches also include fixes for CVE-2023-36761 and CVE-2023-36802: the former in Word, giving an attacker the possibility of stealing NTLM hashes, and the latter in the Streaming Service Proxy, allowing escalation of privileges to SYSTEM. Both are known to be under active exploitation in the real world.

Not to be left out, of course, Apple also dropped emergency updates for iOS, iPadOS, macOS and watchOS to fix two CVEs (CVE-2023-41061 and CVE-2023-41064) known to be exploited by attackers to deliver Pegasus spyware.

Finally, if you run Windows nodes in your Kubernetes environments you will want to make sure you are patched and up to date, lest you give an attacker SYSTEM privileges via CVE-2023-3676, CVE-2023-3893 or CVE-2023-3955.



I will admit that I haven't carried out exhaustive research, but reading the news last week certainly gives me the impression that the prevalence of 0-days being exploited in the wild - or at least of fixes for them - is increasing as we move through the year. For me this really highlights the need to design security into environments - and by that I mean we have to design in ways to limit the damage a popped system can cause, because it is almost inevitable that someone will get popped sooner or later. I do also wonder whether the effort being put into finding (and exploiting) these vulnerabilities suggests that we are all getting wiser to phishing attacks?

Curses!

Sorry, new-curses! On the 14th Microsoft published a lovely write-up of CVE-2023-29491, a memory corruption vulnerability in ncurses which could be used as part of an exploit chain to escalate privileges or execute arbitrary code. This caught my eye because ncurses is one of those libraries that feels like it has existed since forever. Of course, it turns out that forever is only since 1993 - a mere 30 years.

In truth it wasn't so much the technical specifics of the vulnerability that grabbed me (though I highly encourage you to read the Microsoft blog for those details) as all the memories ncurses surfaced; the look of an ncurses interface is iconic at this point, and if you've ever run 'menuconfig' to configure a Linux kernel before building it then you'll be more than familiar with it. I remember hand-coding DOS applications written in Pascal to have the same look and feel...

The Open Source Supply Chain


I saw two pieces of news last week that got me thinking about the safety and security of the open source supply chain - something that impacts many companies, ours included, who use open source components within their products. Of course, responsible companies like F5 have a robust Security Development Lifecycle Policy, which will include controls on which third-party components may be imported, mandatory security inspection of any new component prior to inclusion, and regular static and dynamic code analysis. But modern products contain millions of lines of code and most of these controls are human-driven - so, going back to my OSHA analogy earlier, how can we put guard rails in place to limit the potential damage from a failure of human oversight?

Back to the articles for a second; the first was the disclosure of another GitHub repository hijacking vulnerability by Checkmarx; this is similar to the research we saw late in 2022 by Joren Vrancken and earlier by Checkmarx, and again involves the redirect created by GitHub when a user changes their username. Essentially the exploit looks like this (more details in the Checkmarx blog, of course):
  1. A victim owns the namespace "victim_user/repo"
  2. The victim renames "victim_user" to "renamed_user"
  3. GitHub automatically retires "victim_user/repo"
  4. Meanwhile, an attacker who already owned the username "attacker_user" creates a new repository called "repo" and simultaneously renames their "attacker_user" to "victim_user" using the API
Normally GitHub would automatically redirect anyone looking for the original "victim_user/repo" to "renamed_user/repo". The final step breaks that redirect: because the attacker now controls a live repository at "victim_user/repo" (the old "attacker_user/repo" under its new name), anyone going looking for the old "victim_user/repo" will, rather than being redirected to "renamed_user/repo", find themselves browsing the attacker's malicious repository.
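There is a cheap guard-rail available here, though: have your build pipeline check whether a pinned GitHub dependency still resolves to the name you asked for. The sketch below is my own illustration (not from the Checkmarx research) and leans on the fact that GitHub's public REST API reports a repository's canonical full_name even when you arrive via a rename redirect; the idea is to alarm on the rename itself, since that is exactly the window in which the old namespace can be reclaimed. It assumes the reqwest (blocking and json features) and serde_json crates.

```rust
// Minimal sketch: detect when a GitHub dependency's namespace has been
// renamed by comparing the canonical `full_name` against the name we asked
// for. Requires reqwest (blocking + json features) and serde_json.
use std::error::Error;

fn check_repo_redirect(owner: &str, repo: &str) -> Result<(), Box<dyn Error>> {
    let requested = format!("{owner}/{repo}");
    let url = format!("https://api.github.com/repos/{requested}");

    let client = reqwest::blocking::Client::builder()
        .user_agent("repo-redirect-check") // GitHub's API rejects requests without a User-Agent
        .build()?;

    // reqwest follows the rename redirect for us; the JSON body then carries
    // the canonical name of whatever repository we actually landed on.
    let body: serde_json::Value = client.get(&url).send()?.error_for_status()?.json()?;
    let canonical = body["full_name"].as_str().unwrap_or_default();

    if !canonical.eq_ignore_ascii_case(&requested) {
        // The name we depend on now redirects elsewhere: a rename happened,
        // and the old namespace could be (re)claimed by someone else.
        eprintln!("WARNING: {requested} now resolves to {canonical}");
    }
    Ok(())
}

fn main() -> Result<(), Box<dyn Error>> {
    // "victim_user/repo" is the hypothetical namespace from the steps above.
    check_repo_redirect("victim_user", "repo")
}
```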

This is trickier to pull off than typosquatting a malicious package into a dependency management system like PyPI, but it is likely to be more effective and much harder to detect - your developers aren't accidentally downloading the wrong package by mis-typing a name here; they are downloading what they believe to be the right thing, with the right name, and getting something else entirely. You'll need good code inspection to detect that.

The second article I read was about typosquatting; less interesting, perhaps, but on September 1st the crates.io team (who run the Rust package registry) disclosed a number of typosquatted malicious packages containing build scripts designed to exfiltrate sensitive data via Telegram and (wildly!) PuTTY.
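Typosquats of this kind are also cheap to screen for on our side of the fence. Here's a toy sketch of the idea (the package names below are illustrative, not taken from the crates.io report): compare each declared dependency against a list of well-known crates and flag anything exactly one edit away.

```rust
// Toy typosquat screen: flag any dependency name that sits within one edit
// of a well-known crate name. Both lists below are illustrative only; in a
// real pipeline you would parse Cargo.toml and a curated popular-crates list.
fn levenshtein(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    let mut prev: Vec<usize> = (0..=b.len()).collect(); // row for the empty prefix
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1]; // cost of deleting i+1 chars
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}

fn main() {
    let popular = ["serde", "tokio", "rand", "reqwest"]; // illustrative
    let dependencies = ["serde", "tokio", "rnad"];       // e.g. parsed from Cargo.toml

    for dep in dependencies {
        for known in popular {
            if levenshtein(dep, known) == 1 {
                println!("suspicious: '{dep}' is one edit away from well-known crate '{known}'");
            }
        }
    }
}
```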

So, back to my OSHA analogy... I don't think human oversight is going to solve this problem. Of course we have to continue to work to secure package management systems and open source code repositories, but sneaking malicious packages in is always likely to be possible while open source is, well, open - so how do we mitigate the damage when and if that happens? Personally I think AI is the most exciting possibility here; Jordan already wrote about AI fuzzing in the September 3rd-9th TWIS, and I think we are going to see a gradual move from classic SAST analysis and tools like Mend to an AI-powered or -enhanced model better suited to finding suspicious-looking code, rather than only code that might break in an unexpected way or that uses a known-vulnerable library. This should be an exciting area to watch, and I think we're going to see really rapid evolution given how fast AI-forward companies are iterating right now.

While I'm talking Rust

Malware families typically live forever - just look at Qakbot, for example, which had been alive since at least 2007, or was until the FBI took it down at the tender age of 16 (it'll be back, I'm sure!) - so it's not often we get to witness the birth of a star. Sorry, the birth of a new piece of malware... but: enter 3AM. This news was interesting for two reasons: first, the malware appears to be entirely new (which is rare, as I say) and is written in Rust; second, it was only discovered because it was deployed as a fallback after a LockBit attack failed.

That means this was a determined attacker who really, really wanted to breach the target - attackers don't burn 0days without good reason, nor do they deploy their latest shiny toy unless they really have to; they save those things for the high value targets.

It's also interesting that attackers are learning. We've seen before that new malware is sometimes rushed and half-baked, containing errors, oversights and in some cases vulnerabilities caused by unsafe memory operations, and those errors have resulted in effective 'kill switches' that defenders have successfully used. So what's the best fix for that? Move to a memory-safe language, as commercial products are doing, and remove the possibility of a use-after-free or out-of-bounds read/write ever appearing in your code.
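For illustration (a toy example of my own, nothing to do with 3AM's actual source), here's what that guarantee looks like in practice: the use-after-free pattern fails Rust's compile-time ownership checks, so it can never ship, accidentally or otherwise.

```rust
fn main() {
    let buffer = vec![0x41u8; 16];
    let moved = buffer; // ownership of the heap allocation moves to `moved`

    // The line below is the use-after-free analogue in Rust terms: touching a
    // value whose ownership has gone. Uncommenting it is a compile-time error
    // ("borrow of moved value: `buffer`"), not a latent runtime bug.
    // println!("{:?}", buffer);

    println!("{:?}", moved); // only the current owner may read the buffer
}
```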

Rust seems a good choice here - it compiles to quick, efficient code (Rust is 9th fastest in Dave Plummer's Software Drag Racing challenge) and is memory-safe out of the box. We've already seen Go (29th fastest in the drag racing challenge) become quite common over the last two years, but Rust seems to be the hot new contender, with a number of articles just in the last few days about new Rust malware popping up. Keep an eye on this trend, I think.