Security in K-12, Fuzzing, AI Threat Modeling - Sept 3 to Sept 9, 2023 F5 SIRT This Week in Security

Jordan here as your editor this week. In this edition I cover Security in K-12 Education, LLM-Enhanced Fuzzing, and AI Threat Modeling. Keeping up to date with new technologies, techniques and information is an important part of our role in the F5 SIRT. The problem with security news is that it's an absolute fire-hose of information, so each week or so we try to distill the things we found interesting and pass them on to you in a curated form.

It's also important for us to keep up to date with the frequently changing behaviour of bad actors. Bad actors are a threat to your business, your reputation, your livelihood. That's why we take the security of your business seriously. When you're under attack, we'll work quickly to effectively mitigate attacks and vulnerabilities, and get you back up and running. So the next time you are facing a security emergency, please contact the F5 SIRT.

Security in K-12 Education

The Cybersecurity and Infrastructure Security Agency (CISA) has taken two important steps to improve cybersecurity for K-12 schools.

First, CISA released a report earlier this year that found that K-12 schools are increasingly targeted by cyberattacks. The report made a number of recommendations to help schools improve their cybersecurity posture, such as investing in basic security measures, developing a comprehensive cybersecurity plan, and training staff on cybersecurity best practices.

Second, CISA launched a voluntary pledge last week for K-12 education technology providers to commit to designing products with greater security built in. The pledge asks companies to take ownership of security outcomes, embrace radical transparency and accountability, lead from the top, and encourage the use of multifactor authentication and public vulnerability disclosure.

This pledge is crucial because K-12 schools, which often have limited cybersecurity resources, depend heavily on technology for teaching and data management. By encouraging vendors to fortify educational technology products against cyberattacks, the pledge helps safeguard students' personal and academic data.

Overall, CISA's two initiatives are important steps toward improving cybersecurity for K-12 schools. The pledge in particular is a valuable tool that can help make education technology products more secure and protect students' data.

LLM Enhanced Fuzzing

Google has unveiled a cutting-edge AI-powered fuzzing method that has proven highly effective in detecting security flaws in software. Fuzzing, a technique in which random or unexpected input is fed to a program to identify crashes or unexpected behaviors, has been transformed with the integration of AI. The technique, named Large Language Model (LLM) aided fuzzing, was crafted by Google's Open Source Security team. LLM-aided fuzzing is able to create inputs that are both syntactically accurate and semantically relevant, increasing the chances of pinpointing software bugs.
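To make the underlying idea concrete, here is a minimal sketch of traditional "dumb" fuzzing in Python: random byte strings are thrown at a target function and any unexpected exception is flagged. The parse_record target and its simulated bug are hypothetical placeholders of my own, not anything from Google's announcement; the point of LLM-aided fuzzing is that the model can generate inputs (and even whole fuzz harnesses) that are far more likely than random bytes to reach interesting code paths.

#!/usr/bin/env python
# Minimal illustration of naive random fuzzing (not Google's tooling).
import random

def parse_record(data: bytes) -> None:
    # Hypothetical code under test; a real harness would call the library or parser being fuzzed.
    if data[:1] == b"\x00" and len(data) < 4:
        raise ValueError("truncated record")  # simulated bug

def fuzz(iterations: int = 10_000) -> None:
    for i in range(iterations):
        # Random length, random bytes: the "unexpected input" that fuzzing relies on.
        data = bytes(random.randrange(256) for _ in range(random.randrange(64)))
        try:
            parse_record(data)
        except Exception as exc:
            # An unhandled exception on attacker-controlled input is a candidate bug.
            print(f"iteration {i}: {exc!r} on input {data!r}")

if __name__ == "__main__":
    fuzz()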

Google has used fuzzing to scrutinize its own software for years, leading to the discovery of numerous security vulnerabilities. Recognizing the potential of LLM-aided fuzzing, Google is now extending it to the broader open-source community via its OSS-Fuzz initiative. OSS-Fuzz is a dedicated service offering continuous fuzzing for open-source projects, helping ensure their robustness against potential threats. Impressively, OSS-Fuzz has already discovered over 10,000 bugs, with the promise that this new approach will allow it to uncover even more. Additionally, Google is democratizing access to the technology by releasing the model that powers its LLM-aided fuzzing to the public. This move is expected to catalyze further advancements and applications in the domain.
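For Python projects, an OSS-Fuzz integration usually boils down to a small coverage-guided harness. The sketch below uses Google's atheris fuzzing engine; the choice of the standard json module as the target, and the assumption that only JSONDecodeError is acceptable, are mine for illustration and are not part of Google's announcement.

#!/usr/bin/env python
# Illustrative coverage-guided fuzz harness, in the style used for Python projects on OSS-Fuzz.
# Assumes Google's atheris engine is installed (pip install atheris); json is only an example target.
import sys
import atheris

with atheris.instrument_imports():
    import json

def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(len(data))
    try:
        json.loads(text)
    except json.JSONDecodeError:
        pass  # malformed input is expected; any other exception is a potential finding

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()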

In essence, the advent of LLM-aided fuzzing marks a monumental stride in combating software vulnerabilities. Its promise of superiority over conventional fuzzing methods and its adaptability to a diverse software spectrum make it a potential game-changer in cybersecurity.

AI Threat Modeling Framework for Policymakers

I recently encountered a great write-up on AI threat modeling, which can be found at https://danielmiessler.com/p/athi-an-ai-threat-modeling-framework-for-policymakers. The article introduces the ATHI framework, which stands for Actor, Technique, Harm, and Impact. This framework is tailored for policymakers, aiming to simplify the complex landscape of AI threats.

The ATHI approach is simple and structured around the statement: "A(n) Actor uses Technique to create Harm which results in Impact." This structured format allows for a clear and to-the-point expression of potential AI risks. After delving into the article, I was inspired to enumerate the possible permutations of the framework, which led me to write the quick Python script shared below. My objective with this tool is to walk through the various possibilities, filter out insignificant risks, and identify the most pressing AI threats.
 
#!/usr/bin/env python
from itertools import product

# ATHI values from publication: https://danielmiessler.com/p/athi-an-ai-threat-modeling-framework-for-policymakers
actor = ["Company", "Activist", "Hacker", "Government", "Individual"]
technique = ["Data Poisoning", "Hacking", "Social Engineering", "Accident/Mistake"]
harm = ["Misinformation", "Disinformation", "Technical Vulnerabilities", "Hate Speech", "Harmful Language"]
impact = ["Injury/Death", "Financial Loss", "Loss of Privacy", "Societal Instability", "Societal Inequality"]

# create a list of structured permutations and print the ATHI statement
permutations = list(product(actor, technique, harm, impact))
for athi in permutations:
    print(f"A(n) {athi[0]} uses {athi[1]} to create {athi[2]} which results in {athi[3]}.")

While many of the entries have some validity, the ones that stood out to me were the following.

A(n) Activist uses Hacking to create Disinformation which results in Societal Inequality.

A(n) Government uses Hacking to create Disinformation which results in Societal Instability.

A(n) Company uses Accident/Mistake to create Technical Vulnerabilities which results in Loss of Privacy.

You can use these as part of a prompt to any generative AI system, for example "Give a real world example of <EXAMPLE>", and observe some interesting results; a small sketch of building such prompts follows below. I highlight "real world" to steer the generated outcome away from a made-up story the generative AI will helpfully hallucinate. While the framework makes a lot of sense to me and shows promise, it remains to be seen whether policymakers or the industry will adopt this approach. I intend to further explore the ATHI framework by conducting a series of simulations and real-world scenario analyses. By adding more values to the ATHI lists, I aim to capture a broader spectrum of potential threats and challenges in AI. This expansion will not only enhance the framework's comprehensiveness but also provide a more nuanced understanding of the evolving landscape of AI threats.
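As a quick sketch of that prompting idea, the snippet below wraps the three highlighted ATHI statements in the "Give a real world example of" template. No specific LLM client library is assumed; the prompts are simply printed so they can be pasted into whichever generative AI system you prefer.

#!/usr/bin/env python
# Minimal sketch: turn the highlighted ATHI statements into "real world example" prompts.
# No LLM SDK is assumed; paste the printed prompts into the generative AI system of your choice.
highlighted = [
    "A(n) Activist uses Hacking to create Disinformation which results in Societal Inequality.",
    "A(n) Government uses Hacking to create Disinformation which results in Societal Instability.",
    "A(n) Company uses Accident/Mistake to create Technical Vulnerabilities which results in Loss of Privacy.",
]

for statement in highlighted:
    print(f"Give a real world example of: {statement}")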

That wraps up this week's content. If you've come this far, I truly appreciate your time and interest, and I hope you found the material engaging.

 

