ChatGPT and security - This Week in Security Feb 18th to Feb 25th, 2023

Editor's introduction 

This week's This Week in Security editor is Koichi. Not a day goes by these days without hearing about AI. In particular, ChatGPT, OpenAI's AI chatbot, responds in a way so natural that it is hard to distinguish from a human's response. This week, I have collected stories about ChatGPT and security to consider what kinds of cybersecurity threats this useful and revolutionary tool brings.

We in F5 SIRT invest a lot of time in understanding the frequently changing behavior of bad actors. Bad actors are a threat to your business, your reputation, and your livelihood. That’s why we take the security of your business seriously. When you’re under attack, we’ll work quickly to effectively mitigate attacks and vulnerabilities, and get you back up and running. So the next time you face a security emergency, please contact F5 SIRT.

ChatGPT can program - therefore, fake applications are also possible.

"This massive popularity and rapid growth forced OpenAI to throttle the use of the tool and launched a $20/month paid tier (ChatGPT Plus) for individuals who want to use the chatbot with no availability restrictions."

Bleeping Computer reported on February 22 that many cyber attacks taking advantage of ChatGPT have been observed. The methodology is to create fake ChatGPT services and apps and place them on sites as bait for malware infection and information theft. Please be careful not to fall for non-existent apps or unofficial websites, now that those are so easily created.

Hackers use fake ChatGPT apps to push Windows, Android malware

No confidential information should be given.

“If the employees want to chat, they'll just have to talk to each other instead.”

JP Morgan has issued a restriction on the use of OpenAI's ChatGPT in the workplace due to compliance concerns. Given the risk of leaking confidential information, bans on the use of ChatGPT are not limited to JP Morgan. In general, if you use a service that requires you to enter information or upload files, you should always consider the risk of that information or file being harvested by the service provider. For example, VirusTotal offers a service that checks files for viruses. However, this means that not only the scan verdict but also the data the file contains is passed on to VirusTotal. Similarly, if you use services like ChatGPT without first removing sensitive information, that sensitive information will be harvested by OpenAI.
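One practical mitigation for the advice above is to redact likely-sensitive tokens before any text leaves your environment. The sketch below is a minimal illustration, not a complete data-loss-prevention solution; the regex patterns and labels are assumptions chosen for the example, and real deployments would need far more thorough detection.

```python
import re

# Hypothetical illustration patterns: e-mail addresses and long digit
# runs that may be account or card numbers. Real DLP needs much more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NUMBER": re.compile(r"\b\d{8,}\b"),
}

def redact(text: str) -> str:
    """Mask likely-sensitive tokens before sending text to any
    third-party service (chatbot, file scanner, translator, ...)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: contact alice@example.com about account 12345678901."
print(redact(prompt))
```

Running the sketch masks the address and the account number while leaving the rest of the prompt intact, so the sanitized text can be submitted without exposing those details.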

Giant Bank JP Morgan Bans ChatGPT Use Among Employees

ChatGPT had a service outage.

On February 21, ChatGPT (both the ChatGPT website and the API) went down, meaning it did not give responses. When you submitted a question to ChatGPT, you received a message saying, "A server error occurred while processing your request. We are sorry. Please retry your request or contact the Help Center if the error persists." It recovered within a day; however, a similar outage was observed not only this time but also last week. When you see a similar message, it is better to check the site below.

https://downdetector.com/status/openai/

AI-synthesized voices can be used for attacks.

“Banks in the U.S. and Europe tout voice ID as a secure way to log into your account. I proved it's possible to trick such systems with free or cheap AI-generated voices.” In this article, an AI-synthesized voice passed voice-recognition authentication and broke into a bank account. Some banks in the U.S. allow access to bank accounts after a few voice-recognition exchanges. One text-to-speech offering, ElevenLabs' service, was able to pass the authentication.

https://vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice

 

One more story for thinking about cybersecurity (not from this week):

Cybersecurity experts warn of the threat of more sophisticated phishing emails.

Two articles discuss the impact and usage of ChatGPT for cybersecurity. The common threat in the two articles is an increase in phishing e-mails. Usually, phishing e-mails are easily detected because of their unnatural wording and phrasing; this has been a barrier for non-native speakers creating effective phishing emails. However, ChatGPT allows non-native speakers to write natural sentences, which risks generating a large number of naturally worded phishing emails.

OpenAI's new ChatGPT bot: 10 dangerous things it's capable of

ChatGPT and more: What AI chatbots mean for the future of cybersecurity

Updated Mar 08, 2023
Version 2.0
