How dangerous is ChatGPT?

Article · By Kordia · 23 Feb 2023 · 2 min read

Heard of ChatGPT? This AI chatbot is creating fascination in the tech world with its ability to generate natural, human-like text and answer complex questions in near real time. ChatGPT has plenty of potential as a tool to boost productivity and simplify tasks like writing essays (something university plagiarism teams are already onto). But, like any new technology, there's also plenty of scope for 'bad actors' to exploit it to improve cyber-attacks.

One way in which ChatGPT could be used for phishing attacks is by generating convincing and realistic messages that impersonate legitimate entities such as banks, e-commerce websites, and social media platforms. For example, an attacker could use ChatGPT to create a message that appears to be from a bank and asks the recipient to update their account information by clicking on a link. The link could lead to a fake website that captures the victim's login credentials and other sensitive data.

Moreover, ChatGPT could be used to create convincing social engineering tactics to trick users into revealing sensitive information. For instance, an attacker could generate a message that appears to be from a friend or family member, asking for financial help due to an emergency. The message could be designed to create a sense of urgency, prompting the victim to act quickly without verifying the authenticity of the request.

In addition, ChatGPT could be used to create spear-phishing attacks, which are targeted attacks aimed at specific individuals or organisations. The attacker could use ChatGPT to gather information about the victim and craft personalised messages that appear to be from a trusted source. This could increase the likelihood of the victim falling for the attack.

In fact, the preceding three paragraphs of this article were written by ChatGPT – in case you needed any further convincing!

So, what should directors and businesses make of all this? A key thing to keep in mind is that all new technology has the potential to be used for both good and bad – and hackers are continually evolving the way they approach their victims.

One thing worth considering is whether your organisation needs to create an AI usage and ethics policy. This will lay out how the organisation uses AI, including employee usage and any precautions that might need to be put in place to ensure it isn’t misused.

And while it’s yet to be seen whether ChatGPT and similar tools will have a marked impact on the number of attacks being launched against businesses, it’s worth keeping abreast of any new developments in the AI space and how they might be used by cyber criminals.

While you may not be able to control incoming threats, going back to basics and making sure you have robust cybersecurity measures in place will help defend your organisation, even if someone does unsuspectingly fall for a convincing ChatGPT-generated phishing email.

That means implementing measures like two-factor authentication, building awareness of how to identify a scam email, and keeping up regular cycles of patching and updating software. After all, while AI may make the method of attack more sophisticated, it's still just an evolution of the tactics we're already seeing cyber criminals use in the threat landscape.
