
Cybersecurity AI: A Quick Guide

Artificial Intelligence and Cybersecurity: Is ChatGPT a threat?

In recent years, we have seen the enterprise-level attack surface grow rapidly. More people are working remotely than ever, meaning that at any time there could be billions of signals that need to be analyzed to accurately calculate risk.

This means that security professionals can no longer manage and improve systems on their own – the sheer volume of data and information flowing through modern computer systems is simply too great.

In response to this challenge, we have had to move towards AI systems that can help amplify the efforts of security professionals.

AI and machine learning have become key technologies in the world of information security by allowing security teams to quickly analyze millions of events and identify malware attacks. The ability of these technologies to learn over time and improve allows us to build a much more robust security posture than before.
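The "quickly analyze millions of events" idea can be sketched with a simple statistical baseline. This is a minimal illustration, not a production detector; the event names and the 3.5 threshold are hypothetical, and real systems learn far richer patterns:

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Return events whose modified z-score exceeds the threshold.

    Uses the median and median absolute deviation (MAD) as a robust
    baseline, so a single extreme event cannot hide itself by inflating
    the average.
    """
    values = list(event_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return {}  # all counts identical: nothing stands out
    # 0.6745 rescales MAD to be comparable with a standard deviation
    scores = {name: 0.6745 * abs(c - med) / mad for name, c in event_counts.items()}
    return {name: round(s, 1) for name, s in scores.items() if s > threshold}

# Hypothetical hourly event counts; only the outlier is flagged
hourly = {"dns_query": 110, "auth_login": 100, "smb_read": 105,
          "ftp_put": 95, "outbound_transfer": 900}
print(flag_anomalies(hourly))  # flags only 'outbound_transfer'
```

Real security platforms replace this single-feature baseline with models over many features, but the core loop is the same: learn what "normal" looks like, then surface deviations.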

However, despite these gains in cybersecurity capability, malicious actors have not grown complacent: cybercriminals are using similar AI and deep learning techniques to defeat defences and avoid detection.

Understanding Artificial Intelligence Basics

The term Artificial Intelligence refers to a group of technologies that can understand, learn, and act based on the information fed to them. In other words, think of AI systems as our best attempt at recreating human intelligence.

Here are some examples of AI technology that are already being used today:

  • Machine Learning: This subset of AI uses statistical techniques to “teach” computer systems what to do and how to function rather than explicitly programming in specific functions.
  • Expert Systems: Programs designed to solve problems within a specialized domain. Intended to mimic the thinking of human experts to help solve problems and make decisions.
  • Neural Networks: A programming paradigm that allows a computer to learn from observational data.
  • Deep Learning: A broader family of machine learning methods based on learning data representations. Examples of deep learning technologies include image recognition software.
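The machine learning bullet above, "teaching" a system from data rather than programming rules, can be shown with a toy nearest-centroid classifier. The feature vectors and labels here are invented for illustration (say, failed logins and outbound megabytes per host):

```python
def train_centroids(samples):
    """Learn one average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose learned centroid is closest."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=sq_dist)

# Hypothetical training data: (features, label) pairs
training = [([1, 2], "benign"), ([0, 1], "benign"),
            ([9, 8], "suspicious"), ([8, 10], "suspicious")]
model = train_centroids(training)
print(classify(model, [7, 9]))  # -> suspicious
```

No rule for "suspicious" was ever written; the behaviour was induced entirely from the labeled examples, which is the essential difference from explicit programming.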

Artificial Intelligence (AI) in Cybersecurity

Luckily for security teams, AI is particularly well suited to tackle many cybersecurity problems. With today’s evolving cyber-attacks and massive increase in the amount of data generated, machine learning and AI systems can be used to help security teams keep up.

Here are some of the unique challenges faced by security professionals in today’s modern work environment:

  • Massive attack surface
  • Thousands of devices per organization each generating thousands of data points
  • Hundreds of attack vectors
  • Huge shortages in qualified security personnel

Fortunately, self-learning AI technologies allow cybersecurity systems to continuously and independently gather data from enterprise information systems. This gathered data is then analyzed and used to recognize patterns across billions of signals relevant to the enterprise attack surface.

Here are some of the ways that we are already seeing AI in cybersecurity:

  • IT Asset Inventory: Cybersecurity personnel can use artificial intelligence to gain a complete, accurate inventory of all devices within an information system. This provides the crucial visibility needed to assess the impact and risk of various vulnerabilities.
  • Threat Exposure: AI-based cybersecurity systems can provide up-to-date knowledge of the industry-specific threat trends most relevant to your enterprise.
  • Controls Effectiveness: Cybersecurity personnel have begun using AI and machine learning systems to help evaluate the gaps within an infosec program.
  • Automate Threat Detection and Breach Risk Prediction: By providing a complete account of IT assets and their associated risk, AI systems can help enhance security posture by providing prescriptive insights that are aimed at enhancing security controls and processes.
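The asset-inventory and breach-risk-prediction bullets combine naturally: once you know every asset and its vulnerabilities, you can rank where to act first. A minimal sketch, where the asset names, criticality scale, and "risk = criticality × worst severity" formula are all illustrative assumptions, not an industry standard:

```python
def prioritize_assets(assets, vulns):
    """Rank assets by a toy risk score:
    criticality x worst open vulnerability severity."""
    ranked = []
    for asset in assets:
        severities = [v["severity"] for v in vulns
                      if v["asset"] == asset["name"]]
        risk = asset["criticality"] * max(severities, default=0)
        ranked.append((asset["name"], risk))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical inventory and scan results
assets = [{"name": "db01", "criticality": 5},
          {"name": "kiosk", "criticality": 1}]
vulns = [{"asset": "db01", "severity": 7},
         {"asset": "kiosk", "severity": 10}]
print(prioritize_assets(assets, vulns))  # db01 first, despite the milder CVE
```

The point of the sketch is the prescriptive ordering: the kiosk has the worse vulnerability, but the critical database still ranks higher, which is the kind of prioritized insight the bullet describes.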

How Cyber Threats Use AI Technologies

While we have seen massive gains in our security tools due to the incorporation of machine learning and artificial intelligence, we have also seen bad actors ramp up their malicious activity by deploying the same AI technologies to defeat defences and avoid detection.

Here are some of the main ways that attackers are leveraging machine learning:

  • Searching for vulnerabilities within machine learning algorithms by targeting the data they train on and the patterns they recognize
  • Using AI to develop malware that changes its structure to avoid detection
  • Using AI to create large numbers of phishing emails
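The first bullet, attacking the data a model trains on, can be illustrated from the defender's side with a toy data-poisoning example. Assume a one-feature detector that learns its threshold as the midpoint between the mean benign and mean malicious scores (an invented simplification for illustration):

```python
from statistics import mean

def learn_threshold(benign_scores, malicious_scores):
    """Toy detector: anything above the midpoint of the
    two class means is treated as malicious."""
    return (mean(benign_scores) + mean(malicious_scores)) / 2

# Clean training data yields a sensible boundary
clean = learn_threshold([1, 2, 3], [8, 9, 10])        # -> 5.5

# Poisoned: the attacker slips high-scoring samples into the
# data labeled "benign", dragging the learned threshold upward
poisoned = learn_threshold([1, 2, 3, 8, 9], [8, 9, 10])  # -> 6.8

# A real attack scoring 6.5 is caught by the clean model
# but slips past the poisoned one
print(6.5 > clean, 6.5 > poisoned)  # True False
```

Nothing in the detector's code changed; corrupting its training data alone was enough to open a blind spot, which is why training pipelines are themselves part of the attack surface.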

ChatGPT and Cybersecurity – Should I be Worried?

Unless you’ve been living under a rock, you’ve probably heard of ChatGPT by now. Developed by OpenAI, ChatGPT is an incredibly powerful tool that gives us a glimpse of the vast potential of artificial intelligence.

Despite these incredible gains, however, concerns are already being raised about the potential cyber risk that such a tool could pose.

Cybercriminals Are Starting to Use ChatGPT for Cyber Attacks

While initially skeptical, the cybersecurity industry is finally beginning to take notice of the potential implications of modern AI. It is already fairly easy to generate legitimate-seeming phishing emails using the tool: built-in guardrails technically prevent this, but slightly altering requests allows users to easily bypass the safety features.

Threat actors who are not native English speakers, in particular, now have access to a tool that can spit out convincing American English in seconds.

Perhaps even more alarming is ChatGPT’s ability to seemingly spit out perfectly written code with no technical skill required from the user – we are already seeing reports of users with little to no technical knowledge leveraging ChatGPT for malicious behaviour.

Despite this, ChatGPT is not yet perfect – there have been several cases where text or code generated by the tool is incorrect or contains security vulnerabilities. It is clear that we should not fully rely on anything generated entirely by AI any time soon.

However, AI technology is improving quickly and there is no telling where we will be in ten years’ time. Until then, it is important to be aware of the potential risks and to take appropriate steps to mitigate them.