The chatbot cyber threat

With 7 May marking the two-year anniversary of the Colonial Pipeline ransomware attack, Emerging Risks spoke to BigID’s CISO Tyler Young about the continuing threat to critical infrastructure, and the threat now posed by conversational AI.

Why was Colonial Pipeline so significant and why did it take people by surprise? 

It’s the first time in recent history that a NATO country has had a cyber-attack that essentially shut down a component of critical infrastructure, causing shock waves that were economic, physical, and potentially environmental as well. There were a multitude of ripple effects that came from this.

Oil and gas pipelines are how people heat their homes, and factories and other facilities run on the same infrastructure; it’s the bedrock of civilisation in most places. So when you have a large-scale cyber-attack which shuts that down, it causes panic in the moment about what’s impacted, but also wider panic. What about our power grid? What about other power plants? What about nuclear power? What about other things that are critical? What about the water treatment plants?

This was the first time we’ve seen a large-scale cyber-attack take something down like this.

Yes, it caused significant consternation at the time.

This was during COVID, and there were already supply chain issues with getting oil and gas distributed around the world, and this attack continued to build on those existing issues.

A more positive spin on the event might be that, although it was unwelcome and caused significant disruption at the time, it was more of a ‘one-off’ event, and that, in the main, Western critical infrastructure remains reasonably secure. What would you say to that?

I would first of all laugh, and say that I don’t think we spend enough effort on operational technology (OT) – the mechanisms that run our critical infrastructure. There’s a handful of companies focusing on OT and providing the cyber security coverage for it, but there are two problems here. One is that a lot of these systems have existed since the fifties and the sixties and haven’t been updated. So in some places you will have legacy infrastructure that just works; engineers coming out of school today may not necessarily understand how it works, but it does, so the decision is made to keep using it because it’s cheaper and there isn’t the same cost overhead. In theory, these systems are not compromisable because they are not on the network.

The second problem is that we are now seeing a modernisation of OT, with IT being introduced to these facilities, which is an external presence and [introduces] an external threat that they never had to worry about in the past.

Updating technology makes us more efficient and leads to cleaner energy, and there are positive sides to it, but there is also a negative side. If you put something online, it’s breachable in theory in some way, shape or form, and attackers are going to leverage that. Where are they going to get the most money for a ransomware attack? Well, shut down critical infrastructure, or a hospital, or a bank – which are the bedrocks of civilisation – and they’re going to pay it, every single time.

One supposes that a fair amount of time, effort and money has since gone into managing this threat to critical infrastructure, though?

I think it’s one of those things where, in a lot of cases, these facilities are so far behind from a modernisation perspective that it’s going to take a few years – potentially a decade – to catch up and get where they need to be. Money is becoming available because of events such as Colonial Pipeline; the budget opens up. If someone in your industry, one of your competitors, is impacted, you can now leverage that as a security leader to go and get the funding. After Colonial, the White House came out with the Zero Trust initiative, which is what you want to see happen after something like this.

How will bad actors leverage conversational AI to take ransomware to the next level? 

Conversational AI, or large language models (LLMs), is going to be used by both the good guys and the bad guys… in terms of ransomware, about a month ago we saw polymorphic malware that was written by one of these LLMs, and what that allows is for the malware to propagate throughout an environment, learning the environment and changing itself on the fly so that it can’t be detected. They tested it against every detection tool, and it evaded every single one of them. This is extremely concerning, because every organisation in the world is leveraging these tools and hoping they provide that first layer of defence. And if the malware can’t be detected, how do you stop it?

[Once you are in] you can access data, you can shut down networks, you can encrypt files, you can steal large numbers of passwords. But the positive side is that, in theory, if you are now building security software you can leverage the same technology to defend against it.

Nation-state actors have been developing this stuff for years, but it’s now readily available to cyber criminals.