Game Of Thrones Style Game Theory And The Threat Landscape


One of the AI topics that people are most interested in is cybersecurity. We talk about how to “circle the wagons” and how to promote zero trust architectures as we look at the actual building blocks of implementing AI systems and creating AI policy.

With that in mind, I wanted to go over some of the main points that I heard in recent conferences and talks on managing attack surfaces, using Palo Alto Networks tools, and helping governments from the municipal level to the federal level handle threat models. (Specifically, all of these suggestions and much more were covered in an IIA panel this year with Gurvinder Ahluwalia, Arlette Hart, Shingai Manjengwa, Russ Wilcox and Todd Baer.)

Army of Intelligence | Gurvinder Ahluwalia & Panel of Four | MIT 2024 (youtube.com)

You can imagine the threat landscape as a sort of Game of Thrones style battle between all of these different actors. In other words, it’s not just a team of white hats against a team of black hats. It’s different regional actors, it’s different players with different intentions, and they’re all playing on one global Internet.

That’s something you have to keep in mind as you plan.

So without further ado, here are some of the key points that experts are suggesting as they talk about AI and risk policy:

Adopt zero trust models – in general, the default should be to deny something, not allow it. The zero trust concept is behind a lot of new efforts at battening down security in AI-related systems.
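
To make that default-deny idea concrete, here is a minimal sketch in Python. The roles, actions, and resources are made up for illustration – this is the shape of the idea, not a real policy engine.

```python
# A zero trust access check: deny unless a rule explicitly allows the request.
# The policy entries below are hypothetical.
ALLOWED = {
    ("analyst", "read", "reports"),
    ("admin", "write", "configs"),
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    # Default deny: only an exact, explicit match gets through.
    return (role, action, resource) in ALLOWED

print(is_allowed("analyst", "read", "reports"))   # True: explicitly allowed
print(is_allowed("analyst", "write", "configs"))  # False: no rule, so denied
```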

Secure coding practices – you want to be able to know that your internal practices are up to par, and that your infrastructure has integrity. These guidelines from GitHub illustrate some of the relevant principles well.
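
To give one concrete example of what “up to par” looks like, here is the classic case of parameterized queries versus string-built SQL. The table and the input are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: f-string interpolation would let the input rewrite the query.
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: parameter binding treats the input as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection attempt matches no one
```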

Infrastructure management – it might not be as catchy as something like insider threat reporting, but good infrastructure helps foil hackers. That includes having good firewall rules and understanding security beyond the perimeter, which is a major factor in new cloud systems.
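
For a sense of how firewall rules behave, here is a rough sketch of first-match evaluation with a default deny at the end. The subnets and ports are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical rules, most specific first; anything unmatched falls through to deny.
RULES = [
    {"net": ip_network("10.0.5.0/24"), "port": 22,  "action": "deny"},   # no SSH from this subnet
    {"net": ip_network("10.0.0.0/8"),  "port": 443, "action": "allow"},  # internal HTTPS
]

def evaluate(src: str, port: int) -> str:
    for rule in RULES:
        if ip_address(src) in rule["net"] and port == rule["port"]:
            return rule["action"]
    return "deny"  # default deny, consistent with zero trust

print(evaluate("10.0.5.12", 22))  # deny
print(evaluate("10.9.1.3", 443))  # allow
```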

Identity and access management – this is the practice of creating individual rules for each specific person in your organization. We’ll tackle this in more detail later.

Verify provenance of data – we have to make sure that the data we use is good, and that it’s also secure and protected from a privacy standpoint.
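
One simple, well-established way to catch silent changes in a data set is to fingerprint each record with a cryptographic hash. A minimal sketch, with a made-up record:

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    # A stable hash of the record; any change to the data changes the digest.
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

record = {"source": "city_sensors", "reading": 41.2}
expected = fingerprint(record)  # store this alongside the provenance metadata

# Later, before using the data, re-derive the hash and compare.
assert fingerprint(record) == expected, "data changed since it was recorded"
```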

Measurable trust – having actual metrics in place can help a good deal. I found this guide from NIST to be immensely helpful and thought-provoking.

Leveraging the blockchain – we can actually use elements of the blockchain ledger and Web3 to evolve security systems.
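
To show the core idea, here is a toy hash-chained log in Python. It is nothing like a production blockchain – no consensus, no distribution – just the property that each block commits to the one before it, so rewriting history is detectable:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "prev": prev_hash, "ts": time.time()}
    block["hash"] = block_hash(block)
    return block

def verify(chain: list) -> bool:
    # Recompute each hash and check every block points at its predecessor.
    return all(block_hash(b) == b["hash"] for b in chain) and all(
        chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain))
    )

chain = [make_block({"event": "genesis"}, "0" * 64)]
chain.append(make_block({"event": "access granted", "user": "mchen"}, chain[-1]["hash"]))
print(verify(chain))  # True -- and False the moment any earlier block is altered
```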

Refining data – turning previously chaotic data into neat, ordered sets through transformations can also really enhance our security systems.
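
As a small illustration, here is what normalizing two inconsistently formatted log entries into a single ordered schema might look like. The formats are invented:

```python
# Two hypothetical log formats describing the same kind of event.
raw_events = [
    "2024-06-01 | LOGIN | jsmith",
    "login,mchen,2024-06-01",
]

def normalize(entry: str) -> dict:
    if "|" in entry:
        date, action, user = [field.strip() for field in entry.split("|")]
    else:
        action, user, date = entry.split(",")
    return {"date": date, "user": user, "action": action.lower()}

# Chaotic input becomes one neat, ordered set a security pipeline can reason about.
events = sorted((normalize(e) for e in raw_events), key=lambda e: (e["date"], e["user"]))
print(events)
```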

Quotes from the Panel:

“If you need to protect something, be aggressive in protecting it, and hide it, and make sure that … no one can see your assets, except for you.” – Arlette Hart of Appgate, who previously worked on cybersecurity for the FBI

“There is a complete black box of information with how these policies are developed. There’s a lack of visibility into how any policy is made.” – panelist Russ Wilcox, speaking on the importance of managing data at municipal levels

“An army of sheep, led by a lion, can defeat an army of lions led by a sheep.” – Gurvinder Ahluwalia, presenting a fable with the metaphor of warfare applied to IT

“What are you doing at the base layer? What are you doing at the infrastructure layer? What are you doing at the language model layer? What are you doing at the agent layer? At the deployment layer, at the training layer, and then in the interfaces, where the users are interacting with the tool? … all of that becomes part of, in my view, a governance approach, where you have to have policies and things at a company organizational level that deal with that. What’s your collection, retention, use, disclosure and deletion policy, of how you treat your data, because there’s no AI without data.” – Shingai Manjengwa, CEO of Fireside Analytics

Now, there are some key ways that AI can help with some of these tasks.

In the above talk, one of the panelists spoke about firewall rules specifically – and how they’re dense and tedious for human readers to wade through. You can imagine ChatGPT or some other AI boiling this material down into a readable format, which would lead to real efficiencies and, probably, a better cybersecurity approach. In other words, where human readers would otherwise miss the salient points, the AI can help surface them.
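
As a sketch of how that could look with the OpenAI Python client – assuming the openai package is installed, an API key is configured, and a hypothetical firewall_rules.txt export exists:

```python
from openai import OpenAI  # assumes the openai package and an API key are set up

client = OpenAI()

# Hypothetical export of dense firewall rule text.
rules = open("firewall_rules.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the model choice here is illustrative
    messages=[
        {"role": "system", "content": "You summarize firewall rules for security reviewers."},
        {"role": "user", "content": "Summarize the intent of these rules and flag "
                                    "anything overly permissive:\n" + rules},
    ],
)
print(response.choices[0].message.content)
```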

Another way that AI might help has to do with data governance and data control.

We have quite a bit of concern around handling large data sets that may contain sensitive information. There are also the identity and access management policies I mentioned earlier. AI could help nail down both of these with key decision support – in other words, it could give human operators an idea of what to protect at the data set and workflow level.
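
One way to picture that decision support: a simple pattern-based pass that flags columns likely to hold sensitive values, leaving the final call to a human. The patterns and sample data below are purely illustrative:

```python
import re

# Illustrative patterns for spotting likely-sensitive columns.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", re.I),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_columns(rows: list) -> dict:
    # Scan every value and note which columns matched which patterns.
    flags = {}
    for row in rows:
        for col, value in row.items():
            for label, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(str(value)):
                    flags.setdefault(col, set()).add(label)
    return flags

sample = [{"name": "A. Smith", "contact": "a.smith@example.com", "id": "123-45-6789"}]
print(flag_columns(sample))  # {'contact': {'email'}, 'id': {'ssn'}}
```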

If I can use an example from the above panel at IIA, the moderator made this prediction: that in tomorrow’s freight systems, a container will tell you where it’s going.

The same could be applied to these AI policies, where instead of humans having to tell the computer what to do, an autonomous AI system handles these decisions pretty much on its own.

So that’s some of what I learned from these panelists and others about threat management in the AI age. As one of them, Russ Wilcox, remarked, it’s been kind of a ‘wild west’ when it comes to data management. We’ll need to change that in order to make our new tools and systems truly useful to us, without generating whole worlds of risk and the threat of a Game of Thrones style battle royale inside a given network.
