The Cyber AI Dilemma—And What It Means For The Cybersecurity Industry


Nir Ayalon: Founder & CEO of Cydome, Maritime Cybersecurity Leader | Cyber Security & AI Expert | ISO Committee Member.

Ever since OpenAI released ChatGPT last year and DALL-E the year before, there has not only been an explosion in the number of AI tools available for general consumption, but AI has also transitioned from being perceived as largely academic, or as some magic employed by Big Tech, to a family dinner conversation topic. Heck, I think ChatGPT is even more popular than Taylor Swift!

Regardless of whether these tools represent a true technological breakthrough, it’s clear that ChatGPT, DALL-E and others have popularized AI, taking it out of the exclusive domain of elite experts and making it accessible to basically anyone.

So besides helping you plan your next vacation or write your resume, recent AI/ML tools have a huge impact on cybersecurity as well. Some of that impact is negative, but some of it is positive. As with any technology, it’s all a matter of learning how to use it.

The Attack Vector

Perhaps you’ve experienced it yourself: phishing and scam attempts seem to have become more frequent and more sophisticated since the launch of ChatGPT. Writing text that mimics someone else’s style, tone of voice and grammar now requires basically zero research and minimal social engineering skill compared to what was needed in the past.

Another particularly popular use of generative AI is its ability to write code from natural-language requirements. The road from there to writing malware is obviously short. Some sites even provide a simple interface for generating malicious code without the user needing any technical skills.

Beyond the proliferation of attack tools that less sophisticated attackers can use, skilled attackers can also use AI to create new, innovative attacks. This can greatly impact cyber safety, since most cyber tools are built to protect against the majority of known attacks, and it’s much harder to defend against new, unknown ones. The numbers suggest that this is the trend: we see a significant increase in reported “zero-day” vulnerabilities. Zero-day attacks exploit previously unknown vulnerabilities; these flaws are often hard to find, and in some cases it can take years before their exploitation is even discovered.

The Protection Vector

While advanced AI helps perpetrators attack more easily, it also helps defenders build better protections.

Large language models (LLMs) excel at sifting through massive amounts of data, identifying patterns and making predictions.

Additionally, AI is getting much better at detecting anomalies, which increases protection against unknown vulnerabilities such as “zero-day” exploits. For example, IBM research has shown that organizations using security AI and automation identified and contained breaches 108 days faster, and incurred $1.76 million lower data breach costs, than organizations that did not.

This marks a significant shift from the traditional practice of firewall rules and antivirus software looking for file signatures. Improved anomaly detection also helps protect assets beyond the traditional endpoint or server, such as operational technology (OT) systems and Internet of Things (IoT) devices, which are much more diverse and much less standardized than traditional IT. This is extremely important, as many recent attacks, such as the one on the Port of Nagoya (Japan’s largest maritime port), target non-traditional assets precisely because they are more likely to be exposed.
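
To make that contrast concrete, here is a minimal sketch of signature-free anomaly detection using scikit-learn’s IsolationForest. The features, traffic values and thresholds are illustrative assumptions rather than a production design; the point is that the detector learns what “normal” looks like and flags deviations, instead of matching traffic against a list of known-bad signatures.

```python
# Minimal sketch: anomaly-based detection instead of signature matching.
# The flow features below are hypothetical; real deployments use far
# richer telemetry and careful tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of "normal" network flows:
# [bytes_sent, bytes_received, connection_duration_seconds]
rng = np.random.default_rng(42)
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),   # typical upload size
    rng.normal(20_000, 4_000, 1_000),  # typical download size
    rng.normal(30, 10, 1_000),         # typical session length
])

# Train only on traffic assumed to be benign; no attack signatures involved.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_flows)

# Score new flows: -1 flags an outlier that deserves analyst attention,
# even if it matches no known malware signature (e.g., a possible zero-day).
new_flows = np.array([
    [5_200, 21_000, 28],        # looks like normal traffic
    [900_000, 1_200, 86_400],   # huge upload over a day-long session: suspicious
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "anomalous" if label == -1 else "normal"
    print(f"flow {flow.tolist()} -> {status}")
```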

Finally, AI helps create sophisticated automation, such as “decoys,” that helps slow attackers and gives cyber protection teams more time to identify attacks.
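
As a rough illustration of the decoy idea, the sketch below (a hypothetical, simplified example, not any particular product) opens a fake service port and simply logs whoever connects, giving defenders an early signal while offering the attacker a dead end. The port number, banner and log format are all assumptions made for the example.

```python
# Minimal decoy sketch: a fake "service" that accepts connections,
# logs the source, and never offers anything real to the attacker.
import logging
import socket

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

DECOY_PORT = 2222  # hypothetical: advertised as SSH-like, backed by nothing

def run_decoy(host: str = "0.0.0.0", port: int = DECOY_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        logging.info("Decoy listening on %s:%d", host, port)
        while True:
            conn, addr = server.accept()
            with conn:
                # Any connection here is suspicious by definition:
                # legitimate users have no reason to touch this port.
                logging.info("Decoy touched by %s:%d", *addr)
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # plausible banner, nothing behind it

if __name__ == "__main__":
    run_decoy()
```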

The Implications For The Cybersecurity Industry

So, how can you accelerate the adoption of AI for your cybersecurity?

1. Make it an organizational target to educate your employees about AI. Discuss the various tools, how they work, what the threats are and how AI can help employees in their daily work. The key to starting AI adoption is understanding how AI works and the limits of current tools.

2. Remember that AI is only as good as the data it’s trained on. When adopting AI-driven tools, make sure they are trained on your own data or on data from similar organizations.

3. The weakest link is usually the human factor. Run regular training sessions and up-to-date phishing drills so your employees don’t simply assume they would know a bad email when they see one.

As we can see, AI significantly impacts cybersecurity, in ways both good and bad. It is our role as cyber leaders to make sure new technologies are adopted quickly and leveraged to prevent ever more sophisticated cyberattacks.


