Gaidar Magdanurov is the Chief Success Officer at Acronis.
Generative AI and large language models (LLMs) like ChatGPT and Bard have quickly found use in all spheres of human activity: generating ideas, assisting with research, creating and editing content, writing code and automating tasks, providing customer support, helping sales and marketing discover and qualify leads, assisting with education by explaining complicated concepts in simple words and many more applications.
However, just as this technology increases the productivity of workers, it also increases the productivity of malicious actors.
Understanding LLMs
By processing an enormous amount of content available on the internet, models like ChatGPT are trained to understand text input and provide answers based on the accumulated knowledge.
The ability to learn from new input and generate text based on user requirements makes generative AI an efficient assistant. LLMs are effective at creating unique content and at verifying and explaining information provided by users. For example, they can help generate Excel formulas and write or explain code snippets.
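For illustration, here is the kind of small, self-contained snippet an LLM can produce or explain on request. The task, file name and column names are hypothetical examples, not anything from a specific tool:

```python
# A hypothetical example of the kind of snippet an LLM can generate when
# asked: "Sum the 'amount' column for rows where 'status' is 'paid'."
# The Excel equivalent it might suggest: =SUMIF(B:B, "paid", C:C)

import csv

def total_paid(path: str) -> float:
    """Sum the 'amount' column for rows whose 'status' is 'paid'."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "paid":
                total += float(row["amount"])
    return total

print(total_paid("invoices.csv"))
```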
Generative AI’s knowledge is limited by the information it ingests, and it is also prone to making mistakes and generating incorrect information—the output requires validation. Therefore, it makes sense to treat generative AI as an intern that requires a detailed briefing and a thorough review.
Cybersecurity Risks
The weakest link in cybersecurity is the human. Verizon’s 2023 Data Breach Investigations Report shows that 74% of security breaches involved a human element: social engineering attacks, user errors or misuse of systems. People are tricked, phished and lured into disclosing sensitive data. And AI plays a role in making the problem bigger.
1. Spreading False Information
Let’s start with malicious actors using LLMs to spread false information. Many of us have received advanced-fee scam emails from a “Nigerian Prince.” Even though those emails were not well-written, they convinced users to pay money to fraudsters in exchange for nonexistent future rewards.
LLMs allow fraudsters to create more convincing emails and augment them with content on social media and dedicated websites to make them even more believable. Users have grown accustomed to looking for outside validation of the offers they receive by email. Now, digital con artists can create multiple websites and fill social media with posts to make the information look more credible.
2. Advanced Phishing
Generative AI is an excellent tool for assisting in phishing attempts. It takes very little time to create multiple custom-made emails targeting specific people using information from public sources. LLMs allow attackers to build phishing emails at scale and make the content look legitimate.
Imagine receiving emails on behalf of co-workers, friends or services you use with the content mentioning your life events and people you know—chances are you will be inclined to click links and maybe even provide some information through forms.
3. Malicious Code
Generative AI can create and explain code, and it can be used by threat actors to write malicious code: automating attacks, writing exploits and many other tasks. These AI tools serve as a coding partner for software engineers and security researchers, and as a partner and teacher for threat actors.
4. Sensitive Information
Finally, sharing sensitive information with any public cloud service is risky. Employees using LLMs in their work may inadvertently share confidential information, which will be exposed if the accounts those employees use are compromised.
These threats put pressure on businesses such as managed service providers (MSPs), as AI automation makes every customer a target.
The Importance Of Employee Education
The primary way for businesses to decrease the risk is to educate users about potential threats. Recurrent training on phishing and social engineering is required. Many users were trained to recognize phishing by its poor-quality content. That is no longer the case: a phishing email can look credible and slip past even the most advanced email filtering solutions.
There are many more indicators to check for:
• Mismatched URLs: the displayed link text differs from the actual link target (see the sketch after this list).
• Generic greetings, like “Dear Customer” instead of specific names.
• Requests for sensitive information.
• Urgent or threatening language that pressures the recipient to act quickly.
• Unnecessary attachments or links.
• Unusual sender address or domain.
• Email not matching previous communications with the sender.
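To make the first indicator concrete, here is a minimal sketch of the mismatched-URL check, using only the Python standard library. It flags links whose visible text names one domain while the href points to another. Real email security products do far more than this, and the domains shown are hypothetical:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect <a> tags whose visible text names a different host than the href."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag currently open, if any
        self._text = []        # visible text collected inside that tag
        self.mismatches = []   # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # Compare hosts only when the visible text itself looks like a URL.
            if "." in text and " " not in text:
                shown = urlparse(text if "://" in text else "//" + text).hostname
                actual = urlparse(self._href).hostname
                if shown and actual and shown != actual:
                    self.mismatches.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">www.mybank.com</a>')
print(auditor.mismatches)  # [('www.mybank.com', 'http://evil.example.net/login')]
```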
A phishing email can arrive on behalf of anybody: a co-worker, partner, vendor, bank or government body. Employees should always be on high alert.
As for information disclosure, businesses should establish a policy on using generative AI and train employees to get value from the tool without the risk of disclosing sensitive information.
Implementing Technology
For those looking for technology solutions, I recommend businesses employ advanced URL filtering to block malicious and suspicious websites. It is crucial to detect the suspicious behavior of users who fall victim to attackers. Implementing endpoint detection and response (EDR) on workstations and receiving alerts about suspicious activity helps prevent attacks from succeeding and limits the damage. (Disclosure: My company provides these solutions, as do others.)
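As a toy illustration of the URL filtering idea, the sketch below checks each requested URL against a locally maintained blocklist. Commercial filters instead rely on live threat intelligence feeds and reputation scoring, and every domain here is hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; a real filter would pull this from a threat
# intelligence feed and refresh it continuously.
BLOCKLIST = {"evil.example.net", "phish.example.org"}

def is_allowed(url: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """Allow the request unless its host is a blocked domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in blocklist)

print(is_allowed("https://evil.example.net/login"))   # False: blocked domain
print(is_allowed("https://mail.evil.example.net/x"))  # False: subdomain of a blocked domain
print(is_allowed("https://www.example.com/"))         # True: not on the blocklist
```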
Implementing email security and EDR brings cost and complexity: the solutions must be configured and maintained, and false positive detections generate additional overhead, including the removal of legitimate emails.
To overcome those challenges, it is essential to run a rigorous pilot and perform a cost/benefit analysis based on the results of the tests. Continuous monitoring and adjustment of the solutions’ configuration is required, as well as recurrent training for the IT staff working with those solutions.
AI is also an essential tool for stopping malicious actors who use AI. Modern security solutions rely on AI to detect suspicious behavior and filter through millions of events. Yet many aspects of our day-to-day jobs still remain to be automated with AI.
In a dynamically changing threat landscape, company leaders have to constantly look for solutions that automate their operations and use AI, increasing the capacity of their technicians and preventing human errors.
We live in a world where cyberattack capabilities are available to everybody. A user with limited computer skills can execute sophisticated attacks, and businesses must prepare before it is too late.