Four Risks To Discuss With Your Team


CEO of DeskTime—a time tracking and productivity app for companies and freelancers. He’s also an amateur athlete and father of two.

The situation regarding today’s generative artificial intelligence (AI) tools and their adoption in the workplace is unprecedented—never before has a technology so thoroughly permeated organizations worldwide in such a bottom-up way.

Earlier in the year, ChatGPT set a record (since broken by Threads) as the fastest application to reach 100 million monthly active users, hitting that milestone in just two months. My company’s study revealed that in Q1 2023, ChatGPT use in the workplace doubled every month.

From what I’ve seen at the overwhelming majority of companies, the growing use of generative AI in the workplace hasn’t been a systematic, well-assessed integration of new technology. Rather, it has mostly been a free-for-all, with workers scrambling to find ways to leverage these publicly available applications to enhance, simplify and accelerate their own work.

Consequently, guardrails are few and far between, particularly among smaller organizations that lack the resources to thoroughly assess the technology’s impact and dangers. Plus, many companies are going through a honeymoon phase of boosted productivity, with little incentive to risk curtailing it.

But in discussions about the pros and cons of AI in the workplace, far too little attention is paid to the latter.

Four Generative AI Risks To Watch Out For

Let’s be clear—there are risks to using generative AI. Some are acute, others are abstract, and others we don’t yet know about. Regardless, there’s no better time than now to instill a culture of caution and proactively educate your employees about how to make the most of this transformative technology, without endangering the company or their own position.

1. Overreliance

A recent study supports a growing sentiment that ChatGPT may be “getting dumber.” Though the decrease in capabilities appears to be the result of well-intentioned updates and may be reversed in the near future, the situation puts a spotlight on the one-way relationship between the product and its consumer. Namely, OpenAI, the company behind ChatGPT, holds all the cards here, which presents certain dangers to users who rely on the tool in their day-to-day work.

Perhaps, one morning, a critical use case no longer works. Or the powerful free version is throttled to make way for monetization. In either case, users, and by extension companies, who have become overly reliant on artificial intelligence tools may find themselves severely impacted.

To minimize risk, organizations must be cognizant of the extent of AI use in their work environment and take preemptive measures to ensure work could continue uninterrupted were something to happen to their now-invaluable tools.

2. Hallucinations

A more acute risk is AI’s propensity for hallucination. In this context, hallucination refers to an AI model making up information in a bid to fulfill a user request. One of the most visible examples has been that of the New York lawyer whose court filing turned out to include cases that didn’t exist. The lawyer had turned to AI for help finding precedents, but the cases it listed weren’t real, something the lawyer failed to check.

Though generative AI tools have explicit disclaimers that they may produce inaccurate information, it’s often difficult for users to spot dubious information without pressing the AI model for the answer’s veracity or deep-diving into a rabbit hole of research—something that would undermine the productivity gains they’re after in the first place.

Companies must train employees in the appropriate use of these tools to avoid inadvertently spreading misleading or blatantly false information.

3. Data Security

In January, Amazon warned employees not to input sensitive information into ChatGPT. Microsoft and later Walmart issued similar warnings, citing security concerns and pointing to the general lack of clarity on how AI companies handle the data. Even if the data is used only anonymously for model training, that doesn’t make it acceptable to share confidential information with a third party.

The convenience of AI tools can sometimes overshadow data security, especially when the employee perceives no risk. But that doesn’t mean the risk isn’t there. For instance, by sharing contractual or personal details with a third party, a worker might overstep legal boundaries and damage the company’s reputation. Or a developer collaborating with AI on proprietary software might cause the loss of intellectual property, among several other potential issues.

To safeguard against unwelcome surprises, companies must implement a comprehensive data security policy and ensure that related employee training covers best practices for AI use in the workplace.

4. Dehumanization

As it stands, AI-generated content often sounds unnatural, and people familiar with the style of chatbot writing can often tell computer-generated text from human writing. In my experience, there’s something deflating about finding yourself on the receiving end of AI-generated articles or, worse, messages, and it’s a sentiment many share.

It feels like a core pillar of communication is being ignored. Much like a generic cold pitch, unpolished AI content asks your audience to engage with you without showing that you were willing to invest yourself first.

Accordingly, companies should be wary of leaving employees to their own devices when it comes to communication and content creation. A potential remedy is a robust brand book with detailed tone-of-voice guidelines.

A Costly Misstep Can Undo Productivity Gains

None of this is meant to scare you. On the contrary, the benefits of artificial intelligence in the workplace are truly game-changing. Today’s AI tools are already revolutionary, and if your company isn’t seeking ways to leverage their transformational potential, you’re only holding yourself back. There’s no putting the genie back in the bottle.

However, the unique nature of how AI entered the broader workplace is something to be mindful of. With AI entrenching itself as a core element of various business operations, you need to find ways to safeguard yourself against one-sided decisions by the companies behind these tools, technical troubles and unregulated use of generative AI, among other risks.


