Those Secret Cyborgs Using Generative AI At Your Workplace Need Some Well-Devised AI Governance, Says AI Ethics And AI Law

News Room

Blaring headlines this last week proclaimed that secret cyborgs are in your workplace and that they are using generative AI in the most inventive of ways. Yikes, you might be looking at your officemates and wondering which of them is a regular human and which is a secretive cyborg. It might be hard to tell. Being back in the office is itself a rarity, and those sneaky, conniving, human-resembling cyborgs are indubitably pretty good at masquerading as the rest of us.

Well, I don’t want to burst anyone’s bubble, but the zany phrasing of so-called secret cyborgs is altogether misleading and entirely wrong. Just plain wrong. There aren’t any cyborgs involved. None. We find ourselves once again having to deal with the ongoing advent of AI mania, outsized hyperbole, and exasperating anthropomorphizing of anything veering into the AI realm.

Here’s the deal about the alleged cyborgs.

Suppose that there are fellow workers in your office who opt to use generative AI such as the widely and wildly successful ChatGPT by AI maker OpenAI, or other AI apps such as Bard (Google), GPT-4 (OpenAI), Claude (Anthropic), etc. They might be using generative AI to aid their work activities and yet might also be hiding that fact from their co-workers and their boss. You could say that they are secretly using generative AI.

I’ll explain in a moment why office workers might go underground with their use of generative AI.

Anyway, workers who are using generative AI on the sly have been outrageously and falsely labeled as secret cyborgs. Stupid. First, a cyborg is defined as a person whose physical abilities are extended via mechanical elements built into their body. None of the people using generative AI in the office have gotten an AI app embedded into their bodies.

On a separate topic, you might find of interest my coverage of advances in brain-machine interfaces (BMI) and where the future might take us, see the link here.

Second, I’m sure that defenders of the lousy verbiage would insist that these are office workers that are essentially extending their mental capacities by using generative AI. Why not then refer to them as cyborgs, they would vehemently argue. No harm, no foul. The problem though is that you can start calling just about everyone a cyborg. Do you use a smartphone? Aha, you must be a cyborg because it extends your capabilities. Do you use word processing and spreadsheets? In that case, obviously, you are a cyborg. Rinse and repeat.


Let’s then agree to put aside the overstretched headlines and see if we can salvage something useful from the matter at hand. Indeed, we can. In today’s column, I will take a look at how workers are opting to make use of generative AI when they either aren’t supposed to be doing so or when they want to hide the fact that they are doing so. Organizations can get themselves into a bellyful of trouble by ignoring the practical merits of generative AI and turning a blind eye toward a sensible and prudent form of generative AI governance in their firm.

Welcome to the clandestine world of generative AI as used by employees who might be earnestly trying to do their best for their employer. You see, some employees perceive generative AI as an important work tool, and they believe in their heart of hearts that using generative AI is a great benefit to the firm. They don’t want to go covert, but they believe they have no other choice.

To clarify, the best of intentions does not guarantee a reliable outcome. The sad thing is that employees left to their own devices to make use of generative AI can inadvertently get themselves and their organization into hot water. Their hearts were in the right place. Without sufficient guidance and proper direction, regrettably, they can make a mess that will potentially bring reputational loss to the firm and legal liability that could cost big bucks.

Some employees take a less-than-sincere attitude toward such matters. You might have an employee who figures they can do their job in half the time it regularly takes, and then spend the rest of their time playing video games or skipping out of the office to go skiing. Many managers that I interact with assume that this describes the bulk of those who are surreptitiously using generative AI in their firms.

Maybe, but I doubt it.

In my experience, by and large, the belowground use of generative AI in organizations is predominantly undertaken by legitimate workers who legitimately want to get their job done. They are often, shall we say, forced into a predicament that they don’t wish to be in. It comes about due to the generative AI governance principles that a firm has laid out.

I’ll cover in a moment the need for sensible AI governance principles and how organizations can prudently devise them and put them into practice. Let’s first cover some foundational facets of generative AI.

Foundations Of Generative AI

Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialoguing and producing essays that appear to be composed by the human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on text found on the Internet. For my detailed elaboration on how this works see the link here.
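As a loose intuition only, the pattern-matching idea can be illustrated with a toy bigram model that counts which word tends to follow which in a training text. To be clear, this tiny sketch is an invented illustration for this column, not how production generative AI actually works; real systems use vastly larger neural networks trained on Internet-scale data.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word most often follows each word
# in a tiny corpus, then "generate" by picking the most frequent successor.
# Real generative AI is enormously more sophisticated, but the core idea
# of pattern-matching over observed text is loosely similar.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most common word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# "cat" appears twice after "the" in the corpus, "mat" only once,
# so the toy model predicts "cat" as the next word.
print(next_word("the"))  # cat
```

The point of the sketch is simply that the "fluency" comes from statistics over prior text, not from any understanding or sentience.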

The usual approach to using ChatGPT or any other similar generative AI such as Bard (Google), Claude (Anthropic), etc. is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and at times startling, given the seemingly fluent nature of the AI-fostered discussions that can occur. The reaction by many people is that surely this might be an indication that today’s AI is reaching a point of sentience.

On a vital sidebar, please know that today’s generative AI, and indeed every other type of AI, is not currently sentient. I mention this because there is a slew of blaring headlines proclaiming AI as being sentient or at least on the verge of being so. This is just not true. The generative AI of today, which admittedly seems startlingly capable of generating essays and interactive dialogues as though composed by the hand of a human, relies entirely on computational and mathematical means. No sentience lurks within.

There are numerous overall concerns about generative AI.

For example, you might be aware that generative AI can produce outputs that contain errors, have biases, contain falsehoods, incur glitches, and concoct seemingly believable yet utterly fictitious facts (this latter facet is termed AI hallucinations, which is another lousy and misleading naming that anthropomorphizes AI, see my elaboration at the link here). A person can be fooled into believing generative AI outputs due to the aura of competence and confidence that comes across in how the essays or interactions are worded. The bottom line is that you need to always be on your guard and maintain a constant mindfulness of doubt about what is being outputted. Make sure to double-check anything that generative AI emits. Better to be safe than sorry, as they say.

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

We are now ready to proceed with the matter at hand.

How Organizations Need To Approach Generative AI Governance

All organizations today should have an AI governance structure in place. Period, full stop.

This includes establishing companywide policies about how AI can be used, along with rules for procuring systems that contain AI elements. There should be a governing body within the firm that provides teeth to the policies and serves as a point of escalation. Furthermore, another crucial purpose of the governing body is to communicate the policies and practices and be ready to rapidly adjust as marketplace changes occur. For my coverage of these vital organizational aspects of AI, see the link here.

Within an overarching framework of AI governance, there needs to be a subsection devoted to generative AI. I realize this might seem odd to you in that most people think only of generative AI, and do not realize that there are other types of AI beyond generative AI. If you focus exclusively on generative AI for your policies, you will have left wide open a slew of other equally imposing exposures due to AI adoption in many other allied areas.

That being said, I gleefully admit that generative AI has gotten most firms off the stick. Whereas trying to get firms to adopt AI governance was like rowing upstream, once generative AI became a worldwide phenom, all of a sudden many companies eagerly sought to establish AI governance policies of one kind or another.

I’ve repeatedly dealt with firms that ask me this classic chicken or egg question:

  • Should you start by establishing generative AI governance (only), or should you start by aiming for the bigger picture of AI governance all told?

I say that whether you start via generative AI and inevitably and industriously get to the bigger picture, or whether you start with the bigger picture and then ultimately and stridently get to generative AI, the end result ought to be the same. Whatever will best grease the skids. And provide the soonest and most needed dividends to your firm.

Not all firms are approaching generative AI governance in the same manner. I’d wager that most firms have no semblance of an AI governance structure in place. They are thinking about it, even though the barn door is wide open and the horses and cows have already wandered out. Generative AI is already here and is going to get bigger with each passing day. Organizations need to get their act together.

Based on my working hand-in-hand with organizations wanting to devise and promulgate sound AI governance policies and practices, I’ve come up with five common typologies of where they sit when I first walk in the door. The classifications range from doing nothing to doing something, along with the best of the choices, which is my fifth listed category.

When it comes to generative AI governance, my five classifications that organizations tend to land into are:

  • (1) Ambiguity in the absence of generative AI governance. The firm remains silent about generative AI and lets employees and managers fend for themselves. The firm either doesn’t care what happens, doesn’t understand what chaos they are sowing, or is stuck in some kind of analysis paralysis and can’t make headway on generative AI governance. Pressure will build. Something eventually is going to pop.
  • (2) Outright full-on ban of generative AI. Sometimes a firm will summarily declare that generative AI is never to be used. This seems pretty cut and dry. Top executives believe they have settled the matter and can pretend that no further attention is required. They wash their hands of generative AI. The problem is, they are merely putting their heads in the sand. I’ll be discussing in a moment how this puts employees and managers into rather awkward conditions.
  • (3) Loosey-goosey policies about generative AI. Some firms rush to establish generative AI governance and figure that something is better than nothing. In one sense, yes, the odds are that something is better than nothing. The issue though is that if the policies are vacuous and provide little if any practical guidance, you are possibly going to start an internal gold rush that will bring massive headaches. The refrain from employees will be that they could do whatever they did because the policies allowed it. A perfect excuse for perfect bedlam.
  • (4) Overly restrictive governance of generative AI. You’ve likely worked in companies where they decide to make it so darned hard to get things done that you nearly give up trying. Fill these forms out in triplicate. Get these ten managers to do a sign-off. Firms that think they are being safest by tightening down on any form of innovation that might be spurred via generative AI are fooling themselves. They are overprotecting against bad outcomes at the cost of enabling good ones. The good ones might be transformative to the business. The golden gems won’t ever see the light of day due to bureaucratic inertia.
  • (5) Reasonable policies and practical practices about generative AI. I saved this one for the last of this list. It is the Goldilocks version. You want a form of generative AI governance that makes sense for your organization, as dependent upon the company culture, company size, potential uses of generative AI, and a slew of related factors. Proper controls that aren’t killers and that encourage innovation are needed. The AI governance should be logical, easily conveyed, streamlined, and something that even the most maverick of your AI-desiring employees will recognize as having sense and sensibility to it.
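To make the fifth category a bit more concrete, here is a purely hypothetical sketch of how a Goldilocks-style policy might be encoded for internal tooling. Every rule name, app name, and data class below is invented for illustration; a real firm’s policy would be tailored to its culture, size, and risk appetite.

```python
# Hypothetical illustration: a minimal machine-readable generative AI
# policy. All categories here are invented for this sketch.
POLICY = {
    "approved_apps": {"ChatGPT", "Bard", "Claude"},
    "prohibited_data": {"customer_pii", "trade_secrets", "financials"},
    "requires_review": {"external_publication", "legal_documents"},
}

def check_usage(app, data_class, purpose):
    """Return (allowed, reason) for a proposed generative AI use."""
    if app not in POLICY["approved_apps"]:
        return (False, f"{app} is not an approved generative AI app")
    if data_class in POLICY["prohibited_data"]:
        return (False, f"{data_class} must never be sent to generative AI")
    if purpose in POLICY["requires_review"]:
        return (True, f"{purpose} is allowed but needs a sign-off first")
    return (True, "allowed under standard policy")

# An approved app on non-sensitive data sails through.
print(check_usage("ChatGPT", "marketing_copy", "brainstorming"))
```

The value of such an encoding is that the rules are explicit, easy to convey, and easy to adjust as the marketplace changes, rather than leaving employees to guess.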

Now that we’ve covered those five classifications, we can dig further into one particular category, namely how employees and managers are likely to react in the category of an outright full-on ban of generative AI by a firm (listed as my second bullet point above).

Imagine that you are working in a company and they announce that no one is to use generative AI. This comes from the top brass. A short memo accompanies the strict missive and explains simply that generative AI is dangerous and you are to avoid it.

Top executives might have heard about generative AI having so-called AI hallucinations, and this alone might have dissuaded them from wanting generative AI used at the company. They have bigger fish to fry. The rise of generative AI is, in their minds, a fad or short-term trend that will soon enough fade. Why open Pandora’s box? Best to nail the lid shut.

Meanwhile, astute employees do some exploring on their own and perchance discover that there are interesting and fruitful opportunities to use generative AI in the firm. They try using ChatGPT or one of the other generative AI apps on their own time on their own home computer. They simulate using it for work-related tasks just to see if it will work suitably (being careful to not make use of actual work data). After toying with this for many late hours and on weekends, they realize that there are ways to mitigate the downsides of generative AI and leverage the upsides.

What is such an employee going to do?

If they bring up the idea, it sure seems like a career-ending endeavor. Here’s the dialogue. Look, we told you in no uncertain language that we aren’t allowing generative AI here. The memo was abundantly clear. You are wasting our time by trying to bring up an already settled matter. Worse, you are harboring fresh verboten ideas that don’t belong here. Are you sure this is the right place for you to work at?

Okay, so any employee with a scintilla of sense knows that they would be cutting their own throat to try and push this generative AI usage idea up the ladder. The easier answer is to give up. Toss the generative AI usage into the dustbin. Don’t ever bring it up again. Life goes on.

But a really astute employee is probably not the type that caves in. The odds are that they see the writing on the wall. They see that other firms are using generative AI. Their career is falling behind. Their resume is going to be empty of any generative AI work-related accomplishments. The firm is essentially dampening the career prospects of its own employees.

You might be familiar with a famous adage that pertains to this kind of organizational attitude. Firms that treat their employees like mushrooms, including keeping them in the dark, will inevitably get the kind of work and loyalty that they deserve in return (dismal).

We are looking at one of those perfect storms. Employees are bombarded by news headlines and social media that generative AI is the ticket to a bright future. The firm, in this instance, has banned generative AI. An employee embracing curiosity and a desire to do their job in a better way has taken their own personal time to identify how generative AI could be productive. They would undoubtedly like to tell others at the firm, but doing so is akin to telling the secret police. You never know which fellow employee will backstab you or innocently allow loose lips to sink ships.

Time to go clandestine.

This can be tricky.

The Covert Generative AI Route Is Treacherous

Let’s ponder the path of a well-intended employee trying to leverage generative AI.

Suppose the firm has gotten the IT team to lock down all company laptops and desktops to not allow access to any generative AI apps. If you start using a generative AI app, the chances are that it will show up on some kind of internal usage report and you’ll be brought before the tribunal. Don’t want that to happen.
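As a purely hypothetical sketch of the kind of monitoring such an IT team might run, consider scanning proxy logs for access to known generative AI endpoints. The domain list and log format below are invented for illustration; real monitoring tooling varies widely from firm to firm.

```python
# Hypothetical sketch: flag network log entries that hit known
# generative AI domains. Domains and log format are invented here.
BLOCKED_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

def flag_generative_ai(log_lines):
    """Return (user, domain) pairs where a blocked domain was accessed."""
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in BLOCKED_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "alice chat.openai.com 2023-06-01T09:15",
    "bob intranet.example.com 2023-06-01T09:16",
]
print(flag_generative_ai(logs))  # [('alice', 'chat.openai.com')]
```

Even a crude report like this is enough to surface most casual on-network use, which is precisely why employees in banned-mode firms end up resorting to the contortions described next.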

You can try using a generative AI app that doesn’t seem to be on the list of banned apps. The problem though is that such a generative AI is probably questionable. You might be opening yourself to potential malware or other maladies. I’m not saying that every lesser-known generative AI app is unreliable, but all else being equal, it is something worthy of concern.

You can try to change the name of the generative AI app and make it seem like something already on the non-banned list. The problem there is that most any qualified digital checker is going to figure out that you aren’t using that actual app (such checks are actually a good precaution overall, since they keep you from being tricked into using an imposter app that merely seems familiar).

If you are working from home, you could use your personal laptop or desktop to access the generative AI. You would then simply cut and paste the outputs over to your work computer. A bit of a hassle, but not overly excessive. Most firms have a general policy about using non-work computers for work purposes, so you might be violating that provision (ergo, you are in double trouble: violating that policy and violating the ban on generative AI).

While working in the office, one supposes you could use generative AI from your smartphone, assuming that it is your personal smartphone (else, you once again would get caught). I’d say that trying to do so from a smartphone is probably going to be overly tedious and not something tenable for any length of time.

There are other ways to try and circumvent things, but this is not the point of this discussion. The point really is that we have an employee that wants to do what they believe to be the right thing for themselves and their employer, and they are being forced into all manner of contortions to do so.

The crux of this is that there isn’t any viable outlet for them to bring up their attempts to be innovative.

Sadly, things can devolve from there.

These inventive employees are trying to do their best. Nonetheless, they don’t necessarily fully understand the potential issues of using generative AI. What about the issue of possibly giving up confidential company data and allowing privacy intrusions when using generative AI (see my coverage at the link here)? What about concerns over copyright and Intellectual Property (IP) infringement liabilities (see my coverage at the link here)? The list of potential gotchas goes on and on.

In a firm with a reasonable and comprehensive AI governance structure, those important matters are already encompassed, including stipulated practices to undertake and practices to avoid. An employee doesn’t have to learn by the seat of their pants. They are provided with guidance and practices that steer them in the proper direction.

Back to the type of company that bans generative AI: some pundits have said that such firms should be proffering rewards or bonuses to employees who step forward and reveal that they have been secretly using generative AI at the firm.


If the firm is still of the mind that it has banned generative AI, it is completely crazy to suggest that it suddenly start offering employees added inducements to come forward. You would have to also offer complete amnesty, which it seems doubtful any such firm would do. A firm that has opted to ban generative AI is likely mired in a mental cloud of its own choosing, and you cannot band-aid your way around that.

Only once the firm moves out of the banned mode can it begin to try things like offering incentives to come forward about generative AI usage. But that’s putting the cart before the horse. You see, incentivizing employees to find beneficial uses of generative AI ought to be part and parcel of the overarching AI governance structure. Get the AI governance structure sorted out first.

I almost spit up my glass of milk in a classic spit-take when I saw that some pundits were intermixing the classifications of firms pertaining to generative AI governance. The ridiculousness is astounding. Here’s what I mean. Imagine a firm that has repeatedly denounced any use of generative AI. One day, some consultant tells them to start offering trips to Hawaii or cash rewards for revealing ways that employees have been using generative AI at the firm.

In the Star Wars film Return of the Jedi, we get this famous line: “It’s a trap!”

You would certainly assume that most employees would see this as a ruse to ferret them out. Once they came forth, they would almost certainly be summarily dinged. Until the firm dramatically and radically changes its attitude toward and acceptance of generative AI, you would be taking a mighty risk to step forward. I guess you’d need to decide whether the trip to Hawaii was worth it, knowing that upon return from the sandy beaches, your desk would be cleaned out.


One aspect that I didn’t dwell on is the employee who hides their use of generative AI when there is no readily apparent reason to do so. In the case of a firm that has banned generative AI, you can see why an employee goes underground with their AI use.

If a firm is otherwise open to generative AI usage and has a thoughtful and documented approach to encouraging the piloting and adopting of generative AI, you would think that there is little or no reason for someone to not come forward.

Let’s dive into that.

A worker starts quietly making use of generative AI. They keep track of how well things are going. Gradually, it dawns on them that the work they are doing can be substantially shortened via generative AI. They can do twice the amount of work in the same hours.

There are fellow workers in the firm that do the same work too. This generative AI discoverer realizes that if they reveal the productivity bonanza, it could harm their own job and their ongoing prospects as an employee because the firm might decide it can start laying off people. The firm doesn’t need so many workers in that role. This could get them laid off and their fellow workers and beloved colleagues laid off too.

What would you do?

If you insist that you would adamantly come forward and right away inform management of your findings, I have to say that you are one in a million. I doubt that most people would. Why risk your job? Why risk the jobs of your beloved colleagues?

Alright, let’s now consider the pundits’ calls for trips to Hawaii and token prizes. Would that be enough to get you to come forward? The equation might begin to tilt, but not when job security is at stake. When your entire job is at risk, those piecemeal payments aren’t going to move the needle.

We have returned to the overarching AI governance structure.

A well-devised approach anticipates these very kinds of productivity-boosting improvements via generative AI. An already laid out path should be included. I’m not saying that these are easy aspects to resolve. They can be quite arduous, though you can say the same for just about any productivity gains that an employee might identify during their tenure. This is not new. The use of generative AI is new. Coping with the advent of high-tech that materially impacts workers and worker productivity has been around for many years.

A final thought on this topic for now.

This entire discussion was about people. Everyday people, like you or me. At no point did I need to refer to them as cyborgs. Some believe that if you use a smartphone, you are a cyborg. Some claim that if you wear glasses, you are a cyborg. You can profess to be a cyborg at the drop of a hat.

I don’t buy into it.

Please put aside the kitschy reference of secret cyborgs and instead let’s focus on real people trying to do real things in the real world while using generative AI. We don’t need to invoke science fiction to do so.
