How To Approach Generative AI In Practice


Whether you’ve used ChatGPT to create a grocery list or spent time wondering if a robot will take over your job, many of us have become increasingly familiar with the benefits and risks of artificial intelligence (AI). The immediate question people leaders are grappling with now is how to address the fact that employees will increasingly be leveraging AI in the course of their work.

Despite the hype, many of us in HR are learning that this isn’t a time to panic but to adapt. The fact is many of the tools we use in HR processes already have AI components, and that’s only going to become more prominent. Further, when it is used responsibly and ethically, AI can have a positive impact on an organization by taking over time-consuming but critical tasks.

To better understand how to mitigate the inherent risks of AI while allowing room for experimentation and exploration within your organization, I turned to Robert Scott, General Counsel and the SVP of Legal at Lattice. As a lawyer and thought leader in the data privacy community currently working in HR technology, Robert was early to recognize the rising impact of AI in the workplace as access to high-quality AI models spread.

In our conversation, we discuss Robert’s approach to crafting AI policy, as well as the use cases and applications of AI within high-growth organizations, along with their risks and benefits.

You were early to create an AI policy for Lattice, at a time when many organizations may not yet have one in place. How did you approach this policy, and do all organizations need one?

Robert: Whether or not you have an AI policy in place today, there are people within your organization using AI to do business. To set the stage, I’m a huge proponent of AI and what the technology can help individuals accomplish. I ultimately believe that the benefits outweigh the risks.

That said, different organizations will have different use cases and risk tolerances. When developing and implementing a policy within your own organization, you’ll want to work with your counsel and understand your business needs as they pertain to AI as well as your organization’s risk tolerance. This will look different for every organization, but we’ll get into some common use cases you may want to consider.

A good place to start: Understand how different teams within your organization are currently using AI. Conduct a listening tour and determine which use cases you should be trying to restrict or limit, as well as which are low-risk, high-value activities that you want to encourage and help facilitate.

Obviously, one of the reasons we are discussing this today is that the explosive rise of ChatGPT has made AI tools even more accessible to employees. What are some of the use cases and applications of AI that you see in the workplace today, and how are you thinking about them in terms of risk levels?

Robert: Some common low-risk, high-value use cases are those which do not require the user to share personal data or proprietary information. Sales outreach is a great example in that an account executive (AE) can share with, say, ChatGPT, what they want to achieve with a prospecting email and get help drafting this communication. We all get writer’s block and know what a time suck these emails can be, so using AI really increases efficiency here. Marketing content is similar in that AI is great for ideation and can help explore potential paths for creative content.

While you probably don’t want to go to a large language model (LLM) and ask it to create a new application for you, quality assurance (QA) is a great engineering use case. An engineer can take code they’ve written and ask AI to debug it, driving efficiency without exposing your organization to much risk.

Drafting policy and creating presentations are other low-risk, high-value ways I’ve seen teams and individuals leverage AI.

One use case that is particularly relevant for People teams, but is a bit murkier in terms of risk and value, is with applicant tracking systems (ATS). Right now, there are tools that allow you to run a video interview with an applicant and get recommendations based on the candidate’s suitability for a role. This may seem spooky – and we all know there’s a risk of bias in AI – but I’m actually excited for large (and more risk-tolerant) companies to pursue and experiment with these kinds of tools.

For most teams, however, I’d encourage you to work closely with legal counsel when using AI for ATS, because there are certainly more regulations there than with other use cases, with more coming down the pike.

What are some other risks of working with AI that we should be thinking about?

Robert: Data privacy is top of mind. Lattice has sales operations in EMEA and we’re based in California, so we need to think through compliance with the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). Under these regulations, the use of AI would be considered a processing activity, which may require opt-in consent before using a person’s data.

Even though AI is not human, sharing your confidential information with the tool could result in the tool somehow repurposing that information or breaching a confidentiality obligation in a customer contract. You could also inadvertently disclose strategic business information. One of the best ways to mitigate this risk is by entering an enterprise contract with your AI vendor of choice and working with your legal counsel to ensure confidentiality protections are in place.

Another potential risk to consider involves intellectual property. If you leverage AI to develop an innovation, it may not be patentable. Copyright protection is a bit more up in the air at this point, but I wouldn’t count on it. Generally, try to limit the use of AI anywhere you need innovations or ideas to be protected for your organization.

Lastly, I wouldn’t recommend including AI in any employment decision-making at this point, unless you’ve worked with your vendors and employment law counsel and are sure your use case will be compliant. There is a lot of opportunity for us to reduce human bias by leveraging AI capabilities, but we’re not quite there in practice yet.

At the end of the day, many of the risks around AI can be managed as long as you put guardrails in place to ensure employees are using AI ethically.

I’ve heard from my own networks that many CPOs and other HR leaders are recognizing the need to put a policy in place around AI, but it can be hard to know where to start. What are some key considerations for leaders to keep in mind when crafting an AI policy?

Robert: First and foremost, keep it simple. Policies should be developed so people can easily understand them.

It’s important to avoid “don’t.” A policy that fundamentally says, “Don’t do it,” won’t work. Instead, look to restrict or limit use in high-risk areas (of course, what these areas actually entail is subjective and will largely depend on your organization’s risk tolerance and function). Encourage the use of AI in high-value, low-risk areas such as sales and marketing.

Allow for ideation and innovation, and encourage employee feedback at every step. Understanding how people want to use AI to advance business interests will help you provide a path for them to do that.

How do you think about getting opt-in from employees and/or helping folks understand how to keep business information private when using these tools? One of the things you suggested was having an enterprise instance (like ChatGPT Business)—will this make the most sense for those of us thinking about data privacy?

Robert: ChatGPT made a big splash because it made AI accessible. Everyone started using it before regulations could catch up. So we don’t have as many answers about the risks that our organizations are being exposed to. A lot of this work involves keeping up with best practices and trends by doing your research, experimenting with the technology yourself, and working with your counsel to meet the needs of your organization.

For ATS and the tools that are leveraging AI for employment decision-making, first, figure out what your use case is, and just like any procurement initiative, make sure you understand what business outcome you’re trying to achieve.

There’s a lot of sales sizzle around AI tools right now, but keep your eye on the ball—what specific problem do you want AI to address? One example is if you have a retention problem that you think stems from managers not selecting the best candidates. You may want to leverage AI here, but first, screen your vendors with that need in mind, and as part of the procurement process, add a regulatory scoping piece and really drill down with potential vendors to see if they can meet those needs.

When it comes to AI and confidentiality, how do we make sure we have the necessary protections in place?

Robert: This will be context-specific, but generally, vendors are evolving to meet business needs in response to those businesses pushing on them and saying, essentially, ‘we’d love to use your tool, but if the model can be trained on our data, this violates our confidentiality agreements.’

Each vendor has different default terms and enterprise terms, and in some use cases your organization may be willing to take on more risk than in others. For example, you can imagine a sales representative putting confidential information about a new product launch in a prospecting email, and it’s up to you whether or not that’s OK: how secretive is the information? The risk of ChatGPT sharing that information is probably very low, but as an organization, you need to set those guardrails both internally and with your vendors.

For folks who are getting started with an AI policy – where should they start?

Robert: It’s so important to initiate the dialogue around AI, and then keep it going. One of the biggest mistakes I’ve seen teams make is to avoid adopting a policy at all, or adopting a policy and then assuming that the work is done. First, reach an agreement with your key stakeholders around guardrails—which use cases are greenlit, which need additional review, and which you are just not going to pursue, at least for now.

All of these use cases are dynamic. Your vendors are exploring them as well and some will do a great job at incorporating AI in a compliant and non-biased way, and others might not. So know that you’ll need to continuously monitor the situation and keep up the conversation.

To bring it back to your work at Lattice – it’s been a few months now since we put our AI policy in place. What are some of the responses and feedback you’ve had to the policy so far?

Robert: The honest answer is that it’s been anticlimactic – and that’s a good thing. We did our homework and felt confident that the policy we were creating, and the stance we were taking on AI for Lattice, were correct.
