As generative AI usage continues to rise, you needed an AI governance policy written yesterday. Here’s where to start.
Since OpenAI launched ChatGPT last year, the speed at which generative AI has changed the way software engineers do their jobs and think about their products has been hard to keep up with. The technology has also introduced a new set of AI risks that needed managing before it was unleashed on the world.
Take what happened at Samsung last March. When employees pasted confidential source code into ChatGPT to check for errors and optimize the code, sensitive company data was leaked. A month later, Samsung declared a full ban on using generative AI at work.
This embarrassing incident could have been avoided if the megacorporation had a clear AI usage policy to sit alongside its privacy and cybersecurity policies. But the electronics giant isn’t alone: it’s estimated that fewer than a third of companies have an AI governance policy.
What’s more, by adopting a knee-jerk, blanket-ban approach, Samsung risks falling behind competitors that are able to safely and effectively leverage these tools for innovation and productivity gains. Success in the age of AI requires a healthy mix of caution, optimism, and skepticism. Your engineering organization may have needed an AI governance policy yesterday, but here’s how to get started before it’s too late.
How to be AI governance compliant
As with any policy document, the best place to start is with the law. Audrey Glover Dichter is an attorney at Glover Dichter specializing in AI governance and data privacy law, as well as advertising, marketing, and promotions law, including intellectual property clearances and protection.
Just like the European Union’s General Data Protection Regulation (GDPR), upcoming regulations around AI governance are both industry agnostic and extraterritorial. While each country or region seems to have something in the works, Glover Dichter is focused on the forthcoming EU AI Act and the regulations flowing from last year’s Executive Order on AI safety and security in the United States. If you are tasked with writing an AI governance policy, keep an eye on the news to stay up to date on government policy and avoid the mistakes others are making with the nascent technology.
“I think there’s urgency,” Glover Dichter said, as the US Federal Trade Commission (FTC) is already enforcing existing rules where the use of AI violates the law. “I think there is consensus to limit some aspects of AI [and] be more cautious and aware of what the legal ramifications of using AI are.”
While we wait for clearer regulation, Glover Dichter offers six guiding principles that should keep you on the right track:
- Know how AI integrates with everything else in your business.
- Be careful what and how much information AI collects.
- Be aware of what AI does with the information once it’s collected.
- If you use AI for content creation, you must verify that the work created doesn’t infringe on someone’s copyright and/or trademark.
- If you use AI in your business, you must have an AI governance policy that sets the rules for its use – and communicate it to all stakeholders.
- If you sell AI services, you must share your AI governance policy with your customers.
These considerations should be familiar to anyone who has had to grapple with data privacy issues. “Why are you using it? What is the purpose? How much are you willing to trust it?” Glover Dichter asks. “You need to understand the backend of the AI because you want to know what the AI is doing with the PII [personally identifiable information] you are collecting.”
Another consideration: when you are using a free version of something, you are a user, not a customer. That means you are likely training the large language model for everyone. By using a premium version of an AI tool, you should be able to restrict this outbound data – just make sure to read the fine print, such as the terms and conditions or terms of use, first.
Remember, if you are looking to have a legally enforceable AI governance policy, you will need to consult with a lawyer. But many organizations haven’t yet thought about how they approach experimenting with AI. With the majority of software developers already tinkering with different types of AI, you need to establish a governance policy that clearly sets guidelines and considerations for internal usage of the technology. Let’s get started.
Questions to consider
The agile practice of Consequence Scanning is useful when considering an AI governance policy, as it requires you to stop and discuss the following with your attorney:
- What are the intended and unintended consequences of this product or feature?
- What are the positive consequences we want to focus on?
- What are the consequences we want to mitigate?
AI adoption is definitely a time when “worst-case scenarios” should be openly discussed.
If you’re going to use AI to collect information, you need to understand what the model is going to do with that information and build appropriate guardrails.
“If it’s being shared, who’s it being shared with?” Glover Dichter asks. “Under GDPR and a lot of other data privacy laws around the world, sharing or selling PII becomes questionable. But if you are sharing without consent, you obviously are violating privacy laws.”
Once you know the answer to those questions, ask yourself: “How much are you willing to trust it? Are there any biases built into it? You need to test it. You need to understand it. Don’t just plug it in and forget it,” she said.
With those answers in hand, you can start to build internal guardrails around:
- How you use each AI tool.
- Why you want to use it.
Instead of going for a catch-all policy, follow the EU AI Act’s lead in assessing and categorizing specific tools and models (a minimal sketch of such a register follows this list):
- Unacceptable-risk AI – flag and prohibit systems considered a threat to people, including social scoring and certain facial recognition use cases.
- High-risk AI – proceed with caution; perhaps only approved departments can use these tools and models. General-purpose systems like ChatGPT also face the EU’s transparency requirements, with additional obligations if the underlying model is deemed to pose a systemic risk.
- Limited-risk AI – systems that are transparent by default, allowing users to make informed decisions.
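To make these tiers operational, some teams keep a machine-readable register that maps each tool to a risk category and the departments approved to use it. Here’s a minimal sketch in Python; the tool names, tiers, and department lists are illustrative assumptions, not real classifications – your legal and security teams would own the actual entries.

```python
# Minimal sketch of an internal AI tool register, loosely modelled on the
# EU AI Act's risk tiers. All entries below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIToolPolicy:
    name: str
    risk_tier: str                      # "unacceptable" | "high" | "limited"
    approved_departments: set = field(default_factory=set)

REGISTER = {
    "face-scoring-tool": AIToolPolicy("face-scoring-tool", "unacceptable"),
    "gen-ai-chat-assistant": AIToolPolicy(
        "gen-ai-chat-assistant", "high", {"engineering", "marketing"}
    ),
    "grammar-checker": AIToolPolicy(
        "grammar-checker", "limited", {"engineering", "marketing", "support", "hr"}
    ),
}

def is_allowed(tool: str, department: str) -> bool:
    """Allow a tool only if it's registered, not prohibited, and the
    requesting department has been approved to use it."""
    policy = REGISTER.get(tool)
    if policy is None or policy.risk_tier == "unacceptable":
        return False
    return department in policy.approved_departments

# Example: engineering can use the chat assistant, HR cannot (yet).
assert is_allowed("gen-ai-chat-assistant", "engineering")
assert not is_allowed("gen-ai-chat-assistant", "hr")
```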
This categorization will be useful not only in setting your own policy, but also in outlining your broader approach to AI adoption. And remember that some parts of the business may be more restricted than others when it comes to experimenting with these technologies, so your policy needs to be clear about where those boundaries are for different teams and departments.
“We take a look at how data is used before we try out the tools and create guidance on what kind of data you can use – for example in prompts – then we do a trial and see how everyone feels about the tool,” said Steve Fenton, principal technical content creator at Octopus Deploy. Most of the tools that were adopted as a result had strong privacy and security features.
Grammarly has been leveraging AI in its product for the last 15 years. Its User Trust Center is industry-leading, and breaks the risks down into four areas:
- Privacy – The model doesn’t save any data and is blocked from accessing sensitive data.
- Security – Adheres to the principle of least privilege and complies with regulations.
- Compliance – Assessing which certifications are in place.
- Responsible AI – Ensuring a human is always in the loop and all content is scanned to mitigate bias.
Any AI policy also needs to include provisions for revisions when new versions of these tools and models are released. Start by scheduling a quarterly review to stay on top of these changes.
What your engineers need to know
The purpose of an AI governance policy shouldn’t be to scare your engineers into toeing the line. It’s important to remember that the majority of your developers are already using generative AI – they just might be using it secretly.
Considering the measurable developer productivity benefits and rise in developer satisfaction already being seen, you don’t want to deter them. You want to make sure they proceed with clarity and caution – and that they understand why they should.
All organizations should remind staff not to feed any private information – from recruitment candidates, to customer intel, to proprietary code – into public machine learning models. Guide your engineers on what should and should not go into public versus paid-for AI models – and how much they should rely on any outputs.
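To make that guidance actionable, some teams put a simple scrubbing step in front of anything sent to a public model. The sketch below is a minimal illustration in Python; the regex patterns and function names are assumptions, and a real setup would lean on dedicated secret-scanning and data loss prevention tooling rather than a handful of regexes.

```python
# Minimal "scrub before you send" guard for prompts bound for public models.
# Patterns and function names are illustrative assumptions, not a complete
# PII/secret detector.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credential": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything that looks like PII or a credential with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def prepare_prompt(prompt: str) -> str:
    cleaned = scrub(prompt)
    if cleaned != prompt:
        # Surface the redaction so the policy team can spot risky habits.
        print("warning: prompt contained material that was redacted before sending")
    return cleaned

# Example usage:
print(prepare_prompt("Review this config: api_key=sk-12345, contact admin@example.com"))
```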
Copyright is one example: early court rulings have found that work generated solely by AI is not copyrightable, meaning that if a large amount of your codebase starts being written without human authors, you may risk your ability to claim it as intellectual property.
While it may seem like these AI tools speed up delivery, remind your engineering teams of the risk of prioritizing quantity over quality. A Purdue University study found that 52% of ChatGPT’s answers to programming questions were incorrect – and that these hallucinations were persuasive.
“Like any tool, you need to understand the limitations and the kind of things that you use it for,” said Patrick Debois, VP of engineering at Showpad. “If you’re not that well-versed, it’s also very hard to see whether this is good code or bad code.”
While GitHub’s own research has found that its generative AI product Copilot helps developers complete tasks up to 55% faster and now writes around 46% of the code in files where it’s enabled, recent research has found a worrying trend of code quality degradation due to an overreliance on Copilot. That code is at higher risk of maintainability problems, code bloat, and increased technical debt.
“We’re taking a deliberate path of avoiding the allure of volume. Yes, we could get more things by pumping them out, but we’re more interested in creating fewer things at higher quality. That makes some of the options – [like] ChatGPT – less appealing,” Octopus Deploy’s Fenton said. “When software delivery is the constraint, a business has to make choices. Imagine if that constraint were removed, so you could have all the features you ever wanted – would that be good software?”
Make sure your team continues to run their code through linters to spot errors. Debois also emphasized that progressive delivery practices like enabling feature flags, blue-green deployments, and rollbacks are doubly necessary when a third-party bot is involved in writing your code.
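In practice, that can be as simple as keeping any AI-assisted code path behind a feature flag, with the existing behavior as the safe default. A minimal sketch, assuming Python and an environment-variable lookup standing in for a real feature flag service:

```python
# Minimal sketch of gating an AI-assisted code path behind a feature flag so it
# can be rolled out gradually and rolled back instantly. The flag store and
# function names are illustrative assumptions, not a specific vendor's API.
import os

def flag_enabled(name: str) -> bool:
    """Toy flag lookup; in practice this would query your feature flag service."""
    return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

def ai_generated_summary(ticket_text: str) -> str:
    # Placeholder for the AI-assisted implementation under evaluation.
    return "AI summary: " + ticket_text[:120]

def summarize_ticket(ticket_text: str) -> str:
    if flag_enabled("ai_ticket_summary"):
        # New, AI-assisted path: ship it dark, enable it for a small cohort,
        # and flip the flag off if quality or latency regresses.
        return ai_generated_summary(ticket_text)
    # Existing, human-written path remains the default.
    return ticket_text[:200]

print(summarize_ticket("Customer reports intermittent 502 errors after the last deploy."))
```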
Where to start?
That may sound like a lot, so where should you start? Sit down with an attorney to draft an AI governance policy for your organization and think about the following steps:
- Read up on the law and refresh staff on existing data privacy laws through mandatory training.
- Set up Google alerts and sign up to relevant newsletters to stay on top of the news.
- Map and document existing AI integrations across the business.
- Establish a process for verifying that any code written is not infringing on someone’s copyright and/or trademark.
- Categorize tools and models according to their risk profile. Map these risk profiles to workloads and departments according to their appetite for risk.
- Read the small print on any generative AI tools your engineers are already using and set guidelines for their usage accordingly.
- Clearly outline what private information should never be fed into an AI model.
- Set guidelines for what information and data should and should not go into public versus paid-for AI models.
- Clearly define rules around trusting the output of these models and how to verify the results before putting anything into production.
- Clearly outline the limitations of AI tools and establish a set of standard counterbalances to ensure that any output is verified.
- Establish standard tooling – such as linters and feature flags – to quickly and effectively test the code.
- Write and circulate an external version of your policy for customers.
- Schedule a quarterly policy review – sooner if there’s a big legal change or a new tool is introduced.