Is big tech being too hasty in integrating these capabilities into the software we use every day?
Just how much of a role will generative artificial intelligence (AI) play in the future of enterprise software? For vendors of the most popular workplace software on the planet, like Microsoft and Salesforce, the answer is clear: a major one.
As Google Cloud CEO Thomas Kurian wrote in a recent blog post, “Over the past decade, artificial intelligence has evolved from experimental prototypes and early successes to mainstream enterprise use. And the recent advancements in generative AI have begun to change the way we create, connect, and collaborate.”
While Microsoft’s GitHub might have been an early mover in the AI assistant race, leveraging large language models (LLMs) to build its Copilot product, competing technologies from GitLab, Google, and Amazon haven’t taken long to step into the assistive coding arena. According to GitHub data, an average of 46% of developers’ code was already being generated by the tool at the time of writing, up from 27% in June 2022.
Microsoft has since gone a step further, building Copilot into its ubiquitous Microsoft 365 suite of enterprise applications, putting these capabilities into the hands of the world’s knowledge workers.
For the engineers tasked with building AI into their software, or even using these tools as part of their own day-to-day workflow, the response may be a little more nuanced. Those responsible for these products must ensure they are aware of what the AI is actually doing and, most importantly, what data is flowing in and out of their training models before even considering implementation. Here are six risks to consider before bringing generative AI into your organization.
Attribution of outputs
Chris Aniszczyk, Chief Technology Officer for the Cloud Native Computing Foundation (CNCF), already describes these assistive technologies as “super useful”, as he can “go make a comment and it’ll go and spit me out a bunch of basics that gets me probably 80% of the way there”.
One of the big risks for enterprise users, however, is the lack of clear attribution of outputs. “It’s not properly attributing where it’s got that stuff from,” Aniszczyk says. “I would love to see what its models are inspired from.”
To its credit, Amazon’s CodeWhisperer tool already claims to flag or filter code suggestions that resemble open-source training data, and makes it simpler to add attribution and retrieve the associated repository URL and license. “I think that is more in line with the open-source spirit and ethos,” Aniszczyk says.
That being said, a detailed assessment by the team behind Codeium, a competing AI coding tool, concluded that CodeWhisperer still has clear limitations in its ability to detect non-permissively licensed code, amongst other issues.
Intellectual property and legality concerns
Allowing an AI to generate code can result in an intellectual property nightmare for the unwary. One of the reasons that assistants can now produce so much usable code is due to the vast amount of source material scraped from public repositories, not all of which are permissively licensed.
While this approach might make for impressive demonstrations, there is also a risk of opening yourself up to intellectual property litigation. At the end of 2022, a complaint was filed in the U.S. District Court for the Northern District of California, on behalf of open-source programmers against GitHub Copilot, its parent, Microsoft, and its AI technology partner, OpenAI, alleging violations of open-source licenses.
Cautious engineers who want to ensure the code generated does not have the potential to trigger a lawsuit down the line have three choices: auditing, running locally, or simply avoiding using the technology altogether.
While the first and last options will not sit well with demands for increased productivity, keeping things local is a genuine possibility. With technology leaders increasingly focused on data sovereignty and privacy, it should appeal to enterprises accustomed to keeping their data where they can see it.
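In practice, “keeping things local” means routing prompts to a model hosted inside your own network rather than to a public API. The sketch below is illustrative only: the endpoint URL, request fields, and response format are assumptions standing in for whatever your self-hosted model server actually exposes.

```python
import requests

# Hypothetical internal endpoint -- the URL and payload schema are assumptions
# for illustration; substitute whatever your self-hosted model server expects.
LOCAL_LLM_URL = "http://llm.internal.example:8080/generate"

def suggest_code(prompt: str, timeout: int = 30) -> str:
    """Send a prompt to an internally hosted model so source code never leaves the network."""
    response = requests.post(
        LOCAL_LLM_URL,
        json={"prompt": prompt, "max_tokens": 256},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json().get("completion", "")

if __name__ == "__main__":
    print(suggest_code("Write a Python function that validates an email address."))
```

The trade-off is operational: someone has to run, secure, and update that model, but prompts and generated code never leave infrastructure you control.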
On the regulatory front, the European Parliament has already begun considering rules around AI. Any companies that do business in the region must keep a close eye on these developments and carefully consider the implications for their business.
Inputting confidential data
Another significant risk with generative AI tools is employees feeding confidential data into them, and potentially into their training sets, every time they ask an assistant to solve a problem.
While simply asking a tool such as ChatGPT for the answer to a problem might seem innocuous enough, if that question contains company secrets (such as credentials or keys embedded in code), there is every chance the information has left your organization and could end up in the model’s training data.
As full stack developer and blogger Jordan Hewitt puts it: “If you’re not comfortable posting it on StackOverflow, don’t paste it in ChatGPT.”
Managers will need to carefully assess the risk of confidential data being fed into any generative AI assistant, consider using a local solution, and set effective guardrails to protect against it.
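One concrete guardrail is to screen prompts for anything that looks like a credential before they reach an external service. The patterns below are illustrative assumptions, not an exhaustive ruleset; a real deployment would lean on a dedicated secret-scanning tool.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not a complete ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS-style access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key)\s*[:=]\s*\S+"),
]

def looks_sensitive(prompt: str) -> bool:
    """Return True if anything resembling a secret appears in the prompt text."""
    return any(pattern.search(prompt) for pattern in SECRET_PATTERNS)

def safe_to_send(prompt: str) -> bool:
    """Block prompts that appear to contain credentials before they leave the network."""
    if looks_sensitive(prompt):
        print("Refusing to send prompt: possible secret detected.")
        return False
    return True

if __name__ == "__main__":
    risky = "Why does this fail? conn = connect(host, password=hunter2)"
    print(safe_to_send(risky))  # False -- the password assignment matches a pattern
```

A check like this won’t catch everything, but it turns “don’t paste secrets into ChatGPT” from a policy statement into something enforceable at the point of use.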
Trusting the outputs
AI is already helping professionals outside of programming, from customer support workers to marketing professionals. One profession that isn’t convinced that AI tools will help them automate key tasks just yet is the database administrator (DBA).
For example, a DBA would be wary of using the output of an AI tool to tune database performance if they cannot validate how the suggestions were generated, as they can with an existing rules-based system.
“From my experience of the hardcore DBAs, they don’t trust AI unless you show them how it works. Unless you write down the exact algorithm, they will not use it,” says Bartek Gatz, a group product manager at open-source database specialists Percona.
Gatz notes that investment in AI currently sits more in the analytics space than in day-to-day operations. As for AI taking on the latter, he remarks, “I don’t think that’s going to happen quite that soon.”
Venkat Ramakrishnan, VP of Engineering and Products at Pure Storage, is also keen to distinguish between using AI for analytics and using it for task automation. His company’s Autopilot storage monitoring engine was built with strict guardrails in place: customers must give explicit permission for storage changes when a metric slips outside its expected range, for example.
Because of this, Ramakrishnan sees AI fitting in as an assistant rather than a replacement for humans. “Your AI can assist them in running infrastructure at scale, but still give them a lot of control in what changes they want to make. It informs them on the changes that potentially could happen and assists them on answering and aiding them find the right answers. I think that is the kind of AI that’d be really successful.”
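The “assistant, not replacement” pattern Ramakrishnan describes comes down to a human approval gate between a model’s suggestion and any change to a live system. A minimal sketch, with hypothetical `suggest_change` and `apply_change` functions standing in for whatever an AI engine and your automation actually provide:

```python
def suggest_change(metric: str, value: float) -> str:
    """Hypothetical stand-in for an AI engine's recommendation."""
    return f"Increase provisioned capacity: {metric} is at {value}."

def apply_change(change: str) -> None:
    """Hypothetical stand-in for the automation that actually alters infrastructure."""
    print(f"Applying: {change}")

def review_and_apply(metric: str, value: float, threshold: float) -> None:
    """Surface a suggestion only when a metric drifts out of range,
    and act on it only after an explicit human 'yes'."""
    if value <= threshold:
        return
    suggestion = suggest_change(metric, value)
    answer = input(f"AI suggestion: {suggestion}\nApply this change? [y/N] ")
    if answer.strip().lower() == "y":
        apply_change(suggestion)
    else:
        print("Suggestion logged but not applied.")

if __name__ == "__main__":
    review_and_apply("storage_utilisation_percent", 91.5, threshold=85.0)
```

The AI still does the analysis and drafts the recommendation; the operator keeps the final say over anything that touches production.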
Increasing technical debt
Then there is the risk of increased technical debt being accumulated by these coding assistants. “People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before,” Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory told the Wall Street Journal.
While fewer humans might be needed to generate the code mountain, that mountain still needs to be maintained. The danger of orphan code running on corporate systems has always been present, but the risk could increase if these coding assistants are truly unleashed onto the company codebase.
What about our jobs?
AI is finding its way into every corner of the workplace as the technology rapidly rises from a curiosity to an essential. Salesforce, for example, announced the ChatGPT App for Slack in March, swiftly followed by May’s announcement that it would be building AI natively into the user experience, replete with conversation summaries and writing assistance. With Copilot on its way to Microsoft 365 (now available as an invitation-only, paid preview), AI assistants will inevitably become part of corporate life and impact managers and engineers alike.
Companies such as BT and IBM are already talking publicly about cutting thousands of jobs due to the impact of generative AI technologies on the workplace, raising the question of whether these tools are an aid to white-collar workers or a threat. According to Microsoft’s 2023 Work Trend Index, 49% of workers say they are worried AI will replace their jobs.
That said, a whopping 70% of respondents to Microsoft’s survey would delegate work to an AI in order to reduce their workload. Unsurprisingly, Microsoft optimistically frames AI as a way to let employees focus on impactful work rather than mundane tasks.
“Employees know what’s in it for them, and the promise of relief outweighs the threat,” says Jared Spataro, CVP for modern work and business applications at Microsoft.
On the other hand, a 2022 White House report on the impact of AI on the workforce warned that these technologies could expose workers to disruption by automating away both routine and non-routine tasks.
What next?
Engineering managers are clearly faced with a difficult balancing act when it comes to the promise and risk of generative AI in the workplace.
While these technologies could usher in a new wave of productivity for their teams and customers, there is a clear need to proceed with caution while we are still learning what these tools are capable of, and what the legal and ethical ramifications of bringing them into our working lives will be.