As AI agents take the keyboard, do the rules of code authorship and responsibility need rewriting?
GitHub’s Copilot has officially stepped out of the passenger seat.
In a major shift announced by Microsoft in May 2025, GitHub’s Copilot coding assistant can now autonomously write code, open pull requests, and even make commits directly to repositories.
Once a developer points the agent to a task, they’ll receive a notification when the completed work is available for review in the repository.
“The new GitHub Copilot coding agent expands on GitHub’s investments into agentic AI, as developers can now assign tasks for Copilot to work on in the background,” Chris Reddington, senior program manager of DevRel Strategy at GitHub, tells LeadDev.
“Developers can assign one or more GitHub issues to Copilot, just as they would assign tasks to team members or themselves.”
When you assign an issue to Copilot in GitHub, the coding agent acknowledges the task with an eyes emoji. When it’s done, the work is stored in a new file, complete with a summary of its workings.
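For teams that would rather script the hand-off than click through the UI, here is a minimal sketch of the assignment step against GitHub’s REST API. The owner, repository, and issue number are hypothetical, and the agent’s assignee handle (“Copilot” below) is an assumption to verify against GitHub’s current documentation for your plan.

# Sketch: hand an issue to the Copilot coding agent by adding it as an
# assignee through GitHub's REST API. Requires a token with repo access
# in the GITHUB_TOKEN environment variable.
import os

import requests

OWNER = "your-org"    # hypothetical owner
REPO = "your-repo"    # hypothetical repository
ISSUE_NUMBER = 42     # hypothetical issue to delegate

response = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE_NUMBER}/assignees",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"assignees": ["Copilot"]},  # assumed handle for the coding agent
    timeout=30,
)
response.raise_for_status()
print("Assigned; watch for the agent's eyes-emoji reaction and its pull request.")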
An accountability minefield
This shift raises an important question: who is accountable when something goes wrong – Copilot, the reviewer, or someone else?
Rajesh Jethwa, CTO of software engineering consultancy Digiterre, describes this issue as a “minefield”, because there are a number of entities involved in creating the code.
First, there are the providers of the models themselves, such as OpenAI or Anthropic. It is currently unclear whether these providers own the code generated by their models.
Second, there are the authors of the code used to train the model. There are still questions around whether they have any claim to ownership of the resulting code, given the provenance of the training data.
Third, there are employees and the organizations they work for. Typically, when an employee creates code as part of their job, the organization owns that code.
However, it remains uncertain whether the organization or the individual employee should bear responsibility for code produced with the help of a coding assistant.
“There is a legal precedent [because] there is an ongoing debate around whether people can copyright AI-generated artwork, and that will form an interesting precedent for who can hold a copyright notice of AI-generated content, including code,” Justin Reock, deputy CTO at DX, explains.
The U.S. Copyright Office has made it clear: fully machine-generated work isn’t protected by copyright. If there’s no meaningful human contribution – just code generated by an AI tool – it falls outside the scope of copyright law. For engineering teams, this has real implications.
In open source projects, contributors are expected to disclose which tools – including AI – were used to generate code. The human reviewer who approves the contribution is ultimately liable for compliance.
Still a copilot, not a pilot
Just as you shouldn’t take your eyes off the road in a self-driving car, these agents aren’t ready to operate unsupervised just yet.
“It may be doing more than a pilot, but at the end of the day the developer has to put their own stamp on it,” said Jeff Watkins, CTO of CreateFuture.
GitHub’s Reddington recommends sticking with existing policies when working with Copilot in this new way.
“With Copilot on your team – existing policies like branch protections still apply in the way you’d expect. Plus, the agent’s pull requests require human approval before any CI/CD workflows are run, creating an extra protection control for the build and deployment environment,” he explained. “Organizations should still apply the same security and privacy measures and capabilities to AI as they would to any other technology.”
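That human-approval guardrail can also be encoded in the repository itself. Below is a minimal sketch of one such policy using GitHub’s branch protection REST API; the repository names are placeholders, and the settings shown are one reasonable configuration, not GitHub’s prescribed setup for the coding agent.

# Sketch: require at least one human approval on pull requests to main,
# via GitHub's branch protection REST API. Uses a token with admin access
# to the repository, read from the GITHUB_TOKEN environment variable.
import os

import requests

OWNER, REPO, BRANCH = "your-org", "your-repo", "main"  # hypothetical names

response = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_pull_request_reviews": {
            "required_approving_review_count": 1,  # at least one human reviewer
            "dismiss_stale_reviews": True,         # new commits need re-approval
        },
        "required_status_checks": None,  # plug in your CI checks here
        "enforce_admins": True,          # admins cannot bypass the rules
        "restrictions": None,            # no push restrictions in this sketch
    },
    timeout=30,
)
response.raise_for_status()
print(f"Branch protection applied to {BRANCH}.")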
When asked about overall responsibility, GitHub stated that it will continue to apply responsible AI principles, which are focused on ensuring AI is developed and used responsibly and ethically.
“In terms of where the buck stops, I think that stops with leadership,” DX’s Reock said. Overall, engineering management is responsible for any code that makes it into production. “It still needs to go through rigorous testing and if it breaks things in production, then there is something wrong with the testing.”