AI adoption has to be driven from the top

Not by mandate, but by clear leadership and guidance on why AI is being adopted.
March 09, 2026

Estimated reading time: 6 minutes

Key Takeaways:

  • Leadership must drive AI implementation by defining clear goals and metrics, rather than just mandating tool usage.
  • Because AI accelerates coding, slow executive decision-making becomes the primary bottleneck.
  • Leaders can use AI to refresh their technical skills, helping them maintain vital empathy for the daily developer experience.

Engineering leadership is supposed to sit at the crux of business and technology. Reality rarely matches this ideal.

“The CTO is always at the kiddy table,” says Charity Majors, CTO of Honeycomb, despite how central technology is to modern business. “AI is compressing and accelerating things, more than changing.”

This places leaders under pressure to drive AI adoption and deliver on the promised productivity gains, at pace. Where to start?

AI is inherently top down

A lot of AI experimentation happens bottom up, but that’s where it gets stuck. AI adoption has to be driven from above.

While researching my AI strategy book, it became clear that the complexities of data integration and the expense of AI sprawl mean AI policies and intentions must be set and communicated from the top. That doesn’t mean an AI mandate, but ensuring everyone knows why any AI tool is important to the business.

“Make sure the ‘why’ of your AI rollout is clear, and repeat it until everyone on your team knows it by heart,” a recent report by Multitudes advises. 

A recent DX AI-assisted Engineering report echoed this need for executive buy-in. AI “demands thoughtful measurement, implementation strategies, and proper training to ensure that the promise of increased velocity doesn’t come at the expense of the code quality that underpins long-term software sustainability.”

Of course, engineers love evidence, so you also need to find the metrics – particularly around throughput, code maintainability, change failure rate, and quality – that matter alongside any success stories.

AI objectives, not mandates

Adoption of AI developer tooling is already very high. That doesn’t mean effective usage or instant developer productivity gains are a given. 

The Multitudes study examined AI adoption, impact, and best practices from January to October 2025, including a deep dive into AI usage data across ChatGPT, Gemini, Cursor, Claude Code, and GitHub Copilot. The report found, “Depending on an organization’s practices, even the same AI tool would have a very different adoption curve.”

In other words: buying AI tooling doesn’t guarantee adoption. Instead, the report recommends that leadership:

  1. Set clear expectations about the role of AI in your organization. Why did you choose this tool? What does success look like?
  2. Track outcome metrics to measure the impact of your AI pilots and projects.
  3. Make code quality a clear goal, measuring it consistently and aiming to maintain or increase it with each AI rollout.
  4. Enable peer-to-peer learning, identifying your super-users and amplifying their experiences.

These campaigns range from official AI guilds and communities of practice to asynchronous communication channels.

Financial firm Capital One has a very popular Slack channel dedicated to quick AI wins, where technologists share:

  • Win. A one-sentence summary.
  • Impact. The effect of AI developer tooling, with metrics where possible.
  • Resources. Links to GitHub code, rules, workflows, docs, videos, screenshots, etc.

Catherine McGarvey, senior VP of developer experience at Capital One, said she wants to encourage more losses to be shared in that channel too, as that’s often where the most learning happens.

Following the blueprint of successful platform engineering adoption campaigns, some organizations even have internal advocacy and marketing roles dedicated to helping teams identify AI opportunities. 

For example, Trustpilot is hiring an AI enablement manager to establish its AI Champions network, run regular knowledge-sharing sessions, and scale impact across the organisation. S&P Global has even hired an associate director of AI marketing enablement to serve as the central hands-on AI practitioner, tasked with “identifying, building, and scaling AI-enabled workflows that materially improve marketing performance, productivity, and customer experience.”

Management becomes the next bottleneck

When code generation gets faster, the new bottleneck appears at the top. 

“Right now a lot of large enterprises use the fact that software can be slow to build to keep the decisioning at a pace they can handle,” Thoughtworks CTO Rachel Laycock said. Now, with the plethora of on-demand information, prototypes, job descriptions, applications, and more being generated by AI, “I feel like I am overwhelmed with decisions to make.”

Reflect on your own management decision-making processes in light of AI. In larger enterprises, it’s not uncommon for eight different executives to have to get together to make a decision. That doesn’t scale at the speed of AI. “If the software is getting built very fast and the bottleneck just becomes that ability to make decisions, do we need to change decision structures?”

AI helps managers get closer to the code

Engineering managers have always grappled with becoming detached from the code.

In a room full of executives, engineering leadership is often alone in representing developers’ interests. To do this properly, you have to understand how many developers a job requires and how long it will take, so you know where you need to fight for more budget and where you can make tradeoffs.

“Managers have to be able to read pull requests and follow the plot without having to outsource their judgment to an engineer,” Majors said. “You have to feel it in your bones. You have to be accurate enough,” she said, lest you end up in a situation “where you’ve underestimated the difficulty, underestimated how long it will take, and you’ve left the business hanging.”

Also, while employees come and go, vendor contracts last longer. Decisions to adopt an AI tool now will be hard to pivot from once they are ingrained in your processes. “Execs really need to ask themselves about who they’re trusting and why. And keeping sharp hands on engineering is about sharpening our own radar for who to trust and who not to trust,” said Majors.

This isn’t about writing code every day or submitting diffs. Fixing a bug or test or paying down some technical debt could suffice – just nothing within the critical path that will have devs waiting on you. 

This habit is about maintaining a sort of fluency and ability to learn like an engineer, Majors continued, keeping “empathy with the people on the ground, relationships with the people that you’ve hired, making sure there are open lines of communication.” 

Good senior leadership usually retains an understanding of complexity and architecture, she said, but the ability to write code calcifies more quickly. AI changes that.

“AI makes it a lot easier and more friendly for people who have been managing for a while to dip their toes back in to become fluent again,” Majors said.

“AI makes it easy to stay up on things, to stay close to things, to be playful. It’s more fun because we’re all kind of newbies together when it comes to AI. Your team is playing around with it for the first time, you play with it with them.”

Plus, you need hands-on experience with the now AI-infused software delivery lifecycle before you can begin to understand the experience your developers have adopting it. You certainly can’t mandate or regulate something you’ve not experienced.

Empathy still reigns

Brushing up on your technical chops can’t be the only way to build that empathy across teams.

“You have got to know what you don’t know,” Majors said. You need recent information “about what your engineers are experiencing every day.”

As AI underscores the human side of tech more than ever, Majors says you need to keep your door open. In particular, work to actively have conversations with those who are not your direct reports, asking:

  • What are your bad experiences?
  • What shouldn’t be changed?
  • How is AI impacting your work directly? Indirectly?

In particular, you have to build relationships with individual contributors, like Staff+ engineers, she said, because there is a “gulf of the way people think, the context they share, the information that they have. The more individuals that can cross that, the better decisions everyone gets to make.”

At the speed of AI, these sociotechnical feedback loops will make or break organizations.
