London: June 2–3, 2026
New York: September 15–16, 2026
Berlin: November 9–10, 2026

Making AI productivity gains stick across your engineering team

How to build a measurement framework for what comes after AI adoption.

Anastasia Zamyshlyaeva, Maxime Najim and Abiodun Olowode

Date & time: 17:00

Register for the panel discussion

Sign up to watch this panel discussion, hosted in partnership with GitKraken.


Analysis of 2,172 developer-weeks across teams using Copilot, Cursor, and Claude Code shows regular AI users improving output roughly 25% year-over-year, with real wins in test coverage and review efficiency. But the data also shows code churn climbing faster than output, duplication expanding, and quality signals that most dashboards don’t surface.

The challenge for engineering leaders isn’t adoption anymore. It’s making sure the gains hold up as AI becomes a permanent part of how teams work.

This session explores practical ways to measure AI’s impact on productivity, quality, developer experience, and efficiency ratios. You’ll walk away with a clear picture of where AI delivers durable value, where the gains are more fragile, and how to build visibility into both.

This panel discussion will cover:

  • Where AI is delivering value beyond lines of code
  • How to get started with an AI impact measurement framework
  • Sustaining AI-driven gains beyond the initial productivity bump

Panelists:

Anastasia Zamyshlyaeva

GitKraken
VP Engineering

Maxime Najim

Target
Distinguished Engineer

Abiodun Olowode

Cleo
Engineering Manager

Moderator: