Shipping nano updates in the AI era

Are nano updates the missing link in your strategy?
January 30, 2026

Estimated reading time: 6 minutes

Chainguard VP Dustin Kirkland explains why enterprise software teams should update continuously with a scalpel, rather than taking on mega-sized upgrade burdens

Nano is a unit prefix meaning one billionth. In contexts like nanometer and nanotechnology, it refers to something infinitesimally small. And for software teams, going nano with updates to your software stack’s dependencies is key to maintaining stability across intricate software environments.

At least, that’s according to Dustin Kirkland, VP of engineering at software supply chain security company Chainguard. He frames nano updates as continuous and comprehensive support for new open source software versions: “A nano update is all day, every day, all the time updating and upgrading to the latest and greatest everything.”

Up to 90% of software applications are built with open source components. Yet it’s becoming more challenging to keep pace with these dependencies since the timespan between new releases is shortening. Likely as a result, 90% of applications have components more than ten versions behind the current version, found a 2025 report from Black Duck.

It’s easy to let outdated software components build up in the background over time and then attempt an upgrade all at once. But sweeping organization-wide software updates can be a big engineering chore and cause breaking changes when they don’t account for the nuances of every individual system. 

Different application stacks typically require a unique blend of various major and minor versions of specific programming languages, external libraries, frameworks, and infrastructure-level operating systems – that’s where nano updates come in.

Taking a more incremental approach to updates meshes better with a more rapid cadence of version releases, the requirements of legacy enterprise software, and the need to ship code faster in the AI age. “The pace of innovation requires a different approach to the problem of updates and newer versions of software,” says Kirkland.

The problem with batch software releases

Although the software industry now treats DevOps principles like progressive delivery and continuous integration as standard practice, that iterative approach hasn’t reached every engineering activity – continuous, incremental support for infrastructure software versions among them.

For instance, rolling out a new major version of Red Hat Enterprise Linux (RHEL), a stable commercial distribution of the pervasive Linux operating system, into a major bank could be a twelve-month ordeal, says Kirkland. “It’s not something that you undertake over a weekend or patch on a Tuesday.”

These traditional, batched, long-tail updates take considerable preparation, he says. It’s a lesson he draws from past roles guiding software development at large companies such as Goldman Sachs, Google, and IBM.

Going down the road of large-scale updates soaks up engineering time and can stall the release of new features. “You’re going to accumulate hundreds, if not thousands, of unpatched things that can’t be fixed until your next major update,” says Kirkland. “That is the opposite of a nano update.”

Instead, nano updates are precise, minimal changes that level up individual components iteratively and in isolation. For instance, that could mean upgrading to Python 3.13.0 in one stack while keeping Python 3.12.0 in another due to compatibility requirements, or testing how a minor update to a Node.js framework affects a particular system.

To enable this sort of micro-dependency management, Kirkland recommends using containers, a technology that packages software and its dependencies into isolated environments. Using the container as the boundary helps manage each stack’s dependencies independently, since everything you ship travels together in one place.
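As a rough illustration of that per-stack pinning idea – a minimal sketch, not Chainguard’s tooling, with hypothetical package names and version pins – the snippet below shows how one containerized stack might declare and verify its own Python and library versions at startup or in CI, while a sibling stack pins Python 3.12 instead:

```python
# Illustrative only: a per-stack "version contract" checked at startup or in CI.
# The pinned versions here are hypothetical examples, not recommendations.
import sys
from importlib.metadata import version, PackageNotFoundError

# Each stack declares its own pins; another stack might pin (3, 12) instead.
PINNED_PYTHON = (3, 13)
PINNED_PACKAGES = {"requests": "2.32.3"}  # example pin, replace with your own

def check_stack() -> list[str]:
    """Return a list of mismatches between the running stack and its pins."""
    problems = []
    if sys.version_info[:2] != PINNED_PYTHON:
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} "
            f"!= pinned {PINNED_PYTHON[0]}.{PINNED_PYTHON[1]}"
        )
    for name, pinned in PINNED_PACKAGES.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name} is not installed (pinned {pinned})")
            continue
        if installed != pinned:
            problems.append(f"{name} {installed} != pinned {pinned}")
    return problems

if __name__ == "__main__":
    mismatches = check_stack()
    for m in mismatches:
        print("MISMATCH:", m)
    sys.exit(1 if mismatches else 0)
```

Because each container carries its own pins, a nano update to one stack is a one-line change to that stack’s contract, leaving every other stack untouched.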

The benefits of nano updates

Compared to large-scale engineering changes, smaller updates carry a number of benefits, says Kirkland.

More AI-ready

Taking a nano update approach keeps modernization continuous. If you’re trying to deploy AI in your environment, you’ll need a modern stack, such as a recent version of Python and current versions of AI libraries, but incompatibilities with legacy underlying technology can easily stall these initiatives. “You’re not going to be able to run that two, three, or four-year-old Linux distribution,” Kirkland explains.

Varied support

Nano updates aren’t about making micro adjustments across the board – they’re about moving to minor or major versions in different areas when needed. “A one-size-fits-all solution for everyone doesn’t necessarily work,” says Kirkland. “Having many and varied versions available ends up becoming quite important.”

Performing microscopic updates is becoming more necessary due to the varied nature of different stacks and their interconnected dependency trees. “You might need one version of a library here, and another version of that library here,” explains Kirkland. “You can expect to need small, incremental, very precise ‘carve out with a scalpel’ style updates in specific places.”

Fewer vulnerabilities

At a big enterprise, “hundreds of thousands, if not millions, of engineering hours can go into remediating security vulnerabilities,” says Kirkland. It’s not just installing and upgrading dependencies; it’s also time-consuming operational side work, he adds, like retesting, requalifying, and silencing vulnerability scanners for false positives.

Taking a continuous approach to updates is good cybersecurity hygiene, since it avoids the risk of end-of-life components sitting unpatched and leaving vulnerabilities open to attackers. According to HeroDevs, most cyberattacks exploit out-of-date packages with known vulnerabilities, as seen in incidents like Log4Shell and WannaCry.

Kirkland adds that using hardened libraries – software packages that have been vetted by third-party security experts for known vulnerabilities – reduces risk when continually updating versions. The net result is less human labor spent finding and remediating vulnerabilities, and the opportunity to reassign engineers to higher-value work, he says.

Better observability

Lastly, a nano-style approach to dependency management reduces the number of variables involved, making the effect of each change easier to track. When a team updates a single library, runtime, or dependency in isolation, it can directly observe how that change impacts an individual software stack, exposing any breaking changes and pinpointing the specific update that caused them.

Large batch upgrades, on the other hand, are harder to diagnose, since many components change simultaneously and issues can surface from many different directions. In other words, the blast radius is far larger, making observability more challenging.

When something breaks after a nano update, however, it’s easy to pinpoint what caused it. “Breakages are inevitable,” says Kirkland. “I’d rather stub my toe than break my leg.”
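One way to picture the difference is with a minimal, illustrative Python sketch (not any particular vendor’s tooling) that diffs two dependency snapshots in the name==version format produced by pip freeze. After a nano update the diff contains a single line, which is exactly what makes the cause of a breakage easy to spot; after a batch upgrade it can contain dozens.

```python
# Illustrative sketch: diff two dependency snapshots (e.g., before/after a deploy)
# to pinpoint exactly which components changed. Input is assumed to be
# "name==version" lines, as produced by `pip freeze`.

def parse_freeze(text: str) -> dict[str, str]:
    """Turn 'name==version' lines into a {name: version} mapping."""
    deps = {}
    for line in text.strip().splitlines():
        if "==" in line:
            name, ver = line.split("==", 1)
            deps[name.strip().lower()] = ver.strip()
    return deps

def diff_snapshots(before: str, after: str) -> list[str]:
    """Report added, removed, and changed packages between two snapshots."""
    old, new = parse_freeze(before), parse_freeze(after)
    changes = []
    for name in sorted(old.keys() | new.keys()):
        if name not in old:
            changes.append(f"added   {name}=={new[name]}")
        elif name not in new:
            changes.append(f"removed {name}=={old[name]}")
        elif old[name] != new[name]:
            changes.append(f"changed {name}: {old[name]} -> {new[name]}")
    return changes

# Example: a nano update should produce exactly one line here.
print(diff_snapshots("requests==2.32.2\nurllib3==2.2.1",
                     "requests==2.32.3\nurllib3==2.2.1"))
```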

Keeping pace with constant change

From shipping new products to stay relevant to testing new features like MCP, ML models, and AI agents, the pace of innovation in the AI era has quickened. It now demands that software teams release and update more frequently, and in smaller chunks. Enterprise-grade releases are especially cumbersome, says Kirkland, and nano updates help teams keep pace.

Others echo the pressing need to ship software changes constantly. “During this period of AI-driven cosmological inflation in tech there is a market premium on shipping,” writes James Governor, co-founder of analyst firm RedMonk. “You’re either shipping or you’re being left behind.”

Another important aspect is automation. Automatically installing third-party software and putting it straight into production is risky, so it’s best to have a scanning step within the continuous integration and continuous deployment (CI/CD) pipeline that checks for known vulnerabilities. “All of this hinges around CI/CD,” says Kirkland.
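A minimal sketch of such a gate is below. It assumes the open source pip-audit scanner is installed and that the stack is described by a requirements.txt file; any scanner your pipeline already uses could slot into the same place.

```python
# Illustrative CI gate: run a dependency vulnerability scan before promoting a
# build. Assumes the open source pip-audit tool is installed and that
# requirements.txt describes the stack; swap in your own scanner as needed.
import subprocess
import sys

def scan_dependencies(requirements: str = "requirements.txt") -> int:
    """Run pip-audit against a requirements file and return its exit code."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        # A nonzero exit code means findings (or a scan error): fail the build
        # rather than shipping a component with known vulnerabilities.
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_dependencies())
```

Wiring this in as a required pipeline step means an automated nano update only reaches production once the scan passes.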

The role of agentic AI in nano updates

AI will also likely play a greater role in the dependency management process. “We ourselves started early testing and adoption of developer-friendly AI utilities that act as an engineering copilot,” says Kirkland. “Over the course of the last year we’ve become heavily more invested there.”

“Every engineer here is utilizing AI,” he adds. “Not a day goes by that I don’t utilize Claude or ChatGPT or Gemini.”

That said, you can’t give autonomous AI agents access to all operations, since their behavior is unpredictable. And as a software supply chain security company, Chainguard needs to play it pretty safe.

To date, much of the automation in place at Chainguard is deterministic, and its engineers mainly use AI for things like engineering suggestions. But they are looking to expand the scope of their agents: not only evaluating what went wrong with a bug or update and determining the proper course of action, but actually taking that action.

“Human in the loop is where we’re at right now,” says Kirkland. However, the next big step is becoming more agentic, he adds.