How LinkedIn built an AI hiring agent 

To make AI agents sustainable for the long haul, engineering leaders need to keep users at the forefront of design.
October 31, 2025

LinkedIn VP of engineering Prashanthi Padmanabhan shares why it’s important to keep users in the driver’s seat when developing AI agents entrusted with consequential tasks.

The tech industry is enthusiastic about AI agents, yet we’re still in a period of nascent AI maturity, where many agents underperform or fail to meet user expectations. There are, however, early signs of success in certain cases.

Prashanthi Padmanabhan, VP of engineering for talent solutions at LinkedIn, is leading the design of an AI Hiring Assistant as part of the company’s recruitment product, LinkedIn Recruiter. The assistant is an agentic product because it can orchestrate complex flows, retain knowledge, and reason over sources such as candidate profiles and external hiring systems. According to Padmanabhan, the key to success with AI agents is grounding development in lessons from customer usage and feedback.

Instead of turning on AI features for everyone by default, LinkedIn has introduced the agent to select enterprise customers in a curated, tightly controlled fashion, continuously evolving the user-to-agent interaction along the way.

Padmanabhan says the rollout has cut the time recruiters spend searching for qualified applicants: Hiring Assistant users spend 48% less time reviewing applications. But efficiency is only half of the equation; the other half is trust. Users need deep transparency into how agents operate before they will trust them with critical decisions.

Build trust with transparency and control

LinkedIn’s recruiter agent, once given a recruiter’s preferences, can take on the hard, deep research of evaluating hundreds or even thousands of resumes to surface high-match candidates. “Our goal is for it to feel like the hiring assistant is working alongside you,” says Padmanabhan.

But outsourcing so much power to an AI is tricky. The propensity of large language models (LLMs) to hallucinate has been widely reported. This, paired with biases stemming from training data, unpredictable behaviors, and privacy concerns, has left some apprehensive about trusting AI agents with critical business decisions.

“Users and the customers are going through the journey of trusting these agents,” says Padmanabhan. “It’s hard to give control to a black box to make decisions.” Establishing trust is especially critical in areas that directly impact people’s lives, like hiring, she adds.

To overcome this hesitancy and build faith in agents, Padmanabhan recommends a few key steps. The first is providing a window into the agent’s “thinking” so the user can see what’s happening on the backend. For example, LinkedIn’s recruiter agent describes in plain text how it reasons and makes decisions while evaluating applicants. It shares its reasoning in the moment of processing, like a loading bar, with phrases like the following (see the sketch after the list):

  • “Generating role and project details”
  • “Gathering candidates”
  • “Generating sourcing strategies based on your hiring requirements”
  • “Searching for qualified profiles using the following…”

Source: Hiring Assistant – shaped by customers, powered by AI innovation 
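
One way to implement this kind of status streaming is a staged pipeline that emits a human-readable message before each step runs. Below is a minimal sketch of that pattern; the stage functions, context shape, and `on_status` callback are illustrative assumptions, not LinkedIn’s implementation.

```python
from typing import Callable

def stage_role_details(ctx: dict) -> dict:
    # Stand-in for an LLM call that drafts the role description.
    ctx["role"] = f"{ctx['title']} with {ctx['min_years']}+ years' experience"
    return ctx

def stage_gather_candidates(ctx: dict) -> dict:
    # Stand-in for a real candidate search against profiles and ATS data.
    ctx["candidates"] = ["profile-1", "profile-2"]
    return ctx

def run_agent(ctx: dict, on_status: Callable[[str], None]) -> dict:
    """Run each stage in order, emitting a human-readable status line first."""
    stages = [
        ("Generating role and project details", stage_role_details),
        ("Gathering candidates", stage_gather_candidates),
    ]
    for message, stage in stages:
        on_status(message)  # surfaced in the UI, like a loading bar
        ctx = stage(ctx)
    return ctx

# In a real product, on_status would stream to the client; print stands in here.
result = run_agent({"title": "Backend engineer", "min_years": 5}, on_status=print)
```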

The second step is balancing autonomous action with human-led direction, a practice known as human-in-the-loop.

“It’s important for our customers to know they are still in the driver’s seat,” says Padmanabhan. This can be as simple as letting users evaluate the agent’s performance with a thumbs-up or thumbs-down alongside each AI assistant response in the user interface, as LinkedIn has done.

LinkedIn’s assistant includes other checkpoints that keep humans in the driver’s seat, like suggesting different candidate sourcing approaches and asking why a candidate is not a good fit in order to gather more context. Additionally, at this point in its development, the agent acts more as a recommendation engine for candidates, leaving it up to the human user to take real-world action.

Lastly, when an agent can’t make a decision or doesn’t know the answer, it’s far better to program it to respond with “I don’t know” and pause to ask for clarification. “That’s a good user experience pattern, rather than just assuming something,” says Padmanabhan.
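
A minimal sketch of this fallback pattern, assuming the agent can attach a confidence score to its answers. The `AgentAnswer` type and the threshold below are hypothetical, not LinkedIn’s code:

```python
from dataclasses import dataclass

@dataclass
class AgentAnswer:
    text: str
    confidence: float  # 0.0-1.0, e.g. from the model or a separate verifier

CONFIDENCE_FLOOR = 0.6  # assumed cutoff; would be tuned against real feedback

def respond(answer: AgentAnswer, question: str) -> str:
    """Return the answer, or admit uncertainty and hand control back."""
    if answer.confidence < CONFIDENCE_FLOOR:
        return (
            "I don't know enough to answer that yet. "
            f"Could you clarify what you mean by {question!r}?"
        )
    return answer.text

print(respond(AgentAnswer("Python and Go, 5+ years", confidence=0.4),
              "required skills"))
```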

Get out of the walled garden

While transparency and user control are crucial, trust is also built on the agent’s ability to deliver effective results. This requires agents to have access to external data and systems to build a broader context for the task at hand. This is especially important in recruiting, which often involves a mishmash of different hiring platforms and applicant intake software systems.

For LinkedIn, this has meant building connectors via APIs to the various applicant tracking systems (ATSs) that enterprises commonly use in the hiring funnel. With these integrations, the agent can review the whole pool of candidates and resumes beyond LinkedIn profiles.
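
A common way to structure such integrations is a shared connector interface with one implementation per ATS vendor. The sketch below illustrates the idea; the class names and candidate schema are assumptions, not LinkedIn’s actual connectors:

```python
from abc import ABC, abstractmethod

class ATSConnector(ABC):
    """Common interface the agent codes against, regardless of ATS vendor."""

    @abstractmethod
    def fetch_applicants(self, job_id: str) -> list[dict]:
        ...

class ExampleATSConnector(ATSConnector):
    """One concrete integration; a real one would call the vendor's API."""

    def fetch_applicants(self, job_id: str) -> list[dict]:
        # Normalize the vendor's response into a shared candidate schema.
        return [{"name": "A. Candidate", "source": "example-ats", "job_id": job_id}]

def all_applicants(connectors: list[ATSConnector], job_id: str) -> list[dict]:
    """Merge the whole candidate pool across every connected system."""
    pool: list[dict] = []
    for connector in connectors:
        pool.extend(connector.fetch_applicants(job_id))
    return pool

print(all_applicants([ExampleATSConnector()], job_id="req-123"))
```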

In general, agentic AI depends heavily on access to the right data. Agents working in other domains will similarly benefit from connectivity to relevant third-party systems, which helps deliver more tailored outcomes.

Curate the rollout and evolve

When building new agentic AI workflows, designers must be open to continuous learning and evolving the project, says Padmanabhan. Humans adjust their workflows and habits as they learn to work alongside agents, and agents must evolve to meet customer expectations and adapt as the underlying technology improves.

To aid this journey, LinkedIn first piloted the agent with a small handful of enterprise customers in a charter beta program to analyze real-world use and refine the user experience.

“You can’t just give any agentic product to a user,” says Padmanabhan. “You almost have to crack the product. You grow with the product, and the product grows with you.”

“Most of the time users don’t use the product how you think they’ll use the product,” adds Padmanabhan. For instance, one interesting finding was that many users prefer to sit and watch the AI perform, rather than multitask and come back later.

Early trials revealed that the engineers needed to build a more conversational start to the prompting process. Instead of relying on users to type everything or know how to craft the perfect prompt, the team evolved the agent toward a step-by-step questionnaire that asks what specific qualifications the recruiter is looking for.

Early testing also shaped the agent’s ability to remember past sessions, which evolved into a simple capability to reuse patterns (such as similar candidate search profiles) from previous sessions. Both changes improved the efficiency of the overall workflow and end-user satisfaction. “Customers get excited because they can move fast,” says Padmanabhan.
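
In its simplest form, this kind of cross-session reuse can be sketched as a store of saved search profiles that seeds the next session. Everything below (names, shapes, the keying by role) is an illustrative assumption, not LinkedIn’s implementation:

```python
# Saved search profiles keyed by role title; deliberately simplistic.
saved_searches: dict[str, dict] = {}

def remember_search(role: str, criteria: dict) -> None:
    """Store the criteria from a finished session for later reuse."""
    saved_searches[role] = criteria

def suggest_starting_point(role: str) -> dict | None:
    """Offer the previous search profile for this role, if one exists."""
    return saved_searches.get(role)

remember_search("backend engineer", {"skills": ["Python"], "min_years": 5})
print(suggest_starting_point("backend engineer"))  # reused in the next session
```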

The takeaway? Organizations shouldn’t rush to deploy AI agents without careful planning. Taking a more curated approach can help pinpoint usage patterns and shape the agentic interaction before a major rollout. “Our approach to cautiously roll it out and evolve worked out a ton for us,” added Padmanabhan.

Log user signals and recalibrate

Once launched, continually improving the AI experience relies on monitoring user signals automatically. “Our product is constantly learning from users’ actions and behaviors,” says Padmanabhan.

According to Padmanabhan, monitoring user actions can help fine-tune the agent to better understand unique user intuitions and patterns. At LinkedIn, these data points fall into two camps: implicit and explicit learning.

Implicit learning covers the user preferences implied by on-screen behavior. For instance, if a recruiter visits a specific candidate profile multiple times, and that profile has a resume attribute such as “five years’ experience building zero-to-one products,” the agent will log that indicator and surface more relevant candidates in the future.

Explicit learning involves more overt cues. In this context, that could be the end outcome: the recruiter either adds a proposed candidate to the pipeline or declines them. The agent flags accepted candidates as valuable and uses this knowledge to suggest more targeted candidates in the future.

By monitoring various types of cues from user sessions, agentic systems can build a memory of intricate preferences and use this to continuously train the underlying model and optimize the end result.
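
One way to picture this is a small signal log that folds implicit and explicit cues into per-attribute preference scores, which could later feed ranking or model fine-tuning. The event names and weights below are assumptions for illustration only:

```python
from collections import defaultdict

# How strongly each signal moves a preference (weights are assumptions).
SIGNAL_WEIGHTS = {
    "profile_revisited": 0.2,   # implicit: repeated views of a profile
    "added_to_pipeline": 1.0,   # explicit: recruiter accepted the candidate
    "declined": -1.0,           # explicit: recruiter rejected the candidate
}

# Per-attribute preference scores for one recruiter.
preferences: dict[str, float] = defaultdict(float)

def log_signal(signal: str, attributes: list[str]) -> None:
    """Update preference scores for each candidate attribute involved."""
    weight = SIGNAL_WEIGHTS[signal]
    for attribute in attributes:
        preferences[attribute] += weight

log_signal("profile_revisited", ["zero-to-one products"])
log_signal("added_to_pipeline", ["zero-to-one products", "5+ years' experience"])
print(dict(preferences))  # higher-scoring attributes rank future candidates higher
```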

A symphony of agency and trust

Agentic AI is a quickly evolving space, and experts anticipate the underlying technology will keep improving at an accelerating rate.

As Padmanabhan says: “This is the worst the product will ever be.” To her, current AI agents are destined to become more intelligent as the underlying LLMs improve and learn new patterns. “I believe this technology will get better every day and every week.”

Padmanabhan views current agents as a baseline for what’s to come and indicates LinkedIn will continue to support agentic AI as part of its long-term product roadmap. “This is not a sprint,” says Padmanabhan.

This isn’t to say Padmanabhan’s team hasn’t already seen early success. Recruiters using the agent review, on average, 62% fewer profiles before making a decision on a candidate. LinkedIn has also observed a 69% higher InMail acceptance rate with Hiring Assistant versus traditional sourcing methods, something Padmanabhan views as another metric of success.

Early signs look good, but establishing agentic technology for the long haul will hinge on finding the right handshake between human and machine to build credible, intuitive, and secure experiences. Without that trust, and without weighing the long-term consequences, it will be hard to let AI make important decisions that affect people’s livelihoods.

“We’re aiming for a beautiful symphony of agency and trust,” says Padmanabhan. “If we can achieve that, it will be the absolute ‘cloud nine’ for product owners.”