Lessons learned launching an MCP server

Steps you can follow if you're thinking about launching your own MCP server.
September 26, 2025

Estimated reading time: 7 minutes

Thomas Johnson, co-founder and CTO of Multiplayer, on how – and when – to launch your own MCP server.

Model Context Protocol (MCP) is all the rage. Since Anthropic open-sourced MCP in November 2024, it has quickly become a standard protocol to connect generative AI-powered applications and agents with external data, tools, and APIs.

By integrating MCP-compatible servers, coding agents like Cursor, Claude Code, Windsurf, or Copilot are able to gather more engineering context, autonomously interact with key systems like Jira or GitHub, and more – all without custom integrations. 

Amid a rush of excitement around AI agents, many software companies are currently launching MCP servers to extend their platform’s capabilities to AI agents. One such company is Multiplayer, a provider of full-stack session recording and debugging tools.

“MCP has been great for us because it’s been an easy way to connect our data with these models running in AI developer tools,” says Thomas Johnson, co-founder and CTO of Multiplayer. “We’ve had good adoption and we’re looking to do more and more with it.”

Despite having only soft-launched their MCP server, Multiplayer is seeing considerable user interest. Along the way, they’ve also discovered emerging best practices, like how to design tightly scoped, use-case-driven MCP tools.

“There are a lot of bad MCP servers collecting dust,” says Johnson. “If you have a good use case, MCP is invaluable. However, if you’re just trying to get the checkbox that says ‘MCP support,’ people won’t use it.”

MCP isn’t a silver bullet

Before even starting, it’s important to realize that not everything can – or should – be solved using MCP. “Start with the problem being solved,” says Johnson. “If it can be solved better by adding an MCP integration, go for it. If not, don’t.”

MCP makes a lot of sense for Multiplayer, says Johnson, since their platform logs valuable information about user sessions, like screen recordings, back-end server calls, and session data. All of this can provide engineering context for debugging and logging efforts. Once LLMs have that context, a simple command like “fix the bug” can power highly tailored, contextually aware results with coding agents.

In this scenario, the MCP server creates a “very sticky relationship” with users, says Johnson, giving them a single platform to code, test, and iterate. This closed-loop workflow makes the tool indispensable to devs when creating features and fixing bugs. 

But not every situation is such a golden opportunity for MCP. For now, most practitioners see MCP as better suited to non-production use cases that enhance a developer’s workflow on the fly – from providing engineering context for debugging and code suggestions to operating on git repositories, generating documentation, or scaffolding SDKs – rather than to empowering autonomous agents within critical business flows.

So, Multiplayer has let the use cases drive the design of their MCP tools, which abstract and combine various API calls under the hood. “We noticed developers want to do two things: fixing bugs or creating new features,” says Johnson. “We optimized our tools for doing those two things. We saw the use cases first, then we mapped those to tools.” That philosophy guided development directly: the first MCP tool focused on bug fixing, and once the team realized users also wanted help with feature development, they added sketching and annotations in a second tool on the same server.
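
As an illustration of that mapping, here is a minimal sketch of intent-scoped tools built with the MCP Python SDK’s FastMCP helper. The tool names and the placeholder context-gathering functions are hypothetical, not Multiplayer’s actual API; only the registration pattern reflects the real SDK.

```python
# Sketch: tools named after developer intents ("fix a bug", "build a feature"),
# not after individual API endpoints. Names and helpers are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dev-workflows")


def gather_bug_context(session_id: str) -> str:
    # Placeholder: a real server would call the platform API here.
    return f"Recording, backend calls, and errors for session {session_id}."


def gather_feature_context(project_id: str) -> str:
    # Placeholder: a real server would pull sketches and annotations here.
    return f"Sketches and annotations for project {project_id}."


@mcp.tool()
def get_bug_context(session_id: str) -> str:
    """Everything an agent needs to start fixing a bug from one recorded session."""
    return gather_bug_context(session_id)


@mcp.tool()
def get_feature_context(project_id: str) -> str:
    """Design context (sketches, annotations) for building a new feature."""
    return gather_feature_context(project_id)


if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```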

Start with simple use cases

Some technical leaders are investigating AI agents for key business flows, like automating financial transactions, or important back-end tasks like order processing. However, Johnson is hesitant to entrust MCP-powered agents with too much power. “I don’t know if I would use MCP tools and an LLM for actual business integrations where the continuity would be affected,” says Johnson. This reluctance is partially due to the propensity for models to hallucinate and the security risks inherent in MCP environments.

Requesting data is much easier to stomach, from a security perspective, than using MCP to perform actual server-side changes. “I would avoid mutable tools – we’re all about pulling data,” says Johnson. If you’re using open authorization (OAuth) for access control and you’re only retrieving data, it limits the potential blast radius, he adds, although it doesn’t entirely rule out certain generative AI risks, like prompt injection attacks, where attackers manipulate LLMs into exposing sensitive details.
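
In code, that read-only posture might look like the sketch below: a single data-pull tool authenticated with an OAuth bearer token, and no create, update, or delete endpoints registered as tools at all. The endpoint, environment variable, and token scope are illustrative assumptions, not Multiplayer’s real setup.

```python
# Sketch: a pull-only MCP tool. It sends an OAuth bearer token and only ever
# issues GET requests; no mutating endpoints are exposed as tools.
# The URL and environment variable name are hypothetical placeholders.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("read-only-example")


@mcp.tool()
def list_recent_sessions(project_id: str) -> list[dict]:
    """Retrieve recent debug sessions for a project (read-only)."""
    token = os.environ["PLATFORM_OAUTH_TOKEN"]  # ideally a narrowly scoped, read-only token
    response = httpx.get(
        f"https://api.example.com/projects/{project_id}/sessions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```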

Lastly, MCP tools shouldn’t just be a stand-in for interactions on your website or mobile applications. “A chat interface is not a better replacement for visual data,” says Johnson. “It’s got to be something really valuable and use-case driven.”

With MCP, less is more

In terms of MCP server design, less is usually more. “You want a small number of tools,” says Johnson. Instead of mapping every API method to its own tool, keep it lean. Most software-as-a-service (SaaS) APIs have hundreds, if not thousands, of endpoints, so trying to capture them all creates unnecessary context bloat.

Plus, LLMs aren’t great at taking a general request and turning it into multiple chained API requests on their own – they need more guidance. “The language model won’t wire things together for you,” says Johnson. Instead, MCP tools should represent common use cases and should act as wrappers for sequences of API calls on the backend, he says.
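
A sketch of what that wrapping can look like, expanding the placeholder from the earlier sketch: one tool, scoped to a single use case, chains several hypothetical endpoint calls that the model would otherwise have to discover and sequence itself.

```python
# Sketch: one use-case tool wrapping a sequence of API calls, instead of
# exposing each endpoint as its own MCP tool. Endpoints are placeholders.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("session-context")
API_BASE = "https://api.example.com"  # hypothetical platform API


@mcp.tool()
def get_bug_context(session_id: str) -> dict:
    """Bundle everything needed to debug one recorded session in a single call."""
    with httpx.Client(base_url=API_BASE, timeout=30) as client:
        session = client.get(f"/sessions/{session_id}").json()
        backend_calls = client.get(f"/sessions/{session_id}/requests").json()
        errors = client.get(f"/sessions/{session_id}/errors").json()
    # The chaining lives here, on the server side, so the LLM doesn't have to
    # figure out which endpoints to call or in which order.
    return {"session": session, "backend_calls": backend_calls, "errors": errors}
```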

Multiplayer’s command-line interface (CLI) and libraries expose their rich API for many intricate operations around initiating, collecting, and working with session data. While direct API integration is useful for deep, business-critical, stable integrations, it’s not necessary for on-the-fly agentic interactions with the platform, like bug fixing and feature creation, says Johnson. For those, Multiplayer has scoped its MCP tools tightly around developer intent.

Other engineering experts agree that scoping MCP tools around specific user intents, rather than mapping entire APIs one-to-one, is the way to go. “Try to minimize the number of tools and drive it by the use cases, and you’ll have some success,” adds Johnson.

Taking a “less is more” approach to MCP tooling design doesn’t mean that servers shouldn’t support various IDEs. Multiplayer’s MCP integrates with Cursor, Visual Studio Code, Claude Code, Copilot, Windsurf, and Zed out of the box, and provides a manual integration option as well.

Launch your MCP tools gradually

Multiplayer began rolling its server out selectively to core power users to gather feedback. This gradual approach has helped the team see how people use the MCP server in practice.

As Johnson says: “It’s better to give users a few things, have them start to use them, and then ask, ‘What else would you like to see?’” This helps you start with the basics to address your primary use case, establish clear value, and then expand from there.

Another valuable lesson they’ve learned through this trial and error is that providing human-readable data to LLMs yields the best results. “Don’t just shove really raw data into an LLM,” says Johnson. Agents can usually parse JSON, but Markdown and other human-readable instructions are typically easier for language models to ingest, he adds.
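
As a small illustration of that advice, a tool can render its result as Markdown before handing it back to the agent; the session fields below are invented placeholders standing in for a real API response.

```python
# Sketch: return Markdown the model can read directly, rather than raw JSON.
# The session data here is a hard-coded placeholder standing in for an API call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("readable-output")


@mcp.tool()
def summarize_session(session_id: str) -> str:
    """Summarize one recorded session as Markdown instead of dumping raw JSON."""
    session = {
        "id": session_id,
        "duration_s": 37,
        "errors": ["TypeError in checkout.js:42", "500 from POST /api/orders"],
    }
    lines = [
        f"## Session {session['id']} ({session['duration_s']}s)",
        "",
        "### Errors observed",
        *[f"- {err}" for err in session["errors"]],
    ]
    return "\n".join(lines)
```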

Certain limitations remain

MCP is quickly evolving, and not everything is ironed out yet. For instance, Johnson has noticed some pain points around authentication and account switching. In Multiplayer’s case, the platform supports multiple projects and accounts – whether for businesses, clients, or personal side projects – but most coding agents don’t yet provide a smooth developer experience for switching between them.

It will also take careful thought and optimization to avoid piling too many MCP servers into development pipelines, since this can bloat workflows and add unnecessary operational cost – a risk Johnson has observed across the ecosystem.

Another limitation is that MCP is not inherently event-driven – it follows the classic request-response paradigm for retrieving information. As a result, if you’re building an application that expects an asynchronous callback, such as one from a webhook endpoint, you’ll need an additional layer with a background agent to trigger interactions with an MCP server.
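
One way to picture that extra layer, as a rough pattern sketch rather than a prescribed design: a small service receives the webhook, queues the event, and a background worker then kicks off an agent run, which talks to the MCP server through the normal request-response flow. The framework choice, endpoint name, and stubbed agent step are all assumptions.

```python
# Pattern sketch: MCP itself is request-response, so an asynchronous callback
# (a webhook) lands on a separate service, and a background worker reacts by
# starting an agent run that can then call MCP tools. Names are illustrative.
import queue
import threading

from fastapi import FastAPI, Request

app = FastAPI()
events: queue.Queue[dict] = queue.Queue()


@app.post("/webhooks/session-finished")
async def on_session_finished(request: Request) -> dict:
    """The webhook receiver lives outside the MCP server."""
    events.put(await request.json())
    return {"status": "queued"}


def agent_worker() -> None:
    """React to queued events by launching an agent run (stubbed here)."""
    while True:
        event = events.get()
        # Placeholder: start your coding agent here; during its run it calls
        # the MCP server through the usual client request-response flow.
        print(f"Starting agent run for session {event.get('session_id')}")


threading.Thread(target=agent_worker, daemon=True).start()
```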

A lack of event-driven support is not the end of the world, and Johnson is fine with the MCP standard remaining relatively simple, without bloat. That said, it’s a limitation to consider when developers are stitching together MCP-driven workflows.

What the future looks like for MCP

For those thinking of creating their own MCP servers, Johnson advises focusing on user intent. Start with a clear value proposition, and release the server progressively to discover how users put it into practice. “Put your APIs aside, and think about integrations and what users actually want to do,” he says.

But releasing MCP servers into the wild is not the only way to achieve agentic workflows around testing and feature development. For instance, Multiplayer plans to bring this functionality to native extensions for popular IDEs once the full MCP server rollout is complete, says Johnson. Such a move can give developers MCP-like functionality with the simplicity of installing a VS Code extension.

MCP is a new technology, but it’s already doing an impressive job of pulling together LLMs and tools. Looking to the future, Johnson hopes that the protocol itself will stay relatively simple and that coding environments will support MCP better – for example, with project-level configurations for easier account switching.

All in all, Johnson sees extraordinary potential. “The imagination can run wild. It’s a fun time to be a developer.”