Building moats around agentic coding platforms
Network effects, tacit knowledge, and agent coordination
Summary: The long-term success of agentic coding platforms will come from everything around the model: workflows, institutional knowledge, and multi-agent coordination. The platforms that absorb a team's tacit engineering culture and help developers manage simultaneous agents will be the hardest to displace.
Agentic coding platforms like Claude Code, OpenAI’s Codex, Cognition’s Devin, and Cursor continue to race to release and integrate new models, agents, and form factors: on the same day that OpenAI released the Codex desktop app and GPT-5.3-Codex, Anthropic released Opus 4.6, its most capable model yet. We can assume that this close competition will continue for the foreseeable future, especially as coding agents become the harnesses for agentic tools across a wide range of domains, from financial analysis to scientific research.
Developers, hobbyists, and enterprises have been the beneficiaries of this close competition so far. For a relatively low cost, developers can employ multiple agents, switching between Claude Code and Codex as they reach usage limits or to work on different tasks. This ease of multi-homing is a problem for coding agent providers, as they’ll continue to face downward price pressure on their agents and models from their competitors.
Agent providers outside the frontier labs face an even steeper challenge. Claude Code and OpenAI Codex benefit from ownership of their foundation models; presumably, they can serve Claude and Codex models at a lower cost than third-party companies. To compete, Cursor and Devin must offer a user experience so differentiated as to attract and retain developers, or continue to develop task-specific foundation models themselves, like Cursor Composer.
Faced with winner-take-all market dynamics and stiff competition, AI coding providers must create network effects, deepen their ownership of the end-to-end developer workflow, and help the developer effectively manage multiple simultaneous agents.
Becoming networked platforms
Agentic coding platforms can create more value and establish stronger competitive positions by cultivating network effects.
Networked platforms create value by reducing search and transaction costs (e.g. Airbnb, Uber), or facilitating the creation of value-creating applications (e.g. Windows, App Store). Platforms often benefit from strong cross-side network effects between their users and developers of third-party plugins and applications. Each new user increases the value of the platform for developers, and each new third-party plugin or application increases the value for customers, giving rise to virtuous cycles of user acquisition and retention.
Agentic coding providers can facilitate cross-side network effects by creating marketplaces for tools, plugins, and MCP servers to aid developers in building apps. The application or agent with access to the best third-party plugins will attract more developers, encouraging more plugin/integration development. They can also facilitate strong network effects within individual enterprises by allowing employees to publish and share MCP servers built on company data or workflows to automate repeated tasks.
Reducing the incentive to multi-home agents
To survive in a winner-take-all market, AI coding apps must reduce developers' incentive to switch easily between alternatives. Switching between products is often as simple as working on the same project simultaneously in Cursor and Claude Code on different branches. Learning the workflows of a new coding tool is often intuitive, relying more on familiarity with prompting foundation models than on the particularities of the agent itself.
However, agentic coding providers should not raise switching costs artificially – e.g. by locking enterprise customers into long-term contracts. They must instead integrate company- and user-specific workflows and knowledge: the engineering best practices, policies, and culture that differentiate software development teams. Today, engineering teams share their highly opinionated cultures via GitHub comments, docs, long Slack threads, and word of mouth. This tacit knowledge often defines how a team writes and evaluates code, underpins reliability, and encodes the team's distinct philosophy into its software – and it usually lives outside the agentic coding platform, in tools like GitHub.
Agentic coding providers have already started to bring coding-adjacent practices inside their platforms. Devin’s DeepWiki indexes a codebase and constructs detailed technical documentation, making onboarding easier for both developers and agents themselves. Engineering culture is often most visible during code reviews; Cursor’s BugBot automates these. Developers also define their own personal coding philosophies using CLAUDE.md/AGENTS.md files, defining how the agent should behave across the entire codebase.
AI coding platforms should go further to capture more of a software development team’s tacit knowledge. Shared team-level AGENTS.md files can define how an agent writes code across an entire repository, regardless of which developer is in the driver’s seat. Code review agents like Devin Review should refer to institutional knowledge to learn from past issues and ensure code follows a team’s style guide.
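As a sketch of what this looks like in practice, a shared team-level AGENTS.md might encode conventions like the following. The file contents, paths, and rules here are illustrative assumptions, not a real team's standards:

```markdown
# AGENTS.md — team conventions (illustrative example)

## Code style
- TypeScript strict mode everywhere; no `any` without a `// why:` comment.
- Prefer small, pure functions; keep side effects in dedicated modules.

## Testing
- Every bug fix ships with a regression test referencing the issue ID.
- Run the full test suite before proposing a commit; never skip failing tests.

## Review culture
- Flag public API changes for human review; do not merge them autonomously.
- Match the team's established error-handling patterns rather than inventing new ones.
```

Because the agent reads this file on every run, conventions apply uniformly across the repository regardless of which developer is in the driver's seat.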
Engineering teams often have particularly high-leverage developers who steward culture, mentor new members, and share hard-won lessons accumulated over time. While they may be effective individual contributors themselves, they accelerate the learning and progress of their teammates. By owning and integrating more of a team's development practices and workflows, an AI coding platform can turn itself from a tool into a trusted source of institutional knowledge, a culture-setter, and an accelerant for the team – one that is not so easily replaced.
Coordinating teams of agents
Over time, a coding agent will be able to take on longer, more complex tasks in the background as new models improve the agent’s code quality, system design, and planning ability. Therefore, the productivity bottleneck will shift to the developer’s ability to direct, understand, and assess the work of multiple simultaneous agents. The best agentic coding platform may therefore be the one that provides the best interface to coordinate the work of multiple coding agents, rather than the best agent.
Currently, coordinating multiple agentic coding tasks is cognitively intense, requiring an engineering manager's skillset to train developers, direct tasks, and evaluate output. This is a downside of CLI-based coding tools: the terminal offers little visual support for tracking multiple workstreams at once. This is where an IDE like Cursor can improve on a CLI agent. Perhaps the multi-agent IDE looks like a cross between Linear and GitHub: letting developers group issues under projects, track the progress of multiple agents, and evaluate each agent's code against tests.
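The "Linear crossed with GitHub" idea above can be made concrete with a minimal data model. This is a hypothetical sketch – the statuses, issue IDs, and `needing_attention` triage rule are all my assumptions, not any platform's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical lifecycle states a coordination UI might track per agent run.
class Status(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    NEEDS_REVIEW = "needs_review"
    MERGED = "merged"

@dataclass
class AgentTask:
    """One background agent working on one issue, on its own branch."""
    issue: str
    branch: str
    status: Status = Status.QUEUED
    tests_passed: int = 0
    tests_failed: int = 0

@dataclass
class Project:
    """Groups related agent tasks under one initiative, Linear-style."""
    name: str
    tasks: list = field(default_factory=list)

    def add(self, task: AgentTask) -> None:
        self.tasks.append(task)

    def needing_attention(self) -> list:
        # Surface only the runs that block the developer:
        # failing tests, or work waiting on human review.
        return [t for t in self.tasks
                if t.tests_failed > 0 or t.status is Status.NEEDS_REVIEW]

# Three simultaneous agents on one refactor; the developer reviews a
# summary board instead of tailing three terminal sessions.
proj = Project("auth-refactor")
proj.add(AgentTask("AUTH-12", "agent/auth-12", Status.RUNNING, tests_passed=8))
proj.add(AgentTask("AUTH-13", "agent/auth-13", Status.NEEDS_REVIEW, tests_passed=14))
proj.add(AgentTask("AUTH-14", "agent/auth-14", Status.RUNNING, tests_failed=2))

for t in proj.needing_attention():
    print(t.issue, t.status.value)
```

The point of the triage method is that the interface, not the developer, decides which of the N parallel workstreams deserves attention next – the coordination layer becomes the product.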
Conclusion
The agentic coding market is intensely competitive, and it's almost certain that today's user experiences, model performance, and leading companies will look different in a year. Models are improving fast, switching costs are low, and developers are happy to use multiple tools at once. Every agentic coding provider is racing to improve model performance, but the lasting advantages will probably come from everything surrounding the model: workflows, integrations, institutional tacit knowledge, and multi-agent coordination. The platforms that figure this out first will be the hardest to displace.