
Designing Agent-Native Interfaces

Michael Hofweller · Feb 8, 2026 · 6 min read · Field Notes
Developer Experience · Agent Tooling

Most developer tools are built for humans. They have GUIs, CLI flags, configuration files — all designed around human cognition and workflow. But what happens when your primary user is an AI agent?

This question led us to rethink how Nexus exposes its coordination capabilities.

#The Natural Language Interface

Traditional tools communicate through structured APIs. You call an endpoint, pass parameters, get a response. This works, but it creates friction for AI agents that think in natural language.

Claude Code skills gave us a different approach. Instead of forcing agents to learn an API, we expose coordination as natural language commands:

  • "Claim src/auth/login.ts — I'm refactoring the token validation"
  • "What files does Agent B currently have claimed?"
  • "Record a learning: the legacy auth tokens use a non-standard format"
  • "Checkpoint my progress on the auth refactor"

The agent doesn't need to know about Redis keys, TTLs, or API endpoints. It just describes what it wants to do, and the skill translates that into coordination operations.
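To make the translation concrete, here is a minimal sketch of what the skill might do under the hood when it hears "claim this file." The function names, the claim shape, and the in-memory store (standing in for Redis keys with TTLs) are all illustrative assumptions, not Nexus's actual API:

```typescript
// Hypothetical sketch: the skill turns a natural-language claim into a
// coordination operation. A Map stands in for Redis keys with TTLs.
type Claim = { agent: string; reason: string; expiresAt: number };

const claims = new Map<string, Claim>();

// Claim a file for an agent; fails if another agent holds a live claim.
function claimFile(agent: string, path: string, reason: string, ttlMs = 15 * 60_000): boolean {
  const existing = claims.get(path);
  const now = Date.now();
  if (existing && existing.expiresAt > now && existing.agent !== agent) {
    return false; // held by someone else
  }
  claims.set(path, { agent, reason, expiresAt: now + ttlMs });
  return true;
}

// Answer "what files does this agent currently have claimed?"
function listClaims(agent: string): string[] {
  const now = Date.now();
  return [...claims.entries()]
    .filter(([, c]) => c.agent === agent && c.expiresAt > now)
    .map(([path]) => path);
}
```

The agent only ever says "claim src/auth/login.ts"; the TTL and conflict check live entirely inside the skill.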

#The Proactive Agent Pattern

The most powerful design pattern we discovered wasn't about making tools easier to use — it was about making agents that suggest when to use them.

In a standard workflow, the developer (or agent) explicitly decides when to record a learning, create a checkpoint, or claim a file. This requires constant awareness of what's worth saving.

The proactive pattern flips this. The agent monitors its own work and suggests coordination actions:

  • After discovering a non-obvious pattern: "This seems worth recording as a learning. Should I save it?"
  • Before modifying a shared file: "I should claim this file before editing. Let me check if it's available."
  • After completing a significant chunk of work: "Good checkpoint opportunity — let me save my progress."

This dramatically reduces the cognitive overhead for the human operator. Instead of remembering to coordinate, the agent handles it and just asks for confirmation.
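One way to picture the proactive pattern is as a small rule set the agent runs over its own work events, producing a suggestion that waits for human confirmation rather than acting silently. The event and suggestion shapes below are assumptions for illustration:

```typescript
// Sketch of the proactive pattern: map work events to suggested
// coordination actions that await confirmation.
type WorkEvent =
  | { kind: "discovered-pattern"; summary: string }
  | { kind: "about-to-edit"; path: string; shared: boolean }
  | { kind: "task-complete"; task: string };

type Suggestion = { action: "record-learning" | "claim-file" | "checkpoint"; prompt: string };

function suggestCoordination(event: WorkEvent): Suggestion | null {
  if (event.kind === "discovered-pattern") {
    return { action: "record-learning", prompt: `This seems worth recording as a learning: "${event.summary}". Should I save it?` };
  }
  if (event.kind === "about-to-edit") {
    return event.shared
      ? { action: "claim-file", prompt: `I should claim ${event.path} before editing. Is it available?` }
      : null; // unshared files need no claim
  }
  return { action: "checkpoint", prompt: `Good checkpoint opportunity after "${event.task}". Save progress?` };
}
```

The key design choice is that the function returns a prompt, not an action: the human stays in the loop with a yes/no instead of having to remember the coordination step themselves.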

#Agent-Native vs. Human-Native

The distinction between agent-native and human-native interfaces is subtle but important:

Human-native tools optimize for discoverability and visual feedback. Menus, buttons, progress bars, error dialogs. Humans need to see what's available and get visual confirmation that actions worked.

Agent-native tools optimize for composability and semantic clarity. Clear descriptions of what each operation does, predictable behavior, structured error messages that agents can reason about. Agents don't need visual feedback — they need reliable programmatic interfaces with good documentation.
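A quick sketch of what "structured error messages that agents can reason about" might look like in practice. The error shape and codes here are illustrative, not a Nexus specification:

```typescript
// Agent-native error style: a machine-readable code plus fields the agent
// can branch on, instead of a human-facing error dialog.
type AgentError = {
  code: "FILE_CLAIMED" | "CLAIM_EXPIRED" | "UNKNOWN_FILE";
  message: string;       // human-readable summary
  holder?: string;       // who holds the conflicting claim
  retryAfterMs?: number; // when a retry becomes sensible
};

function explainClaimConflict(path: string, holder: string, ttlMs: number): AgentError {
  return {
    code: "FILE_CLAIMED",
    message: `${path} is claimed by ${holder}`,
    holder,
    retryAfterMs: ttlMs,
  };
}

// The agent reasons over the structure rather than parsing prose.
function nextStep(err: AgentError): string {
  if (err.code === "FILE_CLAIMED" && err.holder && err.retryAfterMs !== undefined) {
    return `wait ${err.retryAfterMs}ms or ask ${err.holder} to release the claim`;
  }
  return "escalate to the human operator";
}
```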

The best tools are both. Nexus provides a WebSocket-based real-time dashboard for humans and a natural language skill interface for agents. Same underlying system, different surfaces.
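To illustrate "same underlying system, different surfaces," here is a toy fan-out: one coordination event feeds both a dashboard payload and an agent-facing summary. All names are illustrative; arrays stand in for the WebSocket broadcast and the skill response channel:

```typescript
// One event, two surfaces: a structured payload for the human dashboard
// and a natural-language line for the agent.
type CoordEvent = { type: "file-claimed"; path: string; agent: string };

const dashboardFeed: object[] = [];  // stand-in for WebSocket broadcast payloads
const agentReplies: string[] = [];   // stand-in for skill responses

function publish(event: CoordEvent): void {
  // Human surface: structured data the dashboard renders visually.
  dashboardFeed.push({ ...event, at: new Date().toISOString() });
  // Agent surface: the same fact, phrased for language-model consumption.
  agentReplies.push(`${event.agent} claimed ${event.path}`);
}
```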

#What's Next

The agent-native tooling space is just getting started. As more development teams adopt multi-agent workflows, the demand for coordination infrastructure will grow. The teams that build great agent-native interfaces will have a significant advantage.

We're continuing to experiment with how agents can be better participants in development workflows — not just executing tasks, but actively contributing to team coordination and knowledge management.