We’re entering a new era of defense software design. We’ve moved beyond static tools that require analysts to configure tedious settings or manually parse data. Reasoning models make it possible for software to do more than process inputs: agents can collaborate directly in the work and solve for the warfighter’s mission. Agentic interfaces give analysts an active partner, one that contributes to cognitive tasks like pattern recognition, creative tasks like generating content, and logistical tasks like retrieving data.
The result is not just efficiency; it frees our users to focus on the higher-order judgment required for the missions that matter most. At Vannevar, our goal is to harness this shift to design capabilities that augment defense analysts and operators in their daily workflows, making them faster, sharper, and more decisive. That’s why our approach to agentic UX centers on three core principles:
- Embed agents directly where the work happens
- Make agent contributions transparent and collaborative
- Make agent outputs action-oriented
1. Embed agents directly where the work happens
Agents belong inside the tools where operators already do their work. The core capabilities of our products stay at the center and our agents plug into them. Agents have the full on-screen context (selections, filters, objects in focus, etc.) and act through the same interfaces accessible to the user. By integrating agent actions directly into our tools, they feel like a native capability of the workflow, not an extra step.
A user shouldn't have to context-switch or type a long prompt to analyze a trend that's already on screen. They should be able to highlight a series on a chart, draw an area on a map, or select an entity and invoke the agent to characterize it. Keeping actions in context meets the user at the moment of need and moves the mission forward instead of adding steps.
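To make this concrete, here is a minimal sketch of what passing on-screen context to an agent might look like. All names here (`ScreenContext`, `AgentRequest`, `buildAgentRequest`) are illustrative assumptions, not Vannevar APIs:

```typescript
// Hypothetical shapes for invoking an agent against on-screen state.
interface ScreenContext {
  // What the user has highlighted: a chart series, a drawn map area, or an entity.
  selection: { type: "chart-series" | "map-area" | "entity"; id: string };
  // Filters already applied in the tool, carried along automatically.
  activeFilters: Record<string, string>;
}

interface AgentRequest {
  task: string;
  context: ScreenContext;
}

// The agent receives what is already on screen, so the user never has to
// restate it in a long prompt.
function buildAgentRequest(task: string, context: ScreenContext): AgentRequest {
  return { task, context };
}

const request = buildAgentRequest("characterize", {
  selection: { type: "entity", id: "entity-42" },
  activeFilters: { timeRange: "last-30d" },
});
```

The key design choice is that the context object travels with the request, rather than the user re-describing it in text.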
2. Make agent contributions transparent and collaborative
In high-stakes environments, trust comes from clarity. As Erik wrote in Simulating Adversary Behavior With AI, "Traceability is critical. Our users can't solely rely on black boxes. They need to be able to verify themselves." We design agents to show their work, surface intermediate steps, and link every output back to the source data that informed it. For example:
- Show progress, not a spinner: "Reviewing 100 documents for mentions of X..."
- Cite every claim with deep links: "Open on Map", "Open Source"
When users can see how an agent reached a conclusion, they can validate it, refine it, and build trust over time.
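One way to sketch this transparency in code is an event stream that mixes progress updates with claims, where every claim must carry a deep link back to its source. The types and the `traceableClaims` helper below are assumptions for illustration, not a real API:

```typescript
// An agent emits either progress updates or claims tied to source data.
type AgentEvent =
  | { kind: "progress"; message: string } // e.g. "Reviewing 100 documents..."
  | { kind: "claim"; text: string; sourceLink: string }; // deep link per claim

// Enforce traceability: keep only claims that arrive with a source link.
function traceableClaims(
  events: AgentEvent[]
): Extract<AgentEvent, { kind: "claim" }>[] {
  return events.filter(
    (e): e is Extract<AgentEvent, { kind: "claim" }> =>
      e.kind === "claim" && e.sourceLink.length > 0
  );
}

const events: AgentEvent[] = [
  { kind: "progress", message: "Reviewing 100 documents for mentions of X..." },
  { kind: "claim", text: "X appears in 12 reports", sourceLink: "/docs/12" },
  { kind: "claim", text: "Unsupported assertion", sourceLink: "" },
];

const verified = traceableClaims(events);
```

Filtering out unsourced claims at the type level mirrors the principle in prose: an output the user cannot trace should never reach the interface as fact.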
3. Make agent outputs action-oriented
The value of an agent isn't in generating more output; it's in accelerating the mission. Outputs should change the interface (apply filters, cluster results, draw a geofence) or produce structured objects the system can act on (watchlists, notifications, report drafts). When results appear embedded within the UX with clear affordances ("Accept", "Edit", "Undo"), the user remains in control and empowered to work efficiently with the agent.
Agent actions are paired with human-in-the-loop control: operators can approve or edit agent outputs in place and trace an assertion back to its source. This turns the agent into a reliable collaborator whose work can be validated, redirected, and ultimately, trusted and built upon.
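The review loop described above can be sketched as a small state machine over proposed actions. The shapes here (`ProposedAction`, `reviewAction`) are hypothetical, assumed only for illustration:

```typescript
// A structured output the system can act on, pending operator review.
interface ProposedAction {
  kind: "apply-filter" | "draw-geofence" | "create-watchlist";
  payload: Record<string, unknown>;
  status: "pending" | "accepted" | "rejected";
}

// Human-in-the-loop: nothing changes in the system until the operator
// accepts. An edited payload, if provided, replaces the agent's original.
function reviewAction(
  action: ProposedAction,
  decision: "accept" | "reject",
  editedPayload?: Record<string, unknown>
): ProposedAction {
  return {
    ...action,
    payload: editedPayload ?? action.payload,
    status: decision === "accept" ? "accepted" : "rejected",
  };
}

const proposed: ProposedAction = {
  kind: "apply-filter",
  payload: { region: "A" },
  status: "pending",
};

// The operator edits the payload in place, then accepts.
const reviewed = reviewAction(proposed, "accept", { region: "B" });
```

Because the agent's output is a structured object rather than free text, approval, editing, and undo all become ordinary operations on data the system already understands.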
Our approach to bringing agents into our user experience is grounded in the core mission of our company: empowering the warfighter. It’s about designing tools that let them focus on the judgment and decisions that only humans can make, while agents handle the work that slows them down. As Thomas articulated in Defense is the Vertical for Agentic AI, “Agents act as a force multiplier for analysts.” We’re translating that conviction into practice by embedding agents where the work happens, making their contributions transparent and collaborative, and ensuring outputs are action-oriented. At Vannevar, this is how we make AI not just present, but operational. If this is the kind of challenge that excites you, join us.