
Why People Matter in This New Age of AI-Empowered Engineering

Written by Tim Runge | Sep 17, 2025

Large language models are already a common presence in software development. Every NavVis engineering team, for example, has adopted them in its daily work and, when reasonable, uses them to autocomplete functions, generate routine boilerplate code, and even draft documentation.

But one persistent limitation is context: these systems don’t “remember” what has been learned across sessions, and they struggle to coordinate when multiple tools or tasks are involved.

At NavVis, we’re constantly exploring ways to integrate AI more usefully into our product development lifecycle, which is why two of our team leads have investigated complementary approaches to this context problem.

Martin Friedli explored how memory banks could give an AI assistant persistent project knowledge, while Ivano Alvino tested multi-agent frameworks that coordinate specialized agents to complete tasks end-to-end. Together, these experiments highlight both the opportunities and the unresolved questions around AI-enabled engineering workflows.

But there’s another reason why this matters for us. The environment we work in at NavVis (e.g., multi-platform mobile applications, shared C++ code, and vast 3D datasets) makes for a particularly demanding proving ground.

Compared to industries where abundant, standardized training data supports coding tasks, our reality capture workflows can get especially messy and complex. That doesn’t make AI inherently better or worse here, but it does mean the stakes are higher and the lessons more valuable. If workflows succeed in this context, they stand a good chance of being robust enough anywhere.

They also point to something broader: how do we manage and scale engineering teams in an environment where not just people, but potentially dozens of AI agents, contribute code? This is where the topic of the sociotechnical system becomes central.

Preserving project knowledge

Martin’s work focused on the concept of a memory bank. Traditional AI coding tools operate in a stateless way: each time you start a new session, the assistant has no awareness of what has come before. This means that project knowledge, team conventions, and past fixes must be reintroduced again and again.

To address this, Martin set up a memory bank inside the IVION Go iOS project, which lives in NavVis’ larger mono-repo (i.e., a single codebase that contains multiple projects side by side, including iOS, Android, and shared C++ libraries). A mono-repo makes it easier to share code and keep everything in sync, but it also raises questions about how to scope experiments like this. Should the memory bank cover just iOS, or the shared code as well?

Martin created a dedicated .clinerules folder, added a projectbrief.md file with structured instructions, and used Cline’s “initialize memory bank” command to generate the first set of entries. These markdown files act as a project “diary” the AI can read at the start of every session. For example, if one developer discovers that long test outputs overwhelm the model, they can record a rule like “only read the relevant section of test results.” The next teammate benefits automatically — without even knowing the issue ever existed.
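To make this concrete, here is an illustrative sketch of what such a memory bank can look like on disk. The .clinerules folder, projectbrief.md, and the test-output rule come from Martin’s setup as described above; the other file names and the exact layout are assumptions based on Cline’s memory-bank convention, not his actual repository.

```
ivion-go/                      # hypothetical location in the mono-repo
└── .clinerules/
    ├── projectbrief.md        # structured instructions: goals, scope, conventions
    ├── activeContext.md       # generated: current focus and recent decisions
    └── progress.md            # generated: what works, what's left, known pitfalls

# Example entry a developer might record (e.g., in progress.md):
- Only read the relevant section of test results; full test output
  overwhelms the model's context window.
```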

The result is invisible knowledge sharing. Instead of re-learning the same lessons repeatedly, the AI begins each task with a persistent awareness of the project’s norms and pitfalls.

Martin also raised several open questions:

  • How should memory banks work in a mono-repo where code is shared across platforms?
  • Should instructions be duplicated in iOS and Android projects, or unified in one place?
  • How can memory banks be integrated into environments beyond VSCode, such as Xcode or Android Studio?
  • Who is responsible for maintaining them: the AI, the engineer, or both?

These questions underline that while memory banks can work in practice, they also require a shared discipline. They are as much about building the right engineering culture as they are about technical setup.

Coordinating multiple agents

While Martin focused on depth of knowledge, Ivano’s experiments looked at breadth: specifically, how well multiple AI agents could work together. Making this part of the NavVis development workflow wasn’t his primary goal; his aim was research.

So, using the Python framework Agno, he created a small “team” of agents, sketched in code after the list below:

  • A Jira agent to fetch ticket details.
  • A file system agent to read and edit source files.
  • A coordinator agent to orchestrate the workflow.
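As a rough idea of what this can look like, here is a minimal sketch using Agno. The model choice, tool configuration, instructions, and project path are illustrative assumptions rather than Ivano’s actual code, and Agno’s module paths and Team API may differ between versions.

```python
# Minimal Agno team sketch (illustrative; not Ivano's actual code).
from pathlib import Path

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team import Team
from agno.tools.file import FileTools
from agno.tools.jira import JiraTools  # reads Jira credentials from env vars

# Specialist 1: fetches ticket details from Jira.
jira_agent = Agent(
    name="Jira agent",
    role="Fetch ticket details and acceptance criteria from Jira",
    model=OpenAIChat(id="gpt-4o"),
    tools=[JiraTools()],
)

# Specialist 2: reads and edits source files in the checkout.
fs_agent = Agent(
    name="File system agent",
    role="Read and edit source files in the project checkout",
    model=OpenAIChat(id="gpt-4o"),
    tools=[FileTools(base_dir=Path("./ivion-go"))],  # hypothetical path
)

# Coordinator: breaks the task down and delegates to the specialists.
team = Team(
    mode="coordinate",
    members=[jira_agent, fs_agent],
    model=OpenAIChat(id="gpt-4o"),
    instructions=[
        "Fetch the ticket, locate the relevant code, draft a plan,",
        "then propose concrete file edits for human review.",
    ],
)

team.print_response("Implement the welcome screen described in ticket IV-7503.")
```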

The system was tested on a real ticket: adding a welcome screen feature for NavVis IVION Go. Normally, such a task might take a developer twenty minutes. In Ivano’s trial, the agents fetched the ticket, located the relevant code, created a plan, and proposed edits to two files, all in about twelve minutes.

This approach, often called agentic coding, shifts the interaction model. Instead of a developer iterating line by line with a copilot, the human assigns a task, and the agents coordinate among themselves to deliver a solution.

Yes, this still requires oversight, and API rate limits prevented a live demo when Ivano presented, but the potential is clear: multi-agent workflows can extend what a single AI agent can do.

[Image: A visual interface (the "playground") of Ivano's local agentic team.]
[Image: The task ("IV-7503") that Ivano gave the agentic team.]
[Image: The list of steps the agentic team took on its own to complete the task.]

Where the projects converge

Both projects address the same core limitation: context.

  • Martin’s memory banks ensure that an AI assistant doesn’t forget what it has already learned about a project.
  • Ivano’s multi-agent workflows ensure that different tools and agents can coordinate effectively.

Together, they suggest a future where AI systems have memory and the ability to act as a cohesive team, applying knowledge consistently while distributing tasks across specialized roles.

Leadership in a sociotechnical system

The impact of these experiments isn’t limited to tools. As Tom Renner, one of our engineering team leads, likes to say: software development is a sociotechnical system, meaning success depends on both the technical system and the social system around it.

Introducing AI changes this balance. Machines now contribute significant portions of code, but they make mistakes in very different ways than humans do. They can hallucinate, generate plausible but incorrect solutions, or confidently reintroduce errors.

Sometimes those mistakes are wrapped in a kind of artificial charm. Anyone who has used ChatGPT will recognize the pattern: “You’re absolutely right, thank you for the brilliant insight!” In coding, the effect can be the same. The model flatters, agrees more than it should, and then produces something that might still be wrong. It’s pretty funny, but it also underlines the point: these systems don’t inspire trust the way colleagues do, which makes human oversight and validation essential.

All the more reason for processes like code review and testing to evolve.

At NavVis, memory-enabled tools like CodeRabbit are already part of our workflow, learning from past feedback and improving over time. Leadership in this environment means designing processes that capture the strengths of AI while ensuring reliability through the right checks and balances.

For NavVis engineers, this is a cultural shift as much as a technical one. You can’t build trust with an AI in the same way you do with a teammate, but you can design systems that account for its unique strengths and weaknesses. That, ultimately, is the essence of a sociotechnical system, and it’s something we’re working to define and improve every day.

The next steps

Obviously, there are still barriers. Tool fragmentation makes integration difficult. Compute costs may rise once providers move beyond subsidized usage. And no matter how advanced the tools become, NavVis engineers will remain in the loop to handle edge cases and ensure quality.

But the opportunities are equally clear:

  • Faster iteration on repetitive tasks.
  • Knowledge that persists across sessions and teams.
  • Engineers freed to focus on architecture, design, and innovation.

As Tom said, now is exactly the right time to experiment. Costs are low (relatively), the tools are evolving quickly, and the lessons we learn now will prepare us to evaluate future offerings critically when the economics inevitably shift.

Yet the real story here isn’t just about memory banks or multi-agent systems. It’s about how engineering teams evolve when humans and machines share the work. How do you scale a team when some of its members are not people? What new skills do engineers need, and what new responsibilities do leaders take on?

At NavVis, we don’t claim to have all the answers. But through experiments like these, we’re constantly improving our understanding of how AI-empowered development works and how to integrate it effectively. Already, we’re unlocking substantial gains in productivity, and step by step, we’re bound to uncover more. But the key is to create workflows where human creativity guides and directs AI usage.

Upholding this balance is what makes a sociotechnical system work.