In a short time, chat interfaces have become the standard way we interact with AI. New features are launched as dialogues: write here, ask a question, and the system will take care of the rest. That is understandable. It feels flexible, powerful, and human. It also clearly signals that AI is present. For many products, this is an effective way to make built-in intelligence visible. But in many cases, it is also a sign that the design process stopped at the first working interface. For users, it is not about wanting to use AI. It is about getting something done. This is the perspective we at Intunio use when designing AI-supported products.
Tobias Rydenhag
Head of Design
Mar 8, 2026
5 min

When language models took a decisive leap in capability, users encountered them through a chat-based interface. This became the starting point for how advanced AI was introduced into products and services. Suddenly, systems could write, summarize, explain, suggest, and reason, often better than users expected.
Chat became the natural way to demonstrate this. A single interface could showcase the full breadth of the model. It required no decisions about use cases, no prioritization, no clear boundaries. Everything could be handed over to the user: formulate your question, and the system will handle the rest.
For general-purpose AI tools, this is an effective way to make capability accessible. The same solution becomes less obvious when AI is meant to be part of a specific system or workflow.
From a product and development perspective, chat is hard to beat:
- it is quick to build
- it is extremely flexible
- it is easy to demonstrate in meeting rooms and at conferences
- above all, it requires very few design decisions
Chat functions as a universal fallback interface. Whether the user wants to analyze data, write text, or make decisions, the same input field can be reused. When something does not work, it is easy to blame the prompt rather than the design.
There is also a clear historical parallel. Terminal and command-line interfaces gave enormous power to users with the right knowledge. They were effective, expressive, and difficult to master. In many ways, chat is the terminal reborn, now using natural language instead of commands.
For those building the system, this appears rational.
For users, the situation often looks very different.
The problem with chat as a primary interface is not that it is bad.
The problem is that it places too much responsibility on the human.
To succeed, the user must:
- understand what is possible
- formulate the right question
- interpret the response
- judge whether the answer is reasonable
- and often iterate several times
This creates a high cognitive load, especially for everyday or recurring tasks. The same action can produce different results depending on phrasing, context, or timing. Precision is low, and predictability is limited.
In many systems, this is not acceptable. Users do not want to reason their way to the correct answer every time; they want control, consistency, and confidence that the same input leads to the same outcome.
Chat also shifts decision-making responsibility from the system to the user. Instead of the product guiding, constraining, and helping, the human is expected to hold the entire problem space in mind. This is rarely a sign of good UX.
There are situations where chat is exactly the right interaction model.
Chat works well when the problem is open and exploratory. When users themselves do not yet know what is needed. In early phases of thinking, learning, and ideation. When the goal is to reason, compare perspectives, or think out loud.
In these cases, flexibility is a strength. Being able to ask follow-up questions, rephrase input, and let the conversation take new directions is valuable. Chat becomes cognitive support rather than a production tool.
And that is precisely where chat belongs: in the open, the unclear, and the not-yet-formulated.
When tasks are recurring, structured, or sensitive to consequences, AI should play a more restrained role.
In these cases, other interaction patterns often provide better UX:
- constrained choices instead of open-ended questions
- suggestions based on context rather than generic answers
- autofill that reduces effort without removing control
- ranking and prioritization that support decisions rather than replace them
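To make the last two patterns concrete, here is a minimal sketch in TypeScript. The names (`fetchSuggestion`, `renderTaskField`, the `Context` shape) are hypothetical, invented for illustration: the point is that the system asks the model a fixed, task-specific question, pre-fills the answer, and keeps the field fully editable.

```typescript
// Hypothetical sketch of "suggestions based on context" combined with
// "autofill that reduces effort without removing control".

type Context = { customer: string; lastOrder: string };
type Suggestion = { value: string; rationale: string };

// The system, not the user, decides what to ask for: a fixed,
// task-specific request replaces an open-ended prompt.
function fetchSuggestion(context: Context): Suggestion {
  // In a real product this would call a model with a constrained prompt;
  // here the result is stubbed for illustration.
  return {
    value: `Follow up on ${context.lastOrder} for ${context.customer}`,
    rationale: "Based on the customer's most recent order",
  };
}

// The suggestion pre-fills the field but stays editable, and the "why"
// is shown next to it instead of being buried in a conversation.
function renderTaskField(context: Context) {
  const suggestion = fetchSuggestion(context);
  return {
    fieldValue: suggestion.value,     // user can overwrite freely
    helperText: suggestion.rationale, // rationale stays visible
    editable: true,
  };
}
```

The design choice worth noticing: the user never formulates a prompt, yet nothing is decided on their behalf; the suggestion, its rationale, and the ability to override all remain in view.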
When AI is used in this way, the result often feels “magical.” The system feels smart, fast, and helpful. At the same time, it remains understandable. Users can see what is happening, make adjustments, and understand why a certain decision is suggested.
This is where AI truly begins to work for the user, rather than demanding work from the user.
At Intunio, we start from the task, not from the interface. In many situations we still choose chat as the interaction model, because it is a fast and flexible way to make advanced AI accessible. It is easy to build, easy to demonstrate, and can deliver significant value in the right context.
At the same time, chat is far from always the best solution. In products and systems with clear workflows, recurring tasks, or high demands for precision, we often choose other ways of applying AI. This typically means integrating intelligence into existing flows, rules, and UI controls, where it can support decisions and reduce friction without taking over the user’s focus.
In practice, this means that AI is sometimes clearly visible, and sometimes barely noticeable at all. In both cases, the goal is the same: to make the product more usable, more reliable, and easier to work with over time. This approach is not always the most spectacular in a demo, but it is where long-term value is created.
Ultimately, the future is unlikely to be about more chat-driven systems.
It is about better tools.
Tools that require less formulation, less interpretation, and less mental effort. Tools where intelligence is felt, but does not take center stage.
When AI does its job best, it is barely noticed at all.