For decades, customer experience (CX) design was built around deterministic interfaces: menus, scripted decision trees, and rigid workflows. These models operated like finite state machines, where every possible path was predefined. Conversation design in this paradigm meant creating predictable dialogue flows, anticipating each branch of interaction, and constraining users within boundaries that were safe but inflexible.
Research tells us that such deterministic models align with user mental models—the expectations and assumptions people bring when interacting with a system. When mental models are supported, users feel in control; when they are broken, frustration arises.
Large language models (LLMs) disrupt this paradigm by introducing non-deterministic conversational interfaces. Here, there are no fixed menus or finite input options. Instead, users can articulate their intent in their own words—through text, voice, or even video.
This marks a shift from designing for personas and linear journeys to designing for ecosystems of interactions, where conversation becomes open-ended, contextual, and adaptive. Jakob Nielsen has argued that AI may be the first genuinely new UI paradigm in 60 years, which underscores the magnitude of this shift.
Designing for an Interaction Ecosystem
Cognitive CX is not just about a single conversation — it is about orchestrating an ecosystem of interactions. In service design, ecosystems describe the interconnected web of actors, touchpoints, and channels that together shape the customer’s experience. When we bring LLMs into this space, their non-deterministic nature makes them uniquely capable of navigating complexity across these layers:
Actors
Every interaction involves human and non-human actors — the customer, the AI assistant, and maybe a human agent in the background. Traditional systems handle these handoffs mechanically, but LLMs enable smoother role negotiation: they can detect when escalation is needed, summarize the context for a human agent, or maintain continuity when the user switches from self-service to live support. This transforms actors from isolated roles into collaborators within a fluid, shared dialogue.
Touchpoints
Customer journeys are scattered across multiple touchpoints, from browsing a site to contacting support, from receiving a notification to engaging with a sales rep. Deterministic systems treat each touchpoint as a silo. LLMs, however, enable context stitching: remembering what was said in one touchpoint (e.g., chat), including its sentiment, and carrying that context seamlessly into another (e.g., a voice call or email). This creates a unified journey narrative where touchpoints feel connected, not fragmented.
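To make the idea of context stitching concrete, here is a minimal Python sketch. It assumes a hypothetical ConversationContext record and a prompt fragment that a downstream LLM call would consume; the field names and wording are illustrative, not taken from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    """Hypothetical cross-touchpoint context record."""
    customer_id: str
    summary: str = ""                      # rolling summary of what was said so far
    sentiment: str = "neutral"             # last detected sentiment
    open_issues: list[str] = field(default_factory=list)
    last_touchpoint: str = ""              # e.g. "chat", "voice", "email"

def stitch_context(ctx: ConversationContext, new_touchpoint: str) -> str:
    """Build a system-prompt fragment that carries prior context into a new channel."""
    return (
        f"The customer previously contacted us via {ctx.last_touchpoint}. "
        f"Summary of that interaction: {ctx.summary} "
        f"Detected sentiment: {ctx.sentiment}. "
        f"Unresolved issues: {', '.join(ctx.open_issues) or 'none'}. "
        f"Continue the conversation on {new_touchpoint} without asking the "
        f"customer to repeat information they have already provided."
    )

# Example: the customer moves from chat to a voice call.
ctx = ConversationContext(
    customer_id="C-1042",
    summary="Asked about a delayed order and requested a refund estimate.",
    sentiment="frustrated",
    open_issues=["refund estimate not yet provided"],
    last_touchpoint="chat",
)
print(stitch_context(ctx, new_touchpoint="voice call"))
```

In a real deployment, the summary and sentiment would typically be generated by the model at the end of each interaction and persisted with the customer record, rather than assembled by hand as above.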
Channels
Channels — web, mobile apps, IVR, social messaging, in-person kiosks — have historically required separate designs and logic. LLMs make it possible to design for channel fluidity. Because they understand natural language across modalities, they preserve tone, sentiment, and personalization whether the conversation happens via text, voice, or multimodal interfaces. They also enable adaptive recovery strategies, helping guide users gracefully when they go off-script, regardless of channel.
The challenge for enterprises is that every interaction must be framed by the enterprise's identity: its values, policies, tone, and brand promise. As the Conversation Design Institute points out, LLMs may sound empathic, but strategic conversational goals must be designed deliberately. Striking that balance requires a platform that supports multiple topics and flexible instructions, so that every response aligns with compliance requirements, service objectives, and brand voice.
By designing across actors, touchpoints, and channels, we reflect the ecosystemic view of UX: experiences are shaped not just by the interface, but by the network of relationships between people, systems, and organizations. Cognitive CX leverages LLMs to weave continuity through this ecosystem, enabling conversations that are coherent, contextual, and empathetic.
Let Empathy Drive Your Interaction Design
At Clouding AI, empathy is more than a value—it is our design system. In UX, empathy is the ability to understand and share the motivations, frustrations, and goals of users. It means designing for an interaction ecosystem with awareness of emotional triggers, intentionality, and presence.
Academic research highlights that conversational systems which incorporate affect matching and emotion triggers deliver stronger trust and engagement outcomes. However, over-emphasizing emotional empathy while under-investing in cognitive empathy (the deeper ability to reason about a user's intent and context) produces systems that sound caring but cannot act on what the user actually needs. This balance informs our approach: empathy in CX must not just acknowledge emotion, but translate into meaningful, reliable action.
This principle also shapes how we handle so-called “off-topic” requests. From a customer experience perspective, there is no truly off-topic input. Every question represents a moment of trust. Even if the system cannot resolve the domain-specific issue, it must preserve continuity of experience—visually, audibly, and emotionally. We design our agents to manage these moments consistently, while embedding verification layers to improve accuracy and protect trust.
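As an illustration of how an off-topic request can be acknowledged without breaking continuity, here is a minimal Python sketch. The supported topics, the classify_topic heuristic, and the fallback wording are all hypothetical placeholders; in production the classifier would be an LLM call or intent model, and the copy would be brand-approved.

```python
SUPPORTED_TOPICS = {"orders", "billing", "returns"}

def classify_topic(user_message: str) -> str:
    """Hypothetical classifier; in practice this would be an LLM call or intent model."""
    message = user_message.lower()
    for topic in SUPPORTED_TOPICS:
        if topic.rstrip("s") in message:
            return topic
    return "off_topic"

def route_to_topic(topic: str, user_message: str) -> str:
    """Placeholder for domain-specific handling of a supported topic."""
    return f"[{topic}] handling: {user_message}"

def handle_message(user_message: str) -> str:
    topic = classify_topic(user_message)
    if topic == "off_topic":
        # Preserve continuity: acknowledge the request in the brand voice,
        # explain the boundary, and offer a path forward instead of a dead end.
        return (
            "That's a good question, and I want to make sure you get a proper "
            "answer. It's outside what I can help with directly, but I can "
            "connect you with someone who can, or help with your orders, "
            "billing, or returns."
        )
    return route_to_topic(topic, user_message)

print(handle_message("Can you recommend a restaurant nearby?"))
```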
Translating Empathy into Design
Our design journey has been iterative, translating empathy into tangible principles. We developed a layered instruction model:
- At the base, lightweight instructions ensure quick, accurate responses to straightforward queries.
- At higher layers, instructions capture tone, sentiment, and complexity, dynamically adapting dialogue in ways that mirror human emotional intelligence.
This echoes NN/g’s long-standing view that effective design must go beyond functional efficiency to account for how users feel during interaction. It also helps us address the risk of “empathy fog”, where users doubt whether an AI truly understands their needs. By combining presence (attentiveness), receptiveness (attention to cues in tone and sentiment), and intentionality (purposeful guidance), we build clarity into every conversational layer.
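As a concrete illustration of the layered model above, here is a minimal Python sketch that composes a lightweight base instruction with optional tone and complexity layers. The layer texts, sentiment labels, and thresholds are hypothetical, not the instructions we actually ship.

```python
BASE_INSTRUCTIONS = (
    "Answer the customer's question accurately and concisely. "
    "If you are not certain, say so and offer to escalate."
)

# Higher layers are added only when the detected signal calls for them,
# so straightforward queries stay fast and lightweight.
TONE_LAYERS = {
    "frustrated": "Acknowledge the frustration before answering, and keep the answer short.",
    "confused": "Avoid jargon and walk through the answer one step at a time.",
}

COMPLEXITY_LAYER = (
    "This request spans multiple issues. Summarize your understanding back to "
    "the customer before proposing a resolution."
)

def build_instructions(sentiment: str, issue_count: int) -> str:
    """Compose the layered instructions for one conversational turn."""
    layers = [BASE_INSTRUCTIONS]
    if sentiment in TONE_LAYERS:
        layers.append(TONE_LAYERS[sentiment])
    if issue_count > 1:
        layers.append(COMPLEXITY_LAYER)
    return "\n".join(layers)

# A simple query gets only the base layer; a frustrated, multi-issue query gets all three.
print(build_instructions(sentiment="neutral", issue_count=1))
print(build_instructions(sentiment="frustrated", issue_count=2))
```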
Applying our Methods with Agentforce
Our methods come to life through Salesforce Agentforce. In practice, this means building agents around a holistic AI experience that goes beyond prompts, shaping how agents think, respond, recover, and carry the brand voice throughout an interaction. We believe in building agents that:
- Allow open-ended, conversation-first interactions rather than pre-defined click paths. The Topic architecture helps us use prompt templates and dedicated flows to inject off-topic logic that aligns with the brand tone and customer expectations.
- Apply verification strategies to ensure reliable, brand-safe outputs. Cost and performance remain challenges we continue to work on, but Apex actions help us augment and contextualize requests for greater accuracy (a conceptual sketch of this pattern follows the list).
- Maintain multimodal consistency (textual, visual, and audible) even when facing unexpected queries. The user experience should not be fixed at the point of request; instead, rendering logic at response time should handle answers gracefully even when they were not planned for.
- Capture long-term preferences in tone, depth, and presentation, evolving alongside the user’s journey. Data Cloud, together with Agentforce, gives us the capacity to store user interaction preferences and augment responses with them.
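The sketch below illustrates the verification pattern referenced in the list, written in plain Python rather than Apex or Agentforce configuration: a draft answer is grounded in retrieved context and passed through simple checks before reaching the customer. retrieve_context, draft_response, and the policy rules are placeholders for the real actions and guardrails.

```python
BANNED_PHRASES = ("guaranteed refund", "legal advice")   # illustrative policy rules

def retrieve_context(customer_id: str, question: str) -> str:
    """Placeholder for a data lookup (in our stack, an Apex action or Data Cloud query)."""
    return "Order 8841 shipped on May 2 and is eligible for return until June 2."

def draft_response(question: str, context: str) -> str:
    """Placeholder for the LLM call that drafts an answer grounded in the context."""
    return f"Based on your account: {context}"

def verify(response: str, context: str) -> bool:
    """Cheap checks before sending: policy phrases and a crude grounding test."""
    if any(phrase in response.lower() for phrase in BANNED_PHRASES):
        return False
    # Real verification would go much further (citations, fact checks, tone checks).
    return context.split()[0] in response

def answer(customer_id: str, question: str) -> str:
    context = retrieve_context(customer_id, question)
    response = draft_response(question, context)
    if not verify(response, context):
        return "Let me connect you with a specialist who can confirm the details."
    return response

print(answer("C-1042", "Can I still return my order?"))
```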
For us, Agentforce is the operational canvas where empathy is scaled. It allows us to combine the flexibility of LLMs with the structure of enterprise-grade governance, turning empathy from an abstract value into measurable design practice.
Our journey to learn more about our users continues
Cognitive CX is not about replacing deterministic systems—it is about transcending them. Deterministic design ensures clarity; cognitive design ensures connection. By weaving empathy into every layer of conversation design, guided by research from UX leaders like Nielsen Norman Group and academic work on empathetic AI, we are building agents that don’t just answer questions but understand people.