UX patterns for AI-powered experiences

By Maruša Hrobat

We undertook exploratory research to understand what best practice looks like across the industry and to inform our strategy for integrating generative AI into our products in a user-centric way.

The current landscape of AI-powered user experiences 

Companies are racing to integrate AI-powered features into existing applications and to build entirely new experiences. From chatbots to code, image and even video generation, it feels like AI is everywhere. Amid this early experimentation phase, some common ways of interacting with AI are emerging across products, offering established approaches to designing intuitive and effective AI-driven interactions. We set out to identify and document these interaction patterns and to understand how we can use them to design effective solutions for our new AI-driven products and features.

Why do we care about generative AI?

At Better, we see generative AI as a powerful tool to enhance user experiences and add value across our products in many ways. From providing relevant information and advice on how to query data using Archetype Query Language (AQL), to summarising product documentation, or even automating complex processes such as form building, we are full of ideas for integrating AI-powered features across our products. Rather than reinventing the wheel, we sought to learn from existing industry best practices to ensure we create familiar, consistent, and high-quality AI experiences.

Our research journey and learnings

Our approach involved synthesising both academic and industry-led research on human-AI interaction principles, guidelines, and frameworks. We also explored various online collections of documented UX patterns and tried out a wide range of real-world AI products and features, including conversational AI chatbots, code generators, low-code assistants, browser extensions and even AI agents. Although the landscape keeps changing, here are some key learnings from our research.  

Firstly, some of the big players, such as Microsoft and GitLab, have tried to make sense of the explosion of AI-powered experiences through their AI UX frameworks, which classify interactions into the following four high-level interaction modes.

🔷 Full-screen or immersive:

AI is the main context, at the forefront, with a dedicated focus: think entirely new, AI-first experiences such as full-screen chat interfaces. This mode is best suited to broad user goals that combine the need to comprehend, create and/or collaborate, as it enables the user to ask open-ended questions, brainstorm, discover or simply have a back-and-forth conversation with AI. Examples include the pioneering ChatGPT, Claude and Perplexity.

🔷 Sidebar or assistive:

AI appears next to the main application, complementing the main context and assisting users with actions and information while letting them keep an in-app focus. This is a rather straightforward way of augmenting existing applications with generative AI functionality and is well suited to tasks in which users only require periodic input or advice while keeping the main content in focus. Examples include Microsoft’s Copilot as a sidebar assistant within tools such as Word, PowerPoint and Excel.

🔷 Embedded or integrated:

AI is closely integrated within the main application as contextually relevant actions, helping users with discrete tasks in relation to specific on-screen elements. Rather than conversing with AI as in the full-screen and sidebar modes, here AI is more in the background, often accessible via context menus, which appear when users highlight or select on-screen elements. Examples include Miro AI’s ability to sort and summarise selected sticky notes, SiderAI’s ability to generate YouTube video summaries or ChatGPT’s ability to review code and apply suggested edits in Canvas.  

🔷 Invisible or proactive:

AI operates in the background and proactively enhances user productivity by anticipating and supporting users’ next steps. It is important that AI assistance is only provided when confidence in the user’s goals is high, and that the user always remains in charge by accepting or rejecting AI outputs or taking over control when needed. This mode is becoming increasingly important, especially given the recent push towards agentic AI. Examples include Gmail’s Smart Compose, Copilot’s code autosuggest or OpenAI’s Operator.
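To make this accept/reject principle concrete, here is a minimal TypeScript sketch of confidence-gated assistance. Every name in it (Suggestion, CONFIDENCE_THRESHOLD, proposeSuggestion) is our own illustrative invention rather than any product’s real API; the point is simply that low-confidence predictions are suppressed, and even high-confidence ones are only offered, never applied without the user’s say-so.

```typescript
// Illustrative sketch only: names and threshold are assumptions, not a real API.
interface Suggestion {
  text: string;       // the proposed completion or action
  confidence: number; // estimated confidence in the user's goal, 0..1
}

const CONFIDENCE_THRESHOLD = 0.8; // only assist when confidence is high

// Returns a suggestion for the UI to *offer*; the user stays in charge
// and must explicitly accept it before anything is applied.
function proposeSuggestion(candidate: Suggestion): Suggestion | null {
  return candidate.confidence >= CONFIDENCE_THRESHOLD ? candidate : null;
}

function onUserDecision(
  suggestion: Suggestion,
  accepted: boolean,
  apply: (text: string) => void,
): void {
  if (accepted) {
    apply(suggestion.text); // e.g. insert a Smart Compose-style completion
  }
  // On rejection, do nothing: the AI never overrides the user.
}
```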

Beyond these high-level AI interaction modes, we came across several interesting collections or libraries of specific UX patterns. While each organises and describes these patterns somewhat differently, they can generally be categorised into common approaches for handling user inputs, AI outputs, context, customisation, and representation of AI within user interfaces. Together with the previously mentioned interaction modes, we find these more specific UX patterns particularly useful in helping us talk about and prototype new AI interfaces and interactions.  
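As a shared vocabulary, we find it helpful to think of the modes and pattern categories together as a small taxonomy. The TypeScript sketch below is our own illustrative encoding of it (the type and field names are assumptions, not an established schema); it shows how a pattern-library entry might record the interaction modes a pattern appears in and the design concern it addresses.

```typescript
// Illustrative encoding of the taxonomy: names are our own, not a standard schema.
type InteractionMode = "full-screen" | "sidebar" | "embedded" | "invisible";

type PatternCategory =
  | "inputs"
  | "outputs"
  | "context"
  | "customisation"
  | "representation";

interface UxPattern {
  name: string;
  modes: InteractionMode[];  // where the pattern typically appears
  category: PatternCategory; // which design concern it addresses
  example: string;           // a real-world product illustrating it
}

// Example entry: contextual actions on selected elements, as in Miro AI.
const contextualActions: UxPattern = {
  name: "Contextual actions on selection",
  modes: ["embedded"],
  category: "inputs",
  example: "Miro AI summarising selected sticky notes",
};
```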

Over time, we noticed that new interactions and experiences are being introduced weekly, if not daily, with even the biggest players such as OpenAI, Microsoft and Google continuously experimenting, user testing, refining, and evaluating potential solutions to create better user experiences. In short, despite some early emerging AI UX frameworks and patterns, everyone still seems to be learning and trying out new things, both behind closed doors and live, at scale… suggesting we should too!

The road ahead for Better AI 

At Better, we will use these findings and real-world examples to develop our own framework of AI UX interaction modes and specific patterns, tailored to the healthcare domain, to guide our strategy for integrating AI features across our products. Over time, as we develop prototypes and implement such features, we want to document and incorporate the most effective AI UX patterns and related UI components into our Better Design System, ensuring they can be consistently reused by product teams across Better.

In fact, we have already started putting all this into practice. Our Innovations team is working closely with several product teams on prototyping and implementing new AI-powered features. For example, together with the Studio team, they have already implemented the Query Assistant for AQL within Studio and are continuing to work on an improved version of this assistant. This new version will combine the sidebar and embedded modes and utilise several specific UX patterns to help users generate, understand, and fix AQL queries. 
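Purely as an illustration of how such a feature maps onto the taxonomy sketched earlier (this is not the actual Query Assistant code, and every name below is hypothetical), the combined design could be summarised along these lines:

```typescript
// Hypothetical declaration only: not the real Query Assistant implementation.
// (Reuses the mode and pattern vocabulary from the taxonomy sketch above.)
const aqlQueryAssistant = {
  modes: ["sidebar", "embedded"],           // interaction modes it combines
  tasks: ["generate", "understand", "fix"], // what it helps users do with AQL
  patterns: [
    "contextual actions on selection", // embedded: act on a highlighted query fragment
    "conversational follow-up",        // sidebar: refine a query over several turns
  ],
};
```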

While the presented research has been helpful in shaping our initial thinking and designs, we approach the insights with caution. AI in healthcare demands a higher standard of usability, reliability, and trust, which is why user research remains at the core of our process. Rather than simply replicating patterns from general-purpose AI applications like ChatGPT or Microsoft Copilot, we rigorously test our prototypes with our users to ensure the resulting features meet their unique needs. By balancing industry best practices with a user-centred, evidence-driven approach, we can create AI-powered experiences that truly make a difference in healthcare.
