
The Product Obsessed's Guide To: AI Chatbot Interfaces

A photo of Nimrod Priell next to the text "The Product Obsessed's Guide" and "AI Chatbot Interfaces"

Here at Cord, we love thinking about, talking about, and mapping the patterns of product UX in a LOT of detail. It’s a hobby of reflecting, refining, and polishing… an obsessive interest in the nitty-gritty details of product design. This blog is part of a series about the different components that make up most of our digital experiences, why they matter so much, and what good (and bad) looks like.

In a previous post we talked about badging. This time, it’s emerging UX patterns in AI 🤖 The sudden popularity of AI co-pilots and chatbots creates a very exciting challenge – and opportunity – for product and UX geeks like us.

Sure, to date, most of the focus has centered on how to get these bots to do an increasingly complex set of tasks. But if we leave that question to the mathematicians and data scientists to figure out, we’re left with a wholly different, human-focused space ripe for innovation: the human-bot chat interface.

Just as the emergence of mobile phones and touch screens led to years of innovative UX patterns – from the abundance and richness of emojis to ‘swipe-right-to-archive’ on emails (and dating profiles…) – every new technology brings fresh discoveries to make on the human-interface side of the equation.

So with that in mind, I looked, asked, explored, and summarized for you, dear PM, the best way to build the interface for your product’s AI copilot… at least, the best way we know of today. I’ve outlined key things to consider and potholes to avoid, and shared relevant examples wherever possible.

One important note before we get into it: We’re all still learning, so if you find some detail I’m missing or have gotten wrong, please get in touch! I’d love to connect with fellow product obsessives and feature your suggestions.

Is there value in personification?

Zoologists are given strong directives to not anthropomorphise their subjects. When you’re researching animals, you’re meant to avoid thinking of foxes as devious, ants as hard-working, and penguins as silly.

But Data Scientists operate with no such constraints: we tend to give AI bots a persona. From Clippy 📎 to Siri, by default – and unlike other features – a bot needs an avatar, a name, and often a personality that mimics a generically helpful and patient human.

A lot of ink has been spilled on some of the cultural biases inherent in how these personas are shaped. For example, a lot of AI assistants, like Siri or Alexa, are portrayed as women. (I recently learned that in the UK specifically, Apple explicitly chose to have the default Siri voice be male. That’s a topic for another day, but if you’re interested, it’s certainly a social can of worms worth looking into.)

Interestingly though, certain lines of thinking run completely counter to this. David Holz, founder of Midjourney, the AI image generation product, says they explicitly wanted to avoid giving the bot a persona and avatar because “a chatbot should not try to be a person”.

And I get it. There’s something skeuomorphic about AI co-pilots taking the form factor of, and mimicking, a helpful assistant chatting with you. Like many early iterations of a new technology that borrow heavily from patterns people already know rather than leaning into its unique advantages, I think settling for a very conversational, generic, human-like response misses part of the magic a software-based bot can achieve, while inheriting much of the cruft and emotional friction that communicating with humans naturally generates.

It seems like the “right” answer lies somewhere between Siri and David. Labeling, but not humanizing. Something like what Slack and Discord have done. Why? Because when bots pretend to be real people, but clearly aren’t, they tend to annoy the actual real people in the conversation – your users – and, more often than not, make them distrust the platform.

Prompts and response types

AI bots exist (for now, anyway) to empower humans to get more sh*t done better, faster, and more easily. Instead of being constrained by learning a specific interface – a set of buttons to click in the right order, or a language or code to accurately do a job – we’re free to just tell the bot what we want to achieve, and it does it.

It could be writing a poem about apples, drawing a cabbage playing a guitar, or planning a scenario where sales revenue gradually increases to $3 million. (Our friends at Vareto can help you with the last one…)

This is all to say that we humans will invariably have to prompt the bot to do something for us with text input (or voice, which will be transcribed to be fed as text to the bot). And, unlike our correspondence with other humans, we don’t need to spend time crafting a perfectly polite, articulate, and compelling message. We just get straight to the point.

But that doesn’t mean the prompts we write are always one clean message. Far from it. More often, people write instructions in fragments across several messages. These stream-of-consciousness brain dumps can include amendments or corrections to previous statements or requests, and even answers to questions they posed earlier on.

Unfortunately, all chatbots I’ve tried have failed miserably at handling this. That’s because most are powered by basic backends that send every message to the AI APIs, meaning the bot can quickly get out of sync and confused, answering old messages and missing the point entirely.

I believe a new UX will soon emerge that tells users when their additional input has caused a previous AI response to be canceled or removed from the chat, and clearly shows which particular message the AI is responding to.
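
To make that concrete, here’s a rough sketch in TypeScript of one way a chat backend could handle it: cancel the in-flight model call whenever a newer message arrives, so the bot never answers a stale prompt. The `/api/llm` endpoint, its response shape, and the callback names are illustrative assumptions, not any particular product’s API.

```typescript
// Keep the bot in sync with rapid-fire user messages: a new message aborts
// whatever completion is already in flight.

let inFlight: AbortController | null = null;

async function callModel(messages: string[], signal: AbortSignal): Promise<string> {
  // Hypothetical endpoint standing in for whichever LLM provider you use.
  const res = await fetch('/api/llm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
    signal,
  });
  return (await res.json()).reply;
}

export async function onUserMessage(
  history: string[],
  newMessage: string,
  onReply: (reply: string, respondingTo: string) => void,
) {
  history.push(newMessage);

  // A newer message supersedes whatever the bot was already answering.
  inFlight?.abort();
  const controller = new AbortController();
  inFlight = controller;

  try {
    const reply = await callModel(history, controller.signal);
    onReply(reply, newMessage); // the UI can show which message this reply answers
  } catch (err) {
    if ((err as Error).name !== 'AbortError') throw err; // aborts are expected here
  } finally {
    if (inFlight === controller) inFlight = null; // don't clobber a newer request
  }
}
```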

But I digress. So far, all we need to conveniently draft and send our prompt is a comfortable text composer: an interface where you can copy and paste text, maybe attach more context by pointing to something on the screen or dragging a picture in, format some stuff with markdown for emphasis, and perhaps include code snippets. This is all table stakes.

What’s not immediately obvious, though, is whether the bot should also reply this way.

Again, if we take the chat metaphor very literally, then we end up with a bot that acts like another person on the chat. Sometimes, that’s exactly what we want: a human-like, open-ended reply. It’s certainly what some AI technologies like ChatGPT and other LLMs popularized.

But it’s not the only way, nor is it the most useful form of AI.

For example, Notion’s AI doesn’t reply in chat. Instead, it generates text directly inside the document.

Animated image of a Notion document being created

Midjourney generates a few images based on the prompt, with no textual reply at all.

An animated gif of Midjourney creating an image of a cabbage playing a guitar

GitHub Copilot generates code that must compile and run in a pre-selected programming language; it’s more important that it conforms to the actual code you plug it into (in terms of language, variable use, etc.) than that it’s an answer humans can parse.

An animated gif of Visual Studio Code being used with Github Copilot to build a React app in TypeScript

And Vareto’s AI generates automated financial summaries, answers data questions in natural language, and assists users with building models and reports.

An animated GIF of the Vareto FP&A tool receiving query instructions via an AI assistant.

Importantly, in these examples, the bot’s “responses” are suggestions, not yet committed changes. That means there needs to be a way for the user to quickly and easily approve and/or undo them. You wouldn’t want your AI Assistant to automatically start publishing images, editing code, or changing spreadsheets based on a simple prompt, right?
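
One lightweight way to model this is to treat every bot output as a pending suggestion with explicit apply and revert hooks. Here’s a minimal TypeScript sketch; the `apply`/`revert` callbacks are hypothetical hooks into whatever the bot is editing (a document, a code file, a spreadsheet).

```typescript
// A bot "response" stays pending until a human accepts it; rejecting an
// already-applied suggestion is the undo path.

interface Suggestion<T> {
  id: string;
  preview: T;                    // what the bot proposes, shown to the user
  apply: () => void;             // commit the change to the document/code/sheet
  revert: () => void;            // undo it after the fact
  status: 'pending' | 'applied' | 'rejected';
}

function accept<T>(s: Suggestion<T>) {
  if (s.status !== 'pending') return;
  s.apply();
  s.status = 'applied';
}

function reject<T>(s: Suggestion<T>) {
  if (s.status === 'applied') s.revert(); // the "undo" case
  s.status = 'rejected';
}
```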

Now, regardless of the type of response, users will want some indication that the input was received, computation is taking place, and the answer is forthcoming.

In classic UX, this is a spinner or progress bar. In the case of a traditional chat, this is the other user’s typing indicator… the three dots that tell you your question was received and the other side is composing an answer.

What do we do if the response is taking too long, though? Given humans’ dwindling attention spans, we’ll invariably switch tabs or move to another part of the app, which means we’ll need to be notified when the answer comes through. These notifications should pop up in-app, and be sent via email and Slack. They should also link us back to the record of the conversation, bringing us right to the most recent message.
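
Here’s a rough sketch of that pattern for a browser app: toggle a typing indicator around the model call, and if the user has switched tabs by the time the reply arrives, fire a browser notification that pulls them back. `askBot`, `showTyping`, and `renderReply` are hypothetical stand-ins for your own API call and chat UI.

```typescript
// Show "the bot is on it", then nudge the user back if they've wandered off.

export async function promptBot(
  question: string,
  askBot: (q: string) => Promise<string>,
  showTyping: (on: boolean) => void,
  renderReply: (reply: string) => void,
) {
  showTyping(true); // the chat equivalent of the three-dots typing indicator
  try {
    const reply = await askBot(question);
    renderReply(reply);

    // If the user is no longer looking at this tab, send a browser notification
    // back to the conversation. (Notification permission must already be granted.)
    if (document.hidden && Notification.permission === 'granted') {
      const note = new Notification('Your copilot has answered', {
        body: reply.slice(0, 80),
      });
      note.onclick = () => window.focus(); // jump back to the latest message
    }
  } finally {
    showTyping(false);
  }
}
```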

Fair to say that human-AI interactions require a heck of a lot more than a basic text composer after all…

Training your bot

While so far we’ve focussed mainly on the UI of AI Assistants and chatbots, there’s also a meta layer that we have to explore. Bots are still far from perfect, so they and their operators constantly need feedback to tweak the model. Was the response what you expected? Was it good enough?

Some of this can be gleaned implicitly from the continued conversation with the bot, but most interfaces we’ve seen include some form of 👍/👎 buttons on each response. More advanced implementations include a way to tweak and toggle the AI response more directly, without re-prompting it, like a ‘retry’ button.
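
A minimal version of that feedback loop is just a small event tied to the message being rated. The sketch below assumes a hypothetical `/api/feedback` endpoint; the payload shape is for illustration only, not any specific product’s API.

```typescript
// Record a 👍/👎 rating against a specific bot response.

type Rating = 'up' | 'down';

interface FeedbackEvent {
  messageId: string;   // which bot response the rating applies to
  rating: Rating;
  comment?: string;    // optional free-text "what was wrong?"
}

async function sendFeedback(event: FeedbackEvent): Promise<void> {
  await fetch('/api/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
}

// Example: wire the thumbs buttons on a response to this call.
// sendFeedback({ messageId: 'msg_123', rating: 'down', comment: 'Answered the wrong question' });
```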

There should also be ways to tell it what to use and what to ignore from our previous conversations and/or the context it had, as well as a way for the user to redirect its attention and tune its responses. A sort of ‘hard overwriting’ that gives humans the power to seamlessly merge multiple responses to fit within the specific constraints of the problem we’re trying to solve.

I believe that’s where skeuomorphism and over-confidence in the correctness of AI responses will give way to the true nature of co-pilots and the emergence of a new pattern: using AI as an exploratory tool that generates several paths and options based on our prompts and feedback, helping it help us create the solution.

This will require more than just a whole new UX language. The people using these AI Assistants will have to be educated and trained on how to interact with these widgets and to adopt that mental model. I’m especially excited to see how this evolves.

Which brings me to another (mostly unexplored) area I’m particularly interested in and passionate about…

Enter multiplayer collaboration

So far every AI bot experience I’ve encountered is single-player. That is, a single human prompts, receives responses, and interacts with the bot. But this isn’t how we work in real life.

In most B2B applications, multiple people are working together, giving each other feedback, bouncing ideas off of each other, sharing work, and asking for help and approval. Today this happens in products like Google Docs and Figma that have collaboration built-in, via Slack, and over email. Why not, then, have the co-pilot work as a shared resource that everyone can work with together?

Midjourney was the first to experiment with this. In an interview with Ben Thompson, David Holz explains why the Midjourney experience would not work at all if it were just one person talking to a chatbot in a room by themselves, and why collaboration between humans and bots doesn’t “devolve into insanity and hate speech and slurs”.

“There’s a bunch of people in a room and ideas are swirling around the room. And there’s something to talk about – a bot generating incredible images every few seconds. In this environment…all of a sudden… complete strangers, they go, ‘Dog.’ And someone else goes, ‘Space dog’, ‘Space dog with lasers’, ‘Space dog with lasers and angel wings’.”

This creativity can only be tapped when multiple people are participants in the creative process… but it creates a new set of challenges when it comes to the interface and back-end. At the highest level, it needs to be crystal clear when the AI is prompted or invited to participate and contribute, and when it should stay silent. From a UI perspective, I think a button or /action to ‘turn this into a prompt’ will emerge as a pattern here.
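
As a sketch of that idea: in a shared channel, the bot only responds when it’s explicitly invited, whether by an @-mention, a slash command, or a ‘turn this into a prompt’ action on an existing message. The names below (`copilot`, `/prompt`) are assumptions for illustration, not a specific product’s conventions.

```typescript
// Decide whether a shared copilot should respond to a given group-chat message.

interface ChatMessage {
  author: string;
  text: string;
  promotedToPrompt?: boolean; // set by a "turn this into a prompt" button
}

const BOT_NAME = 'copilot'; // whatever handle your bot listens for

function shouldBotRespond(message: ChatMessage): boolean {
  if (message.promotedToPrompt) return true;
  if (message.text.startsWith('/prompt ')) return true;
  if (message.text.includes(`@${BOT_NAME}`)) return true;
  return false; // otherwise the bot stays silent and lets the humans talk
}
```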

Key takeaways

Because the goal of this guide is to help other PMs build the best AI co-pilot, I’ve summarized the most important takeaways for you here. But remember! I am still learning along with all of you. I’ll continue to update this post as the UX of AI is more widely researched and discussed. And my inbox is always open for musings and feedback.

  1. At the very least, label your bot as a bot. You don’t have to ‘humanize’ it any further with a name or persona…but you can. It’s a choice you’ll have to make based on your product, and the functionality of your co-pilot.
  2. We know one thing for sure: users will input information into a chat interface. It could be text, attachments, or a transcription from a voice note. Whatever it is, make it comfortable and real-time, and let them easily share context with the bot (like what they’re looking at within your app and its current state). That said, users will need a way to intuitively control what the bot remembers, forgets, and retries, and they’ll need a way to tweak its responses until they’re satisfied.
  3. Humans are impatient, and thanks to the UX in apps like iMessage and WhatsApp, we’ve become accustomed to chat-like behavior. That means you’ll need some sort of typing indicator or progress bar that shows the bot is ‘on it’ and processing. And in case the response takes more than a few seconds and the user has turned their attention elsewhere, you’ll also need notifications that bring them back to the persisted conversation.
  4. Bots need to be capable of more than just text responses. Maybe it points to or summarizes changes it’s about to make in the app on your behalf. Maybe it creates and shares images, code snippets, charts, or spreadsheets. In these cases, interactive buttons that allow users to approve and even undo actions will be essential.
  5. If multiple users are going to interact with the co-pilot – which is probably eventually how the best co-pilots will behave – then you’ll need a way to ensure the bot knows what it should respond to, and what it shouldn’t respond to.

There you have it! A comprehensive guide to AI chatbot interfaces, from one product-obsessed person to another. It’s worth mentioning that the stellar team at Cord has built an AI Chat Interface with all the bells and whistles. Use our SDK to create a slick, intuitive front-end for any integration you want to build. Minimal code required.

And if you want to take an even deeper dive into design considerations and the technical implementation of an AI Assistant, check out this webinar replay featuring Cord’s CTO and Co-Founder, Jackson Gabbard, and Cord’s Founding Designer, Tom Petty.