'Active listening' for chat interfaces

Pete Kowalczyk

In AI Studio, we’ve been working on AI-enabled chat interfaces.

We recently analysed conversation logs from a classic ‘Retrieval Augmented Generation’ (RAG) system connected to all GOV.UK content. RAG is a technique for using AI to retrieve and generate answers to user questions based on the content you give it access to.
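
To make the pattern concrete, here’s a minimal sketch of the RAG loop: retrieve the content most relevant to a question, then ask a model to answer using only that content. Everything in it, from the tiny corpus and naive ranking to the generate stand-in for a model call, is hypothetical rather than a description of our system.

```python
# A minimal sketch of the RAG pattern: retrieve relevant content,
# then generate an answer grounded in it. The corpus, ranking and
# generate() are illustrative stand-ins, not a real system.

CORPUS = [
    "For most people today, the UK State Pension age is 66.",
    "Plan your retirement income: check your State Pension forecast on GOV.UK.",
]

def generate(prompt: str) -> str:
    # Stand-in for a call to a language model.
    return "(model answer grounded in the retrieved content)"

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank pages by naive word overlap with the question.
    # A production system would use embeddings and a search index.
    words = set(question.lower().split())
    return sorted(
        CORPUS,
        key=lambda page: len(words & set(page.lower().split())),
        reverse=True,
    )[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return generate(
        "Answer the question using only the GOV.UK content below.\n\n"
        f"Content:\n{context}\n\nQuestion: {question}"
    )
```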

From this, we found that trying to meet a user’s request straight away is not always the best thing to do. Jumping straight in with answers can mean a chat agent misinterprets the user’s needs and cannot handle multiple requests at once. These kinds of answers tend to include redundant information, and can lead to dead ends.

A good conversation between 2 people is based on ‘active listening’. This means concentrating on, understanding, and acknowledging what a person is saying, before offering your own thoughts.

We think a good chat interface should use the same principles of active listening to identify the user’s underlying needs or goals before deciding what to do next. And by designing the system around active listening, we reckon lots of useful conversation patterns can emerge.

Allowing user-led and AI-led conversations

We looked at 200 real question-answer pairs, and identified at least 8 ways that active listening could more effectively help the user.

Currently, these are:

  1. Ask clarifying questions before trying to answer.
  2. Prioritise from multiple requests.
  3. If you’re not sure, double-check.
  4. Move the conversation from generic to specific.
  5. Orient the user within their wider goal or journey.
  6. Get a deeper pragmatic understanding.
  7. Offer helpful alternatives.
  8. Get to the point.
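
To give a flavour of what this could look like in practice, here’s a sketch of the 8 patterns expressed as system instructions for a model. The wording is made up for illustration; it is not the prompt we actually use.

```python
# An illustrative sketch of the 8 patterns as system instructions.
# The wording is invented for this example, not our real prompt.

ACTIVE_LISTENING_SYSTEM_PROMPT = """\
Before answering, reflect on what the user is actually trying to achieve.

1. If the request is ambiguous, ask a clarifying question before answering.
2. If the message contains several requests, agree which to tackle first.
3. If you are not sure you have understood, check with the user.
4. Move the conversation from generic answers towards the user's situation.
5. Place your answer within the user's wider goal or journey.
6. Respond to what the user meant, not only what they said.
7. If you cannot help directly, offer a helpful alternative.
8. Keep answers short and to the point.
"""

def build_messages(user_message: str) -> list[dict]:
    # Messages in the common chat format: a system turn carrying the
    # behaviour instructions, followed by the user's message.
    return [
        {"role": "system", "content": ACTIVE_LISTENING_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```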

When appropriate, these patterns shift the experience from user-led to AI-led conversations. They aim to remove the burden of learning and navigating government from the user, and allow the AI system to guide users through their experience.

These behaviours also shift users’ expectations about what the system can do. The system gives signals, or ‘affordances’, that it is capable of doing more than just answering simple questions.

Some examples of active listening

Let’s look at number 5: Orient the user within their wider goal or journey. In this case, the AI can infer the broader goal behind the user’s literal question about the State Pension age.

The AI is aware of the broader goals around retirement, as represented on the GOV.UK step by step guide for planning for retirement.

The image shows 2 side-by-side screenshots of conversations between a user and an AI chat agent. On the left, the user asks, "What is the State Pension age?". The agent responds by saying it doesn’t have specific State Pension age information but provides guidance on where to find it. It mentions that the user can check their State Pension age online through GOV.UK and provides a link to check the State Pension forecast. On the right, the user again asks, "What is the State Pension age?". This time, the agent responds with more specific details: "For most people today, the UK State Pension age is 66," and notes that between May 2026 and March 2028, the pension age will gradually increase to 67. The agent offers a link to check the State Pension forecast on GOV.UK and also offers to help the user plan for retirement by providing options such as checking how much pension the user will get, increasing their pension, checking for financial support, and deciding when to retire.

On the left is a real answer provided by a RAG system. On the right is a directional sketch of a more helpful answer, orienting the user within a wider process.
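
One way this orientation step could work is sketched below, assuming the relevant topic has already been mapped to a step by step guide. The journey data and wording are made up for illustration.

```python
# Illustrative sketch of pattern 5: after giving a direct answer,
# orient the user within the wider GOV.UK journey. The journey data
# is a made-up stand-in for the retirement step by step guide.

JOURNEYS = {
    "state pension age": (
        "planning for retirement",
        [
            "check how much State Pension you could get",
            "find out how to increase your pension",
            "check what financial help you could get",
            "decide when to retire",
        ],
    ),
}

def orient(question: str, direct_answer: str) -> str:
    # Append the wider journey when the question matches a known topic.
    for topic, (journey, steps) in JOURNEYS.items():
        if topic in question.lower():
            options = "\n".join(f"- {step}" for step in steps)
            return (
                f"{direct_answer}\n\n"
                f"This is one part of {journey}. I can also help you:\n{options}"
            )
    return direct_answer

print(orient(
    "What is the State Pension age?",
    "For most people today, the UK State Pension age is 66.",
))
```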


Now, let’s look at number 6: Get a deeper pragmatic understanding. If we explicitly tell the system to work out the user’s underlying goals or needs, it can then determine what it can do to help with them.

This forces the AI to distinguish the semantic meaning (what the user said) from the pragmatic meaning (what the user meant). In the example below, the AI does not have access to the accounts. But it can help the user find the accounts, which will ultimately help the user get their task done.

The image shows 2 side-by-side screenshots of conversations between a user and an AI chat agent. On the left, the user asks, "What are the latest accounts for the 6th Tonbridge Hadlow Scouts charity?". The agent responds that it cannot answer the question directly but suggests that the user may find the information by searching on GOV.UK. The agent also provides additional options to check Freedom of Information (FOI) responses or make an FOI request to find out if the question has already been answered. On the right, the user again asks about the latest accounts for the 6th Tonbridge Hadlow Scouts charity. The agent responds with more details: it explains that access to the charity’s accounts depends on the user’s role. It states that anyone can access general accounts information, such as total income and expenditure, through the charity register on GOV.UK. The agent also lists the specific accounts that trustees automatically have the right to access, including management accounts, budgets, annual accounts, and cashflow projections. It offers further help if the user provides more details to assist in finding the information they need.

On the left is a real answer provided by a RAG system. On the right is a directional sketch of a more helpful answer, telling the user how to get what they need.
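
Sketched below, under the same caveats as before, is one way to prompt for this: a two-step chain in which the model first states the user’s underlying goal, then answers with that goal in mind. The generate function is again a hypothetical stand-in for a model call.

```python
# Sketch of pattern 6 as a two-step chain: first extract the
# pragmatic meaning (what the user meant), then answer in service
# of that goal. generate() is a stand-in for a language model call.

def generate(prompt: str) -> str:
    # Stand-in for a call to a language model.
    return "(model response)"

def answer_with_understanding(question: str) -> str:
    # Step 1: semantic in, pragmatic out.
    goal = generate(
        f"A user asked: '{question}'.\n"
        "In one sentence, state what the user is ultimately trying to do, "
        "not just what they literally asked."
    )
    # Step 2: answer in service of the inferred goal.
    return generate(
        f"The user asked: '{question}'.\n"
        f"Their underlying goal appears to be: {goal}\n"
        "Help with that goal. If you cannot do the task directly, "
        "explain how the user can get it done themselves."
    )
```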


Designing by building

If we instruct the AI system to use active listening, reflecting on what the user’s underlying goal or need is, we think it will be better able to help the user.

But working with a proactive system poses new design challenges. For example, how can we allow for free exploration and task completion in the same experience? And how might we offer better personalisation without adding more steps for the user to complete?

To approach this, we’re designing by building.

By working with AI prototypes, it’s easier for us to see where interactions work or where they break. It also helps us find implementation challenges earlier, and helps bring our multi-disciplinary team together around a shared ideation space.

Get in touch

If you’re also working on AI-enabled chat systems, we’d like to share experiences with you.

Have you seen a similar need? What is your approach to ensuring users’ underlying needs are being met, and how do you measure that experience?

You can email us at govuk-ai@dsit.gov.uk.