How might AI agents make work visible?
An AI agent is different from an AI chatbot because it doesn’t just chat – it can, with a user’s permission, act on their behalf. Interactions with public services often carry high stakes for people, so we need our agents to act in a way that is worthy of the user’s trust and confidence.
When working with people, advisers and helpers will often make their processes and decision-making visible. They sketch, list, and point to things. This is how work moves from being something nebulous and closed, into something that can be interrogated, refined or refused – collaboratively, with the user.
In our recent user research into how real people work to support others, we attended some Citizens Advice sessions to generate ideas about what this could look like in practice. In these sessions, we observed advisers printing out plans and laying them on the table – not as outcomes, but as thinking made visible, something the person could point to and question.
So, how might an AI agent’s tasks be made visible in a similar way, so that they can be understood, questioned, or – if needed – cancelled?
How might an AI agent represent tasks?
We’re exploring the idea of a task as a basic unit of work – one clear action that an AI agent could, with the user’s permission, carry out on their behalf. It might be renewing a document, updating an address, or booking an appointment.
Wrapping actions into recognisable, visible units like this is one possible way of showing what is happening, what has been understood, and what next steps the user or agent needs to take.
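One way we might sketch a task as a unit of work is as a small data structure with an explicit status that the user can always see and revoke. This is an exploratory sketch, not an implementation – the names (`Task`, `TaskStatus`, `cancel`) are all hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class TaskStatus(Enum):
    PROPOSED = "proposed"      # the agent has suggested it; nothing has happened yet
    APPROVED = "approved"      # the user has granted permission
    IN_PROGRESS = "in progress"
    DONE = "done"
    CANCELLED = "cancelled"    # the user has refused or withdrawn permission


@dataclass
class Task:
    """One clear action the agent could carry out with the user's permission."""
    title: str                 # e.g. "Renew driving licence"
    status: TaskStatus = TaskStatus.PROPOSED

    def cancel(self) -> None:
        """The user can refuse or withdraw a task at any point."""
        self.status = TaskStatus.CANCELLED


task = Task("Renew driving licence")   # starts as a proposal, not an action
task.cancel()                          # the user can say no
```

Keeping the status explicit like this is what would let an interface show, at a glance, what the agent is proposing versus what it is actually doing.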

How might users see more details about a task?
At first glance, tasks can appear simple. But each one may represent something more complex and involved.
Any unit of work an agent proposes should be examinable, allowing users to look closely before deciding whether to go ahead.
One option might be to allow users to select a task to reveal a fuller description – what is required, what the consequences are, and what to expect.
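The fuller description could itself be structured so the interface can present it consistently. As a hypothetical sketch (none of these field names come from a real system):

```python
from dataclasses import dataclass, field


@dataclass
class TaskDetail:
    """Fuller description revealed when the user selects a task."""
    summary: str
    requires: list[str] = field(default_factory=list)      # what the user must provide
    consequences: list[str] = field(default_factory=list)  # what will change if it runs
    what_to_expect: str = ""                                # timings, follow-ups, etc.


detail = TaskDetail(
    summary="Update your address with the licensing agency",
    requires=["new address", "proof of residence"],
    consequences=["official records will show the new address"],
    what_to_expect="a confirmation letter within two weeks",
)
```

Separating "what is required" from "what the consequences are" mirrors the questions a user would want answered before granting permission.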

How might progress be made visible?
If an AI agent were managing multiple tasks over time, users would need a clear way to see progress at a glance.
One approach to this might be a kanban-style view that lets users see what is moving, what is waiting, and what has been completed.
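A kanban view is, in essence, just tasks grouped into columns by status. A minimal, self-contained sketch (the task representation and status names here are hypothetical):

```python
from collections import defaultdict

def kanban_columns(tasks: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group (title, status) pairs into kanban-style columns by status."""
    columns: dict[str, list[str]] = defaultdict(list)
    for title, status in tasks:
        columns[status].append(title)
    return dict(columns)


tasks = [
    ("Renew passport", "in progress"),
    ("Book appointment", "waiting"),
    ("Update address", "done"),
]
# kanban_columns(tasks) groups these into "in progress", "waiting" and "done" columns
```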

How might an AI agent be helpfully proactive?
An AI agent capable of understanding an overall process could spot what hasn’t happened yet – this creates an opportunity for it to proactively support users, nudging them before a missed step causes a delay.
For example, for a user who is moving house, it could flag that a survey had not yet been scheduled and that the delay could cause problems later.
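At its simplest, this kind of proactive check is a comparison between the steps a process expects and the steps completed so far. A hypothetical sketch (the step names are invented for illustration):

```python
# The steps a house move might involve, in the order they are expected.
HOUSE_MOVE_STEPS = [
    "accept offer",
    "instruct solicitor",
    "schedule survey",
    "exchange contracts",
]

def missing_steps(expected: list[str], completed: set[str]) -> list[str]:
    """Return the expected steps that have not yet been completed, in order."""
    return [step for step in expected if step not in completed]


missing_steps(HOUSE_MOVE_STEPS, {"accept offer", "instruct solicitor"})
# → ["schedule survey", "exchange contracts"]
```

Anything the check returns is a candidate for a nudge – something the agent could surface to the user before the gap causes a delay.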

Making tasks visible: what comes next?
For users to grant permission to AI agents to act on their behalf, it’s essential that they have a clear picture of what’s being proposed, what’s being actioned, and what’s coming up.
The ideas shared here are a starting point: they help us to imagine and explore the future of AI-enabled government services, and to identify the dependencies of that work.