What human agency can teach us about designing agentic AI systems
Over the past few months in AI Studio, we’ve been exploring what it means to design AI-enabled agents in the context of government. Unlike traditional AI chatbots, AI agents could – with permission from users – carry out tasks on people’s behalf.
Although this is a new way of using digital technology, people already rely on human agents all the time in real life to:
- advise us
- act on our behalf
- help us navigate complexity
- share responsibility when the cost of getting things wrong feels consequential
For example, we might enlist the support of tutors if our children need extra help at school, or specialist charity workers to help with health-related, legal or financial issues.
Rather than simply speculating on what AI agents might be able to do, we’ve worked with an innovation studio to carry out research into how agency already operates in the non-digital world.
We spoke to carers, advisers, teachers, birth doulas, charity workers, and informal networks to get an understanding of the challenges, opportunities and dependencies that come with delegating to agents.

We looked at how:
- trust is established
- action is prepared for
- work is made visible and interruptible
- responsibility is shared or withdrawn
- messy reality is handled in practice
Through this research, we saw the same patterns recur – not solutions, but stable ways of making agency safe enough to act. These patterns matter just as much when systems act on someone’s behalf as they do when another human is involved.
For example, we discovered that agents must:
- be addressable - agents must be discoverable and identifiable as a trusted and legitimate source of authority and support
- be able to represent discrete tasks - agents must be able to translate vague intent into identifiable units of work, because plans, notes and artefacts allow thinking to be inspected, refined, paused or abandoned before commitment
- separate planning from doing - in real life, exploration often precedes action, and agents must be able to offer users distinct times and spaces to plan and act
- help users plan ahead - agents can help users with future-planning by providing scenarios and simulations to consider and compare
- be collaborative - agents need to support the reality that many interactions with government involve multiple people, with parents helping children, carers supporting others, and trusted people and organisations stepping in when needed
- set and uphold boundaries - privacy and data sharing policies between users and other systems need to be clearly established and communicated, and agents must be able to clearly communicate the limits of their capabilities
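Several of these patterns can be expressed directly in how an agent’s work is modelled. As a minimal sketch (the names and states here are illustrative assumptions, not a reference to any real system), a delegated task could be a discrete, inspectable object whose plan can be refined or abandoned, and where acting is only possible after explicit user confirmation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):
    DRAFT = auto()      # plan is still being shaped; nothing has happened yet
    CONFIRMED = auto()  # the user has reviewed the plan and approved it
    DONE = auto()       # the agent has carried out the confirmed plan
    ABANDONED = auto()  # the user withdrew before anything was committed

@dataclass
class AgentTask:
    """A discrete, inspectable unit of work delegated to an agent."""
    intent: str                                     # the user's goal, in their own words
    steps: list[str] = field(default_factory=list)  # the agent's proposed plan
    state: TaskState = TaskState.DRAFT

    def plan(self, steps: list[str]) -> None:
        # Planning is separate from doing: the plan can only change before sign-off
        if self.state is not TaskState.DRAFT:
            raise RuntimeError("plan can only change before confirmation")
        self.steps = steps

    def confirm(self) -> None:
        # Explicit user sign-off marks the boundary between planning and acting
        if not self.steps:
            raise RuntimeError("nothing to confirm: no plan yet")
        self.state = TaskState.CONFIRMED

    def abandon(self) -> None:
        # Work stays interruptible: the user can withdraw at any point
        self.state = TaskState.ABANDONED

    def act(self) -> str:
        # The agent only ever acts on a confirmed plan, never on a draft
        if self.state is not TaskState.CONFIRMED:
            raise RuntimeError("cannot act: task not confirmed by the user")
        self.state = TaskState.DONE
        return f"Carried out {len(self.steps)} step(s) for: {self.intent}"
```

The design choice the sketch makes explicit is that “doing” is unreachable without a visible, user-approved plan in between – the same separation of planning from action that we observed in the non-digital world.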
These findings are not exhaustive, but they provide helpful signposts for the way forward for agentic AI in government.
In the coming weeks, we’ll be posting about some of these insights in more detail.