
In the two previous articles in this series, we have seen how AI increasingly functions as actual labor – illustrated through the McKinsey example – and what happens when this labor encounters real volume, as Klarna has experienced. Once we accept that machines can perform work, a new and more challenging question arises:
Who is actually leading the work when an increasing share of it is no longer carried out by humans?
Most stop at the pilot stage
Most organizations today have at least one AI project. The presentations look good, the demos work, and the intentions are good.
That's how it should be. Pilots are necessary to learn, test, and understand what actually works in practice.
The challenge only arises when the pilot is left on its own – without clear ownership of how the lessons should be carried into operations.
Not necessarily because the technology is inadequate, but because no one has taken responsibility for how AI should actually work over time. The pilots do not lack potential. They lack anchoring. And the operation lacks leadership.
It is within this gap between insight and implementation that many organizations are now struggling.
Work happens at meeting points – not in models
Work rarely occurs where we have built the most advanced systems. It happens during clarifications, handovers, and conversations.
Often on the phone.
It is in these real-time meetings that capacity is tested. This is where queues form, and this is where the difference between availability and friction is immediately felt. For AI to function as genuine labor, it must be present where the work actually happens – not just where automation is easiest.
When these questions become concrete, they rarely surface in strategy documents. They arise where the work actually happens – in conversations, clarifications, and escalations.
Voice – the last interface for friction
Voice is not just a channel. It is an organizational interface.
This is where the business meets customers, partners, and the market in real time when something is unclear or urgent. We may accept a two-hour wait for an email reply, but we rarely accept two minutes in a phone queue.
When AI takes over the volume in voice-based interfaces, the dynamics change fundamentally. AI is an unparalleled transaction machine, but a mediocre relationship builder.
For this to work over time, it requires an architecture that understands the difference between transaction and relationship – and knows when the machine should step aside.
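To make that distinction concrete, here is a minimal sketch of what such an escalation policy could look like. All names, signals, and thresholds are invented for illustration; the point is only that the boundary between machine and human becomes an explicit, reviewable decision rather than an accident.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    AI = auto()      # the machine continues the conversation
    HUMAN = auto()   # hand the conversation over to a person

@dataclass
class Conversation:
    # All fields are hypothetical signals a voice platform might expose.
    intent_confidence: float  # 0..1, how sure the model is about the request
    is_repeat_contact: bool   # the caller has been in touch about this before
    emotional_load: float     # 0..1, e.g. from tone-of-voice analysis
    transactional: bool       # balance check, order status, and similar

def route(conv: Conversation) -> Route:
    """Toy escalation policy: the AI keeps the transactions,
    people get the relationships and the uncertainty."""
    if not conv.transactional:        # relationship work goes to people
        return Route.HUMAN
    if conv.intent_confidence < 0.8:  # the machine should know what it does not know
        return Route.HUMAN
    if conv.is_repeat_contact or conv.emotional_load > 0.6:
        return Route.HUMAN            # friction signals: step aside early
    return Route.AI
```

A policy like this can be wrong, but it can be discussed, audited, and changed – which is precisely what an implicit one cannot.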
When AI fails – and why it is often a leadership problem
In recent months, the debate around Klarna has given us an important realization. After the initial euphoria over massive automation came the correction.
Some customers found the solutions too rigid. Humans had to be brought back in more quickly in complex situations.
Some interpret this as proof that frontline AI doesn't work. A more accurate picture is that AI has been given the wrong role.
When AI is introduced primarily as an isolated cost cut, without a clear division of labor, ownership, and escalation logic, quality will decline. Not because the technology is useless, but because the interaction is poorly designed.
AI needs a safety valve. Not in the form of more rules, but in the form of a deliberate architecture that knows when human judgment must take over.
From tools to operational architecture
It is in this space between people, systems, and responsibility that we at Threll.ai have chosen to build.
Not just another tool in the toolbox, but an operational architecture – an intelligent switchboard where AI and humans collaborate to handle work in real time.
When AI handles the volume of repetitive inquiries, the switchboard acts as a safety valve. This is where conversations are escalated, context is handed over, and humans are brought in when assessment, responsibility, and relationships are truly needed.
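What "handing over context" can mean in practice is sketched below. The structure and field names are hypothetical, but they illustrate the minimum a human agent should receive when a conversation is escalated, so the caller never has to start over.

```python
from dataclasses import dataclass

@dataclass
class Handover:
    """Hypothetical escalation payload: what a human agent sees
    the moment the switchboard routes a conversation to them."""
    caller_id: str
    reason: str                   # why the machine stepped aside
    transcript: list[str]         # what has already been said
    attempted_actions: list[str]  # what the AI already tried
    suggested_next_step: str      # the machine's best guess, marked as a guess

def escalate(h: Handover) -> None:
    # A real system would push this to an agent desktop;
    # here we simply print the briefing.
    print(f"Escalating {h.caller_id}: {h.reason}")
    print("Already tried:", ", ".join(h.attempted_actions))
    print("Suggested next step:", h.suggested_next_step)
```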
This is not a retreat from AI. It is a maturing of how AI is managed.
The new leadership responsibility
When AI becomes part of the workforce, management cannot be reduced to an IT issue or delegated away.
A new leadership responsibility arises, one that in practice involves four things:
- Mandate: What is the AI allowed to decide on its own?
- Escalation logic: When – exactly – should a human step in?
- Division of responsibility: Who owns the outcome when the work is performed by a machine?
- The whole: How do human and digital capacity actually fit together over time?
These are not theoretical questions. They determine whether AI becomes a real resource – or a source of friction. One way to make the answers concrete is sketched below.
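A minimal sketch, assuming nothing about any particular platform: the four answers written down as data, so they can be reviewed and changed like any other management decision. Every name and limit here is invented for illustration.

```python
# A hypothetical operating policy for an AI workforce, written down
# as data. The point is not the format, but that every answer is
# explicit, reviewable, and owned by someone.
OPERATING_POLICY = {
    "mandate": {                 # What is the AI allowed to decide on its own?
        "may_resolve": ["order_status", "address_change"],
        "must_not_resolve": ["complaints", "refunds_over_100_eur"],
    },
    "escalation": {              # When, exactly, should a human step in?
        "max_failed_clarifications": 2,
        "always_escalate": ["cancellation", "legal_questions"],
    },
    "responsibility": {          # Who owns the outcome of machine work?
        "result_owner": "head_of_customer_operations",
        "review_cadence": "weekly",
    },
    "capacity": {                # How do human and digital capacity connect?
        "human_agents_on_shift": 6,
        "ai_concurrency_limit": 200,
    },
}
```

The format matters far less than the act of writing it down: an unwritten policy cannot be owned, and what cannot be owned cannot be led.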
A Friday reflection
We like to talk about AI as something we "test." It feels safer than admitting that we need to redesign how work is actually organized.
But the moment capacity is no longer the same as staffing, testing alone is not enough. Leadership, too, must be rethought.
The question is not whether AI will become part of the workforce.
The real question is who takes responsibility for the whole – the interaction between people, machines, and the work that happens in between.
This is the final part in a three-part series about AI as workforce, volume, and management.