Something Shifted This Week

Two things happened in the same week that, taken together, tell a story about where we are in the AI era. OpenAI published a sweeping economic policy document proposing robot taxes, public wealth funds, and a four-day workweek. And Anthropic quietly opened its most powerful agentic feature — Claude Cowork — to anyone paying $20 a month.

On the surface, these look like separate events. One is a policy statement from a company worth $852 billion. The other is a product update from a competitor. But they are both responses to the same underlying pressure: AI is concentrating capability and wealth at a speed that existing institutions were not designed to handle, and the people building these systems know it.

This issue is about that pressure — and what it means for the leaders, organisations, and workers who are living inside it right now.

The question is no longer whether AI will reshape the economy. It is who gets to shape that reshaping — and whether the answers will come from governments, corporations, or the market itself.

A New Deal for the Intelligence Age

OpenAI's policy document, released on 6 April 2026, is remarkable not because its proposals are radical — most of them are not — but because of who is making them. The company that has done more than any other to accelerate AI deployment is now publicly arguing that the economic consequences of that acceleration require structural intervention.

The framework centres on three goals: distributing AI-driven prosperity more broadly, building safeguards to reduce systemic risks, and ensuring widespread access to AI capabilities so that economic power does not become too concentrated. To achieve these goals, OpenAI proposes shifting the tax burden from labour to capital — higher taxes on corporate income, AI-driven returns, and capital gains at the top. It also floats a robot tax: the idea, first proposed by Bill Gates in 2017, that an automated system replacing a human worker should contribute the same amount to the social safety net as the human it displaced.

The most striking proposal is a Public Wealth Fund — a mechanism that would give every American an automatic stake in AI companies and infrastructure, distributing returns directly to citizens regardless of whether they hold market investments. It is, in effect, a proposal to treat AI as a public utility and distribute its dividends like a national resource.

| Proposal | Mechanism | What It Addresses |
| --- | --- | --- |
| Robot Tax | Automation pays equivalent of displaced worker's payroll tax | Erosion of the social safety net tax base |
| Capital Gains Increase | Higher taxes on AI-driven corporate profits and investment returns | Wealth concentration at the top |
| Public Wealth Fund | Government stake in AI infrastructure, dividends to all citizens | Exclusion of non-investors from AI gains |
| Four-Day Workweek | Subsidised reduction in working hours with no pay loss | Labour displacement and work-life balance |
| Portable Benefits | Benefits that follow workers across jobs and platforms | Job insecurity from AI-driven labour market shifts |

The document is careful to frame these as proposals rather than commitments. And it arrives alongside a more direct political push: OpenAI president Greg Brockman has donated millions to President Trump, and tech billionaires have funnelled hundreds of millions into super PACs supporting light-touch AI regulation. The tension between the policy document and the political donations is not lost on observers. OpenAI is simultaneously arguing for redistribution and funding the people most likely to resist it.

"We are entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production." — OpenAI, April 2026

Claude Cowork: The $20 AI Colleague

While OpenAI was publishing policy documents, Anthropic was shipping product. In January 2026, Anthropic launched Claude Cowork — an agentic feature that gives Claude direct access to your desktop, file system, and applications. It can handle tasks autonomously: formatting documents, organising files, synthesising research, managing email workflows, and connecting to tools like Google Drive, Gmail, DocuSign, and FactSet.

Initially priced at $100 per month for the Max tier, Anthropic made a significant move in the same week: opening Cowork to all Pro subscribers at $20 per month. The decision was not purely altruistic — Pro subscribers hit usage limits faster because Cowork consumes significantly more compute than standard conversations. But the pricing signal matters. Anthropic is explicitly positioning its most powerful agentic capability as something accessible to individual knowledge workers, not just enterprise teams with procurement budgets.

Simultaneously, Anthropic expanded the free tier in February 2026, adding features previously reserved for paid subscribers — Projects, Artifacts, and app connectors — to non-paying users. And in April 2026, Anthropic announced Claude Mythos, its newest generation model, described by the New York Times as a development that is "a terrifying warning sign" precisely because of Anthropic's characteristic restraint in deploying it.

What Claude Cowork Can Do

- Autonomous task execution: describe an outcome, step away, return to finished work
- File system access: read, write, and organise files across your desktop
- App integrations: Google Drive, Gmail, DocuSign, FactSet, and more
- Scheduled tasks: set recurring workflows that run without your involvement
- Research synthesis: multi-source research compiled into structured documents
- Workflow automation: end-to-end process handling across multiple tools

The significance of Cowork is not just functional. It represents a shift in what AI means for the individual worker. Previous AI tools augmented specific tasks — writing, coding, summarising. Cowork augments the worker's entire workflow. The question it raises is not "can AI help me do this?" but "should I be doing this at all, or can Claude handle it while I focus on something else?"

Access Is Not the Same as Equity

The word "democratisation" appears constantly in AI discourse. OpenAI uses it. Anthropic uses it. Every major lab uses it. And in a narrow technical sense, it is accurate: the cost of accessing powerful AI has dropped dramatically, and continues to fall. A $20 monthly subscription now gives an individual worker access to capabilities that, two years ago, required an enterprise contract and a dedicated implementation team.

But access and equity are not the same thing. Democratisation of access means that the tools are available. It does not mean that the benefits are distributed. And it does not mean that the risks are shared equally.

Consider what is actually happening. The workers most likely to have their roles disrupted by AI — those in routine cognitive tasks, data processing, customer service, and administrative functions — are not the same workers who are most likely to benefit from AI augmentation. The workers who benefit most from tools like Claude Cowork are those with the education, the organisational context, and the time to learn how to use them effectively. The workers most at risk are those for whom AI is not a tool but a replacement.

Democratisation of access without democratisation of benefit is not equity. It is the appearance of equity — and it is one of the most important distinctions leaders need to hold right now.

A survey of people from 60 countries published in April 2026 found that a majority would prefer jobs to a guaranteed income. This is not a rejection of AI. It is a statement about identity, purpose, and the social function of work that goes well beyond economics. Any framework for AI governance that treats this as a simple redistribution problem is missing the point.

Should AI Be Taxed? The Honest Answer Is: It Depends on What You're Trying to Do

The robot tax debate is not new. Bill Gates proposed it in 2017. It was largely dismissed at the time as premature. It is no longer premature. Sam Altman and investor Vinod Khosla have both argued publicly that AI will break the existing tax code — and that the fix is to eliminate income tax for most Americans by replacing it with taxes on AI-generated corporate profits and capital gains.

The logic is straightforward: if AI replaces labour, and labour is what the current tax system taxes, then the tax base erodes as AI scales. The solution is to tax the thing that replaced labour — the capital and the systems that generate profit without generating payroll.
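That erosion argument can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical numbers (a made-up payroll tax rate, average wage, and headcount) purely to illustrate the mechanism: under a payroll-only tax base, revenue falls one-for-one with automation, while a robot tax pegged to the displaced worker's contribution holds the base level.

```python
# Illustrative sketch of the tax-base argument. All figures are
# hypothetical and chosen only to show the mechanism, not to model
# any real tax code.

PAYROLL_TAX_RATE = 0.153   # hypothetical combined payroll tax rate
AVG_WAGE = 60_000          # hypothetical average annual wage

def tax_base_revenue(workers: int, automated: int, robot_tax: bool) -> float:
    """Annual revenue from payroll taxes on `workers`, optionally
    topped up by a robot tax on each automated role equal to the
    displaced worker's payroll contribution."""
    per_worker = AVG_WAGE * PAYROLL_TAX_RATE
    revenue = workers * per_worker
    if robot_tax:
        # Each automated role contributes what its human did.
        revenue += automated * per_worker
    return revenue

# 1,000 roles, then 300 of them automated away:
before = tax_base_revenue(1_000, 0, robot_tax=False)
after_no_tax = tax_base_revenue(700, 300, robot_tax=False)
after_with_tax = tax_base_revenue(700, 300, robot_tax=True)

# Without a robot tax, revenue drops 30%; with one, it is unchanged.
```

The point of the toy model is the shape of the curve, not the numbers: whatever the real rates, a base tied solely to payroll shrinks in proportion to automation, which is the gap the robot tax proposal is designed to close.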

But taxation is not just a revenue mechanism. It is a signal. It shapes behaviour. A robot tax could slow AI adoption in ways that reduce productivity gains. A capital gains increase could reduce investment in AI infrastructure. These are not arguments against taxation — they are arguments for designing it carefully, with a clear view of what you are trying to achieve.

| Tax Approach | Potential Benefit | Risk |
| --- | --- | --- |
| Robot Tax | Maintains social safety net funding as labour declines | May slow adoption; hard to define 'robot' at scale |
| AI Profit Tax | Captures value at the point of generation | Could reduce R&D investment; complex to enforce globally |
| Capital Gains Increase | Redistributes returns from AI-driven market gains | Politically contentious; may reduce venture investment |
| Public Wealth Fund | Universal stake in AI growth regardless of market access | Requires governance infrastructure that doesn't yet exist |
| No New Taxes | Maximises short-term growth and investment | Accelerates inequality; erodes the social contract |

What is striking about OpenAI's document is not the specific proposals — most of which are vague enough to be politically flexible — but the acknowledgement that the current trajectory is unsustainable. A company worth $852 billion, founded on the premise of benefiting all of humanity, is now publicly stating that without structural intervention, AI will concentrate wealth in ways that undermine the social contract. That is a significant admission. And it should be taken seriously, even by those who are sceptical of the company's motives.

The Questions That Matter Inside Your Organisation

The policy debate about AI taxation and public wealth funds will play out over years, in legislatures and boardrooms that most of us will not be in. But the democratisation question is not only a macro-economic one. It is also an organisational one — and it is one that leaders can act on right now.

When Anthropic opens Claude Cowork to $20 subscribers, it is making a bet that AI capability should not be gated by organisational budget. But inside most organisations, that is exactly what happens. The teams with the biggest budgets, the most technical staff, and the strongest executive sponsorship get the most AI investment. The teams that would benefit most — the ones doing high-volume, repetitive cognitive work — often get the least.

The democratisation question inside your organisation is: who has access to AI capability, and who doesn't? Not in theory — in practice. Which teams have been trained? Which workflows have been redesigned? Which roles have been given the time and support to learn how to work with AI effectively?

1. Who in your organisation is benefiting from AI — and who is being displaced by it? Access to tools is not the same as benefit from tools. Map both sides of this question honestly.

2. Are you measuring AI's impact on the people doing the work, or only on the outputs? Productivity metrics miss the human cost of poorly managed AI transitions.

3. What is your organisation's position on AI-driven job displacement — and have you communicated it? Silence on this question is itself a communication. People are already drawing their own conclusions.

4. If AI generates significant productivity gains in your organisation, where does that value go? The macro debate about wealth distribution starts with micro decisions inside organisations.

5. Are you building AI literacy across your entire workforce, or only in technical teams? The gap between AI-literate and AI-illiterate workers will be one of the defining inequalities of the next decade.

The macro debate about who owns the intelligence will be decided by governments and corporations. The micro version of that debate — who benefits from AI inside your organisation — is being decided by you, right now, through the choices you are making about investment, training, and deployment.

OpenAI's policy document invokes the New Deal as a precedent — the moment when a previous era of economic disruption was met with new institutions, protections, and expectations about what a fair economy should provide. Whether or not you believe that analogy holds, the underlying point is worth sitting with: the transitions that go well are the ones where the people with power actively choose to share it. The transitions that go badly are the ones where they don't.

The intelligence age is here. The question of who owns it — and who benefits from it — is the defining leadership question of the next decade. It starts in policy. But it also starts in your next team meeting, your next budget cycle, and your next decision about where AI investment goes.