For the last year, you couldn’t open a browser without someone telling you AI was either going to save the world or end it. Most of that was noise. The people selling AI tools wanted you excited. The people scared of it wanted you worried. Neither was particularly helpful if you were just trying to run a business.
But here we are in 2026, and something more useful is happening. The hype is fading. The dust is settling. And instead of predictions and hot takes, we’re starting to get actual data on what AI is doing in the real world — not what people thought it would do, but what’s genuinely happening, day to day, in real jobs.
The picture that’s emerging is more interesting — and more practical — than the headlines ever were.
What the Research Actually Shows
Anthropic — the company behind the AI model Claude — has been quietly building one of the most useful bodies of research on this. Their Anthropic Economic Index doesn’t rely on surveys or predictions. It analyses millions of real conversations to track what people are actually using AI for at work, how they’re using it, and what that means for different jobs and industries.
Their most recent report, published in March 2026, introduces something they call “observed exposure” — a measure that combines what AI could theoretically do with what it’s actually being used for in professional settings. And the gap between those two things is the most important finding.
AI is nowhere near its theoretical capability in most jobs. There’s an enormous difference between what AI could speed up in theory and what businesses are actually deploying it for. For example, in computer and maths roles — the most AI-exposed category — the theoretical capability covers around 94% of tasks, but real-world usage covers just 33%. In most other job categories, the gap is even wider.
What that tells us is that the “AI is coming for your job” narrative is running years ahead of the reality. The technology might be capable, but adoption is slow, patchy, and constrained by all the things that predictions tend to ignore: regulation, trust, messy data, the cost of getting it wrong, and the simple fact that most businesses don’t change overnight.
Is AI Actually Costing People Their Jobs?
This is the question everyone really wants answered, so let’s be direct about it.
Anthropic’s research looked at unemployment data from the US Current Population Survey, comparing the most AI-exposed workers with those in jobs that have almost no AI exposure. The finding: there’s no systematic increase in unemployment for highly exposed workers since late 2022, when ChatGPT launched. The gap between the two groups is small and statistically insignificant.
That doesn’t mean nothing is happening. There’s suggestive evidence that hiring of younger workers has slowed in AI-exposed occupations. It’s not that people are being fired — it’s that fewer new positions are opening up. For a 23-year-old trying to get their first job in marketing, admin, or tech, that matters. We’ll come back to that.
The World Economic Forum’s Future of Jobs Report projects that 92 million roles will be displaced globally by 2030 — but 170 million new ones created, a net gain of 78 million. Goldman Sachs estimates that the overall effect on US unemployment will be about half a percentage point during the transition, and that the disruption is likely to be temporary — fading within two years, as it has with every previous wave of technology.
None of which means you can ignore it. But it does mean the sky isn’t falling.
The Workers Most Affected Aren’t Who You’d Expect
Here’s something that doesn’t fit the usual narrative. The workers in the most AI-exposed jobs aren’t low-paid or low-skilled. Anthropic’s data shows they’re more likely to be older, female, more educated, and higher paid. People with graduate degrees are nearly four times more represented in the most exposed group than in unexposed jobs. The most exposed workers earn, on average, 47% more than the least exposed.
That flips the usual automation story on its head. Previous waves of technology mostly affected manual, repetitive, blue-collar work. AI is different. It’s going after cognitive tasks — writing, analysis, research, data processing, customer communication — the kind of work that sits in the middle of most white-collar roles.
But: being exposed doesn’t mean being replaced. It means parts of your job are changing. The repetitive cognitive stuff — the first drafts, the data summaries, the standard responses — that’s what AI handles. The judgement calls, the relationship management, the strategic thinking — that stays firmly with humans.
What AI Does Well in a Small Business
In practice, AI in a small business looks pretty unglamorous. It drafts things. It summarises things. It processes things. It answers predictable questions. It pulls patterns from data that would take you ages to find manually.
That means first drafts of emails and proposals that someone then edits. Meeting summaries so nobody has to type them up. Product descriptions generated from a brief. Customer queries handled overnight. Research pulled together in minutes instead of hours.
A UK Government trial found that civil servants using AI tools saved about 26 minutes a day. Scale that to a team of ten and it’s like gaining a part-time staff member. HSBC research shows mid-sized firms embedding AI into core operations are seeing around 4% more revenue per employee — not from cutting people, but from removing bottlenecks.
Those aren’t headline-grabbing numbers. But compounded across months and multiple team members, they add up to real capacity — which is exactly what most small businesses are short of.
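If you want to sanity-check that "part-time staff member" claim yourself, the arithmetic is simple. This sketch uses the trial's 26-minute figure, but the five-day working week and the roughly 20-hours-a-week definition of "part-time" are my own assumptions, not from the research:

```python
# Back-of-the-envelope check on the time-savings claim above.
MINUTES_SAVED_PER_PERSON_PER_DAY = 26   # UK Government trial figure
TEAM_SIZE = 10
WORKING_DAYS_PER_WEEK = 5               # assumption: standard week

daily_minutes = MINUTES_SAVED_PER_PERSON_PER_DAY * TEAM_SIZE
weekly_hours = daily_minutes * WORKING_DAYS_PER_WEEK / 60

print(f"Team saves {daily_minutes} min/day, ~{weekly_hours:.1f} hours/week")
# ~21.7 hours a week: roughly one part-time staff member
```

Around 21 or 22 hours a week of recovered capacity, without hiring anyone.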
The 30% Rule
There’s a framework doing the rounds called the 30% rule, and it’s a useful way to think about where to start.
The idea: AI should handle roughly 30% of your work — the repetitive, rule-based, time-heavy tasks — while humans stay responsible for the 70% that needs judgement, creativity, and real thinking. Some people frame it the other way round (AI does 70% of the grunt work, humans own the critical 30%), but the principle is the same: don’t try to automate everything, and don’t automate anything that matters too much to get wrong.
30% works as a benchmark because it’s big enough to make a genuine difference — that’s a third of your team’s busywork gone — but small enough that you’re not ripping up your processes or making your customers feel like they’re dealing with a machine. And it’s achievable quickly. No transformation programme required. Pick three high-volume, low-risk, repetitive tasks. Point a tool at them. See what happens.
The Anthropic research supports this kind of measured approach. Their data shows that most people are using AI as a thinking partner — 52% of users are working with it collaboratively, not just handing tasks off. The value isn’t in replacing human thinking. It’s in making human thinking faster and better-informed.
Using It Without Losing What Makes You Good
This is where a lot of businesses quietly get it wrong. They let AI do the thinking, rather than using it to support their own.
If you’re publishing AI-generated content without properly reviewing it, your output will read like everyone else’s — because it is. If you’re feeding customer data into tools without understanding where it goes, you’ve created a risk you haven’t measured. If AI is making decisions that affect people and nobody’s checking the output, you’re one bad result away from a problem.
The businesses getting this right treat AI like a very fast, very knowledgeable junior team member who needs supervising. The output is a starting point, not a finished product. The ideas are a prompt for your own thinking, not a replacement for it.
That means: always review before publishing or sending. Know where your data goes — if you wouldn’t stick it on a noticeboard, don’t paste it into a tool you don’t fully understand. And keep your own voice. AI can mimic a tone, but it can’t replicate the specific way your business talks to its customers, or the experience behind your decisions. Use it for the scaffolding. Add the substance yourself.
What About the People Coming Up Behind You?
This is the bit that doesn’t get talked about enough, and it’s the bit I think matters most.
If AI handles the entry-level tasks that graduates traditionally cut their teeth on, how do new people learn the business? How do they build the judgement and context that only comes from doing the work?
The Anthropic research flags this directly. They found suggestive evidence that hiring of younger workers is slowing in AI-exposed occupations. It’s not mass redundancy — it’s a quieter problem. The entry-level jobs that used to be the bottom rung of the ladder are thinning out, because AI can do the data entry, the basic research, the first-draft writing that those roles were built around.
For small businesses, this is actually an opportunity if you approach it right.
The graduates coming into the workforce now grew up with AI. They’re not scared of the tools. What they lack isn’t technical ability — it’s context. They don’t know your industry, your customers, or how your business actually works. And those are things AI genuinely cannot teach. Only your experienced people can do that.
So instead of starting juniors on data entry and filing — work AI now handles — start them closer to the real work. Pair them with senior people earlier. Let them sit in on client meetings sooner. Give them the mentorship and context that turns a graduate into a professional, and let AI handle the busywork that used to fill their first year with tedium they weren’t learning much from anyway.
The businesses that treated entry-level roles as cheap labour for repetitive tasks will struggle, because AI does that cheaper. The businesses that use AI to free up experienced people to actually mentor and develop the next generation — those are the ones that will build stronger teams.
Where to Start
If you’ve read this far and you’re thinking “right, but what do I actually do on Monday morning?” — keep it simple.
Week one: spend five days noticing where your time goes. Not a formal audit — just pay attention to the tasks that eat hours and don’t require much thought. Email, data processing, report formatting, chasing the same information, answering the same questions.
Week two: from that list, pick three that are high volume, low risk, and repetitive. Those are your starting 30%.
Week three: try one AI tool on one of those tasks. ChatGPT, Copilot, Zapier — nothing expensive, nothing complicated. See if it saves time without creating new problems.
Week four: review honestly. Did it help? Was the output good enough? Did your team find it useful or annoying? If it worked, expand. If it didn’t, try a different task or a different tool.
No transformation required. No consultant necessary. Just a sensible, measured approach that lets you build confidence before you build complexity.
The Honest Summary
For over 25 years, I’ve watched every major technology shift arrive with the same pattern: breathless predictions, widespread anxiety, and then quiet, practical adoption by the businesses that focused on what was actually useful.
AI is following the same arc. The data shows it’s changing work — but not in the dramatic, overnight way the headlines suggest. It’s making skilled people faster. It’s taking the drudge out of jobs that were always more than just drudge. And it’s creating a genuine question about how we develop the next generation of workers when the tasks they used to learn on are increasingly done by machines.
The businesses that will do well with AI are the ones asking practical questions: what’s the 30% of our work that’s holding us back? How do we free our people to do what they’re actually good at? And how do we make sure juniors still get the experience and mentorship they need?
The hype is over. The real work — and the real opportunity — is just getting started.