
What AI Exposure Means for the European Job Market

We scored 125 European occupation groups covering 199.6 million jobs for AI exposure, measuring both raw technical capability and practical impact under EU regulation. The gap between these two numbers reveals where the European regulatory framework creates real friction for AI adoption — and where it doesn’t.

  • 199.6M jobs scored across EU-27
  • 1.2 pts average regulatory friction
  • ~€1T delta in exposure-weighted wages

A note on intent

This analysis started with a simple question: What does AI exposure look like when you stop viewing it through an American lens?

Global data proved difficult to assemble. European data was well covered thanks to ESCO, Eurostat, and public legal frameworks. So that’s where this began. The regulatory comparison with the US emerged naturally from the data, not from an agenda.

I’m a founder, builder, and operator—not a researcher or policy expert. This is the most grounded analysis I could produce using public data, primary legal sources, and an open methodology. The data, sources, and code are open. This is not meant to fuel arguments for or against any regulatory approach; draw your own conclusions.

— Philipp Maul, Nexalps

1. The Screen Test
The dividing line is physical presence, not education level.

The single strongest predictor of AI exposure in the European data is not education level, seniority, or even pay. It is whether the work product is purely digital. Occupations where workers spend their days producing text, code, spreadsheets, analyses, and communications — regardless of the credentials required to do so — consistently score in the 7–9 range for technical exposure.

Clerical support workers, as a group, average 8.1 for technical exposure — the highest of any ISCO major group. Keyboard operators score 9.5, general office clerks 8.5, and numerical clerks 8.5. Software developers score 9.0. These are occupations whose entire output passes through a screen.

On the other side of this line sit occupations where physical presence is the product: building frame workers (2.5), blacksmiths (3.5), machinery mechanics (4.0), and personal care workers (3.2). These scores are low not because the work is simple — many physical trades require years of training — but because AI cannot yet be physically present. The constraint is embodiment, not intelligence.

A general office clerk (8.5) faces greater AI exposure than a skilled electrician (3.5), despite the electrician receiving higher pay and more training. If your work passes through a screen, AI can access and potentially replicate it.

2. Education Doesn’t Protect
Degrees correlate with higher exposure, not lower.

The ISCO major groups requiring the most formal education score highest for AI exposure. Professionals (university-level occupations) average 7.0 in technical exposure. Clerical support workers average 8.1. Meanwhile, craft and related trades workers average 4.0, and plant and machine operators average 4.2. Elementary occupations — those requiring no formal qualifications — average just 2.5.

This inverts the pattern of earlier automation waves: past technologies mainly displaced low-skilled manual work, whereas AI bears most heavily on knowledge work, the very roles that require years of formal education. Formal education does not shield workers from AI; more often, more education means more of the work product is digital, and therefore more exposed.

The exception proves the rule. Doctors and nurses need extensive education, but score moderately (4.5) since their work remains inherently physical and interpersonal. Other health professionals score 6.5, as their roles involve more documentation and analysis. Education protects only when it leads to physical practice, not a desk.

3. The Regulatory Buffer
EU regulation reduces practical exposure by 1.2 points on average — but unevenly.

The average occupation group in our dataset scores 5.6 for technical exposure and 4.4 for regulated exposure. This 1.2-point difference quantifies the collective impact of the EU AI Act’s requirements, employment protection laws, works council consultations, and GDPR limits on automation. The gap directly reflects how regulation slows or alters the practical application of AI, compared to its technical capability.

However, this average regulatory impact varies widely by job type. Legislative and senior government roles see the largest gap, at 3.0 points, as nearly all AI tools in these areas are subject to strict regulation. Government professionals in regulatory roles show a similar 3.0-point gap. Education roles, such as teachers, consistently show 2.5-point gaps due to additional restrictions on using AI with students. These examples illustrate how regulation creates larger barriers in sensitive jobs.

At the other end, building finishers, roofers, and agricultural workers show near-zero deltas: regulation adds no friction because there is minimal AI exposure in these domains to regulate. The regulatory buffer is widest where technical exposure is highest and the work involves decisions about people: hiring, evaluation, education, law enforcement, and governance. There, regulation acts as a significant mediator of AI’s impact.

The regulatory buffer is a genuine constraint, but not a barrier. It acts as a speed bump: it slows deployment, raises compliance costs, and forces organisations to justify their AI use, without halting adoption.

The Draghi Report on European competitiveness (September 2024) puts this friction in stark terms. EU private investment in AI reached roughly €8 billion in 2023 — against $68 billion in the United States. Only 11% of EU firms have adopted AI, well short of the EU’s stated target of 75% by 2030. Meanwhile, 55% of European SMEs flag regulatory complexity as their single biggest barrier to technology adoption. The regulatory buffer may protect workers, but it also widens the gap between what European firms could deploy and what they actually do.

Occupation group                        Technical   Regulated   Delta
Legislators and senior officials            4.5         1.5      −3.0
Regulatory government professionals         6.5         3.5      −3.0
Vocational education teachers               7.5         5.0      −2.5
Administration professionals                7.5         5.0      −2.5
Legal professionals                         7.5         5.0      −2.5
Secondary education teachers                6.5         4.0      −2.5
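The deltas in the table follow directly from the two scores. A minimal sketch, using the scores shown above (a negative delta means regulation pulls practical exposure below technical capability):

```python
# Compute technical-vs-regulated deltas for a few occupation groups.
# Scores are copied from the table above: (technical, regulated).
scores = {
    "Legislators and senior officials": (4.5, 1.5),
    "Regulatory government professionals": (6.5, 3.5),
    "Vocational education teachers": (7.5, 5.0),
}

# Delta = regulated minus technical; negative means regulation
# reduces practical exposure below what is technically possible.
deltas = {name: round(reg - tech, 1) for name, (tech, reg) in scores.items()}
# deltas["Legislators and senior officials"] == -3.0
```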

4. The DACH Angle
Works councils add friction beyond the EU baseline.

Germany and Austria operate under co-determination regimes that go further than the EU baseline. Germany’s Works Constitution Act (BetrVG §87) gives works councils mandatory co-determination rights over the introduction of technical devices designed to monitor employee behaviour — which, in practice, includes most workplace AI tools. Austria’s Labour Constitution Act (ArbVG §96a) contains parallel provisions requiring works council consent for systems that affect human dignity.

These are not theoretical constraints. Any German employer deploying an AI-powered scheduling system, a performance analytics tool, or an automated email triage system must negotiate with the works council before rollout. The practical effect is a 6–18-month delay in AI adoption in co-determined workplaces, plus ongoing negotiations over system parameters, data access, and dispute resolution.

Switzerland faces a different landscape. Not bound by the EU AI Act, Switzerland is instead developing its own framework under the revised Federal Act on Data Protection (revFADP). Swiss employers face fewer procedural barriers to AI deployment but operate under stricter general data protection requirements. For equivalent occupational groups, regulatory friction in Switzerland is typically lower than in Germany or Austria, reflecting the absence of mandatory co-determination and a lighter-touch AI governance regime.

The United Kingdom sits at the opposite extreme. Post-Brexit, the UK has deliberately chosen not to legislate AI-specific regulation, instead relying on existing sector regulators (FCA, Ofcom, CMA) to apply existing law to AI systems. There is no AI Act equivalent, no high-risk classification, no mandatory notification, and no works council system. UK employers deploying AI face the Equality Act 2010 (discrimination in AI-assisted hiring and pay decisions), the Data Protection Act 2018 (UK GDPR Art 22 equivalent on automated decision-making), and ICE Regulations 2004 (information and consultation rights for 50+ employees) — but these are materially weaker constraints than anything in the EU or even Switzerland. The UK’s regulatory friction is the lowest of any country in this analysis, making it the natural experiment for what happens when AI deployment is essentially a management prerogative.

5. What This Means for Enterprises
The delta between scores is your compliance gap.

Organisations planning AI deployment should map their workforce against both scores. The technical score tells you what is possible. The regulated score tells you what is practical without triggering regulatory obligations. The gap between the two is your compliance surface area — every AI deployment in a high-delta occupation requires impact assessments, works council consultation, documentation, and potentially conformity assessment under the AI Act.

This isn’t an argument against deployment. It’s an argument for sequencing: start where technical and regulated scores are close, build compliance in low-risk roles, then move to high-delta occupations that demand a full regulatory process.

The 20.8 million workers in the 8–10 technical exposure band represent the sharpest commercial opportunity — and the highest compliance burden. Under regulated exposure scoring, that band shrinks to 7.1 million workers. The difference — 13.7 million jobs — sits behind a wall of regulatory process. Enterprises that learn to navigate that process efficiently will have a structural advantage.
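The band comparison above is a simple aggregation: bucket occupation groups by score and sum employment under each dimension. A sketch with made-up records (the real dataset has 125 groups; the numbers here are illustrative only):

```python
# Illustrative records: (jobs in millions, technical score, regulated score).
# These values are invented for the example, not taken from the dataset.
records = [
    (3.0, 9.5, 7.0),
    (5.0, 8.5, 8.0),
    (4.0, 8.0, 5.5),
    (12.0, 4.0, 3.5),
]

def jobs_in_band(records, lo, hi, idx):
    """Sum jobs whose score at position idx (0=technical, 1=regulated)
    falls within [lo, hi]."""
    return sum(jobs for jobs, *scores in records if lo <= scores[idx] <= hi)

tech_band = jobs_in_band(records, 8, 10, 0)  # 12.0M in this toy data
reg_band = jobs_in_band(records, 8, 10, 1)   # 5.0M: the band shrinks
```

The gap between the two sums is the employment that sits behind regulatory process, the same arithmetic that yields the 13.7 million figure in the real data.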

The scale of what is required should not be underestimated. The Draghi Report estimates that Europe needs €750–800 billion in additional annual investment to close the competitiveness gap with the US and China — a figure comparable in scale to the Marshall Plan. Only 4 of the world’s top 50 technology companies are European. For enterprises, this means the AI transition is not just a workforce challenge but a capital allocation challenge: the firms that invest early in compliant AI infrastructure will compound that advantage over competitors still navigating their first works council consultation.

Mariana Mazzucato’s research on the entrepreneurial state complicates the standard narrative that regulation simply slows innovation. Every foundational technology in today’s AI stack — from the internet to GPS to early neural-network research — was publicly funded, with private capital entering only after the state absorbed the highest-risk, longest-horizon investments. If Europe’s problem is not regulation per se but insufficient public investment in foundational AI research, the policy response looks very different: not less regulation, but more ambitious state-led R&D through institutions modelled on DARPA rather than incremental tax credits. European enterprises waiting for venture capital to close the gap may be waiting for the wrong actor.

One structural response is already on the table. The EU-Inc proposal — a “28th regime” for a pan-European legal entity — directly targets the fragmentation that makes scaling European AI companies harder than incorporating in Delaware. EU-Inc would offer digital-first registration within 24 hours, a standardised employee share option scheme (EU-ESOP) across all 27 member states, and a single convertible investment instrument (EU-FAST) that replaces 27 national frameworks. For AI enterprises specifically, this addresses two binding constraints: the cost of multi-jurisdictional compliance, which drives unicorn relocation, and the inability to offer competitive equity compensation, which drives talent to US firms. Whether EU-Inc becomes law will signal how seriously Europe treats the capital allocation problem the Draghi Report identified.

6. What This Means for Workers
Exposure is a starting point, not destiny.

A high exposure score does not mean a job disappears. It means the job changes. Within every high-exposure occupation, there will be workers who use AI to multiply their output and workers who are displaced by it. The differentiator is not seniority or credentials — it is adaptability: the ability to evaluate AI outputs critically, to work iteratively with AI tools, and to judge when AI should and should not be trusted.

European workers have one structural advantage their US counterparts do not: time. The regulatory buffer documented in this analysis buys 2–4 years of slower deployment in the most affected occupations. That window is not infinite. Workers and social partners should use it for retraining, role redesign, and negotiating the terms of AI introduction — not for assuming the status quo will persist.

The skills data underlines the urgency. Europe produces 203 ICT graduates per million people, compared with 335 in the United States. The STEM pipeline is also thinner: 845 STEM graduates per million, versus 1,106 in the US. And 30% of EU-founded unicorns have relocated their headquarters abroad, draining precisely the talent pool that could build European AI capabilities. The regulatory buffer buys time, but Europe’s workforce is not currently structured to use it. Without deliberate investment in AI literacy and technical reskilling, the buffer becomes a waiting room rather than a training ground.

Ray Dalio’s analysis of long-term debt cycles adds a fiscal dimension to this urgency. AI-driven workforce disruption is arriving at a moment when sovereign debt levels constrain the fiscal space available for transition support. In the US, where much of the data originates, the top 1% now hold more wealth than the bottom 90% combined — a concentration last seen in the 1930s. Intergenerational mobility has collapsed: the share of children earning more than their parents fell from 90% in 1970 to 50% by 2015. If AI accelerates these dynamics in Europe — concentrating gains among capital owners and high-adaptability workers while displacing the middle — the political consequences are predictable. Dalio’s framework on populism shows that economic displacement reliably produces anti-establishment politics, weakening precisely the institutional capacity needed to manage the transition. The question for European workers is not just whether they can reskill, but whether the political and fiscal environment will support them in doing so.

The data suggests a specific priority: workers in occupations scoring 6–8 on technical exposure with high regulatory deltas (administration professionals, legal professionals, education) have the most time to adapt, but the most to lose if they don’t. These are the occupations where the regulatory buffer is widest — and where it will eventually narrow.

7. The Regulatory Compliance Surface
Multiple overlapping frameworks create a compliance matrix, not a single obligation.

Mapping 125 occupation groups against the EU AI Act Annex III reveals a structural asymmetry. All 125 groups are classified as high-risk subjects under Annex III category 4 (employment), because any AI system used for recruitment, performance evaluation, task allocation, or workforce management triggers high-risk obligations regardless of the occupation. The universality of this classification raises questions about its practical value in enforcement.

The more analytically interesting variation is in deployer status: 40 of 125 groups (32%) deploy high-risk AI as part of their job duties — HR managers using AI recruitment tools, legal professionals using AI legal research, health professionals using AI diagnostics, teachers using AI assessment systems. These deployers face additional obligations under Articles 13, 14, and 26 of the AI Act, in addition to the employment-related obligations they share with all workers.

The average occupation group faces 3.7 overlapping regulatory frameworks simultaneously: the AI Act (as subject), GDPR, national works council law (BetrVG in Germany, ArbVG in Austria), and, frequently, the Platform Work Directive or the Pay Transparency Directive on top. For deployer groups, this rises to 4–6 simultaneous frameworks. A German HR manager deploying an AI recruitment tool must satisfy AI Act Annex III requirements, obtain works council agreement under BetrVG §87(1) Nr. 6, conduct a GDPR Art 35 DPIA, and ensure Pay Transparency Directive compliance if the tool affects compensation decisions.
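The stacking of frameworks can be made concrete as a rule-based applicability check. This is a hypothetical sketch: the function name and trigger conditions are simplified assumptions for illustration, not legal logic from the pipeline; the framework names follow the text above.

```python
# Hypothetical sketch of framework stacking for an AI deployment.
# Trigger rules are deliberately simplified; not legal advice.
def applicable_frameworks(country, deploys_high_risk_ai,
                          monitors_employees, affects_pay, platform_work):
    # Baseline: every employment-related AI system in the EU.
    frameworks = ["EU AI Act (Annex III, employment)", "GDPR"]
    if country == "DE" and monitors_employees:
        frameworks.append("BetrVG §87(1) Nr. 6 co-determination")
    if country == "AT" and monitors_employees:
        frameworks.append("ArbVG §96a works council consent")
    if deploys_high_risk_ai:
        frameworks.append("AI Act deployer duties (Arts 13, 14, 26)")
    if affects_pay:
        frameworks.append("Pay Transparency Directive")
    if platform_work:
        frameworks.append("Platform Work Directive")
    return frameworks

# The German HR manager example from the text:
hr_de = applicable_frameworks("DE", deploys_high_risk_ai=True,
                              monitors_employees=True,
                              affects_pay=True, platform_work=False)
# Five overlapping frameworks, inside the 4-6 range described above.
```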

In Germany, all 125 occupation groups trigger BetrVG §87(1) Nr. 6 co-determination when AI monitoring systems are introduced. For a large German employer deploying AI across multiple departments, this means parallel works council consultations for each distinct AI system — a significant organisational burden that compounds as the number of systems deployed increases.

The Swiss gap is material. Swiss domestic employers face the FADP, Code of Obligations Art 328b, and ArGV3 Art 26 — but no AI Act, no mandatory works council consent, and only consultation rights under the Mitwirkungsgesetz. For equivalent occupation groups, Swiss employers face 2–3 fewer regulatory layers than their German or Austrian counterparts. Whether this lighter regulatory surface creates a competitive advantage or a trust deficit remains an open question.

The UK gap is wider still. A UK employer deploying the same AI recruitment tool faces only the Equality Act 2010 and Data Protection Act 2018 — no AI Act conformity assessment, no works council negotiation, no high-risk classification, and only weak consultation rights under the ICE Regulations (which apply only to undertakings with 50+ employees and carry no veto power). Where a German HR manager navigates 4–6 overlapping frameworks, a UK counterpart navigates two. The DSIT “pro-innovation” AI framework is a policy statement, not legislation — it creates no enforceable obligations. This makes the UK the clearest test case for whether lighter regulation produces faster adoption, greater displacement, or both.

31 occupation groups (25%) trigger the Platform Work Directive in addition to the AI Act, adding provisions against automated firing and algorithmic transparency requirements on top of existing obligations. The transposition deadline of December 2026 means employers in platform-adjacent sectors face a second regulatory wave arriving within months of the AI Act’s full application.



Opinion

A builder’s perspective

Everything above is intended as analysis. What follows is opinion.

Having worked through the data, my tentative reading is this: regulation that isn’t oriented toward enabling building and creation has historically deferred the consequences of disruptive innovation rather than prevented them. The textile workers of early industrialization, manufacturing communities affected by offshoring, retail displaced by e-commerce — protective regulation slowed each transition without ultimately changing its direction.

The 1.2-point regulatory delta identified in this analysis across Europe is real. But when I consider it alongside the other forces acting on European economies — demographic decline, the energy transition, geopolitical fragmentation, de-globalisation — I find it difficult to see how deferral alone produces a better outcome.

What could change that calculus is if Europe uses the time that regulatory friction creates to do what it has so far struggled to do: build the structural foundations for competitiveness. A genuine single market. A capital markets union. Workable financing instruments for startups. The kind of institutional integration that the Draghi report calls for and that initiatives like the proposed EU-Inc framework represent.

In other words, regulation as a substitute for building has only ever bought time. Regulation as a foundation for building — where the friction enables better outcomes rather than just slower ones — is an entirely different proposition. Whether Europe is on the first path or the second is, in my view, the central question this data raises.

None of this is settled. The pragmatism of the people making decisions matters more than their position on any political spectrum. And before anyone — myself included — prescribes what society should do, the honest question is how much each of us is personally prepared to commit to that transition.

My own answer, for what it’s worth, is to build. This analysis exists because I believe that understanding the problem clearly is the first step toward building something useful in response to it.

— Philipp Maul

Methodology & Data Notes

Data Sources & Timeliness

This analysis combines multiple data sources across 36 countries from different reference periods:

  • Occupation descriptions: ESCO v1.2.1 (European Commission, released May 2024). These are structural descriptions of what occupations involve and are not time-dependent.
  • Employment data: Eurostat EU-LFS (lfsa_egai2d), reference year 2024. Employment counts in thousands by ISCO-08 2-digit occupation group for 35 European countries (EU-27 plus EFTA and candidate countries). UK employment from ONS Annual Survey of Hours and Earnings (ASHE) 2025, providing employee job counts by SOC 2020 2-digit mapped to ISCO-08. Note: UK figures cover employee jobs only and exclude self-employed workers.
  • Wage data: Three sources by country:
    • Default (34 countries): Eurostat Structure of Earnings Survey (earn_ses22_28), reference year 2022. Mean gross annual earnings by ISCO-08 1-digit major group. SES is conducted every four years; 2022 is the most recent available.
    • Switzerland: BFS Lohnstrukturerhebung (LSE) 2024, providing mean annual wages at ISCO-08 2-digit level (converted from CHF at 0.96 EUR/CHF). This is preferred over Eurostat SES for Switzerland due to higher granularity and more recent data.
    • United Kingdom: ONS ASHE 2025 Table 2.7a, providing mean annual gross pay at SOC 2020 2-digit level mapped to ISCO-08 (converted from GBP at 1.16 EUR/GBP).
    Due to EU-wide inflation of approximately 10–15% between 2022 and 2025, absolute Eurostat wage figures and derived metrics likely understate current levels. BFS and ONS data are more current. Relative comparisons between occupation groups are unaffected.

Employment Distribution Method

Eurostat publishes employment at ISCO 2-digit level. To display data at the more granular ISCO 3-digit level (~130 groups), we distribute 2-digit employment totals to 3-digit sub-groups proportionally, using the count of ESCO occupations within each 3-digit group as a structural weight. This is an approximation—actual employment distributions within 2-digit groups may differ.
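The proportional distribution described above can be sketched as follows; the function and dictionary names are illustrative, not the actual pipeline code:

```python
# Distribute a 2-digit employment total across its 3-digit sub-groups,
# weighted by the count of ESCO occupations in each sub-group.
def distribute_employment(total_2digit, esco_counts):
    """esco_counts maps each 3-digit ISCO code to its number of
    constituent ESCO occupations (the structural weight)."""
    weight_sum = sum(esco_counts.values())
    return {code: total_2digit * n / weight_sum
            for code, n in esco_counts.items()}

# Example: 1,200 (thousand) jobs in a 2-digit group with three sub-groups
# containing 10, 5, and 15 ESCO occupations respectively.
shares = distribute_employment(1200, {"411": 10, "412": 5, "413": 15})
# shares == {"411": 400.0, "412": 200.0, "413": 600.0}
```

The sub-group totals always sum back to the 2-digit total, but, as noted, the ESCO occupation count is only a proxy for the true within-group employment split.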

AI Exposure Scoring

Each ISCO 3-digit occupation group was scored by Claude Sonnet 4 (Anthropic) on two dimensions:

  • Technical exposure (0–10): pure AI capability to reshape the occupation, based on the occupation’s composite description, constituent ESCO occupations, and associated skills.
  • Regulated exposure (0–10): practical European exposure, factoring in EU AI Act high-risk obligations, works council co-determination rights, GDPR constraints, and employment protection.

Scoring used a standardised rubric with calibration anchors (roofers at 0–1, software developers at 8–9, medical transcriptionists at 10). The LLM was instructed to score technical capability independently from regulatory adoption speed.
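The calibration anchors can be expressed as a small structure used to sanity-check model outputs. This is an illustrative sketch, not the actual scoring code; the tolerance value is an assumption.

```python
# Calibration anchors from the rubric: occupation -> (low, high) expected
# technical-exposure range. Illustrative sanity check, not pipeline code.
ANCHORS = {
    "roofers": (0.0, 1.0),
    "software developers": (8.0, 9.0),
    "medical transcriptionists": (10.0, 10.0),
}

def check_anchor(occupation, score, tolerance=0.5):
    """Return True if an LLM-returned score sits within the anchor
    range for this occupation, plus a small tolerance."""
    lo, hi = ANCHORS[occupation]
    return lo - tolerance <= score <= hi + tolerance
```

A run whose anchor occupations drift outside these bands would indicate the rubric was applied inconsistently and the batch should be re-scored.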

Limitations

  • LLM-based scoring reflects the model’s training knowledge, not empirical measurement of actual AI adoption or displacement.
  • ESCO structural weights for employment distribution are a proxy, not a measurement.
  • Eurostat wage data (SES 2022) is 2–3 years older than employment data. BFS (2024) and ONS (2025) wages are more current. Cross-source timing differences are noted but not adjusted.
  • Wage granularity varies by country: Eurostat provides ISCO 1-digit (9 major groups), while BFS and ONS provide ISCO 2-digit (~40 sub-major groups). All wages are assigned at the most granular level available, with 1-digit fallback where 2-digit data is missing.
  • All 36 countries use the same exposure scores, derived from EU-27 occupation descriptions and EU regulatory context. In practice, exposure may vary by country due to differences in industry structure, digitisation levels, and enforcement practices.
  • UK data uses a UK-specific regulated score reflecting the Equality Act 2010, UK GDPR Art 22, Employment Rights Act 1996, and the DSIT pro-innovation framework. EU regulated scores are also available for comparison.
  • UK employment figures from ASHE cover employee jobs only and exclude self-employed workers, making UK totals not directly comparable to Eurostat LFS figures (which include all employed persons).
  • Some candidate countries (e.g., Albania) have wage data but no employment data, limiting the analysis for those countries.
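The most-granular-available wage assignment with 1-digit fallback, noted in the limitations above, can be sketched as follows; the function and dictionaries are illustrative, not the actual pipeline code:

```python
# Assign a wage to an ISCO 2-digit group: prefer 2-digit wage data
# (BFS, ONS), fall back to the 1-digit major-group wage (Eurostat SES).
def assign_wage(isco_2digit, wages_2digit, wages_1digit):
    if isco_2digit in wages_2digit:
        return wages_2digit[isco_2digit]
    # First character of the 2-digit code is the 1-digit major group.
    return wages_1digit[isco_2digit[0]]

wages_1d = {"4": 38000}          # major group 4: clerical support workers
wages_2d = {"41": 36500}         # sub-major group 41 only

assign_wage("41", wages_2d, wages_1d)  # 36500 (2-digit available)
assign_wage("43", wages_2d, wages_1d)  # 38000 (1-digit fallback)
```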

Source code: The full pipeline (data preparation, Eurostat fetching, scoring, site generation) is open source on GitHub. Code is MIT licensed. Scored data is CC-BY 4.0.