
Questions This Data Raises

Scoring 199.6 million European jobs for “AI exposure”—that is, assessing how likely roles are to be affected by advances in artificial intelligence—produces more questions than answers. These are the ones we think matter most—for societies, regulators, enterprises, and individuals navigating the transition.

AI exposure does not emerge in a stable macro environment; it enters one already under strain from overlapping structural pressures.

  • Geopolitical fragmentation is redrawing supply chains and access to technology. If the US and China decouple on AI, does Europe simply adopt their systems, or does the EU AI Act enable a distinct European AI industry?
  • Europe faces a sharper demographic decline than any other major economic region. Germany, Italy, and Spain’s working-age populations will shrink by 10–20% by 2040. Will AI-driven productivity offset this, or will it further hollow out mid-skill occupations just when those workers are most needed?
  • The climate and energy transition requires substantial investment in physical infrastructure—precisely the segment of work least affected by AI exposure. Could this dynamic result in a bifurcated labour market, where workers in physical infrastructure gain greater bargaining power, while those in knowledge-based roles face potential displacement?
  • Capital’s share of national income—here meaning profits, dividends, rents, and interest—has risen relative to labour’s (wages and salaries) for decades. If AI boosts the productivity of capital-intensive firms and displaces labour (jobs relying heavily on human workers), does this accelerate the shift? At what point does this trend become politically unsustainable?
  • If AI lowers the value of knowledge work but not physical services, does the economic position of tradespeople and service workers improve?
  • The Draghi Report estimates that Europe requires €750–800 billion in additional annual investment to address the competitiveness gap—a scale reminiscent of the Marshall Plan. In 2023, EU private AI investment was approximately €8 billion, versus $68 billion in the US. Can Europe close this gap through policy measures alone, or does it require a fundamental change in how capital is allocated?
  • If AI is needed to offset workforce shrinkage, do current regulations protect workers today but harm future productivity?
  • If AI disrupts labour while governments face debt constraints, who pays for retraining at scale?
  • Dalio documents that in the US, the top 1% now hold more wealth than the bottom 90% combined, and intergenerational mobility has halved since 1970. In Europe, wealth concentration is less pronounced but increasing: ECB data indicate the top 1% of euro area households hold approximately 28% of total net wealth, up from 22% in 1995, while the bottom 50% hold just 4%. If AI primarily amplifies returns for capital owners and highly adaptable workers, does Europe’s more robust social safety net mitigate, or simply postpone, the populist dynamics Dalio describes?
  • If Europe’s AI gap is about public R&D, should the debate shift from deregulation to the creation of European DARPA equivalents?
  • If AI gains flow mostly to shareholders, does the “augmentation” narrative merely mask value extraction over workforce development?
  • If the occupations most exposed to AI are also those that require the most education, what does that mean for the social contract around higher education? Parents and states have invested heavily on the premise that university degrees lead to stable, well-compensated careers. Does that premise still hold?
  • The European welfare state was designed around full-time, long-tenure employment. If AI fragments knowledge work into project-based, augmented micro-tasks, do existing social insurance systems (pension, health, unemployment) remain viable?
  • Public trust in institutions is already under strain. If AI makes it cheaper and easier to generate convincing disinformation, and the occupations most equipped to counter it (journalists, researchers, analysts) are themselves highly exposed, how do democratic societies maintain epistemic integrity?
  • European societies value the stability of the Mittelstand, social partnership, and gradual change. AI-driven disruption is neither gradual nor negotiated. Is there a European model for managing rapid technological displacement that preserves social cohesion, or does the speed of change overwhelm existing institutions?
  • Dalio identifies a consistent historical pattern: when economic displacement concentrates gains at the top and stagnates the middle, populism follows within a decade. European populist vote shares are already at 1930s-era levels. If AI accelerates the displacement of the educated middle class — the constituency most invested in institutional stability — does this undermine the democratic legitimacy needed to govern the transition?
  • Mazzucato’s mission economy framework proposes that transformative technology transitions require Apollo-scale public coordination — not market-led adjustment. If the AI transition is closer in scale to electrification or industrialisation than to a normal business cycle, are Europe’s current policy instruments (R&D tax credits, innovation grants, regulatory sandboxes) an order of magnitude too small?
  • If you work in an occupation scoring 7+ for technical exposure, the question is not whether AI will change your role but when and how. Are you actively building the skills to work with AI tools, or waiting for your employer or union to provide retraining?
  • The regulatory buffer gives European workers a 2–4-year advantage over their US counterparts. That is a window, not a permanent shield. Are you using it? What does your personal transition plan look like?
  • If education no longer protects — if degrees correlate with higher AI exposure, not lower — what does career planning look like for a 20-year-old entering the European labour market in 2026? Should they pursue a trade, a profession, or something that doesn’t exist yet?
  • Works council rights give European workers a seat at the table when AI is deployed. But only if they exercise those rights. If you are on a works council, do you have the technical literacy to evaluate an employer’s AI deployment proposal? If not, where do you get it?
  • The occupations with the highest regulatory deltas (education, law, administration) are also those with the strongest institutional inertia. Workers in these fields may feel safe precisely because change is slow. But the regulatory buffer is a delay, not a cancellation. How do you prepare for change that is certain but not yet urgent?
  • Identity and meaning are often tied to occupation. If AI transforms what it means to be a lawyer, teacher, journalist, or accountant, the challenge is not just economic but psychological. How do you maintain professional identity when the substance of the profession changes?
  • Dalio’s UBI research suggests that direct income support may be the most efficient response to technology-driven displacement — more efficient than retraining programmes with uncertain outcomes. If AI displaces faster than workers can reskill, should European policy prioritise income floors (UBI, negative income tax) over retraining — or are both needed simultaneously?
  • The gap between technical and regulated exposure represents a compliance cost — but also a moat. If navigating the EU AI Act, works council processes, and GDPR requirements is genuinely hard, does this create defensible market positions for European-first AI companies that learn to do it well?
  • Most AI investment flows to horizontal platforms (such as foundation models and developer tools). The European opportunity may lie in vertical, regulation-aware AI applications — HR tools that meet the AI Act’s Annex III high-risk requirements, legal AI that meets bar association standards, and healthcare AI that meets MDR requirements. Is the European vertical AI market underpriced?
  • European venture capital has historically underweighted deep tech and AI. If the most valuable AI companies of the next decade are those that solve compliance at scale, does Europe’s regulatory complexity become an asset rather than a liability for European founders?
  • Enterprise buyers in Europe face a build-vs-buy decision complicated by regulation. If off-the-shelf AI tools from US providers don’t satisfy EU compliance requirements out of the box, does this create a European integration and consulting layer that captures significant value?
  • Only 4 of the world’s top 50 technology companies are European, and 30% of EU-founded unicorns have relocated their headquarters abroad. If Europe cannot retain its most promising AI companies, does the regulatory moat protect European workers, or does it simply ensure that AI value accrues to non-European shareholders?
  • The EU-Inc proposal introduces EU-ESOP — a standardised employee share option scheme with capital gains treatment deferred to the point of sale. If European AI startups could offer equity compensation competitive with Silicon Valley across all 27 member states, does the talent retention calculus change fundamentally? Or is equity compensation a necessary but insufficient condition when the underlying capital markets remain shallower?
  • EU-FAST, the proposed pan-European convertible investment instrument, would replace 27 different national frameworks (French BSA AIR, German convertible notes, UK ASAs) with a single open-source template. If early-stage AI investment is currently fragmented across incompatible legal instruments, how much deal flow is Europe losing not to regulation but to legal friction?
  • Our data shows that 102.5 million European jobs (51% of the workforce) sit in the 6–10 range for technical exposure. How should a corporation with 10,000 knowledge workers think about sequencing AI adoption across roles with different exposure scores and regulatory deltas?
  • The compliance cost of deploying AI in high-delta occupations (HR, legal, education) is substantial. Does this favour large enterprises that can absorb fixed compliance costs, or does it create an opening for nimble firms that specialise in compliant AI deployment?
  • If AI augments rather than replaces most knowledge workers, the key capability becomes managing human-AI teams effectively. Very few corporations have this capability today. What does the organisational structure of an AI-augmented enterprise look like?
  • Works council negotiation over AI deployment is a new organisational muscle. Companies that develop it early — building trust, establishing frameworks, creating precedents — may deploy AI faster than competitors who treat works councils as obstacles. Is co-determination actually a competitive advantage in the long run?
  • The exposure-weighted wages metric in our data represents approximately €4.7 trillion in annual wages at the technical exposure level. The regulated exposure level drops this to €3.7 trillion. That €1 trillion gap is a measure of economic activity constrained by regulatory friction. How should boards think about this number?
  • Your legal team needs to evaluate AI deployments against the AI Act, GDPR, Platform Work Directive, Pay Transparency Directive, and national works council law — simultaneously for each deployed system. Is your compliance infrastructure designed for this level of regulatory surface area, or are you discovering obligations after deployment?
  • The EU AI Act creates a 1.2-point average regulatory buffer in our data. Is this calibrated correctly? Too little friction and workers are exposed without protection. Too much friction and European firms fall behind non-EU competitors deploying AI freely.
  • The AI Act’s high-risk classification system was designed in 2021–2022. AI capabilities have advanced dramatically since then. Are the Annex III categories still the right ones, or do they need updating before the Act is even fully enforced?
  • Works council co-determination rights were designed for an era of identifiable, discrete technical systems. Modern AI tools are embedded, ambient, and constantly updating. How do you negotiate co-determination over a system that changes weekly?
  • GDPR’s right to explanation and prohibition on purely automated decisions with legal effects were written before large language models could generate human-quality explanations. Does an LLM-generated explanation of an AI decision satisfy Article 22, or is it a compliance fiction?
  • If European regulation slows AI adoption by 2–4 years relative to the US and China, is that a feature (protecting workers during transition) or a bug (ensuring European firms are permanently disadvantaged)? The answer may depend on whether the transition window is used productively.
  • If nearly every occupation is classified as high-risk under Annex III category 4 because employment AI applies universally, does the classification retain practical enforcement meaning — or does universal coverage dilute the high-risk concept to the point where everything is high-risk and therefore nothing is?
  • The average occupation group faces 3.7 overlapping regulatory frameworks (AI Act, GDPR, Platform Work Directive, Pay Transparency Directive, and national works council law). Are compliance teams and works councils resourced to evaluate AI systems against all of them simultaneously?
  • The Platform Work Directive and the AI Act both regulate algorithmic management, but through different legal instruments and enforcement mechanisms. How do employers reconcile overlapping obligations — and how do national authorities coordinate enforcement across frameworks?
  • The Draghi Report calls for harmonised AI regulatory sandboxes across EU member states and for simplified GDPR implementation for AI development. If sandboxes remain fragmented and GDPR interpretation varies by data protection authority, does Europe effectively have 27 different AI regulatory regimes behind a single AI Act façade?
  • The EU-Inc proposal offers a “28th regime” — a pan-European legal entity with digital-first registration, standardised employee share options, and a single investment instrument replacing 27 national frameworks. If regulatory innovation (not just deregulation) is the missing piece, could EU-Inc do for European startups what the AI Act’s compliance burden cannot: make building in Europe as frictionless as incorporating in Delaware?
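The metrics these questions lean on (technical exposure, regulatory delta, regulated exposure, and exposure-weighted wages) can be sketched in a few lines of code. Everything below is an illustrative assumption: the `OccupationGroup` schema, the toy numbers, and the linear weighting are ours for demonstration, not the report's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class OccupationGroup:
    name: str
    workers_m: float           # headcount in millions (toy figures)
    avg_wage_eur: float        # average annual wage in euros (toy figures)
    technical_exposure: float  # 0-10 score: how exposed the work is to AI
    regulatory_delta: float    # friction points added by overlapping frameworks

    @property
    def regulated_exposure(self) -> float:
        # Regulated exposure = technical exposure minus the regulatory buffer,
        # floored at zero (an assumed, simplified relationship).
        return max(self.technical_exposure - self.regulatory_delta, 0.0)

def exposure_weighted_wages(groups, attr):
    # Wage bill weighted by (exposure / 10): a rough euro measure of how much
    # economic activity sits in AI-exposed work at a given exposure level.
    return sum(
        g.workers_m * 1e6 * g.avg_wage_eur * getattr(g, attr) / 10
        for g in groups
    )

# Three invented occupation groups, chosen to span the exposure range.
groups = [
    OccupationGroup("Financial associate professionals", 4.0, 65_000, 8.5, 1.5),
    OccupationGroup("Legal professionals",                2.5, 70_000, 7.5, 2.5),
    OccupationGroup("Construction trades",               12.0, 38_000, 2.0, 0.5),
]

technical = exposure_weighted_wages(groups, "technical_exposure")
regulated = exposure_weighted_wages(groups, "regulated_exposure")
print(f"Technical-level weighted wages: €{technical / 1e9:.0f}bn")
print(f"Regulated-level weighted wages: €{regulated / 1e9:.0f}bn")
print(f"Gap attributable to regulatory friction: €{(technical - regulated) / 1e9:.0f}bn")

# A naive sequencing heuristic for an enterprise: deploy first where technical
# exposure is high and regulatory delta is low, defer high-delta roles until
# compliance processes are in place.
by_priority = sorted(groups, key=lambda g: (-g.technical_exposure, g.regulatory_delta))
for g in by_priority:
    print(g.name, g.technical_exposure, g.regulatory_delta)
```

The gap between the two weighted-wage totals is the toy analogue of the €1 trillion difference discussed above; the sort at the end mirrors the sequencing question corporations face, staging high-delta roles behind compliance work.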

Two-speed Europe, free movement dynamics, and the transatlantic comparison.

  • The EU-27 is not one labour market but twenty-seven. Our data uses the same AI exposure scores across all countries because the underlying occupations are structurally similar. But regulatory enforcement, digital infrastructure, and institutional capacity vary enormously. Does AI exposure create a new axis of divergence between member states — on top of existing north-south and east-west divides?
  • Free movement of workers is a cornerstone of the EU single market. If AI adoption proceeds faster in some member states (e.g., the Netherlands, Estonia) than others (e.g., Italy, Greece), does this trigger new migration patterns — knowledge workers moving toward AI-augmented economies, or displaced workers moving toward economies where their occupations are less disrupted?
  • The US has no federal AI regulation, no works councils, no co-determination, and weaker employment protection. Our regulatory buffer of 1.2 points exists only in Europe. If US firms deploy AI 2–4 years faster, does the transatlantic productivity gap widen — and does this become a geopolitical issue?
  • The EU AI Act applies uniformly across all member states, but enforcement will be national. Given Europe’s track record with GDPR enforcement (dramatic variation between national authorities), should we expect similar variation in AI Act enforcement — creating de facto regulatory competition within the single market?
  • Eastern European member states (Poland, Romania, Czech Republic) have large workforces in the 4–6 technical exposure range — exposed enough to be disrupted but not so exposed as to attract significant regulatory attention. Is this the overlooked middle of the European AI transition?
  • Only 11% of EU firms have adopted AI (according to Eurostat enterprise statistics), well short of the EU target of 75% by 2030. Europe produces 203 ICT graduates per million people, compared with 335 in the US. If the talent pipeline cannot scale fast enough to close this adoption gap, does the AI Act’s transition window become irrelevant — because Europe lacks the human capital to use it?
  • The Societas Europaea (SE) was meant to be the pan-European company form but proved too complex for startups. EU-Inc proposes a simpler alternative: no minimum capital, 24-hour digital registration, and cross-border branch operation without restructuring. If EU-Inc succeeds, does the resulting wave of pan-European AI companies fundamentally change the single market for talent — or does tax and employment law fragmentation remain the binding constraint regardless of corporate form?
  • Dalio’s framework suggests that AI-era gains will flow disproportionately to capital owners unless structural reforms redistribute opportunity. Europe has stronger redistribution mechanisms than the US, but weaker capital formation. Does Europe face a choice between equality and competitiveness — or can instruments like EU-ESOP (broadening equity ownership among AI workers) achieve both simultaneously?

Austria: ArbVG §96a consent requirements, the Vienna tech scene, and public-sector concentration.

  • Austria’s Labour Constitution Act (ArbVG §96a) requires works council consent (not just consultation) for systems that affect human dignity — a higher bar than Germany’s co-determination. In an economy of ~4.5 million workers, does this make Austria the most friction-heavy environment for AI deployment in Europe?
  • Vienna has developed a growing tech startup ecosystem, but Austrian AI companies must navigate one of Europe’s most employee-protective regulatory environments. Does this create a competitive disadvantage for Austrian AI firms, or does it force them to build compliance-first products that can sell anywhere in the EU?
  • Austria has a disproportionately large public sector. Public-sector occupations consistently show high regulatory deltas in our data. If AI adoption in the Austrian government lags significantly behind the private sector, does this create a growing productivity gap between public and private services?
  • The Austrian social partnership model (Sozialpartnerschaft) brings employer and employee representatives into policy formation. Is this model equipped to handle the speed and scope of AI-driven change, or does consensus-based decision-making become a bottleneck when the technology moves faster than negotiation cycles?

Germany: BetrVG §87 co-determination, the Mittelstand backbone, and the Fachkräftemangel.

  • Germany’s Works Constitution Act (BetrVG §87(1) No. 6) grants works councils co-determination rights over technical monitoring devices. In practice, this covers most AI tools that process employee data. With ~42 million workers and one of Europe’s strongest co-determination traditions, does Germany become the slowest EU economy to deploy workplace AI — or does structured negotiation produce better outcomes than unilateral deployment?
  • The Mittelstand (small and medium enterprises) employs the majority of German workers but typically lacks the compliance infrastructure to navigate AI Act requirements. If AI adoption requires dedicated compliance teams, does this further concentrate economic power in DAX corporations at the expense of the Mittelstand?
  • Germany faces a Fachkräftemangel (skilled worker shortage) of approximately 400,000 unfilled positions, concentrated in trades and healthcare. AI exposure scores suggest these are precisely the occupations least affected by AI. Does AI exacerbate the shortage by making knowledge work more efficient (reducing demand for those workers) while doing nothing for the trades where demand is acute?
  • IG Metall and ver.di have begun developing AI frameworks for collective bargaining. These frameworks will set precedents for how AI deployment is negotiated across entire sectors. Are unions building sufficient technical expertise to negotiate effectively, or will information asymmetry between employers and worker representatives produce suboptimal outcomes?

Switzerland: No AI Act, the nFADP data protection framework, financial services concentration, and Europe’s highest wages.

  • Switzerland is not bound by the EU AI Act but depends on EU market access via bilateral agreements. If Swiss financial institutions deploy AI tools that would be classified as high-risk under EU law, does this create a regulatory arbitrage opportunity — or a compliance risk when serving EU clients?
  • Swiss wages are the highest in Europe. Financial and mathematical associate professionals — scoring 8.5 for technical exposure — are concentrated in Zurich and Geneva. If AI displaces or augments even 20% of financial services output, the wage impact per worker is larger in Switzerland than anywhere else in Europe. Is the Swiss financial sector pricing this in?
  • Switzerland’s new Federal Act on Data Protection (nFADP) focuses on individual data rights rather than AI system classification. Does this lighter-touch approach to AI governance prove more adaptive than the EU’s prescriptive framework, or does the absence of AI-specific regulation leave gaps that data protection law alone cannot fill?
  • Swiss direct democracy means that any significant AI regulation would likely face a referendum. Given Switzerland’s track record of conservative referenda outcomes on technology and immigration, could public sentiment become the binding constraint on AI policy — producing either surprisingly restrictive or surprisingly permissive outcomes?
  • Swiss firms face 2–3 fewer regulatory layers than German or Austrian competitors for equivalent occupation groups. Does this lighter regulatory surface create a genuine arbitrage opportunity — attracting AI-intensive operations to Switzerland — or does it create a trust deficit with EU clients and talent who expect AI Act-level governance?
  • The Mitwirkungsgesetz gives Swiss workers information and consultation rights but no co-determination. In an era where AI decisions increasingly affect working conditions, is consultation without veto power a meaningful worker protection — or a procedural formality?

United Kingdom: Post-Brexit regulatory divergence, the pro-innovation AI framework, no works councils, City of London exposure, and NHS workforce implications.

  • The UK has explicitly rejected the EU’s prescriptive AI regulation model in favour of a principles-based, sector-specific approach. The UK’s pro-innovation AI framework delegates AI oversight to existing regulators (FCA, Ofcom, CMA) rather than creating a horizontal AI Act. Does this light-touch approach accelerate AI adoption, or does the absence of clear rules create uncertainty that slows enterprise deployment?
  • The UK has no works council system. No co-determination rights. No BetrVG §87 equivalent. AI deployment decisions in UK workplaces are largely unilateral management prerogatives, constrained only by general employment law. Does this mean UK workers face greater exposure to AI with less institutional protection than workers in any other major European economy?
  • The City of London employs over 500,000 financial services workers, many of whom work in occupations with technical exposure scores of 7–9 (financial analysts, traders, compliance officers, actuaries). Post-Brexit, London financial firms face competitive pressure from both EU-regulated and US-unregulated rivals. Does AI become the tool that maintains the City’s competitive position — or the force that hollows out its workforce?
  • The NHS employs 1.4 million people, making it one of the largest employers in the world. Healthcare occupations show moderate AI exposure (4.5–6.5), but healthcare AI deployment faces unique constraints: clinical validation, patient safety, liability frameworks, and public trust. Could the NHS become a proving ground for large-scale AI deployment in European healthcare, or do those institutional constraints make it the last place AI arrives?
  • Post-Brexit, the UK can no longer shape EU AI regulation from within. But UK firms selling into the EU must still comply with the AI Act for EU-facing products. Does the UK end up with the worst of both worlds — no regulatory influence but full compliance burden — or does regulatory independence prove to be genuinely valuable?
  • The UK’s regulatory landscape for AI deployment is the thinnest of any country in this analysis: the Equality Act 2010 for discrimination risk, the Data Protection Act 2018 for automated decision-making, and the ICE Regulations 2004 for weak consultation rights. Where a German HR manager navigates 4–6 overlapping frameworks, a UK counterpart navigates three. Does this make the UK the leading indicator of what AI-driven workforce disruption looks like without institutional brakes—and should the EU be watching closely?
  • The DSIT “pro-innovation” AI framework is a policy statement, not legislation — it creates no enforceable obligations. If the UK’s approach produces visible worker harm (algorithmic dismissal, opaque performance management, discriminatory hiring at scale), does public pressure force retrospective legislation — or does the absence of a works council tradition mean displaced workers lack the organised voice to demand it?
  • The Employment Rights Act 1996 makes unfair dismissal claims actionable, but proving that an algorithmic decision was unfair requires technical literacy that most employment tribunals do not yet possess. If AI-assisted performance management and termination decisions become widespread in UK workplaces before the legal system develops the capacity to scrutinise them, is there a gap where workers are formally protected but practically undefended?
  • The UK’s average regulatory friction of 0.5 points — less than half the EU’s 1.2 — makes it the closest thing to a controlled experiment in light-touch AI workplace regulation among advanced economies. In 3–5 years, UK labour market outcomes will provide the first empirical evidence for whether regulatory friction protects workers or merely delays adjustment. Is anyone systematically measuring this?

These questions don’t have answers yet. That’s the point.

If your organisation is navigating this transition, let’s build together.