WIP — Data and labels are in active iteration

AI Risk Mentioned

87 of the 150 annual reports examined between 2022 and 2024 include at least one AI risk disclosure, across 378 individual passages.

Per Report counts each company filing once — useful for measuring how broadly a risk is disclosed across the market. Per Excerpt counts every individual passage where an AI risk appears — useful for visualising the depth of emphasis companies place on that risk.
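The two counting modes can be sketched in a few lines. This is a minimal illustration assuming each tagged passage is stored as a (report, category) record — the names and data model here are hypothetical, not the dashboard's actual pipeline.

```python
# Illustrative records: (report_id, risk_category), one per tagged passage.
from collections import Counter

excerpts = [
    ("acme_2023", "Cybersecurity"),
    ("acme_2023", "Cybersecurity"),      # second passage in the same report
    ("acme_2023", "Workforce Impacts"),
    ("bolt_2024", "Cybersecurity"),
]

# Per Excerpt: every passage counts.
per_excerpt = Counter(cat for _, cat in excerpts)

# Per Report: each (report, category) pair counts at most once.
per_report = Counter(cat for _, cat in set(excerpts))

print(per_excerpt["Cybersecurity"])  # 3
print(per_report["Cybersecurity"])   # 2
```

Deduplicating on the (report, category) pair rather than the report alone means a single filing can still contribute to several categories in Per Report mode, which is why category totals exceed the number of reports.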

You can use the controls above to narrow the date range, focus on a specific risk category, or switch the sector taxonomy between CNI (Critical National Infrastructure) and ISIC (International Standard Industrial Classification).

The risk trend chart shows how often each risk category has been mentioned, year by year. The heatmap shows where those mentions concentrate across sectors. With all risk types selected, heatmap columns represent risk categories; selecting a single category switches the columns to years, letting you track how that risk has evolved within each sector over time.
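The column-switching behaviour amounts to pivoting the same records along a different axis. A minimal sketch, using illustrative (sector, year, category) records rather than the real dataset:

```python
# Hypothetical records: (sector, fiscal_year, risk_category).
from collections import Counter

records = [
    ("Water", 2022, "Cybersecurity"),
    ("Water", 2023, "Cybersecurity"),
    ("Water", 2023, "Workforce Impacts"),
    ("Communications", 2024, "Cybersecurity"),
]

def heatmap_cells(records, selected=None):
    """All categories selected: columns are categories.
    One category selected: columns are fiscal years."""
    if selected is None:
        return Counter((sector, cat) for sector, _, cat in records)
    return Counter((sector, year) for sector, year, cat in records
                   if cat == selected)

print(heatmap_cells(records)[("Water", "Cybersecurity")])        # 2
print(heatmap_cells(records, "Cybersecurity")[("Water", 2023)])  # 1
```

With no category selected, years collapse into a single count per sector–category cell; selecting one category frees the column axis to show that category's trajectory year by year.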

Risk Trend Over Time

Stacked bar chart showing the number of annual reports mentioning each AI risk category (y-axis) across fiscal years (x-axis). Each colour represents one risk category; bars are additive because a single report can be tagged with multiple categories. The y-axis scale adjusts dynamically to the data shown.

Risk Distribution by Sector

Heatmap: report counts by risk type (columns) and CNI sector (rows). Blank cells indicate no reports containing the specified mention; per-cell counts are shown in the heatmap itself, and the recoverable row and column totals are listed below.

Row totals by CNI sector:
  • Chemicals: 9 · Civil Nuclear/Space: 9 · Communications: 36 · Defence: 20 · Government Services: 15 · Energy (Extraction): 11 · Finance (Banking): 114 · Food (Retail): 5 · Health (Pharma): 36 · Energy (Transmission): 21 · Water: 22 · Insurance: 40 · Asset Management: 19 · Transport: 6 · Shipping: 10 · Other: 68

Column totals by risk type:
  • Cybersecurity: 65 · Operational / Technical: 62 · Regulatory / Compliance: 62 · Reputational / Ethical: 61 · Information Integrity: 38 · Third-Party Supply Chain: 33 · Strategic / Competitive: 64 · Workforce Impacts: 43 · Environmental Impact: 8 · National Security: 5

Overall total: 441. Because a single report can be tagged with multiple risk categories, it contributes to several cells, so the total exceeds the 87 reports with at least one disclosure.

Heatmap of report counts by CNI sector (rows) and risk category (columns). Colour intensity encodes the number of annual reports containing each risk type within each sector; darker cells indicate higher counts relative to the dataset maximum.
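The colour scaling described above can be expressed in one line: each cell's intensity is its count divided by the dataset maximum. A minimal sketch with illustrative counts (the values here are placeholders, not the real cells):

```python
# Hypothetical cell counts keyed by (sector, risk_category).
counts = {
    ("Finance (Banking)", "Cybersecurity"): 15,
    ("Water", "Cybersecurity"): 5,
}

# Normalise each count against the dataset maximum, so the darkest
# cell always corresponds to the largest count.
vmax = max(counts.values())
intensity = {cell: n / vmax for cell, n in counts.items()}

print(intensity[("Finance (Banking)", "Cybersecurity")])  # 1.0
```

Normalising against the global maximum (rather than per row or per column) means intensities remain comparable across the whole grid, at the cost of washing out sectors with uniformly low counts.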

Risk Category Definitions

Categories were assigned by an LLM-assisted classifier trained on an AI risk taxonomy. A single report or passage can be tagged with more than one category.

  • Cybersecurity: Risks relating to data breaches, AI-enabled attacks, or vulnerabilities introduced by deploying AI systems.
  • Operational / Technical: Risks of system failures, integration problems, or performance degradation arising from AI implementation.
  • Regulatory / Compliance: Risks from evolving compliance obligations, legal liability, or uncertainty in the regulatory landscape for AI.
  • Reputational / Ethical: Risks of brand damage, public concern over algorithmic bias, or broader ethical considerations in AI deployment.
  • Information Integrity: Risks from AI-generated misinformation, model hallucinations, or degraded data quality affecting decision-making.
  • Third-Party Supply Chain: Risks arising from dependence on external AI vendors, APIs, or suppliers whose reliability or conduct is outside the company's direct control.
  • Strategic / Competitive: Risks of competitive displacement, market disruption, or falling behind peers in AI adoption and innovation.
  • Workforce Impacts: Risks relating to job displacement, emerging skills gaps, or changes in labour relations driven by AI automation.
  • Environmental Impact: Risks associated with the energy consumption, carbon footprint, or resource demands of AI infrastructure.
  • National Security: Risks to critical systems, geopolitical exposure, or security-of-state concerns linked to AI deployment or dependency.