AI Risk Mentions
87 of the 150 annual reports examined from 2022 to 2024 include at least one AI risk disclosure, across 378 individual passages.
Per Report counts each company filing once, which is useful for measuring how broadly a risk is disclosed across the market. Per Excerpt counts every individual passage where an AI-related risk appears, which is useful for visualising how much emphasis companies place on that risk.
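The two counting modes can be sketched with pandas on excerpt-level rows. The column names (`report_id`, `risk_category`) and the sample data are illustrative assumptions, not the dashboard's actual schema:

```python
import pandas as pd

# Hypothetical excerpt-level data: one row per tagged passage.
excerpts = pd.DataFrame({
    "report_id":     ["A", "A", "A", "B", "B", "C"],
    "risk_category": ["Cybersecurity", "Cybersecurity",
                      "Workforce Impacts", "Cybersecurity",
                      "National Security", "Cybersecurity"],
})

# Per Excerpt: every passage counts once.
per_excerpt = excerpts.groupby("risk_category").size()

# Per Report: each report counts at most once per category.
per_report = (excerpts.drop_duplicates(["report_id", "risk_category"])
                      .groupby("risk_category").size())

print(per_excerpt["Cybersecurity"])  # 4 passages
print(per_report["Cybersecurity"])   # 3 reports
```

Deduplicating on (report, category) before counting is what turns passage-level rows into the Per Report view.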
The risk trend chart shows how often each risk category has been mentioned, year by year. The heatmap shows where those mentions concentrate across sectors. With all risk types selected, heatmap columns represent risk categories; selecting a single category switches the columns to years, letting you track how that risk has evolved within each sector over time.
Risk Trend Over Time
Stacked bar chart showing the number of annual reports mentioning each AI risk category (y-axis) across fiscal years (x-axis). Each colour represents one risk category. Because a single report can be tagged with multiple categories, the stacked segments for a year can sum to more than the number of distinct reports. The y-axis scale adjusts dynamically to the data shown.
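The aggregation feeding this chart can be sketched as a per-year pivot of deduplicated report-category tags. The column names and sample rows here are assumptions for illustration:

```python
import pandas as pd

# Illustrative tagged-report rows; a report can carry several categories.
tags = pd.DataFrame({
    "report_id": ["A", "A", "B", "C", "C"],
    "year":      [2022, 2022, 2022, 2023, 2023],
    "category":  ["Cybersecurity", "Workforce Impacts",
                  "Cybersecurity", "Cybersecurity", "National Security"],
})

# One count per report per category per year: the stacked-bar input.
counts = (tags.drop_duplicates(["report_id", "year", "category"])
              .pivot_table(index="year", columns="category",
                           aggfunc="size", fill_value=0))

# counts.plot(kind="bar", stacked=True)  # renders the chart via matplotlib
print(counts.loc[2022].sum())  # 3 segments, from only 2 distinct reports
```

The 2022 bar illustrates the additivity caveat: two reports produce three stacked segments because report A carries two tags.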
Risk Distribution by Sector
Heatmap of report counts by CNI sector (rows) and risk category (columns). Colour intensity encodes the number of annual reports containing each risk type within each sector; darker cells indicate higher counts relative to the dataset maximum.
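The heatmap grid and its colour scaling can be sketched the same way: pivot report counts by sector and category, then normalise against the dataset maximum. Sector names, column names, and data are illustrative assumptions:

```python
import pandas as pd

# Hypothetical report-level tags by sector; not the real dataset.
rows = pd.DataFrame({
    "report_id": ["A", "B", "B", "C"],
    "sector":    ["Energy", "Energy", "Water", "Water"],
    "category":  ["Cybersecurity", "Cybersecurity",
                  "Regulatory / Compliance", "Cybersecurity"],
})

# Report counts per sector-category cell.
grid = (rows.drop_duplicates()
            .pivot_table(index="sector", columns="category",
                         aggfunc="size", fill_value=0))

# Colour intensity scaled relative to the dataset maximum.
intensity = grid / grid.values.max()
```

A cell with half the maximum count renders at half intensity, which is what "darker cells indicate higher counts relative to the dataset maximum" describes.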
Risk Category Definitions
Categories were assigned by an LLM-assisted classifier trained on an AI risk taxonomy. A single report or passage can be tagged with more than one category.
- Cybersecurity: Risks relating to data breaches, AI-enabled attacks, or vulnerabilities introduced by deploying AI systems.
- Operational / Technical: Risks of system failures, integration problems, or performance degradation arising from AI implementation.
- Regulatory / Compliance: Risks from evolving compliance obligations, legal liability, or uncertainty in the regulatory landscape for AI.
- Reputational / Ethical: Risks of brand damage, public concern over algorithmic bias, or broader ethical considerations in AI deployment.
- Information Integrity: Risks from AI-generated misinformation, model hallucinations, or degraded data quality affecting decision-making.
- Third-Party Supply Chain: Risks arising from dependence on external AI vendors, APIs, or suppliers whose reliability or conduct is outside the company's direct control.
- Strategic / Competitive: Risks of competitive displacement, market disruption, or falling behind peers in AI adoption and innovation.
- Workforce Impacts: Risks relating to job displacement, emerging skills gaps, or changes in labour relations driven by AI automation.
- Environmental Impact: Risks associated with the energy consumption, carbon footprint, or resource demands of AI infrastructure.
- National Security: Risks to critical systems, geopolitical exposure, or security-of-state concerns linked to AI deployment or dependency.
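Because classification is multi-label, a minimal sketch of how tagged reports might be stored and filtered is a set of category labels per report. The report names and tag assignments here are hypothetical:

```python
from collections import Counter

# One set of category labels per report; a report can carry several.
report_tags = {
    "report_A": {"Cybersecurity", "Third-Party Supply Chain"},
    "report_B": {"Regulatory / Compliance", "Cybersecurity"},
}

# Per Report tallies: each report contributes once per category it carries.
per_report = Counter(c for tags in report_tags.values() for c in tags)

# Filter reports tagged with a given category.
cyber = [r for r, tags in report_tags.items() if "Cybersecurity" in tags]
```

Set membership makes the "more than one category" rule explicit: both sample reports appear in the Cybersecurity filter while each still counts once per category in the tallies.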