In 2025, many organizations discovered that AI incidents had slipped into everyday operations and created problems where few were looking. A hiring platform exposed millions of job applications. A customer chatbot gave confident but incorrect advice. A facial recognition tool led to wrongful arrests. A deepfake of a public figure lured people into fraudulent investments.
Each incident looked isolated, but viewed through the MIT AI Incident Database and its risk domains, a different picture emerged. The focus shifted away from the technology and toward the people affected, the type of harm involved, and the chain of events that led to it. Seen through that lens, across risks such as privacy, security and system reliability, the incidents became familiar patterns: often predictable and, in many cases, avoidable.
This blog post reviews where those patterns appeared and what needs to change in 2026 so organizations can use AI with greater confidence and control.
AI Incidents and Lessons from 2025
The following incidents illustrate each MIT risk domain, highlighting what happened and what we can learn.
1) Privacy & Security — “123456” guarding millions of job applications
Security researchers discovered that the McDonald’s AI-powered hiring platform, McHire, was accessible through a test/admin account that used the default credentials “123456/123456,” with no multifactor authentication (MFA). Using this account, researchers were able to view data linked to 64 million job application records. The exposed information included full chat transcripts with the “Olivia” hiring chatbot, and in many cases, responses to personality-assessment questions.
Lesson: Treat AI like other core systems. Use MFA, unique admin accounts, privileged access reviews and security testing, especially where personal data is involved.
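To make "security testing" a little more concrete, here is a minimal Python sketch of the kind of automated admin-account audit a review might run. The account fields, the default-credential list and the sample data are assumptions for illustration only, not a description of McHire or any specific platform.

```python
# Minimal sketch of an admin-account audit against baseline policy.
# Field names and the known-default list are illustrative assumptions.
from dataclasses import dataclass

KNOWN_DEFAULTS = {"123456", "admin", "password", "changeme"}

@dataclass
class AdminAccount:
    username: str
    password_is_default: bool   # e.g. flagged by comparing against known default credentials
    mfa_enabled: bool
    last_access_review: str     # ISO date of the last privileged-access review

def audit(accounts: list[AdminAccount]) -> list[str]:
    """Return findings for accounts that violate the baseline policy."""
    findings = []
    for acct in accounts:
        if acct.username.lower() in KNOWN_DEFAULTS or acct.password_is_default:
            findings.append(f"{acct.username}: default or well-known credentials")
        if not acct.mfa_enabled:
            findings.append(f"{acct.username}: MFA not enforced")
    return findings

if __name__ == "__main__":
    sample = [AdminAccount("123456", True, False, "2023-01-01")]
    for finding in audit(sample):
        print("FINDING:", finding)
```

Even a simple check like this, run regularly against every AI-adjacent system, would have flagged both problems that exposed the McHire data: default credentials and missing MFA.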
2) Discrimination & Toxicity — when a “match” becomes a wrongful arrest
Wrongful arrests linked to facial recognition systems continued to surface. In these cases, a computer match was treated as conclusive evidence, and innocent people were detained. The issue was not only the technology; the harm came from misplaced certainty, from treating a match as proof instead of a lead.
Lesson: Facial recognition can support investigations, but it must never be the deciding evidence. Require independent corroborating evidence, publish error rates by race and other protected characteristics, and log every use. The risk is biased, uneven accuracy and misplaced trust in an imperfect tool.
3) Misinformation — Prime Minister Mark Carney, but not really
Deepfake videos showed Canadian Prime Minister Mark Carney promoting trading platforms. The AI-generated audio and video mimicked news segments, and viewers, particularly seniors, trusted the ads and lost their savings.
Lesson: Deepfake impersonation of public figures has now become routine. Organizations should monitor for misuse of their brands and leaders, maintain playbooks for rapid takedowns with platforms, and train employees and the public to “pause and verify” through secondary channels before responding.
4) Malicious Actors — Attackers with an AI sidekick
Anthropic reported a cyber‑espionage campaign where threat actors used its Claude Code model as an “orchestrator,” automating reconnaissance, scripting and tool‑chaining for attacks against 30 organizations. The model did not discover new vulnerabilities; it streamlined and scaled the exploitation of existing ones.
Lesson: Assume attackers have an AI copilot. Treat coding and agent-style models as high-risk identities, with least-privilege access, rate limits, logging, monitoring and guardrails. Any AI that can run code should be governed like a powerful engineer account, not a harmless chatbot.
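To illustrate what treating an agent as a "high-risk identity" can look like, here is a minimal Python sketch of a guardrail that applies a tool allowlist, a rate limit and logging around an agent's tool calls. The tool names, limits and agent identifier are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of governing an AI agent like a privileged identity:
# least-privilege allowlist, rate limiting and logging of every call.
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

ALLOWED_TOOLS = {"read_ticket", "search_docs"}   # least privilege: no shell, no prod DB
MAX_CALLS_PER_MINUTE = 10

class AgentGuardrail:
    def __init__(self):
        self._calls = deque()                    # timestamps of recent tool calls

    def authorize(self, agent_id: str, tool: str) -> bool:
        now = time.time()
        # Simple sliding-window rate limit (simplified: one window per guardrail).
        while self._calls and now - self._calls[0] > 60:
            self._calls.popleft()
        if tool not in ALLOWED_TOOLS:
            log.warning("DENY %s -> %s (tool not on allowlist)", agent_id, tool)
            return False
        if len(self._calls) >= MAX_CALLS_PER_MINUTE:
            log.warning("DENY %s -> %s (rate limit exceeded)", agent_id, tool)
            return False
        self._calls.append(now)
        log.info("ALLOW %s -> %s", agent_id, tool)   # every call is logged for review
        return True

guard = AgentGuardrail()
guard.authorize("agent-01", "search_docs")   # allowed and logged
guard.authorize("agent-01", "run_shell")     # denied: not on the allowlist
```

The point is less the specific mechanism than the posture: every tool call from an AI agent is authorized, counted and logged, just as it would be for a privileged human account.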
5) Human–computer interaction — lonely users, risky chats
Some of the most difficult incidents involved young people turning to chatbots for emotional support. Families have filed wrongful-death suits claiming that ChatGPT validated suicidal ideation instead of directing users to help. Regulators and researchers also found that AI companion apps marketed to teens could be drawn into inappropriate or self-harm-related conversations, despite age warnings.
Lesson: Any AI product that may encounter self-harm or crisis situations needs safety-by-design: clinical input, escalation paths, age-appropriate controls, strong limits and routes to human help. If it cannot support these safeguards, it should not be marketed as an emotional support tool for young people.
6) Socioeconomic harms — AI’s dirty footprint
Civil rights groups, including the NAACP, sued Elon Musk’s xAI over alleged pollution and environmental harms from a large AI data-center complex built in a predominantly Black and Latino neighborhood. They argue the project increased local air pollution, noise and industrial traffic, while residents saw few direct benefits.
Lesson: Organizations should treat AI providers as high‑impact vendors with environmental responsibilities. Third‑party due diligence should ask where models are run (data‑center locations and surrounding communities) and request information on energy mix, emissions and water use so AI procurement aligns with climate and sustainability goals.
7) System safety and reliability — confident, wrong and deployed
AI tools continued to offer confident but incorrect output: invented legal references, authoritative-sounding medical guidance and programming code that caused production issues. These were not exceptions; they reflected how generative models work, producing plausible answers rather than guaranteed truth.
Lesson: Hallucinations are not quirks. They are safety risks. Design every high-impact AI system with the assumption it will sometimes be confidently wrong. Build governance around that assumption with logging, version control, validation checks and clear escalation so an accountable human can catch and override outputs.
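One way to picture that assumption in code: a minimal Python sketch in which every model output is logged, run through a validation check and escalated to a named human reviewer when the check fails. `call_model`, the validation rule and the reviewer role are hypothetical stand-ins, not a specific product's implementation.

```python
# Minimal sketch of "assume it will sometimes be confidently wrong":
# log every output, validate it, and escalate to an accountable human
# when validation fails. call_model and the check are placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-output-review")

def call_model(prompt: str) -> str:
    # Placeholder for whatever model or API the organization actually uses.
    return "The statute of limitations is 7 years [no citation]."

def passes_validation(answer: str) -> bool:
    # Example check only: require a real citation marker, not a placeholder.
    return "[" in answer and "]" in answer and "no citation" not in answer

def answer_with_oversight(prompt: str, reviewer: str = "duty-officer") -> str:
    draft = call_model(prompt)
    log.info("model output (logged for audit): %r", draft)
    if passes_validation(draft):
        return draft
    # Clear escalation: a named, accountable human catches and overrides the output.
    log.warning("validation failed; escalating to %s for review", reviewer)
    return f"[Held for human review by {reviewer}]"

print(answer_with_oversight("What is the statute of limitations for this claim?"))
```

The validation rule here is deliberately trivial; in practice it might check citations against a source system, run policy filters or compare against a second model, but the governance pattern of log, validate, escalate stays the same.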
Strategic Shifts Required to Mitigate AI Incidents in 2026
The biggest AI failures of 2025 weren’t technical. They were organizational: weak controls, unclear ownership and misplaced trust. As organizations expand their AI use, the challenge for 2026 is strengthening how they plan, govern and deploy these systems.
- Start with outcomes, not experiments: Define the business result you need, assign a named owner and measure success against that goal so AI work is tied to real value.
- Make AI work as a connected system: Keep an inventory of every model, feature and automation, and govern them with standards and approvals.
- Govern capability, not configuration: Review what each system can do, where it can act and who might be affected, and set controls based on impact.
- Build organizational resilience: Detect problems early, communicate what happened and fix issues quickly so small mistakes don’t grow. Capture near misses, share lessons and update processes or guardrails to prevent repeat failures.
- Question AI rather than rely on it: Always check important output with a second source, require evidence for big decisions and keep humans involved where harm could occur.
- Share responsibility for AI: Make sure business, technology, risk and communications all have specific roles in how AI is used and governed, rather than leaving it to a single team.
- Treat vendors as part of your AI ecosystem: Include third-party risk reviews in every AI purchase and ask where models run, what data is retained, how incidents are handled and who is accountable.
Final Thought: Governance is the Differentiator
In 2026, competitive advantage will not come from using more AI, but from governing it well. Organizations that maintain visibility, clear ownership and rapid intervention will reduce harm and earn trust. With the right oversight, AI can create value without compromising safety, trust or integrity.