Consulting in the Age of AI: What Deloitte’s Citation Controversy Means for Trust, Governance, and Public Policy


Introduction

In late 2025, global consulting firm Deloitte found itself at the center of a growing controversy over the use of artificial intelligence in research and reporting. A high-value government contract for a healthcare workforce report in Newfoundland and Labrador was flagged for fabricated citations, allegedly generated with the assistance of AI tools. Far from being minor footnote errors, the fabricated references have raised broader questions about trust, quality assurance, and the governance of AI-assisted work in consulting and public policy (CBC/Radio-Canada, 2025; Fortune, 2025).

1. The Case at Hand: Fabricated Citations in a Major Report

In May 2025, the Department of Health and Community Services for the Government of Newfoundland and Labrador commissioned Deloitte to produce a Health Human Resources Plan at a cost of nearly $1.6 million CAD. Intended to guide policy on healthcare staffing shortages, the 526-page document contained at least four citations to academic papers that do not exist, as well as references attributed to authors who had never written such work (The Independent, 2025; HR Reporter, 2025).

These non-existent sources were used to support claims about recruitment strategies, monetary incentives for retention, virtual care, and pandemic impacts—topics central to long-term planning. When a researcher named in one of the citations stated that the referenced work “does not exist,” it underscored the seriousness of the allegations (The Independent, 2025; HR Reporter, 2025).

Following the discovery, the provincial government asked Deloitte to review the report’s accuracy and confirm the veracity of all citations. Deloitte acknowledged the incorrect references while maintaining that the overall conclusions remain sound (The Independent, 2025; CBC/Radio-Canada, 2025).

2. Deloitte’s Response and Broader Pattern

Deloitte Canada issued a public statement saying it “stands by the recommendations put forward” and would revise citation errors but emphasized that AI was only used selectively to assist in research citations, not to write the entire report (The Independent, 2025; Fortune, 2025).

This is not an isolated incident for the firm. Earlier in 2025, Deloitte admitted that a report for the Australian federal government contained AI-related inaccuracies—including fabricated quotes and non-existent references—and agreed to refund part of the contract fee (NDTV, 2025; policyalternatives.ca, 2025). These successive episodes have put Deloitte’s internal controls, QA processes, and use of AI tools under intense scrutiny.

3. Why It Matters: Trust, Accuracy, and AI’s Role

At stake is public trust in expert analysis. Governments rely on consulting firms for research that informs public policy and major spending decisions. When foundational evidence—academic citations and source material—is flawed or fictitious, it not only undermines specific recommendations but also diminishes confidence in the institutions that depend on that work (HR Reporter, 2025; policyalternatives.ca, 2025).

The situation also illustrates a well-documented weakness of generative AI models: they can produce plausible-looking but factually incorrect content—a phenomenon referred to as AI hallucination (Wikipedia, 2025). When such models are used without rigorous human verification, especially in high-stakes reports, the potential for misleading conclusions increases dramatically.

Concerns have mounted among public figures. In response to the Deloitte revelations, Newfoundland and Labrador’s New Democratic Party called for strict regulations governing AI use in public sector commissioning, arguing that fabricated sources “undermine confidence” in governance and that decisions must be based on real human consultation and accurate evidence (The Independent, 2025; HR Reporter, 2025).

Union leaders and healthcare professionals have echoed these worries, emphasizing that reports meant to shape workforce and staffing strategies must be grounded in verified evidence, lest flawed information exacerbate real-world problems (HR Reporter, 2025).

4. Implications for the Consulting Industry

The Deloitte episode highlights broader challenges as AI tools become embedded in professional services workflows:

  • Quality Assurance and Verification: Firms must establish stronger internal controls to ensure that AI-assisted content is checked against authoritative sources. Relying on raw outputs without human oversight risks embedding errors that go undetected until public release (policyalternatives.ca, 2025; HR Reporter, 2025).
  • Contractual Clarity: Governments and large organizations may begin to demand explicit language in contracts regarding the permissible use of AI tools, including audit rights and requirements for transparent methodologies.
  • Ethics and Transparency: The expectation that expert analysis should be rooted in verifiable sources remains foundational. Consultancies will need to clarify how they integrate AI into research without diminishing intellectual rigor.
  • Regulatory Pressure: Calls from politicians for AI regulations in public sector reports may influence future policies, shaping how AI can and cannot be used in high-stakes work (The Independent, 2025).
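The verification step described in the first bullet need not be entirely manual. As a minimal sketch, assuming references have been extracted into structured records with DOIs where available, a firm could machine-check each DOI against a public registry such as Crossref and route everything else to human review. The function and field names below are illustrative, not any firm’s actual tooling:

```python
# Hypothetical sketch: triage cited references by checking DOIs against the
# public Crossref REST API (https://api.crossref.org). References without a
# DOI, or with a DOI that does not resolve, are flagged for manual review.
import json
import urllib.error
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works/"


def check_doi(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for this DOI, False on 404."""
    try:
        with urllib.request.urlopen(CROSSREF_WORKS + doi, timeout=timeout) as resp:
            record = json.load(resp)
        return record.get("status") == "ok"
    except urllib.error.HTTPError as err:
        if err.code == 404:  # DOI not registered: possibly fabricated
            return False
        raise  # other HTTP errors (rate limits, outages) need human attention


def triage_references(refs, resolver=check_doi):
    """Split reference records into (verifiable, suspect) lists.

    `refs` is a list of dicts; records lacking a "doi" key cannot be
    machine-checked here and land in the suspect pile for manual review,
    alongside DOIs the resolver cannot confirm.
    """
    verifiable, suspect = [], []
    for ref in refs:
        doi = ref.get("doi")
        if doi and resolver(doi):
            verifiable.append(ref)
        else:
            suspect.append(ref)
    return verifiable, suspect
```

A check like this would have flagged the non-existent papers immediately, but it is only a first filter: a DOI that resolves proves the paper exists, not that it supports the claim it is cited for — that judgment still requires a human reader.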

5. Conclusion: Toward Responsible AI Integration

The Deloitte citation controversy is a cautionary tale about the limits of generative AI when used without robust verification. At its core, the issue is not simply about technological tools, but about responsibility, trust, and the credibility of expert advice in an era increasingly defined by AI.

If governments, consultancies, and organizations are to harness the power of AI, they must balance efficiency with rigorous human oversight, ensuring that evidence remains factual, transparent, and defensible. The events of 2025 make it clear that without such guardrails, the promise of AI can be overshadowed by its pitfalls.

References: 

CBC/Radio-Canada. (2025, November 22). Department asks Deloitte to review health report after false citations found. https://www.cbc.ca/news/canada/newfoundland-labrador/wakeham-health-report-sources-9.6992094

Fortune. (2025, November 25). Deloitte caught with fabricated AI-generated research in a million-dollar report. https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/

HR Reporter. (2025, November 25). AI errors: Province grapples with Deloitte report marred by false citations. https://www.hrreporter.com/focus-areas/automation-ai/ai-errors-province-grapples-with-deloitte-report-marred-by-false-citations/

NDTV. (2025, October 8). Deloitte’s AI fallout explained: The $440,000 report that backfired. https://www.ndtv.com/world-news/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098

The Independent. (2025, November 22). Major N.L. healthcare report contains errors likely generated by A.I. https://theindependent.ca/news/lji/provincial-government-responds-to-deloitte-report-found-to-contain-false-citations-likely-generated-by-a-i/

policyalternatives.ca. (2025, December 4). Consulting firms’ latest hustle: Using AI to write government reports. https://www.policyalternatives.ca/news-research/consulting-firms-latest-hustle-using-ai-to-write-government-reports

Wikipedia. (2025). Hallucination (artificial intelligence). https://en.wikipedia.org/wiki/Hallucination_%28artificial_intelligence%29