Deloitte announced a major expansion of its artificial intelligence (AI) capabilities on Monday, deploying Anthropic's Claude chatbot to its nearly 500,000 global employees. The announcement coincided with reports that the professional services firm would refund the Australian government for an official review that contained AI-generated inaccuracies.
The firm's enterprise AI initiative involves rolling out Anthropic's Claude to its extensive workforce, building upon a partnership established last year. According to an Anthropic blog post, Deloitte and Anthropic plan to develop compliance products and features for regulated sectors, including financial services, healthcare, and public services. CNBC reported that Deloitte also intends to create distinct AI agent "personas" to represent various internal departments, such as accounting and software development.
Ranjit Bawa, Deloitte's global technology and ecosystems and alliances leader, stated in the Anthropic blog post, "Deloitte is making this significant investment in Anthropic’s AI platform because our approach to responsible AI is very aligned, and together we can reshape how enterprises operate over the next decade. Claude continues to be a leading choice for many clients and our own AI transformation." Financial terms of the alliance were not disclosed.
Simultaneously, the Australian Department of Employment and Workplace Relations confirmed that Deloitte would refund the final installment of an A$439,000 "independent assurance review." According to the Financial Times, the review, which the department commissioned and published earlier this year, contained multiple errors, including citations to non-existent academic reports, as the Australian Financial Review detailed in August. A corrected version of the document was subsequently uploaded to the department’s website.
The incident in Australia is not an isolated case; recent reports point to AI accuracy problems across a range of applications. In May, the Chicago Sun-Times acknowledged running an AI-generated list of books with hallucinated titles. Business Insider reported that Amazon’s AI productivity tool, Q Business, struggled with accuracy in its first year. And earlier this year, Anthropic's own lawyer apologized for submitting an AI-generated citation from Claude in a legal dispute.