Kim Kardashian Details ChatGPT Factual Inaccuracies in Legal Studies


In a recent interview with Vanity Fair, television personality Kim Kardashian discussed her experiences with ChatGPT, characterizing her relationship with the artificial intelligence as a "toxic" "frenemy" because of the inaccurate legal information it provided during her law studies. Kardashian said the large language model (LLM) had contributed to her failing examinations.

Kardashian described her process for using ChatGPT: photographing legal questions and submitting the images to the AI for answers. According to her, the responses were consistently inaccurate and directly led to test failures. She recounted venting her frustration at the AI: "They're always wrong. It has made me fail tests... And then I'll get mad and I'll yell at it and be like, 'You made me fail!'" These interactions illustrate a public figure's firsthand encounter with the reliability challenges of AI in critical knowledge domains.

The reported behavior aligns with documented limitations of generative artificial intelligence, specifically the phenomenon known as "hallucination," in which LLMs produce plausible but factually incorrect or entirely fabricated information. This characteristic stems from the technology's design: the model predicts the most statistically likely response based on its training data rather than guaranteeing factual accuracy or verifying information. The potential for such inaccuracies has drawn professional scrutiny; lawyers, for example, have faced sanctions in legal proceedings for submitting briefs that cited non-existent cases generated by ChatGPT, a development reported by outlets including Reuters. Such incidents underscore the necessity of human verification when deploying AI in domains requiring strict factual precision.
Kardashian also described attempting to engage with ChatGPT on an emotional level after receiving incorrect information, despite the fact that AI systems do not possess emotions. She recounted prompting the AI with questions like, "'Hey, you're going to make me fail, how does that make you feel that you need to really know these answers?'" She noted that ChatGPT's responses typically encouraged her to "trust your own instincts." Kardashian added that she shared screenshots of these exchanges with a personal group chat, illustrating the personal impact of AI's performance on users seeking factual assistance.

Tags: Live AI Agents
