In a personal injury lawsuit in Manhattan, a lawyer representing the plaintiff has found himself in a precarious situation after submitting a federal court filing that cited nonexistent cases. The attorney, Steven A. Schwartz, was using the AI chatbot ChatGPT for the first time, unaware that it could simply invent cases with no basis in reality.

Schwartz is representing a man who is suing the airline Avianca over a 2019 incident in which a serving cart allegedly struck his knee. The lawyer says he consulted ChatGPT and specifically asked whether the cases it cited were genuine, and the chatbot assured him they were. Only after Avianca's legal team pointed out in a subsequent filing that the cases did not exist did Schwartz realize the error, whether it was his own or the machine's.

Judge P. Kevin Castel, who is presiding over the case, has scheduled a hearing for June 8 to determine the appropriate course of action. According to the New York Times, the judge is not pleased with the situation.

ChatGPT, launched in late 2022, quickly gained popularity. A generative AI, it can hold seemingly natural conversations with users for extended stretches. But the technology is also known for its inaccuracy and its tendency to invent facts and sources. Google's comparable product, Bard, faces the same problems.

Despite these limitations, people continue to treat this experimental technology as a reliable source of information. Numerous reports have surfaced of students using ChatGPT to write their papers, and of teachers mistakenly assuming they can verify those papers' authenticity by consulting the chatbot. OpenAI, the company behind ChatGPT, does offer a detection service meant to identify AI-written text, but its accuracy rate reportedly stands at only 20%. And when presented with random paragraphs, ChatGPT cannot tell whether it generated them itself; in my own testing, it took credit for work written by others.

Controversy surrounds chatbots like ChatGPT for various reasons, including fears among tech experts that AI could spiral out of control, leading to a Terminator-style scenario in which humanity is annihilated. Elon Musk, the billionaire who co-founded OpenAI, hinted at this possibility when he recently called for a six-month pause in AI development, an appeal that may have been influenced by his own efforts to build a competitor to ChatGPT. Musk's involvement with OpenAI ended in 2018 after he reportedly tried to take over the company.

However, the idea of machines rebelling against organic life remains far-fetched. Chatbots like ChatGPT essentially function as advanced predictive text tools: they generate replies by predicting, piece by piece, what text is most likely to come next, which often produces inaccurate information. Rather than acknowledging uncertainty, the technology conjures up fictitious sources, and if asked whether those sources are authentic, it confidently affirms that they are.
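To make the "predictive text" point concrete, here is a deliberately tiny sketch. This is not how ChatGPT actually works internally (it relies on a large neural network trained on vast amounts of text), and the toy corpus and function names below are invented for illustration. But the basic idea is the same: the program only learns which words tend to follow which, so it strings together plausible-sounding output with no notion of whether any of it is true.

```python
# Toy illustration (not ChatGPT itself): a tiny "predictive text" model.
# It only learns which word tends to follow which, so it will happily
# produce fluent-looking output with no check against reality.
import random
from collections import defaultdict

corpus = (
    "the court cited the case . "
    "the court cited the ruling . "
    "the airline denied the claim . "
    "the plaintiff cited the case ."
).split()

# Count which words follow which (a simple bigram model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        # Pick a statistically plausible next word; truth never enters into it.
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the plaintiff cited the ruling . the court cited"
```

Run it a few times and it produces different, equally confident-sounding snippets. Nothing in the model checks its output against the real world, and that same gap, at vastly greater scale, is what lets a far more sophisticated system invent court cases.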

Humans have grown accustomed to search engines like Google, which, imperfect as they may be, at least strive to surface the most accurate information. Wikipedia, initially met with skepticism, has proven to be a generally reliable source thanks to continuous oversight by a community of editors committed to accuracy. ChatGPT, by contrast, has no concern for the veracity of the information it generates. It is akin to a magic trick: it impresses users while disregarding accuracy. As AI becomes integrated into more everyday technologies, internet users will repeatedly run into the harsh reality that these new tools do not prioritize truth.

ChatGPT has no awareness that it is disseminating inaccurate information. The real danger of AI lies not in its potential to develop a will of its own, but in our inclination to blindly accept whatever the machines tell us, even when it is demonstrably wrong. As AI becomes more embedded in our daily lives, this hard lesson will continue to unfold, and countering it requires us to stay vigilant, questioning and fact-checking the information we encounter, irrespective of its source.

The responsibility ultimately rests with us to recognize AI's limitations and to prioritize accuracy and truth in our interactions with it. Remarkable as ChatGPT may be, only a diligent commitment to verification can mitigate the risks and let us harness the technology's potential in a responsible and informed manner.
