🚨 AI Hallucination
Today's Korean Social News for Beginners | 2026.03.23
0️⃣ Fake Court Rulings Made by AI Were Submitted as Real Evidence
📌 Fake Rulings Created by AI Appeared in Court Documents…Supreme Court Forms Task Force
💬 At the Seoul Northern District Court, a legal brief submitted by the plaintiff cited court rulings that were created by AI and do not actually exist. The Supreme Court formed a dedicated task force to respond to the spread of false information caused by AI hallucinations. As AI generates nonexistent rulings and laws as if they were real, building a verification system has become an urgent task in courts and academia. The problem of legal professionals lacking AI literacy and failing to catch false information is growing serious, and discussions on technical fixes and institutional responses have begun in earnest.
💡 Summary
- AI hallucination is when AI generates information that does not exist as if it were true.
- Fake rulings made by AI were cited in court documents, raising serious concerns about the reliability of the justice system.
- Building verification systems and providing AI literacy education for legal professionals have emerged as key priorities.
1️⃣ Definition
AI Hallucination refers to a phenomenon where artificial intelligence generates information that it was not trained on, or that does not exist in reality, and presents it as if it were fact. It commonly appears in conversational AI and generative AI tools. Because the content is wrapped in convincing sentences and professional terminology, it is hard for ordinary people to tell whether it is true or false.
In simple terms, it is when AI does not say "I don't know" but instead makes up a plausible-sounding answer. For example, AI might produce something like "Supreme Court Ruling 2024Da○○○○○○" with a specific case number — but that ruling may not exist at all. The problem is that even the format looks so complete and professional that even experts can be fooled at first glance.
💡 Why does this matter?
- When false information created by AI hallucination enters professional fields like courts, medicine, or research, it can lead to serious harm.
- If fake case citations appear in legal briefs, they can directly affect trial outcomes and the rights of the people involved.
- Public trust in the justice system is a form of social capital — once shaken, it takes a long time to rebuild.
- As AI tools become more convenient, critically reviewing their outputs has become essential in every field.
2️⃣ What Happened and What Are the Issues
📕 What Happened
A legal brief was submitted to court citing rulings that do not exist. Here is what happened:
- A brief submitted by the plaintiff to the Seoul Northern District Court was found to cite fake rulings generated by AI.
- The cited rulings were confirmed not to exist anywhere — not in the Supreme Court or any lower court.
- AI is believed to have created highly convincing fake information that included a case number, case name, and summary.
- The falsity was discovered when the opposing party or the court independently searched for and checked the citations.
The Supreme Court responded immediately. Key actions taken include:
- The Supreme Court formed a dedicated task force (TF) to address the spread of false information caused by AI hallucinations.
- The court is reviewing plans to build a system to verify whether AI-generated content is authentic.
- Plans to strengthen AI literacy training for legal professionals are also being discussed.
- Guidelines for submitting documents that use AI tools are expected to be released.
📕 Structural Problems
Existing verification systems are vulnerable to AI-generated content. Key reasons include:
- Court document review procedures have focused on formal requirements rather than verifying the accuracy of the content.
- It is not practically possible to manually cross-check every cited case in a brief against legal databases.
- Text generated by AI is almost indistinguishable from text written by human experts in style and format.
- If a single brief cites dozens of cases, the burden of fully verifying all of them is enormous.
A lack of AI literacy among professionals has made the problem worse. Key issues include:
- Even among legal professionals, a practice of using AI outputs directly without independent verification has been spreading.
- The more convenient AI tools become, the less carefully people tend to check the results.
- Law schools and judicial training programs have not yet provided sufficient education on AI tools.
- There are also cases where lawyers include materials prepared by clients using AI without verifying them first.
💡 Key Issues in This Case
- Sophistication of false information: AI-generated fake rulings are now refined enough to deceive even experts
- Verification gap: Court document review procedures have no standard for checking AI-generated content
- Who is responsible: Debate over the legal responsibility of parties and legal representatives who cited false rulings
- Damage to trust: The impact that fake case citations have on the credibility of the entire justice system
- No rules yet: The legal community has no guidelines or standards for using AI tools
3️⃣ What Should Be Changed
✅ Build Verification Systems
Technical tools are needed to check whether cited cases are real. Key directions include:
- The court's electronic filing system should add an automatic case-matching feature to verify submissions before they are accepted.
- Real-time search and confirmation tools linked to official case databases such as the Supreme Court's Legal Information system should be developed.
- Software that detects AI-generated text can be piloted in court administrative procedures.
- For important documents, a separate list of cited cases should be required so that cross-checking becomes mandatory.
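The automatic case-matching idea above can be sketched in a few lines: extract every string that looks like a case number from a brief, then flag any that are missing from the official database. The regex pattern, the case-number format, and the sample database below are illustrative assumptions, since the actual Korean case-number formats and the Supreme Court's database interface are not specified here.

```python
import re

# Hypothetical sketch: flag cited case numbers that are absent from an
# official database. The pattern (year + romanized case-type code +
# serial number) and the sample data are illustrative assumptions,
# not the court's actual format or API.
CASE_NUMBER = re.compile(r"\b(19|20)\d{2}(Da|Do|Du|Hu)\d{1,6}\b")

def find_unverified_citations(brief_text, known_cases):
    """Return cited case numbers not found in the official database."""
    cited = set(m.group(0) for m in CASE_NUMBER.finditer(brief_text))
    return sorted(cited - known_cases)

# Usage: every returned number would need manual review before filing.
known = {"2023Da11111", "2024Do2222"}
brief = "See Supreme Court 2023Da11111 and 2024Da99999 for support."
print(find_unverified_citations(brief, known))  # ['2024Da99999']
```

A filter like this cannot prove a citation is genuine (AI can also fabricate a number that happens to exist), so it only narrows the manual cross-checking burden mentioned above rather than replacing it.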
✅ Strengthen AI Literacy Education
AI education for legal professionals and experts is urgently needed. Key tasks include:
- Judicial training programs and law school curricula should include the limitations of AI tools and how to verify their outputs.
- Regular AI literacy training courses should be created for working legal professionals and government officials.
- Cross-checking AI-generated outputs against primary sources such as original rulings and statutes should become an industry norm.
- When legal representatives receive materials prepared by clients using AI, their responsibility to verify those materials should be clearly defined.
✅ Guidelines and Legal Accountability
The legal community needs rules for using AI tools. Key directions include:
- A requirement to disclose whether AI-generated content was used in court submissions should be considered.
- Legal penalties such as bearing litigation costs or facing disciplinary action should be clearly defined for submitting briefs with false case citations.
- The bar association and the Supreme Court should jointly create AI Usage Guidelines and update them regularly.
- Cross-ministry discussions are needed so that similar guidelines can spread to other professional fields including medicine, academia, and accounting.
4️⃣ Key Terms Explained
🔎 Generative AI
- Generative AI is artificial intelligence that creates new content such as text, images, and audio on its own.
- Generative AI refers to AI technology that produces new outputs — documents, images, music, and more — based on questions or instructions entered by a person. Well-known examples include GPT-based models and conversational AI like Claude.
- This technology learns from enormous amounts of data and quickly generates results at a level comparable to human work. Its use is growing in the legal field for tasks like drafting legal documents, summarizing case law, and assisting with contract review.
- However, for information not included in its training data or for content that has changed after training, AI can produce errors or simply make things up. This is exactly what hallucination is. Any output produced by AI must go through a process where a human expert reviews it and checks it against original source materials.
🔎 Court Administration Office
- The Court Administration Office is the agency under the Supreme Court that oversees administrative operations for all courts nationwide.
- The Court Administration Office (법원행정처) is a body under the Supreme Court responsible for nationwide court administration, including personnel management, budgeting, facility management, and judicial informatization. It also handles the personnel affairs of non-judicial court staff.
- It has recently been leading the operation of the electronic litigation system, management of court databases, and digitalization of judicial services. The task force formed to respond to AI hallucination was organized under the leadership of the Supreme Court and the Court Administration Office.
- The document submission guidelines and any new verification features added to the electronic litigation system that the Court Administration Office develops will likely be the key to preventing AI misuse going forward.
🔎 Expert System
- An expert system is an AI system designed to assist with decision-making based on specialized knowledge in a specific field.
- An expert system is a type of AI designed to assist human specialists by encoding the knowledge and decision-making processes of experts in fields such as medical diagnosis, legal advice, or financial analysis into a computer.
- Because it only operates within pre-defined rules and a knowledge database, it is less likely to produce hallucinations — making up content — compared to generative AI. A legal expert system that accurately contains official statutes and case law can be highly reliable.
- However, it has limitations: it cannot handle new situations or questions outside its trained scope. Recent research is exploring how to combine generative AI and expert systems to improve both creativity and reliability.
🔎 AI Literacy
- AI literacy is the ability to understand AI correctly and use it critically.
- AI literacy refers to the ability to understand how artificial intelligence works and its limitations, use AI tools appropriately, and critically evaluate the outputs AI produces. The key is not blindly trusting AI-generated content but always putting it through a verification process.
- Just as computer literacy (digital literacy) has become a basic skill for modern people, AI literacy is expected to become a fundamental competency required in all professions. It is especially important for those in fields where accuracy of information is critical — such as legal professionals, medical workers, researchers, and journalists.
- Practical ways to improve AI literacy include: first, making it a habit to always cross-check AI outputs against original sources; second, treating important information as something that must be re-confirmed from official sources, under the premise that AI can be wrong; and third, consistently receiving foundational education about the characteristics and limitations of AI tools.
5️⃣ Frequently Asked Questions (FAQ)
Q: What are the legal consequences of submitting AI-generated fake rulings to court?
A: Under current law, this could be treated as fraud or obstruction of official duties using deception.
- Submitting a nonexistent ruling to a court as if it were real is, in principle, a submission of false materials. If the intent was to deceive the opposing party for personal gain, it could be charged as litigation fraud (Article 347 of the Criminal Act). If it interfered with the legitimate duties of the court, it could be charged as obstruction of official duties through deception (Article 137 of the Criminal Act).
- If a lawyer knowingly submitted the document, disciplinary proceedings could be initiated under the Attorney-at-Law Act. However, if someone submitted the information without knowing it was AI-generated, whether there was intent becomes the central issue. Currently, there is no specific provision that directly governs this situation, making it urgent for the Court Administration Office to establish guidelines.
Q: Should AI not be used in legal work at all?
A: It is not about banning it entirely — the problem is using outputs without verification.
- AI can be very helpful for searching case law, drafting documents, and summarizing statutes. The key is to use AI as a "support tool" and always verify the outputs against official databases or original texts.
- Just as you might double-check an important calculation even when using a calculator, AI outputs must always be reviewed by the responsible professional before final submission. The moment you enjoy the convenience but skip the verification, accidents like this one happen. Going forward in the legal field, the standard of accountability will likely be not "did you use AI?" but "did you go through a verification process?"
Q: Can ordinary people also become victims of AI hallucinations?
A: Yes, AI hallucination harm can easily occur in everyday information searches as well.
- A typical example is asking AI about a legal issue or medical information and receiving an answer that sounds convincing but is wrong. Information that AI provides about medication dosage, contract terms, or tax rules may differ from reality.
- The best way to protect yourself is to always cross-check AI answers against official government websites, professional consultations, or original source materials before making any important decision. Always keep in mind that AI does not say "I don't know." The more convincing information seems, the more you need to question and verify it.