Teach Your LLM to Always Answer With Facts, Not Fiction

Large Language Models (LLMs) are advanced AI systems that can answer a wide range of questions. While they provide informative responses on topics they know well, on unfamiliar topics they can confidently produce answers that are simply wrong. This phenomenon is known as hallucination.

What Is Hallucination?

Before we look at an example of an LLM hallucination, let's consider a definition of the term "hallucination" as described by Wikipedia:
