LLMs Get a Memory Boost with HippoRAG

Large Language Models (LLMs) have quickly proven themselves to be invaluable tools for thinking. Trained on massive datasets of text, code, and other media, they can write human-quality prose, translate languages, generate images, answer questions informatively, and even produce many kinds of creative content. But for all their brilliance, even the most advanced LLMs have a fundamental constraint: their knowledge is frozen in time. Everything they "know" is determined by the data they were trained on, leaving them unable to adapt to new information or learn about your specific needs and preferences.

To address this limitation, researchers developed Retrieval-Augmented Generation (RAG). RAG gives LLMs access to datastores that can be updated in real time. By querying these dynamic external knowledge bases, models can retrieve relevant information on the fly and incorporate it into their responses. Because they tend to rely on keyword matching, however, standard RAG implementations struggle when a question requires connecting information across multiple sources, a challenge known as "multi-hop" reasoning.
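To make the retrieval step concrete, here is a minimal sketch of a keyword-matching RAG pipeline in plain Python. Everything in it is illustrative: the toy document store, the overlap scorer, and the prompt template are stand-ins for the embedding indexes and retrievers production systems actually use. It also shows the multi-hop failure mode in miniature.

```python
# A minimal sketch of a keyword-overlap RAG pipeline (hypothetical
# documents, scoring, and prompt template). Real systems typically rank
# passages with vector embeddings, but crude word overlap is enough to
# show where single-pass retrieval breaks down on multi-hop questions.

DOCUMENTS = [
    "Alice works as a researcher at Acme Labs.",
    "Acme Labs is headquartered in Toronto.",
    "Bob is a chef who lives in Paris.",
]

def overlap(question: str, doc: str) -> int:
    """Score a document by how many lowercase words it shares with the question."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the top-k documents ranked by keyword overlap."""
    return sorted(DOCUMENTS, key=lambda d: overlap(question, d), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Splice the retrieved passages into a prompt for the LLM to answer from."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Single-hop: the answer lives in one document, and keyword overlap
# finds it directly ("Acme", "Labs", "is" all match document 2).
print(build_prompt("Where is Acme Labs headquartered?"))

# Multi-hop: answering requires chaining documents 1 and 2, but no single
# document mentions both "Alice" and a city, so every document gets a
# near-identical score and retrieval cannot reliably assemble the chain.
print(build_prompt("In which city does Alice work?"))
```

Answering the second question requires hopping from Alice to Acme Labs, and from Acme Labs to Toronto. A retriever that scores each document independently has no mechanism for following that chain, and this is the gap HippoRAG sets out to close.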
