
3 posts tagged with "semantic search"


How To Use RAG To Crowdsource Event Forecasts

· 31 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.


Introduction

As someone who works with vector databases daily, I've become accustomed to the conventional applications of Retrieval-Augmented Generation (RAG) in scenarios such as extracting information from dense user manuals, navigating complex code bases, or conducting in-depth legal research. These "talk to your documents" use cases, while impressive, often revolve around similar challenges across different datasets, which can become somewhat monotonous.

So, it was particularly refreshing when I came across the paper "Approaching Human-Level Forecasting with Language Models" by researchers Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt from UC Berkeley. They propose a novel (at least to me) use of RAG: forecasting events!
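
To make the idea concrete, here is a minimal sketch of the retrieve-then-forecast loop: rank news snippets against a forecasting question and assemble the best matches into a prompt for an LLM to answer with a probability. The bag-of-words similarity, the sample articles, and the prompt wording are my own stand-ins, not the paper's pipeline, which retrieves real news articles and uses a fine-tuned reasoning model.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical news corpus; the paper retrieves real articles via news APIs.
articles = [
    "Central bank signals a rate cut at its next meeting.",
    "New chip factory opens, boosting regional employment.",
    "Polls show the incumbent leading by a narrow margin.",
]

question = "Will the central bank cut rates this quarter?"
q = embed(question)
ranked = sorted(articles, key=lambda a: cosine(q, embed(a)), reverse=True)

# Assemble a forecasting prompt from the top retrieved articles;
# an LLM would be asked to output a calibrated probability.
context = "\n".join(ranked[:2])
prompt = f"Context:\n{context}\n\nQuestion: {question}\nGive a probability between 0 and 1."
print(prompt)
```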

How To Use RAG To Improve Your LLM's Reasoning Skills

· 12 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.


Introduction

Retrieval Augmented Generation (RAG) typically finds its place in enhancing document-based question answering (QnA), effectively leveraging extensive databases to provide contextually relevant information for Large Language Models (LLMs) to formulate precise answers. Traditionally, when looking to boost the reasoning capabilities of LLMs, the go-to strategy has been fine-tuning these models with additional data. However, fine-tuning is not only resource-intensive but also presents scalability challenges.

Interestingly, RAG could offer a more efficient pathway to enhance LLMs' reasoning skills without the hefty costs of fine-tuning. This premise is explored in depth in "Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation" by Eric Melz, which proposes a novel use of RAG beyond its conventional application: storing the reasoning chains from previously solved problems and retrieving them to guide the model on new ones.
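
As a rough illustration of that rationale-memory idea, the sketch below caches the reasoning chain from a solved problem and retrieves it as a worked example for a similar new question. The `remember`/`retrieve` helpers and the string-similarity search are my own simplifications; ARM-RAG itself uses a proper vector search over stored rationales.

```python
from difflib import SequenceMatcher

# Toy rationale memory in the spirit of ARM-RAG: keep reasoning chains
# from previously solved problems and look them up for similar new ones.
memory: list[tuple[str, str]] = []  # (question, rationale)

def remember(question: str, rationale: str) -> None:
    memory.append((question, rationale))

def retrieve(question: str, k: int = 1) -> list[str]:
    # String similarity stands in for the embedding search a real system would use.
    scored = sorted(memory,
                    key=lambda qr: SequenceMatcher(None, question, qr[0]).ratio(),
                    reverse=True)
    return [rationale for _, rationale in scored[:k]]

remember("What is 12 * 15?", "12 * 15 = 12 * 10 + 12 * 5 = 120 + 60 = 180.")

new_q = "What is 12 * 25?"
hints = retrieve(new_q)
# The retrieved rationale is injected into the prompt as a worked example.
prompt = f"Similar solved problem:\n{hints[0]}\n\nNow solve: {new_q}"
print(prompt)
```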

How to do RAG without Vector Databases

· 13 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.


Introduction

When it comes to endowing Large Language Models (LLMs) with long-term memory, the prevalent approach is a Retrieval Augmented Generation (RAG) solution, with a vector database serving as the storage layer for that memory. This raises the question: can we achieve the same results without vector databases?

Enter "RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models" by Brandon Kynoch, Hugo Latapie, and Dwane van der Sluis. This paper proposes the use of an automatically constructed knowledge graph as the backbone of long-term memory for LLMs.