LlamaIndex

Andrei 2024-01-05

A Cheat Sheet and Some Recipes for Building Advanced RAG

A new year is upon us, and perhaps you're looking to break into the RAG scene by building your very first RAG system. Or maybe you've already built basic RAG systems and are now looking to enhance them into more advanced ones that better handle your users' queries and data structures.

In either case, knowing where or how to begin can be a challenge in itself! If that's true, then hopefully this blog post points you in the right direction for your next steps and, more importantly, equips you with a mental model for anchoring your decisions when building advanced RAG systems.

The RAG cheat sheet shared above was greatly inspired by a recent RAG survey paper ("Retrieval-Augmented Generation for Large Language Models: A Survey" Gao, Yunfan, et al. 2023).

Basic RAG

Mainstream RAG as defined today involves retrieving documents from an external knowledge base and passing these documents, along with the user's query, to an LLM for response generation. In other words, RAG consists of a retrieval component, an external knowledge base, and a generation component.

LlamaIndex basic RAG recipe:

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# load data
documents = SimpleDirectoryReader(input_dir="...").load_data()

# build VectorStoreIndex that takes care of chunking documents
# and encoding chunks to embeddings for future retrieval
index = VectorStoreIndex.from_documents(documents=documents)

# The QueryEngine class is equipped with the generator
# and facilitates the retrieval and generation steps
query_engine = index.as_query_engine()

# Use your Default RAG
response = query_engine.query("A user's query")

Success Requirements for RAG

For a RAG system to be deemed successful (in the sense of providing useful and relevant answers to user questions), there are really only two high-level requirements:

  1. Retrieval must be able to find the documents most relevant to the user's query.
  2. Generation must be able to make good use of the retrieved documents to sufficiently answer the user's query.
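The two requirements above can be sketched in a few lines of plain Python. The snippet below is only a toy illustration (hypothetical helpers, not part of any library): bag-of-words cosine similarity stands in for a real embedding model, and the retrieved documents would then be handed to the generator as context.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # toy stand-in for a real embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    # requirement 1: surface the documents most relevant to the query
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

docs = [
    "RAG systems retrieve documents from an external knowledge store",
    "Llamas are domesticated South American camelids",
]
top_docs = retrieve("how does RAG retrieve documents", docs)
# requirement 2: the generator must now answer using `top_docs` as context
```

A real system replaces `embed` with a neural embedding model and `docs` with chunks from a knowledge base, but the success requirements remain exactly these two steps.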

Advanced RAG

With the success requirements defined, we can say that building advanced RAG is really about applying more sophisticated techniques and strategies (to the retrieval or generation components) to ensure that these requirements are ultimately met. Additionally, we can categorize a sophisticated technique as one that, more or less independently of the others, addresses just one of the two high-level success requirements, or as one that addresses both simultaneously.

Advanced Techniques for Retrieval Must Be Able to Find the Documents Most Relevant to the User's Query

Below we briefly describe a couple of the more sophisticated techniques that help achieve the first success requirement.

  1. Chunk-size optimization: Since LLMs are restricted by context length, it is necessary to chunk documents when building the external knowledge base. Chunks that are too large or too small can create problems for the generation component, leading to inaccurate responses.

LlamaIndex chunk-size optimization recipe (notebook guide):

import numpy as np

from llama_index.evaluation import SemanticSimilarityEvaluator, BatchEvalRunner
from llama_index.evaluation.eval_utils import get_responses
from llama_index.param_tuner.base import ParamTuner, RunResult

### Recipe
### Perform hyperparameter tuning as in traditional ML via grid-search
### 1. Define an objective function that ranks different parameter combos
### 2. Build ParamTuner object
### 3. Execute hyperparameter tuning with ParamTuner.tune()

# 1. Define objective function
def objective_function(params_dict):
    chunk_size = params_dict["chunk_size"]
    docs = params_dict["docs"]
    top_k = params_dict["top_k"]
    eval_qs = params_dict["eval_qs"]
    ref_response_strs = params_dict["ref_response_strs"]

    # build RAG pipeline
    index = _build_index(chunk_size, docs)  # helper function not shown here
    query_engine = index.as_query_engine(similarity_top_k=top_k)
  
    # perform inference with the RAG pipeline on the provided questions `eval_qs`
    pred_response_objs = get_responses(
        eval_qs, query_engine, show_progress=True
    )

    # perform evaluations of predictions by comparing them to reference
    # responses `ref_response_strs`
    evaluator = SemanticSimilarityEvaluator(...)
    eval_batch_runner = BatchEvalRunner(
        {"semantic_similarity": evaluator}, workers=2, show_progress=True
    )
    eval_results = eval_batch_runner.evaluate_responses(
        eval_qs, responses=pred_response_objs, reference=ref_response_strs
    )

    # get semantic similarity metric
    mean_score = np.array(
        [r.score for r in eval_results["semantic_similarity"]]
    ).mean()

    return RunResult(score=mean_score, params=params_dict)

# 2. Build ParamTuner object
param_dict = {"chunk_size": [256, 512, 1024]}  # params/values to search over
fixed_param_dict = {  # fixed hyperparams
    "top_k": 2,
    "docs": docs,
    "eval_qs": eval_qs[:10],
    "ref_response_strs": ref_response_strs[:10],
}
param_tuner = ParamTuner(
    param_fn=objective_function,
    param_dict=param_dict,
    fixed_param_dict=fixed_param_dict,
    show_progress=True,
)

# 3. Execute hyperparameter search
results = param_tuner.tune()
best_result = results.best_run_result
best_chunk_size = results.best_run_result.params["chunk_size"]

2. Structured external knowledge: In complex scenarios, it might be necessary to build your external knowledge with more structure than a basic vector index, so as to allow for recursive retrieval or routed retrieval when dealing with sensibly separated external knowledge sources.

LlamaIndex recursive retrieval recipe (notebook guide):

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.node_parser import SentenceSplitter
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import RecursiveRetriever
from llama_index.schema import IndexNode

### Recipe
### Build a recursive retriever that retrieves using small chunks
### but passes associated larger chunks to the generation stage

# load data
documents = SimpleDirectoryReader(
  input_file="some_data_path/llama2.pdf"
).load_data()

# build parent chunks via NodeParser
node_parser = SentenceSplitter(chunk_size=1024)
base_nodes = node_parser.get_nodes_from_documents(documents)

# define smaller child chunks
sub_chunk_sizes = [256, 512]
sub_node_parsers = [
    SentenceSplitter(chunk_size=c, chunk_overlap=20) for c in sub_chunk_sizes
]
all_nodes = []
for base_node in base_nodes:
    for n in sub_node_parsers:
        sub_nodes = n.get_nodes_from_documents([base_node])
        sub_inodes = [
            IndexNode.from_text_node(sn, base_node.node_id) for sn in sub_nodes
        ]
        all_nodes.extend(sub_inodes)
    # also add the original (parent) node to the set of nodes
    original_node = IndexNode.from_text_node(base_node, base_node.node_id)
    all_nodes.append(original_node)

# define a VectorStoreIndex with all of the nodes
service_context = ServiceContext.from_defaults()  # default LLM and embed model
vector_index_chunk = VectorStoreIndex(
    all_nodes, service_context=service_context
)
vector_retriever_chunk = vector_index_chunk.as_retriever(similarity_top_k=2)

# build RecursiveRetriever
all_nodes_dict = {n.node_id: n for n in all_nodes}
retriever_chunk = RecursiveRetriever(
    "vector",
    retriever_dict={"vector": vector_retriever_chunk},
    node_dict=all_nodes_dict,
    verbose=True,
)

# build RetrieverQueryEngine using recursive_retriever
query_engine_chunk = RetrieverQueryEngine.from_args(
    retriever_chunk, service_context=service_context
)

# perform inference with advanced RAG (i.e. query engine)
response = query_engine_chunk.query(
    "Can you tell me about the key concepts for safety finetuning"
)

Other Useful Links

We have several guides demonstrating the application of other advanced techniques to help ensure accurate retrieval in complex cases. Here is a selection of them:

  1. Building external knowledge with knowledge graphs
  2. Performing hybrid retrieval with auto-retrievers
  3. Building fusion retrievers
  4. Fine-tuning the embedding models used in retrieval
  5. Transforming query embeddings (HyDE)
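To give a taste of one of these, the fusion retriever in item 3 combines ranked lists produced by multiple retrievers (e.g., vector search plus keyword search). Below is a minimal sketch of reciprocal rank fusion (RRF), a common merging strategy for this, using made-up document ids rather than the LlamaIndex API:

```python
def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    # each inner list is one retriever's results, best first; k dampens the
    # influence of any single high rank (60 is the conventional default)
    scores: dict[str, float] = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_results = ["doc_a", "doc_b", "doc_c"]   # from a vector retriever
keyword_results = ["doc_b", "doc_d", "doc_a"]  # from a keyword retriever
fused = reciprocal_rank_fusion([vector_results, keyword_results])
# "doc_b" comes out first: it appears near the top of both lists
```

Because RRF uses only ranks, not raw scores, it sidesteps the problem of combining incomparable similarity scales across retrievers.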

Advanced Techniques for Generation Must Be Able to Make Good Use of the Retrieved Documents

Similar to the previous section, we provide examples of sophisticated techniques under this category, which can be described as ensuring that the retrieved documents are well aligned with the LLM of the generator.

1. Information compression: Not only are LLMs restricted by context length, but response quality can also degrade if the retrieved documents carry too much noise (i.e., irrelevant information).

LlamaIndex information compression recipe (notebook guide):

from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.postprocessor import LongLLMLinguaPostprocessor

### Recipe
### Define a Postprocessor object, here LongLLMLinguaPostprocessor
### Build QueryEngine that uses this Postprocessor on retrieved docs

# Define Postprocessor
node_postprocessor = LongLLMLinguaPostprocessor(
    instruction_str="Given the context, please answer the final question",
    target_token=300,
    rank_method="longllmlingua",
    additional_compress_kwargs={
        "condition_compare": True,
        "condition_in_question": "after",
        "context_budget": "+100",
        "reorder_context": "sort",  # enable document reorder
    },
)

# Define VectorStoreIndex
documents = SimpleDirectoryReader(input_dir="...").load_data()
index = VectorStoreIndex.from_documents(documents)

# Define QueryEngine
retriever = index.as_retriever(similarity_top_k=2)
retriever_query_engine = RetrieverQueryEngine.from_args(
    retriever, node_postprocessors=[node_postprocessor]
)

# Use your advanced RAG
response = retriever_query_engine.query("A user query")

2. Result re-ranking: LLMs suffer from the so-called "lost in the middle" phenomenon, whereby they tend to focus on the beginning and end of a prompt. In light of this, it is beneficial to re-rank the retrieved documents before passing them to the generation component.

LlamaIndex re-ranking for better generation recipe (notebook guide):

import os
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.postprocessor.cohere_rerank import CohereRerank

### Recipe
### Define a Postprocessor object, here CohereRerank
### Build QueryEngine that uses this Postprocessor on retrieved docs

# Build CohereRerank post retrieval processor
api_key = os.environ["COHERE_API_KEY"]
cohere_rerank = CohereRerank(api_key=api_key, top_n=2)

# Build QueryEngine (RAG) using the post processor
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(documents=documents)
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[cohere_rerank],
)

# Use your advanced RAG
response = query_engine.query(
    "What did Sam Altman do in this essay?"
)

Advanced Techniques That Address Both Retrieval and Generation Success Requirements

In this subsection, we consider sophisticated methods that leverage the synergy between retrieval and generation in order to achieve both better retrieval and more accurate generated responses to user queries.

1. Generator-enhanced retrieval: These techniques make use of the LLM's inherent reasoning abilities to refine the user query before retrieval is performed, so as to better indicate what exactly is required to provide a useful response.

LlamaIndex generator-enhanced retrieval recipe (notebook guide):

from llama_index.llms import OpenAI
from llama_index.query_engine import FLAREInstructQueryEngine
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    ServiceContext,
)
### Recipe
### Build a FLAREInstructQueryEngine which has the generator LLM play
### a more active role in retrieval by prompting it to elicit retrieval
### instructions on what it needs to answer the user query.

# Build FLAREInstructQueryEngine
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
index = VectorStoreIndex.from_documents(documents)
index_query_engine = index.as_query_engine(similarity_top_k=2)
service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-4"))
flare_query_engine = FLAREInstructQueryEngine(
    query_engine=index_query_engine,
    service_context=service_context,
    max_iterations=7,
    verbose=True,
)

# Use your advanced RAG
response = flare_query_engine.query(
    "Can you tell me about the author's trajectory in the startup world?"
)

2. Iterative retrieval-generator RAG: For some complex cases, multi-step reasoning may be required to provide a useful and relevant answer to the user's query.

LlamaIndex iterative retrieval-generator recipe (notebook guide):

from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.query_engine import RetryQueryEngine
from llama_index.evaluation import RelevancyEvaluator

### Recipe
### Build a RetryQueryEngine which performs retrieval-generation cycles
### until it either achieves a passing evaluation or a max number of
### cycles has been reached

# Build RetryQueryEngine
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
index = VectorStoreIndex.from_documents(documents)
base_query_engine = index.as_query_engine()
# evaluator to critique retrieval-generation cycles
query_response_evaluator = RelevancyEvaluator()
retry_query_engine = RetryQueryEngine(
    base_query_engine, query_response_evaluator
)

# Use your advanced RAG
retry_response = retry_query_engine.query("A user query")

The Measurement Aspects of RAG

Evaluation of RAG systems is, of course, of utmost importance. In their survey paper, Gao, Yunfan, et al. indicate the 7 measurement aspects depicted in the top-right corner of the attached RAG cheat sheet. The llama-index library contains several evaluation abstractions as well as an integration with RAGAs to help builders gain an understanding, through the lens of these measurement aspects, of the degree to which their RAG system achieves the success requirements. Below, we list a few selected evaluation notebook guides.

  1. Answer relevancy and context relevancy
  2. Faithfulness
  3. Retrieval evaluation
  4. Batch evaluation with BatchEvalRunner
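To make the retrieval evaluation entry a bit more concrete, here is a toy sketch of two metrics commonly reported there, hit rate and MRR. These are illustrative helper functions written for this post, not the llama-index evaluation API:

```python
def hit_rate(results: list[list[str]], expected_ids: list[str], top_k: int = 2) -> float:
    # fraction of eval queries whose ground-truth doc shows up in the top-k
    hits = sum(
        1 for retrieved, expected in zip(results, expected_ids)
        if expected in retrieved[:top_k]
    )
    return hits / len(results)

def mean_reciprocal_rank(results: list[list[str]], expected_ids: list[str]) -> float:
    # average of 1/rank of the ground-truth doc (0 when it is missing)
    total = 0.0
    for retrieved, expected in zip(results, expected_ids):
        if expected in retrieved:
            total += 1.0 / (retrieved.index(expected) + 1)
    return total / len(results)

# one inner list of retrieved doc ids per eval query, best first
results = [["d1", "d2", "d3"], ["d4", "d2", "d1"], ["d5", "d6", "d7"]]
expected_ids = ["d1", "d2", "d9"]  # ground-truth doc for each query
```

On this toy data the hit rate is 2/3 (the third query never finds its ground-truth doc) and the MRR is 0.5 (ranks 1, 2, and a miss).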

You're Now Equipped to Build Advanced RAG

After reading this blog post, we hope you feel more equipped and confident to apply these sophisticated techniques to build advanced RAG systems!