Making LLMs More Powerful with Retrieval-Augmented Generation: A Hands-On Python Implementation
Selected from Towards Data Science
Author: Leonie Monigatti
Compiled by 机器之心
Editor: Panda
Ever since it became clear that large language models (LLMs) can be made more capable with proprietary data, people have been discussing how to effectively combine an LLM's general knowledge with that proprietary data. This has also fueled an ongoing debate: which is the better fit, fine-tuning or retrieval-augmented generation (RAG)?
The discussion hinges on two kinds of knowledge:
- Parametric knowledge: knowledge learned during training, stored implicitly in the weights of the neural network.
- Non-parametric knowledge: knowledge stored in an external knowledge source, such as a vector database.
The Retrieval-Augmented Generation (RAG) Workflow
1. Retrieve: The user query is used to retrieve relevant context from an external knowledge source. For this, an embedding model embeds the user query into the same vector space as the additional context in the vector database. This makes it possible to run a similarity search and return the k data objects in the vector database closest to the user query.
2. Augment: The user query and the retrieved additional context are then filled into a prompt template.
3. Generate: Finally, the retrieval-augmented prompt is fed to the LLM.

The sketch after this list summarizes these three steps.
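Here is a minimal sketch of that loop. The names vector_db, PROMPT_TEMPLATE, and llm are hypothetical placeholders standing in for the concrete Weaviate, LangChain, and OpenAI components built in the rest of this article:

# Hypothetical placeholders: vector_db, PROMPT_TEMPLATE, and llm stand in
# for the concrete components constructed later in this article.
def rag(query: str, vector_db, llm, k: int = 4) -> str:
    # Retrieve: similarity search over the external knowledge source
    context = vector_db.similarity_search(query, k=k)
    # Augment: fill the query and the retrieved context into a prompt template
    prompt = PROMPT_TEMPLATE.format(question=query, context=context)
    # Generate: feed the retrieval-augmented prompt to the LLM
    return llm(prompt)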
This tutorial requires the following Python packages:
- langchain, for orchestration
- openai, for the embedding model and the LLM
- weaviate-client, for the vector database
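They can be installed with pip:

pip install langchain openai weaviate-client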
In addition, define the relevant environment variables in a .env file in your root directory. You will need an OpenAI API key for the embedding model and the LLM:

OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
import dotenv

# Load the environment variables defined in the .env file
dotenv.load_dotenv()
Populating the vector database that serves as the external knowledge source takes three steps:
1. Collect and load the data.
2. Chunk the documents.
3. Embed the chunks and store them.
import requests
from langchain.document_loaders import TextLoader

# Download the raw text of President Biden's 2022 State of the Union address
url = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/modules/state_of_the_union.txt"
res = requests.get(url)
with open("state_of_the_union.txt", "w") as f:
    f.write(res.text)

# Load the text file as LangChain documents
loader = TextLoader('./state_of_the_union.txt')
documents = loader.load()
from langchain.text_splitter import CharacterTextSplitter

# Split the loaded documents into 500-character chunks with 50 characters of overlap
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = text_splitter.split_documents(documents)
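A quick, optional sanity check (not part of the original walkthrough) is to inspect how many chunks were produced:

# Optional: inspect the number of chunks
print(len(chunks))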
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import weaviate
from weaviate.embedded import EmbeddedOptions

# Start an embedded Weaviate instance (runs in-process; no separate server needed)
client = weaviate.Client(
    embedded_options=EmbeddedOptions()
)

# Embed each chunk with OpenAI's embedding model and store it in Weaviate
vectorstore = Weaviate.from_documents(
    client=client,
    documents=chunks,
    embedding=OpenAIEmbeddings(),
    by_text=False
)
# Expose the vector store as a retriever for the RAG pipeline
retriever = vectorstore.as_retriever()
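At this point you can test the retrieval step on its own. A small check, assuming the standard LangChain retriever interface of this version:

# Optional: fetch the chunks most similar to a sample query
docs = retriever.get_relevant_documents("What did the president say about Justice Breyer")
print(docs[0].page_content)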
from langchain.prompts import ChatPromptTemplate

# Prompt template with placeholders for the user question and the retrieved context
template = """You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
"""
prompt = ChatPromptTemplate.from_template(template)
print(prompt)
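To see how the placeholders get filled, you can format the template with dummy values (the values here are illustrative, not from the original article):

# Illustrative only: fill the template with dummy values
print(prompt.format(context="<retrieved chunks>", question="<user query>"))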
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser

# Use GPT-3.5 with temperature 0 for deterministic answers
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Chain the steps together: the retriever fills {context},
# RunnablePassthrough forwards the raw query into {question},
# and StrOutputParser turns the chat message into a plain string
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
# Run the full RAG pipeline on a sample question
query = "What did the president say about Justice Breyer"
rag_chain.invoke(query)

This returns the following answer:

"The president thanked Justice Breyer for his service and acknowledged his dedication to serving the country. The president also mentioned that he nominated Judge Ketanji Brown Jackson as a successor to continue Justice Breyer's legacy of excellence."
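Because rag_chain is a LangChain runnable, it also supports streaming. A sketch, assuming the standard runnable interface:

# Optional: stream the answer token by token instead of waiting for the full string
for token in rag_chain.stream(query):
    print(token, end="")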