add langchain example

This commit is contained in:
Jai Chopra
2023-05-05 16:00:14 -07:00
parent 62abe2d96f
commit 7e3db16225
3 changed files with 365 additions and 0 deletions

.gitignore

@@ -2,6 +2,7 @@
**/*.whl
*.egg-info
**/__pycache__
.DS_Store
rust/target
rust/Cargo.lock


@@ -0,0 +1,7 @@
# Code documentation Q&A bot with LangChain
## Use LanceDB's LangChain integration to build a Q&A bot for your documentation
<img id="splash" width="400" alt="langchain" src="https://user-images.githubusercontent.com/917119/236580868-61a246a9-e587-4c2b-8ae5-6fe5f7b7e81e.png">
This example is in a [notebook](https://github.com/lancedb/lancedb/blob/main/notebooks/code_qa_bot.ipynb)
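
The full walkthrough is in the notebook linked above; the heart of the integration is only a few lines. Here's a condensed sketch, assuming `documents` is a list of LangChain documents you've already loaded and that your OpenAI API key is set:

```python
import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

embeddings = OpenAIEmbeddings()

# Create a LanceDB table seeded with an example row, then index the docs into it
db = lancedb.connect("/tmp/lancedb")
table = db.create_table("pandas_docs", data=[
    {"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}
], mode="overwrite")
docsearch = LanceDB.from_documents(documents, embeddings, connection=table)

# Ask questions over the indexed documentation
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())
qa.run("What are the major differences in pandas 2.0?")
```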

notebooks/code_qa_bot.ipynb

@@ -0,0 +1,357 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "13cb272e",
"metadata": {},
"source": [
"# Code documentation Q&A bot example with LangChain\n",
"\n",
"This Q&A bot will allow you to query your own documentation easily using questions. We'll also demonstrate the use of LangChain and LanceDB using the OpenAI API. \n",
"\n",
"In this example we'll use Pandas 2.0 documentation, but, this could be replaced for your own docs as well"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "66638d6c",
"metadata": {},
"outputs": [],
"source": [
"!pip install --quiet openai langchain\n",
"!pip install --quiet -U lancedb"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d1cdcac3",
"metadata": {},
"source": [
"First, let's get some setup out of the way. As we're using the OpenAI API, ensure that you've set your key (and organization if needed):"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "58ee1868",
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"import os\n",
"\n",
"# Configuring the environment variable OPENAI_API_KEY\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" # OR set the key here as a variable\n",
" openai.api_key = \"sk-...\"\n",
" \n",
"assert len(openai.Model.list()[\"data\"]) > 0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "34f524d3",
"metadata": {},
"source": [
"# Loading in our code documentation, generating embeddings and storing our documents in LanceDB\n",
"\n",
"We're going to use the power of LangChain to help us create our Q&A bot. It comes with several APIs that can make our development much easier as well as a LanceDB integration for vectorstore."
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "b55d22f1",
"metadata": {},
"outputs": [],
"source": [
"import lancedb\n",
"import re\n",
"import pickle\n",
"from pathlib import Path\n",
"\n",
"from langchain.document_loaders import UnstructuredHTMLLoader\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.vectorstores import LanceDB\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import RetrievalQA"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6ccf9b2b",
"metadata": {},
"source": [
"You can download the Pandas documentation from https://pandas.pydata.org/docs/. To make sure we're not littering our repo with docs, we won't include it in the LanceDB repo, so download this and store it locally first."
]
},
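{
"attachments": {},
"cell_type": "markdown",
"id": "3f2a9c1e",
"metadata": {},
"source": [
"As a convenience, here's a minimal sketch of fetching and unpacking a docs archive with the standard library. The URL below is a placeholder, not an official download link, so point it at the actual archive for the version you want:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d7b8e2a",
"metadata": {},
"outputs": [],
"source": [
"import urllib.request\n",
"import zipfile\n",
"from pathlib import Path\n",
"\n",
"# Placeholder URL (assumption) -- replace with a real archive of the Pandas docs\n",
"DOCS_URL = \"https://example.com/pandas-docs-2.0.zip\"\n",
"\n",
"# Download and extract once; skip if the folder already exists\n",
"target = Path(\"pandas.documentation\")\n",
"if not target.exists():\n",
"    archive, _ = urllib.request.urlretrieve(DOCS_URL)\n",
"    with zipfile.ZipFile(archive) as zf:\n",
"        zf.extractall(target)"
]
},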
{
"attachments": {},
"cell_type": "markdown",
"id": "ae42496c",
"metadata": {},
"source": [
"We'll create a simple helper function that can help to extract metadata, so we can use this downstream when we're wanting to query with filters. In this case, we want to keep the lineage of the uri or path for each document that we process:"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "d171d062",
"metadata": {},
"outputs": [],
"source": [
"def get_document_title(document):\n",
" m = str(document.metadata[\"source\"])\n",
" title = re.findall(\"pandas.documentation(.*).html\", m)\n",
" if title[0] is not None:\n",
" return(title[0])\n",
" return ''"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "130162ad",
"metadata": {},
"source": [
"# Pre-processing and loading the documentation\n",
"\n",
"Next, let's pre-process and load the documentation. To make sure we don't need to do this repeatedly if we were updating code, we're caching it using pickle so we can retrieve it again (this could take a few minutes to run the first time yyou do it). We'll also add some more metadata to the docs here such as the title and version of the code:"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "33bfe7d8",
"metadata": {},
"outputs": [],
"source": [
"docs_path = Path(\"docs.pkl\")\n",
"docs = []\n",
"\n",
"if not docs_path.exists():\n",
" for p in Path(\"./pandas.documentation\").rglob(\"*.html\"):\n",
" if p.is_dir():\n",
" continue\n",
" loader = UnstructuredHTMLLoader(p)\n",
" raw_document = loader.load()\n",
" \n",
" m = {}\n",
" m[\"title\"] = get_document_title(raw_document[0])\n",
" m[\"version\"] = \"2.0rc0\"\n",
" raw_document[0].metadata = raw_document[0].metadata | m\n",
" raw_document[0].metadata[\"source\"] = str(raw_document[0].metadata[\"source\"])\n",
" docs = docs + raw_document\n",
"\n",
" with docs_path.open(\"wb\") as fh:\n",
" pickle.dump(docs, fh)\n",
"else:\n",
" with docs_path.open(\"rb\") as fh:\n",
" docs = pickle.load(fh)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c3852dd3",
"metadata": {},
"source": [
"# Generating emebeddings from our docs\n",
"\n",
"Now that we have our raw documents loaded, we need to pre-process them to generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "82230563",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000,\n",
" chunk_overlap=200,\n",
")\n",
"documents = text_splitter.split_documents(docs)\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "43e68215",
"metadata": {},
"source": [
"# Storing and querying with LanceDB\n",
"\n",
"Let's connect to LanceDB so we can store our documents. We'll create a Table to store them in:"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "74780a58",
"metadata": {},
"outputs": [],
"source": [
"db = lancedb.connect('/tmp/lancedb')\n",
"table = db.create_table(\"pandas_docs\", data=[\n",
" {\"vector\": embeddings.embed_query(\"Hello World\"), \"text\": \"Hello World\", \"id\": \"1\"}\n",
"], mode=\"overwrite\")\n",
"docsearch = LanceDB.from_documents(documents, embeddings, connection=table)"
]
},
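{
"attachments": {},
"cell_type": "markdown",
"id": "7c1f4b9d",
"metadata": {},
"source": [
"Before wiring up a full chain, it can be useful to sanity-check the vector store directly. `similarity_search` is part of LangChain's standard `VectorStore` interface, so this sketch should work against the LanceDB wrapper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e2a6c3f",
"metadata": {},
"outputs": [],
"source": [
"# Retrieve the raw nearest-neighbour chunks for a question\n",
"hits = docsearch.similarity_search(\"What are the major differences in pandas 2.0?\", k=3)\n",
"for doc in hits:\n",
"    # Print a short preview of each retrieved chunk\n",
"    print(doc.page_content[:100].replace(\"\\n\", \" \"))"
]
},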
{
"attachments": {},
"cell_type": "markdown",
"id": "3cb1dc5d",
"metadata": {},
"source": [
"Now let's create our RetrievalQA chain using the LanceDB vector store:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "6a5891ad",
"metadata": {},
"outputs": [],
"source": [
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=docsearch.as_retriever())"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "28d93b85",
"metadata": {},
"source": [
"And thats it! We're all setup. The next step is to run some queries, let's try a few:"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "70d88316",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The major differences in pandas 2.0 include installing optional dependencies with pip extras, the ability to use any numpy numeric dtype in an Index, and enhancements, notable bug fixes, backwards incompatible API changes, deprecations, and performance improvements.'"
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What are the major differences in pandas 2.0?\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "85a0397c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' 2.0.0rc0'"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What's the current version of pandas?\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "923f86c6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Optional dependencies can be installed with pip install \"pandas[all]\" or \"pandas[performance]\". This will install all recommended performance dependencies such as numexpr, bottleneck and numba.'"
]
},
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"How do I make use of installing optional dependencies?\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "02082f83",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" \\n\\nPandas 2.0 includes a number of API breaking changes, such as increased minimum versions for dependencies, the use of os.linesep for DataFrame.to_csv's line_terminator, and reorganization of the library. See the release notes for a full list of changes.\""
]
},
"execution_count": 53,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What are the backwards incompatible API changes in Pandas 2.0?\"\n",
"qa.run(query)"
]
},
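{
"attachments": {},
"cell_type": "markdown",
"id": "9b4d7f2c",
"metadata": {},
"source": [
"If you also want to see which pages an answer was grounded in, `RetrievalQA.from_chain_type` accepts a `return_source_documents` flag. A minimal sketch follows; note the chain is called with a dict here, since it returns multiple outputs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a5e8d3b",
"metadata": {},
"outputs": [],
"source": [
"qa_sources = RetrievalQA.from_chain_type(\n",
"    llm=OpenAI(),\n",
"    chain_type=\"stuff\",\n",
"    retriever=docsearch.as_retriever(),\n",
"    return_source_documents=True,\n",
")\n",
"result = qa_sources({\"query\": \"What are the major differences in pandas 2.0?\"})\n",
"print(result[\"result\"])\n",
"for doc in result[\"source_documents\"]:\n",
"    # Show the path of each source page used for the answer\n",
"    print(doc.metadata.get(\"source\"))"
]
},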
{
"cell_type": "code",
"execution_count": null,
"id": "75cea547",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}