# Welcome to LlamaIndex 🦙 !
LlamaIndex is a framework for building context-augmented generative AI applications with [LLMs](https://en.wikipedia.org/wiki/Large_language_model).
<div class="grid cards" markdown>
- <span style="font-size: 200%">[Introduction](#introduction)</span>
What is context augmentation? How does LlamaIndex help?
- <span style="font-size: 200%">[Use cases](#use-cases)</span>
What kind of apps can you build with LlamaIndex? Who should use it?
- <span style="font-size: 200%">[Getting started](#getting-started)</span>
Get started in Python or TypeScript in just 5 lines of code!
- <span style="font-size: 200%">[LlamaCloud](#llamacloud)</span>
Managed services for LlamaIndex including [LlamaParse](https://docs.cloud.llamaindex.ai/llamaparse/getting_started), the world's best document parser.
- <span style="font-size: 200%">[Community](#community)</span>
Get help and meet collaborators on Discord, Twitter, LinkedIn, and learn how to contribute to the project.
- <span style="font-size: 200%">[Related projects](#related-projects)</span>
Check out our library of connectors, readers, and other integrations at [LlamaHub](https://llamahub.ai) as well as demos and starter apps like [create-llama](https://www.npmjs.com/package/create-llama).
</div>
## Introduction
### What is context augmentation?
LLMs offer a natural language interface between humans and data. LLMs come pre-trained on huge amounts of publicly available data, but they are not trained on **your** data. Your data may be private or specific to the problem you're trying to solve. It's behind APIs, in SQL databases, or trapped in PDFs and slide decks.
Context augmentation makes your data available to the LLM to solve the problem at hand. LlamaIndex provides the tools to build any context-augmentation use case, from prototype to production. Our tools allow you to ingest, parse, index and process your data and quickly implement complex query workflows combining data access with LLM prompting.
The most popular example of context-augmentation is [Retrieval-Augmented Generation or RAG](./getting_started/concepts.md), which combines context with LLMs at inference time.
### LlamaIndex is the Data Framework for Context-Augmented LLM Apps
LlamaIndex imposes no restriction on how you use LLMs. You can use LLMs as auto-complete, chatbots, semi-autonomous agents, and more. It just makes using them easier. We provide tools like:
- **Data connectors** ingest your existing data from their native source and format. These could be APIs, PDFs, SQL, and (much) more.
- **Data indexes** structure your data in intermediate representations that are easy and performant for LLMs to consume.
- **Engines** provide natural language access to your data. For example:
- Query engines are powerful interfaces for question-answering (e.g. a RAG pipeline).
- Chat engines are conversational interfaces for multi-message, "back and forth" interactions with your data (see the sketch after this list).
- **Agents** are LLM-powered knowledge workers augmented by tools, from simple helper functions to API integrations and more.
- **Observability/Evaluation** integrations that enable you to rigorously experiment, evaluate, and monitor your app in a virtuous cycle.
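For instance, here is a minimal sketch of the difference between a query engine and a chat engine. It assumes an `OPENAI_API_KEY` is set and that a `data` folder with documents exists, as in the quickstart below:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# build an index over local documents
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# a query engine answers a single, self-contained question
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the documents in one sentence."))

# a chat engine keeps track of the conversation across turns
chat_engine = index.as_chat_engine()
print(chat_engine.chat("What topics do these documents cover?"))
print(chat_engine.chat("Which of those topics gets the most attention?"))
```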
## Use cases
Some popular use cases for LlamaIndex and context augmentation in general include:
- [Question-Answering](./use_cases/q_and_a/index.md) (Retrieval-Augmented Generation aka RAG)
- [Chatbots](./use_cases/chatbots.md)
- [Document Understanding and Data Extraction](./use_cases/extraction.md)
- [Autonomous Agents](./use_cases/agents.md) that can perform research and take actions
- [Multi-modal applications](./use_cases/multimodal.md) that combine text, images, and other data types
- [Fine-tuning](./use_cases/fine_tuning.md) models on data to improve performance
Check out our [use cases](./use_cases/index.md) documentation for more examples and links to tutorials.
### 👨👩👧👦 Who is LlamaIndex for?
LlamaIndex provides tools for beginners, advanced users, and everyone in between.
Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in 5 lines of code.
For more complex applications, our lower-level APIs allow advanced users to customize and extend any module—data connectors, indices, retrievers, query engines, reranking modules—to fit their needs.
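For example, here is a rough sketch of dropping down a level to customize the retriever behind a query engine; the parameter values are illustrative, and it assumes an `OPENAI_API_KEY` is set and a `data` folder exists:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.query_engine import RetrieverQueryEngine

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# customize retrieval: fetch more candidate chunks than the default
retriever = index.as_retriever(similarity_top_k=5)

# assemble a query engine around the customized retriever
query_engine = RetrieverQueryEngine.from_args(retriever)
print(query_engine.query("Some question about the data should go here"))
```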
## Getting Started
LlamaIndex is available in Python (these docs) and [TypeScript](https://ts.llamaindex.ai/). If you're not sure where to start, we recommend reading [how to read these docs](./getting_started/reading.md), which will point you to the right place based on your experience level.
### 30 second quickstart
Set an environment variable called `OPENAI_API_KEY` with an [OpenAI API key](https://platform.openai.com/api-keys). Install the Python library:
```bash
pip install llama-index
```
Put some documents in a folder called `data`, then ask questions about them with our famous 5-line starter:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```
If any part of this trips you up, don't worry! Check out our more comprehensive starter tutorials using [remote APIs like OpenAI](./getting_started/starter_example.md) or [any model that runs on your laptop](./getting_started/starter_example_local.md).
## LlamaCloud
If you're an enterprise developer, check out [**LlamaCloud**](https://llamaindex.ai/enterprise). It is an end-to-end managed service for data parsing, ingestion, indexing, and retrieval, allowing you to get production-quality data for your production LLM application. It's available both hosted on our servers or as a self-hosted solution.
### LlamaParse
LlamaParse is our state-of-the-art document parsing solution. It's available as part of LlamaCloud and also available as a self-serve API. You can [sign up](https://cloud.llamaindex.ai/) and parse up to 1000 pages/day for free, or enter a credit card for unlimited parsing. [Learn more](https://llamaindex.ai/enterprise).
## Community
Need help? Have a feature suggestion? Join the LlamaIndex community:
- [Twitter](https://twitter.com/llama_index)
- [Discord](https://discord.gg/dGcwcsnxhU)
- [LinkedIn](https://www.linkedin.com/company/llamaindex/)
### Getting the library
- LlamaIndex Python
- [LlamaIndex Python Github](https://github.com/run-llama/llama_index)
- [Python Docs](https://docs.llamaindex.ai/) (what you're reading now)
- [LlamaIndex on PyPi](https://pypi.org/project/llama-index/)
- LlamaIndex.TS (Typescript/Javascript package):
- [LlamaIndex.TS Github](https://github.com/run-llama/LlamaIndexTS)
- [TypeScript Docs](https://ts.llamaindex.ai/)
- [LlamaIndex.TS on npm](https://www.npmjs.com/package/llamaindex)
### Contributing
We are open-source and always welcome contributions to the project! Check out our [contributing guide](./CONTRIBUTING.md) for full details on how to extend the core library or add an integration to a third party like an LLM, a vector store, an agent tool and more.
## Related projects
There's more to the LlamaIndex universe! Check out some of our other projects:
- [LlamaHub](https://llamahub.ai) | A large (and growing!) collection of custom data connectors
- [SEC Insights](https://secinsights.ai) | A LlamaIndex-powered application for financial research
- [create-llama](https://www.npmjs.com/package/create-llama) | A CLI tool to quickly scaffold LlamaIndex projects
# Building an LLM application
Welcome to the beginning of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer new to LlamaIndex, this is the place to start.
## Key steps in building an LLM application
!!! tip
If you've already read our [high-level concepts](../getting_started/concepts.md) page you'll recognize several of these steps.
This tutorial has two main parts: **Building a RAG pipeline** and **Building an agent**, with some smaller sections before and after. Here's what to expect:
- **[Using LLMs](./using_llms/using_llms.md)**: hit the ground running by getting started working with LLMs. We'll show you how to use any of our [dozens of supported LLMs](../module_guides/models/llms/modules/), whether via remote API calls or running locally on your machine.
- **Building a RAG pipeline**: Retrieval-Augmented Generation (RAG) is a key technique for getting your data into an LLM, and a component of more sophisticated agentic systems. We'll show you how to build a full-featured RAG pipeline that can answer questions about your data. This includes:
- **[Loading & Ingestion](./loading/loading.md)**: Getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).
- **[Indexing and Embedding](./indexing/indexing.md)**: Once you've got your data, there are an infinite number of ways to structure access to that data to ensure your application is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.
- **[Storing](./storing/storing.md)**: You will probably find it more efficient to store your data in indexed form, or pre-processed summaries provided by an LLM, often in a specialized database known as a `Vector Store` (see below). You can also store your indexes, metadata and more.
- **[Querying](./querying/querying.md)**: Every indexing strategy has a corresponding querying strategy and there are lots of ways to improve the relevance, speed and accuracy of what you retrieve and what the LLM does with it before returning it to you, including turning it into structured responses such as an API.
- **Building an agent**: agents are LLM-powered knowledge workers that can interact with the world via a set of tools. Those tools can be RAG engines such as you learned how to build in the previous section, or any arbitrary code. This tutorial includes:
- **[Building a basic agent](./agent/basic_agent.md)**: We show you how to build a simple agent that can interact with the world via a set of tools.
- **[Using local models with agents](./agent/local_models.md)**: Agents can be built to use local models, which can be important for performance or privacy reasons.
- **[Adding RAG to an agent](./agent/rag_agent.md)**: The RAG pipelines you built in the previous tutorial can be used as a tool by an agent, giving your agent powerful information-retrieval capabilities.
- **[Adding other tools](./agent/tools.md)**: Let's add more sophisticated tools to your agent, such as API integrations.
- **[Putting it all together](./putting_it_all_together/index.md)**: whether you are building question & answering, chatbots, an API, or an autonomous agent, we show you how to get your application into production.
- **[Tracing and debugging](./tracing_and_debugging/tracing_and_debugging.md)**: also called **observability**, it's especially important with LLM applications to be able to look into the inner workings of what's going on to help you debug problems and spot places to improve.
- **[Evaluating](./evaluating/evaluating.md)**: every strategy has pros and cons and a key part of building, shipping and evolving your application is evaluating whether your change has improved your application in terms of accuracy, performance, clarity, cost and more. Reliably evaluating your changes is a crucial part of LLM application development.
## Let's get started!
Ready to dive in? Head to [using LLMs](./using_llms/using_llms.md).
# Privacy and Security
By default, LlamaIndex sends your data to OpenAI for generating embeddings and natural language responses. However, this behavior can be configured according to your preferences. LlamaIndex provides the flexibility to use your own embedding model or to run a large language model locally if desired.
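For example, here is a rough sketch of keeping both embeddings and LLM calls on your own machine. It assumes you have installed the `llama-index-embeddings-huggingface` and `llama-index-llms-ollama` integration packages and have Ollama running locally:
```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# embeddings are computed locally instead of being sent to OpenAI
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# responses are generated by a locally running model via Ollama
Settings.llm = Ollama(model="llama2", request_timeout=60.0)
```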
## Data Privacy
Regarding data privacy, when using LlamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. Each custom service other than OpenAI has its own policies as well.
## Vector stores
LlamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LlamaIndex does not assume responsibility for how they handle or use your data. By default, LlamaIndex stores your embeddings locally.
# Using LLMs
!!! tip
For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).
One of the first steps when building an LLM-based application is deciding which LLM to use; you can also use more than one if you wish.
LLMs are used at multiple different stages of your pipeline:
- During **Indexing** you may use an LLM to determine the relevance of data (whether to index it at all) or you may use an LLM to summarize the raw data and index the summaries instead.
- During **Querying** LLMs can be used in two ways:
- During **Retrieval** (fetching data from your index) LLMs can be given an array of options (such as multiple different indices) and make decisions about where best to find the information you're looking for. An agentic LLM can also use _tools_ at this stage to query different data sources.
- During **Response Synthesis** (turning the retrieved data into an answer) an LLM can combine answers to multiple sub-queries into a single coherent answer, or it can transform data, such as from unstructured text to JSON or another programmatic output format.
LlamaIndex provides a single interface to a large number of different LLMs, allowing you to pass in any LLM you choose to any stage of the pipeline. It can be as simple as this:
```python
from llama_index.llms.openai import OpenAI
response = OpenAI().complete("Paul Graham is ")
print(response)
```
Usually, you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
```python
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
Settings.llm = OpenAI(temperature=0.2, model="gpt-4")
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
documents,
)
```
In this case, you've instantiated OpenAI and customized it to use the `gpt-4` model instead of the default `gpt-3.5-turbo`, and also modified the `temperature`. The `VectorStoreIndex` will now use gpt-4 to answer questions when querying.
!!! tip
The `Settings` is a bundle of configuration data that you pass into different parts of LlamaIndex. You can [learn more about Settings](../../module_guides/supporting_modules/settings.md) and how to customize it.
## Available LLMs
We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our [module guide to LLMs](../../module_guides/models/llms.md) for a full list, including how to run a local model.
!!! tip
A general note on privacy and LLMs can be found on the [privacy page](./privacy.md).
### Using a local LLM
LlamaIndex doesn't just support hosted LLM APIs; you can also [run a model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
For example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:
```python
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
Settings.llm = Ollama(model="llama2", request_timeout=60.0)
```
See the [custom LLM's How-To](../../module_guides/models/llms/usage_custom.md) for more details.
## Prompts
By default LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](../../module_guides/models/prompts/index.md).
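As a rough sketch, you can override the question-answering prompt on a query engine; the template text below is illustrative, and it assumes a `query_engine` was created earlier (for example via `index.as_query_engine()`):
```python
from llama_index.core import PromptTemplate

# {context_str} and {query_str} are filled in by LlamaIndex at query time
qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using only the context above.\n"
    "Query: {query_str}\n"
    "Answer: "
)

query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_prompt}
)
```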
# LlamaHub
Our data connectors are offered through [LlamaHub](https://llamahub.ai/) 🦙.
LlamaHub contains a registry of open-source data connectors that you can easily plug into any LlamaIndex application (+ Agent Tools, and Llama Packs).
![](../../_static/data_connectors/llamahub.png)
## Usage Pattern
Get started with:
```python
from llama_index.core import download_loader
from llama_index.readers.google import GoogleDocsReader
loader = GoogleDocsReader()
documents = loader.load_data(document_ids=[...])
```
## Built-in connector: SimpleDirectoryReader
`SimpleDirectoryReader` can parse a wide range of file types including `.md`, `.pdf`, `.jpg`, `.png`, `.docx`, as well as audio and video. It is available directly as part of LlamaIndex:
```python
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data").load_data()
```
## Available connectors
Browse [LlamaHub](https://llamahub.ai/) directly to see the hundreds of connectors available, including:
- [Notion](https://developers.notion.com/) (`NotionPageReader`)
- [Google Docs](https://developers.google.com/docs/api) (`GoogleDocsReader`)
- [Slack](https://api.slack.com/) (`SlackReader`)
- [Discord](https://discord.com/developers/docs/intro) (`DiscordReader`)
- [Apify Actors](https://llamahub.ai/l/apify-actor) (`ApifyActor`), which can crawl the web, scrape webpages, extract text content, and download files including `.pdf`, `.jpg`, `.png`, `.docx`, etc.
# Loading Data (Ingestion)
Before your chosen LLM can act on your data, you first need to process the data and load it. This has parallels to data cleaning/feature engineering pipelines in the ML world, or ETL pipelines in the traditional data setting.
This ingestion pipeline typically consists of three main stages:
1. Load the data
2. Transform the data
3. Index and store the data
We cover indexing/storage in [future](../indexing/indexing.md) [sections](../storing/storing.md). In this guide we'll mostly talk about loaders and transformations.
## Loaders
Before your chosen LLM can act on your data you need to load it. The way LlamaIndex does this is via data connectors, also called `Reader`. Data connectors ingest data from different data sources and format the data into `Document` objects. A `Document` is a collection of data (currently text, and in future, images and audio) and metadata about that data.
### Loading using SimpleDirectoryReader
The easiest reader to use is our SimpleDirectoryReader, which creates documents out of every file in a given directory. It is built in to LlamaIndex and can read a variety of formats including Markdown, PDFs, Word documents, PowerPoint decks, images, audio and video.
```python
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data").load_data()
```
### Using Readers from LlamaHub
Because there are so many possible places to get data, they are not all built-in. Instead, you download them from our registry of data connectors, [LlamaHub](llamahub.md).
In this example LlamaIndex downloads and installs the connector called [DatabaseReader](https://llamahub.ai/l/readers/llama-index-readers-database), which runs a query against a SQL database and returns every row of the results as a `Document`:
```python
import os
from llama_index.core import download_loader
from llama_index.readers.database import DatabaseReader
reader = DatabaseReader(
scheme=os.getenv("DB_SCHEME"),
host=os.getenv("DB_HOST"),
port=os.getenv("DB_PORT"),
user=os.getenv("DB_USER"),
password=os.getenv("DB_PASS"),
dbname=os.getenv("DB_NAME"),
)
query = "SELECT * FROM users"
documents = reader.load_data(query=query)
```
There are hundreds of connectors to use on [LlamaHub](https://llamahub.ai)!
### Creating Documents directly
Instead of using a loader, you can also use a Document directly.
```python
from llama_index.core import Document
doc = Document(text="text")
```
## Transformations
After the data is loaded, you then need to process and transform your data before putting it into a storage system. These transformations include chunking, extracting metadata, and embedding each chunk. This is necessary to make sure that the data can be retrieved, and used optimally by the LLM.
Transformation input/outputs are `Node` objects (a `Document` is a subclass of a `Node`). Transformations can also be stacked and reordered.
We have both a high-level and lower-level API for transforming documents.
### High-Level Transformation API
Indexes have a `.from_documents()` method which accepts an array of Document objects and will correctly parse and chunk them up. However, sometimes you will want greater control over how your documents are split up.
```python
from llama_index.core import VectorStoreIndex
vector_index = VectorStoreIndex.from_documents(documents)
query_engine = vector_index.as_query_engine()
```
Under the hood, this splits your Document into Node objects, which are similar to Documents (they contain text and metadata) but have a relationship to their parent Document.
If you want to customize core components, like the text splitter, through this abstraction you can pass in a custom `transformations` list or apply to the global `Settings`:
```python
from llama_index.core.node_parser import SentenceSplitter
text_splitter = SentenceSplitter(chunk_size=512, chunk_overlap=10)
# global
from llama_index.core import Settings
Settings.text_splitter = text_splitter
# per-index
index = VectorStoreIndex.from_documents(
documents, transformations=[text_splitter]
)
```
### Lower-Level Transformation API
You can also define these steps explicitly.
You can do this by either using our transformation modules (text splitters, metadata extractors, etc.) as standalone components, or compose them in our declarative [Transformation Pipeline interface](../../module_guides/loading/ingestion_pipeline/index.md).
Let's walk through the steps below.
#### Splitting Your Documents into Nodes
A key step to process your documents is to split them into "chunks"/Node objects. The key idea is to process your data into bite-sized pieces that can be retrieved / fed to the LLM.
LlamaIndex has support for a wide range of [text splitters](../../module_guides/loading/node_parsers/modules.md), ranging from paragraph/sentence/token based splitters to file-based splitters for formats like HTML and JSON.
These can be [used on their own or as part of an ingestion pipeline](../../module_guides/loading/node_parsers/index.md).
```python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import TokenTextSplitter
documents = SimpleDirectoryReader("./data").load_data()
pipeline = IngestionPipeline(transformations=[TokenTextSplitter(), ...])
nodes = pipeline.run(documents=documents)
```
### Adding Metadata
You can also choose to add metadata to your documents and nodes. This can be done either manually or with [automatic metadata extractors](../../module_guides/loading/documents_and_nodes/usage_metadata_extractor.md).
Here are guides on 1) [how to customize Documents](../../module_guides/loading/documents_and_nodes/usage_documents.md), and 2) [how to customize Nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
```python
document = Document(
text="text",
metadata={"filename": "<doc_file_name>", "category": "<category>"},
)
```
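As a rough sketch, automatic extraction can run inside an ingestion pipeline; this example uses a title extractor (which makes LLM calls, so an LLM must be configured) and assumes `documents` were loaded as above:
```python
from llama_index.core.extractors import TitleExtractor
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=10),
        TitleExtractor(),  # adds a document title entry to each node's metadata
    ]
)
nodes = pipeline.run(documents=documents)
```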
### Adding Embeddings
To insert a node into a vector index, it should have an embedding. See our [ingestion pipeline](../../module_guides/loading/ingestion_pipeline/index.md) or our [embeddings guide](../../module_guides/models/embeddings.md) for more details.
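As a rough sketch, the embedding model can simply be the last transformation in an ingestion pipeline, so every node comes out with a vector attached; this assumes the OpenAI embedding model and an `OPENAI_API_KEY`, plus `documents` loaded as above:
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

# run the embedding model last so each chunked node gets an embedding
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(), embed_model]
)
nodes = pipeline.run(documents=documents)

# the embedded nodes can go straight into a vector index
index = VectorStoreIndex(nodes, embed_model=embed_model)
```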
### Creating and passing Nodes directly
If you want to, you can create nodes directly and pass a list of Nodes directly to an indexer:
```python
from llama_index.core.schema import TextNode
node1 = TextNode(text="<text_chunk>", id_="<node_id>")
node2 = TextNode(text="<text_chunk>", id_="<node_id>")
index = VectorStoreIndex([node1, node2])
```
# Evaluating
Evaluation and benchmarking are crucial concepts in LLM development. To improve the performance of an LLM app (RAG, agents), you must have a way to measure it.
LlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality. You can learn more about how evaluation works in LlamaIndex in our [module guides](../../module_guides/evaluating/index.md).
## Response Evaluation
Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelines? Here's a simple example that evaluates a single response for Faithfulness, i.e. whether the response is aligned to the context, such as being free from hallucinations:
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
vector_index = VectorStoreIndex(...)
# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
"What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
```
The response contains both the response and the source from which the response was generated; the evaluator compares them and determines if the response is faithful to the source.
You can learn more in our module guides about [response evaluation](../../module_guides/evaluating/usage_pattern.md).
## Retrieval Evaluation
Are the retrieved sources relevant to the query? This is a simple example that evaluates a single retrieval:
```python
from llama_index.core.evaluation import RetrieverEvaluator
# define retriever somewhere (e.g. from index)
# retriever = index.as_retriever(similarity_top_k=2)
retriever = ...
retriever_evaluator = RetrieverEvaluator.from_metric_names(
["mrr", "hit_rate"], retriever=retriever
)
retriever_evaluator.evaluate(
query="query", expected_ids=["node_id1", "node_id2"]
)
```
This compares what was retrieved for the query to a set of nodes that were expected to be retrieved.
In reality you would want to evaluate a whole batch of retrievals; you can learn how to do this in our module guide on [retrieval evaluation](../../module_guides/evaluating/usage_pattern_retrieval.md).
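As a rough sketch using only the API above (the queries and node IDs are hypothetical), a small batch can be evaluated with a plain loop:
```python
# hypothetical labelled examples: each query mapped to the node IDs it should retrieve
labelled_examples = [
    ("query one", ["node_id1", "node_id2"]),
    ("query two", ["node_id3"]),
]

results = []
for query, expected_ids in labelled_examples:
    result = retriever_evaluator.evaluate(query=query, expected_ids=expected_ids)
    results.append(result)

# each result reports the metrics requested above (MRR and hit rate)
for result in results:
    print(result)
```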
## Related concepts
You may be interested in [analyzing the cost of your application](cost_analysis/index.md) if you are making calls to a hosted, remote LLM.
# Usage Pattern
## Estimating LLM and Embedding Token Counts
In order to measure LLM and Embedding token counts, you'll need to
1. Setup `MockLLM` and `MockEmbedding` objects
```python
from llama_index.core.llms import MockLLM
from llama_index.core import MockEmbedding
llm = MockLLM(max_tokens=256)
embed_model = MockEmbedding(embed_dim=1536)
```
2. Setup the `TokenCountingHandler` callback
```python
import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
token_counter = TokenCountingHandler(
tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
callback_manager = CallbackManager([token_counter])
```
3. Add them to the global `Settings`
```python
from llama_index.core import Settings
Settings.llm = llm
Settings.embed_model = embed_model
Settings.callback_manager = callback_manager
```
4. Construct an Index
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader(
"./docs/examples/data/paul_graham"
).load_data()
index = VectorStoreIndex.from_documents(documents)
```
5. Measure the counts!
```python
print(
"Embedding Tokens: ",
token_counter.total_embedding_token_count,
"\n",
"LLM Prompt Tokens: ",
token_counter.prompt_llm_token_count,
"\n",
"LLM Completion Tokens: ",
token_counter.completion_llm_token_count,
"\n",
"Total LLM Token Count: ",
token_counter.total_llm_token_count,
"\n",
)
# reset counts
token_counter.reset_counts()
```
6. Run a query, measure again
```python
query_engine = index.as_query_engine()
response = query_engine.query("query")
print(
"Embedding Tokens: ",
token_counter.total_embedding_token_count,
"\n",
"LLM Prompt Tokens: ",
token_counter.prompt_llm_token_count,
"\n",
"LLM Completion Tokens: ",
token_counter.completion_llm_token_count,
"\n",
"Total LLM Token Count: ",
token_counter.total_llm_token_count,
"\n",
)
```
# Cost Analysis
## Concept
Each call to an LLM will cost some amount of money - for instance, OpenAI's gpt-3.5-turbo costs $0.002 / 1k tokens. The cost of building an index and querying depends on
- the type of LLM used
- the type of data structure used
- parameters used during building
- parameters used during querying
The cost of building and querying each index is a TODO in the reference documentation. In the meantime, we provide the following information:
1. A high-level overview of the cost structure of the indices.
2. A token predictor that you can use directly within LlamaIndex!
### Overview of Cost Structure
#### Indices with no LLM calls
The following indices don't require LLM calls at all during building (0 cost):
- `SummaryIndex`
- `SimpleKeywordTableIndex` - uses a regex keyword extractor to extract keywords from each document
- `RAKEKeywordTableIndex` - uses a RAKE keyword extractor to extract keywords from each document
#### Indices with LLM calls
The following indices do require LLM calls during build time:
- `TreeIndex` - use LLM to hierarchically summarize the text to build the tree
- `KeywordTableIndex` - use LLM to extract keywords from each document
### Query Time
There will always be >= 1 LLM call during query time, in order to synthesize the final answer.
Some indices contain cost tradeoffs between index building and querying. `SummaryIndex`, for instance,
is free to build, but running a query over a summary index (without filtering or embedding lookups) will call the LLM `N` times.
Here are some notes regarding each of the indices:
- `SummaryIndex`: by default requires `N` LLM calls, where `N` is the number of nodes.
- `TreeIndex`: by default requires `log(N)` LLM calls, where `N` is the number of leaf nodes.
- Setting `child_branch_factor=2` will be more expensive than the default `child_branch_factor=1` (polynomial vs logarithmic), because we traverse 2 children instead of just 1 for each parent node.
- `KeywordTableIndex`: by default requires an LLM call to extract query keywords.
- Can do `index.as_retriever(retriever_mode="simple")` or `index.as_retriever(retriever_mode="rake")` to also use regex/RAKE keyword extractors on your query text.
- `VectorStoreIndex`: by default, requires one LLM call per query. If you increase the `similarity_top_k` or `chunk_size`, or change the `response_mode`, then this number will increase.
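For example, given an existing `index`, here is a rough sketch of query-time parameters that increase how much context each `VectorStoreIndex` query sends to the LLM; the values are illustrative:
```python
# retrieving more chunks and summarizing them hierarchically both increase
# the number of tokens (and potentially LLM calls) per query
query_engine = index.as_query_engine(
    similarity_top_k=5,
    response_mode="tree_summarize",
)
response = query_engine.query("Some question about the data should go here")
```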
## Usage Pattern
LlamaIndex offers token **predictors** to predict token usage of LLM and embedding calls.
This allows you to estimate your costs during 1) index construction, and 2) index querying, before
any respective LLM calls are made.
Tokens are counted using the `TokenCountingHandler` callback. See the [example notebook](../../../examples/callbacks/TokenCountingHandler.ipynb) for details on the setup.
### Using MockLLM
To predict token usage of LLM calls, import and instantiate the MockLLM as shown below. The `max_tokens` parameter is used as a "worst case" prediction, where each LLM response will contain exactly that number of tokens. If `max_tokens` is not specified, then it will simply predict back the prompt.
```python
from llama_index.core.llms import MockLLM
from llama_index.core import Settings
# use a mock llm globally
Settings.llm = MockLLM(max_tokens=256)
```
You can then use this predictor during both index construction and querying.
### Using MockEmbedding
You may also predict the token usage of embedding calls with `MockEmbedding`.
```python
from llama_index.core import MockEmbedding
from llama_index.core import Settings
# use a mock embedding globally
Settings.embed_model = MockEmbedding(embed_dim=1536)
```
## Usage Pattern
Read about the [full usage pattern](./usage_pattern.md) for more details!
# Agents with local models
If you're happy using OpenAI or another remote model, you can skip this section, but many people are interested in using models they run themselves. The easiest way to do this is via the great work of our friends at [Ollama](https://ollama.com/), who provide a simple to use client that will download, install and run a [growing range of models](https://ollama.com/library) for you.
## Install Ollama
They provide a one-click installer for Mac, Linux and Windows on their [home page](https://ollama.com/).
## Pick and run a model
Since we're going to be doing agentic work, we'll need a very capable model, but the largest models are hard to run on a laptop. We think `mixtral:8x7b` is a good balance between power and resources, but `llama3` is another great option. You can run Mixtral by running
```bash
ollama run mixtral:8x7b
```
The first time you run, it will also automatically download and install the model for you, which can take a while.
## Switch to local agent
To switch to Mixtral, you'll need to bring in the Ollama integration:
```bash
pip install llama-index-llms-ollama
```
Then modify your imports to bring in Ollama instead of OpenAI:
```python
from llama_index.llms.ollama import Ollama
```
And finally initialize Mixtral as your LLM instead:
```python
llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
```
## Ask the question again
```python
response = agent.chat("What is 20+(2*4)? Calculate step by step.")
```
The exact output looks different from OpenAI (it makes a mistake the first time it tries), but Mixtral gets the right answer:
```
Thought: The current language of the user is: English. The user wants to calculate the value of 20+(2*4). I need to break down this task into subtasks and use appropriate tools to solve each subtask.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Thought: The user has calculated the multiplication part of the expression, which is (2*4), and got 8 as a result. Now I need to add this value to 20 by using the 'add' tool.
Action: add
Action Input: {'a': 20, 'b': 8}
Observation: 28
Thought: The user has calculated the sum of 20+(2*4) and got 28 as a result. Now I can answer without using any more tools.
Answer: The solution to the expression 20+(2*4) is 28.
The solution to the expression 20+(2*4) is 28.
```
Check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/2_local_agent.py) to see what this final code looks like.
You can now continue the rest of the tutorial with a local model if you prefer. We'll keep using OpenAI as we move on to [adding RAG to your agent](./rag_agent.md).
# Adding RAG to an agent
To demonstrate using RAG engines as a tool in an agent, we're going to create a very simple RAG query engine. Our source data is going to be the [Wikipedia page about the 2023 Canadian federal budget](https://en.wikipedia.org/wiki/2023_Canadian_federal_budget) that we've [printed as a PDF](https://www.dropbox.com/scl/fi/rop435rax7mn91p3r8zj3/2023_canadian_budget.pdf?rlkey=z8j6sab5p6i54qa9tr39a43l7&dl=0).
## Bring in new dependencies
To read the PDF and index it, we'll need a few new dependencies. They were installed along with the rest of LlamaIndex, so we just need to import them:
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
```
## Add LLM to settings
We were previously passing the LLM directly, but now we need to use it in multiple places, so we'll add it to the global settings.
```python
Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
```
Place this line near the top of the file; you can delete the other `llm` assignment.
## Load and index documents
We'll now do 3 things in quick succession: we'll load the PDF from a folder called "data", index and embed it using the `VectorStoreIndex`, and then create a query engine from that index:
```python
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
```
We can run a quick smoke-test to make sure the engine is working:
```python
response = query_engine.query(
"What was the total amount of the 2023 Canadian federal budget?"
)
print(response)
```
The response is fast:
```
The total amount of the 2023 Canadian federal budget was $496.9 billion.
```
## Add a query engine tool
This requires one more import:
```python
from llama_index.core.tools import QueryEngineTool
```
Now we turn our query engine into a tool by supplying the appropriate metadata (for the Python functions, this was automatically extracted, so we didn't need to add it):
```python
budget_tool = QueryEngineTool.from_defaults(
query_engine,
name="canadian_budget_2023",
description="A RAG engine with some basic facts about the 2023 Canadian federal budget.",
)
```
We modify our agent by adding this engine to our array of tools (we also remove the `llm` parameter, since it's now provided by settings):
```python
agent = ReActAgent.from_tools(
[multiply_tool, add_tool, budget_tool], verbose=True
)
```
## Ask a question using multiple tools
This is kind of a silly question; we'll ask something more useful later:
```python
response = agent.chat(
"What is the total amount of the 2023 Canadian federal budget multiplied by 3? Go step by step, using a tool to do any math."
)
print(response)
```
We get a perfect answer:
```
Thought: The current language of the user is English. I need to use the tools to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'total'}
Observation: $496.9 billion
Thought: I need to multiply the total amount of the 2023 Canadian federal budget by 3.
Action: multiply
Action Input: {'a': 496.9, 'b': 3}
Observation: 1490.6999999999998
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.
The total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.
```
As usual, you can check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/3_rag_agent.py) to see this code all together.
Excellent! Your agent can now use any arbitrarily advanced query engine to help answer questions. You can also add as many different RAG engines as you need to consult different data sources. Next, we'll look at how we can answer more advanced questions [using LlamaParse](./llamaparse.md).
# Enhancing with LlamaParse
In the previous example we asked a very basic question of our document, about the total amount of the budget. Let's instead ask a more complicated question about a specific fact in the document:
```python
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response)
```
We unfortunately get an unhelpful answer:
```
The budget allocated funds to a new green investments tax credit, but the exact amount was not specified in the provided context information.
```
This is bad, because we happen to know the exact number is in the document! But the PDF is complicated, with tables and multi-column layout, and the LLM is missing the answer. Luckily, we can use LlamaParse to help us out.
First, you need a LlamaCloud API key. You can [get one for free](https://cloud.llamaindex.ai/) by signing up for LlamaCloud. Then put it in your `.env` file just like your OpenAI key:
```bash
LLAMA_CLOUD_API_KEY=llx-xxxxx
```
Now you're ready to use LlamaParse in your code. Let's bring it in as an import:
```python
from llama_parse import LlamaParse
```
And let's put in a second attempt to parse and query the file (note that this uses `documents2`, `index2`, etc.) and see if we get a better answer to the exact same question:
```python
documents2 = LlamaParse(result_type="markdown").load_data(
"./data/2023_canadian_budget.pdf"
)
index2 = VectorStoreIndex.from_documents(documents2)
query_engine2 = index2.as_query_engine()
response2 = query_engine2.query(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response2)
```
We do!
```
$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
```
You can always check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/4_llamaparse.py) to see what this code looks like.
As you can see, parsing quality makes a big difference to what the LLM can understand, even for relatively simple questions. Next let's see how [memory](./memory.md) can help us with more complex questions.
# Memory
We've now made several additions and subtractions to our code. To make it clear what we're using, you can see [the current code for our agent](https://github.com/run-llama/python-agents-tutorial/blob/main/5_memory.py) in the repo. It's using OpenAI for the LLM and LlamaParse to enhance parsing.
We've also added 3 questions in a row. Let's see how the agent handles them:
```python
response = agent.chat(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response)
response = agent.chat(
"How much was allocated to a implement a means-tested dental care program in the 2023 Canadian federal budget?"
)
print(response)
response = agent.chat(
"How much was the total of those two allocations added together? Use a tool to answer any questions."
)
print(response)
```
This is demonstrating a powerful feature of agents in LlamaIndex: memory. Let's see what the output looks like:
```
Started parsing the file under job_id cac11eca-45e0-4ea9-968a-25f1ac9b8f99
Thought: The current language of the user is English. I need to use a tool to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'How much was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?'}
Observation: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'How much was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget?'}
Observation: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
$13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: add
Action Input: {'a': 20, 'b': 13}
Observation: 33
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.
The total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.
```
The agent remembers that it already has the budget allocations from previous questions, and can answer a contextual question like "add those two allocations together" without needing to specify which allocations exactly. It even correctly uses the other addition tool to sum up the numbers.
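Under the hood, the agent keeps a buffer of the conversation so far. As a rough sketch (the token limit is illustrative, and the tools are the ones defined earlier in this tutorial), you can also configure that memory explicitly when creating the agent:
```python
from llama_index.core.memory import ChatMemoryBuffer

# keep roughly the last 4000 tokens of conversation as context for the agent
memory = ChatMemoryBuffer.from_defaults(token_limit=4000)

agent = ReActAgent.from_tools(
    [multiply_tool, add_tool, budget_tool], memory=memory, verbose=True
)
```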
Having demonstrated how memory helps, let's [add some more complex tools](./tools.md) to our agent.
# Adding other tools
Now that you've built a capable agent, we hope you're excited about all it can do. The core of expanding agent capabilities is the tools available, and we have good news: [LlamaHub](https://llamahub.ai) from LlamaIndex has hundreds of integrations, including [dozens of existing agent tools](https://llamahub.ai/?tab=tools) that you can use right away. We'll show you how to use one of the existing tools, and also how to build and contribute your own.
## Using an existing tool from LlamaHub
For our example, we're going to use the [Yahoo Finance tool](https://llamahub.ai/l/tools/llama-index-tools-yahoo-finance?from=tools) from LlamaHub. It provides a set of six agent tools that look up a variety of information about stock ticker symbols.
First we need to install the tool:
```bash
pip install llama-index-tools-yahoo-finance
```
Then we can set up our dependencies. This is exactly the same as our previous examples, except for the final import:
```python
from dotenv import load_dotenv
load_dotenv()
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from llama_index.core import Settings
from llama_index.tools.yahoo_finance import YahooFinanceToolSpec
```
To show how custom tools and LlamaHub tools can work together, we'll include the code from our previous examples that defines a "multiply" tool. We'll also take this opportunity to set up the LLM:
```python
# settings
Settings.llm = OpenAI(model="gpt-4o", temperature=0)
# function tools
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and returns the product"""
    return a * b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
def add(a: float, b: float) -> float:
    """Add two numbers and returns the sum"""
    return a + b
add_tool = FunctionTool.from_defaults(fn=add)
```
Now we'll do the new step, which is to fetch the array of tools:
```python
finance_tools = YahooFinanceToolSpec().to_tool_list()
```
This is just a regular array, so we can use Python's `extend` method to add our own tools to the mix:
```python
finance_tools.extend([multiply_tool, add_tool])
```
Then we set up the agent as usual, and ask a question:
```python
agent = ReActAgent.from_tools(finance_tools, verbose=True)
response = agent.chat("What is the current price of NVDA?")
print(response)
```
The response is very wordy, so we've truncated it:
```
Thought: The current language of the user is English. I need to use a tool to help me answer the question.
Action: stock_basic_info
Action Input: {'ticker': 'NVDA'}
Observation: Info:
{'address1': '2788 San Tomas Expressway'
...
'currentPrice': 135.58
...}
Thought: I have obtained the current price of NVDA from the stock basic info.
Answer: The current price of NVDA (NVIDIA Corporation) is $135.58.
The current price of NVDA (NVIDIA Corporation) is $135.58.
```
Perfect! As you can see, using existing tools is a snap.
As always, you can check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/6_tools.py) to see this code all in one place.
## Building and contributing your own tools
We love open source contributions of new tools! You can see an example of [what the code of the Yahoo finance tool looks like](https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/tools/llama-index-tools-yahoo-finance/llama_index/tools/yahoo_finance/base.py):
* A class that extends `BaseToolSpec`
* A set of arbitrary Python functions
* A `spec_functions` list that maps the functions to the tool's API
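Here is a rough sketch of that structure with a made-up tool; the class, function, and return value are all hypothetical:
```python
from llama_index.core.tools.tool_spec.base import BaseToolSpec


class WeatherToolSpec(BaseToolSpec):
    """A hypothetical tool spec exposing a single function to the agent."""

    spec_functions = ["get_temperature"]

    def get_temperature(self, city: str) -> str:
        """Return the current temperature for a city."""
        # a real tool would call a weather API here
        return f"The temperature in {city} is 21 degrees Celsius."


# convert the spec into a list of tools an agent can use
weather_tools = WeatherToolSpec().to_tool_list()
```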
Once you've got a tool working, follow our [contributing guide](https://github.com/run-llama/llama_index/blob/main/CONTRIBUTING.md#2--contribute-a-pack-reader-tool-or-dataset-formerly-from-llama-hub) for instructions on correctly setting metadata and submitting a pull request.
Congratulations! You've completed our guide to building agents with LlamaIndex. We can't wait to see what use-cases you build!
# Building a basic agent
In LlamaIndex, an agent is a semi-autonomous piece of software powered by an LLM that is given a task and executes a series of steps towards solving that task. It is given a set of tools, which can be anything from arbitrary functions up to full LlamaIndex query engines, and it selects the best available tool to complete each step. When each step is completed, the agent judges whether the task is now complete, in which case it returns a result to the user, or whether it needs to take another step, in which case it loops back to the start.
![agent flow](./agent_flow.png)
## Getting started
You can find all of this code in [the tutorial repo](https://github.com/run-llama/python-agents-tutorial).
To avoid conflicts and keep things clean, we'll start a new Python virtual environment. You can use any virtual environment manager, but we'll use `poetry` here:
```bash
poetry init
poetry shell
```
And then we'll install the LlamaIndex library and some other dependencies that will come in handy:
```bash
pip install llama-index python-dotenv
```
If any of this gives you trouble, check out our more detailed [installation guide](../getting_started/installation/).
## OpenAI Key
Our agent will be powered by OpenAI's `GPT-3.5-Turbo` LLM, so you'll need an [API key](https://platform.openai.com/). Once you have your key, you can put it in a `.env` file in the root of your project:
```bash
OPENAI_API_KEY=sk-proj-xxxx
```
If you don't want to use OpenAI, we'll show you how to use other models later.
## Bring in dependencies
We'll start by importing the components of LlamaIndex we need, as well as loading the environment variables from our `.env` file:
```python
from dotenv import load_dotenv
load_dotenv()
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
```
## Create basic tools
For this simple example we'll be creating two tools: one that knows how to multiply numbers together, and one that knows how to add them.
```python
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and returns the product"""
    return a * b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
def add(a: float, b: float) -> float:
    """Add two numbers and returns the sum"""
    return a + b
add_tool = FunctionTool.from_defaults(fn=add)
```
As you can see, these are regular vanilla Python functions. The docstring comments provide metadata to the agent about what the tool does: if your LLM is having trouble figuring out which tool to use, these docstrings are what you should tweak first.
After each function is defined we create `FunctionTool` objects from these functions, which wrap them in a way that the agent can understand.
## Initialize the LLM
`GPT-3.5-Turbo` is going to be doing the work today:
```python
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
```
You could also pick another popular model accessible via API, such as those from [Mistral](../examples/llm/mistralai/), [Claude from Anthropic](../examples/llm/anthropic/) or [Gemini from Google](../examples/llm/gemini/).
## Initialize the agent
Now we create our agent. In this case, this is a [ReAct agent](https://klu.ai/glossary/react-agent-model), a relatively simple but powerful agent. We give it an array containing our two tools, the LLM we just created, and set `verbose=True` so we can see what's going on:
```python
agent = ReActAgent.from_tools([multiply_tool, add_tool], llm=llm, verbose=True)
```
## Ask a question
We explicitly tell the agent to use a tool, since this calculation is simple enough that GPT-3.5 doesn't really need one to get the answer.
```python
response = agent.chat("What is 20+(2*4)? Use a tool to calculate every step.")
```
This should give you output similar to the following:
```
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Thought: I need to add 20 to the result of the multiplication.
Action: add
Action Input: {'a': 20, 'b': 8}
Observation: 28
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The result of 20 + (2 * 4) is 28.
The result of 20 + (2 * 4) is 28.
```
As you can see, the agent picks the correct tools one after the other and combines the answers to give the final result. Check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/1_basic_agent.py) to see what the final code should look like.
Congratulations! You've built the most basic kind of agent. Next you can find out how to use [local models](./local_models.md) or skip to [adding RAG to your agent](./rag_agent.md).
# Storing
Once you have data [loaded](../loading/loading.md) and [indexed](../indexing/indexing.md), you will probably want to store it to avoid the time and cost of re-indexing it. By default, your indexed data is stored only in memory.
## Persisting to disk
The simplest way to store your indexed data is to use the built-in `.persist()` method of every Index, which writes all the data to disk at the location specified. This works for any type of index.
```python
index.storage_context.persist(persist_dir="<persist_dir>")
```
Here is an example of a Composable Graph:
```python
graph.root_index.storage_context.persist(persist_dir="<persist_dir>")
```
You can then avoid re-loading and re-indexing your data by loading the persisted index like this:
```python
from llama_index.core import StorageContext, load_index_from_storage
# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
# load index
index = load_index_from_storage(storage_context)
```
!!! tip
    Important: if you had initialized your index with custom `transformations`, `embed_model`, etc., you will need to pass in the same options during `load_index_from_storage`, or have them set as the [global settings](../../module_guides/supporting_modules/settings.md).
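For example, here is a minimal sketch of reloading an index that was built with a custom embedding model (the embedding model shown is illustrative and assumes the `llama-index-embeddings-openai` package is installed):
```python
from llama_index.core import Settings, StorageContext, load_index_from_storage
from llama_index.embeddings.openai import OpenAIEmbedding

# use the same embedding model that was used when the index was built
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
index = load_index_from_storage(storage_context)
```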
## Using Vector Stores
As discussed in [indexing](../indexing/indexing.md), one of the most common types of Index is the VectorStoreIndex. The API calls to create the embeddings in a `VectorStoreIndex` can be expensive in terms of time and money, so you will want to store them to avoid having to constantly re-index things.
LlamaIndex supports a [huge number of vector stores](../../module_guides/storing/vector_stores.md) which vary in architecture, complexity and cost. In this example we'll be using Chroma, an open-source vector store.
First you will need to install chroma:
```
pip install chromadb
```
To use Chroma to store the embeddings from a VectorStoreIndex, you need to:
- initialize the Chroma client
- create a Collection to store your data in Chroma
- assign Chroma as the `vector_store` in a `StorageContext`
- initialize your VectorStoreIndex using that StorageContext
Here's what that looks like, with a sneak peek at actually querying the data:
```python
import chromadb
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext
# load some documents
documents = SimpleDirectoryReader("./data").load_data()
# initialize client, setting path to save data
db = chromadb.PersistentClient(path="./chroma_db")
# create collection
chroma_collection = db.get_or_create_collection("quickstart")
# assign chroma as the vector_store to the context
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# create your index
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
# create a query engine and query
query_engine = index.as_query_engine()
response = query_engine.query("What is the meaning of life?")
print(response)
```
If you've already created and stored your embeddings, you'll want to load them directly without loading your documents or creating a new VectorStoreIndex:
```python
import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext
# initialize client
db = chromadb.PersistentClient(path="./chroma_db")
# get collection
chroma_collection = db.get_or_create_collection("quickstart")
# assign chroma as the vector_store to the context
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# load your index from stored vectors
index = VectorStoreIndex.from_vector_store(
vector_store, storage_context=storage_context
)
# create a query engine
query_engine = index.as_query_engine()
response = query_engine.query("What is llama2?")
print(response)
```
!!! tip
We have a [more thorough example of using Chroma](../../examples/vector_stores/ChromaIndexDemo.ipynb) if you want to go deeper on this store.
### You're ready to query!
Now you have loaded data, indexed it, and stored that index, you're ready to [query your data](../querying/querying.md).
## Inserting Documents or Nodes
If you've already created an index, you can add new documents to your index using the `insert` method.
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex([])
for doc in documents:
index.insert(doc)
```
See the [document management how-to](../../module_guides/indexing/document_management.md) for more details on managing documents and an example notebook. |
397 | 5f60c10c-560d-47ff-87c3-228f49a478c0 | Tracing and Debugging | https://docs.llamaindex.ai/en/stable/understanding/tracing_and_debugging/tracing_and_debugging | true | llama_index | # Tracing and Debugging
Debugging and tracing the operation of your application is key to understanding and optimizing it. LlamaIndex provides a variety of ways to do this.
## Basic logging
The simplest possible way to look into what your application is doing is to turn on debug logging. That can be done anywhere in your application like this:
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
## Callback handler
LlamaIndex provides callbacks to help debug, track, and trace the inner workings of the library. Using the callback manager, you can add as many callbacks as needed.
In addition to logging data related to events, you can also track the duration and number of occurrences
of each event.
Furthermore, a trace map of events is also recorded, and callbacks can use this data however they want. For example, the `LlamaDebugHandler` will, by default, print the trace of events after most operations.
You can get a simple callback handler like this:
```python
import llama_index.core
llama_index.core.set_global_handler("simple")
```
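For more detail, you can attach the `LlamaDebugHandler` via a callback manager; here is a minimal sketch (the event type at the end is just one example of what you can inspect):
```python
from llama_index.core import Settings
from llama_index.core.callbacks import (
    CallbackManager,
    CBEventType,
    LlamaDebugHandler,
)

llama_debug = LlamaDebugHandler(print_trace_on_end=True)
Settings.callback_manager = CallbackManager([llama_debug])

# ... run your indexing/queries, then inspect recorded events, e.g. LLM calls
event_pairs = llama_debug.get_event_pairs(CBEventType.LLM)
```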
You can also learn how to [build your own custom callback handler](../../module_guides/observability/callbacks/index.md).
## Observability
LlamaIndex provides **one-click observability** to allow you to build principled LLM applications in a production setting.
This feature allows you to seamlessly integrate the LlamaIndex library with powerful observability/evaluation tools offered by our partners. Configure a variable once, and you'll be able to do things like the following:
- View LLM/prompt inputs/outputs
- Ensure that the outputs of any component (LLMs, embeddings) are performing as expected
- View call traces for both indexing and querying
To learn more, check out our [observability docs](../../module_guides/observability/index.md) |
899 | 5b253e54-efac-4382-b5a5-7462cefcbce2 | Indexing | https://docs.llamaindex.ai/en/stable/understanding/indexing/indexing | true | llama_index | # Indexing
With your data loaded, you now have a list of Document objects (or a list of Nodes). It's time to build an `Index` over these objects so you can start querying them.
## What is an Index?
In LlamaIndex terms, an `Index` is a data structure composed of `Document` objects, designed to enable querying by an LLM. Your Index is designed to be complementary to your querying strategy.
LlamaIndex offers several different index types. We'll cover the two most common here.
## Vector Store Index
A `VectorStoreIndex` is by far the most frequent type of Index you'll encounter. The Vector Store Index takes your Documents and splits them up into Nodes. It then creates `vector embeddings` of the text of every node, ready to be queried by an LLM.
### What is an embedding?
`Vector embeddings` are central to how LLM applications function.
A `vector embedding`, often just called an embedding, is a **numerical representation of the semantics, or meaning of your text**. Two pieces of text with similar meanings will have mathematically similar embeddings, even if the actual text is quite different.
This mathematical relationship enables **semantic search**, where a user provides query terms and LlamaIndex can locate text that is related to the **meaning of the query terms** rather than simple keyword matching. This is a big part of how Retrieval-Augmented Generation works, and how LLMs function in general.
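To make this concrete, here is a toy illustration (plain NumPy with made-up three-dimensional vectors, not LlamaIndex code; real embeddings have hundreds or thousands of dimensions) of how similarity between embeddings is typically measured:
```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


dog = np.array([0.9, 0.1, 0.05])
puppy = np.array([0.85, 0.15, 0.1])
invoice = np.array([0.05, 0.9, 0.4])

print(cosine_similarity(dog, puppy))  # high score: similar meaning
print(cosine_similarity(dog, invoice))  # low score: unrelated meaning
```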
There are [many types of embeddings](../../module_guides/models/embeddings.md), and they vary in efficiency, effectiveness and computational cost. By default LlamaIndex uses `text-embedding-ada-002`, which is the default embedding used by OpenAI. If you are using different LLMs you will often want to use different embeddings.
### Vector Store Index embeds your documents
Vector Store Index turns all of your text into embeddings using an API from your LLM; this is what is meant when we say it "embeds your text". If you have a lot of text, generating embeddings can take a long time since it involves many round-trip API calls.
When you want to search your embeddings, your query is itself turned into a vector embedding, and then a mathematical operation is carried out by VectorStoreIndex to rank all the embeddings by how semantically similar they are to your query.
### Top K Retrieval
Once the ranking is complete, VectorStoreIndex returns the most-similar embeddings as their corresponding chunks of text. The number of embeddings it returns is known as `k`, so the parameter controlling how many embeddings to return is known as `top_k`. This whole type of search is often referred to as "top-k semantic retrieval" for this reason.
Top-k retrieval is the simplest form of querying a vector index; you will learn about more complex and subtler strategies when you read the [querying](../querying/querying.md) section.
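As a quick sketch of what top-k retrieval returns (this assumes an `index` object like the one constructed in the next section):
```python
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What did the author do growing up?")

for node_with_score in nodes:
    # each result carries the matched text chunk and its similarity score
    print(node_with_score.score, node_with_score.node.get_content()[:80])
```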
### Using Vector Store Index
To use the Vector Store Index, pass it the list of Documents you created during the loading stage:
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_documents(documents)
```
!!! tip
`from_documents` also takes an optional argument `show_progress`. Set it to `True` to display a progress bar during index construction.
You can also choose to build an index over a list of Node objects directly:
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(nodes)
```
With your text indexed, it is now technically ready for [querying](../querying/querying.md)! However, embedding all your text can be time-consuming and, if you are using a hosted LLM, it can also be expensive. To save time and money you will want to [store your embeddings](../storing/storing.md) first.
## Summary Index
A Summary Index is a simpler form of Index best suited to queries where, as the name suggests, you are trying to generate a summary of the text in your Documents. It simply stores all of the Documents and returns all of them to your query engine.
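A minimal sketch, assuming `documents` is the list you produced during the loading stage:
```python
from llama_index.core import SummaryIndex

summary_index = SummaryIndex.from_documents(documents)
query_engine = summary_index.as_query_engine(response_mode="tree_summarize")
print(query_engine.query("What is a summary of this collection of text?"))
```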
## Further Reading
If your data is a set of interconnected concepts (in computer science terms, a "graph") then you may be interested in our [knowledge graph index](../../examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb). |
1,494 | 92a2e347-69c9-4c40-85bf-65093eb36b46 | Querying | https://docs.llamaindex.ai/en/stable/understanding/querying/querying | true | llama_index | # Querying
Now you've loaded your data, built an index, and stored that index for later, you're ready to get to the most significant part of an LLM application: querying.
At its simplest, querying is just a prompt call to an LLM: you can ask a question and get an answer, request a summary, or give a much more complex instruction.
More complex querying could involve repeated/chained prompt + LLM calls, or even a reasoning loop across multiple components.
## Getting started
The basis of all querying is the `QueryEngine`. The simplest way to get a QueryEngine is to get your index to create one for you, like this:
```python
query_engine = index.as_query_engine()
response = query_engine.query(
"Write an email to the user given their background information."
)
print(response)
```
## Stages of querying
However, there is more to querying than initially meets the eye. Querying consists of three distinct stages:
- **Retrieval** is when you find and return the most relevant documents for your query from your `Index`. As previously discussed in [indexing](../indexing/indexing.md), the most common type of retrieval is "top-k" semantic retrieval, but there are many other retrieval strategies.
- **Postprocessing** is when the `Node`s retrieved are optionally reranked, transformed, or filtered, for instance by requiring that they have specific metadata such as keywords attached.
- **Response synthesis** is when your query, your most-relevant data and your prompt are combined and sent to your LLM to return a response.
!!! tip
You can find out about [how to attach metadata to documents](../../module_guides/loading/documents_and_nodes/usage_documents.md) and [nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
## Customizing the stages of querying
LlamaIndex features a low-level composition API that gives you granular control over your querying.
In this example, we customize our retriever to use a different number for `top_k` and add a post-processing step that requires the retrieved nodes to reach a minimum similarity score to be included. This gives you plenty of context when there are relevant results, but potentially no data at all if nothing clears the cutoff.
```python
from llama_index.core import VectorStoreIndex, get_response_synthesizer
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.postprocessor import SimilarityPostprocessor
# build index
index = VectorStoreIndex.from_documents(documents)
# configure retriever
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=10,
)
# configure response synthesizer
response_synthesizer = get_response_synthesizer()
# assemble query engine
query_engine = RetrieverQueryEngine(
retriever=retriever,
response_synthesizer=response_synthesizer,
node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
# query
response = query_engine.query("What did the author do growing up?")
print(response)
```
You can also add your own retrieval, response synthesis, and overall query logic, by implementing the corresponding interfaces.
For a full list of implemented components and the supported configurations, check out our [reference docs](../../api_reference/index.md).
Let's go into more detail about customizing each step:
### Configuring retriever
```python
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=10,
)
```
There are a huge variety of retrievers that you can learn about in our [module guide on retrievers](../../module_guides/querying/retriever/index.md).
### Configuring node postprocessors
We support advanced `Node` filtering and augmentation that can further improve the relevancy of the retrieved `Node` objects.
This can help reduce the time and cost of your queries by cutting down on LLM calls, or improve response quality.
For example:
- `KeywordNodePostprocessor`: filters nodes by `required_keywords` and `exclude_keywords`.
- `SimilarityPostprocessor`: filters nodes by setting a threshold on the similarity score (thus only supported by embedding-based retrievers)
- `PrevNextNodePostprocessor`: augments retrieved `Node` objects with additional relevant context based on `Node` relationships.
The full list of node postprocessors is documented in the [Node Postprocessor Reference](../../api_reference/postprocessor/index.md).
To configure the desired node postprocessors:
```python
from llama_index.core.postprocessor import KeywordNodePostprocessor

node_postprocessors = [
KeywordNodePostprocessor(
required_keywords=["Combinator"], exclude_keywords=["Italy"]
)
]
query_engine = RetrieverQueryEngine.from_args(
retriever, node_postprocessors=node_postprocessors
)
response = query_engine.query("What did the author do growing up?")
```
### Configuring response synthesis
After a retriever fetches relevant nodes, a `BaseSynthesizer` synthesizes the final response by combining the information.
You can configure it via
```python
query_engine = RetrieverQueryEngine.from_args(
    retriever, response_mode="compact"  # pick any of the response modes listed below
)
```
Right now, we support the following options:
- `default`: "create and refine" an answer by sequentially going through each retrieved `Node`;
This makes a separate LLM call per Node. Good for more detailed answers.
- `compact`: "compact" the prompt during each LLM call by stuffing as
many `Node` text chunks that can fit within the maximum prompt size. If there are
too many chunks to stuff in one prompt, "create and refine" an answer by going through
multiple prompts.
- `tree_summarize`: Given a set of `Node` objects and the query, recursively construct a tree
and return the root node as the response. Good for summarization purposes.
- `no_text`: Only runs the retriever to fetch the nodes that would have been sent to the LLM,
  without actually sending them. The retrieved nodes can then be inspected by checking `response.source_nodes`.
- `accumulate`: Given a set of `Node` objects and the query, apply the query to each `Node` text
chunk while accumulating the responses into an array. Returns a concatenated string of all
responses. Good for when you need to run the same query separately against each text
chunk.
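If you are not assembling the query engine by hand, the same modes can be set through the high-level shortcut on the index; a minimal sketch assuming the `index` built earlier:
```python
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("Summarize the documents in this index.")
print(response)
```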
## Structured Outputs
You may want to ensure your output is structured. See our [Query Engines + Pydantic Outputs](../../module_guides/querying/structured_outputs/query_engine.md) to see how to extract a Pydantic object from a query engine class.
Also make sure to check out our entire [Structured Outputs](../../module_guides/querying/structured_outputs/index.md) guide.
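As a rough sketch of the pattern described in those guides (the `Biography` model and its fields here are illustrative assumptions):
```python
from pydantic import BaseModel


class Biography(BaseModel):
    """Structured details about a person."""

    name: str
    best_known_for: str


# assumes `index` is the VectorStoreIndex built earlier in this guide
query_engine = index.as_query_engine(
    output_cls=Biography, response_mode="compact"
)
response = query_engine.query("Who is the author and what are they best known for?")
print(response)  # see the linked guides for accessing the parsed object
```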
## Creating your own Query Pipeline
If you want to design complex query flows, you can compose your own query pipeline across many different modules, from prompts/LLMs/output parsers to retrievers to response synthesizers to your own custom components.
Take a look at our [Query Pipelines Module Guide](../../module_guides/querying/pipeline/index.md) for more details. |
399 | 906509df-1a70-4ab8-9df2-68aee062407c | Putting It All Together | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/index | true | llama_index | # Putting It All Together
Congratulations! You've loaded your data, indexed it, stored your index, and queried your index. Now you've got to ship something to production. We can show you how to do that!
- In [Q&A Patterns](q_and_a.md) we'll go into some of the more advanced and subtle ways you can build a query engine beyond the basics.
- The [terms definition tutorial](q_and_a/terms_definitions_tutorial.md) is a detailed, step-by-step tutorial on creating a subtle query application including defining your prompts and supporting images as input.
- We have a guide to [creating a unified query framework over your indexes](../../examples/retrievers/reciprocal_rerank_fusion.ipynb) which shows you how to run queries across multiple indexes.
- And also over [structured data like SQL](structured_data.md)
- We have a guide on [how to build a chatbot](chatbots/building_a_chatbot.md)
- We talk about [building agents in LlamaIndex](agents.md)
- We have a complete guide to using [property graphs for indexing and retrieval](../../module_guides/indexing/lpg_index_guide.md)
- And last but not least we show you how to build [a full stack web application](apps/index.md) using LlamaIndex
LlamaIndex also provides some tools / project templates to help you build a full-stack template. For instance, [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama) spins up a full-stack scaffold for you.
Check out our [Full-Stack Projects](../../community/full_stack_projects.md) page for more details.
We also have the [`llamaindex-cli rag` CLI tool](../../getting_started/starter_tools/rag_cli.md) that combines some of the above concepts into an easy to use tool for chatting with files from your terminal! |
1,084 | bf31b6c1-15db-4298-aacf-793390f87cb0 | Agents | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/agents | true | llama_index | # Agents
Putting together an agent in LlamaIndex can be done by defining a set of tools and providing them to our ReActAgent implementation. We're using it here with OpenAI, but it can be used with any sufficiently capable LLM:
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
# define sample Tool
def multiply(a: int, b: int) -> int:
"""Multiply two integers and returns the result integer"""
return a * b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
# initialize llm
llm = OpenAI(model="gpt-3.5-turbo-0613")
# initialize ReAct agent
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
```
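With the agent constructed, a minimal usage sketch looks like this:
```python
response = agent.chat("What is 7 multiplied by 6? Use a tool.")
print(response)
```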
These tools can be Python functions as shown above, or they can be LlamaIndex query engines:
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
query_engine_tools = [
QueryEngineTool(
query_engine=sql_agent,
metadata=ToolMetadata(
name="sql_agent", description="Agent that can execute SQL queries."
),
),
]
agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)
```
You can learn more in our [Agent Module Guide](../../module_guides/deploying/agents/index.md).
## Native OpenAIAgent
We have an `OpenAIAgent` implementation built on the [OpenAI API for function calling](https://openai.com/blog/function-calling-and-other-api-updates) that allows you to rapidly build agents:
- [OpenAIAgent](../../examples/agent/openai_agent.ipynb)
- [OpenAIAgent with Query Engine Tools](../../examples/agent/openai_agent_with_query_engine.ipynb)
- [OpenAIAgent Query Planning](../../examples/agent/openai_agent_query_plan.ipynb)
- [OpenAI Assistant](../../examples/agent/openai_assistant_agent.ipynb)
- [OpenAI Assistant Cookbook](../../examples/agent/openai_assistant_query_cookbook.ipynb)
- [Forced Function Calling](../../examples/agent/openai_forced_function_call.ipynb)
- [Parallel Function Calling](../../examples/agent/openai_agent_parallel_function_calling.ipynb)
- [Context Retrieval](../../examples/agent/openai_agent_context_retrieval.ipynb)
## Agentic Components within LlamaIndex
LlamaIndex provides core modules capable of automated reasoning over your data for different use cases, which makes them essentially agents. Some of these core modules are shown below along with example tutorials.
**SubQuestionQueryEngine for Multi Document Analysis**
- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)
- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)
- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)
**Query Transformations**
- [How-To](../../optimizing/advanced_retrieval/query_transformations.md)
- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))
**Routing**
- [Usage](../../module_guides/querying/router/index.md)
- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))
**LLM Reranking**
- [Second Stage Processing How-To](../../module_guides/querying/node_postprocessors/index.md)
- [LLM Reranking Guide (Great Gatsby)](../../examples/node_postprocessor/LLMReranker-Gatsby.ipynb)
**Chat Engines**
- [Chat Engines How-To](../../module_guides/deploying/chat_engines/index.md)
## Using LlamaIndex as a Tool within an Agent Framework
LlamaIndex can be used as a Tool within an agent framework, including LangChain and ChatGPT. These integrations are described below.
### LangChain
We have deep integrations with LangChain.
LlamaIndex query engines can be easily packaged as Tools to be used within a LangChain agent, and LlamaIndex can also be used as a memory module / retriever. Check out our guides/tutorials below!
**Resources**
- [Building a Chatbot Tutorial](chatbots/building_a_chatbot.md)
- [OnDemandLoaderTool Tutorial](../../examples/tools/OnDemandLoaderTool.ipynb)
### ChatGPT
LlamaIndex can be used as a ChatGPT retrieval plugin (we have a TODO to develop a more general plugin as well).
**Resources**
- [LlamaIndex ChatGPT Retrieval Plugin](https://github.com/openai/chatgpt-retrieval-plugin#llamaindex) |
5,652 | 8dada3ca-6484-4531-8f3d-cf97f6b9fcd9 | A Guide to Extracting Terms and Definitions | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/terms_definitions_tutorial | true | llama_index | # A Guide to Extracting Terms and Definitions
Llama Index has many use cases (semantic search, summarization, etc.) that are well documented. However, this doesn't mean we can't apply Llama Index to very specific use cases!
In this tutorial, we will go through the design process of using Llama Index to extract terms and definitions from text, while allowing users to query those terms later. Using [Streamlit](https://streamlit.io/), we can provide an easy way to build frontend for running and testing all of this, and quickly iterate with our design.
This tutorial assumes you have Python 3.9+ and the following packages installed:
- llama-index
- streamlit
At the base level, our objective is to take text from a document, extract terms and definitions, and then provide a way for users to query that knowledge base of terms and definitions. The tutorial will go over features from both Llama Index and Streamlit, and hopefully provide some interesting solutions for common problems that come up.
The final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co/spaces/Nobody4591/Llama_Index_Term_Extractor).
## Uploading Text
Step one is giving users a way to input text manually. Let’s write some code using Streamlit to provide the interface for this! Use the following code and launch the app with `streamlit run app.py`.
```python
import streamlit as st
st.title("🦙 Llama Index Term Extractor 🦙")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = document_text # this is a placeholder!
st.write(extracted_terms)
```
Super simple, right! But you'll notice that the app doesn't do anything useful yet. To use llama_index, we also need to set up our OpenAI LLM. There are a bunch of possible settings for the LLM, so we can let the user figure out what's best. We should also let the user set the prompt that will extract the terms (which will also help us debug what works best).
## LLM Settings
This next step introduces some tabs to our app, to separate it into different panes that provide different features. Let's create a tab for LLM settings and for uploading text:
```python
import os
import streamlit as st
DEFAULT_TERM_STR = (
"Make a list of terms and definitions that are defined in the context, "
"with one pair on each line. "
"If a term is missing it's definition, use your best judgment. "
"Write each line as as follows:\nTerm: <term> Definition: <definition>"
)
st.title("🦙 Llama Index Term Extractor 🦙")
setup_tab, upload_tab = st.tabs(["Setup", "Upload/Extract Terms"])
with setup_tab:
st.subheader("LLM Setup")
api_key = st.text_input("Enter your OpenAI API key here", type="password")
llm_name = st.selectbox("Which LLM?", ["gpt-3.5-turbo", "gpt-4"])
model_temperature = st.slider(
"LLM Temperature", min_value=0.0, max_value=1.0, step=0.1
)
term_extract_str = st.text_area(
"The query to extract terms and definitions with.",
value=DEFAULT_TERM_STR,
)
with upload_tab:
st.subheader("Extract and Query Definitions")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = document_text # this is a placeholder!
st.write(extracted_terms)
```
Now our app has two tabs, which really helps with the organization. You'll also notice I added a default prompt to extract terms -- you can change this later once you try extracting some terms; it's just the prompt I arrived at after experimenting a bit.
Speaking of extracting terms, it's time to add some functions to do just that!
## Extracting and Storing Terms
Now that we are able to define LLM settings and input text, we can try using Llama Index to extract the terms from text for us!
We can add the following functions to both initialize our LLM, as well as use it to extract terms from the input text.
```python
from llama_index.core import Document, SummaryIndex, load_index_from_storage
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
def get_llm(llm_name, model_temperature, api_key, max_tokens=256):
os.environ["OPENAI_API_KEY"] = api_key
return OpenAI(
temperature=model_temperature, model=llm_name, max_tokens=max_tokens
)
def extract_terms(
documents, term_extract_str, llm_name, model_temperature, api_key
):
llm = get_llm(llm_name, model_temperature, api_key, max_tokens=1024)
temp_index = SummaryIndex.from_documents(
documents,
)
query_engine = temp_index.as_query_engine(
response_mode="tree_summarize", llm=llm
)
terms_definitions = str(query_engine.query(term_extract_str))
terms_definitions = [
x
for x in terms_definitions.split("\n")
if x and "Term:" in x and "Definition:" in x
]
# parse the text into a dict
terms_to_definition = {
x.split("Definition:")[0]
.split("Term:")[-1]
.strip(): x.split("Definition:")[-1]
.strip()
for x in terms_definitions
}
return terms_to_definition
```
Now, using the new functions, we can finally extract our terms!
```python
...
with upload_tab:
st.subheader("Extract and Query Definitions")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
st.write(extracted_terms)
```
There's a lot going on now, let's take a moment to go over what is happening.
`get_llm()` instantiates the LLM based on the user configuration from the setup tab, returning an `OpenAI` instance with the chosen model name and temperature.
`extract_terms()` is where all the good stuff happens. First, we call `get_llm()` with `max_tokens=1024`, since we don't want to limit the model too much when it is extracting our terms and definitions (the default is 256 if not set). When documents are indexed by Llama Index, large documents are broken into chunks (also called nodes), and the chunk size setting controls how big those chunks are. (If you want finer control, you could also align `Settings.num_output` with the `max_tokens` value and cap the chunk size, but the snippet above keeps the defaults.)
Next, we create a temporary summary index and pass in our llm. A summary index will read every single piece of text in our index, which is perfect for extracting terms. Finally, we use our pre-defined query text to extract terms, using `response_mode="tree_summarize`. This response mode will generate a tree of summaries from the bottom up, where each parent summarizes its children. Finally, the top of the tree is returned, which will contain all our extracted terms and definitions.
Lastly, we do some minor post processing. We assume the model followed instructions and put a term/definition pair on each line. If a line is missing the `Term:` or `Definition:` labels, we skip it. Then, we convert this to a dictionary for easy storage!
## Saving Extracted Terms
Now that we can extract terms, we need to put them somewhere so that we can query for them later. A `VectorStoreIndex` should be a perfect choice for now! But in addition, our app should also keep track of which terms are inserted into the index so that we can inspect them later. Using `st.session_state`, we can store the current list of terms in a session dict, unique to each user!
First things first though, let's add a feature to initialize a global vector index and another function to insert the extracted terms.
```python
from llama_index.core import Settings, VectorStoreIndex
...
if "all_terms" not in st.session_state:
st.session_state["all_terms"] = DEFAULT_TERMS
...
def insert_terms(terms_to_definition):
for term, definition in terms_to_definition.items():
doc = Document(text=f"Term: {term}\nDefinition: {definition}")
st.session_state["llama_index"].insert(doc)
@st.cache_resource
def initialize_index(llm_name, model_temperature, api_key):
"""Create the VectorStoreIndex object."""
Settings.llm = get_llm(llm_name, model_temperature, api_key)
index = VectorStoreIndex([])
    return index
...
with upload_tab:
st.subheader("Extract and Query Definitions")
if st.button("Initialize Index and Reset Terms"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = {}
if "llama_index" in st.session_state:
st.markdown(
"Either upload an image/screenshot of a document, or enter the text manually."
)
document_text = st.text_area("Or enter raw text")
if st.button("Extract Terms and Definitions") and (
uploaded_file or document_text
):
st.session_state["terms"] = {}
terms_docs = {}
with st.spinner("Extracting..."):
terms_docs.update(
extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
st.session_state["terms"].update(terms_docs)
if "terms" in st.session_state and st.session_state["terms"]:
st.markdown("Extracted terms")
st.json(st.session_state["terms"])
if st.button("Insert terms?"):
with st.spinner("Inserting terms"):
insert_terms(st.session_state["terms"])
st.session_state["all_terms"].update(st.session_state["terms"])
st.session_state["terms"] = {}
st.experimental_rerun()
```
Now you are really starting to leverage the power of Streamlit! Let's start with the code under the upload tab. We added a button to initialize the vector index, and we store it in the global Streamlit state dictionary, as well as resetting the currently extracted terms. Then, after extracting terms from the input text, we store the extracted terms in the global state again and give the user a chance to review them before inserting. If the insert button is pressed, then we call our insert terms function, update our global tracking of inserted terms, and remove the most recently extracted terms from the session state.
## Querying for Extracted Terms/Definitions
With the terms and definitions extracted and saved, how can we use them? And how will the user even remember what's previously been saved?? We can simply add some more tabs to the app to handle these features.
```python
...
setup_tab, terms_tab, upload_tab, query_tab = st.tabs(
["Setup", "All Terms", "Upload/Extract Terms", "Query Terms"]
)
...
with terms_tab:
    st.subheader("Current Extracted Terms and Definitions")
    st.json(st.session_state["all_terms"])
...
with query_tab:
st.subheader("Query for Terms/Definitions!")
st.markdown(
(
"The LLM will attempt to answer your query, and augment it's answers using the terms/definitions you've inserted. "
"If a term is not in the index, it will answer using it's internal knowledge."
)
)
if st.button("Initialize Index and Reset Terms", key="init_index_2"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = {}
if "llama_index" in st.session_state:
query_text = st.text_input("Ask about a term or definition:")
if query_text:
query_text = (
query_text
+ "\nIf you can't find the answer, answer the query with the best of your knowledge."
)
with st.spinner("Generating answer..."):
response = (
st.session_state["llama_index"]
.as_query_engine(
similarity_top_k=5,
response_mode="compact",
text_qa_template=TEXT_QA_TEMPLATE,
                        refine_template=REFINE_TEMPLATE,
)
.query(query_text)
)
st.markdown(str(response))
```
While this is mostly basic, some important things to note:
- Our initialize button has the same text as our other button. Streamlit will complain about this, so we provide a unique key instead.
- Some additional text has been added to the query! This is to try and compensate for times when the index does not have the answer.
- In our index query, we've specified two options:
- `similarity_top_k=5` means the index will fetch the top 5 closest matching terms/definitions to the query.
- `response_mode="compact"` means as much text as possible from the 5 matching terms/definitions will be used in each LLM call. Without this, the index would make at least 5 calls to the LLM, which can slow things down for the user.
## Dry Run Test
Well, actually I hope you've been testing as we went. But now, let's try one complete test.
1. Refresh the app
2. Enter your LLM settings
3. Head over to the query tab
4. Ask the following: `What is a bunnyhug?`
5. The app should give some nonsense response. If you didn't know, a bunnyhug is another word for a hoodie, used by people from the Canadian Prairies!
6. Let's add this definition to the app. Open the upload tab and enter the following text: `A bunnyhug is a common term used to describe a hoodie. This term is used by people from the Canadian Prairies.`
7. Click the extract button. After a few moments, the app should display the correctly extracted term/definition. Click the insert term button to save it!
8. If we open the terms tab, the term and definition we just extracted should be displayed
9. Go back to the query tab and try asking what a bunnyhug is. Now, the answer should be correct!
## Improvement #1 - Create a Starting Index
With our base app working, it might feel like a lot of work to build up a useful index. What if we gave the user some kind of starting point to show off the app's query capabilities? We can do just that! First, let's make a small change to our app so that we save the index to disk after every upload:
```python
def insert_terms(terms_to_definition):
for term, definition in terms_to_definition.items():
doc = Document(text=f"Term: {term}\nDefinition: {definition}")
st.session_state["llama_index"].insert(doc)
# TEMPORARY - save to disk
st.session_state["llama_index"].storage_context.persist()
```
Now, we need some document to extract from! The repository for this project used the wikipedia page on New York City, and you can find the text [here](https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/data/nyc_text.txt).
If you paste the text into the upload tab and run it (it may take some time), we can insert the extracted terms. Make sure to also copy the text for the extracted terms into a notepad or similar before inserting into the index! We will need them in a second.
After inserting, remove the line of code we used to save the index to disk. With a starting index now saved, we can modify our `initialize_index` function to look like this:
```python
from llama_index.core import StorageContext


@st.cache_resource
def initialize_index(llm_name, model_temperature, api_key):
    """Load the Index object."""
    Settings.llm = get_llm(llm_name, model_temperature, api_key)
    # "./storage" is the default persist_dir used by .persist()
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    index = load_index_from_storage(storage_context)
    return index
```
Did you remember to save that giant list of extracted terms in a notepad? Now when our app initializes, we want to pass in the default terms that are in the index to our global terms state:
```python
...
if "all_terms" not in st.session_state:
st.session_state["all_terms"] = DEFAULT_TERMS
...
```
Repeat the above anywhere where we were previously resetting the `all_terms` values.
## Improvement #2 - (Refining) Better Prompts
If you play around with the app a bit now, you might notice that it stopped following our prompt! Remember, we added to our `query_str` variable that if the term/definition could not be found, answer to the best of its knowledge. But now if you try asking about random terms (like bunnyhug!), it may or may not follow those instructions.
This is due to the concept of "refining" answers in Llama Index. Since we are querying across the top 5 matching results, sometimes all the results do not fit in a single prompt! OpenAI models typically have a max input size of 4097 tokens. So, Llama Index accounts for this by breaking up the matching results into chunks that will fit into the prompt. After Llama Index gets an initial answer from the first API call, it sends the next chunk to the API, along with the previous answer, and asks the model to refine that answer.
So, the refine process seems to be messing with our results! Rather than appending extra instructions to the `query_str`, remove that, and Llama Index will let us provide our own custom prompts! Let's create those now, using the [default prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/default_prompts.py) and [chat specific prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/chat_prompts.py) as a guide. Using a new file `constants.py`, let's create some new query templates:
```python
from llama_index.core import (
PromptTemplate,
SelectorPromptTemplate,
ChatPromptTemplate,
)
from llama_index.core.prompts.utils import is_chat_model
from llama_index.core.llms import ChatMessage, MessageRole
# Text QA templates
DEFAULT_TEXT_QA_PROMPT_TMPL = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information answer the following question "
"(if you don't know the answer, use the best of your knowledge): {query_str}\n"
)
TEXT_QA_TEMPLATE = PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL)
# Refine templates
DEFAULT_REFINE_PROMPT_TMPL = (
"The original question is as follows: {query_str}\n"
"We have provided an existing answer: {existing_answer}\n"
"We have the opportunity to refine the existing answer "
"(only if needed) with some more context below.\n"
"------------\n"
"{context_msg}\n"
"------------\n"
"Given the new context and using the best of your knowledge, improve the existing answer. "
"If you can't improve the existing answer, just repeat it again."
)
DEFAULT_REFINE_PROMPT = PromptTemplate(DEFAULT_REFINE_PROMPT_TMPL)
CHAT_REFINE_PROMPT_TMPL_MSGS = [
ChatMessage(content="{query_str}", role=MessageRole.USER),
ChatMessage(content="{existing_answer}", role=MessageRole.ASSISTANT),
ChatMessage(
content="We have the opportunity to refine the above answer "
"(only if needed) with some more context below.\n"
"------------\n"
"{context_msg}\n"
"------------\n"
"Given the new context and using the best of your knowledge, improve the existing answer. "
"If you can't improve the existing answer, just repeat it again.",
role=MessageRole.USER,
),
]
CHAT_REFINE_PROMPT = ChatPromptTemplate(CHAT_REFINE_PROMPT_TMPL_MSGS)
# refine prompt selector
REFINE_TEMPLATE = SelectorPromptTemplate(
default_template=DEFAULT_REFINE_PROMPT,
conditionals=[(is_chat_model, CHAT_REFINE_PROMPT)],
)
```
That seems like a lot of code, but it's not too bad! If you looked at the default prompts, you might have noticed that there are default prompts, and prompts specific to chat models. Continuing that trend, we do the same for our custom prompts. Then, using a prompt selector, we can combine both prompts into a single object. If the LLM being used is a chat model (ChatGPT, GPT-4), then the chat prompts are used. Otherwise, use the normal prompt templates.
Another thing to note is that we only defined one QA template. In a chat model, this will be converted to a single "human" message.
So, now we can import these prompts into our app and use them during the query.
```python
from constants import REFINE_TEMPLATE, TEXT_QA_TEMPLATE
...
if "llama_index" in st.session_state:
query_text = st.text_input("Ask about a term or definition:")
if query_text:
query_text = query_text # Notice we removed the old instructions
with st.spinner("Generating answer..."):
response = (
st.session_state["llama_index"]
.as_query_engine(
similarity_top_k=5,
response_mode="compact",
text_qa_template=TEXT_QA_TEMPLATE,
                    refine_template=REFINE_TEMPLATE,
)
.query(query_text)
)
st.markdown(str(response))
...
```
If you experiment a bit more with queries, hopefully you notice that the responses follow our instructions a little better now!
## Improvement #3 - Image Support
Llama Index also supports images! Using Llama Index, we can upload images of documents (papers, letters, etc.), and Llama Index handles extracting the text. We can leverage this to also allow users to upload images of their documents and extract terms and definitions from them.
If you get an import error about PIL, install it using `pip install Pillow` first.
```python
from PIL import Image
from llama_index.core import SimpleDirectoryReader
from llama_index.readers.file import ImageReader
@st.cache_resource
def get_file_extractor():
image_parser = ImageReader(keep_image=True, parse_text=True)
file_extractor = {
".jpg": image_parser,
".png": image_parser,
".jpeg": image_parser,
}
return file_extractor
file_extractor = get_file_extractor()
...
with upload_tab:
st.subheader("Extract and Query Definitions")
if st.button("Initialize Index and Reset Terms", key="init_index_1"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = DEFAULT_TERMS
if "llama_index" in st.session_state:
st.markdown(
"Either upload an image/screenshot of a document, or enter the text manually."
)
uploaded_file = st.file_uploader(
"Upload an image/screenshot of a document:",
type=["png", "jpg", "jpeg"],
)
document_text = st.text_area("Or enter raw text")
if st.button("Extract Terms and Definitions") and (
uploaded_file or document_text
):
st.session_state["terms"] = {}
terms_docs = {}
with st.spinner("Extracting (images may be slow)..."):
if document_text:
terms_docs.update(
extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
if uploaded_file:
Image.open(uploaded_file).convert("RGB").save("temp.png")
img_reader = SimpleDirectoryReader(
input_files=["temp.png"], file_extractor=file_extractor
)
img_docs = img_reader.load_data()
os.remove("temp.png")
terms_docs.update(
extract_terms(
img_docs,
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
st.session_state["terms"].update(terms_docs)
if "terms" in st.session_state and st.session_state["terms"]:
st.markdown("Extracted terms")
st.json(st.session_state["terms"])
if st.button("Insert terms?"):
with st.spinner("Inserting terms"):
insert_terms(st.session_state["terms"])
st.session_state["all_terms"].update(st.session_state["terms"])
st.session_state["terms"] = {}
st.experimental_rerun()
```
Here, we added the option to upload a file using Streamlit. Then the image is opened and saved to disk (this seems hacky but it keeps things simple). Then we pass the image path to the reader, extract the documents/text, and remove our temp image file.
Now that we have the documents, we can call `extract_terms()` the same as before.
## Conclusion/TLDR
In this tutorial, we covered a ton of information, while solving some common issues and problems along the way:
- Using different indexes for different use cases (Summary vs. Vector index)
- Storing global state values with Streamlit's `session_state` concept
- Customizing internal prompts with Llama Index
- Reading text from images with Llama Index
The final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co/spaces/Nobody4591/Llama_Index_Term_Extractor). |
1,871 | 86e843c6-0a02-4475-84f3-0daaee761aeb | Q&A patterns | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/index | true | llama_index | # Q&A patterns
## Semantic Search
The most basic example usage of LlamaIndex is through semantic search. We provide a simple in-memory vector store for you to get started, but you can also choose to use any one of our [vector store integrations](../../community/integrations/vector_stores.md):
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
**Tutorials**
- [Starter Tutorial](../../getting_started/starter_example.md)
- [Basic Usage Pattern](../querying/querying.md)
**Guides**
- [Example](../../examples/vector_stores/SimpleIndexDemo.ipynb) ([Notebook](https://github.com/run-llama/llama_index/tree/main/docs/docs/examples/vector_stores/SimpleIndexDemo.ipynb))
## Summarization
A summarization query requires the LLM to iterate through many if not most documents in order to synthesize an answer.
For instance, a summarization query could look like one of the following:
- "What is a summary of this collection of text?"
- "Give me a summary of person X's experience with the company."
In general, a summary index would be suited for this use case. A summary index by default goes through all the data.
Empirically, setting `response_mode="tree_summarize"` also leads to better summarization results.
```python
from llama_index.core import SummaryIndex

index = SummaryIndex.from_documents(documents)
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("<summarization_query>")
```
## Queries over Structured Data
LlamaIndex supports queries over structured data, whether that's a Pandas DataFrame or a SQL Database.
Here are some relevant resources:
**Tutorials**
- [Guide on Text-to-SQL](structured_data.md)
**Guides**
- [SQL Guide (Core)](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/index_structs/struct_indices/SQLIndexDemo.ipynb))
- [Pandas Demo](../../examples/query_engine/pandas_query_engine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/pandas_query_engine.ipynb))
## Routing over Heterogeneous Data
LlamaIndex also supports routing over heterogeneous data sources with `RouterQueryEngine` - for instance, if you want to "route" a query to an
underlying Document or a sub-index.
To do this, first build the sub-indices over different data sources.
Then construct the corresponding query engines, and give each query engine a description to obtain a `QueryEngineTool`.
```python
from llama_index.core import TreeIndex, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool
...
# define sub-indices
index1 = VectorStoreIndex.from_documents(notion_docs)
index2 = VectorStoreIndex.from_documents(slack_docs)
# define query engines and tools
tool1 = QueryEngineTool.from_defaults(
query_engine=index1.as_query_engine(),
description="Use this query engine to do...",
)
tool2 = QueryEngineTool.from_defaults(
query_engine=index2.as_query_engine(),
description="Use this query engine for something else...",
)
```
Then, we define a `RouterQueryEngine` over them.
By default, this uses an `LLMSingleSelector` as the router, which uses the LLM to choose the best sub-index to route the query to, given the descriptions.
```python
from llama_index.core.query_engine import RouterQueryEngine
query_engine = RouterQueryEngine.from_defaults(
query_engine_tools=[tool1, tool2]
)
response = query_engine.query(
"In Notion, give me a summary of the product roadmap."
)
```
**Guides**
- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))
## Compare/Contrast Queries
You can explicitly perform compare/contrast queries with a **query transformation** module within a ComposableGraph.
```python
from llama_index.core.query.query_transform.base import DecomposeQueryTransform
# pass in any LlamaIndex LLM instance (e.g. OpenAI())
decompose_transform = DecomposeQueryTransform(llm=llm, verbose=True)
```
This module will help break down a complex query into a simpler one over your existing index structure.
**Guides**
- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)
You can also rely on the LLM to _infer_ whether to perform compare/contrast queries (see Multi Document Queries below).
## Multi Document Queries
Besides the explicit synthesis/routing flows described above, LlamaIndex can support more general multi-document queries as well.
It can do this through our `SubQuestionQueryEngine` class. Given a query, this query engine will generate a "query plan" containing
sub-queries against sub-documents before synthesizing the final answer.
To do this, first define an index for each document/data source, and wrap it with a `QueryEngineTool` (similar to above):
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
query_engine_tools = [
QueryEngineTool(
query_engine=sept_engine,
metadata=ToolMetadata(
name="sept_22",
description="Provides information about Uber quarterly financials ending September 2022",
),
),
QueryEngineTool(
query_engine=june_engine,
metadata=ToolMetadata(
name="june_22",
description="Provides information about Uber quarterly financials ending June 2022",
),
),
QueryEngineTool(
query_engine=march_engine,
metadata=ToolMetadata(
name="march_22",
description="Provides information about Uber quarterly financials ending March 2022",
),
),
]
```
Then, we define a `SubQuestionQueryEngine` over these tools:
```python
from llama_index.core.query_engine import SubQuestionQueryEngine
query_engine = SubQuestionQueryEngine.from_defaults(
query_engine_tools=query_engine_tools
)
```
This query engine can execute any number of sub-queries against any subset of query engine tools before synthesizing the final answer.
This makes it especially well-suited for compare/contrast queries across documents as well as queries pertaining to a specific document.
**Guides**
- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)
- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)
- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)
## Multi-Step Queries
LlamaIndex can also support iterative multi-step queries. Given a complex query, it breaks the query down into an initial subquestion,
and sequentially generates further subquestions based on the returned answers until the final answer is returned.
For instance, given a question "Who was in the first batch of the accelerator program the author started?",
the module will first decompose the query into a simpler initial question "What was the accelerator program the author started?",
query the index, and then ask followup questions.
**Guides**
- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)
- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))
## Temporal Queries
LlamaIndex can support queries that require an understanding of time. It can do this in two ways:
- Decide whether the query requires utilizing temporal relationships between nodes (prev/next relationships) in order to retrieve additional context to answer the question.
- Sort by recency and filter outdated context, as sketched below.
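Here is a rough sketch of the second approach, assuming your nodes carry a `date` entry in their metadata and an existing `index` (the key name and defaults are assumptions; see the guides below for details):
```python
from llama_index.core.postprocessor import FixedRecencyPostprocessor

# keep only the most recent matching context, based on the "date" metadata field
recency_postprocessor = FixedRecencyPostprocessor(top_k=1, date_key="date")

query_engine = index.as_query_engine(
    similarity_top_k=5,
    node_postprocessors=[recency_postprocessor],
)
```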
**Guides**
- [Postprocessing Guide](../../module_guides/querying/node_postprocessors/node_postprocessors.md)
- [Prev/Next Postprocessing](../../examples/node_postprocessor/PrevNextPostprocessorDemo.ipynb)
- [Recency Postprocessing](../../examples/node_postprocessor/RecencyPostprocessorDemo.ipynb)
## Additional Resources
- [A Guide to Extracting Terms and Definitions](q_and_a/terms_definitions_tutorial.md)
- [SEC 10k Analysis](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d) |
3,639 | 0a9fdd80-bd50-41e1-86b6-4dddbefd25f0 | Airbyte SQL Index Guide | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/structured_data/Airbyte_demo | true | llama_index | # Airbyte SQL Index Guide
We will show how to generate SQL queries on a Snowflake db generated by Airbyte.
```python
# Uncomment to enable debugging.
# import logging
# import sys
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
### Airbyte ingestion
Here we show how to ingest data from Github into a Snowflake db using Airbyte.
```python
from IPython.display import Image
Image(filename="img/airbyte_1.png")
```
![png](output_4_0.png)
Let's create a new connection. Here we will be dumping our Zendesk tickets into a Snowflake db.
```python
Image(filename="img/github_1.png")
```
![png](output_6_0.png)
```python
Image(filename="img/github_2.png")
```
![png](output_7_0.png)
```python
Image(filename="img/snowflake_1.png")
```
![png](output_8_0.png)
```python
Image(filename="img/snowflake_2.png")
```
![png](output_9_0.png)
Choose the streams you want to sync.
```python
Image(filename="img/airbyte_7.png")
```
![png](output_11_0.png)
```python
Image(filename="img/github_3.png")
```
![png](output_12_0.png)
Sync your data.
```python
Image(filename="img/airbyte_9.png")
```
![png](output_14_0.png)
```python
Image(filename="img/airbyte_8.png")
```
![png](output_15_0.png)
### Snowflake-SQLAlchemy version fix
Hack to make snowflake-sqlalchemy work despite incompatible sqlalchemy versions
Taken from https://github.com/snowflakedb/snowflake-sqlalchemy/issues/380#issuecomment-1470762025
```python
# Hack to make snowflake-sqlalchemy work until they patch it
def snowflake_sqlalchemy_20_monkey_patches():
import sqlalchemy.util.compat
# make strings always return unicode strings
sqlalchemy.util.compat.string_types = (str,)
sqlalchemy.types.String.RETURNS_UNICODE = True
import snowflake.sqlalchemy.snowdialect
snowflake.sqlalchemy.snowdialect.SnowflakeDialect.returns_unicode_strings = (
True
)
# make has_table() support the `info_cache` kwarg
import snowflake.sqlalchemy.snowdialect
def has_table(self, connection, table_name, schema=None, info_cache=None):
"""
Checks if the table exists
"""
return self._has_object(connection, "TABLE", table_name, schema)
snowflake.sqlalchemy.snowdialect.SnowflakeDialect.has_table = has_table
# usage: call this function before creating an engine:
try:
snowflake_sqlalchemy_20_monkey_patches()
except Exception as e:
raise ValueError("Please run `pip install snowflake-sqlalchemy`")
```
### Define database
We pass the Snowflake uri to the SQL db constructor
```python
snowflake_uri = "snowflake://<user_login_name>:<password>@<account_identifier>/<database_name>/<schema_name>?warehouse=<warehouse_name>&role=<role_name>"
```
First we try connecting with sqlalchemy to check the db works.
```python
from sqlalchemy import select, create_engine, MetaData, Table
# view current table
engine = create_engine(snowflake_uri)
metadata = MetaData(bind=None)
table = Table("ZENDESK_TICKETS", metadata, autoload=True, autoload_with=engine)
stmt = select(table.columns)
with engine.connect() as connection:
results = connection.execute(stmt).fetchone()
print(results)
print(results.keys())
```
/var/folders/dx/n9yhm8p9039b5bgmgjqy46y40000gn/T/ipykernel_57673/3609487787.py:6: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
table = Table(
(False, 'test case', '[]', datetime.datetime(2022, 7, 18, 16, 59, 13, tzinfo=<UTC>), 'test to', None, None, 'question', '{\n "channel": "web",\n "source": {\n "from": {},\n "rel": null,\n "to": {}\n }\n}', True, datetime.datetime(2022, 7, 18, 18, 1, 37, tzinfo=<UTC>), None, '[]', None, 134, None, 1658167297, 'test case', None, '[]', False, '{\n "score": "offered"\n}', 360786799676, 'low', '[]', 'https://d3v-airbyte.zendesk.com/api/v2/tickets/134.json', '[]', 360000358316, 360000084116, '[]', None, '[]', 360033549136, True, None, False, 'new', 360786799676, 'abd39a87-b1f9-4390-bf8b-cf3c288b1f74', datetime.datetime(2023, 6, 9, 0, 25, 23, 501000, tzinfo=pytz.FixedOffset(-420)), datetime.datetime(2023, 6, 9, 0, 38, 20, 440000, tzinfo=<UTC>), '6577ef036668746df889983970579a55', '02522a2b2726fb0a03bb19f2d8d9524d')
RMKeyView(['from_messaging_channel', 'subject', 'email_cc_ids', 'created_at', 'description', 'custom_status_id', 'external_id', 'type', 'via', 'allow_attachments', 'updated_at', 'problem_id', 'follower_ids', 'due_at', 'id', 'assignee_id', 'generated_timestamp', 'raw_subject', 'forum_topic_id', 'custom_fields', 'allow_channelback', 'satisfaction_rating', 'submitter_id', 'priority', 'collaborator_ids', 'url', 'tags', 'brand_id', 'ticket_form_id', 'sharing_agreement_ids', 'group_id', 'followup_ids', 'organization_id', 'is_public', 'recipient', 'has_incidents', 'status', 'requester_id', '_airbyte_ab_id', '_airbyte_emitted_at', '_airbyte_normalized_at', '_airbyte_zendesk_tickets_hashid', '_airbyte_unique_key'])
### Define SQL DB
Once we have defined the SQLDatabase, we can wrap it in a query engine to query it.
If we know what tables we want to use we can use `NLSQLTableQueryEngine`.
This will generate a SQL query on the specified tables.
```python
from llama_index import SQLDatabase
# You can specify table filters during engine creation.
# sql_database = SQLDatabase(engine, include_tables=["github_issues","github_comments", "github_users"])
sql_database = SQLDatabase(engine)
```
### Synthesize Query
We then show a natural language query, which is translated to a SQL query under the hood with our text-to-SQL prompt.
```python
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine
from IPython.display import Markdown, display
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
tables=["github_issues", "github_comments", "github_users"],
)
query_str = "Which issues have the most comments? Give the top 10 and use a join on url."
response = query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
```
<b> The top 10 issues with the most comments, based on a join on url, are 'Proof of concept parallel source stream reading implementation for MySQL', 'Remove noisy logging for `LegacyStateManager`', 'Track stream status in source', 'Source Google Analytics v4: - add pk and lookback window', 'Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', '📝 Update outdated docs urls in metadata files', 'Fix emitted intermediate state for initial incremental non-CDC syncs', 'source-postgres : Add logic to handle xmin wraparound', ':bug: Source HubSpot: fix cast string as boolean using string comparison', and 'Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.'.</b>
```python
# You can also get only the SQL query result.
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
synthesize_response=False,
tables=["github_issues", "github_comments", "github_users"],
)
response = query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
```
<b>[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]</b>
```python
# You can also get the original SQL query
sql_query = response.metadata["sql_query"]
display(Markdown(f"<b>{sql_query}</b>"))
```
<b>SELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count
FROM github_issues gi
JOIN github_comments gc ON gi.url = gc.issue_url
GROUP BY gi.title, gi.url, gc.issue_url
ORDER BY comment_count DESC
LIMIT 10;</b>
We can also use LLM prediction to figure out what tables to use.
We first need to create an ObjectIndex of SQLTableSchema. In this case we only pass in the table names.
The query engine will fetch the relevant table schema at query time.
```python
from llama_index.indices.struct_store.sql_query import (
SQLTableRetrieverQueryEngine,
)
from llama_index.objects import (
SQLTableNodeMapping,
ObjectIndex,
SQLTableSchema,
)
from llama_index import VectorStoreIndex
table_node_mapping = SQLTableNodeMapping(sql_database)
all_table_names = sql_database.get_usable_table_names()
table_schema_objs = []
for table_name in all_table_names:
table_schema_objs.append(SQLTableSchema(table_name=table_name))
obj_index = ObjectIndex.from_objects(
table_schema_objs,
table_node_mapping,
VectorStoreIndex,
)
table_retriever_query_engine = SQLTableRetrieverQueryEngine(
sql_database, obj_index.as_retriever(similarity_top_k=1)
)
response = table_retriever_query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
sql_query = response.metadata["sql_query"]
display(Markdown(f"<b>{sql_query}</b>"))
```
/Users/hongyishi/Documents/GitHub/gpt_index/.venv/lib/python3.11/site-packages/langchain/sql_database.py:279: UserWarning: This method is deprecated - please use `get_usable_table_names`.
warnings.warn(
<b>[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]</b>
<b>SELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count
FROM github_issues gi
JOIN github_comments gc ON gi.url = gc.issue_url
GROUP BY gi.title, gi.url, gc.issue_url
ORDER BY comment_count DESC
LIMIT 10;</b> |
1,389 | 2ed4f255-948b-40be-8d07-7a07057fa10e | Structured Data | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/structured_data/index | true | llama_index | # Structured Data
# A Guide to LlamaIndex + Structured Data
A lot of modern data systems depend on structured data, such as a Postgres DB or a Snowflake data warehouse.
LlamaIndex provides a lot of advanced features, powered by LLMs, to both create structured data from unstructured data and analyze this structured data through augmented text-to-SQL capabilities.
**NOTE:** Any Text-to-SQL application should be aware that executing
arbitrary SQL queries can be a security risk. It is recommended to
take precautions as needed, such as using restricted roles, read-only
databases, sandboxing, etc.
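For instance (a sketch, assuming a hypothetical database role that has been granted only `SELECT` privileges), you can point the SQLAlchemy engine at a restricted user rather than an admin account:
```python
from sqlalchemy import create_engine
# "readonly_user" is a placeholder for a role with SELECT-only grants
engine = create_engine(
    "postgresql+psycopg2://readonly_user:password@localhost:5432/mydb"
)
```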
This guide helps walk through each of these capabilities. Specifically, we cover the following topics:
- **Setup**: Defining our example SQL table.
- **Building our Table Index**: How to go from a SQL database to a table schema index.
- **Using natural language SQL queries**: How to query our SQL database using natural language.
We will walk through a toy example table which contains city/population/country information.
A notebook for this tutorial is [available here](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb).
## Setup
First, we use SQLAlchemy to set up a simple SQLite db:
```python
from sqlalchemy import (
create_engine,
MetaData,
Table,
Column,
String,
Integer,
select,
column,
)
engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()
```
We then create a toy `city_stats` table:
```python
# create city SQL table
table_name = "city_stats"
city_stats_table = Table(
table_name,
metadata_obj,
Column("city_name", String(16), primary_key=True),
Column("population", Integer),
Column("country", String(16), nullable=False),
)
metadata_obj.create_all(engine)
```
Now it's time to insert some datapoints!
If you want to look into filling this table by inferring structured datapoints from unstructured data, take a look at the section below. Otherwise, you can directly populate the table:
```python
from sqlalchemy import insert
rows = [
{"city_name": "Toronto", "population": 2731571, "country": "Canada"},
{"city_name": "Tokyo", "population": 13929286, "country": "Japan"},
{"city_name": "Berlin", "population": 600000, "country": "Germany"},
]
for row in rows:
stmt = insert(city_stats_table).values(**row)
with engine.begin() as connection:
cursor = connection.execute(stmt)
```
Finally, we can wrap the SQLAlchemy engine with our SQLDatabase wrapper;
this allows the db to be used within LlamaIndex:
```python
from llama_index.core import SQLDatabase
sql_database = SQLDatabase(engine, include_tables=["city_stats"])
```
## Natural language SQL
Once we have constructed our SQL database, we can use the NLSQLTableQueryEngine to
construct natural language queries that are synthesized into SQL queries.
Note that we need to specify the tables we want to use with this query engine.
If we don't, the query engine will pull in all the schema context, which could overflow the context window of the LLM.
```python
from llama_index.core.query_engine import NLSQLTableQueryEngine
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
tables=["city_stats"],
)
query_str = "Which city has the highest population?"
response = query_engine.query(query_str)
```
This query engine should be used in any case where you can specify the tables you want to query over beforehand, or where the total size of all the table schemas plus the rest of the prompt fits within your context window.
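If you want to inspect the SQL that was generated for a given question, it is available in the response metadata (the same pattern is shown in the Airbyte SQL Index Guide):
```python
print(response.metadata["sql_query"])
```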
## Building our Table Index
If we don't know ahead of time which table we would like to use, and the total size of the table schemas overflows the context window, we should store the table schemas in an index so that at query time we can retrieve the right schema.
The way we can do this is using the SQLTableNodeMapping object, which takes in a
SQLDatabase and produces a Node object for each SQLTableSchema object passed
into the ObjectIndex constructor.
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import (
SQLTableNodeMapping,
ObjectIndex,
SQLTableSchema,
)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
(SQLTableSchema(table_name="city_stats")),
...,
] # one SQLTableSchema for each table
obj_index = ObjectIndex.from_objects(
table_schema_objs,
table_node_mapping,
VectorStoreIndex,
)
```
Here you can see we define our table_node_mapping, and a single SQLTableSchema with the
"city_stats" table name. We pass these into the ObjectIndex constructor, along with the
VectorStoreIndex class definition we want to use. This will give us a VectorStoreIndex where
each Node contains table schema and other context information. You can also add any additional
context information you'd like.
```python
# manually set extra context text
city_stats_text = (
"This table gives information regarding the population and country of a given city.\n"
"The user will query with codewords, where 'foo' corresponds to population and 'bar'"
"corresponds to city."
)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
(SQLTableSchema(table_name="city_stats", context_str=city_stats_text))
]
```
## Using natural language SQL queries
Once we have defined our table schema index obj_index, we can construct a SQLTableRetrieverQueryEngine
by passing in our SQLDatabase, and a retriever constructed from our object index.
```python
from llama_index.core.indices.struct_store import SQLTableRetrieverQueryEngine
query_engine = SQLTableRetrieverQueryEngine(
sql_database, obj_index.as_retriever(similarity_top_k=1)
)
response = query_engine.query("Which city has the highest population?")
print(response)
```
Now when we query the retriever query engine, it will retrieve the relevant table schema
and synthesize a SQL query and a response from the results of that query.
## Concluding Thoughts
This is it for now! We're constantly looking for ways to improve our structured data support.
If you have any questions let us know in [our Discord](https://discord.gg/dGcwcsnxhU).
Relevant Resources:
- [Airbyte SQL Index Guide](./structured_data/Airbyte_demo.ipynb) |
4,506 | 3b04b376-b99a-40a3-96f6-571a5dda5fcb | How to Build a Chatbot | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/chatbots/building_a_chatbot | true | llama_index | # How to Build a Chatbot
LlamaIndex serves as a bridge between your data and Large Language Models (LLMs), providing a toolkit that enables you to establish a query interface around your data for a variety of tasks, such as question-answering and summarization.
In this tutorial, we'll walk you through building a context-augmented chatbot using a [Data Agent](https://gpt-index.readthedocs.io/en/stable/core_modules/agent_modules/agents/root.html). This agent, powered by LLMs, is capable of intelligently executing tasks over your data. The end result is a chatbot agent equipped with a robust set of data interface tools provided by LlamaIndex to answer queries about your data.
**Note**: This tutorial builds upon initial work on creating a query interface over SEC 10-K filings - [check it out here](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d).
### Context
In this guide, we’ll build a "10-K Chatbot" that uses raw UBER 10-K HTML filings from Dropbox. Users can interact with the chatbot to ask questions related to the 10-K filings.
### Preparation
```python
import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
import nest_asyncio
nest_asyncio.apply()
```
### Ingest Data
Let's first download the raw 10-K filings from 2019-2022.
```
# NOTE: the code examples assume you're operating within a Jupyter notebook.
# download files
!mkdir data
!wget "https://www.dropbox.com/s/948jr9cfs7fgj99/UBER.zip?dl=1" -O data/UBER.zip
!unzip data/UBER.zip -d data
```
To parse the HTML files into formatted text, we use the [Unstructured](https://github.com/Unstructured-IO/unstructured) library. Thanks to [LlamaHub](https://llamahub.ai/), we can directly integrate with Unstructured, allowing conversion of any text into a Document format that LlamaIndex can ingest.
First we install the necessary packages:
```
!pip install llama-hub unstructured
```
Then we can use the `UnstructuredReader` to parse the HTML files into a list of `Document` objects.
```python
from llama_index.readers.file import UnstructuredReader
from pathlib import Path
years = [2022, 2021, 2020, 2019]
loader = UnstructuredReader()
doc_set = {}
all_docs = []
for year in years:
year_docs = loader.load_data(
file=Path(f"./data/UBER/UBER_{year}.html"), split_documents=False
)
    # insert year metadata into each document for that year
for d in year_docs:
d.metadata = {"year": year}
doc_set[year] = year_docs
all_docs.extend(year_docs)
```
### Setting up Vector Indices for each year
We first setup a vector index for each year. Each vector index allows us
to ask questions about the 10-K filing of a given year.
We build each index and save it to disk.
```python
# initialize simple vector indices
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.core import Settings
Settings.chunk_size = 512
index_set = {}
for year in years:
storage_context = StorageContext.from_defaults()
cur_index = VectorStoreIndex.from_documents(
doc_set[year],
storage_context=storage_context,
)
index_set[year] = cur_index
storage_context.persist(persist_dir=f"./storage/{year}")
```
To load an index from disk, do the following
```python
# Load indices from disk
from llama_index.core import load_index_from_storage
index_set = {}
for year in years:
storage_context = StorageContext.from_defaults(
persist_dir=f"./storage/{year}"
)
cur_index = load_index_from_storage(
storage_context,
)
index_set[year] = cur_index
```
### Setting up a Sub Question Query Engine to Synthesize Answers Across 10-K Filings
Since we have access to documents of 4 years, we may not only want to ask questions regarding the 10-K document of a given year, but ask questions that require analysis over all 10-K filings.
To address this, we can use a [Sub Question Query Engine](https://gpt-index.readthedocs.io/en/stable/examples/query_engine/sub_question_query_engine.html). It decomposes a query into subqueries, each answered by an individual vector index, and synthesizes the results to answer the overall query.
LlamaIndex provides some wrappers around indices (and query engines) so that they can be used by query engines and agents. First we define a `QueryEngineTool` for each vector index.
Each tool has a name and a description; these are what the LLM agent sees to decide which tool to choose.
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
individual_query_engine_tools = [
QueryEngineTool(
query_engine=index_set[year].as_query_engine(),
metadata=ToolMetadata(
name=f"vector_index_{year}",
description=f"useful for when you want to answer queries about the {year} SEC 10-K for Uber",
),
)
for year in years
]
```
Now we can create the Sub Question Query Engine, which will allow us to synthesize answers across the 10-K filings. We pass in the `individual_query_engine_tools` we defined above, as well as an `llm` that will be used to run the subqueries.
```python
from llama_index.llms.openai import OpenAI
from llama_index.core.query_engine import SubQuestionQueryEngine
query_engine = SubQuestionQueryEngine.from_defaults(
query_engine_tools=individual_query_engine_tools,
llm=OpenAI(model="gpt-3.5-turbo"),
)
```
### Setting up the Chatbot Agent
We use a LlamaIndex Data Agent to set up the outer chatbot agent, which has access to a set of Tools. Specifically, we will use an OpenAIAgent, which takes advantage of OpenAI API function calling. We want to use the separate Tools we defined previously for each index (corresponding to a given year), as well as a tool for the sub question query engine we defined above.
First we define a `QueryEngineTool` for the sub question query engine:
```python
query_engine_tool = QueryEngineTool(
query_engine=query_engine,
metadata=ToolMetadata(
name="sub_question_query_engine",
description="useful for when you want to answer queries that require analyzing multiple SEC 10-K documents for Uber",
),
)
```
Then, we combine the Tools we defined above into a single list of tools for the agent:
```python
tools = individual_query_engine_tools + [query_engine_tool]
```
Finally, we call `OpenAIAgent.from_tools` to create the agent, passing in the list of tools we defined above.
```python
from llama_index.agent.openai import OpenAIAgent
agent = OpenAIAgent.from_tools(tools, verbose=True)
```
### Testing the Agent
We can now test the agent with various queries.
If we test it with a simple "hello" query, the agent does not use any Tools.
```python
response = agent.chat("hi, i am bob")
print(str(response))
```
```
Hello Bob! How can I assist you today?
```
If we test it with a query regarding the 10-k of a given year, the agent will use
the relevant vector index Tool.
```python
response = agent.chat(
"What were some of the biggest risk factors in 2020 for Uber?"
)
print(str(response))
```
```
=== Calling Function ===
Calling function: vector_index_2020 with args: {
"input": "biggest risk factors"
}
Got output: The biggest risk factors mentioned in the context are:
1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.
2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.
3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.
4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.
5. Significant losses incurred and the uncertainty of achieving profitability.
6. The risk of not attracting or maintaining a critical mass of platform users.
7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.
8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.
9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.
========================
Some of the biggest risk factors for Uber in 2020 were:
1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.
2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.
3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.
4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.
5. Significant losses incurred and the uncertainty of achieving profitability.
6. The risk of not attracting or maintaining a critical mass of platform users.
7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.
8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.
9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.
These risk factors highlight the challenges and uncertainties that Uber faced in 2020.
```
Finally, if we test it with a query to compare/contrast risk factors across years,
the agent will use the Sub Question Query Engine Tool.
```python
cross_query_str = "Compare/contrast the risk factors described in the Uber 10-K across years. Give answer in bullet points."
response = agent.chat(cross_query_str)
print(str(response))
```
```
=== Calling Function ===
Calling function: sub_question_query_engine with args: {
"input": "Compare/contrast the risk factors described in the Uber 10-K across years"
}
Generated 4 sub questions.
[vector_index_2022] Q: What are the risk factors described in the 2022 SEC 10-K for Uber?
[vector_index_2021] Q: What are the risk factors described in the 2021 SEC 10-K for Uber?
[vector_index_2020] Q: What are the risk factors described in the 2020 SEC 10-K for Uber?
[vector_index_2019] Q: What are the risk factors described in the 2019 SEC 10-K for Uber?
[vector_index_2021] A: The risk factors described in the 2021 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification.
[vector_index_2020] A: The risk factors described in the 2020 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses and the uncertainty of achieving profitability, the importance of attracting and retaining a critical mass of drivers and users, and the challenges associated with their workplace culture and operational compliance.
[vector_index_2022] A: The risk factors described in the 2022 SEC 10-K for Uber include the potential adverse effect on their business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive in certain markets, the company's history of significant losses and the expectation of increased operating expenses in the future, and the potential impact on their platform if they are unable to attract or maintain a critical mass of drivers, consumers, merchants, shippers, and carriers.
[vector_index_2019] A: The risk factors described in the 2019 SEC 10-K for Uber include the loss of their license to operate in London, the complexity of their business and operating model due to regulatory uncertainties, the potential for additional regulations for their other products in the Other Bets segment, the evolving laws and regulations regarding the development and deployment of autonomous vehicles, and the increasing number of data protection and privacy laws around the world. Additionally, there are legal proceedings, litigation, claims, and government investigations that Uber is involved in, which could impose a burden on management and employees and come with defense costs or unfavorable rulings.
Got output: The risk factors described in the Uber 10-K reports across the years include the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification. Additionally, there are specific risk factors mentioned in each year's report, such as the adverse impact of the COVID-19 pandemic in 2020 and 2021, the loss of their license to operate in London in 2019, and the evolving laws and regulations regarding autonomous vehicles in 2019. Overall, while there are some similarities in the risk factors mentioned, there are also specific factors that vary across the years.
========================
=== Calling Function ===
Calling function: vector_index_2022 with args: {
"input": "risk factors"
}
Got output: Some of the risk factors mentioned in the context include the potential adverse effect on the business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and the expectation of increased operating expenses, the impact of future pandemics or disease outbreaks on the business and financial results, and the potential harm to the business due to economic conditions and their effect on discretionary consumer spending.
========================
=== Calling Function ===
Calling function: vector_index_2021 with args: {
"input": "risk factors"
}
Got output: The COVID-19 pandemic and the impact of actions to mitigate the pandemic have adversely affected and may continue to adversely affect parts of our business. Our business would be adversely affected if Drivers were classified as employees, workers or quasi-employees instead of independent contractors. The mobility, delivery, and logistics industries are highly competitive, with well-established and low-cost alternatives that have been available for decades, low barriers to entry, low switching costs, and well-capitalized competitors in nearly every major geographic region. To remain competitive in certain markets, we have in the past lowered, and may continue to lower, fares or service fees, and we have in the past offered, and may continue to offer, significant Driver incentives and consumer discounts and promotions. We have incurred significant losses since inception, including in the United States and other major markets. We expect our operating expenses to increase significantly in the foreseeable future, and we may not achieve or maintain profitability. If we are unable to attract or maintain a critical mass of Drivers, consumers, merchants, shippers, and carriers, whether as a result of competition or other factors, our platform will become less appealing to platform users.
========================
=== Calling Function ===
Calling function: vector_index_2020 with args: {
"input": "risk factors"
}
Got output: The risk factors mentioned in the context include the adverse impact of the COVID-19 pandemic on the business, the potential reclassification of drivers as employees, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and potential future expenses, the importance of attracting and maintaining a critical mass of platform users, and the operational and cultural challenges faced by the company.
========================
=== Calling Function ===
Calling function: vector_index_2019 with args: {
"input": "risk factors"
}
Got output: The risk factors mentioned in the context include competition with local companies, differing levels of social acceptance, technological compatibility issues, exposure to improper business practices, legal uncertainty, difficulties in managing international operations, fluctuations in currency exchange rates, regulations governing local currencies, tax consequences, financial accounting burdens, difficulties in implementing financial systems, import and export restrictions, political and economic instability, public health concerns, reduced protection for intellectual property rights, limited influence over minority-owned affiliates, and regulatory complexities. These risk factors could adversely affect the international operations, business, financial condition, and operating results of the company.
========================
Here is a comparison of the risk factors described in the Uber 10-K reports across years:
2022 Risk Factors:
- Potential adverse effect if drivers were classified as employees instead of independent contractors.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees to remain competitive.
- History of significant losses and expectation of increased operating expenses.
- Impact of future pandemics or disease outbreaks on the business and financial results.
- Potential harm to the business due to economic conditions and their effect on discretionary consumer spending.
2021 Risk Factors:
- Adverse impact of the COVID-19 pandemic and actions to mitigate it on the business.
- Potential reclassification of drivers as employees instead of independent contractors.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees and offer incentives to remain competitive.
- History of significant losses and uncertainty of achieving profitability.
- Importance of attracting and maintaining a critical mass of platform users.
2020 Risk Factors:
- Adverse impact of the COVID-19 pandemic on the business.
- Potential reclassification of drivers as employees.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees to remain competitive.
- History of significant losses and potential future expenses.
- Importance of attracting and maintaining a critical mass of platform users.
- Operational and cultural challenges faced by the company.
2019 Risk Factors:
- Competition with local companies.
- Differing levels of social acceptance.
- Technological compatibility issues.
- Exposure to improper business practices.
- Legal uncertainty.
- Difficulties in managing international operations.
- Fluctuations in currency exchange rates.
- Regulations governing local currencies.
- Tax consequences.
- Financial accounting burdens.
- Difficulties in implementing financial systems.
- Import and export restrictions.
- Political and economic instability.
- Public health concerns.
- Reduced protection for intellectual property rights.
- Limited influence over minority-owned affiliates.
- Regulatory complexities.
These comparisons highlight both common and unique risk factors that Uber faced in different years.
```
### Setting up the Chatbot Loop
Now that we have the chatbot set up, it only takes a few more steps to set up a basic interactive loop to chat with our SEC-augmented chatbot!
```python
agent = OpenAIAgent.from_tools(tools) # verbose=False by default
while True:
text_input = input("User: ")
if text_input == "exit":
break
response = agent.chat(text_input)
print(f"Agent: {response}")
```
Here's an example of the loop in action:
```
User: What were some of the legal proceedings against Uber in 2022?
Agent: In 2022, Uber faced several legal proceedings. Some of the notable ones include:
1. Petition against Proposition 22: A petition was filed in California alleging that Proposition 22, which classifies app-based drivers as independent contractors, is unconstitutional.
2. Lawsuit by Massachusetts Attorney General: The Massachusetts Attorney General filed a lawsuit against Uber, claiming that drivers should be classified as employees and entitled to protections under wage and labor laws.
3. Allegations by New York Attorney General: The New York Attorney General made allegations against Uber regarding the misclassification of drivers and related employment violations.
4. Swiss social security rulings: Swiss social security rulings classified Uber drivers as employees, which could have implications for Uber's operations in Switzerland.
5. Class action lawsuits in Australia: Uber faced class action lawsuits in Australia, with allegations that the company conspired to harm participants in the taxi, hire-car, and limousine industries.
It's important to note that the outcomes of these legal proceedings are uncertain and may vary.
User:
```
### Notebook
Take a look at our [corresponding notebook](../../../examples/agent/Chatbot_SEC.ipynb). |
3,667 | 874edc9f-5575-4c23-a772-908223caa446 | A Guide to Building a Full-Stack Web App with LLamaIndex | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/fullstack_app_guide | true | llama_index | # A Guide to Building a Full-Stack Web App with LLamaIndex
LlamaIndex is a python library, which means that integrating it with a full-stack web application will be a little different than what you might be used to.
This guide seeks to walk through the steps needed to create a basic API service written in python, and how this interacts with a TypeScript+React frontend.
All code examples here are available from the [llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) in the flask_react folder.
The main technologies used in this guide are as follows:
- python3.11
- llama_index
- flask
- typescript
- react
## Flask Backend
For this guide, our backend will use a [Flask](https://flask.palletsprojects.com/en/2.2.x/) API server to communicate with our frontend code. If you prefer, you can also easily translate this to a [FastAPI](https://fastapi.tiangolo.com/) server, or any other python server library of your choice.
Setting up a server using Flask is easy. You import the package, create the app object, and then create your endpoints. Let's create a basic skeleton for the server first:
```python
from flask import Flask
app = Flask(__name__)
@app.route("/")
def home():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5601)
```
_flask_demo.py_
If you run this file (`python flask_demo.py`), it will launch a server on port 5601. If you visit `http://localhost:5601/`, you will see the "Hello World!" text rendered in your browser. Nice!
The next step is deciding what functions we want to include in our server, and to start using LlamaIndex.
To keep things simple, the most basic operation we can provide is querying an existing index. Using the [paul graham essay](https://github.com/jerryjliu/llama_index/blob/main/examples/paul_graham_essay/data/paul_graham_essay.txt) from LlamaIndex, create a documents folder and download+place the essay text file inside of it.
### Basic Flask - Handling User Index Queries
Now, let's write some code to initialize our index:
```python
import os
from llama_index.core import (
SimpleDirectoryReader,
VectorStoreIndex,
StorageContext,
load_index_from_storage,
)
# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"
index = None
def initialize_index():
global index
    index_dir = "./.index"
    if os.path.exists(index_dir):
        # load the existing index from disk
        storage_context = StorageContext.from_defaults(persist_dir=index_dir)
        index = load_index_from_storage(storage_context)
    else:
        # build a new index over the documents folder and persist it below
        storage_context = StorageContext.from_defaults()
        documents = SimpleDirectoryReader("./documents").load_data()
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
storage_context.persist(index_dir)
```
This function will initialize our index. If we call this just before starting the flask server in the `main` function, then our index will be ready for user queries!
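For example, the `main` block from the skeleton above might become (a sketch; the rest of the file stays the same):
```python
if __name__ == "__main__":
    # build or load the index before the server starts accepting queries
    initialize_index()
    app.run(host="0.0.0.0", port=5601)
```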
Our query endpoint will accept `GET` requests with the query text as a parameter. Here's what the full endpoint function will look like:
```python
from flask import request
@app.route("/query", methods=["GET"])
def query_index():
global index
query_text = request.args.get("text", None)
if query_text is None:
return (
"No text found, please include a ?text=blah parameter in the URL",
400,
)
query_engine = index.as_query_engine()
response = query_engine.query(query_text)
return str(response), 200
```
Now, we've introduced a few new concepts to our server:
- a new `/query` endpoint, defined by the function decorator
- a new import from flask, `request`, which is used to get parameters from the request
- if the `text` parameter is missing, then we return an error message and an appropriate HTML response code
- otherwise, we query the index, and return the response as a string
A full query example that you can test in your browser might look something like this: `http://localhost:5601/query?text=what did the author do growing up` (once you press enter, the browser will convert the spaces into "%20" characters).
Things are looking pretty good! We now have a functional API. Using your own documents, you can easily provide an interface for any application to call the flask API and get answers to queries.
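For instance (a sketch using the `requests` package), another Python program could call the endpoint like this:
```python
import requests
response = requests.get(
    "http://localhost:5601/query",
    params={"text": "what did the author do growing up"},
)
print(response.status_code, response.text)
```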
### Advanced Flask - Handling User Document Uploads
Things are looking pretty cool, but how can we take this a step further? What if we want to allow users to build their own indexes by uploading their own documents? Have no fear, Flask can handle it all :muscle:.
To let users upload documents, we have to take some extra precautions. Instead of querying an existing index, the index will become **mutable**. If you have many users adding to the same index, we need to think about how to handle concurrency. Our Flask server is threaded, which means multiple users can ping the server with requests which will be handled at the same time.
One option might be to create an index for each user or group, and store and fetch things from S3. But for this example, we will assume there is one locally stored index that users are interacting with.
To handle concurrent uploads and ensure sequential inserts into the index, we can use the `BaseManager` python package to provide sequential access to the index using a separate server and locks. This sounds scary, but it's not so bad! We will just move all our index operations (initializing, querying, inserting) into the `BaseManager` "index_server", which will be called from our Flask server.
Here's a basic example of what our `index_server.py` will look like after we've moved our code:
```python
import os
from multiprocessing import Lock
from multiprocessing.managers import BaseManager
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Document
# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"
index = None
lock = Lock()
def initialize_index():
global index
with lock:
# same as before ...
pass
def query_index(query_text):
global index
query_engine = index.as_query_engine()
response = query_engine.query(query_text)
return str(response)
if __name__ == "__main__":
# init the global index
print("initializing index...")
initialize_index()
# setup server
# NOTE: you might want to handle the password in a less hardcoded way
manager = BaseManager(("", 5602), b"password")
manager.register("query_index", query_index)
server = manager.get_server()
print("starting server...")
server.serve_forever()
```
_index_server.py_
So, we've moved our functions, introduced the `Lock` object which ensures sequential access to the global index, registered our single function in the server, and started the server on port 5602 with the password `password`.
Then, we can adjust our flask code as follows:
```python
from multiprocessing.managers import BaseManager
from flask import Flask, request
# initialize manager connection
# NOTE: you might want to handle the password in a less hardcoded way
manager = BaseManager(("", 5602), b"password")
manager.register("query_index")
manager.connect()
@app.route("/query", methods=["GET"])
def query_index():
global index
query_text = request.args.get("text", None)
if query_text is None:
return (
"No text found, please include a ?text=blah parameter in the URL",
400,
)
response = manager.query_index(query_text)._getvalue()
return str(response), 200
@app.route("/")
def home():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5601)
```
_flask_demo.py_
The two main changes are connecting to our existing `BaseManager` server and registering the functions, as well as calling the function through the manager in the `/query` endpoint.
One special thing to note is that `BaseManager` servers don't return objects quite as we expect. To resolve the return value into its original object, we call the `_getvalue()` function.
If we allow users to upload their own documents, we should probably remove the Paul Graham essay from the documents folder, so let's do that first. Then, let's add an endpoint to upload files! First, let's define our Flask endpoint function:
```python
...
manager.register("insert_into_index")
...
@app.route("/uploadFile", methods=["POST"])
def upload_file():
global manager
if "file" not in request.files:
return "Please send a POST request with a file", 400
filepath = None
try:
uploaded_file = request.files["file"]
filename = secure_filename(uploaded_file.filename)
filepath = os.path.join("documents", os.path.basename(filename))
uploaded_file.save(filepath)
if request.form.get("filename_as_doc_id", None) is not None:
manager.insert_into_index(filepath, doc_id=filename)
else:
manager.insert_into_index(filepath)
except Exception as e:
# cleanup temp file
if filepath is not None and os.path.exists(filepath):
os.remove(filepath)
return "Error: {}".format(str(e)), 500
# cleanup temp file
if filepath is not None and os.path.exists(filepath):
os.remove(filepath)
return "File inserted!", 200
```
Not too bad! You will notice that we write the file to disk. We could skip this if we only accept basic file formats like `txt` files, but by writing it to disk we can take advantage of LlamaIndex's `SimpleDirectoryReader` to take care of a bunch of more complex file formats. Optionally, we also use a second `POST` argument to either use the filename as a doc_id or let LlamaIndex generate one for us. This will make more sense once we implement the frontend.
With these more complicated requests, I also suggest using a tool like [Postman](https://www.postman.com/downloads/?utm_source=postman-home). Examples of using postman to test our endpoints are in the [repository for this project](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/postman_examples).
Lastly, you'll notice we added a new function to the manager. Let's implement that inside `index_server.py`:
```python
def insert_into_index(doc_text, doc_id=None):
global index
document = SimpleDirectoryReader(input_files=[doc_text]).load_data()[0]
if doc_id is not None:
document.doc_id = doc_id
with lock:
index.insert(document)
index.storage_context.persist()
...
manager.register("insert_into_index", insert_into_index)
...
```
Easy! If we launch both the `index_server.py` and then the `flask_demo.py` python files, we have a Flask API server that can handle multiple requests to insert documents into a vector index and respond to user queries!
To support some functionality in the frontend, I've adjusted what some responses look like from the Flask API, as well as added some functionality to keep track of which documents are stored in the index (LlamaIndex doesn't currently support this in a user-friendly way, but we can augment it ourselves!). Lastly, I had to add CORS support to the server using the `Flask-cors` python package.
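If you want to add the same CORS support to your own copy, the typical usage of that package is just a couple of lines (a sketch; the finished scripts in the repo contain the final version):
```python
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # allow cross-origin requests, e.g. from the React dev server
```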
Check out the complete `flask_demo.py` and `index_server.py` scripts in the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) for the final minor changes, the `requirements.txt` file, and a sample `Dockerfile` to help with deployment.
## React Frontend
Generally, React and TypeScript are among the most popular libraries and languages for writing web apps today. This guide assumes you are familiar with how these tools work, because otherwise this guide would triple in length :smile:.
In the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react), the frontend code is organized inside of the `react_frontend` folder.
The most relevant part of the frontend will be the `src/apis` folder. This is where we make calls to the Flask server, supporting the following queries:
- `/query` -- make a query to the existing index
- `/uploadFile` -- upload a file to the flask server for insertion into the index
- `/getDocuments` -- list the current document titles and a portion of their texts
Using these three queries, we can build a robust frontend that allows users to upload and keep track of their files, query the index, and view the query response and information about which text nodes were used to form the response.
### fetchDocuments.tsx
This file contains the function to, you guessed it, fetch the list of current documents in the index. The code is as follows:
```typescript
export type Document = {
id: string;
text: string;
};
const fetchDocuments = async (): Promise<Document[]> => {
const response = await fetch("http://localhost:5601/getDocuments", {
mode: "cors",
});
if (!response.ok) {
return [];
}
const documentList = (await response.json()) as Document[];
return documentList;
};
```
As you can see, we make a query to the Flask server (here, it assumes running on localhost). Notice that we need to include the `mode: 'cors'` option, as we are making an external request.
Then, we check if the response was ok, and if so, get the response json and return it. Here, the response json is a list of `Document` objects that are defined in the same file.
### queryIndex.tsx
This file sends the user query to the flask server, and gets the response back, as well as details about which nodes in our index provided the response.
```typescript
export type ResponseSources = {
text: string;
doc_id: string;
start: number;
end: number;
similarity: number;
};
export type QueryResponse = {
text: string;
sources: ResponseSources[];
};
const queryIndex = async (query: string): Promise<QueryResponse> => {
const queryURL = new URL("http://localhost:5601/query?text=1");
queryURL.searchParams.append("text", query);
const response = await fetch(queryURL, { mode: "cors" });
if (!response.ok) {
return { text: "Error in query", sources: [] };
}
const queryResponse = (await response.json()) as QueryResponse;
return queryResponse;
};
export default queryIndex;
```
This is similar to the `fetchDocuments.tsx` file, with the main difference being we include the query text as a parameter in the URL. Then, we check if the response is ok and return it with the appropriate typescript type.
### insertDocument.tsx
Probably the most complex API call is uploading a document. The function here accepts a file object and constructs a `POST` request using `FormData`.
The actual response text is not used in the app but could be utilized to provide some user feedback on whether the file failed to upload or not.
```typescript
const insertDocument = async (file: File) => {
const formData = new FormData();
formData.append("file", file);
formData.append("filename_as_doc_id", "true");
const response = await fetch("http://localhost:5601/uploadFile", {
mode: "cors",
method: "POST",
body: formData,
});
  const responseText = await response.text();
return responseText;
};
export default insertDocument;
```
### All the Other Frontend Good-ness
And that pretty much wraps up the frontend portion! The rest of the react frontend code is some pretty basic react components, and my best attempt to make it look at least a little nice :smile:.
I encourage you to read the rest of the [codebase](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/react_frontend) and submit any PRs for improvements!
## Conclusion
This guide has covered a ton of information. We went from a basic "Hello World" Flask server written in python, to a fully functioning LlamaIndex powered backend and how to connect that to a frontend application.
As you can see, we can easily augment and wrap the services provided by LlamaIndex (like the little external document tracker) to help provide a good user experience on the frontend.
You could take this and add many features (multi-index/user support, saving objects into S3, adding a Pinecone vector server, etc.). And when you build an app after reading this, be sure to share the final result in the Discord! Good Luck! :muscle: |
182 | d4157c1a-a595-4350-9ba4-63e0e92e2984 | Full-Stack Web Application | https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/index | true | llama_index | # Full-Stack Web Application
LlamaIndex can be integrated into a downstream full-stack web application. It can be used in a backend server (such as Flask), packaged into a Docker container, and/or directly used in a framework such as Streamlit.
We provide tutorials and resources to help you get started in this area:
- [Fullstack Application Guide](./fullstack_app_guide.md) shows you how to build an app with LlamaIndex as an API and a TypeScript+React frontend
- [Fullstack Application with Delphic](./fullstack_with_delphic.md) walks you through using LlamaIndex with a production-ready web app starter template called Delphic.
- The [LlamaIndex Starter Pack](https://github.com/logan-markewich/llama_index_starter_pack) provides very basic flask, streamlit, and docker examples for LlamaIndex. |