Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also includes supporting code for evaluation and parameter tuning. See The FAISS Library paper and the Faiss documentation.
You'll need to install langchain-community with pip install -qU langchain-community to use this integration.
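For example, in a notebook cell (adding faiss-cpu for the FAISS bindings and langchain-openai for the embeddings used below; both package choices are assumptions for this walkthrough, so swap in faiss-gpu or another embeddings provider if you prefer):

```python
%pip install -qU langchain-community langchain-openai faiss-cpu
```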
This notebook shows how to use functionality related to the FAISS vector database using asyncio.
LangChain implements both synchronous and asynchronous vector store functions.
See the synchronous version here.
We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.
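A minimal async setup sketch, assuming an OpenAI API key and a few toy documents with a page metadata field (reused later in the filtering section); the document contents are placeholders:

```python
import getpass
import os

from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

embeddings = OpenAIEmbeddings()

# Toy documents with a `page` metadata field (used again in the filtering examples).
docs = [
    Document(page_content=f"foo bar baz {i}", metadata={"page": i})
    for i in range(1, 5)
]

# Build the FAISS index asynchronously.
db = await FAISS.afrom_documents(docs, embeddings)

query = "foo"
results = await db.asimilarity_search(query)
print(results[0].page_content)
```

Top-level await works inside a Jupyter notebook; in a plain script, wrap these calls in an async function and run it with asyncio.run.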
Similarity Search with score
There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance; therefore, a lower score is better.
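A sketch of the async variant, asimilarity_search_with_score, reusing the db built in the setup cell above:

```python
# Each result is a (Document, score) tuple; the score is L2 distance, so lower is better.
docs_and_scores = await db.asimilarity_search_with_score("foo")
for doc, score in docs_and_scores:
    print(score, doc.page_content)
```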
It is also possible to search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.
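A sketch using the async counterparts, embedding the query ourselves first (db and embeddings come from the setup cell above):

```python
# Embed the query string, then search with the raw vector instead of the text.
embedding_vector = await embeddings.aembed_query("foo")
docs = await db.asimilarity_search_by_vector(embedding_vector)
print(docs[0].page_content)
```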
Saving and loading
You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.
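A save/load sketch; the faiss_index folder name is arbitrary, and recent langchain-community releases require allow_dangerous_deserialization=True to opt in to loading the pickled docstore:

```python
db.save_local("faiss_index")

# Opt in to pickle deserialization when loading the saved index back.
new_db = FAISS.load_local(
    "faiss_index", embeddings, allow_dangerous_deserialization=True
)

docs = await new_db.asimilarity_search("foo")
print(docs[0].page_content)
```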
Serializing and De-Serializing to bytes

You can pickle the FAISS index with these functions. If you use an embeddings model that is 90 MB in size (sentence-transformers/all-MiniLM-L6-v2 or any other model), the resultant pickle size would be more than 90 MB, because the size of the model is also included in the overall size. To overcome this, use the functions below. These functions only serialize the FAISS index, so the size is much smaller. This can be helpful if you wish to store the index in a database like SQL.
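A sketch of the bytes round trip; again, allow_dangerous_deserialization=True may be required depending on your langchain-community version:

```python
# Serialize only the FAISS index and docstore (not the embedding model) to bytes.
pkl = db.serialize_to_bytes()

# Rebuild the store from those bytes, e.g. after reading them back out of a database.
restored_db = FAISS.deserialize_from_bytes(
    embeddings=embeddings,
    serialized=pkl,
    allow_dangerous_deserialization=True,
)
```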
Merging

You can also merge two FAISS vectorstores.
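A merging sketch with two throwaway indexes built from single texts:

```python
db1 = await FAISS.afrom_texts(["foo"], embeddings)
db2 = await FAISS.afrom_texts(["bar"], embeddings)

# merge_from mutates db1 in place; it now holds the documents of both stores.
db1.merge_from(db2)
print(db1.docstore._dict)
```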
Similarity Search with filtering

The FAISS vectorstore can also support filtering. Since FAISS does not natively support filtering, we have to do it manually: we first fetch more results than k and then filter them. You can filter the documents based on metadata. You can also set the fetch_k parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:
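First, an unfiltered query over the toy documents from the setup cell (each carries a page metadata field), so there is something to compare the filtered calls against:

```python
results_with_scores = await db.asimilarity_search_with_score("foo")
for doc, score in results_with_scores:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
```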
Now we make the same query call, but we filter for only page = 1:
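The same call with a metadata filter, as a sketch:

```python
# Keep only documents whose metadata has page == 1.
results_with_scores = await db.asimilarity_search_with_score("foo", filter=dict(page=1))
for doc, score in results_with_scores:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
```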
The same thing can be done with max_marginal_relevance_search as well.
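A sketch with the async MMR variant, amax_marginal_relevance_search, and the same filter:

```python
results = await db.amax_marginal_relevance_search("foo", filter=dict(page=1))
for doc in results:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")
```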
Here is an example of how to set the fetch_k parameter when calling similarity_search. Usually you would want fetch_k >> k. This is because the fetch_k parameter is the number of documents that will be fetched before filtering. If you set fetch_k to a low number, you might not get enough documents to filter from.
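A sketch where only one result is returned but four candidates are fetched before the filter is applied:

```python
# fetch_k candidates are retrieved first; the filter then narrows them down to k results.
results = await db.asimilarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
print(results[0].metadata)
```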
The following operators are supported for metadata filtering:

$eq (equals)
$neq (not equals)
$gt (greater than)
$lt (less than)
$gte (greater than or equal)
$lte (less than or equal)
$in (membership in list)
$nin (not in list)
$and (all conditions must match)
$or (any condition must match)
$not (negation of condition)
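A sketch of one operator-style filter on the toy index, assuming a langchain-community version recent enough to support the operators listed above:

```python
# Keep only documents whose `page` metadata is 1 or 2.
results = await db.asimilarity_search("foo", filter={"page": {"$in": [1, 2]}})
for doc in results:
    print(doc.metadata)
```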