Which Vector Database Should I Use? A Comparison Cheatsheet

Navid Rezaei
Jul 29, 2023


Semantic search and retrieval-augmented generation (RAG) applications need to store large numbers of embedding vectors (n-dimensional vectors representing pieces of data) and retrieve the most relevant ones with low latency. This requirement has driven the emergence of many new vector databases. Choosing one and relying on it can have long-term impacts on, and create dependencies in, your system. Ideally, we choose a vector database that scales well while keeping cost and latency low. Last but not least, the chosen vector database must adhere to the compliance requirements of the target application.
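At its core, the retrieval step is nearest-neighbor search over stored embeddings. Here is a minimal, purely illustrative sketch in Python: brute-force cosine similarity over a toy in-memory store, with no index. Real vector databases use approximate indexes (such as HNSW) to keep latency low at scale; the store, document ids, and vectors below are made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=2):
    """Return the ids of the k stored vectors most similar to query."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy embedding store: document id -> embedding vector (hypothetical data).
store = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 1.0, 0.0],
}

print(top_k([1.0, 0.05, 0.0], store, k=2))  # doc1 and doc2 are closest
```

A brute-force scan like this is O(n) per query, which is exactly why dedicated vector databases exist: they trade a little recall for much lower latency via approximate indexes.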

In this post, we summarize and compare the well-known vector databases to make choosing between them easier. The comparison is an ongoing effort, and the right choice depends on your specific use case.

Screenshot of TensorFlow embedding projector

The comparison table is as follows. It is not a comprehensive comparison and may have errors. Please let me know if anything needs to be updated. Last update: Jul. 30, 2023.

The vector databases compared are: Weaviate, Pinecone, pgvector, Milvus, MongoDB, Qdrant, and Chroma. The benchmark data is from ANN Benchmarks.

The comparison is not exhaustive, so I am sharing this Google Sheet so that others can contribute too: https://docs.google.com/spreadsheets/d/1oAeF4Q7ILxxfInGJ8vTsBck3-2U9VV8idDf3hJOozNw/edit?usp=sharing.


Choosing a database to store vectors is an important decision that can affect your architecture, compliance, and future costs. There are two general categories of vector databases: 1) an independent vector database and 2) vector search within your current database. An example of the first is Pinecone; an example of the second is pgvector on PostgreSQL.
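As a concrete illustration of the second category, here is a sketch of what vector search looks like with pgvector on PostgreSQL. The table and column names are hypothetical; the `vector(...)` type and the `<->` (L2 distance) operator are part of the pgvector extension. The SQL is shown as Python strings so the shape of the queries is clear without a live database; in a real setup you would run them through a driver such as psycopg.

```python
# Sketch of pgvector-style SQL (hypothetical table and column names).
# The vector(...) type and the <-> distance operator come from the
# pgvector extension; a real application would execute these via psycopg.

create_table_sql = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text,
    embedding vector(1536)  -- dimensionality of your embedding model
);
"""

# Nearest-neighbor query: order rows by L2 distance to a query
# embedding and keep the top 5 matches.
search_sql = """
SELECT id, content
FROM documents
ORDER BY embedding <-> %(query_embedding)s
LIMIT 5;
"""
```

The appeal of this approach is that the embeddings live next to the rows they describe, so ordinary SQL filters and joins compose directly with the similarity search.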

Independent vector databases require that you maintain the embeddings separately from the original data. This architecture can offer added benefits, but one should decide whether they are worth the extra complexity and cost.

Another solution is to store the embeddings where your data already resides. This reduces the complexity of the architecture and avoids extra compliance concerns. It can also be a cost-effective option. However, these solutions should be evaluated in terms of query throughput (queries per second, QPS).

Choosing between these two categories, a new vector database or vector search in your current database, depends on application-specific factors. Hopefully, this collaborative comparison table can help with your decision!

Please follow me on Medium or social media to stay in touch:

Twitter | LinkedIn | Medium