# Overview
# Key Considerations
## Approaches
### Using [[Cosine Similarity]] #flashcard
- Cosine similarity is often used to search for similar vectors. It is the cosine of the angle between two vectors and ranges from -1 to 1, with values near 1 indicating greater similarity between two words. Negative values do not indicate opposites, though. (A short numeric sketch appears at the end of this note.)
- When choosing cosine similarity for vector search, consider whether the model was also trained using cosine similarity. In the US, word2vec might tell you espresso and cappuccino are practically identical; it is not a claim you would make in Italy. (source: [Don't use cosine similarity carelessly](https://p.migdal.pl/blog/2025/01/dont-use-cosine-similarity))
- If you're using cosine similarity for retrieval in a RAG application, a good approach is to then use a "semantic re-ranker" or "L2 re-ranking model" to re-rank the results to better match the user query.
- There's an example in pgvector-python that uses a cross-encoder model for re-ranking: [https://github.com/pgvector/pgvector-python/blob/master/exam...](https://github.com/pgvector/pgvector-python/blob/master/examples/hybrid_search/cross_encoder.py#L57) (see the re-ranking sketch at the end of this note). You can even use a language model for re-ranking, though it may not be as good as a model trained specifically for re-ranking purposes.
<!--ID: 1751507777206-->
# Pros
# Cons
# Use Cases
# Related Topics
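A short numeric sketch of cosine similarity using numpy; the vectors below are toy values standing in for real embeddings, not output from any particular model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between a and b; the result ranges from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for illustration only
a = np.array([0.2, 0.9, 0.4])
b = np.array([0.25, 0.8, 0.5])

print(cosine_similarity(a, b))   # close to 1: the vectors point in a similar direction
print(cosine_similarity(a, -b))  # close to -1, but this does not mean "opposite in meaning"
```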
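A minimal sketch of the retrieve-then-re-rank step described under Approaches, using the `CrossEncoder` class from `sentence-transformers` (the same library the pgvector-python example relies on). The model name, query, and candidate passages here are illustrative assumptions; in a real RAG pipeline the candidates would come from a first-pass cosine-similarity search (e.g. against pgvector).

```python
from sentence_transformers import CrossEncoder

query = "how do I make espresso?"
# Pretend these are the top-k results from a first-pass cosine-similarity search
candidates = [
    "Espresso is brewed by forcing hot water through finely ground coffee.",
    "Cappuccino is espresso with steamed milk and foam.",
    "Vector databases store embeddings for similarity search.",
]

# cross-encoder/ms-marco-MiniLM-L-6-v2 is one commonly used re-ranking model
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

# The cross-encoder scores each (query, passage) pair jointly, which is
# typically more accurate than comparing independently computed embeddings.
scores = model.predict([(query, passage) for passage in candidates])

# Re-order the candidates by relevance score, highest first
reranked = sorted(zip(scores, candidates), key=lambda x: x[0], reverse=True)
for score, passage in reranked:
    print(f"{score:.3f}  {passage}")
```

The design point: the bi-encoder (embedding) pass is cheap and scales to millions of vectors, while the cross-encoder is slower but sees query and passage together, so running it only over the small retrieved set buys accuracy without the cost of scoring the whole corpus.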