Anchor Links #1021

Status: Open · kartik-gupta-ij wants to merge 105 commits into master (changes shown from 101 commits).

Commits (105, all by kartik-gupta-ij on Jul 12, 2024):
9ce6788  Anchor Text to Add Link 3
0f15ee9  4
6ae49b5  5
3866e8e  6
19682af  7
d70a8d7  8,9
8dba5f2  10,11
a8ca4b9  12
2b86c47  13
d310beb  14
43a93a4  15
3a253f4  16
8ea4a32  17
ef3d1c6  18
34d07a2  19
20e9a13  20
48fad15  21
77f8d91  22
f057d1d  24
63a1102  25
f574f80  26
ccf8214  27
fe050c3  28
ca686c8  29
3fe73b2  30
470e360  31
39e5c2f  32
84a72a3  33
aca10fb  34
c52c775  35
248cf78  36
9ec6612  37
d3b0379  38
58ee87c  39
5884735  40
5dec687  41
c72eeca  42
9c8778c  43
563a99d  44
28e7bdb  45
8e58f0e  46
5760f24  47
67de4e2  48
e5dc4d1  49
97c919b  50
e048691  52
5ee7126  53
8d299fc  54
590e6ed  55
6062a0b  56
345c185  57
c862307  58
b6022a5  59
31dd0ff  60
f5cf9f5  61
078aa9a  62
316ae94  63
0690d7e  64
2dd86d8  65
d8637cd  66
8ecad3a  67
94ec37f  68
97fe802  69
8e5842c  70
b4b6b9f  71
adbbfda  72
de28707  74
5bdd5c6  75
1be6949  76
7b8b604  77
1b491c5  78
f9e2f98  79
8a49689  80
29b007b  81
d110fbe  82
5e86ed4  83
a07f991  84
dd67443  85
dc14c20  86
2839b43  88
5ede13c  89
39d621d  90
ce3ee33  91
0a9896e  92
6af3545  93
ebf4183  94
5e36696  95
b182b6e  96
6dfcfbe  97
463b9ac  98
b9ee0c0  99
aea08dc  100
bf3dbe2  101
e03e18f  102
41de5f9  103
d6003c5  104
a69aa8b  105
925dd6c  106
f10f451  fix
8b4e645  fix : Absolute URLs to relative URLs
b724a99  Update using-qdrant-and-langchain.md
d7740da  Update using-qdrant-and-langchain.md
14c4883  Update vector-image-search-rag-vector-space-talk-008.md
fb7b8f4  fix
4750802  removing outdated tutorial links and non related articles links
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/binary-quantization-openai.md
@@ -40,7 +40,7 @@ You can also try out these techniques as described in [Binary Quantization OpenA

## New OpenAI embeddings: performance and changes

- As the technology of embedding models has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).
+ As the technology of [embedding models](https://qdrant.tech/articles/fastembed/) has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).
(timvisee marked this conversation as resolved.)

These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL).

@@ -118,7 +118,7 @@ For those exploring the integration of text embedding models with Qdrant, it's c

1. **Model Name**: Signifying the specific text embedding model variant, such as "text-embedding-3-large" or "text-embedding-3-small". This distinction correlates with the model's capacity, with "large" models offering more detailed embeddings at the cost of increased computational resources.

- 2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant.
+ 2. **Dimensions**: This refers to the size of the [vector embeddings](/articles/what-are-embeddings/) produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant.

Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results.

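For context on this hunk, a minimal sketch of how the two parameters discussed above combine with a Qdrant collection, assuming the `openai` and `qdrant-client` Python packages (collection name, point ID, and payload are illustrative):

```python
from openai import OpenAI
from qdrant_client import QdrantClient, models

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
qdrant = QdrantClient(url="http://localhost:6333")

# Ask the "large" model for 1024-dimensional vectors instead of the full 3072.
response = openai_client.embeddings.create(
    model="text-embedding-3-large",
    input=["Binary quantization pairs well with high-dimensional embeddings."],
    dimensions=1024,
)
vector = response.data[0].embedding

# The collection's vector size must match the dimensions requested above.
qdrant.create_collection(
    collection_name="openai_embeddings",
    vectors_config=models.VectorParams(size=1024, distance=models.Distance.COSINE),
)
qdrant.upsert(
    collection_name="openai_embeddings",
    points=[models.PointStruct(id=1, vector=vector, payload={"source": "demo"})],
)
```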
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/binary-quantization.md
@@ -20,7 +20,7 @@ keywords:

Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points.

- Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x.
+ Our results are dramatic: Using BQ will reduce your [memory consumption](/articles/memory-consumption/) and improve retrieval speeds by up to 40x.

(Reviewer comment: "not related")

As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation.

@@ -30,7 +30,7 @@ The rest of this article will cover:
3. Benchmark analysis and usage recommendations

## What is Binary Quantization?
- Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison.
+ Binary quantization (BQ) converts any [vector embedding](/articles/what-are-embeddings/) of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison.

![What is binary quantization](/articles_data/binary-quantization/bq-2.png)

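As a reference for the feature this article introduces, enabling BQ with the Python client might look like the sketch below (collection name, vector size, and the placeholder query vector are illustrative):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Keep full-precision vectors on disk; the compact binary index stays in RAM.
client.create_collection(
    collection_name="bq_demo",
    vectors_config=models.VectorParams(
        size=1536,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# The speed/recall tradeoff is chosen at search time via rescoring parameters.
hits = client.search(
    collection_name="bq_demo",
    query_vector=[0.0] * 1536,  # placeholder query vector
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(rescore=True, oversampling=2.0),
    ),
)
```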
2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/chatgpt-plugin.md
@@ -36,7 +36,7 @@ These plugins, designed to enhance the model's performance, serve as modular ext
that seamlessly interface with the core system. By adding a knowledge base plugin to
ChatGPT, we can effectively provide the AI with a curated, trustworthy source of
information, ensuring that the generated content is more accurate and relevant. Qdrant
- may act as a vector database where all the facts will be stored and served to the model
+ may act as a [vector database](/qdrant-vector-database/) where all the facts will be stored and served to the model
upon request.

If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/data-privacy.md
@@ -17,7 +17,7 @@ keywords: # Keywords for SEO
- Enterprise Data Compliance
---

- Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. Moreover, strong security measures become critical to safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as "[embedding inversion attack](https://arxiv.org/abs/2004.00053)," where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves.
+ Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. Moreover, strong security measures become critical to safeguarding this data. If the data stored in a [vector databases](/qdrant-vector-database/) is not secured, it may open a vulnerability known as "[embedding inversion attack](https://arxiv.org/abs/2004.00053)," where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves.

Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.**

@@ -234,7 +234,7 @@ Data governance varies by country, especially for global organizations dealing w

To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring.

- Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure.
+ Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the [Qdrant Cloud](/cloud/) UI while keeping the databases within their infrastructure.

With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control.

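On the JWT point above: with the Python client, a scoped token is passed the same way as a static API key; a minimal sketch (URL and token are placeholders):

```python
from qdrant_client import QdrantClient

# The JWT below stands in for a token issued with restricted RBAC claims;
# it is a placeholder, not a real credential.
client = QdrantClient(
    url="https://qdrant.example.internal:6333",
    api_key="<scoped-jwt>",
)

print(client.get_collections())
```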
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/dedicated-service.md
@@ -21,7 +21,7 @@ keywords:
Ever since the data science community discovered that vector search significantly improves LLM answers,
various vendors and enthusiasts have been arguing over the proper solutions to store embeddings.

- Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases.
+ Some say storing them in a specialized engine (aka [vector databases](/qdrant-vector-database/)) is better. Others say that it's enough to use plugins for existing databases.

Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/).

@@ -72,7 +72,7 @@ Those priorities lead to different architectural decisions that are not reproduc

###### Having a dedicated vector database requires duplication of data.

- By their very nature, vector embeddings are derivatives of the primary source data.
+ By their very nature, [vector embeddings](/articles/what-are-embeddings/) are derivatives of the primary source data.

In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source.

2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/discovery-search.md
@@ -103,4 +103,4 @@ This way you can give refreshing recommendations, while still being in control b
- Discovery search is a powerful tool for controlled exploration in vector spaces.
Context, positive, and negative vectors guide search parameters and refine results.
- Real-world applications include multimodal search, diverse recommendations, and context-driven exploration.
- - Ready to experience the power of Qdrant's Discovery search for yourself? [Try a free demo](https://qdrant.tech/contact-us/) now and unlock the full potential of controlled exploration in vector spaces!
+ - Ready to experience the power of Qdrant's Discovery search for yourself? [Try a free demo](/contact-us/) now and unlock the full potential of controlled exploration in vector spaces!
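For readers of the summary above, a minimal sketch of a Discovery query with the Python client, assuming `qdrant-client` and an existing `products` collection (point IDs are placeholders):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Aim at a target point while context pairs constrain the explored region.
results = client.discover(
    collection_name="products",
    target=42,  # "find me something like this"
    context=[models.ContextExamplePair(positive=7, negative=13)],
    limit=10,
)
```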
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/fastembed.md
@@ -139,7 +139,7 @@ If anything changes, you'll see a new version number pop up, like going from 0.0

## Using FastEmbed with Qdrant

- Qdrant is a Vector Store, offering comprehensive, efficient, and scalable [enterprise solutions](https://qdrant.tech/enterprise-solutions/) for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant [vector database solution](https://qdrant.tech/qdrant-vector-database/), or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on.
+ Qdrant is a Vector Store, offering comprehensive, efficient, and scalable [enterprise solutions](/enterprise-solutions/) for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant [vector database solution](/qdrant-vector-database/), or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on.

The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant.

@@ -229,7 +229,7 @@ Behind the scenes, we first convert the query_text to the embedding and use tha

By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks.

- Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like [binary quantization](https://qdrant.tech/articles/binary-quantization/) and [scalar quantization](https://qdrant.tech/articles/scalar-quantization/) for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency.
+ Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like [binary quantization](/articles/binary-quantization/) and [scalar quantization](/articles/scalar-quantization/) for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency.

## Summary

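A minimal end-to-end sketch of the workflow described in this hunk, assuming `qdrant-client` installed with its `fastembed` extra (collection name and documents are illustrative):

```python
from qdrant_client import QdrantClient

# Requires: pip install "qdrant-client[fastembed]"
client = QdrantClient(":memory:")  # throwaway in-memory instance

# add() embeds the documents with the default FastEmbed model and upserts them.
client.add(
    collection_name="demo",
    documents=[
        "Qdrant is a vector database.",
        "FastEmbed generates embeddings on CPU.",
    ],
)

# query() embeds query_text with the same model, then searches the collection.
hits = client.query(collection_name="demo", query_text="What is Qdrant?", limit=1)
print(hits[0].document)
```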
2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/hybrid-search.md
@@ -132,7 +132,7 @@ into account. Those models are usually trained on clickstream data of a real app
very business-specific. Thus, we'll not cover them right now, as there is a more general approach. We will
use so-called **cross-encoder models**.

- Cross-encoder takes a pair of texts and predicts the similarity of them. Unlike embedding models,
+ Cross-encoder takes a pair of texts and predicts the similarity of them. Unlike [embedding models](/articles/fastembed/),

(Reviewer comment: "not related")

cross-encoders do not compress text into vector, but uses interactions between individual tokens of both
texts. In general, they are more powerful than both BM25 and vector search, but they are also way slower.
That makes it feasible to use cross-encoders only for re-ranking of some preselected candidates.
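For context on the re-ranking step discussed in this hunk, a sketch with the `sentence-transformers` `CrossEncoder` class (model choice and candidate texts are illustrative):

```python
from sentence_transformers import CrossEncoder

# Score a handful of candidates preselected by BM25 or vector search.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how does binary quantization work"
candidates = [
    "Binary quantization converts float vectors into bits.",
    "Our office dog enjoys long walks.",
]

# predict() scores each (query, passage) pair; higher means more relevant.
scores = reranker.predict([(query, passage) for passage in candidates])
reranked = sorted(zip(candidates, scores), key=lambda item: item[1], reverse=True)
print(reranked[0][0])
```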
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/langchain-integration.md
@@ -31,8 +31,8 @@ provides unified interfaces to different libraries, so you can avoid writing boi
It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context
is provided. They have some general knowledge but cannot guarantee to produce a valid answer consistently. Thus, it is better to provide some facts we
know are actual, so it can just choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. [Vector database,
- such as Qdrant](https://qdrant.tech/), is of great help here, as their ability to perform a [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) over a huge knowledge base is crucial to preselect some possibly valid
- documents, so they can be provided into the LLM. That's also one of the **chains** implemented in [LangChain](https://qdrant.tech/documentation/frameworks/langchain/), which is called `VectorDBQA`. And Qdrant got
+ such as Qdrant](https://qdrant.tech/), is of great help here, as their ability to perform a [semantic search](/documentation/tutorials/search-beginners/) over a huge knowledge base is crucial to preselect some possibly valid
+ documents, so they can be provided into the LLM. That's also one of the **chains** implemented in [LangChain](/documentation/frameworks/langchain/), which is called `VectorDBQA`. And Qdrant got
integrated with the library, so it might be used to build it effortlessly.

### The Two-Model Approach
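A sketch of the chain this hunk refers to; recent LangChain releases superseded `VectorDBQA` with `RetrievalQA`, so the package and class names below assume a current LangChain install and are illustrative:

```python
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAI, OpenAIEmbeddings

# Build a Qdrant-backed store of facts the LLM should rely on.
store = Qdrant.from_texts(
    ["Qdrant supports payload-based filtering."],
    embedding=OpenAIEmbeddings(),
    location=":memory:",
    collection_name="facts",
)

# The retriever preselects relevant documents; the LLM answers from them.
chain = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=store.as_retriever())
print(chain.invoke({"query": "What filtering does Qdrant support?"}))
```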
4 changes: 2 additions & 2 deletions qdrant-landing/content/articles/memory-consumption.md
@@ -215,7 +215,7 @@ But let's first see how much RAM we need to serve 1 million vectors and then we

### Vectors and HNSW graph stored using MMAP

- In the third experiment, we tested how well our system performs when vectors and [HNSW](https://qdrant.tech/articles/filtrable-hnsw/) graph are stored using the memory-mapped files.
+ In the third experiment, we tested how well our system performs when vectors and [HNSW](/articles/filtrable-hnsw/) graph are stored using the memory-mapped files.
Create collection with:

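(The original `http` snippet is collapsed in this diff view; an equivalent sketch with the Python client, where the vector size and collection name are assumptions rather than the article's exact values:)

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Both the vectors and the HNSW graph live in memory-mapped files.
client.create_collection(
    collection_name="benchmark",
    vectors_config=models.VectorParams(
        size=768,
        distance=models.Distance.COSINE,
        on_disk=True,  # vectors in mmap storage
    ),
    hnsw_config=models.HnswConfigDiff(on_disk=True),  # graph in mmap storage
)
```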
@@ -355,7 +355,7 @@ Which might be an interesting option to serve large datasets with low search lat

## Conclusion

- In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested to learn more about Qdrant, [book a demo today](https://qdrant.tech/contact-us/)!
+ In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested to learn more about Qdrant, [book a demo today](/contact-us/)!

We are eager to learn more about how you use Qdrant in your projects, what challenges you face, and how we can help you solve them.
Please feel free to join our [Discord](https://qdrant.to/discord) and share your experience with us!
2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/multitenancy.md
@@ -20,7 +20,7 @@ keywords:

We are seeing the topics of [multitenancy](/documentation/guides/multiple-partitions/) and [distributed deployment](/documentation/guides/distributed_deployment/#sharding) pop-up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup.

- Whether you are building a bank fraud-detection system, [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product.
+ Whether you are building a bank fraud-detection system, [RAG](/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product.
In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs.

## Multitenancy & custom sharding with Qdrant
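For context on the multitenancy pattern referenced here, a sketch of payload-based partitioning with the Python client (the `group_id` field name, vectors, and IDs are illustrative):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# One shared collection; each point is tagged with its tenant.
client.upsert(
    collection_name="shared",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.1, 0.2, 0.3, 0.4],
            payload={"group_id": "tenant_a"},
        ),
    ],
)

# Every query is filtered down to the calling tenant's partition.
hits = client.search(
    collection_name="shared",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="group_id", match=models.MatchValue(value="tenant_a")
            )
        ]
    ),
    limit=5,
)
```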
2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/neural-search-tutorial.md
@@ -92,7 +92,7 @@ Transformers is not the only architecture suitable for neural search, but for ou

We will use a model called `all-MiniLM-L6-v2`.
This model is an all-round model tuned for many use-cases. Trained on a large and diverse dataset of over 1 billion training pairs.
- It is optimized for low memory consumption and fast inference.
+ It is optimized for low [memory consumption](/articles/memory-consumption/) and fast inference.

The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing).

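For reference, loading and using the model named above takes a few lines with `sentence-transformers` (the sample text is illustrative):

```python
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 produces 384-dimensional sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["Build a neural search service in a weekend."])
print(vectors.shape)  # (1, 384)
```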
2 changes: 1 addition & 1 deletion qdrant-landing/content/articles/new-recommendation-api.md
@@ -24,7 +24,7 @@ Here, we'll discuss some internals and show how they may be used in practice.
### Recap of the old recommendations API

The previous [Recommendation API](/documentation/concepts/search/#recommendation-api) in Qdrant came with some limitations. First of all, it was required to pass vector IDs for
- both positive and negative example points. If you wanted to use vector embeddings directly, you had to either create a new point
+ both positive and negative example points. If you wanted to use [vector embeddings](/articles/what-are-embeddings/) directly, you had to either create a new point
in a collection or mimic the behaviour of the Recommendation API by using the [Search API](/documentation/concepts/search/#search-api).
Moreover, in the previous releases of Qdrant, you were always asked to provide at least one positive example. This requirement
was based on the algorithm used to combine multiple samples into a single query vector. It was a simple, yet effective approach.
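A sketch of the reworked API described in this hunk, assuming `qdrant-client` 1.6 or newer (collection name, IDs, and the raw vector are illustrative):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Positive and negative examples may now be stored point IDs or raw vectors,
# and the best_score strategy lifts the old "at least one positive" requirement.
results = client.recommend(
    collection_name="tracks",
    positive=[17, [0.2, 0.1, 0.9, 0.7]],  # a point ID plus a raw embedding
    negative=[99],
    strategy=models.RecommendStrategy.BEST_SCORE,
    limit=10,
)
```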