Merge branch 'main' into containerize-doc-building
kolchfa-aws committed Sep 26, 2024
2 parents db53097 + a6a3396 commit 81677c6
Showing 109 changed files with 3,642 additions and 346 deletions.
4 changes: 3 additions & 1 deletion .github/vale/styles/Vocab/OpenSearch/Words/accept.txt
@@ -77,8 +77,9 @@ Levenshtein
[Mm]ultivalued
[Mm]ultiword
[Nn]amespace
[Oo]versamples?
[Oo]ffline
[Oo]nboarding
[Oo]versamples?
pebibyte
p\d{2}
[Pp]erformant
@@ -105,6 +106,7 @@ p\d{2}
[Rr]eprovision(ed|ing)?
[Rr]erank(er|ed|ing)?
[Rr]epo
[Rr]escor(e|ed|ing)?
[Rr]ewriter
[Rr]ollout
[Rr]ollup
1 change: 1 addition & 0 deletions _about/version-history.md
@@ -9,6 +9,7 @@ permalink: /version-history/

OpenSearch version | Release highlights | Release date
:--- | :--- | :---
[2.17.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.17.0.md) | Includes disk-optimized vector search, binary quantization, and byte vector encoding in k-NN. Adds asynchronous batch ingestion for ML tasks. Provides search and query performance enhancements and a new custom trace source in trace analytics. Includes application-based configuration templates. For a full list of release highlights, see the Release Notes. | 17 September 2024
[2.16.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.16.0.md) | Includes built-in byte vector quantization and binary vector support in k-NN. Adds new sort, split, and ML inference search processors for search pipelines. Provides application-based configuration templates and additional plugins to integrate multiple data sources in OpenSearch Dashboards. Includes an experimental Batch Predict ML Commons API. For a full list of release highlights, see the Release Notes. | 06 August 2024
[2.15.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.15.0.md) | Includes parallel ingestion processing, SIMD support for exact search, and the ability to disable doc values for the k-NN field. Adds wildcard and derived field types. Improves performance for single-cardinality aggregations, rolling upgrades to remote-backed clusters, and more metrics for top N queries. For a full list of release highlights, see the Release Notes. | 25 June 2024
[2.14.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.14.0.md) | Includes performance improvements to hybrid search and date histogram queries with multi-range traversal, ML model integration within the Ingest API, semantic cache for LangChain applications, low-level vector query interface for neural sparse queries, and improved k-NN search filtering. Provides an experimental tiered cache feature. For a full list of release highlights, see the Release Notes. | 14 May 2024
24 changes: 23 additions & 1 deletion _analyzers/index.md
@@ -170,6 +170,28 @@ The response provides information about the analyzers for each field:
}
```

## Normalizers

Tokenization divides text into individual terms, but it does not address variations in token forms. Normalization resolves these issues by converting tokens into a standard format. This ensures that similar terms are matched appropriately, even if they are not identical.

### Normalization techniques

The following normalization techniques can help address variations in token forms:

1. **Case normalization**: Converts all tokens to lowercase to ensure case-insensitive matching. For example, "Hello" is normalized to "hello".

2. **Stemming**: Reduces words to their root form. For instance, "cars" is stemmed to "car", and "running" is stemmed to "run".

3. **Synonym handling**: Treats synonyms as equivalent. For example, "jogging" and "running" can be indexed under a common term, such as "run".

### Normalization examples

A search for `Hello` will match documents containing `hello` because of case normalization.

A search for `cars` will also match documents containing `car` because of stemming.

A query for `running` can retrieve documents containing `jogging` using synonym handling.

Normalization ensures that searches are not limited to exact term matches, allowing for more relevant results. For instance, a search for `Cars running` can be normalized to match `car run`.
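
The following is a minimal sketch showing how these techniques can be combined in a custom analyzer. The index name (`normalization_example`), analyzer name, and synonym list are illustrative; the `lowercase`, `synonym`, and `porter_stem` token filters are built into OpenSearch:

```json
PUT /normalization_example
{
  "settings": {
    "analysis": {
      "filter": {
        "running_synonyms": {
          "type": "synonym",
          "synonyms": [ "jogging, running => run" ]
        }
      },
      "analyzer": {
        "normalized_english": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "running_synonyms", "porter_stem" ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

You can test the analyzer by using the `_analyze` API:

```json
POST /normalization_example/_analyze
{
  "analyzer": "normalized_english",
  "text": "Cars Running"
}
```
{% include copy-curl.html %}

Because the `lowercase` filter runs before the synonym and stemming filters, this request should return the tokens `car` and `run`, so a query analyzed the same way matches regardless of case, inflection, or synonym choice.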

## Next steps

- Learn more about specifying [index analyzers]({{site.url}}{{site.baseurl}}/analyzers/index-analyzers/) and [search analyzers]({{site.url}}{{site.baseurl}}/analyzers/search-analyzers/).
160 changes: 160 additions & 0 deletions _analyzers/token-filters/cjk-bigram.md
@@ -0,0 +1,160 @@
---
layout: default
title: CJK bigram
parent: Token filters
nav_order: 30
---

# CJK bigram token filter

The `cjk_bigram` token filter is designed specifically for processing East Asian languages, such as Chinese, Japanese, and Korean (CJK), which typically don't use spaces to separate words. A bigram is a sequence of two adjacent elements in a string of tokens, which can be characters or words. For CJK languages, bigrams help approximate word boundaries and capture significant character pairs that can convey meaning.


## Parameters

The `cjk_bigram` token filter can be configured with two parameters: `ignored_scripts` and `output_unigrams`.

### `ignored_scripts`

The `cjk_bigram` token filter ignores all non-CJK scripts (writing systems such as Latin or Cyrillic) and tokenizes only CJK text into bigrams. Use this option to specify CJK scripts to be ignored. This option takes the following valid values:

- `han`: The `han` script processes Han characters. [Han characters](https://simple.wikipedia.org/wiki/Chinese_characters) are logograms used in the written languages of China, Japan, and Korea. The filter can help with text processing tasks like tokenizing, normalizing, or stemming text written in Chinese, Japanese kanji, or Korean Hanja.

- `hangul`: The `hangul` script processes Hangul characters, which are unique to the Korean language and do not exist in other East Asian scripts.

- `hiragana`: The `hiragana` script processes hiragana, one of the two syllabaries used in the Japanese writing system.
Hiragana is typically used for native Japanese words, grammatical elements, and certain forms of punctuation.

- `katakana`: The `katakana` script processes katakana, the other Japanese syllabary.
Katakana is mainly used for foreign loanwords, onomatopoeia, scientific names, and certain Japanese words.


### `output_unigrams`

When set to `true`, this parameter causes the filter to output both unigrams (single characters) and bigrams. Default is `false`.

## Example

The following example request creates a new index named `cjk_bigram_example` and defines an analyzer with a `cjk_bigram` filter whose `ignored_scripts` parameter is set to `katakana`:

```json
PUT /cjk_bigram_example
{
"settings": {
"analysis": {
"analyzer": {
"cjk_bigrams_no_katakana": {
"tokenizer": "standard",
"filter": [ "cjk_bigrams_no_katakana_filter" ]
}
},
"filter": {
"cjk_bigrams_no_katakana_filter": {
"type": "cjk_bigram",
"ignored_scripts": [
"katakana"
],
"output_unigrams": true
}
}
}
}
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_bigram_example/_analyze
{
"analyzer": "cjk_bigrams_no_katakana",
"text": "東京タワーに行く"
}
```
{% include copy-curl.html %}

Sample text: "東京タワーに行く"

- 東京 (kanji for "Tokyo")
- タワー (katakana for "Tower")
- に行く (hiragana and kanji for "go to")

The response contains the generated tokens:

```json
{
"tokens": [
{
"token": "",
"start_offset": 0,
"end_offset": 1,
"type": "<SINGLE>",
"position": 0
},
{
"token": "東京",
"start_offset": 0,
"end_offset": 2,
"type": "<DOUBLE>",
"position": 0,
"positionLength": 2
},
{
"token": "",
"start_offset": 1,
"end_offset": 2,
"type": "<SINGLE>",
"position": 1
},
{
"token": "タワー",
"start_offset": 2,
"end_offset": 5,
"type": "<KATAKANA>",
"position": 2
},
{
"token": "",
"start_offset": 5,
"end_offset": 6,
"type": "<SINGLE>",
"position": 3
},
{
"token": "に行",
"start_offset": 5,
"end_offset": 7,
"type": "<DOUBLE>",
"position": 3,
"positionLength": 2
},
{
"token": "",
"start_offset": 6,
"end_offset": 7,
"type": "<SINGLE>",
"position": 4
},
{
"token": "行く",
"start_offset": 6,
"end_offset": 8,
"type": "<DOUBLE>",
"position": 4,
"positionLength": 2
},
{
"token": "",
"start_offset": 7,
"end_offset": 8,
"type": "<SINGLE>",
"position": 5
}
]
}
```


96 changes: 96 additions & 0 deletions _analyzers/token-filters/cjk-width.md
@@ -0,0 +1,96 @@
---
layout: default
title: CJK width
parent: Token filters
nav_order: 40
---

# CJK width token filter

The `cjk_width` token filter normalizes Chinese, Japanese, and Korean (CJK) tokens by converting full-width ASCII characters to their standard (half-width) ASCII equivalents and half-width katakana characters to their full-width equivalents.

### Converting full-width ASCII characters

In CJK texts, ASCII characters (such as letters and numbers) can appear in full-width form, occupying the space of two half-width characters. Full-width ASCII characters are typically used in East Asian typography for alignment with the width of CJK characters. However, for the purposes of indexing and searching, these full-width characters need to be normalized to their standard (half-width) ASCII equivalents.

The following example illustrates ASCII character normalization:

```
Full-width: ＡＢＣＤＥ １２３４５
Normalized (half-width): ABCDE 12345
```

### Converting half-width katakana characters

The `cjk_width` token filter converts half-width katakana characters to their full-width counterparts, which are the standard form used in Japanese text. This normalization, illustrated in the following example, is important for consistency in text processing and searching:


```
Half-width katakana: ｶﾀｶﾅ
Normalized (full-width) katakana: カタカナ
```

## Example

The following example request creates a new index named `cjk_width_example_index` and defines an analyzer with the `cjk_width` filter:

```json
PUT /cjk_width_example_index
{
"settings": {
"analysis": {
"analyzer": {
"cjk_width_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": ["cjk_width"]
}
}
}
}
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_width_example_index/_analyze
{
"analyzer": "cjk_width_analyzer",
"text": "Tokyo 2024 カタカナ"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
"tokens": [
{
"token": "Tokyo",
"start_offset": 0,
"end_offset": 5,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "2024",
"start_offset": 6,
"end_offset": 10,
"type": "<NUM>",
"position": 1
},
{
"token": "カタカナ",
"start_offset": 11,
"end_offset": 15,
"type": "<KATAKANA>",
"position": 2
}
]
}
```
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -16,7 +16,7 @@ Token filter | Underlying Lucene token filter| Description
[`apostrophe`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/apostrophe/) | [ApostropheFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.html) | In each token containing an apostrophe, the `apostrophe` token filter removes the apostrophe itself and all characters following it.
[`asciifolding`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/asciifolding/) | [ASCIIFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html) | Converts alphabetic, numeric, and symbolic characters.
`cjk_bigram` | [CJKBigramFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html) | Forms bigrams of Chinese, Japanese, and Korean (CJK) tokens.
`cjk_width` | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into the equivalent basic Latin characters. <br> - Folds half-width Katakana character variants into the equivalent Kana characters.
[`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into their equivalent basic Latin characters. <br> - Folds half-width katakana character variants into their equivalent kana characters.
`classic` | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms.
`common_grams` | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
`conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
