Elasticsearch Configuration Used During Indexing

Navigate to: Marketing & Site Search > Site Search


Search functionality in the Znode Webstore is built on top of Elasticsearch. See Working With Elasticsearch for more information.


When creating the site search index, Znode uses the following Elasticsearch features and settings (a configuration sketch follows the list):

  1. Trigger the standard Elasticsearch analyzer for all text fields:
    1. Character Filter: Used to replace the special characters underscore, dash, hyphen, and minus with an empty string.
    2. Tokenizer: Used to break text into words, creating the tokens (individual words) that are matched against search keywords to return the desired search results.
  2. Then apply all of the following token filters, in this sequence:
    1. Lowercase: Used to change token text to lowercase.

      Ex: If the text entered is “The Quick Brown Fox”, it will be converted to “the quick brown fox”.

    2. Synonyms: Used to maintain a list of alternative words, saved in the synonym list. When a user enters a search keyword, it is compared against the synonym list to find matches before the search results are displayed.
      Ex: If the text entered is “Brown Fox” and the synonym list has Fox = “coyote, dingo”, then the search results will display products that contain fox, coyote, or dingo.

    3. Stopwords: Used to ignore any stop words found in the search keyword. Stop words are usually words like “to”, “I”, “has”, “the”, “be”, “or”, etc. They are filler words that help sentences flow better but provide very little context on their own.
      Ex: If the text entered is “The Quick Brown Fox”, it will be treated as “quick brown fox”.

    4. Ngram (for Product Name, SKU, and Brand): Used to break these fields into smaller tokens (Znode uses a minimum length of 1 and a maximum length of 40).
      Ex: If the text entered is “quick fox”, then the tokens generated would be [ q, qu, qui, quic, quick, quick f, quick fo, quick fox, u, ui, uic, uick, uick f, uick fo, uick fox, i, ic, ick, … ]

    5. Shingle: Used when tokens are to be generated by concatenating adjacent tokens (words). Shingles are used to help speed up phrase queries.
      Ex: If the text entered is “The Quick Brown Fox”, then the tokens generated would be “the”, “the quick”, “quick”, “quick brown”, “brown fox”, and “fox”.
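
Taken together, the steps above amount to a custom analyzer defined in the index settings. The sketch below shows roughly what such a definition can look like. It is illustrative only: the index name znode_catalog_index, the analyzer and filter names, and the synonym entry are placeholder assumptions rather than the actual names and values Znode uses, and in a real configuration the ngram filter would only be included in the analyzer applied to Product Name, SKU, and Brand.

  PUT /znode_catalog_index
  {
    "settings": {
      "index": {
        "max_ngram_diff": 39
      },
      "analysis": {
        "char_filter": {
          "special_char_strip": {
            "type": "pattern_replace",
            "pattern": "[-_]",
            "replacement": ""
          }
        },
        "filter": {
          "synonym_filter": {
            "type": "synonym",
            "synonyms": [ "fox, coyote, dingo" ]
          },
          "stopword_filter": {
            "type": "stop",
            "stopwords": "_english_"
          },
          "ngram_filter": {
            "type": "ngram",
            "min_gram": 1,
            "max_gram": 40
          },
          "shingle_filter": {
            "type": "shingle"
          }
        },
        "analyzer": {
          "znode_text_analyzer": {
            "type": "custom",
            "char_filter": [ "special_char_strip" ],
            "tokenizer": "standard",
            "filter": [ "lowercase", "synonym_filter", "stopword_filter", "ngram_filter", "shingle_filter" ]
          }
        }
      }
    }
  }

A few notes on the sketch: the character filter is shown as a pattern_replace filter stripping “_” and “-” (dash, hyphen, and minus all collapse to the same “-” character here); the ngram range of 1 to 40 exceeds Elasticsearch's default allowed difference between min_gram and max_gram, so the index-level max_ngram_diff setting is raised; and the stopword list is shown as the built-in _english_ list, which is also an assumption.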

Please see Elasticsearch Definitions for help understanding the terms on this page.
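
To verify how a given piece of text will be tokenized, the Elasticsearch _analyze API can be run against the index. The request below is a sketch that reuses the illustrative index and analyzer names from the configuration sketch above.

  POST /znode_catalog_index/_analyze
  {
    "analyzer": "znode_text_analyzer",
    "text": "The Quick Brown Fox"
  }

The response lists every token the chain produces, which is a quick way to confirm the lowercase, synonym, stopword, ngram, and shingle behavior described on this page.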
