Uses of Class
com.azure.search.documents.indexes.models.LexicalTokenizerName
Packages that use LexicalTokenizerName

com.azure.search.documents.indexes.models - Package containing the data models for SearchServiceClient.
Uses of LexicalTokenizerName in com.azure.search.documents.indexes.models
Subclasses with type arguments of type LexicalTokenizerName in com.azure.search.documents.indexes.models

final class LexicalTokenizerName - Defines the names of all tokenizers supported by the search engine.

Fields in com.azure.search.documents.indexes.models declared as LexicalTokenizerName (all static final LexicalTokenizerName):

LexicalTokenizerName.CLASSIC - Grammar-based tokenizer that is suitable for processing most European-language documents.
LexicalTokenizerName.EDGE_NGRAM - Tokenizes the input from an edge into n-grams of the given size(s).
LexicalTokenizerName.KEYWORD - Emits the entire input as a single token.
LexicalTokenizerName.LETTER - Divides text at non-letters.
LexicalTokenizerName.LOWERCASE - Divides text at non-letters and converts them to lower case.
LexicalTokenizerName.MICROSOFT_LANGUAGE_STEMMING_TOKENIZER - Divides text using language-specific rules and reduces words to their base forms.
LexicalTokenizerName.MICROSOFT_LANGUAGE_TOKENIZER - Divides text using language-specific rules.
LexicalTokenizerName.NGRAM - Tokenizes the input into n-grams of the given size(s).
LexicalTokenizerName.PATH_HIERARCHY - Tokenizer for path-like hierarchies.
LexicalTokenizerName.PATTERN - Tokenizer that uses regex pattern matching to construct distinct tokens.
LexicalTokenizerName.STANDARD - Standard Lucene analyzer; composed of the standard tokenizer, lowercase filter and stop filter.
LexicalTokenizerName.UAX_URL_EMAIL - Tokenizes urls and emails as one token.
LexicalTokenizerName.WHITESPACE - Divides text at whitespace.

Methods in com.azure.search.documents.indexes.models that return LexicalTokenizerName

static LexicalTokenizerName LexicalTokenizerName.fromString(String name) - Creates or finds a LexicalTokenizerName from its string representation.
LexicalTokenizerName CustomAnalyzer.getTokenizer() - Get the tokenizer property: the name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words.
LexicalTokenizerName AnalyzeTextOptions.getTokenizerName() - Get the tokenizer name property: the name of the tokenizer to use to break the given text.

Methods in com.azure.search.documents.indexes.models that return types with arguments of type LexicalTokenizerName

static Collection<LexicalTokenizerName> LexicalTokenizerName.values() - Gets known LexicalTokenizerName values.

Constructors in com.azure.search.documents.indexes.models with parameters of type LexicalTokenizerName

AnalyzeTextOptions(String text, LexicalTokenizerName tokenizerName) - Constructor to AnalyzeTextOptions which takes tokenizerName.
CustomAnalyzer(String name, LexicalTokenizerName tokenizer) - Creates an instance of CustomAnalyzer class.
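As a brief sketch of how these members fit together (assuming the azure-search-documents artifact is on the classpath; the analyzer name "my-analyzer" and the sample text are made-up values for illustration):

```java
import java.util.Collection;

import com.azure.search.documents.indexes.models.AnalyzeTextOptions;
import com.azure.search.documents.indexes.models.CustomAnalyzer;
import com.azure.search.documents.indexes.models.LexicalTokenizerName;

public class LexicalTokenizerNameSketch {
    public static void main(String[] args) {
        // fromString creates or finds a LexicalTokenizerName from its
        // string representation; values() lists the known names.
        LexicalTokenizerName classic = LexicalTokenizerName.fromString("classic");
        Collection<LexicalTokenizerName> known = LexicalTokenizerName.values();
        System.out.println("classic is a known tokenizer name: " + known.contains(classic));

        // Both constructors above accept a LexicalTokenizerName, and the
        // corresponding getters return it.
        CustomAnalyzer analyzer =
                new CustomAnalyzer("my-analyzer", LexicalTokenizerName.WHITESPACE);
        AnalyzeTextOptions options =
                new AnalyzeTextOptions("Sample text to tokenize", LexicalTokenizerName.STANDARD);

        System.out.println("Analyzer tokenizer: " + analyzer.getTokenizer());
        System.out.println("Options tokenizer name: " + options.getTokenizerName());
    }
}
```

Since LexicalTokenizerName is an expandable enum rather than a Java enum, fromString also accepts names not in the predefined list, which lets the model track tokenizers added to the service after this SDK version shipped.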