NEST by Martijn Laarman and contributors

<PackageReference Include="NEST" Version="0.11.2" />

.NET API 668,672 bytes

 Nest

Namespace with 466 public types

 Classes

 AliasParams
 AllFieldMapping
 AnalysisDescriptor
 AnalysisSettings
 AnalyzeParams
 AnalyzeResponse
 AnalyzerFieldMapping
 AnalyzerFieldMapping`1
 AnalyzeToken
 AsciiFoldingTokenFilter A token filter of type asciifolding that converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the “Basic Latin” Unicode block) into their ASCII equivalents, if one exists.
 AttachmentMapping
 AttachmentMappingDescriptor`1
 BaseFacetDescriptor`1
 BaseFilter
 BaseParameters
 BaseQuery
 BaseResponse
 BinaryMapping
 BinaryMappingDescriptor`1
 BoolBaseQueryDescriptor
 BooleanMapping
 BooleanMappingDescriptor`1
 BoolFilterDescriptor`1
 BoolQueryDescriptor`1
 BoostFieldMapping
 BoostFieldMapping`1
 BoostingQueryDescriptor`1
 BulkCreateDescriptor`1
 BulkCreateResponseItem
 BulkDeleteDescriptor`1
 BulkDeleteResponseItem
 BulkDescriptor
 BulkIndexDescriptor`1
 BulkIndexResponseItem
 BulkParameters`1
 BulkResponse
 BulkUpdateDescriptor`2
 BulkUpdateResponseItem
 ClusterStateResponse
 Connection
 ConnectionError
 ConnectionSettings
 ConnectionStatus
 ConstantScoreQueryDescriptor`1
 CorePropertiesDescriptor`1
 CountResponse
 CovariantDictionary`1
 CovariantItem`1
 CreateIndexDescriptor
 CreateWarmerDescriptor
 CustomAnalyzer An analyzer of type custom that allows combining a Tokenizer with zero or more Token Filters, and zero or more Char Filters. The custom analyzer accepts a logical/registered name of the tokenizer to use, and a list of logical/registered names of token filters. (A usage sketch follows this list.)
 CustomBoostFactorQueryDescriptor`1
 CustomFiltersScoreDescriptor`1
 CustomScoreQueryDescriptor`1
 DateEntry
 DateHistogramFacet
 DateHistogramFacetDescriptor`1
 DateMapping
 DateMappingDescriptor`1
 DateRange
 DateRangeFacet
 DeleteByQueryParameters
 DeleteParameters
 DeleteResponse
 DictionaryDecompounderTokenFilter
 DismaxQueryDescriptor`1
 DocStats
 DslException
 DynamicTemplate
 DynamicTemplateDescriptor`1
 DynamicTemplatesDescriptor`1
 EdgeNGramTokenFilter A token filter of type edgeNGram.
 EdgeNGramTokenizer A tokenizer of type edgeNGram.
 ElasticClient
 ElasticPropertyAttribute
 ElasticSearchException
 ElasticSearchVersionInfo
 ElasticTypeAttribute
 ElisionTokenFilter A token filter which removes elisions. For example, “l’avion” (the plane) will be tokenized as “avion” (plane).
 ExistsFilter
 Explanation
 ExplanationDetail
 ExternalFieldDeclarationDescriptor`1
 FacetDescriptorsBucket`1
 FileSystemStats
 FilterDescriptor
 FilterDescriptor`1
 FilteredQueryDescriptor`1
 FilterFacet
 FluentDictionary`2
 FlushStats
 FuzzyDateQueryDescriptor`1
 FuzzyLikeThisDescriptor`1
 FuzzyNumericQueryDescriptor`1
 FuzzyQueryDescriptor`1
 GenericMapping Sometimes you need a generic type mapping, e.g. when using dynamic templates in order to specify "{dynamic_template}" as the type, or if you have some plugin that exposes a new type.
 GenericMappingDescriptor`1
 GeoBoundingBoxFilter
 GeoDistanceFacet
 GeoDistanceFacetDescriptor
 GeoDistanceFacetDescriptor`1
 GeoDistanceFilterDescriptor
 GeoDistanceRange
 GeoDistanceRangeFilterDescriptor
 GeoPointMapping
 GeoPointMappingDescriptor`1
 GeoPolygonFilter
 GeoShapeMapping
 GeoShapeMappingDescriptor`1
 GetDescriptor`1
 GetResponse`1
 GetStats
 GetWarmerDescriptor
 GlobalStats
 GlobalStatsResponse
 HasChildFilterDescriptor`1
 HasChildQueryDescriptor`1
 HasParentFilterDescriptor`1
 HasParentQueryDescriptor`1
 HealthParams
 HealthResponse
 Highlight
 HighlightDescriptor`1
 HighlightDocumentDictionary
 HighlightFieldDescriptor`1
 HighlightFieldDictionary
 HistogramFacet
 HistogramFacetDescriptor`1
 HistogramItem
 Hit`1
 HitsMetaData`1
 HtmlStripCharFilter A char filter of type html_strip stripping out HTML elements from an analyzed text.
 HttpStats
 HyphenationDecompounderTokenFilter
 IdFieldMapping
 IdsFilter
 IdsQuery
 IndexExistsResponse
 IndexFieldMapping
 IndexHealthStats
 IndexingStats
 IndexParameters
 IndexResponse
 IndexRoutingTable
 IndexSegment
 IndexSettings Writing these uses a custom converter that ignores the JSON props.
 IndexSettingsResponse
 IndicesOperationResponse
 IndicesQueryDescriptor`1
 IndicesResponse
 IndicesShardResponse
 InMemoryConnection
 IPMapping
 IPMappingDescriptor`1
 JVM
 KeywordAnalyzer An analyzer of type keyword that “tokenizes” an entire stream as a single token. This is useful for data like zip codes, IDs, and so on. Note, when using mapping definitions, it makes more sense to simply mark the field as not_analyzed.
 KeywordMarkerTokenFilter Protects words from being modified by stemmers. Must be placed before any stemming filters.
 KeywordTokenizer A tokenizer of type keyword that emits the entire input as a single output.
 KStemTokenFilter The kstem token filter is a high-performance filter for English. All terms must already be lowercased (use the lowercase filter) for this filter to work correctly.
 LanguageAnalyzer A set of analyzers aimed at analyzing specific language text.
 LengthTokenFilter A token filter of type length that removes words that are too long or too short for the stream.
 LetterTokenizer A tokenizer of type letter that divides text at non-letters. That’s to say, it defines tokens as maximal strings of adjacent letters. Note, this does a decent job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces.
 LimitFilter
 LowercaseTokenFilter A token filter of type lowercase that normalizes token text to lower case. Lowercase token filter supports Greek and Turkish lowercase token filters through the language parameter.
 LowercaseTokenizer A tokenizer of type lowercase that performs the function of Letter Tokenizer and Lower Case Token Filter together. It divides text at non-letters and converts them to lower case. While it is functionally equivalent to the combination of Letter Tokenizer and Lower Case Token Filter, there is a performance advantage to doing the two tasks at once, hence this (redundant) implementation.
 MappingCharFilter A char filter of type mapping that replaces characters of an analyzed text according to a given mapping.
 MatchAll
 MatchAllFilter
 MatchPhrasePrefixQueryDescriptor`1 A Query that matches documents containing a particular sequence of terms. It allows for prefix matches on the last term in the text.
 MatchPhraseQueryDescriptor`1 A Query that matches documents containing a particular sequence of terms. A PhraseQuery is built by QueryParser for input like "new york".
 MatchQueryDescriptor`1
 MergesStats
 MetadataIndexState
 MetadataState
 MissingFilter
 MoreLikeThisDescriptor`1
 MoreLikeThisQueryDescriptor`1
 MultiFieldMapping
 MultiFieldMappingDescriptor`1
 MultiGetDescriptor
 MultiGetHit`1
 MultiGetResponse
 MultiHit`1
 MultiMatchQueryDescriptor`1
 MultiSearchDescriptor
 MultiSearchResponse
 NestedElasticPropertyAttribute
 NestedFilterDescriptor`1
 NestedObjectMapping
 NestedObjectMappingDescriptor`2
 NestedQueryDescriptor`1
 NetworkStats
 NgramTokenFilter A token filter of type nGram.
 NGramTokenizer A tokenizer of type nGram.
 NodeInfo
 NodeInfoHTTP
 NodeInfoJVM
 NodeInfoJVMMemory
 NodeInfoMemory
 NodeInfoNetwork
 NodeInfoNetworkInterface
 NodeInfoOS
 NodeInfoOSCPU
 NodeInfoProcess
 NodeInfoResponse
 NodeInfoThreadPoolThreadInfo
 NodeInfoTransport
 NodeState
 NodeStats
 NodeStatsIndexes
 NodeStatsResponse
 NumberMapping
 NumberMappingDescriptor`1
 NumericRangeFilterDescriptor`1 Filters documents with fields that have values within a certain numeric range. Similar to the range filter, except that it works only with numeric values.
 ObjectMapping
 ObjectMappingDescriptor`2
 OptimizeParams
 OSStats
 ParentTypeMapping
 PartialFieldDescriptor`1
 PathHierarchyTokenizer The path_hierarchy tokenizer takes something like this: /something/something/else and produces the tokens: /something, /something/something, and /something/something/else.
 PatternAnalyzer An analyzer of type pattern that can flexibly separate text into terms via a regular expression.
 PatternReplaceTokenFilter
 PatternTokenizer A tokenizer of type pattern that can flexibly separate text into terms via a regular expression.
 PercolateDescriptor`1
 PercolateResponse
 PercolatorDescriptor`1
 PhoneticTokenFilter The phonetic token filter is provided as a plugin.
 PorterStemTokenFilter A token filter of type porterStem that transforms the token stream as per the Porter stemming algorithm.
 Prefix
 ProcessStats
 PropertiesDescriptor`1
 PutWarmerDescriptor
 QueryDescriptor
 QueryDescriptor`1
 QueryFacet
 QueryResponse`1
 QueryStringDescriptor`1
 Range
 Range`1
 RangeFacet
 RangeFacetDescriptor`2
 RangeFilterDescriptor`1
 RangeQueryDescriptor`1
 RawOrFilterDescriptor`1
 RawOrQueryDescriptor`1
 RefreshStats
 RegisterPercolateResponse
 ReverseTokenFilter A token filter of type reverse that simply reverses the tokens.
 RootObjectMapping
 RootObjectMappingDescriptor`1
 RoutingFieldMapping
 RoutingFieldMapping`1
 RoutingNodesState
 RoutingQueryPathDescriptor
 RoutingQueryPathDescriptor`1
 RoutingShard
 RoutingTableState
 ScriptFilterDescriptor A filter that allows defining scripts as filters, e.g. "doc['num1'].value > 1" (used in the search sketch after this list).
 SearchDescriptor`1
 SearchStats
 Segment
 SegmentsResponse
 SettingsOperationResponse
 ShardHealthStats
 ShardSegmentRouting
 ShardsMetaData
 ShardsSegment
 ShingleTokenFilter A token filter of type shingle that constructs shingles (token n-grams) from a token stream. In other words, it creates combinations of tokens as a single token.
 SimilaritySettings
 SimpleAnalyzer An analyzer of type simple that is built using a Lower Case Tokenizer.
 SimpleBulkParameters
 SimpleGetDescriptor`1
 SingleMappingDescriptor`1
 SizeFieldMapping
 SnowballAnalyzer An analyzer of type snowball that uses the standard tokenizer, with standard filter, lowercase filter, stop filter, and snowball filter. The Snowball Analyzer is a stemming analyzer from Lucene that was originally based on the snowball project from snowball.tartarus.org.
 SnowballTokenFilter A filter that stems words using a Snowball-generated stemmer.
 SourceFieldMapping
 SpanFirstQueryDescriptor`1
 SpanNearQueryDescriptor`1
 SpanNotQueryDescriptor`1
 SpanOrQueryDescriptor`1
 SpanQueryDescriptor`1
 SpanTerm
 StandardAnalyzer An analyzer of type standard that is built using the Standard Tokenizer, with the Standard Token Filter, Lower Case Token Filter, and Stop Token Filter.
 StandardTokenFilter A token filter of type standard that normalizes tokens extracted with the Standard Tokenizer.
 StandardTokenizer A tokenizer of type standard providing a grammar-based tokenizer that is a good choice for most European-language documents. The tokenizer implements the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
 StatisticalFacet
 StatisticalFacetDescriptor`1
 Stats
 StatsContainer
 StatsParams
 StatsResponse
 StemmerTokenFilter A filter that stems words (similar to snowball, but with more options).
 StopAnalyzer An analyzer of type stop that is built using a Lower Case Tokenizer, with Stop Token Filter.
 StopTokenFilter A token filter of type stop that removes stop words from token streams.
 StoreStats
 StringMapping
 StringMappingDescriptor`1
 SynonymTokenFilter The synonym token filter makes it easy to handle synonyms during the analysis process.
 TemplateMapping
 TemplateMappingDescriptor
 TemplateResponse
 Term
 TermFacet
 TermFacetDescriptor`1
 TermItem
 TermsQueryDescriptor`1
 TermsStatsFacetDescriptor`1
 TermStats
 TermStatsFacet
 TextPhrasePrefixQueryDescriptor`1 A Query that matches documents containing a particular sequence of terms. It allows for prefix matches on the last term in the text.
 TextPhraseQueryDescriptor`1 A Query that matches documents containing a particular sequence of terms. A PhraseQuery is built by QueryParser for input like "new york". (A search sketch follows this list.)
 TextQueryDescriptor`1
 ThreadCountStats
 TimestampFieldMapping
 TimestampFieldMapping`1
 TopChildrenQueryDescriptor`1
 TransportStats
 TrimTokenFilter The trim token filter trims surrounding whitespace around a token.
 TruncateTokenFilter The truncate token filter can be used to truncate tokens to a specific length. This can come in handy with keyword-based (single-token) mapped fields that are used for sorting, in order to reduce memory usage.
 TtlFieldMapping
 TypeFieldMapping
 TypeFilter
 TypeMappingProperty
 TypeStats
 UaxEmailUrlTokenizer A tokenizer of type uax_url_email which works exactly like the standard tokenizer, but tokenizes emails and URLs as single tokens.
 UniqueTokenFilter The unique token filter can be used to only index unique tokens during analysis. By default it is applied to the entire token stream.
 UnregisterPercolateResponse
 UpdateDescriptor`2
 UpdateResponse
 UptimeStats
 ValidateQueryPathDescriptor
 ValidateQueryPathDescriptor`1
 ValidateResponse
 ValidationExplanation
 WarmerMapping
 WarmerResponse
 WhitespaceAnalyzer An analyzer of type whitespace that is built using a Whitespace Tokenizer.
 WhitespaceTokenizer A tokenizer of type whitespace that divides text at whitespace.
 Wildcard
 WordDelimiterTokenFilter Named word_delimiter, it splits words into subwords and performs optional transformations on subword groups.
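
A usage sketch for the analysis classes above: a CustomAnalyzer names its tokenizer and token filters by their registered names, and those registrations live in the analysis section of the IndexSettings used to create an index. The following is a minimal sketch, not verified against 0.11.2; the host/port ConnectionSettings constructor, the Analysis collection property names, the CustomAnalyzer property shapes, and the CreateIndex(string, IndexSettings) overload are all assumptions about this NEST generation.

    using System;
    using Nest;

    class AnalysisExample
    {
        static void Main()
        {
            // Assumption: host/port constructor plus SetDefaultIndex.
            var connection = new ConnectionSettings("localhost", 9200);
            connection.SetDefaultIndex("myindex");
            var client = new ElasticClient(connection);

            // A custom analyzer combines one tokenizer with a list of token
            // filters, all referenced by registered name (see CustomAnalyzer).
            var settings = new IndexSettings();
            // Assumption: Analysis/Analyzers collection property names.
            settings.Analysis.Analyzers.Add("folded", new CustomAnalyzer
            {
                Tokenizer = "standard",
                Filter = new[] { "lowercase", "asciifolding" }
            });

            // Assumption: a CreateIndex overload taking the settings object.
            client.CreateIndex("myindex", settings);
        }
    }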
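On the query side, SearchDescriptor`1 drives the fluent search API and the descriptor classes above back its individual calls. A hedged sketch of a phrase search combined with a script filter; TextPhrase, OnField, QueryString, and Script are method names inferred from TextPhraseQueryDescriptor`1 and ScriptFilterDescriptor and may differ in 0.11.2.

    using System;
    using Nest;

    class SearchExample
    {
        class Doc
        {
            public string Title { get; set; }
        }

        static void Main()
        {
            var connection = new ConnectionSettings("localhost", 9200);
            connection.SetDefaultIndex("myindex");
            var client = new ElasticClient(connection);

            // TextPhraseQueryDescriptor`1 backs the TextPhrase call;
            // ScriptFilterDescriptor backs the Script filter, reusing the
            // "doc['num1'].value > 1" example from its summary above.
            var result = client.Search<Doc>(s => s
                .From(0)
                .Size(10)
                .Query(q => q
                    .TextPhrase(tp => tp
                        .OnField(d => d.Title)
                        .QueryString("new york")))
                .Filter(f => f
                    .Script(sf => sf
                        .Script("doc['num1'].value > 1"))));
        }
    }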

 Enumerations

 ChildScoreType
 ClearCacheOptions
 ClusterStateInfo
 ComparatorType
 ConnectionErrorType
 Consistency
 DateHistogramComparatorType
 DateInterval
 DateRounding
 DistanceUnit
 EsRegexFlags
 FieldIndexOption
 FieldType Defines the type of field content.
 GeoDistance
 GeoExecution
 GeoOptimizeBBox
 GeoTree
 GeoUnit
 HealthLevel
 HealthStatus
 HistogramComparatorType
 IndexOptions
 Lang Scripting Language.
 Language Language types used for language analyzers.
 NamingConvention
 NestedScore
 NodeInfoStats
 NodesInfo
 NonStringIndexOption
 NumberType
 NumericType
 Occur
 Operator
 OpType
 ParentScoreType
 Replication
 RewriteMultiTerm
 ScoreMode
 SearchType
 SortOrder
 StatsInfo
 StoreOption
 TermsExecution
 TermsOrder
 TermsStatsComparatorType
 TermsStatsOrder
 TermVectorOption
 TextQueryType
 TopChildrenScore
 VersionType

 Static Classes

 ElasticMap Static helper for creating reusable RootObjectMappings.
 Filter
 Filter`1
 NameValueCollectionExtensions
 Query
 Query`1 Static query builder (see the sketch after this list).
 StringExtensions
 SuffixExtensions
 TypeExtensions
 UriExtensions
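
Query`1 and Filter`1 build BaseQuery/BaseFilter instances up front, outside a fluent call chain, so they can be composed and passed into a search later. A sketch assuming the query-DSL operator overloads (&& producing a bool query) that NEST advertised in this era; the exact Term signature and the Query(BaseQuery) overload are not verified against 0.11.2.

    using System;
    using Nest;

    class StaticHelpersExample
    {
        class Doc
        {
            public string Title { get; set; }
            public string Author { get; set; }
        }

        static void Main()
        {
            var client = new ElasticClient(new ConnectionSettings("localhost", 9200));

            // Assumption: && on two BaseQuery values yields a bool query
            // with both as must clauses.
            BaseQuery query = Query<Doc>.Term(d => d.Title, "nest")
                && Query<Doc>.Term(d => d.Author, "martijn");

            var result = client.Search<Doc>(s => s.Query(query));
        }
    }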

 Abstract Classes

 AnalyzerBase
 BaseBulkOperation
 BaseSimpleGetDescriptor
 BulkOperationResponseItem
 CharFilterBase
 CompoundWordTokenFilter Token filters that decompose compound words.
 Facet
 FacetItem
 FilterBase
 QueryPathDescriptor
 QueryPathDescriptor`1
 SearchDescriptorBase
 TokenFilterBase
 TokenizerBase

 Interfaces

 IAnalysisSetting
 IAnalyzeResponse
 IBulkResponse
 IClusterStateResponse
 IConnection
 IConnectionSettings
 ICountResponse
 ICovariantDictionary`1
 ICovariantItem`1
 IDeleteResponse
 IElasticClient
 IElasticCoreType
 IElasticPropertyAttribute
 IElasticPropertyVisitor
 IElasticSearchVersionInfo
 IElasticType
 IExternalFieldDeclarationDescriptor
 IFacet
 IFacet`1
 IFacetDescriptor
 IFacetDescriptor`1
 IGetResponse`1
 IGlobalStatsResponse
 IHealthResponse
 IHit`1
 IIndexExistsResponse
 IIndexResponse
 IIndexSettingsResponse
 IIndicesOperationResponse
 IIndicesResponse
 IIndicesShardResponse
 IMultiGetHit`1
 INodeInfoResponse
 INodeStatsResponse
 IPercolateResponse
 IQueryPathDescriptor
 IQueryResponse`1
 IRegisterPercolateResponse
 IResponse
 ISegmentsResponse
 ISettingsOperationResponse
 ISimpleUrlParameters
 ISpanQuery
 IStatsResponse
 ITemplateResponse
 IUnregisterPercolateResponse
 IUpdateResponse
 IUrlParameters
 IValidateResponse
 IWarmerResponse