TellusR Understands Your Users
By choosing Solr for your search, you have already made a good choice. The Lucene engine in Solr is fast. But in addition to being fast, you want your search to be good. And that's where TellusR's semantic understanding comes into play.
TellusR has a modern take on search. Search terms are often typed quickly and inaccurately, so they have to be understood for the search to work as intended. TellusR accomplishes this by constructing linguistic contexts for your search terms.
The TellusR NLP module uses a neural network model to build word vectors during your Solr indexing. These vectors represent the meaning of words through their proximity to one another. This makes it possible to determine which words occur in similar contexts: the vectors are representations of the word meanings, and similar vectors mean similar meanings.
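To make the idea of "similar vectors mean similar meanings" concrete, here is a minimal sketch of how closeness between word vectors is typically measured with cosine similarity. The vectors and words below are toy values invented for illustration; they are not TellusR's actual model output, which uses vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity: close to 1.0 means the vectors point the same way,
    i.e. the words appear in similar contexts."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional vectors (hypothetical values for illustration only).
vec_duchess = [0.90, 0.10, 0.30]   # "Duchess of Windsor"
vec_wallis  = [0.85, 0.15, 0.35]   # "Wallis Simpson"
vec_banana  = [0.10, 0.90, 0.05]   # an unrelated word

print(cosine_similarity(vec_duchess, vec_wallis))  # high: similar contexts
print(cosine_similarity(vec_duchess, vec_banana))  # much lower
```

A search engine using such vectors can rank documents by how close their context vectors are to the query's vector, instead of requiring an exact keyword match.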
As an example, consider the two expressions Wallis Simpson and the Duchess of Windsor. They in fact refer to the same person, so a search for either term should return more or less the same result set (try this with a web search and see for yourself!). The TellusR indexing creates word vectors that represent the meanings of these notions, so when someone searches for Wallis Simpson, TellusR can find documents where the name Wallis Simpson isn't even mentioned, but where the context suggests the subject is indeed her. Do not confuse this with the use of synonyms: this is fully automated, and it will connect many notions that you didn't even know were relevant to each other.
So TellusR actually finds more documents in your searches, since it does not exclude those that lack the exact words used in a query. With the TellusR NLP module, you get
- Fewer zero-hit searches
- Fewer abandoned searches
- Higher conversion rates