Auto-Complete in MarkLogic Server

It Must Be Insanely Fast and Meet the Requirements

Auto-complete is hard to optimize because:

  1. we’re searching on partial words, but search engines are not optimized for that
  2. queries happen on each keystroke, so expect five to ten times your normal search query traffic in auto-complete queries
  3. if response times are slower than 100 milliseconds, users are on to the next keystroke before you show your suggestions

Another complication is that each implementation has different requirements for defining and matching the set of possible values for auto-complete suggestions.  For this article I’m assuming the most common requirements: given a defined set of possible values, we want to present as auto-complete suggestions all values which contain any word or phrase that starts with the text being entered by the user, while matching case-insensitively.
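To make that requirement concrete, here is a minimal sketch of the matching rule, independent of any search engine (Python, purely illustrative; the function and variable names are my own): a value is a suggestion when the typed text is a case-insensitive prefix of the value itself or of any trailing phrase within it.

```python
# Illustrative only: the matching rule we want, independent of any engine.
# A value matches when the typed text is a case-insensitive prefix of the
# value or of any trailing phrase ("Fargo Bank", "Bank", ...) within it.

def matches(value: str, typed: str) -> bool:
    words = value.split()
    typed = typed.lower()
    # the value itself plus every trailing phrase starting at a word boundary
    phrases = [" ".join(words[i:]) for i in range(len(words))]
    return any(p.lower().startswith(typed) for p in phrases)

values = ["Bank of America", "Wells Fargo Bank", "Target"]
suggestions = [v for v in values if matches(v, "ban")]
# both bank names match: "Bank of America" on its first word,
# "Wells Fargo Bank" on its third
```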

Until now, I had been using custom Java code to achieve scalable auto-complete that can match any word in the set of possible values, but I’ve now found a way to do it in MarkLogic Server.  Before I explain the new approach, let’s quickly review the solutions I tried which didn’t work.

Basic Element Range Indexes Don’t Meet the Requirements

Using MarkLogic Server, standard Element Range Indexes and/or search:suggest are powerful options for auto-complete, as mentioned in my previous blog post on the topic: Scalable Search Auto-Complete.  But they only meet a narrow set of use cases.  The main limitation is they can only do a “starts-with” match against the entire value.  They don’t support starts-with against any word in the set of possible values, which is a more common requirement.  So if a user starts typing “ban” they will get matches for “Bank of America” but not “Wells Fargo Bank”.

auto-complete mockup 1

XQuery example 1, “basic element-value match”, is fast but only meets a narrow set of use cases:

cts:element-value-match(xs:QName("COMPANY"), "ban*", "limit=50")

Wild-Cards aren’t Fast Enough

Wild-cards present a solution, but as in most technologies, wild-cards are one of the slowest query types.  While wild-card searching in MarkLogic Server is very fast, I have not been able to tune it to be scalable enough for auto-complete.  I’ve used 1-, 2-, and 3-character indexes to enable fast matching using wild-cards even on the first keystrokes.  Under load I’d like at least 90% of queries below 100 milliseconds to meet requirement #3 above, but I’m seeing wild-card queries take as much as 1 second at the 90% line.  I’ve also tried wild-card queries in fields and elements, with very similar results.

XQuery example 2, “wild-card search”, is not fast enough:

(cts:search(/DOCUMENT/COMPANY, "ban*")/text())[1 to 50]

fn:contains isn’t Fast Enough

XPath has fn:contains, which matches any sub-string and thus comes close to the required functionality, but I didn’t expect high performance because it has no index to optimize the query: it essentially conducts a full-text scan.  While I was impressed that MarkLogic’s implementation of fn:contains in vanilla XPath performs even better than wild-card queries, the need to be case-insensitive adds a serious performance hit, making this the slowest query of the bunch.

XQuery example 3, “contains”, is not fast enough:

(cts:element-values(xs:QName("COMPANY"))[contains(lower-case(.), "ban")])[1 to 50]
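For intuition on why no index helps here, the contains approach boils down to this kind of linear scan (Python, purely illustrative): every value gets lower-cased and searched on every keystroke, and note that plain substring matching also over-matches mid-word hits like “Urban”.

```python
# Illustrative: what a case-insensitive contains filter does under the hood.
# Every value is lower-cased and scanned on every keystroke; cost grows
# linearly with the number of values, so no index is doing any work.

def contains_scan(values, typed):
    typed = typed.lower()
    return [v for v in values if typed in v.lower()]  # touches every value

values = ["Bank of America", "Wells Fargo Bank", "Urban Outfitters"]
# note "Urban Outfitters" matches "ban" mid-word, which a word-start
# requirement would not want
print(contains_scan(values, "ban"))
```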

Expanded Word Queries are Very Slow Under Load

I tried some very creative solutions, including using wild-cards to match all possible words in the text, then using those words in a word query.  I expected this to be highly optimized because word queries use the search index and are very fast.  Either I did something wrong, or sending so many words in one “or” query slows things down.  Under load, these queries were even slower than the contains queries.

XQuery example 4, “expanded words query”, is not fast enough:

let $words := cts:element-word-match(xs:QName("COMPANY"), "ban*")
let $wordQuery := cts:or-query((for $word in $words return $word))
return (cts:search(/DOCUMENT/COMPANY, $wordQuery))[1 to 50]

Chunked Element Range Indexes Are the Answer

Luckily, I landed on a solution that works wonderfully.  It allows matching against any word in the values, and it performs very quickly: under 10 milliseconds with no load, and under 150 milliseconds with outrageously high load (100 concurrent threads in JMeter).  I create what I’ll call a “chunked” Element Range Index, which takes advantage of the indexes’ optimization for “starts-with” queries by creating a new value for each word in each value.  In order to associate the partial “chunked” values with the full value, I append the full value after a colon.  So for “Wells Fargo Bank” I would create three values: “Wells Fargo Bank:Wells Fargo Bank”, “Fargo Bank:Wells Fargo Bank”, and “Bank:Wells Fargo Bank”.  This way, when a user types “ban” and I search for values starting with “ban”, one of my matches will be “Bank:Wells Fargo Bank”; I then strip everything up to and including the colon and present the user with the match “Wells Fargo Bank”.

auto-complete mockup 1

XQuery example 5 creates the “chunked” values on which we can build a “chunked” Element Range Index:

(: This differs from tokenize because we want the whole trailing
 : string with each token :)
declare function local:chunkOnNonWordChars($value as xs:string, $fullValue as xs:string) {
  (: only shrink the value if it contains non-word character(s)
   : between word characters :)
  if ( matches($value, "^\w+\W+\w") ) then (
    (: remove the first word and the non-word characters that follow it :)
    let $smallerValue := replace($value, "^\w+\W+", "")
    return (
      concat($value, ":", $fullValue),
      local:chunkOnNonWordChars($smallerValue, $fullValue)
    )
  ) else (
    concat($value, ":", $fullValue)
  )
};

xdmp:document-insert("autocomplete-seed.xml", <xml>{
  for $value in cts:element-values(xs:QName("COMPANY"))
  return
    for $chunk in local:chunkOnNonWordChars($value, $value)
    return <AUTOCOMPLETE_COMPANY>{ $chunk }</AUTOCOMPLETE_COMPANY>
}</xml>)

XQuery example 6, “chunked element-value match”, is fast and meets the requirement to match any word from the values:

let $matches := cts:element-value-match(xs:QName("AUTOCOMPLETE_COMPANY"), "ban*", "limit=50")
let $clean_matches := for $chunk in $matches return replace($chunk, ".*:", "")
return distinct-values($clean_matches)
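To see why the chunked approach is fast, here is a small end-to-end simulation outside MarkLogic (Python, purely illustrative; a sorted list plus binary search stands in for the range index, and all names are my own): chunk generation, a prefix lookup bounded by two binary searches, and the colon-strip cleanup.

```python
import bisect
import re

# Illustrative simulation of the chunked range index: each value expands
# into "trailing phrase:full value" entries, kept sorted so a starts-with
# lookup is two binary searches instead of a scan over all values.

def chunks(value):
    # mirrors local:chunkOnNonWordChars: one entry per word start
    out, rest = [], value
    while True:
        out.append(rest + ":" + value)
        m = re.match(r"^\w+\W+(?=\w)", rest)
        if not m:
            return out
        rest = rest[m.end():]

values = ["Wells Fargo Bank", "Bank of America"]
index = sorted(c.lower() for v in values for c in chunks(v))
originals = {c.lower(): c for v in values for c in chunks(v)}  # for display

def suggest(typed, limit=50):
    typed = typed.lower()
    lo = bisect.bisect_left(index, typed)
    hi = bisect.bisect_right(index, typed + "\uffff")  # end of the prefix range
    hits = index[lo:hi][:limit]
    # strip everything up to the colon, as the XQuery replace() does
    return sorted({originals[h].split(":", 1)[1] for h in hits})

print(suggest("ban"))
```

The `"\uffff"` upper bound is the same trick a codepoint-ordered lexicon exploits: all strings sharing a prefix form one contiguous range in sorted order.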

Advice From an Expert

I had this article reviewed by Colleen Whitney, the MarkLogic engineer who wrote (and performance optimized) search:suggest, and she provided some tips compiled from multiple engineers on her team.  We can all use these when performance tuning our matches against element range indexes:

The major take-away is that on very large datasets, resolving matches is highly data-dependent. The factors (in order of importance) are:

  • Number of total values with the same prefix as the match pattern. (I think this is a key issue given your approach.)
  • Collation.
  • Average number of characters in each value. (Longer strings are slower to compare; also, if the beginnings of the strings are similar, comparison will be slower.)
  • Number of unique values.

Therefore, users can improve performance on large datasets in several ways:

  • Be selective about which index(es) drive suggestions.
  • Design interaction with the UI so that suggestions aren’t given until there are a minimum of 3 characters as input to the lexicon call.
  • Use codepoint collation on string range indexes.

For most use cases, I don’t think the data sets are large enough to require a user-experience limitation of only showing suggestions after 3 characters of input.  But given that Colleen’s tests were run on data sets with up to 22 million records, I’ll agree that with data sets that large we’ll most likely have to make some compromises on user experience.

Full-Disclosure on My Testing Approach

My testing approach matches my previous post, using geonames.org data (though about 85,000 cities loaded this time).  I’m also still running a set of queries as if users were typing each character of each city name.  My hardware is now a Dell E6510 with 4 GB RAM and Windows 7.

Here’s a snapshot of my JMeter results after one of my tests (only 30 threads looping 10 times each):

<a href="http://blogs.avalonconsult.com/files/2011/03/jmeter_results_2011-03-03_22-25.png"><img class="aligncenter size-full wp-image-37" src="http://blogs.avalonconsult.com/files/2011/03/jmeter_results_2011-03-03_22-25.png" alt="JMeter load test results screenshot" width="500" height="199" /></a>

The times above are in milliseconds.  You’ll notice that the median timings are acceptable (under 100 milliseconds) for most of the approaches, with a few outlier slow queries throwing off the averages, but the 90% line is where I look to make sure an approach will have acceptable timings most of the time.  For this test run, only element-value-match and chunked element-value-match keep 90% of the responses under 100 milliseconds.  As I increase the size of the data or increase the load, the performance gap widens and it becomes more obvious that chunked element-value-match is much more scalable.
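The median-versus-90%-line distinction is easy to see in code.  A sketch (Python, with made-up latencies) of how a handful of slow outliers can leave the median looking healthy while the 90% line blows the 100 ms budget:

```python
# Illustrative: median can look fine while the 90% line fails the budget.
# 100 fake response times: 85 fast queries plus 15 slow outliers.
latencies = [40] * 85 + [900] * 15

def percentile(data, pct):
    ordered = sorted(data)
    # nearest-rank percentile, similar to what JMeter's "90% Line" reports
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print(percentile(latencies, 50))  # median: 40 ms, looks acceptable
print(percentile(latencies, 90))  # 90% line: 900 ms, fails the budget
```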

Other Requirements to Meet

While this solution is exciting, there are many potential auto-complete requirements beyond the scope of this article, for example:

  • We may want to broaden the matches based on synonyms or spelling corrections.
  • Some user experiences may not need matches on the first few keystrokes, which significantly reduces the performance requirements.
  • We don’t want to query on every keystroke when keystrokes occur in quick succession.  Many JavaScript auto-complete libraries, like YUI Auto-complete, take care of this for us.
  • If we have a pre-defined order among the potential set of values, we would like to present our matches in the right order.
  • We might first show matches which start with the user’s text, then matches containing words or phrases which start with the user’s text.
  • We may want to group auto-complete suggestions, so we may need to create multiple chunked element range indexes, and run a query against each.  This would obviously add more overhead and require additional performance testing.
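The keystroke-throttling point in the list above can be sketched without a browser (Python, purely illustrative; the 200 ms quiet window is a hypothetical threshold, and the function name is my own): given timestamped keystrokes, fire a query only when the user pauses long enough, which is the same debouncing idea the JavaScript libraries implement.

```python
# Illustrative debounce logic: given (timestamp_ms, text) keystroke events,
# only issue a query when no newer keystroke arrives within the quiet window.
QUIET_MS = 200  # hypothetical threshold; real widgets make this configurable

def queries_to_send(keystrokes, quiet_ms=QUIET_MS):
    sent = []
    for i, (t, text) in enumerate(keystrokes):
        is_last = i == len(keystrokes) - 1
        if is_last or keystrokes[i + 1][0] - t >= quiet_ms:
            sent.append(text)  # the user paused long enough: query now
    return sent

# fast typing "b", "ba", "ban", a pause, then "bank":
# only 2 of the 4 keystrokes trigger queries
events = [(0, "b"), (80, "ba"), (160, "ban"), (600, "bank")]
print(queries_to_send(events))
```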

To meet many of these requirements, we’ll need to extend this approach.  For others, we’ll need to fall back to the approach of using custom Java code.

Conclusion

While the Java solution is still a viable alternative, it’s nice to now have a pure MarkLogic / XQuery solution for environments that can’t easily set up and manage a Java app server just for auto-complete.

About Avalon Consulting, LLC

Avalon Consulting, LLC implements Big Data, Web Presence, Content Publishing, and Enterprise Search solutions. We are the trusted partner to over one hundred clients, primarily Global 2000 companies, public agencies, and institutions of higher learning.

Headquartered in Plano, Texas, Avalon also maintains offices in Austin, Texas, Chicago, Illinois, and Washington, DC.

Comments

  1. Brajendu Kumar Das says:

    XQuery for Library Metadata: Solution
    Reference: http://developer.marklogic.com

    Here’s a query that catalogers will love! It leverages Library of Congress Classification numbers from MARCXML records to create subject facets. We used this for a discovery portal for digitized theology books. (This application is not live yet, but I’ll add the link when it is.)

    We got the idea for this query from Kurt Cagle’s example which makes an XQuery typeswitch expression more like a traditional switch. Each LC class is turned into a temporary element to get the typeswitch to work. The end result is a new class element with a value assigned for the subject facet.

    Also, this query was refactored and turned into a library module. I’d be glad to share it (though it was too long for a blog post).
    xquery version "1.0-ml";

    (: 2/12/11 Query for mapping LCC to subject categories using a typeswitch
     : expression. This updated version includes all of the LCC, not just the B class. :)

    declare default element namespace "http://digital.library.ptsem.edu/ia";
    declare namespace m = "http://www.loc.gov/MARC21/slim";

    for $meta in xdmp:directory("/xml/")/doc/metadata
    let $oldNode := $meta/class
    let $marcMeta := $meta/marc/m:record
    let $lcClass := ($marcMeta/m:datafield[@tag=("050", "055", "090")]/m:subfield[@code="a"])[1]
    let $bsClass :=
      for $classNum in $lcClass
      where fn:starts-with($lcClass, "BS")
      return
        if (fn:number(fn:substring($lcClass, 3, 4)) < 701) then "BS1"
        else if (fn:number(fn:substring($lcClass, 3, 4)) <= 1900) then "BS2"
        else if (fn:number(fn:substring($lcClass, 3, 4)) > 1900) then "BS3"
        else ()
    let $bvClass :=
      for $classNum in $lcClass
      where fn:starts-with($lcClass, "BV")
      return
        if (fn:number(fn:substring($lcClass, 3, 4)) < 590) then "BV1"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 2000) then "BV2"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 3750) then "BV3"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 4200) then "BV4"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 4485) then "BV5"
        else if (fn:number(fn:substring($lcClass, 3, 4)) <= 5099) then "BV6"
        else ()
    let $bxClass :=
      for $classNum in $lcClass
      where fn:starts-with($lcClass, "BX")
      return
        if (fn:number(fn:substring($lcClass, 3, 4)) < 100) then "BX1"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 800) then "BX2"
        else if (fn:number(fn:substring($lcClass, 3, 4)) < 4800) then "BX3"
        else if (fn:number(fn:substring($lcClass, 3, 4)) <= 9999) then "BX4"
        else ()
    let $bClass :=
      if (fn:matches($lcClass, "^B\D\d")) then fn:substring($lcClass, 1, 2) else ()
    let $azClass :=
      if ($lcClass) then fn:substring($lcClass, 1, 1) else ()
    let $newClass := ($bsClass, $bvClass, $bxClass, $bClass, $azClass, "Unclassified")[1]
    where $oldNode = "Unclassified"
    return
      for $class in $newClass
      let $c := element { $class } {}
      let $newNode :=
        typeswitch ($c)
          case $c as element(B) return <class>Philosophy</class>
          case $c as element(BC) return <class>Logic</class>
          case $c as element(BD) return <class>Philosophy</class>
          case $c as element(BF) return <class>Psychology</class>
          case $c as element(BH) return <class>Aesthetics</class>
          case $c as element(BJ) return <class>Ethics</class>
          case $c as element(BL) return <class>Religion</class>
          case $c as element(BM) return <class>Judaism</class>
          case $c as element(BP) return <class>Islam</class>
          case $c as element(BQ) return <class>Buddhism</class>
          case $c as element(BR) return <class>Church History</class>
          case $c as element(BS1) return <class>Bibles</class>
          case $c as element(BS2) return <class>Old Testament</class>
          case $c as element(BS3) return <class>New Testament</class>
          case $c as element(BT) return <class>Theology</class>
          case $c as element(BV1) return <class>Worship</class>
          case $c as element(BV2) return <class>Ecclesiology</class>
          case $c as element(BV3) return <class>Missions</class>
          case $c as element(BV4) return <class>Practical Theology</class>
          case $c as element(BV5) return <class>Preaching</class>
          case $c as element(BV6) return <class>Practical Theology</class>
          case $c as element(BX1) return <class>Ecumenism</class>
          case $c as element(BX2) return <class>Eastern churches</class>
          case $c as element(BX3) return <class>Catholic Church</class>
          case $c as element(BX4) return <class>Protestantism</class>
          case $c as element(A) return <class>General Works</class>
          case $c as element(C) return <class>History</class>
          case $c as element(D) return <class>History</class>
          case $c as element(E) return <class>History</class>
          case $c as element(F) return <class>History</class>
          case $c as element(G) return <class>Geography and Anthropology</class>
          case $c as element(H) return <class>Social Sciences</class>
          case $c as element(J) return <class>Political Science</class>
          case $c as element(K) return <class>Law</class>
          case $c as element(L) return <class>Education</class>
          case $c as element(M) return <class>Music</class>
          case $c as element(N) return <class>Fine Arts</class>
          case $c as element(P) return <class>Language and Literature</class>
          case $c as element(Q) return <class>Science</class>
          case $c as element(R) return <class>Medicine</class>
          case $c as element(S) return <class>Agriculture</class>
          case $c as element(T) return <class>Technology</class>
          case $c as element(U) return <class>Military Science</class>
          case $c as element(V) return <class>Military Science</class>
          case $c as element(Z) return <class>Bibliography</class>
          default $c return <class>Unclassified</class>
      return xdmp:node-replace($oldNode, $newNode)
