It Must Be Insanely Fast and Meet the Requirements
Auto-complete is hard to optimize because:
- we’re searching on partial words, but search engines are not optimized for that
- queries happen on each keystroke, so expect five to ten times your normal search query traffic in auto-complete queries
- if response times are slower than 100 milliseconds, users are on to the next keystroke before you show your suggestions
Another complication is that each implementation has different requirements for defining and matching the set of possible values for auto-complete suggestions. For this article I’m assuming the most common requirements: given a defined set of possible values, we want to present as auto-complete suggestions all values which contain any word or phrase that starts with the text being entered by the user, while matching case-insensitively.
Until now, I had been using custom Java to achieve scalable auto-complete that can match any word in the set of possible values, but I’ve now found a way to do it in MarkLogic server. But before I explain the new approach, let’s quickly review the solutions I tried which didn’t work.
Using MarkLogic Server, standard Element Range Indexes and/or search:suggest are powerful options for auto-complete, as mentioned in my previous blog post on the topic: Scalable Search Auto-Complete. But they only meet a narrow set of use cases. The main limitation is they can only do a “starts-with” match against the entire value. They don’t support starts-with against any word in the set of possible values, which is a more common requirement. So if a user starts typing “ban” they will get matches for “Bank of America” but not “Wells Fargo Bank”.
XQuery example 1, “basic element-value match”, is fast but only meets a narrow set of use cases:
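A minimal sketch of this kind of query follows; the `city` element name is an assumption for illustration, and the query presumes a string element range index on that element (the range index supplies the value lexicon that `cts:element-value-match` reads from):

```xquery
xquery version "1.0-ml";

(: Starts-with match against the ENTIRE value, served from the
   element range index on the (assumed) city element. Case handling
   follows the collation configured on that range index. :)
let $input := "ban"
return cts:element-value-match(
  xs:QName("city"),
  fn:concat($input, "*"),
  "limit=10")
```

Note that this only returns values whose whole string starts with “ban”, which is exactly the limitation described above.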
Wild-cards present a solution, but as in most search technologies, wild-card queries are among the slowest query types. While wild-card searching in MarkLogic Server is very fast, I have not been able to tune it to be scalable enough for auto-complete. I enabled the 1-, 2-, and 3-character indexes so that wild-cards can match quickly even on the first keystrokes. Under load I’d like at least 90% of queries to finish below 100 ms to meet the response-time requirement above, but I’m seeing wild-card queries take as much as 1 second at the 90% line. I’ve also tried wild-card queries in fields and elements, with very similar results.
XQuery example 2, “wild-card search”, is not fast enough:
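A sketch of such a query, again assuming an illustrative `city` element and the wild-card character indexes enabled on the database:

```xquery
xquery version "1.0-ml";

(: Wild-card on both sides so the input can match anywhere in the
   value, not just at the start. Relies on the 1-, 2-, and
   3-character indexes for speed on short inputs. :)
let $input := "ban"
return cts:search(fn:collection(),
  cts:element-value-query(
    xs:QName("city"),
    fn:concat("*", $input, "*"),
    ("case-insensitive", "wildcarded")))[1 to 10]
```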
XPath has fn:contains, which matches any sub-string and thus comes close to the required functionality, but I didn’t expect high performance because it has no index to optimize the query; it essentially conducts a full-text scan. While I was impressed that MarkLogic’s implementation of fn:contains in vanilla XPath performs even better than wild-card queries, the need to match case-insensitively adds a serious performance hit, making this the slowest query of the bunch.
XQuery example 3, “contains”, is not fast enough:
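A sketch of the approach in plain XPath/XQuery (the `city` element name is illustrative):

```xquery
xquery version "1.0-ml";

(: Plain XPath scan; lower-casing both sides makes the match
   case-insensitive, which is exactly what hurts performance most. :)
let $input := "ban"
return
  (for $city in fn:collection()//city
   where fn:contains(fn:lower-case($city), fn:lower-case($input))
   return fn:string($city))[1 to 10]
```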
I tried some very creative solutions, including wild-card matching of all possible words in the text and then using those words in a single word query. I expected this to be highly optimized because word queries use the search index and are very fast. Either I did something wrong, or sending so many words in one “or” query slows things down. Under load, these queries were even slower than the contains queries.
XQuery example 4, “expanded words query”, is not fast enough:
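A sketch of the two-step approach, assuming an element word lexicon on the (illustrative) `city` element:

```xquery
xquery version "1.0-ml";

(: Step 1: expand the partial input into all matching whole words
   via the word lexicon. Step 2: run one word query over all of
   those words (multiple strings act as an implicit "or"). :)
let $input := "ban"
let $words := cts:element-word-match(
  xs:QName("city"),
  fn:concat($input, "*"))
return cts:search(fn:collection(),
  cts:element-word-query(
    xs:QName("city"), $words, "case-insensitive"))[1 to 10]
```

For a common prefix, step 1 can return a very large word list, which is likely what makes the “or” query slow under load.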
Luckily, I landed on a solution that works wonderfully. It allows matching against any word in the values, and it performs very quickly: under 10 milliseconds with no load, and under 150 milliseconds under outrageously high load (100 concurrent threads in JMeter). I create what I’ll call a “chunked” Element Range Index, which takes advantage of the index’s optimization for “starts-with” queries by creating a new value for each word in each value. To associate the partial “chunked” values with the full value, I append the full value after a colon. So for “Wells Fargo Bank” I would create three values: “Wells Fargo Bank:Wells Fargo Bank”, “Fargo Bank:Wells Fargo Bank”, and “Bank:Wells Fargo Bank”. This way, when a user types “ban” and I search for values starting with “ban”, one of my matches will be “Bank:Wells Fargo Bank”; I then strip everything up to and including the colon and present the user with the match “Wells Fargo Bank”.
XQuery example 5 creates “chunked” values on which we can create a “chunked” Element Range Index:
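A sketch of the chunking logic (the helper name and the `chunk` element are mine; these elements would be added to the documents and a string element range index created on `chunk`):

```xquery
xquery version "1.0-ml";

(: Build one "chunk" per word: each word-suffix of the value,
   followed by a colon and the full value, so a starts-with match
   on any word can be mapped back to the original value. :)
declare function local:chunks($value as xs:string) as xs:string* {
  let $words := fn:tokenize($value, "\s+")
  for $i in 1 to fn:count($words)
  return fn:concat(
    fn:string-join($words[position() ge $i], " "), ":", $value)
};

(: local:chunks("Wells Fargo Bank") yields:
   "Wells Fargo Bank:Wells Fargo Bank",
   "Fargo Bank:Wells Fargo Bank",
   "Bank:Wells Fargo Bank" :)
for $chunk in local:chunks("Wells Fargo Bank")
return <chunk>{$chunk}</chunk>
```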
XQuery example 6, “chunked element-value match”, is fast and meets the requirement to match any word from the values:
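A sketch of the lookup side, assuming the string element range index on the `chunk` element described above:

```xquery
xquery version "1.0-ml";

(: Starts-with match against the chunked range index, then strip
   everything up to the colon to recover the full values for
   display as suggestions. :)
let $input := "ban"
let $matches := cts:element-value-match(
  xs:QName("chunk"),
  fn:concat($input, "*"),
  "limit=10")
return fn:distinct-values(
  for $m in $matches
  return fn:substring-after($m, ":"))
```

fn:distinct-values is needed because several chunks (e.g. “Bank:Wells Fargo Bank” and “Wells Fargo Bank:Wells Fargo Bank”) can resolve to the same full value.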
Advice From an Expert
I had this article reviewed by Colleen Whitney, the MarkLogic engineer who wrote (and performance optimized) search:suggest, and she provided some tips compiled from multiple engineers on her team. We can all use these when performance tuning our matches against element range indexes:
The major take-away is that on very large datasets, resolving matches is highly data-dependent. The factors (in order of importance) are:
- Number of total values with the same prefix as the match pattern. (I think this is a key issue given your approach.)
- Average number of characters in each value. (Longer strings are slower to compare; also if the beginnings of the strings are similar, comparison will be slower)
- Number of unique values.
Therefore, users can improve performance on large datasets in several ways:
- Be selective about which index(es) drive suggestions.
- Design interaction with the UI so that suggestions aren’t given until there are a minimum of 3 characters as input to the lexicon call.
- Use codepoint collation on string range indexes.
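The last tip is applied when defining the range index. A sketch using the Admin API (the database name and `chunk` element are illustrative); note that the codepoint collation is the cheapest to compare but is case-sensitive, so it trades off against the case-insensitivity requirement discussed earlier:

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

(: Define a string range index on the (assumed) chunk element
   using the codepoint collation, per the tuning advice above. :)
let $config := admin:get-configuration()
let $index := admin:database-range-element-index(
  "string", "", "chunk",
  "http://marklogic.com/collation/codepoint", fn:false())
return admin:save-configuration(
  admin:database-add-range-element-index(
    $config, xdmp:database("Documents"), $index))
```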
For most use cases, I don’t think the data sets are large enough to require a user-experience limitation of only showing suggestions after 3 characters of input. But given that Colleen’s tests were run on data sets with up to 22 million records, I agree that with data sets that large we’ll most likely have to make some compromises on user experience.
My testing approach matches my previous post, using geonames.org data (though with about 85,000 cities loaded this time). I’m also still running a set of queries as if users were typing each character of each city name. My hardware is now a Dell E6510 with 4 GB of RAM running Windows 7.
Here’s a snapshot of my JMeter results after one of my tests (only 30 threads looping 10 times each):
<a href="http://blogs.avalonconsult.com/files/2011/03/jmeter_results_2011-03-03_22-25.png"><img class="aligncenter size-full wp-image-37" src="http://blogs.avalonconsult.com/files/2011/03/jmeter_results_2011-03-03_22-25.png" alt="JMeter load test results screenshot" width="500" height="199" /></a>
The times above are in milliseconds. You’ll notice that the median timings are acceptable (under 100 milliseconds) for most of the approaches, with a few outlier slow queries throwing off the averages, but the 90% line is where I look to make sure an approach will have acceptable timings most of the time. For this test run, only element-value match and chunked element-value match keep 90% of the responses under 100 milliseconds. As I increase the size of the data or the load, the performance gap widens and it becomes more obvious that chunked element-value match is much more scalable.
While this solution is exciting, there are many potential auto-complete requirements beyond the scope of this article, for example:
- We may want to broaden the matches based on synonyms or spelling corrections.
- Some user experiences may not need matches on the first few keystrokes, which significantly reduces the performance requirements.
- If we have a pre-defined order among the potential set of values, we would like to present our matches in the right order.
- We might first show matches which start with the user’s text, then matches containing words or phrases which start with the user’s text.
- We may want to group auto-complete suggestions, so we may need to create multiple chunked element range indexes, and run a query against each. This would obviously add more overhead and require additional performance testing.
To meet many of these requirements, we’ll need to extend this approach. For others, we’ll need to fall back to the custom Java approach.
While the Java solution is still a viable alternative, it’s nice to now have a pure MarkLogic / XQuery solution for environments that can’t easily set up and manage a Java app server just for auto-complete.