I believe there are two ways search will improve significantly in the future. First, since talking is easier than typing, speech recognition will allow longer and more accurate input strings. Second, search will be informed by much more persistent user information, with search companies having very detailed understanding of searchers. Based on that, I expect:
- A small oligopoly dominating the conjoined businesses of mobile device software and search. The companies most obviously positioned for membership are Google and Apple.
- The continued and growing combination of search, advertisement/recommendation, and alerting. The same user-specific data will be needed for all three.
- A whole lot of privacy concerns.
My reasoning starts from several observations:
- Enterprise search is greatly disappointing. My main reason for saying that is anecdotal evidence — I don’t notice users being much happier with search than they were 15 years ago. But business results are suggestive too:
- HP just disclosed serious problems with Autonomy.
- Microsoft’s acquisition of FAST was a similar debacle.
- Lesser enterprise search outfits never prospered much. (E.g., when’s the last time you heard mention of Coveo?)
- My favorable impressions of the e-commerce site search business turned out to be overdone. (E.g., Mercado’s assets were sold for a pittance soon after I wrote that, while Endeca and Inquira were absorbed into Oracle.)
- Lucene/Solr’s recent stirrings aren’t really in the area of search.
- Web search, while superior to the enterprise kind, is disappointing people as well. Are Google’s results any better than they were 8 years ago? Google’s ongoing hard work notwithstanding, are they even as good?
- Consumer computer usage is swinging toward mobile devices. I hope I don’t have to convince you about that one.
In principle, there are two main ways to make search better:
- Understand more about the documents being searched over. But Google’s travails, combined with the rather dismal history of enterprise search, suggest we’re well into the diminishing-returns part of that project.
- Understand more about what the searcher wants.
The latter, I think, is where significant future improvement will be found.
So how does a search engine understand what you want? It can listen to you directly, parsing your search string. It can ask for more clarity, through some kind of disambiguation interface. Or it can make inferences, based on — well, based on just about any kind of information that might exist about you and your online behavior.
Search strings are short, typically four words or fewer. That doesn't leave room for a lot of innovative parsing. Not a lot of progress can be made until search strings get a lot longer, and that is unlikely except perhaps through the convenience of speech recognition.
Faceted/parameterized selection has its place. For example, when I search on Amazon.com, the site encourages me to also select a department from its dropdown menu; otherwise, it refuses to rank the search results. And when I buy shirts from Lands' End, I just click through and never search at all. Still, Google has been around for 15 years, and its successes in searcher-does-the-work disambiguation amount to little more than:
- A list of a few major subcategories to search (News, YouTube, etc.).
- Spelling correction.
- A desultory list of related/more specific searches, perhaps just longer search strings other people have recently entered.
- Well-hidden “Advanced Search” features, which look much like AltaVista’s and AllTheWeb’s similar features did late in the 20th Century.
Whatever the user attitudes and behaviors are that constrain Google’s or its competitors’ success in this area, I can’t imagine them changing much — except, once again, in the event that speech recognition leads to richer human-computer conversations.
I’ve now highlighted two different ways in which there’s a search-interface challenge that will be tough to beat without turning to speech recognition. But the case for speech recognition is even stronger than that. We’re moving to small, mobile devices, and:
- Traditional search interfaces work worse on mobile devices than on desktop computers. Typing is harder. So is dealing with picky forms.
- Speech may work as well or better on mobile devices than at your desk. If you have upgraded your Apple device to iOS 6, you have both a microphone and Siri. The same may not be true of your desktop gear.
And so I conclude that speech recognition is a big part of the future of search.
What will that allow? Since talking is easier than typing, speech is a way to get longer text strings as search inputs, or more of them. It’s plausible that people might speak queries as complex as:
- “I want to buy a recharger for an iPad 3 with delivery this week.”
- “Where is 10gen’s Northern California office?” … “Which nearby restaurants have good Yelp reviews?”
- “Tell me about the David Reed who went to the Kennedy School of Government around 1977, went to Dartmouth before that, and worked for the Federal Communications Commission.”
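Handling queries like the first one above is largely a matter of "slot filling": pulling the intent, the product, and the constraints out of a long utterance so they can drive structured retrieval. A minimal, hypothetical sketch follows; the regular expressions and slot names are purely illustrative assumptions, not any real engine's implementation.

```python
import re

def parse_shopping_query(query: str) -> dict:
    """Extract crude intent slots from a spoken shopping query.

    Illustrative only: a real system would use statistical parsing
    over far broader vocabularies, not a handful of patterns.
    """
    slots = {"intent": None, "product": None, "delivery": None}
    # Detect a purchase intent from common verbs.
    if re.search(r"\b(buy|purchase|order)\b", query, re.I):
        slots["intent"] = "purchase"
    # Capture the object of "for a/an ..." up to a trailing constraint.
    m = re.search(r"for an?\s+(.+?)(?:\s+with\b|$)", query, re.I)
    if m:
        slots["product"] = m.group(1).strip()
    # Capture a delivery-deadline phrase.
    m = re.search(r"delivery\s+(this week|today|tomorrow)", query, re.I)
    if m:
        slots["delivery"] = m.group(1)
    return slots

print(parse_shopping_query(
    "I want to buy a recharger for an iPad 3 with delivery this week."))
```

Even this toy version shows why longer inputs help: with only a four-word query, there would be no constraints to extract in the first place.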
Getting search engines to the point that they can handle such queries will be difficult, but conceptually straightforward — and even more progress is needed. Search results for various queries will be greatly improved if the search engine "knows" things like:
- The location of your home and office, and the distance you’re willing to go from them to eat or shop.
- Your tastes in food, clothing, and gadgetry.
- The level of sophistication at which you like to read about medicine, finance, or electronics.
- Which people are or might be in your extended social network.
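To make the mechanism concrete, here is a hypothetical sketch of how persistent user data of that kind could re-rank results: blend a base text-relevance score with distance from the user's known travel radius and a match on preferred reading level. All field names and weights are illustrative assumptions.

```python
def personalized_score(doc: dict, profile: dict) -> float:
    """Re-score a search result using persistent user-profile data.

    Weights (0.3, 0.2) are arbitrary, for illustration only.
    """
    score = doc["relevance"]  # base text-match score, assumed 0..1
    # Penalize results farther than the user's typical travel radius.
    if "distance_km" in doc:
        radius = profile.get("travel_radius_km", 10)
        score -= 0.3 * max(0.0, doc["distance_km"] - radius) / radius
    # Reward documents at the user's preferred reading level.
    if doc.get("reading_level") == profile.get("reading_level"):
        score += 0.2
    return score

profile = {"travel_radius_km": 5, "reading_level": "expert"}
docs = [
    {"id": "a", "relevance": 0.9, "distance_km": 20},
    {"id": "b", "relevance": 0.7, "distance_km": 2,
     "reading_level": "expert"},
]
ranked = sorted(docs, key=lambda d: personalized_score(d, profile),
                reverse=True)
print([d["id"] for d in ranked])
```

Note that the nominally less relevant result wins once the user's location and tastes are factored in — exactly the kind of improvement that raw text matching cannot deliver.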
And that will cement internet search squarely in the world of — for once I approve of the term — big data.