Long-tail Search: How to Do It, What Comes Next

Ted Dunning

Currently available search engines severely limit query size and complexity because of the high cost of executing searches based on large queries. This limitation is unfortunate, since there is a long tail of user information needs that are better addressed by allowing users to employ large chunks of existing text as prototypes of what they would like to find. The DeepDyve search engine supports very long queries with very favorable cost scaling. The DeepDyve engine scales well because it changes the accuracy-cost trade-off in the early phases of the search, allowing a fast, "pretty good" search to be done conventionally. Later phases of the search then improve quality from "pretty good" to "pretty damned good". Arranging the computation in this way allows all but the first phase of search to exhibit counter-intuitive and surprisingly good scaling properties. In this talk, I will explain in detail how the DeepDyve search architecture supports long queries and why it achieves such good scaling performance. Beyond what we have already developed, I will show results from an advanced search prototype that exhibits semantic properties and will explain how this prototype can be implemented using the DeepDyve search architecture.
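
The abstract does not spell out the implementation, but the phased design it describes, a cheap conventional first pass followed by higher-quality rescoring of a bounded candidate set, can be sketched roughly as follows. Everything here, including the function names, the term-selection heuristic, and the overlap-based rescoring, is a hypothetical illustration under assumptions, not DeepDyve's actual code.

    from collections import Counter
    import math

    def select_query_terms(prototype_text, df, num_docs, k=20):
        """Phase 1 helper (assumed heuristic): keep only the k most
        distinctive terms of a long prototype query so the first,
        conventional pass stays cheap."""
        tf = Counter(prototype_text.lower().split())
        scored = {t: c * math.log(num_docs / (1 + df.get(t, 0)))
                  for t, c in tf.items()}
        return [t for t, _ in sorted(scored.items(), key=lambda x: -x[1])[:k]]

    def first_pass(index, terms, limit=1000):
        """Phase 1: conventional inverted-index lookup over the reduced
        query, returning a 'pretty good' candidate set."""
        candidates = Counter()
        for t in terms:
            for doc_id in index.get(t, ()):
                candidates[doc_id] += 1
        return [doc_id for doc_id, _ in candidates.most_common(limit)]

    def rescore(doc_texts, candidates, prototype_text):
        """Phase 2: rescore the candidates against the *entire* prototype
        query. Cost depends on the candidate count, not on corpus size."""
        query_tf = Counter(prototype_text.lower().split())
        def score(doc_id):
            doc_tf = Counter(doc_texts[doc_id].lower().split())
            return sum(min(c, doc_tf[t]) for t, c in query_tf.items())
        return sorted(candidates, key=score, reverse=True)

The point of the arrangement is that the second phase's cost is proportional to the size of the candidate list rather than the corpus, which is why the later phases can afford a much richer comparison against the full prototype text and still scale well.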