Tuesday, September 20th, 2011

If you like this author, how about?

Short version. A few weeks ago, we introduced an “author read-alikes” feature, in four different flavors, with an invitation to help us pick the best one. We got our answer, and have gone with the winner. But the voting worked so well that we’ve decided to integrate it into the new author recommendations. The recommendations—now called “If you like this author...”—all include a ratings option. Rate a recommendation high and it will rise. Rate it low and it will sink. This is fortunate, as our author recommendations—we admit—need work. They’re nowhere near as good as our work-to-work recommendations.

With luck, your input will help us improve them!

More details. The vote between flavors turned out very well for us. Option 2 was a new and extremely slow algorithm. We thought it would be the best, and it polled a respectable 3.43 stars. It certainly outpolled options 3 and 4, which polled 2.95 and 2.94 stars respectively. But it did slightly worse than option 1, which polled 3.52. Best of all, option 1 was a much simpler, faster algorithm. This was unexpected, but welcome. Option 1 is so fast that we’ve been able to apply recommendations to virtually all major authors in a single night—option 2 would have taken weeks or months!

3.52 stars still isn’t that great. But the success of the vote suggested we might do better if we subjected the recommendations themselves to a vote. So we’ve done that. Basically, any rating above three stars will move a recommendation up, and any rating below three will move it down. The higher or lower the stars, the greater the movement. The effect will differ between authors and recommendations, as every recommendation carries a (hidden) score, not just a ranking.
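For the technically curious, here’s a rough sketch of how that kind of rating-driven scoring could work. The three-star pivot, the step size, and the data layout below are illustrative assumptions, not our actual production code:

```python
# Hypothetical sketch: nudging a recommendation's hidden score with star ratings.
# The 3-star pivot and the 0.1 step size are illustrative assumptions.

PIVOT = 3.0  # ratings above this push a recommendation up; below, down
STEP = 0.1   # how strongly a single rating moves the hidden score

def apply_rating(score: float, stars: int) -> float:
    """Return the new hidden score after one 1-5 star rating."""
    # The farther the rating is from the pivot, the bigger the nudge.
    return score + STEP * (stars - PIVOT)

def ranked(recs: dict[str, float]) -> list[str]:
    """Recommended authors, best first, by hidden score."""
    return sorted(recs, key=recs.get, reverse=True)

# Example: two ratings on a made-up recommendation list for one author.
recs = {"Author A": 1.20, "Author B": 1.10, "Author C": 0.90}
recs["Author B"] = apply_rating(recs["Author B"], 5)  # strong upvote: +0.2
recs["Author A"] = apply_rating(recs["Author A"], 1)  # strong downvote: -0.2
print(ranked(recs))  # ['Author B', 'Author A', 'Author C']
```

Because every recommendation keeps a score rather than just a position, a single strong rating can swap one pair of authors without disturbing the rest of the list.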

Some examples: Agatha Christie, Sir Arthur Conan Doyle, Umberto Eco, Doris Kearns Goodwin, J. K. Rowling, Malcolm Gladwell.

Come talk about it here.

Labels: recommendations

One Comment:

  1. Alexander Gieg: If I remember correctly, some time ago (one or two years, I think) Netflix issued a challenge, with a prize, for the development of a better recommendation algorithm. They provided interested participants with detailed datasets, and the algorithm that, by a certain date, best predicted what a user would like based on what they had liked before won. (There were “before” and “after” datasets, so the quality of the competing algorithms could be evaluated objectively.) The result was indeed much more accurate than what Netflix itself was using at the time. Maybe you could try doing something similar?
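(For context: the evaluation the commenter describes, the Netflix Prize, worked by training algorithms on earlier ratings and scoring their predictions of held-out later ratings by root-mean-square error. Here is a minimal sketch of that kind of hold-out evaluation, with made-up data and a trivial baseline predictor:)

```python
# Illustrative sketch of "before"/"after" hold-out evaluation. All data and
# the baseline predictor are made up; only the RMSE metric matches the
# actual Netflix Prize.
import math

# (user, item) -> stars; the "after" ratings are hidden from the algorithms.
before = {("u1", "a"): 5, ("u1", "b"): 3, ("u2", "a"): 4, ("u2", "c"): 2}
after = {("u1", "c"): 2, ("u2", "b"): 4}

def predict_mean(train, user, item):
    """Trivial baseline: predict the user's average rating so far."""
    mine = [s for (u, _), s in train.items() if u == user]
    return sum(mine) / len(mine)

def rmse(predict, train, heldout):
    """Root-mean-square error of predict() over the held-out ratings."""
    errs = [(predict(train, u, i) - s) ** 2 for (u, i), s in heldout.items()]
    return math.sqrt(sum(errs) / len(errs))

print(round(rmse(predict_mean, before, after), 3))  # ~1.581
```

Whichever algorithm produces the lowest error on the hidden “after” set wins, which is what makes the comparison objective.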
