TITLE

Preference-based learning to rank

AUTHOR(S)
Ailon, Nir; Mohri, Mehryar
PUB. DATE
September 2010
SOURCE
Machine Learning;Sep2010, Vol. 80 Issue 2/3, p189
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
This paper presents an efficient preference-based ranking algorithm running in two stages. In the first stage, the algorithm learns a preference function defined over pairs, as in a standard binary classification problem. In the second stage, it makes use of that preference function to produce an accurate ranking, thereby reducing the learning problem of ranking to binary classification. This reduction is based on the familiar QuickSort algorithm and guarantees an expected pairwise misranking loss of at most twice that of the binary classifier derived in the first stage. Furthermore, in the important special case of bipartite ranking, the factor of two in loss is reduced to one. This improved bound also applies to the regret achieved by our ranking and that of the binary classifier obtained. Our algorithm is randomized, but we prove a lower bound for any deterministic reduction of ranking to binary classification showing that randomization is necessary to achieve our guarantees. This, and a recent result by Balcan et al., who show a regret bound of two for a deterministic algorithm in the bipartite case, suggest a trade-off between achieving low regret and determinism in this context. Our reduction also admits an improved running time guarantee with respect to that deterministic algorithm. In particular, the number of calls to the preference function in the reduction is improved from Ω(n²) to O(n log n). In addition, when only the top k ranked elements are required (k ≪ n), as in many applications in information extraction or search engine design, the time complexity of our algorithm can be further reduced to O(k log k + n). Our algorithm is thus practical for realistic applications where the number of points to rank exceeds several thousand.
ACCESSION #
52269290
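
The two-stage reduction described in the abstract lends itself to a compact illustration. Below is a minimal sketch in Python, not the authors' implementation: it assumes a learned pairwise classifier exposed as a hypothetical `prefers(u, v)` predicate standing in for the first-stage binary classifier. Randomized QuickSort then ranks n items with an expected O(n log n) calls to the preference function, and a quickselect-style variant recovers the top k in expected O(k log k + n) calls. The names `quicksort_rank`, `top_k`, and `noisy_prefers` are illustrative, not from the paper.

    import random

    def quicksort_rank(items, prefers):
        """Order `items` using a learned pairwise preference function.

        `prefers(u, v)` should return True when u ought to precede v; it
        may be noisy and even non-transitive. Expected number of calls
        to `prefers` is O(n log n).
        """
        if len(items) <= 1:
            return list(items)
        # Random pivot choice: the paper's lower bound shows randomization
        # is necessary for these guarantees, so do not derandomize this.
        pivot_idx = random.randrange(len(items))
        pivot = items[pivot_idx]
        rest = items[:pivot_idx] + items[pivot_idx + 1:]
        left, right = [], []
        for x in rest:
            # One classifier call per element against the pivot.
            (left if prefers(x, pivot) else right).append(x)
        return quicksort_rank(left, prefers) + [pivot] + quicksort_rank(right, prefers)

    def top_k(items, prefers, k):
        """Return the k highest-ranked items, in order, with expected
        O(k log k + n) calls to `prefers`: partition as above, but fully
        sort a sublist only when it intersects the top k."""
        if k <= 0 or not items:
            return []
        pivot_idx = random.randrange(len(items))
        pivot = items[pivot_idx]
        rest = items[:pivot_idx] + items[pivot_idx + 1:]
        left, right = [], []
        for x in rest:
            (left if prefers(x, pivot) else right).append(x)
        if len(left) >= k:
            return top_k(left, prefers, k)  # top k lies entirely in `left`
        # All of `left` and the pivot make the cut; fill the rest from `right`.
        return quicksort_rank(left, prefers) + [pivot] + top_k(right, prefers, k - len(left) - 1)

    if __name__ == "__main__":
        def noisy_prefers(u, v):
            # Toy stand-in for a trained classifier: the correct comparison,
            # flipped 10% of the time to mimic classification error.
            return (u < v) if random.random() > 0.1 else not (u < v)

        items = list(range(100))
        random.shuffle(items)
        print(quicksort_rank(items, noisy_prefers)[:10])
        print(top_k(items, noisy_prefers, 10))

Note that `top_k` recurses into only one side of each partition (or sorts a sublist already known to fit inside the top k), which is where the O(k log k + n) bound in the abstract comes from.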


Related Articles

  • Enclosing machine learning: concepts and algorithms. Xun-Kai Wei; Ying-Hong Li; Yu-Fei Li; Dong-Fang Zhang // Neural Computing & Applications;2008, Vol. 17 Issue 3, p237 

    A novel machine learning paradigm, enclosing machine learning, based on regular geometric shapes is proposed. First, it adopts regular minimum-volume enclosing and bounding geometric shapes (sphere, ellipsoid, box), or unions of them, to obtain a one-class description model. Second,...

  • On boosting kernel density methods for multivariate data: density estimation and classification. Marzio, Marco; Taylor, Charles // Statistical Methods & Applications;2005, Vol. 14 Issue 2, p163 

    Statistical learning is emerging as a promising field where a number of algorithms from machine learning are interpreted as statistical methods and vice-versa. Due to good practical performance, boosting is one of the most studied machine learning techniques. We propose algorithms for...

  • Three New MDL-Based Pruning Techniques for Robust Rule Induction. Pham, D. T.; Afify, A. A. // Proceedings of the Institution of Mechanical Engineers -- Part C;Apr2006, Vol. 220 Issue 4, p553 

    Overfitting the training data is a major problem in machine learning, particularly when noise is present. Overfitting increases learning time and reduces both the accuracy and the comprehensibility of the generated rules, making learning from large data sets more difficult. Pruning is a...

  • On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms. Shalev-Shwartz, Shai; Singer, Yoram // Machine Learning;Sep2010, Vol. 80 Issue 2/3, p141 

    Boosting algorithms build highly accurate prediction mechanisms from a collection of low-accuracy predictors. To do so, they employ the notion of weak-learnability. The starting point of this paper is a proof which shows that weak learnability is equivalent to linear separability with ℓ1...

  • Hedging Predictions in Machine Learning. Alexander Gammerman // Computer Journal;2007, Vol. 50 Issue 2, p151 

    Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This article describes a new technique for ‘hedging’ the predictions output by many such algorithms, including support vector machines,...

  • Use of machine learning techniques for educational purposes: a decision support system for forecasting students' grades. Kotsiantis, S. // Artificial Intelligence Review;Apr2012, Vol. 37 Issue 4, p331 

    Use of machine learning techniques for educational purposes (or educational data mining) is an emerging field aimed at developing methods of exploring data from computational educational settings and discovering meaningful patterns. The stored data (virtual courses, e-learning log file,...

  • Data Preprocessing for Supervised Learning. Kotsiantis, S. B.; Kanellopoulos, D.; Pintelas, P. E. // Enformatika;2006, Vol. 12, p277 

    Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data is first and foremost. If there is much irrelevant and redundant information present or noisy and unreliable data, then knowledge discovery during the training phase is...

  • A study of the effect of different types of noise on the precision of supervised learning techniques. Nettleton, David F.; Orriols-Puig, Albert; Fornells, Albert // Artificial Intelligence Review;Apr2010, Vol. 33 Issue 4, p275 

    Machine learning techniques often have to deal with noisy data, which may affect the accuracy of the resulting data models. Therefore, effectively dealing with noise is a key aspect in supervised learning to obtain reliable models from data. Although several authors have studied the effect of...

  • Efficient greedy feature selection for unsupervised learning. Farahat, Ahmed; Ghodsi, Ali; Kamel, Mohamed // Knowledge & Information Systems;May2013, Vol. 35 Issue 2, p285 

    Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection...
