An oracle inequality for regularized risk minimizers with strongly mixing observations

Cao, Feilong; Xing, Xing
April 2013
Frontiers of Mathematics in China;Apr2013, Vol. 8 Issue 2, p301
Academic Journal
We establish a general oracle inequality for regularized risk minimizers with strongly mixing observations, and apply this inequality to support vector machine (SVM) type algorithms. The main results extend previously known results for independent and identically distributed (i.i.d.) samples to the case of exponentially strongly mixing observations.
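For readers unfamiliar with the setting, a regularized risk minimizer of the SVM type minimizes an empirical loss plus a norm penalty on the decision function. The following is a minimal illustrative sketch only, not the paper's method: it trains a linear classifier by subgradient descent on the hinge loss with an L2 penalty on toy data. All function names, parameters, and data are assumptions made for illustration.

```python
import numpy as np

def svm_regularized_risk(X, y, lam=0.1, lr=0.01, epochs=200):
    """Toy regularized risk minimizer: minimize
    (1/n) * sum_i hinge(y_i * <w, x_i>) + lam * ||w||^2
    by subgradient descent (illustrative, not the paper's algorithm)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        # hinge loss has subgradient -y_i * x_i wherever the margin is < 1
        active = margins < 1
        grad = -(y[active, None] * X[active]).sum(axis=0) / n + 2 * lam * w
        w -= lr * grad
    return w

# Toy linearly separable sample (i.i.d. here; the paper's point is that
# similar guarantees survive when independence is weakened to strong mixing)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])
w = svm_regularized_risk(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

An oracle inequality for such a minimizer bounds its excess risk relative to the best function in the hypothesis class; the paper establishes such bounds when the observations are exponentially strongly mixing rather than i.i.d.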


Related Articles

  • Information, Divergence and Risk for Binary Experiments. Reid, Mark D.; Williamson, Robert C. // Journal of Machine Learning Research;Mar2011, Vol. 12 Issue 3, p731 

    We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC curves and statistical information. We do this by systematically studying integral and variational representations of these objects and in so doing identify their representation...

  • Parameter selection of support vector machines and genetic algorithm based on change area search. Zhao, Mingyuan; Ren, Jian; Ji, Luping; Fu, Chong; Li, Jianping; Zhou, Mingtian // Neural Computing & Applications;Feb2012, Vol. 21 Issue 1, p1 

    Generalization performance of support vector machines (SVM) with Gaussian kernel is influenced by its model parameters, both the error penalty parameter and the Gaussian kernel parameter. After researching the characteristics and properties of the parameter simultaneous variation of support...

  • Detection & Classification of Network Anomalies using SVM and Decision Tree. Nagar, Mayank; Pandit, Shraddha; Maurya, JayPrakash // International Journal of Computer Science & Information Technolo;2014, Vol. 5 Issue 2, p2338 

    In this paper, a new technique for detecting network anomalies in traffic is implemented using support vector machines and decision trees. The idea is to first cluster the data traffic using a support vector machine and then classify the network traffic using...

  • On the Necessity of Irrelevant Variables. Helmbold, David P.; Long, Philip M.; Lugosi, Gábor // Journal of Machine Learning Research;Jul2012, Vol. 13 Issue 7, p2145 

    This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant...

  • AN EFFICIENT TEXT CLASSIFICATION USING KNN AND NAIVE BAYESIAN. Sreemathy, J.; Balamurugan, P. S. // International Journal on Computer Science & Engineering;Mar2012, Vol. 4 Issue 3, p392 

    The main objective is to propose a text classification method based on feature selection and preprocessing, thereby reducing the dimensionality of the feature vector and increasing the classification accuracy. Text classification is the process of assigning a document to one or more target...

  • Classification of 5-HT1A receptor agonists and antagonists using GA-SVM method. Zhu, Xue-lian; Cai, Hai-yan; Xu, Zhi-jian; Wang, Yong; Wang, He-yao; Zhang, Ao; Zhu, Wei-liang // Acta Pharmacologica Sinica;Nov2011, Vol. 32 Issue 11, p1424 

    Aim:To construct a reliable computational model for the classification of agonists and antagonists of 5-HT1A receptor.Methods:Support vector machine (SVM), a well-known machine learning method, was employed to build a prediction model, and genetic algorithm (GA) was used to select the most...

  • Guest editorial: model selection and optimization in machine learning. Özöğür-Akyüz, Süreyya; Ünay, Devrim; Smola, Alex // Machine Learning;Oct2011, Vol. 85 Issue 1/2, p1 

    An introduction is presented in which the editor discusses various reports within the issue on topics including the improvement of machine learning methods by using optimization algorithms, a new optimization algorithm for Multiple Kernel Learning (MKL), and the two types of nonsmooth...

  • Unsupervised Supervised Learning II: Margin-Based Classification Without Labels. Balasubramanian, Krishnakumar; Donmez, Pinar; Lebanon, Guy // Journal of Machine Learning Research;Nov2011, Vol. 12 Issue 11, p3119 

    Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. Traditionally, these risk functions are computed based on a labeled data set. We develop a novel technique for estimating such risks using only unlabeled data...

  • Twin support vector hypersphere (TSVH) classifier for pattern recognition. Peng, Xinjun; Xu, Dong // Neural Computing & Applications;Apr2014, Vol. 24 Issue 5, p1207 

    Motivated by the support vector data description, a classical one-class support vector machine, and the twin support vector machine classifier, this paper formulates a twin support vector hypersphere (TSVH) classifier, a novel binary support vector machine (SVM) classifier that determines a pair...

