A Bayesian Classifier Learning Algorithm Based on Optimization Model

Sanyang Liu; Mingmin Zhu; Youlong Yang
January 2013
Mathematical Problems in Engineering;2013, p1
Academic Journal
The naive Bayes classifier is a simple and effective classification method, but its attribute-independence assumption prevents it from expressing dependences among attributes and limits its classification performance. In this paper, we survey the existing improved algorithms and propose a Bayesian classifier learning algorithm based on an optimization model (BC-OM). BC-OM uses the chi-squared statistic to estimate dependence coefficients among attributes, from which it constructs an objective function that serves as an overall measure of the dependence captured by a classifier structure. The problem of searching for an optimal classifier thus becomes one of finding the maximum of the objective function over the feasible region. In addition, we prove the existence and uniqueness of the numerical solution. BC-OM offers a new perspective for research on extended Bayesian classifiers. Theoretical and experimental results show that the new algorithm is correct and effective.
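The dependence coefficients mentioned in the abstract can be illustrated with a minimal sketch. The snippet below computes pairwise Pearson chi-squared statistics between discrete attributes; larger values indicate stronger dependence. This is an illustration of the chi-squared statistic only, not the paper's full BC-OM algorithm, and the function names are the author's own.

```python
from collections import Counter
from itertools import combinations, product

def chi_squared(xs, ys):
    """Pearson chi-squared statistic between two discrete attributes.

    Compares observed joint counts against the counts expected under
    independence; 0 means the sample frequencies factorize exactly.
    """
    n = len(xs)
    joint = Counter(zip(xs, ys))          # observed cell counts
    px, py = Counter(xs), Counter(ys)     # marginal counts
    stat = 0.0
    for x, y in product(px, py):          # iterate over every cell
        expected = px[x] * py[y] / n
        observed = joint.get((x, y), 0)
        stat += (observed - expected) ** 2 / expected
    return stat

def dependence_coefficients(columns):
    """Chi-squared statistic for every pair of attribute columns,
    keyed by the pair of column indices (a hypothetical helper)."""
    return {(i, j): chi_squared(columns[i], columns[j])
            for i, j in combinations(range(len(columns)), 2)}
```

For example, two identical binary columns of length 4 yield a statistic of 4.0, while two independent, balanced binary columns yield 0.0. In BC-OM these pairwise coefficients would feed into the objective function whose maximum identifies the classifier structure.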


Related Articles

  • Bayesian Network Learning with Parameter Constraints. Niculescu, Radu Stefan; Mitchell, Tom M.; Rao, R. Bharat; Bennett, Kristin P.; Parrado-Hernández, Emilio // Journal of Machine Learning Research;7/1/2006, Vol. 7 Issue 7, p1357 

    The task of learning models for many real-world problems requires incorporating domain knowledge into learning algorithms, to enable accurate learning from a realistic volume of training data. This paper considers a variety of types of domain knowledge for constraining parameter estimates when...

  • OPTIMAL ESTIMATION OF ℓ1-REGULARIZATION PRIOR FROM A REGULARIZED EMPIRICAL BAYESIAN RISK STANDPOINT. Hui Huang; Haber, Eldad; Horesh, Lior; Jin Keun Seo // Inverse Problems & Imaging;Aug2012, Vol. 6 Issue 3, p447 

    We address the problem of prior matrix estimation for the solution of ℓ1-regularized ill-posed inverse problems. From a Bayesian viewpoint, we show that such a matrix can be regarded as an influence matrix in a multivariate ℓ1-Laplace density function. Assuming a training set is...

  • Development of generalized integral index for estimating complex impact of major factors of winter runoff formation. P'yankov, S.; Kalinin, V. // Russian Meteorology & Hydrology;Jul2013, Vol. 38 Issue 7, p496 

    A method is considered that uses the generalized integral index to comprehensively account for the major factors of winter runoff formation. An expert-statistical regression model is proposed, and a method is developed for objectively optimizing the selection of its coefficients, taking account...

  • Simultaneous Identification of Two Time Independent Coefficients in a Nonlinear Phase Field System. Gnanavel, S.; Barani Balan, N.; Balachandran, K. // Journal of Optimization Theory & Applications;Mar2014, Vol. 160 Issue 3, p992 

    In this article, we study a simultaneous reconstruction of two time independent parameters in a nonlinear phase field system by final overdetermination data. To this end, the given problem is transformed into an optimization problem by using the optimal control framework; then the existence of...

  • A Bayesian Model-Averaging Approach for Multiple-Response Optimization. Szu Hui Ng // Journal of Quality Technology;Jan2010, Vol. 42 Issue 1, p52 

    The characteristics that define the quality and reliability of many products and processes are often multidimensional. Many of the current multiple-response optimization approaches assume a single-response model to optimize such processes and do not consider the correlations among the response...

  • Function Estimation Employing Exponential Splines. Dose, V.; Fischer, R. // AIP Conference Proceedings;2005, Vol. 803 Issue 1, p67 

    We introduce and discuss the use of the exponential spline family for Bayesian nonparametric function estimation. Exponential splines span the range of shapes between the limiting cases of traditional cubic spline and piecewise linear interpolation. They are therefore particularly suited for...

  • ESTIMATION OF PARAMETERS OF STRUCTURAL CHANGE UNDER SMALL SIGMA APPROXIMATION THEORY. GUPTA, R. K. // International Journal of Research in Commerce, IT & Management;Oct2014, Vol. 4 Issue 10, p51 

    In this paper, the structural change in a linear regression model over two different periods of time is estimated. The ordinary least squares and Stein-rule estimators are employed to estimate the structural change. Their efficiency properties are derived using the small sigma theory and...

  • REPRESENTATION OF THE LEAST WEIGHTED SQUARES. Víšek, Jan Ámos // Advances & Applications in Statistics;2015, Vol. 47 Issue 2, p91 

    The estimation of the coefficients of a linear regression model by means of the least weighted squares (LWS) is studied. The order of words in the method's name, the least weighted squares, hints that the weights are assigned to the order statistics of squared residuals rather than...

  • On Finding Predictors for Arbitrary Families of Processes. Ryabko, Daniil // Journal of Machine Learning Research;2/1/2010, Vol. 11 Issue 2, p581 

    The problem is sequence prediction in the following setting. A sequence x1, ..., xn, ... of discrete-valued observations is generated according to some unknown probabilistic law (measure) µ. After observing each outcome, it is required to give the conditional probabilities of the next...

