Results 1–10 of 49
Sparse kernel density construction using orthogonal forward regression with leave-one-out test score and local regularization
 IEEE Trans. Systems, Man and Cybernetics, Part B, 2004
Cited by 24 (12 self)
Abstract — The paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favourably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates. Index Terms — Cross validation, leave-one-out test score, orthogonal least squares, Parzen window estimate, probability
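As a baseline for the sparsity claim, the full-sample Parzen window estimate places one Gaussian kernel on every training point, so evaluation cost grows with the sample size N. A minimal sketch (the function name and the single fixed bandwidth h are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def parzen_window_estimate(x, samples, h):
    """Full-sample Parzen window (Gaussian kernel) density estimate.

    x: (d,) query point; samples: (N, d) training data; h: kernel width.
    Every training sample contributes one kernel, so the model has N terms;
    the paper's algorithm instead selects a sparse subset of kernels.
    """
    N, d = samples.shape
    diffs = samples - x                      # (N, d) differences to each sample
    norm = (2 * np.pi * h**2) ** (d / 2)     # Gaussian normalizing constant
    k = np.exp(-np.sum(diffs**2, axis=1) / (2 * h**2)) / norm
    return k.mean()
```

The sparse construction described in the abstract retains only a small subset of these N kernels, which is why comparable accuracy at much lower model size is the headline result.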
A kernel-based two-class classifier for imbalanced data sets
 IEEE Trans. Neural Networks, 2007; 18: 28–41. PMID: 17278459
A fault-tolerant regularizer for RBF networks
 IEEE Transactions on Neural Networks, 2008
Cited by 16 (11 self)
Abstract—In classical training methods for node open fault, we need to consider many potential faulty networks. When the multinode fault situation is considered, the space of potential faulty networks is very large. Hence, the objective function and the corresponding learning algorithm would be computationally complicated. This paper uses the Kullback–Leibler divergence to define an objective function for improving the fault tolerance of radial basis function (RBF) networks. With the assumption that there is a Gaussian distributed noise term in the output data, a regularizer in the objective function is identified. Finally, the corresponding learning algorithm is developed. In our approach, the objective function and the learning algorithm are computationally simple. Compared with some conventional approaches, including weight-decay-based regularizers, our approach has a better fault-tolerant ability. Besides, our empirical study shows that our approach can improve the generalization ability of a fault-free RBF network. Index Terms—Kullback–Leibler divergence, node open fault, regularization. NOMENCLATURE: -dimensional input pattern; 1-D output of the system; the th input–output training pair; input dimension; number of training samples.
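For context, the weight-decay-based regularizers that the abstract compares against amount to a ridge penalty on the RBF output weights, which admits a closed-form solution. A minimal sketch under that reading (names are illustrative; the paper's own KL-divergence-derived regularizer differs):

```python
import numpy as np

def fit_rbf_weights(Phi, y, lam):
    """Ridge (weight-decay) least-squares output weights for an RBF network.

    Phi: (N, M) hidden-node activations; y: (N,) targets; lam: penalty strength.
    Solves w = argmin ||Phi w - y||^2 + lam ||w||^2 via the normal equations.
    This is the weight-decay baseline, not the paper's KL-derived regularizer.
    """
    M = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)
```

Increasing `lam` shrinks the weights toward zero, which is what gives weight decay its (limited) fault-tolerance effect that the paper's regularizer is designed to improve on.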
Prospect for a Silent Speech Interface Using Ultrasound Imaging
 In: International Conference on Acoustics, Speech and Signal Processing, 2006
Cited by 10 (4 self)
The feasibility of a silent speech interface using ultrasound (US) imaging and lip profile video is investigated by examining the quality of line spectral frequencies (LSF) derived from the image sequences. It is found that the data do not at present allow reliable identification of silences and fricatives, but that LSFs recovered from vocalized passages are compatible with the synthesis of intelligible speech.
Feature Selection Using a Piecewise Linear Network
 Accepted by IEEE Trans. on Neural Networks
Cited by 8 (1 self)
Abstract—We present an efficient feature selection algorithm for the general regression problem, which utilizes a piecewise linear orthonormal least squares (OLS) procedure. The algorithm 1) determines an appropriate piecewise linear network (PLN) model for the given data set, 2) applies the OLS procedure to the PLN model, and 3) searches for useful feature subsets using a floating search algorithm. The floating search prevents the “nesting effect.” The proposed algorithm is computationally very efficient because only one data pass is required. Several examples are given to demonstrate the effectiveness of the proposed algorithm. Index Terms—Feature selection, regression, piecewise linear network (PLN), orthonormal least squares (OLS), floating search.
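The floating search step that prevents the nesting effect can be sketched generically: after each forward addition, backward passes drop any previously chosen feature whose removal improves the score, so early choices are never locked in. A simplified illustration with an abstract `score` function (not the paper's piecewise-linear OLS criterion; all names are assumptions):

```python
def sffs(score, n_features, k):
    """Sequential floating forward selection, a minimal sketch.

    score(subset) returns a quality measure to maximize, e.g. the negative
    output variance from a fitted model. The backward "floating" passes are
    what prevent the nesting effect of plain forward selection.
    """
    selected = []
    while len(selected) < k:
        # Forward step: add the best remaining feature.
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating backward steps: drop features while that strictly helps.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if score(trial) > score(selected):
                    selected = trial
                    improved = True
                    break
    return selected
```

Because removals require strict improvement, the loop terminates; plain forward selection is the special case where the backward block is skipped.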
Particle swarm optimization aided orthogonal forward regression for unified data modelling
 IEEE Trans. Evolutionary Computation, 2010
Cited by 8 (4 self)
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that have to be tuned using costly cross validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
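The optimizer coupled with OFR here is standard global-best PSO, which tunes each node's center and covariance by minimizing the LOO criterion. A generic sketch of that optimizer (parameter values such as inertia 0.7 and acceleration 1.5 are conventional defaults, not the paper's settings, and the function names are assumptions):

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimization sketch.

    f: objective to minimize (e.g. a LOO statistic as a function of one RBF
    node's center/width parameters); lb, ub: per-dimension bounds.
    """
    rng = np.random.default_rng(seed)
    d = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, d))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()             # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # conventional defaults
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                   # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

In the OFR setting, one such PSO run would be executed per node added, with the LOO statistic as `f`.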
Construction of Tunable Radial Basis Function Networks Using Orthogonal Forward Selection
Cited by 6 (4 self)
Abstract—An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely, its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while the LOO misclassification rate is adopted in two-class classification applications. This OFS-LOO algorithm is computationally efficient, and it is capable of constructing parsimonious RBF networks that generalize well. Moreover, the proposed algorithm is fully automatic, and the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF network construction procedure is demonstrated using examples taken from both regression and classification applications. Index Terms—Classification, leave-one-out (LOO) statistics, orthogonal forward selection (OFS), radial basis function (RBF) network, regression, tunable node.
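The LOO mean-square error used here can be computed for a linear-in-parameters model without refitting N times, via the standard hat-matrix identity e_loo,i = e_i / (1 - h_ii); this is what makes LOO-based selection computationally efficient. A minimal sketch (illustrative only; the paper works with an orthogonalized recursion rather than the explicit hat matrix):

```python
import numpy as np

def loo_mse(Phi, y):
    """Leave-one-out mean-square error of a least-squares linear model.

    Uses the hat-matrix identity e_loo,i = e_i / (1 - h_ii), where
    H = Phi (Phi^T Phi)^{-1} Phi^T, so no model is ever refitted.
    """
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)  # hat (projection) matrix
    resid = y - H @ y                              # ordinary residuals
    loo = resid / (1.0 - np.diag(H))               # exact LOO residuals
    return float(np.mean(loo**2))
```

The identity is exact, so this agrees with brute-force leave-one-out refitting while costing a single least-squares solve.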
A Growing and Pruning Method for Radial Basis Function Networks
Cited by 4 (0 self)
Abstract – A recently published learning algorithm, GGAP, for radial basis function (RBF) neural networks is studied and modified. GGAP is a growing and pruning algorithm, which means that a created network unit that consistently makes little contribution to the network’s performance can be removed during training. GGAP states a formula for computing the significance of the network units, which requires a d-fold numerical integration for an arbitrary probability density function p(x) of the d-dimensional input data x. In this work the GGAP formula is approximated using a Gaussian mixture model (GMM) for p(x), and an analytical solution for the approximated unit significance is derived. This makes it possible to employ the modified GGAP for input data having a complex and high-dimensional p(x), which was not possible in the original GGAP. The results of an extensive experimental study show that the modified algorithm outperforms the original GGAP, achieving both a lower prediction error and reduced complexity of the trained network. Index Terms – Radial basis function neural networks, sequential function approximation, growing and pruning algorithms, extended Kalman filter, Gaussian mixture model.
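The analytic tractability claimed here rests on the fact that a Gaussian RBF integrated against a Gaussian mixture component has a closed form via the product-of-Gaussians identity. A one-dimensional sketch of that building block (the function name is an assumption, and the paper works with the full d-dimensional mixture):

```python
import numpy as np

def gaussian_rbf_expectation(c, sigma, mu, s):
    """Closed-form E[phi(x)] for a 1-D Gaussian RBF under a Gaussian density.

    phi(x) = exp(-(x - c)^2 / (2 sigma^2)), x ~ N(mu, s^2).
    Product-of-Gaussians identity gives
    E[phi] = sigma / sqrt(sigma^2 + s^2) * exp(-(c - mu)^2 / (2 (sigma^2 + s^2))).
    """
    var = sigma**2 + s**2
    return sigma / np.sqrt(var) * np.exp(-(c - mu) ** 2 / (2 * var))
```

Summing such terms over the mixture components, weighted by the mixture coefficients, would yield the approximated unit significance with no numerical integration, which is the mechanism the abstract describes.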
RBF networks for nonlinear models subject to linear constraints
 In: 2009 IEEE International Conference on Granular Computing, GrC 2009
Cited by 4 (2 self)
Abstract—In this work, we present a study of nonlinear modeling based on RBF networks. The incorporation of prior knowledge into the models is our specific concern, for adding transparency and improving the performance of the networks. We focus on prior knowledge within the class of linear constraints, which includes both linear equality and linear inequality constraints. Different from other existing modeling approaches using the Lagrange multiplier technique, we propose a submodel using the same RBF network configuration to impose the constraints. Two benefits are gained from this modeling approach in comparison with conventional RBF networks. First, transparency is added in a structural way, with a higher degree of explicitness than an algorithmic means. Second, on linear equality constraint problems, the proposed approach is able to obtain the learning solutions directly, without involving iterative processes. Numerical results from three benchmark examples confirm the benefits of the proposed modeling approach.
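The claim that linear equality constraints admit direct, non-iterative solutions can be illustrated with the standard KKT linear system for equality-constrained least squares over the output weights (a generic sketch, not the paper's submodel construction; names are assumptions):

```python
import numpy as np

def constrained_least_squares(Phi, y, A, b):
    """Solve min ||Phi w - y||^2 subject to A w = b via the KKT system.

    Phi: (N, n) design matrix; y: (N,) targets; A: (m, n), b: (m,) constraints.
    One linear solve, no iteration: [[2 Phi^T Phi, A^T], [A, 0]] [w; lambda]
    = [2 Phi^T y; b].
    """
    n = Phi.shape[1]
    m = A.shape[0]
    K = np.block([[2 * Phi.T @ Phi, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * Phi.T @ y, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # discard the Lagrange multipliers
```

Inequality constraints, by contrast, generally require an active-set or iterative quadratic-programming treatment, which is consistent with the abstract singling out the equality case as directly solvable.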
An efficient feature selection algorithm for computer-aided polyp detection
 In: FLAIRS, 2005
Cited by 3 (0 self)
We present an efficient feature selection algorithm for computer-aided detection (CAD) in computed tomographic (CT) colonography. The algorithm 1) determines an appropriate piecewise linear network (PLN) model based on a learning theorem for the given data set, 2) applies the orthonormal least squares (OLS) procedure to the PLN model utilizing a Modified Schmidt procedure, and 3) uses a floating search algorithm to select features that minimize the output variance. The undesirable “nesting effect” is prevented by the floating search approach, and the piecewise linear OLS procedure makes this algorithm very computationally efficient because the Modified Schmidt procedure requires only one data pass during the whole search process. The selected features are compared to those selected by other methods, through cross-validation with a committee of support vector machines (SVMs).
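The Modified Schmidt procedure referred to here is the modified Gram–Schmidt orthogonalization, which subtracts each new direction from all remaining columns immediately and is numerically stabler than the classical variant. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def modified_gram_schmidt(X):
    """Orthonormalize the columns of X with modified Gram-Schmidt.

    Returns Q (orthonormal columns) and upper-triangular R with X = Q R.
    Each normalized direction is removed from ALL remaining columns at once,
    instead of re-projecting each column against accumulated rounding error.
    """
    Q = X.astype(float).copy()
    n = Q.shape[1]
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]                 # normalize the j-th direction
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ Q[:, k]    # component along direction j
            Q[:, k] -= R[j, k] * Q[:, j]   # deflate remaining columns now
    return Q, R
```

In an OLS feature-selection setting, the R factor carries each candidate feature's contribution to the output variance, which is why a single orthogonalization pass over the data suffices.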