
Predictive Subset Selection using Regression Trees and RBF Neural Networks Hybridized with the Genetic Algorithm

Abstract: 
In this paper we develop a novel nonparametric predictive subset regression modeling procedure that combines regression trees with radial basis function (RBF) neural networks, hybridized with the genetic algorithm (GA) to carry out subset selection of the best predictors. We use the information-theoretic measure of complexity (ICOMP) criterion of [5, 6, 7, 8] as the fitness function both to choose the best approximating radial basis functions and to choose the best subset of predictors with the GA. To avoid potential singularities in the design matrix, we combine our model with analytical global ridge regression for regularization. In addition, the estimation and prediction performance of the model is taken into account in choosing the best subset.
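
To make the procedure concrete, below is a minimal Python/NumPy sketch of the pipeline the abstract describes; it is an illustration under stated assumptions, not the authors' implementation. A GA evolves binary predictor masks; each mask is scored by fitting a ridge-regularized Gaussian RBF expansion on the selected predictors and adding an ICOMP-style penalty (the C1 complexity of the estimated coefficient covariance) to the lack-of-fit term. All names (rbf_design, icomp_fitness, select_subset) and all hyperparameter values are hypothetical, and the sketch places RBF centers naively at the first data rows, whereas the paper derives center locations from a regression-tree partition.

import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    # Gaussian RBF design matrix: one basis function per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def icomp_fitness(X, y, mask, n_centers=10, width=1.0, ridge_k=1e-2):
    # Lack of fit plus a C1-type complexity penalty on the estimated
    # coefficient covariance -- a simplified stand-in for ICOMP [5-8].
    if not mask.any():
        return np.inf
    Xs = X[:, mask]
    # Naive center placement (first rows); the paper instead derives
    # centers from a regression-tree partition of the input space.
    H = rbf_design(Xs, Xs[:n_centers], width)
    A = H.T @ H + ridge_k * np.eye(n_centers)  # ridge-regularized normal equations
    w = np.linalg.solve(A, H.T @ y)
    n = len(y)
    resid = y - H @ w
    sigma2 = max(resid @ resid / n, 1e-12)
    eig = np.linalg.eigvalsh(sigma2 * np.linalg.inv(A))  # covariance eigenvalues
    c1 = 0.5 * len(eig) * np.log(eig.mean()) - 0.5 * np.log(eig).sum()
    return n * np.log(sigma2) + 2.0 * c1       # smaller is better

def select_subset(X, y, pop_size=30, gens=40, p_mut=0.1):
    # Binary-mask GA: tournament selection, uniform crossover, bit-flip mutation.
    p = X.shape[1]
    pop = rng.random((pop_size, p)) < 0.5
    for _ in range(gens):
        fit = np.array([icomp_fitness(X, y, m) for m in pop])
        nxt = [pop[fit.argmin()].copy()]       # elitism: keep the best mask
        while len(nxt) < pop_size:
            i, j = rng.choice(pop_size, 2), rng.choice(pop_size, 2)
            a, b = pop[i[fit[i].argmin()]], pop[j[fit[j].argmin()]]
            child = np.where(rng.random(p) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(p) < p_mut               # bit-flip mutation
            nxt.append(child)
        pop = np.array(nxt)
    fit = np.array([icomp_fitness(X, y, m) for m in pop])
    return pop[fit.argmin()]

# Toy data: only the first three of ten candidate predictors matter.
X = rng.standard_normal((200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.standard_normal(200)
print("selected predictors:", np.flatnonzero(select_subset(X, y)))

Tournament selection with elitism is one common GA configuration; the paper's exact GA operators and the exact form of the ICOMP criterion may differ from this sketch.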

REFERENCES

[1] H. Akaike. Information theory and an extension of the maximum likelihood principle. In
B. N. Petrov and F. Csáki, editors, Second International Symposium on Information Theory,
pages 267–281, Budapest, 1973. Akadémiai Kiadó.
[2] C. Bishop. Improving the generalization properties of radial basis function neural networks.
Neural Computation, 3:579–588, 1991.
[3] D. Boyce, A. Farhi, and R. Weischedel. Optimal Subset Selection: Multiple Regression,
Interdependence and Optimal Network Algorithms. Springer-Verlag, New York, 1974.
[4] H. Bozdogan. Model selection and Akaike's information criterion (AIC): The general
theory and its analytical extensions. Psychometrika, 52:345–370, September 1987.
[5] H. Bozdogan. ICOMP: A new model-selection criterion. In H. H. Bock, editor, Classification
and Related Methods of Data Analysis. North-Holland, Amsterdam, 1988.
[6] H. Bozdogan. Mixture-model cluster analysis using a new informational complexity and
model selection criteria. In H. Bozdogan, editor, Multivariate Statistical Modeling, Vol. 2,
Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An
Informational Approach, pages 69–113. Kluwer Academic Publishers, Dordrecht, The
Netherlands, 1994.
[7] H. Bozdogan. Akaike's information criterion and recent developments in information
complexity. Journal of Mathematical Psychology, 44:62–91, March 2000.
[8] H. Bozdogan. Intelligent statistical data mining with information complexity and genetic
algorithms. In H. Bozdogan, editor, Statistical Data Mining and Knowledge Discovery,
pages 15–56. Chapman and Hall/CRC, Boca Raton, Florida, 2004.
[9] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees.
Chapman and Hall, 1984.
[10] R. Hocking. Developments in linear regression methodology: 1959–1982. Technometrics,
25:219–230, 1983.
[11] A. Hoerl, R. Kennard, and K. Baldwin. Ridge regression: Some simulations. Communications
in Statistics, 4:105–123, 1975.
[12] M. Kubat. Decision trees can initialize radial basis function networks. IEEE Transactions on
Neural Networks, 9:813–821, 1998.
[13] S. Kullback and R. Leibler. On information and sufficiency. Annals of Mathematical Statistics,
22:79–86, 1951.
[14] J. Lawless and P. Wang. A simulation study of ridge and other regression estimators.
Communications in Statistics, A5:307–323, 1976.
[15] C. Lin and C. Lee. Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems.
Prentice Hall PTR, New Jersey, USA, 1996.
[16] D. MacKay. A practical Bayesian framework for backpropagation networks. Neural
Computation, 4:448–472, 1992.
[17] N. Mantel. Why stepdown procedures in variable selection. Technometrics, 12:591–612,
1970.
[18] L. Moses. Think and Explain with Statistics. Addison-Wesley, Reading, MA, 1986.
[19] M. Orr. Combining regression trees and radial basis function networks. International
Journal of Neural Systems, 10:453–465, 2000.
[20] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to
multilayer networks. Science, 247:978–982, 1990.
[21] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
[22] S. Sclove. Least squares with random regression coefficient. Technical report, Department
of Economics, Stanford University, 1973.
[23] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Wiley, New York, 1977.
[24] H. White. Maximum likelihood estimation of misspecified models. Econometrica, 50:1–25,
1982.
[25] L. Wilkinson. SYSTAT: The System for Statistics. SYSTAT, Inc., Evanston, IL, 1989.
