
FA01V01 ALGORİTMASI: YENİ BİR DİNAMİK YAPAY SİNİR AĞI ALGORİTMASI

FA01V01 ALGORITHM: A NEW DYNAMIC NEURAL NETWORK ALGORITHM

Journal Name: KHO Bilim Dergisi (Cilt: 23, Sayı: 1)

Publication Year: 2013
Abstract (2nd Language): 
Because of their black-box structure, the number of outputs is always fixed while constructing artificial neural networks (ANNs). Consequently, none of the existing algorithms adds new outputs to an existing ANN, which prevents ANNs from learning new abilities or gaining new properties. The purpose of this study is to present a new ANN construction algorithm that enables ANNs to learn new abilities or gain new properties. To develop such an algorithm, the concepts of improving or correcting an existing ability and of gaining a new ability must first be differentiated. New methods for creating new output neurons and for selecting the best-fitted one must then be developed. This study proposes several methods for creating a candidate pool from which the most suitable output neuron is selected; a new concept, L of K cross-validation, is used to make that selection. To apply all of these concepts, a new JAVA-based software package named NeuroBee was developed, and all applications were carried out with it. For comparison, a joint logic problem was composed from the well-known AND, OR, and XOR logic problems. In this problem, ANN construction starts with zero output neurons, and FA01V01 then grows the network up to three output neurons. After each new output neuron is added, the back-propagation algorithm is used to train the ANN. The constructed ANNs are compared with ANNs produced by the NeuroSolutions software, which keeps the number of output neurons fixed during training. Statistical tests showed no difference in mean square error (MSE) between the ANNs produced by FA01V01 and those produced by NeuroSolutions, but FA01V01 has a clear advantage over NeuroSolutions in the number of back-propagation iterations required.
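The growth procedure described in the abstract can be illustrated with a minimal sketch: a tiny multi-layer perceptron whose output layer starts with zero neurons and is grown to three, one per logic function, with back-propagation retraining after each addition. NeuroBee itself is JAVA software and selects each new neuron from a candidate pool via L of K cross-validation; the Python sketch below omits candidate selection and simply initializes each new output neuron randomly, so the names here (`GrowingMLP`, `add_output`) are illustrative assumptions, not the paper's API.

```python
import math
import random

random.seed(0)

# Joint logic problem from the paper: the same two inputs, one target
# column per logic function (AND, OR, XOR).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = {"AND": [0, 0, 0, 1], "OR": [0, 1, 1, 1], "XOR": [0, 1, 1, 0]}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class GrowingMLP:
    """A 2-input MLP whose output layer starts empty and grows one
    neuron per newly learned logic function (a simplified stand-in
    for FA01V01: no candidate pool, random initialization only)."""

    def __init__(self, hidden=4):
        self.hidden = hidden
        self.w_in = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        self.b_h = [random.uniform(-1, 1) for _ in range(hidden)]
        self.w_out, self.b_out, self.tasks = [], [], []

    def _hidden_act(self, x):
        return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                for row, b in zip(self.w_in, self.b_h)]

    def predict(self, x):
        h = self._hidden_act(x)
        return [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
                for row, b in zip(self.w_out, self.b_out)]

    def add_output(self, task, epochs=3000, lr=0.5):
        # Grow the output layer, then retrain the whole net with plain
        # online back-propagation, as the paper does after each addition.
        self.w_out.append([random.uniform(-1, 1) for _ in range(self.hidden)])
        self.b_out.append(random.uniform(-1, 1))
        self.tasks.append(task)
        cols = [TARGETS[t] for t in self.tasks]
        n_out = len(self.w_out)
        for _ in range(epochs):
            for i, x in enumerate(X):
                h = self._hidden_act(x)
                y = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
                     for row, b in zip(self.w_out, self.b_out)]
                # Output-layer deltas, then errors backpropagated to hidden.
                d_out = [(y[k] - cols[k][i]) * y[k] * (1 - y[k]) for k in range(n_out)]
                d_h = [h[j] * (1 - h[j]) *
                       sum(d_out[k] * self.w_out[k][j] for k in range(n_out))
                       for j in range(self.hidden)]
                for k in range(n_out):
                    for j in range(self.hidden):
                        self.w_out[k][j] -= lr * d_out[k] * h[j]
                    self.b_out[k] -= lr * d_out[k]
                for j in range(self.hidden):
                    for idx in range(2):
                        self.w_in[j][idx] -= lr * d_h[j] * x[idx]
                    self.b_h[j] -= lr * d_h[j]

net = GrowingMLP(hidden=4)
for task in ("AND", "OR", "XOR"):  # the net gains one ability at a time
    net.add_output(task)
```

After the loop the network has three output neurons, each answering one logic function for the shared inputs; earlier abilities are retrained alongside the new one rather than frozen.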
Abstract (Original Language): 
When artificial neural network architectures are constructed, under the influence of the black-box approach, the number of output neurons is always fixed. New output neurons are never added to an existing artificial neural network. As a result, artificial neural networks can never acquire a new skill or ability. The purpose of this study is to develop an algorithm that can add new output neurons to an existing artificial neural network in order to give it new skills and abilities. To develop such an algorithm, the concepts of improving or correcting an existing skill and of gaining a new ability must first be distinguished from one another. Methods for creating new output neurons and for identifying the most suitable one must then be developed. In this study, a candidate pool was created by several methods to identify the candidate output neurons to be added, and the most suitable candidates were selected from this pool. The L of K cross-validation method was developed to select the new output neurons from the candidate pool. To apply the developed algorithm and methods, a software package named NeuroBee was developed in the JAVA programming language, and all applications were carried out with it. For the application, a problem was composed from the combination of the AND, OR, and XOR logic problems. In this problem, for different numbers of hidden neurons, the solution was started with multi-layer perceptrons (MLPs) having zero output neurons, and the FA01V01 algorithm was then run until a structure with three output neurons was obtained. The resulting neural networks were trained with the back-propagation algorithm. To compare the performance of the proposed algorithm, the multi-layer perceptrons offered by the NeuroSolutions software were trained with three output neurons for different numbers of hidden neurons.
The statistical tests showed no difference in mean square error between the MLPs constructed with the FA01V01 algorithm and the fixed-structure MLPs, but the back-propagation algorithm needed fewer iterations for the MLPs constructed with FA01V01.
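The candidate-selection step can also be sketched. The abstracts name the selection procedure "L of K cross-validation" but define it only in the paper itself, so the sketch below assumes one plausible reading: score each candidate output neuron on L validation folds drawn from K partitions of the data and keep the candidate with the lowest mean squared error. Everything here (`l_of_k_score`, the linear candidate neurons, the pool size) is an illustrative assumption, not the paper's actual procedure.

```python
import random

random.seed(1)

# Toy data for one new ability (the OR column of the joint problem),
# replicated and shuffled so it can be split into K folds.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)] * 4
random.shuffle(DATA)

def sq_error(weights, sample):
    # Candidate output neuron kept linear (w1*x1 + w2*x2 + b) for brevity.
    (x1, x2), t = sample
    y = weights[0] * x1 + weights[1] * x2 + weights[2]
    return (y - t) ** 2

def l_of_k_score(weights, data, K=4, L=2):
    """Mean MSE of a candidate over L folds drawn from K partitions --
    an assumed reading of the paper's 'L of K cross-validation'."""
    folds = [data[i::K] for i in range(K)]
    chosen = random.sample(folds, L)
    return sum(sum(sq_error(weights, s) for s in fold) / len(fold)
               for fold in chosen) / L

# Candidate pool: random weight vectors for the prospective output neuron;
# the best-scoring candidate is the one added to the network.
pool = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(50)]
best = min(pool, key=lambda w: l_of_k_score(w, DATA))
```

Scoring on only L of the K folds keeps selection cheap while still penalizing candidates that fit one fold by chance; the winning weights would then seed the new output neuron before back-propagation retraining.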
Pages: 61-98

