A Study and analysis of optimized processing w.r.t. time and cost for Artificial Intelligence: The Connectionist Learning

Abstract: 
This article describes the Artificial Neural Network as an important processing paradigm inspired by the way biological nervous systems, such as the brain, process information. Connectionism arranges a network so that its connections are configured in a near-optimal fashion and yield effective results. The analysis covers algorithms and techniques for complex systems that often cannot be applied directly because of problems such as non-linearity, uncertainty, and the time-varying behaviour of such systems. The paper discusses techniques by which complex problems can be solved using simple arithmetic operations, without the need for any analytic method. In these connectionist learning algorithms, the study centres on the need to configure each connection so that its characteristics suit the processing/learning task; the algorithms work efficiently by exploiting this connectionism.
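To illustrate the connectionist idea the abstract describes, learning by adjusting individual connection weights from error feedback rather than by any analytic method, here is a minimal sketch (not code from the paper) of a single artificial neuron trained with the classic perceptron/delta update on a toy problem; all names and parameters are illustrative assumptions.

```python
# Minimal sketch of connectionist learning: a single artificial neuron
# whose connection weights are adjusted from error feedback alone.
# Illustrative only; not the paper's algorithm.

def train_perceptron(samples, lr=0.1, epochs=50):
    """samples: list of ((x1, x2), target) pairs with target in {0, 1}."""
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y                # no analytic model: only the error signal
            w[0] += lr * err * x1      # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the repeated weight corrections converge to a separating line; only arithmetic operations on the connections are involved, which is the point the abstract makes.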
Pages: 39-44

REFERENCES

1. A. Namatame, "Connectionist Learning with Chebyshev Neural Network and Analyses of its Internal Representation", in: N. Boubakins (Ed.), World Scientific, 1991, pp. 33-48.
2. A. Namatame, N. Ueda, "Pattern classification with Chebyshev neural network", Int. J. Neural Netw. 3 (March) (1992) 23-31.
3. C. L. Hull. "Principles of Behavior". Appleton-Century-Crofts, New York, 1943.
4. D. O. Hebb. "The Organization of Behavior". John Wiley & Sons, New York, 1949.
5. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors", Nature, 323:533-536, 1986.
6. G. Cybenko, "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems, 2:303-314, 1989.
7. J. L. McClelland, "Retrieving general and specific information from stored knowledge of specifics", in Proceedings of the Third Annual Meeting of the Cognitive Science Society, pages 170-172, 1981.
8. J. Moody and C. J. Darken. "Fast learning in networks of locally tuned processing
units". Neural Computation, 1:281-294, 1989.
9. J. C. Patra, A. C. Kot, "Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks", IEEE Trans. Syst. Man Cybern. B 32 (4) (2002) 505-511.
10. K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks", IEEE Transactions on Neural Networks, Vol. 1, No. 1, pp. 4-27, 1990.
11. M. Minsky and S. A. Papert, "Perceptrons: An Introduction to Computational Geometry", MIT Press, Cambridge, MA, expanded edition, 1988 (first published 1969).
12. M. R. W. Dawson and D. P. Schopflocher. "Modifying the generalized delta rule to train
networks of non-monotonic processors for pattern classification". Connection
Science, 4:19-31, 1992.
13. P. J. Werbos, "Backpropagation: Basics and new developments", in Arbib [4], pages 134-139.