Training Artificial Neural Networks with Nonlinear Constrained Programming

English Abstract:

In this study, the nonlinear GRG (Generalized Reduced Gradient) method is proposed for training feedforward ANNs. The ANN training problem is written as a general nonlinear optimization problem with nonlinear sigmoidal or hyperbolic-tangent constraints. The constraints are built from the training data: each expected output must equal its trained value. If a given problem has n data points (i.e., patterns), the technique decomposes it into n constrained nonlinear programming problems. The variables are updated by enforcing the constraints. As an alternative to the backpropagation algorithm, this method produced acceptable outputs while requiring fewer iterations and fewer hidden neurons to approximately learn the training data. The proposed method is demonstrated on an application of the XOR problem.
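To make the formulation concrete, the following is a minimal sketch (not the authors' code) of the XOR task written as an equality-constrained nonlinear program over the network weights. SciPy provides no GRG solver, so SLSQP stands in as the constrained solver here; the 2-2-1 tanh architecture, the ±0.9 targets, and the regularizing objective are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: XOR training posed as a constrained nonlinear program.
# GRG is not available in SciPy, so SLSQP is used as a stand-in
# equality-constrained solver. Architecture and targets are assumed.
import numpy as np
from scipy.optimize import minimize

# XOR patterns and tanh-scaled targets (assumed values, not from the paper)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([-0.9, 0.9, 0.9, -0.9])

def unpack(w):
    """Split the flat variable vector into the weights of a 2-2-1 network."""
    W1 = w[0:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                 # hidden biases
    w2 = w[6:8]                 # hidden -> output weights
    b2 = w[8]                   # output bias
    return W1, b1, w2, b2

def net(w, x):
    """Feedforward pass with hyperbolic-tangent units."""
    W1, b1, w2, b2 = unpack(w)
    h = np.tanh(x @ W1.T + b1)
    return np.tanh(h @ w2 + b2)

# One equality constraint per training pattern: network output == target
constraints = [{'type': 'eq', 'fun': (lambda w, i=i: net(w, X[i]) - T[i])}
               for i in range(len(X))]

# Small regularizing objective; the constraints carry the actual learning
objective = lambda w: 0.5 * np.dot(w, w)

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=9)
res = minimize(objective, w0, method='SLSQP', constraints=constraints,
               options={'maxiter': 500, 'ftol': 1e-9})

print('converged:', res.success)
print('outputs  :', np.round([net(res.x, x) for x in X], 3))
```

With four patterns the problem has four equality constraints over nine weight variables, so a feasible point exists; when the solver converges, the network reproduces each target exactly rather than merely minimizing an error sum, which is the essential difference from backpropagation-style training.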
