Wednesday, September 9, 2009

Activity 16: Neural Networks

Objectives: To demonstrate the application of neural networks in pattern recognition

Tools: Scilab with SIP and ANN toolboxes

Procedure: Since most of the procedure relies on canned code and the remaining steps are repetitive, we go through what was done only in broad strokes. As mentioned, we used canned code for this, available through various channels. We first set a random seed for consistency. We then create a training set of several values of two parameters, in this case, as before, the area and the red-channel intensity value of each object. For example, we use the set:

y=

0.04 0.44
0.05 0.41
0.07 0.45
0.18 0.43
0.21 0.45
0.33 0.44
0.36 0.42
0.38 0.43

For this set, we assign Boolean-like class labels:

[ 0 0 0 0 0 1 1 1]

That is to say, the last three samples, with their corresponding parameters, are the ones to be classified under the desired heading, while all the others are effectively rejected. A sketch of this training step is given below.
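For concreteness, the training step might look something like the following Scilab sketch using the ANN toolbox's feed-forward routines. This is a minimal sketch, not the exact canned code that was used: the network shape N, the learning-parameter vector lp, and the variable names are assumptions chosen for illustration.

rand('seed', 0);                    // fix the seed for consistency

// training patterns: one column per sample, rows are (area; red value)
train = [0.04 0.05 0.07 0.18 0.21 0.33 0.36 0.38;
         0.44 0.41 0.45 0.43 0.45 0.44 0.42 0.43];
t = [0 0 0 0 0 1 1 1];              // Boolean-like target labels

N  = [2, 2, 1];                     // 2 inputs, 2 hidden neurons, 1 output
lp = [1, 0];                        // learning rate 1, error threshold 0
T  = 1000;                          // number of training cycles

W = ann_FF_init(N);                             // random initial weights
W = ann_FF_Std_online(train, t, N, W, lp, T);   // online backpropagation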

Now we define a test set of object parameters we wish to classify:

x=

0.20 0.44
0.23 0.45
0.40 0.41
0.38 0.39
0.36 0.40
0.04 0.41
0.05 0.42
0.24 0.40
0.37 0.41
0.39 0.39
0.38 0.42
0.36 0.38
0.39 0.45
0.06 0.35
0.22 0.33
0.38 0.40
0.40 0.39
0.37 0.38
0.02 0.31
0.04 0.34

The output of the code, at a learning rate of 1 and with 1000 training cycles, rounded off to the nearest integer (0 or 1), is

[ 0 0 1 1 1 0 0 0 1 1 1 1 1 0 0 1 1 1 0 0]
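Classifying the test set is then a single call to the trained network. A sketch, assuming x holds the 20 x 2 test matrix listed above:

test  = x';                         // transpose so each sample is a column
y_out = ann_FF_run(test, N, W);     // raw network outputs between 0 and 1
disp(round(y_out));                 // round to 0/1 class labels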

This is in exact agreement with what is expected. Notably, when the learning rate is decreased, the pointwise accuracy of the network decreases. The number of training cycles has the same direct effect on accuracy, which agrees with intuition.

Evaluation: For proper classification, a grade of 10 is appropriate.

Acknowledgement: As usual, Mr. Earl Panganiban deserves mention for his assistance.

