CS 420/527 — Biologically Inspired Computation
NetLogo Simulation

Artificial Neural Net

(Simple Demonstration of Back-Propagation)

This page was automatically generated by NetLogo 4.1. Questions, problems? Contact feedback@ccl.northwestern.edu.

The applet requires Java 5 or higher. Java must be enabled in your browser settings. Mac users must have Mac OS X 10.4 or higher. Windows and Linux users may obtain the latest Java from Sun's Java site. If the display appears cut off in Firefox, try another browser (Safari works).




view/download model file: Artificial Neural Net.nlogo

WHAT IS IT?

This is a model of a very small neural network. It is based on the Perceptron model, but instead of one layer, this network has two layers of "perceptrons". That means it can learn operations a single layer cannot.

The goal of the network is to take inputs at its input nodes on the far left and classify them appropriately at the output nodes on the far right. It does this by being given many examples to classify while a supervisor tells it whether each classification was right or wrong. Based on this feedback, the network updates its weights until it classifies all inputs correctly.


HOW IT WORKS

Initially the weights on the links of the network are random. When inputs are fed into the nodes on the far left, each input is multiplied by the weight on its link and the products are summed to form the net input to each node in the next layer. That sum is passed through a sigmoid function: large negative sums give activations close to 0, large positive sums give activations close to 1, and the output rises smoothly in between, passing through 0.5 when the sum is 0. Each hidden node then sends its activation along its output link, and the output node sums its weighted incoming activations and applies the same sigmoid to produce the value it reports.
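
For concreteness, here is a small sketch of the arithmetic for a single node; the reporter name and parameters are made up for illustration and are not part of the model:

;; illustrative sketch only (not part of the model): one node's activation
;; computed from two incoming activations, their link weights, and the bias link
;; (the bias node's activation is always 1)
to-report example-activation [a1 w1 a2 w2 bias-weight]
  let net-input (a1 * w1) + (a2 * w2) + (1 * bias-weight)
  report 1 / (1 + e ^ (- net-input))   ;; the sigmoid squashes the sum into (0, 1)
end

For example, with incoming activations 1 and 0, weights 0.8 and -0.3, and a bias weight of 0.1, the sum is 0.9 and the sigmoid reports roughly 0.71.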

To train the network, many example inputs are presented together with the correct classification for each. The network uses the back-propagation algorithm to pass the error at the output node back through the network and uses that error to update the weight on each link.
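
In outline (with invented reporter names; the model's actual code appears under PROCEDURES below), the output node's error signal and the resulting nudge to each incoming weight look like this:

;; illustrative sketch only (names invented): the error signal that
;; back-propagation assigns to the output node, using the derivative of the sigmoid
to-report example-output-error [act target-value]
  report act * (1 - act) * (target-value - act)
end

;; ...and the amount by which each incoming link's weight is then nudged
to-report example-weight-change [rate node-err sending-act]
  report rate * node-err * sending-act
end

Hidden nodes receive the same kind of error signal, except that the (target - activation) term is replaced by the weighted sum of the errors of the nodes they feed.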


HOW TO USE IT

To use it, press SETUP to create the network and initialize the weights to small random numbers.

Press TRAIN ONCE to run one epoch of training. The number of examples presented to the network during this epoch is controlled by the EXAMPLES-PER-EPOCH slider.

Press TRAIN to continually train the network.

In the view, the thicker a link, the greater the magnitude of its weight. If a link is red, its weight is positive; if a link is blue, its weight is negative.

To test the network, set INPUT-1 and INPUT-2, then press the TEST button. A dialog box will appear telling you whether or not the network was able to correctly classify the input that you gave it.

LEARNING-RATE controls how much the neural network will learn from any one example.

TARGET-FUNCTION allows you to choose which function the network is trying to learn.


THINGS TO NOTICE

Unlike the Perceptron model, this model can learn both OR and XOR. It can learn XOR because the hidden layer (the two middle nodes) in effect lets the network draw two lines that divide the input space into positive and negative regions. As a result, one hidden node typically learns something close to the OR function (it turns on if either input is on), while the other learns something close to the AND function (it turns on only when both inputs are on); the output node weights that second node negatively, so the network responds to "either, but not both".
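
A sketch of that decomposition (again with made-up names, using hard thresholds instead of the network's soft sigmoids) reproduces XOR from an OR-like part and a negatively weighted AND-like part:

;; illustrative sketch only: XOR as "at least one input on, but not both"
to-report example-xor [a b]
  let or-part ifelse-value (a + b > 0) [ 1 ] [ 0 ]        ;; roughly what one hidden node learns
  let and-part ifelse-value (a + b > 1) [ 1 ] [ 0 ]       ;; roughly what the other hidden node learns
  report ifelse-value (or-part - and-part > 0) [ 1 ] [ 0 ]  ;; the AND-like part counts negatively
end

In the trained network the cutoffs are smooth sigmoid curves rather than sharp thresholds, but the division of labor is the same.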

However, unlike the Perceptron model, the neural network model takes longer to learn any of the functions, including the simple OR function, because it has many more weights to learn. The Perceptron model had to learn only three weights (the two input links and the bias link). This network has to learn nine weights (four input-to-hidden weights, two hidden-to-output weights, and three bias weights).


THINGS TO TRY

Manipulate the LEARNING-RATE parameter. Can you speed up or slow down the training?

Switch back and forth between OR and XOR several times during a run. Why does the network return to 0 error more quickly the longer it has been running?


EXTENDING THE MODEL

Add additional functions for the network to learn besides OR and XOR. This may require adding more hidden nodes to the network.

Back-propagation using gradient descent is considered somewhat unrealistic as a model of real neurons, because in real neuronal systems there is no way for the output node to pass its error back. Can you implement a weight-update rule that is more biologically plausible?


NETLOGO FEATURES

This model uses the link primitives. It also makes heavy use of lists.


RELATED MODELS

This is the second in the series of models devoted to understanding artificial neural networks. The first model is Perceptron.


CREDITS AND REFERENCES

The code for this model is inspired by the pseudo-code which can be found in Tom M. Mitchell's "Machine Learning" (1997).

Thanks to Craig Brozefsky for his work in improving this model.

To refer to this model in academic publications, please use: Rand, W. and Wilensky, U. (2006). NetLogo Artificial Neural Net model. http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

In other publications, please use: Copyright 2006 Uri Wilensky. All rights reserved. See http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet for terms of use.


PROCEDURES

links-own [weight]

;; define the four node types
breed [bias-nodes bias-node]
bias-nodes-own [activation error]
breed [input-nodes input-node]
input-nodes-own [activation error]
breed [output-nodes output-node]
output-nodes-own [activation error]
breed [hidden-nodes hidden-node]
hidden-nodes-own [activation error]

globals [
  epoch-error
]

;;;
;;; SETUP PROCEDURES
;;;
to setup
  clear-all
  ask patches [ set pcolor gray + 2 ]
  set-default-shape bias-nodes "bias-node"
  set-default-shape input-nodes "circle"
  set-default-shape output-nodes "output-node"
  set-default-shape hidden-nodes "output-node"
  setup-nodes
  setup-links
  propagate
end

to setup-nodes
  create-bias-nodes 1 [ setxy -5 5 ]
  ask bias-nodes [ set activation 1 ]
  create-input-nodes 1 [ setxy -5 -1 ]
  create-input-nodes 1 [ setxy -5 1 ]
  ask input-nodes [ set activation random 2 ]
  create-hidden-nodes 1 [ setxy 0 -1 ]
  create-hidden-nodes 1 [ setxy 0 1 ]
  ask hidden-nodes [
    set activation random 2
    set size 1.5
  ]
  create-output-nodes 1 [ setxy 5 0 ]
  ask output-nodes [ set activation random 2 ]
end

to setup-links
  connect-all bias-nodes hidden-nodes
  connect-all bias-nodes output-nodes
  connect-all input-nodes hidden-nodes
  connect-all hidden-nodes output-nodes
end

to connect-all [nodes1 nodes2]
  ask nodes1 [
    create-links-to nodes2 [
      set weight random-float 0.2 - 0.1
    ]
  ]
end

to recolor
  ask turtles with [breed != links] [
    set color item (step activation) [black white]
  ]
  ask links [
    set thickness 0.1 * abs weight
    ifelse weight > 0
      [ set color red ]
      [ set color blue ]
  ]
end
;;;
;;; TRAINING PROCEDURES
;;;
to train
  set epoch-error 0
  repeat examples-per-epoch [
    ask input-nodes [ set activation random 2 ]
    propagate
    back-propagate
  ]
  tick
  set epoch-error epoch-error / examples-per-epoch
  plotxy ticks epoch-error
end
;;;
;;; FUNCTIONS TO LEARN
;;;
to-report target-answer
  ifelse target-function = "xor"
    [ report my-xor ]
    [ report my-or ]
end

to-report my-or
  ;; assumes exactly two input nodes
  ifelse [activation] of input-nodes = [0 0]
    [ report [0] ]
    [ report [1] ]
end

to-report my-xor
  ;; assumes exactly two input nodes
  let vals [activation] of input-nodes
  ifelse item 0 vals = item 1 vals
    [ report [0] ]
    [ report [1] ]
end
;;;
;;; PROPAGATION PROCEDURES
;;;
;; carry out one calculation from beginning to end
to propagate
  ask hidden-nodes [ set activation new-activation ]
  ask output-nodes [ set activation new-activation ]
  recolor
end

to-report new-activation ;; node procedure
  report sigmoid sum [[activation] of end1 * weight] of my-in-links
end
;; changes weights to correct for errors
to back-propagate
  ;; accumulate the error statistics that TRAIN plots,
  ;; and compute the error for the output nodes;
  ;; this assumes that the sorted list of output nodes is in the same order
  ;; as the list of correct answers
  let example-error 0
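  ;; NetLogo 4.x placeholders: in the foreach below, ?1 holds the current item
  ;; from the first list (the correct answer) and ?2 holds the current item
  ;; from the second list (the corresponding output node)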
  (foreach target-answer (sort output-nodes) [
    ask ?2 [ set error activation * (1 - activation) * (?1 - activation) ]
    set example-error example-error + ((?1 - [activation] of ?2) ^ 2)
  ])
  set epoch-error epoch-error + (example-error / count output-nodes)
  ask hidden-nodes [
    set error activation * (1 - activation) *
               sum [weight * [error] of end2] of my-out-links
  ]
  ask links [
    set weight weight + learning-rate * [error] of end2 * [activation] of end1
  ]
end
;;;
;;; MISC PROCEDURES
;;;
;; reports the sigmoid of the given input: 1 / (1 + e^(-input))
to-report sigmoid [input]
  report 1 / (1 + e ^ (- input))
end

;; reports 1 if the input is greater than 0.5 and 0 otherwise (step function)
to-report step [input]
  ifelse input > 0.5
    [ report 1 ]
    [ report 0 ]
end
;;;
;;; TESTING PROCEDURES
;;;
;; test runs one instance and computes the output
to test
  ;; output the result
  ifelse test-success? input-1 input-2
    [ user-message "Correct." ]
    [ user-message "Incorrect." ]
end

to-report test-success? [n1 n2]
  ask item 0 sort input-nodes [ set activation n1 ]
  ask item 1 sort input-nodes [ set activation n2 ]
  propagate
  report target-answer = map [step [activation] of ?] sort output-nodes
end
; *** NetLogo 4.0 Model Copyright Notice ***
;
; Copyright 2006 by Uri Wilensky. All rights reserved.
;
; Permission to use, modify or redistribute this model is hereby granted,
; provided that both of the following requirements are followed:
; a) this copyright notice is included.
; b) this model will not be redistributed for profit without permission
; from Uri Wilensky.
; Contact Uri Wilensky for appropriate licenses for redistribution for
; profit.
;
; To refer to this model in academic publications, please use:
; Rand, W. and Wilensky, U. (2006). NetLogo Artificial Neural Net model.
; http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet.
; Center for Connected Learning and Computer-Based Modeling,
; Northwestern University, Evanston, IL.
;
; In other publications, please use:
; Copyright 2006 Uri Wilensky. All rights reserved.
; See http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet
; for terms of use.
;
; *** End of NetLogo 4.0 Model Copyright Notice ***

Return to CS 420/527 home page

Return to MacLennan's home page

Send mail to Bruce MacLennan / MacLennan@utk.edu

This page is web.eecs.utk.edu/~mclennan/Classes/420/NetLogo/Artificial-Neural-Net.html
Last updated: 2010-11-03.