CS 420: Advanced Topics in Machine Intelligence
Spring 1997: Neural Networks and Cognition

Instructor:
Bruce MacLennan
Phone: 974-5067
Office: 110-A Ayres
Hours: 1-2 TR or by appointment
Email: maclennan@cs.utk.edu

Teaching Assistant:
Ray Byler
Phone: 4-8807
Office: 5 Ayres
Hours: 9-10 WR
Email: byler@cs.utk.edu

Classes: 3:40-4:55 TR in BU 655

Directory of Handouts, Labs, etc.

PDP Directory


Information


Description

CS 420 covers advanced topics in machine intelligence with an emphasis on faculty research. In the Spring semester of 1997 the topic will be neural networks, a promising new approach to many AI problems, including knowledge representation, learning, pattern recognition, robotic control, natural language understanding, and inference.

This course will be based on a new text, J. A. Anderson's An Introduction to Neural Networks (MIT Press, 1995). In addition to basic neural network theory and applications, it discusses the applications of neural networks to modeling human cognition, so the course should be useful to anyone with an interest in artificial intelligence or cognitive science.


Prerequisites

Basic calculus through differential equations (Mat 231, 241), linear algebra (Mat 251), probability and statistics (Mat 222). Ability to program in Lisp (e.g. from CS 320) is not necessary for this AI course, since neural network programs use simple data structures. However, you should be able to program competently in some programming language.

Grading

It is anticipated that grades will be based primarily on the
laboratory projects (see below).

Laboratories

Although the course does not have formal labs, there will be a number of lab projects in which students will get hands-on experience designing neural nets to solve some simple problems. Some projects will involve using existing neural net simulators, while other projects will be programmed by the student.

Text

J. A. Anderson, An Introduction to Neural Networks (MIT Press, 1995)

Tentative List of Topics

  1. Properties of Single Neurons
  2. Synaptic Integration and Neuron Models
  3. Lateral Inhibition and Sensory Processing
  4. The Linear Associator: Background and Foundations
  5. The Linear Associator: Simulations
  6. Early Network Models: The Perceptron
  7. Gradient Descent Algorithms
  8. Application of Simple Associators: Concept Formation and Object Motion
  9. Energy and Neural Networks: Hopfield Networks and Boltzmann Machines
  10. Nearest Neighbor Models
  11. Adaptive Maps
  12. The BSB Model: A Simple Nonlinear Autoassociative Neural Network
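To give a flavor of the kind of model covered early in the course, here is a minimal sketch of the linear associator (topics 4-5) using Hebbian outer-product learning. This is an illustrative example, not course material: the function names and patterns below are made up for the demonstration.

```python
import numpy as np

# Linear associator with Hebbian outer-product learning:
# the weight matrix is W = sum over stored pairs of g f^T,
# where f is an input pattern and g its associated output.

def train(pairs):
    """Build the weight matrix from (input, output) pattern pairs."""
    n_in = len(pairs[0][0])
    n_out = len(pairs[0][1])
    W = np.zeros((n_out, n_in))
    for f, g in pairs:
        W += np.outer(g, f)  # Hebbian update for one pair
    return W

def recall(W, f):
    """Linear recall: the output is simply W applied to the input."""
    return W @ f

# With orthonormal input patterns, recall is exact:
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0])
g1 = np.array([0.5, 0.5])
g2 = np.array([0.25, -0.25])

W = train([(f1, g1), (f2, g2)])
print(np.allclose(recall(W, f1), g1))  # True
print(np.allclose(recall(W, f2), g2))  # True
```

When the stored input patterns are not orthogonal, recall mixes in crosstalk from the other stored associations, which is one of the themes the simulation labs explore.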

Return to MacLennan's home page

Send mail to Bruce MacLennan / MacLennan@cs.utk.edu
Last updated: Tue Jan 28 11:19:27 EST 1997