CS 594-004

Scientific Computing for Engineers:  Spring 2007 – 3 Credits

Wednesdays from 1:30 – 4:15

Room C211

Prof. Jack Dongarra with help from Profs. George Bosilca, Shirley Moore, and Stan Tomov

Email: dongarra@cs.utk.edu

Phone: 865-974-8295

Fax: 865-974-8296

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA: Erika Fuentes efuentes@cs.utk.edu

Office: 228 Claxton Complex, 974-9954

Office hours: Mondays 11:00 – 1:00, or by request

There will be four major aspects of the course:

·         Part I will start with current trends in high-end computing systems and environments, and continue with a short practical introduction to parallel programming with MPI, OpenMP, and Pthreads (a minimal message-passing example appears after this list).

 

·         Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximations.

 

·         Part III will be on solvers: iterative methods for the sparse problems of Part II, and direct methods for dense matrix problems. Algorithmic and practical implementation aspects will be covered.

 

·         Finally, in Part IV, various software tools will be surveyed and used. These will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.
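
To give a flavor of the message-passing style covered in Part I, here is a minimal sketch in C with MPI. It is a hypothetical illustration, not part of the course materials, though it follows the same idea as the Pi example handed out with the January 17 lecture: every process integrates a strided share of the intervals, and the partial sums are combined with a reduction.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, i, n = 1000000;
        double h, x, sum = 0.0, pi;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my process id      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

        /* Midpoint rule for pi = integral of 4/(1+x^2) over [0,1];
           rank r takes intervals r, r+size, r+2*size, ... */
        h = 1.0 / n;
        for (i = rank; i < n; i += size) {
            x = h * (i + 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        sum *= h;

        /* combine the partial sums on rank 0 */
        MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi is approximately %.16f\n", pi);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with mpirun -np 4 (under MPICH, the implementation installed on TORC; see below), the same binary runs unchanged on any number of processes.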

 

The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project are flexible and may be chosen according to the student's major area of research.

 

 

Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA:

 

Charles Hawley          hawley@cs.utk.edu

Michael Kuhn            mkuhn@cmr.utk.edu

Mark Lenox              marklexox@comcast.com

Daniel Lucio            lucio@cs.utk.edu

Teng Ma                 tma@cs.utk.edu

Brandon Merkl           bmerkl@utk.edu

Matthew Parsons         parsons@utk.edu 

Matthew Strobel         mstrobel@cs.utk.edu

Thad Thompson           tthompso@cs.utk.edu

Asim YarKhan            yarkhan@cs.utk.edu

Haihang You             you@cs.utk.edu

 

 

Course mailing list: cs594parallel-students@cs.utk.edu

 

 



Book for the Class:

The Sourcebook of Parallel Computing, edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, and Andy White, Morgan Kaufmann Publishers, October 2002, 760 pages, ISBN 1-55860-871-0.

Lecture Notes: (Tentative outline of the class)

 

1. January 10 (Dr. Dongarra)

Class Introduction

Introduction to High Performance Computing

Read Chapters 1, 2, and 9

Homework 1 (due January 24, 2007)

Tar file of timer

2. January 17 (Dr. Bosilca)

Message Passing

Additional Slides

MPI quick reference card

Pi example

Read Chapter 10

3. January 24 (Dr. Bosilca)

Message Passing 2

Homework 2 (due February 7, 2007)

Notes on booting over the network

Read Chapter 11

4. January 31 (Dr. Dongarra with Dr. Jakub Kurzak)

HPC Architectures and the IBM Cell Processor

Homework 3 tarball (due February 14, 2007)

Read Chapter 3

5. February 7 (Dr. Bosilca)

Parallel Programming Paradigms

6. February 14 (Dr. Bosilca)

Parallel Programming Paradigms and Performance (continuing with the slides from last week)

Homework 4 (due February 28, 2007)

(A driver for the first part of the homework will be sent by email in a few days. The first part will be graded for correctness and performance.) (driver.c, fifo.h, Makefile)

7. February 21 (Dr. Tomov)

Projection and its importance in scientific computing

Homework 5 (due March 7, 2007)

8. February 28 (Dr. Tomov)

Discretization of PDEs and tools for the parallel solution of the resulting systems

Mesh Generation and Load Balancing

Homework 6 (due March 21, 2007), Tar file for hw6
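
As a concrete illustration of the discretization topic in this lecture, here is a hedged sketch in C of the standard second-order finite-difference treatment of the 1D Poisson problem -u'' = f on (0,1) with u(0) = u(1) = 0. It is a hypothetical example, not course material: the [-1 2 -1]/h^2 stencil yields a tridiagonal system, solved here by the Thomas algorithm.

    #include <stdlib.h>

    /* Solve -u'' = f on (0,1), u(0)=u(1)=0, at n interior mesh points
       with spacing h = 1/(n+1).  The finite-difference matrix is
       tridiagonal: 2/h^2 on the diagonal, -1/h^2 off it. */
    void poisson1d(int n, double (*f)(double), double *u)
    {
        double h   = 1.0 / (n + 1);
        double *b  = malloc(n * sizeof *b);   /* diagonal (modified in place) */
        double *r  = malloc(n * sizeof *r);   /* right-hand side              */
        double off = -1.0 / (h * h);          /* constant off-diagonal entry  */
        int i;

        for (i = 0; i < n; i++) {
            b[i] = 2.0 / (h * h);
            r[i] = f((i + 1) * h);
        }
        /* forward elimination (Thomas algorithm) */
        for (i = 1; i < n; i++) {
            double m = off / b[i - 1];
            b[i] -= m * off;
            r[i] -= m * r[i - 1];
        }
        /* back substitution */
        u[n - 1] = r[n - 1] / b[n - 1];
        for (i = n - 2; i >= 0; i--)
            u[i] = (r[i] - off * u[i + 1]) / b[i];

        free(b);
        free(r);
    }

In two or three dimensions the same stencil produces the large sparse systems whose parallel solution, mesh generation, and load balancing are the subject of this lecture.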

9. March 7 (Dr. Dongarra)

Floating Point Arithmetic, Memory Hierarchy and Cache

Homework 7 (due March 28, 2007)

Read Chapter 3

Toward an Optimal Algorithm for Matrix Multiply

Read Chapter 20

Bailey’s paper on “12 ways to fool …”
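
To make the memory-hierarchy discussion in this lecture concrete, here is a hedged sketch of cache blocking for matrix multiply, the central technique behind "Toward an Optimal Algorithm for Matrix Multiply". It is an illustrative example, not the lecture's code: the three loops are tiled so that small blocks of A, B, and C are reused while they sit in cache.

    #include <stddef.h>

    #define NB 64   /* block size: pick NB so three NB x NB blocks fit in cache */

    /* C = C + A*B for n x n row-major matrices, tiled into NB x NB blocks. */
    void matmul_blocked(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < n; ii += NB)
            for (size_t kk = 0; kk < n; kk += NB)
                for (size_t jj = 0; jj < n; jj += NB) {
                    size_t imax = (ii + NB < n) ? ii + NB : n;
                    size_t kmax = (kk + NB < n) ? kk + NB : n;
                    size_t jmax = (jj + NB < n) ? jj + NB : n;
                    for (size_t i = ii; i < imax; i++)
                        for (size_t k = kk; k < kmax; k++) {
                            double a = A[i * n + k];   /* held in a register */
                            for (size_t j = jj; j < jmax; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
                }
    }

Each matrix element now moves through the cache once per block rather than once per use, raising the ratio of flops to memory traffic from O(1) to O(NB); this is the structure underneath tuned BLAS implementations.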

 

March 14 – Spring Break

 

10. March 21 (Dr. Tomov)

Sparse matrices and optimized parallel implementations

Homework 8 (due April 4, 2007)

Matrix for HW8
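
A hedged sketch of the data structure at the center of this lecture: the sparse matrix-vector product in compressed sparse row (CSR) format. This is illustrative only; the type and field names are the conventional ones, not anything tied to HW8 or its matrix.

    #include <stddef.h>

    /* CSR storage: val holds the nonzeros row by row, colind the column
       index of each nonzero, and rowptr[i] .. rowptr[i+1]-1 delimits the
       entries of row i within val/colind. */
    typedef struct {
        size_t  n;       /* number of rows */
        size_t *rowptr;  /* length n + 1   */
        size_t *colind;  /* length nnz     */
        double *val;     /* length nnz     */
    } csr_matrix;

    /* y = A * x */
    void csr_matvec(const csr_matrix *A, const double *x, double *y)
    {
        for (size_t i = 0; i < A->n; i++) {
            double sum = 0.0;
            for (size_t k = A->rowptr[i]; k < A->rowptr[i + 1]; k++)
                sum += A->val[k] * x[A->colind[k]];
            y[i] = sum;
        }
    }

Distributing contiguous blocks of rows across processes, with the needed pieces of x exchanged beforehand, is the usual starting point for the optimized parallel implementations discussed in class.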

11. March 28 (Dr. Dongarra)

Dense Linear Algebra

Homework 9 (due April 11, 2007)

Read Chapter 20

12. April 4 (Dr. Dongarra)

Dense Linear Algebra (part 2) and Grid Computing

Read Chapter 14, pp. 409–442

13. April 11 (Dr. Tomov)

Iterative Methods in Linear Algebra (part 1)

Read Chapters 20 and 21
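
As a preview of the iterative methods in this lecture and the next, here is a hedged sketch of Jacobi iteration, the simplest of the splitting methods treated in the Templates book listed among the on-line textbooks below. It is a hypothetical illustration (dense storage for brevity, though in practice these methods are applied to the sparse systems above): each sweep recomputes every unknown from the current values of the others.

    #include <math.h>
    #include <stddef.h>

    /* Jacobi iteration for A*x = b, with A an n x n row-major matrix
       with nonzero diagonal; xnew is caller-supplied scratch of length n.
       Returns the number of sweeps performed. */
    int jacobi(size_t n, const double *A, const double *b,
               double *x, double *xnew, double tol, int maxit)
    {
        for (int it = 0; it < maxit; it++) {
            double diff = 0.0;
            for (size_t i = 0; i < n; i++) {
                double sum = b[i];
                for (size_t j = 0; j < n; j++)
                    if (j != i)
                        sum -= A[i * n + j] * x[j];   /* off-diagonal part */
                xnew[i] = sum / A[i * n + i];
                diff = fmax(diff, fabs(xnew[i] - x[i]));
            }
            for (size_t i = 0; i < n; i++)            /* commit the sweep */
                x[i] = xnew[i];
            if (diff < tol)
                return it + 1;                        /* converged */
        }
        return maxit;
    }

Jacobi converges for strictly diagonally dominant matrices but only slowly; the Krylov methods covered in part 2, such as conjugate gradients, converge far faster on the symmetric positive definite systems that the PDEs of Part II produce.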

14. April 18 (Dr. Tomov)

Iterative Methods in Linear Algebra (part 2)

Read Chapter 21

15. April 25 – Last Class (Dr. Moore)

Performance Analysis Tools

Read Chapter 15

16. May 2 (starting at 12:30)

Class final reports

Order of presentation:

 

·  Project reports are to be turned in on Tuesday, May 1.


Additional Reading Materials


Message Passing Systems

The PVM home page.


The MPI home page.
This is the best place to get information on MPI, including implementations and the MPI Forum itself.

The implementation you should use, which is installed on TORC (the Tennessee Oak Ridge Cluster), is MPICH.
A duplex postscript version of the MPI 1.1 API is available (with thanks to the LAM team). There is also a version of MPI developed here at UTK that can recover from faults; see http://icl.cs.utk.edu/ft-mpi/

 


Other useful reference material

 

·  Here are pointers to specifications for various processors:

http://www.cpu-world.com/CPUs/index.html

http://www.cpu-world.com/sspec/index.html

http://processorfinder.intel.com/scripts/default.asp

 

 

A good introduction to message passing systems.


J.J. Dongarra, G.E. Fagg, R. Hempel, and D. Walker, chapter in the Wiley Encyclopedia of Electrical and Electronics Engineering. (postscript version)

``Message Passing Interfaces'', special issue of Parallel Computing, vol. 20(4), April 1994.

A paper by members of the PVM team on the differences between PVM and MPI.

Geist, G.A., J.A. Kohl, and P.M. Papadopoulos, ``PVM and MPI: A Comparison of Features'', Calculateurs Paralleles, 8(2), pp. 137-150, June 1996.

Papers by members of the MPI team on the differences between PVM and MPI.

``Why Are PVM and MPI So Different?'', William Gropp and Ewing Lusk (submitted to the Fourth European PVM/MPI Users' Group Meeting)

and

``PVM and MPI are completely different'', William Gropp and Ewing Lusk, to appear in the journal Future Generation Computer Systems, 1998.

Ian Foster, Designing and Building Parallel Programs, see http://www-unix.mcs.anl.gov/dbpp/

Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.

Ananth Grama et al., Introduction to Parallel Computing

Michael Quinn, Parallel Programming, see http://web.engr.oregonstate.edu/~quinn/Comparison.htm

David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html

George Almasi and Allan Gottlieb, Highly Parallel Computing

 

 


Standard Books on Message Passing

``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.

``Using MPI,''
by William Gropp, Ewing Lusk, and Anthony Skjellum, published by MIT Press, October 1994; ISBN 0-262-57104-8.

``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.


On-line Documentation and Information about Machines

·         Overview of Recent Supercomputers, Aad J. van der Steen and Jack J. Dongarra, 2007.

 

·         Catalog of Commercial Hardware and Software Vendors

 

Other Parallel Information Sites

·  NHSE - National HPCC Software Exchange

·  Netlib Repository at UTK/ORNL

·  BLAS Quick Reference Card

·  LAPACK

·  ScaLAPACK

·  GAMS - Guide to Available Math Software

·  Center for Research on Parallel Computation (CRPC)

·  Supercomputing & Parallel Computing: Conferences

·  Supercomputing & Parallel Computing: Journals

·  High Performance Fortran (HPF) reports

·  High Performance Fortran Resource List

·  Fortran 90 Resource List

·  Major Science Research Institutions from Caltech

·  Message Passing Interface (MPI) Forum

·  High Performance Fortran Forum

·  OpenMP

·  PVM

·  Parallel Tools Consortium

·  DoD High Performance Computing Modernization Program

·  DoE Accelerated Strategic Computing Initiative (ASCI)

·  National Computational Science Alliance


Related On-line Textbooks

·  Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994.

·  PVM - A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press, Boston, 1994.

·  MPI : A Message-Passing Interface Standard

·  LAPACK Users' Guide (Second Edition), SIAM Publications, Philadelphia, 1995.

·  MPI: The Complete Reference, MIT Press, Boston, 1996.

· Using MPI: Portable Parallel Programming with the Message-Passing Interface by W. Gropp, E. Lusk, and A. Skjellum

·  Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

·  Computational Science Education Project

·  Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.

·  High Performance Fortran (HPF), a course offered by Manchester and North High Performance Computing Training & Education Centre, United Kingdom

For performance analysis:

·  Raj Jain, The Art of Computer Systems Performance Analysis. John Wiley, 1991.

Papers on performance analysis tools:

·  Ruth A. Aydt, "The Pablo Self-Defining Data Format", November 1997.

·  Jeffrey K. Hollingsworth, Barton P. Miller, Marcelo J. R. Gongalves, Oscar Naim, Zhichen Xu, and Ling Zheng, "MDL: A Language and Compiler for Dynamic Program Instrumentation", International Conference on Parallel Architectures and Compilation Techniques, San Francisco, CA, November 1997.

·  Barton P. Miller, Mark D. Callaghan, Jonathan M. Cargille, Jeffrey K. Hollingsworth, R. Bruce Irvin, Karen L. Karavanic, Krishna Kunchithapadam, and Tia Newhall, "The Paradyn Parallel Performance Measurement Tools", IEEE Computer, 28(11), November 1995.

·  Steven T. Hackstadt and Allen D. Malony, "Distributed Array Query and Visualization for High Performance Fortran", February 1996.

·  Jerry Yan, Sekhar Sarukkai, and Pankaj Mehra, "Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs using the AIMS toolkit", Software Practice and Experience, 25(4), April 1995, pp. 429-461.

 

Other Online Software and Documentation

·  Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. A primer (for version 4.0/4.1 of Matlab, not too different from the current version) is available in either postscript or pdf.

·  Netlib, a repository of numerical software and related documentation

·  Netlib Search Facility, a way to search for the software on Netlib that you need

·  GAMS - Guide to Available Math Software, another search facility to find numerical software

·  Linear Algebra Software Libraries and Collections

·  LAPACK, state-of-the-art software for dense numerical linear algebra on workstations and shared-memory parallel computers. Written in Fortran.

·  CLAPACK, a C version of LAPACK.
(For a partial C++ version, see LAPACK++ on Roldan Pozo's homepage)

·  LAPACK Manual

·  ScaLAPACK, a partial version of LAPACK for distributed-memory parallel computers.

·  ScaLAPACK manual

·  LINPACK and EISPACK are precursors of LAPACK, dealing with linear systems and eigenvalue problems, respectively.

·  SuperLU, a fast implementation of sparse Gaussian elimination for sequential and parallel computers.

·  Sources of test matrices for sparse matrix algorithms

·  Matrix Market

·  University of Florida Sparse Matrix Collection

·  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and postscript) as well as software.

·  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.

·  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

·  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

·  Resources for Parallel and High Performance Computing

·  Millennium, a UC Berkeley campus-wide parallel computing resource

·  Resources for CS 267, Applications of Parallel Computers

·  ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.

·  PETSc: Portable, Extensible, Toolkit for Scientific Computation

·  NHSE - National High Performance Computing and Communications Software Exchange, pointers to related work across the country.

·  Issues related to Computer Arithmetic and Error Analysis

·  Efficient software for very high precision floating point arithmetic

·  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

·  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

·  Report on the arithmetic error that caused the Ariane 5 rocket crash

·  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

 

Jack Dongarra
4/28/2007