Micah Beck

Associate Professor
Dept. of Electrical Engineering and Computer Science
University of Tennessee


Recent Papers and Presentations

"Pervasively Distributed CyberInfrastructure for Yottascale Data Ecosystems"
Micah Beck, Terry Moore
To be presented at ASPLOS 2018 Grand Challenges Workshop

"Interoperable Convergence of Storage, Networking and Computation"
Micah Beck, Terry Moore, Piotr Luszczek
https://arxiv.org/abs/1706.07519

"On The Hourglass Model, The End-to-End Principle and Deployment Scalability"
Micah Beck
http://philsci-archive.pitt.edu/12626

"Interoperable Convergence of Storage, Networking and Processing at Layer 2"
Micah Beck, talk presented at the Department of Energy on 8/2/2017

"In Case of Rapture, Can I Have Your Data?"
Micah Beck, talk presented at DLF Forum 2017



Research

Dr. Beck has been an active researcher in a number of areas of computer systems, including distributed operating systems, the theory of distributed computation, compilers, parallel computation, networking and storage.


Data Logistics

Logistical Computing and Internetworking (LoCI)

The Logistical Computing and Internetworking (LoCI) Laboratory is devoted to information logistics in distributed computer systems and networks. Information logistics is the study of the flexible coscheduling of the fundamental physical resources that underpin computer systems: storage, computation, and data transmission. The term is used in analogy to conventional logistics, which focuses on the coscheduling of the movement, storage, and processing of military and industrial materiel. The approach taken by LoCI Lab researchers applies the architectural features of the Internet as a data transmission medium to analogous infrastructure for storage and computation. The core mission of the laboratory is the design and implementation of a Resource Fabric, or generalized information logistics infrastructure, to support advanced applications that are not adequately served by the conventional model of Internetworking.

The Internet Backplane Protocol (IBP) and the exNode

The Internet Backplane Protocol (IBP) is middleware for managing and using remote storage. It was invented to support Logistical Networking in large-scale distributed systems and applications. One of the most important tools for building distributed systems with IBP is the exNode data structure and the tools based on it.
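
The exNode is commonly described as a generalization of the Unix inode: instead of mapping a file's extents to local disk blocks, it maps them to (possibly replicated, time-limited) allocations on remote IBP depots. The sketch below is a minimal, hypothetical Python rendering of that idea; the field names and the locate method are illustrative assumptions, not the actual exNode schema or API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Allocation:
    depot: str          # host:port of the IBP depot holding the bytes
    read_cap: str       # capability granting read access to the allocation
    duration: int       # remaining lease time in seconds (IBP storage is leased)

@dataclass
class Extent:
    offset: int         # starting offset within the logical file
    length: int         # number of bytes covered by this extent
    replicas: List[Allocation] = field(default_factory=list)   # redundant copies

@dataclass
class ExNode:
    name: str
    size: int
    extents: List[Extent] = field(default_factory=list)

    def locate(self, byte: int) -> List[Allocation]:
        """Return the allocations holding the given byte of the logical file."""
        return [a for e in self.extents
                if e.offset <= byte < e.offset + e.length
                for a in e.replicas]

Because each extent can carry several allocations on different depots, tools built on such a structure can replicate, migrate, or reassemble data without changing the logical file the exNode describes.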

The Data Logistics Toolkit

See project Web site.

Resilient System Solutions for Data in Wildland Fire Incident Operations

The goal of the project is to improve sharing of operation-critical wildland fire data and information during wildfire incident operations through improved data access technologies. The objectives are: (1) Work with the wildland fire management community to define specific requirements of an enhanced, resilient data sharing system; (2) Co-develop software systems for data logistics based on existing tools, including future proofing and generation of ideas to advance capabilities with further R&D; and (3) Deploy and test a prototype hardware-software system with fire operations personnel that integrates the new data sharing system with existing capabilities using relevant data.

The project is a collaboration led by Nancy French of Michigan Tech Research Institute with co-investigator Martin Swany of Indiana University. My research focus is in the study/development of asynchronous routing protocols for wildland fire operations. See project poster.

Deployment Scalability in Layered Systems

Convergence of Storage, Networking and Computing

Bridging the Digital Divide


Digital Preservation under Weak Assumptions

The goal of Digital Preservation is the maintenance of human knowledge through the preservation and interpretation of digital objects. Any long-term storage is a process of writing data to a storage medium, periodic fixity checking and error correction (antientropy), media migration, and format translation, all while maintaining verifiable provenance and other metadata. The success of this process over the long term (100 years) depends on a number of strong assumptions, either explicit or implicit. Key among these assumptions is a bound on the number of uncorrected errors that can be concentrated in any data partition. My research in Digital Preservation is focused on questioning this assumption and developing approaches that weaken it substantially, ideally degrading gracefully under an unbounded number of errors in the preservation process.
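
As a small, hypothetical illustration of one step in this process, the sketch below shows a minimal fixity-checking (antientropy) pass in Python: each preserved object's digest is recomputed and compared against the value recorded in its metadata, and mismatches are handed to a caller-supplied repair routine. The manifest format and the repair hook are assumptions made for the example, not part of any particular preservation system.

import hashlib

def fixity_check(path, recorded_sha256):
    """Recompute the object's SHA-256 digest and compare it to the recorded value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == recorded_sha256

def antientropy_pass(manifest, repair):
    """One periodic pass: flag objects whose stored bytes no longer match their
    recorded digest and hand them to a repair routine (e.g. restore from a
    replica on another medium)."""
    for obj in manifest:            # each obj: {"path": ..., "sha256": ...}
        if not fixity_check(obj["path"], obj["sha256"]):
            repair(obj)

Note that such a pass only detects and repairs errors up to whatever redundancy the repair routine can draw on; the point of this research is to ask what can still be preserved when that bound is exceeded.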


Some Past Projects



Classes Recently Taught

Fall 2017

Spring 2018