Leveraging Application Guidance for More Effective Management of Complex Memories
Computer systems with multiple tiers of memory, each offering different
performance and capabilities, are quickly becoming mainstream. Conventional
data management strategies will need to be adapted to take advantage of the
different types of memory in each tier, but doing so raises significant
challenges that are in urgent need of research. This collection of projects is
developing new tools and infrastructure to extend the benefits of
application-guided data management to a variety of applications on new and
emerging memory architectures, with little or no additional effort from
developers and users.
Our primary research thrusts include:
- developing new profiling and machine learning tools to extract memory
management guidance from application source code and runtime behavior
- building new approaches that use both static and dynamic guidance to
steer data management across complex memory hierarchies
- creating new optimizations that leverage tighter integration of
application- and system-level activities to improve the efficiency of
fundamental memory management operations, including data allocation,
migration, and recycling (a minimal allocation sketch follows this list)
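The sketch below shows the style of interface this guidance enables: the
application allocates data with a tier hint derived from profiling or
analysis, and a runtime layer decides where each object should live. The
mem_tier_t type and guided_alloc function are hypothetical placeholders used
only for illustration, not the actual interface developed in these projects.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical tier hint, e.g., produced by offline profiling or runtime
 * sampling of access behavior. */
typedef enum { TIER_FAST_HBM, TIER_DEFAULT_DRAM, TIER_CAPACITY_NVM } mem_tier_t;

/* Hypothetical guided allocator: a real implementation would map the hint
 * onto NUMA nodes or tier-specific arenas; here it simply falls back to
 * malloc so the sketch stays self-contained. */
static void *guided_alloc(size_t bytes, mem_tier_t hint)
{
    (void)hint;  /* placeholder: dispatch on the hint in a real system */
    return malloc(bytes);
}

int main(void)
{
    /* Guidance from profiling: the index is accessed frequently, while the
     * bulk archive is touched rarely and mainly needs capacity. */
    double *hot_index    = guided_alloc(1 << 20, TIER_FAST_HBM);
    char   *cold_archive = guided_alloc(1 << 26, TIER_CAPACITY_NVM);

    if (!hot_index || !cold_archive)
        return 1;
    printf("allocated hot and cold regions with tier hints\n");
    free(hot_index);
    free(cold_archive);
    return 0;
}
```

In practice, the hints could come from static analysis of the application
source or from profiles collected during earlier runs, so that developers do
not need to annotate their code by hand.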
This work is supported by multiple grants from Intel Corporation and the
National Science Foundation, including an NSF CAREER award. Example
publications include:
TACO 2024 (to appear),
ISMM 2023,
MemSys 2019 (a),
NAS 2018 (Best Paper Award), and
TACO 2018.
The Simplified Interface to Complex Memory
The U.S. Department of Energy (DOE) is working toward new levels of
scientific discovery through increasingly powerful supercomputers. To make
these computing environments viable, the DOE initiated a large effort titled
the Exascale Computing Project (ECP). The Simplified Interface to Complex
Memory (SICM), one of the ECP projects, seeks to deliver a simple and unified
interface to the emerging complex memory hierarchies on exascale nodes. To
achieve this goal, SICM is split into two separate interfaces: a low-level
interface and a high-level interface. The high-level interface delivers an API
that allows applications to allocate, migrate, and persist their data without
detailed knowledge of the underlying memory hardware. To implement these
operations efficiently, the high-level API invokes the low-level interface,
which interacts directly with device-specific services in the operating
system.
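The stub below sketches how the two layers might fit together: the
application states its intent through a high-level call, and a policy inside
that layer forwards the request to a low-level, device-specific allocator.
The device_t, ll_alloc, and hl_alloc_bandwidth_bound names are hypothetical
illustrations of this split, not SICM's actual API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical handle for one memory device (e.g., DRAM, HBM, NVM). In the
 * real system, the low-level interface discovers devices and talks to
 * device-specific OS services; a stub stands in for that layer here. */
typedef struct { const char *name; } device_t;

static device_t dram = { "DRAM" };
static device_t hbm  = { "HBM"  };

/* Hypothetical low-level allocation: a real implementation would bind the
 * mapping to the chosen device (e.g., through NUMA policies). */
static void *ll_alloc(device_t *dev, size_t bytes)
{
    printf("low-level: allocating %zu bytes on %s\n", bytes, dev->name);
    return malloc(bytes);
}

/* Hypothetical high-level call: the application only expresses intent
 * ("this data is bandwidth-bound"); the high-level layer picks a device and
 * invokes the low-level interface on its behalf. */
static void *hl_alloc_bandwidth_bound(size_t bytes)
{
    /* Policy decision hidden from the application: prefer HBM, fall back
     * to DRAM for very large requests (the threshold is illustrative). */
    device_t *dev = (bytes <= ((size_t)1 << 30)) ? &hbm : &dram;
    return ll_alloc(dev, bytes);
}

int main(void)
{
    double *field = hl_alloc_bandwidth_bound(1024 * sizeof(double));
    if (!field)
        return 1;
    free(field);
    return 0;
}
```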
SICM is available for download in public repositories (see Software). Example publications
include:
IJHPCA 2024,
TACO 2022, and
MemSys 2019 (b).
Secure Native Binary Execution
Securing software is typically the responsibility of the software developer.
The customer or end-user of the software does not control or direct the steps
the developer takes to employ best-practice coding styles or mechanisms that
ensure software security and robustness. Current systems and tools also do not
give end-users the ability to determine the level of security of the software
they use. At the same time, any flaws or security vulnerabilities ultimately
affect the end-user of the software. This project aimed to give end-users
greater control to actively assess and secure the software they use.
Publications include:
ISPEC 2021.
Dynamic Compilation
Programs written in managed languages, such as Java and C#, execute in the
context of a virtual machine (VM), also called a runtime system, that compiles
program methods at runtime to achieve high-performance emulation.
Managed runtime systems need to consider several factors when deciding how,
when, or whether to compile program methods, including the compilation speed
and code quality of the available compiler(s), the execution frequency of
individual methods, and the availability of compilation resources.
Our research in this area explores the tradeoffs involved in selective
compilation and the potential of applying iterative search techniques to
dynamic compilers.
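As a concrete illustration, the sketch below shows a counter-based selective
compilation policy in the spirit of tiered JITs: a method is interpreted
until its invocation count crosses a threshold, then promoted to
progressively more aggressive compilers. The tier names, thresholds, and the
maybe_promote helper are illustrative assumptions, not the policy of any
particular VM.

```c
#include <stdio.h>

/* Illustrative tiers: interpretation, a fast baseline JIT, and a slower
 * optimizing JIT that produces the best code quality. */
enum tier { INTERPRETED, QUICK_JIT, OPTIMIZING_JIT };

struct method {
    const char *name;
    long invocations;
    enum tier tier;
};

/* Promote a method to the next tier once it proves hot enough; the
 * thresholds trade compilation cost against expected code quality. */
static void maybe_promote(struct method *m)
{
    m->invocations++;
    if (m->tier == INTERPRETED && m->invocations >= 100) {
        m->tier = QUICK_JIT;       /* cheap compile, modest code quality */
        printf("%s -> quick JIT\n", m->name);
    } else if (m->tier == QUICK_JIT && m->invocations >= 10000) {
        m->tier = OPTIMIZING_JIT;  /* expensive compile, best code quality */
        printf("%s -> optimizing JIT\n", m->name);
    }
}

int main(void)
{
    struct method hot = { "hotLoopBody", 0, INTERPRETED };
    for (long i = 0; i < 20000; i++)
        maybe_promote(&hot);
    return 0;
}
```

Real policies must also decide when and where to spend compilation resources,
for example by running compilation on separate cores, which is one of the
questions examined in the project efforts listed below.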
Project efforts include:
- Scheduling method compilations for JITs with multiple optimization tiers
- Dynamic compilation policies for multi- and many-core machines
- Program profiling for feedback-directed optimization
This work was supported by NSF (CCF-1617954). Example publications include:
LCTES 2017,
TACO 2016,
LCTES 2016,
TACO 2013, and
VEE 2013.
Exploiting Phase Interactions during Phase Order Search
It is widely accepted that program-specific or function-specific compiler
optimization phase sequences achieve better overall performance than any
single fixed optimization phase ordering.
In order to find the best combination of phases to apply to a particular
function or program, researchers have developed iterative search techniques
to quickly evaluate many different orderings of optimization phases.
While such techniques have been shown to be effective, they are also extremely
time-consuming due to the large number of phase combinations that must be
evaluated for each application.
We conduct research that aims to reduce the phase ordering search space by
identifying and exploiting certain interactions between phases during the
search.
In addition to speeding up exhaustive iterative searches, this work has led
to a technique that can improve the efficacy of individual optimization
phases, as well as novel heuristics that find more effective phase orderings
much faster than current approaches.
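The toy search below gives a flavor of the iterative approach: starting from
a default ordering, it repeatedly swaps two phases and keeps the change when
a fitness measure improves. The phase names and the evaluate function are
placeholders; a real search would compile and measure the program for every
candidate ordering, which is why reducing the number of orderings that must
be evaluated matters so much.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPHASES 6
static const char *phases[NPHASES] =
    { "inline", "cse", "licm", "unroll", "sched", "regalloc" };

/* Placeholder fitness function (lower is better). A real search would
 * compile the program with the candidate ordering and measure, e.g.,
 * dynamic instruction count or execution time. */
static double evaluate(const int order[NPHASES])
{
    double cost = 0.0;
    for (int i = 0; i < NPHASES; i++)
        cost += (double)((order[i] + 1) * (i + 1));
    return cost;
}

int main(void)
{
    int order[NPHASES], best[NPHASES];
    for (int i = 0; i < NPHASES; i++)
        best[i] = i;
    double best_cost = evaluate(best);

    srand(42);
    for (int iter = 0; iter < 200; iter++) {
        /* Perturb the best ordering by swapping two randomly chosen phases. */
        memcpy(order, best, sizeof(best));
        int a = rand() % NPHASES, b = rand() % NPHASES;
        int tmp = order[a]; order[a] = order[b]; order[b] = tmp;

        double cost = evaluate(order);
        if (cost < best_cost) {     /* keep only improving orderings */
            best_cost = cost;
            memcpy(best, order, sizeof(order));
        }
    }

    printf("best ordering found:");
    for (int i = 0; i < NPHASES; i++)
        printf(" %s", phases[best[i]]);
    printf("\n");
    return 0;
}
```

Knowledge of which phase pairs actually interact can be used to skip
candidate orderings that cannot produce different code, which is the kind of
pruning this research exploits.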
This work was mostly done at the University of Kansas as a joint effort
between (then PhD student) Michael Jantz and his advisor, Prof. Prasad
Kulkarni. Example publications include:
CASES 2013,
S:P&E 2013,
CASES 2010,
Master's Thesis (2010), and
LCTES 2010.