Memory Management for Modern and Emerging Architectures
Performance and energy efficiency in the memory subsystem are critical
factors for a wide range of computing applications.
Trends such as the increasing demand for data analytics (i.e., Big Data) and
the desire to multiplex physical resources to improve efficiency have driven
the adoption of memory systems with greater power, bandwidth, and capacity
requirements.
Additionally, with the advent of new memory technologies (such as persistent
memory and die-stacked RAMs), systems are beginning to incorporate multiple
types of memory devices, each with varying capabilities and performance.
Not surprisingly, precise control over the distribution and usage of memory
power and bandwidth is difficult to obtain when system memory is virtualized.
These effects depend upon the assignment of virtual addresses to the
application's data objects, the OS binding of virtual to physical addresses,
and the mapping of physical pages to hardware memory devices.
To address these challenges, we are investigating approaches to memory
management that increase collaboration between layers of the vertical
execution stack (i.e., compilers, applications, middleware, operating system,
and hardware).
Our current projects include:
- Development of offline and online profiling to predict data access patterns
- Cross-layer strategies to improve performance and power efficiency in DRAM
- Adapting heap layouts to exploit features of heterogeneous memory architectures
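As a concrete (and purely hypothetical) illustration of the offline-profiling idea above, the sketch below ranks allocation sites by access frequency so that a runtime could steer hot objects toward a fast memory tier (e.g., die-stacked RAM) and cold objects toward a capacity tier. The trace format, the `classify_sites` helper, and the threshold are invented for illustration and are not part of our published approach:

```python
from collections import Counter

def classify_sites(access_trace, hot_fraction=0.2):
    """Given a trace of accesses (one site id per access), return the set
    of allocation-site ids in the hottest `hot_fraction` of ranked sites.
    A runtime could place objects from these sites in fast-tier memory."""
    counts = Counter(access_trace)
    ranked = [site for site, _ in counts.most_common()]
    n_hot = max(1, int(len(ranked) * hot_fraction))
    return set(ranked[:n_hot])

# Toy trace: site "A" dominates the access stream.
trace = ["A"] * 90 + ["B"] * 7 + ["C"] * 3
hot = classify_sites(trace, hot_fraction=0.34)
print(hot)  # → {'A'}
```

In a real system the trace would come from hardware sampling or instrumented profiling runs, and the classification would feed guidance to the allocator rather than a simple set.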
Publications:
MemSys 2019 (a),
MemSys 2019 (b),
CC 2019,
NAS 2018,
TACO 2018,
ARCS 2018,
OOPSLA 2015,
VEE 2013
Dynamic Compilation
Programs written in managed languages, such as Java and C#, execute in the
context of a virtual machine (VM), also called a runtime system, that compiles
program methods at runtime to achieve high-performance emulation.
Managed runtime systems must weigh several factors when deciding how,
when, or whether to compile program methods, including: the compilation speed
code quality produced by the available compiler(s), the execution frequency
of individual methods, and the availability of compilation resources.
Our research in this area explores tradeoffs involved in selective
compilation and the potential of applying iterative search techniques to
dynamic compilers.
We are currently exploring:
- How to schedule method compiles for JITs with multiple optimization tiers
- Dynamic compilation policies for multi- and many-core machines
- Program profiling for feedback-directed optimization
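To illustrate the kind of tradeoff selective compilation navigates, here is a minimal, hypothetical counter-based tiered policy: methods start out interpreted, are compiled with a fast baseline compiler once warm, and are recompiled with an optimizing compiler once hot. The class, thresholds, and tier names below are illustrative and not drawn from any particular VM:

```python
# Illustrative thresholds; real VMs tune these per compiler and workload.
BASELINE_THRESHOLD = 10
OPTIMIZING_THRESHOLD = 100

class MethodProfile:
    """Tracks invocation counts for one method and promotes it through
    compilation tiers as it gets hotter."""
    def __init__(self, name):
        self.name = name
        self.invocations = 0
        self.tier = "interpreted"

    def invoke(self):
        self.invocations += 1
        if self.tier == "interpreted" and self.invocations >= BASELINE_THRESHOLD:
            self.tier = "baseline"     # cheap compile, modest code quality
        elif self.tier == "baseline" and self.invocations >= OPTIMIZING_THRESHOLD:
            self.tier = "optimizing"   # expensive compile, best code quality

m = MethodProfile("Foo.bar")
for _ in range(150):
    m.invoke()
print(m.tier)  # → optimizing
```

The policy question our research studies is precisely where such thresholds should sit given compiler speed, generated code quality, and available compilation resources; a counter scheme like this is only the simplest possible mechanism.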
Publications:
LCTES 2017,
TACO 2016,
LCTES 2016,
TACO 2013,
VEE 2013
Exploiting Phase Interactions during Phase Order Search
It is widely accepted that program-specific or function-specific compiler
optimization phase sequences achieve better overall performance than any
single fixed optimization phase ordering.
In order to find the best combination of phases to apply to a particular
function or program, researchers have developed iterative search techniques
to quickly evaluate many different orderings of optimization phases.
While such techniques have been shown to be effective, they are also extremely
time consuming due to the large number of phase combinations that must be
evaluated for each application.
We conduct research that aims to reduce the phase ordering search space by
identifying and exploiting certain interactions between phases during the
search.
In addition to speeding up exhaustive iterative searches, this work has led
to the invention of a technique that can improve the efficacy of individual
optimization phases, as well as novel heuristics that find more effective
phase ordering sequences much faster than current approaches.
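The following toy sketch (not our published technique) shows one way phase interactions can shrink a search space: if two orderings produce identical code, only one of them needs further evaluation. The phases and the string-based code model are invented for illustration only:

```python
from itertools import permutations

# Phases modeled as pure string rewrites on a toy "code" value.
def const_fold(code): return code.replace("1+1", "2")
def dead_code(code):  return code.replace(";nop", "")
def strength(code):   return code.replace("*2", "<<1")

PHASES = {"fold": const_fold, "dce": dead_code, "str": strength}

def search(code):
    """Enumerate all phase orderings, but keep only orderings whose final
    state has not already been produced; duplicates are pruned."""
    seen, unique = set(), []
    for order in permutations(PHASES):
        state = code
        for name in order:
            state = PHASES[name](state)
        if state not in seen:
            seen.add(state)
            unique.append((order, state))
    return unique

results = search("x=1+1*2;nop")
print(len(results))  # → 1: all six orderings converge to the same code
```

Here the three phases happen to be independent, so all 3! = 6 orderings collapse to a single distinct outcome; real searches prune on intermediate function states as well, which is where detecting phase interactions pays off.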
Publications:
CASES 2013,
S:P&E 2013,
CASES 2010,
Masters Thesis (2010),
LCTES 2010