Assistant Research Fellow (Assistant Professor)  |  Cheng, Hsiang-Yun  
Research Descriptions

My research falls primarily in the field of computer architecture, with an emphasis on memory system design and energy-efficient computing systems. The goal of my research is to improve computer architecture designs based on the challenges and opportunities brought by emerging technologies and modern applications. On the technology side, new technologies, such as non-volatile memories and 3D ICs, are being developed to replace traditional technologies for better performance and lower power consumption. Nevertheless, current computer architectures need to be redesigned to overcome the weaknesses of these new technologies and realize their full potential. On the application side, the design of hardware systems and architectural-level management policies must match the diverse demands of modern applications, such as big-data processing, parallel programming, and machine learning. Below is a list of current research projects and future research directions in my group:

Architectural support for emerging technologies:

Emerging technologies, such as non-volatile memories and 3D ICs, introduce new opportunities and challenges to the design of future computer systems. For example, the high density, low leakage power, and non-volatility of non-volatile memories open new ways to address the memory contention problem in many-core systems and the massive storage requirements of "big data" applications. However, their high write energy, long write latency, and reliability issues prevent non-volatile memories from directly replacing traditional SRAM and DRAM in computer systems. 3D integration is likewise envisioned as a solution to the memory bandwidth problem in future many-core designs, and its heterogeneous integration capability enables new system organizations; nevertheless, the increased power density of 3D die stacking raises thermal issues. We can redesign the memory interface and architectural management policies to leverage the advantages of these emerging technologies while compensating for their weaknesses.
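The write-cost trade-off above can be sketched with a first-order energy model. All per-access and leakage numbers below are illustrative assumptions for the sketch, not device measurements; the point is only that an NVM's low leakage wins for read-mostly workloads, while its expensive writes can erase that advantage under write-heavy access mixes.

```python
# First-order memory energy model (illustrative numbers, not device data).
# NVM: low leakage but costly writes; DRAM: cheap writes but high leakage.

def memory_energy(reads, writes, read_e, write_e, leak_e):
    """Total energy in arbitrary units over a fixed interval."""
    return reads * read_e + writes * write_e + leak_e

DRAM = dict(read_e=1.0, write_e=1.0, leak_e=2_000_000)  # high leakage over the interval
NVM  = dict(read_e=1.0, write_e=5.0, leak_e=100_000)    # near-zero leakage, 5x write cost

for write_ratio in (0.05, 0.50):
    accesses = 1_000_000
    writes = int(accesses * write_ratio)
    reads = accesses - writes
    print(f"write ratio {write_ratio:.0%}: "
          f"DRAM={memory_energy(reads, writes, **DRAM):.2e}  "
          f"NVM={memory_energy(reads, writes, **NVM):.2e}")
```

Under these assumed parameters the NVM uses less total energy at a 5% write ratio but more at a 50% write ratio, which is why architectural policies that steer writes away from NVM matter.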

Architectural support for the dark silicon era:

Recent trends in VLSI technology have led to a dark silicon era. Even though Moore's Law continues to provide increasing transistor counts, the utilization wall limits the number of transistors that can be powered on simultaneously, leaving a large region of dark silicon. To tackle this utilization wall challenge, computer designers are looking for ways to stay on the performance curve with multi-core designs without exceeding the chip's thermal design power. Emerging memory technologies and 3D integration are promising ways to improve energy efficiency: non-volatile memories consume near-zero leakage power, and 3D integration enables heterogeneous integration of different components. We can also explore other architectural solutions, such as heterogeneous computing and low-power circuit designs, to save energy in future computer systems.
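The utilization wall can be illustrated with back-of-envelope arithmetic. The scaling factors below are assumptions for the sketch: transistor count doubles per generation, while per-transistor switching power shrinks only by an assumed 0.7x (under ideal Dennard scaling it would shrink by 0.5x, keeping the active fraction constant).

```python
# Back-of-envelope utilization-wall model (illustrative scaling factors,
# not process data). With a fixed chip power budget, the fraction of
# transistors that can switch simultaneously falls each generation once
# per-transistor power stops halving along with transistor doubling.

def powered_on_fraction(generations,
                        transistor_growth=2.0,
                        power_scaling=0.7):
    """Fraction of the chip that can be active under a fixed power budget
    after `generations` process shrinks (assumed scaling factors)."""
    total_power_growth = (transistor_growth * power_scaling) ** generations
    return min(1.0, 1.0 / total_power_growth)

for g in range(5):
    print(f"generation {g}: {powered_on_fraction(g):.2f} of the chip can be active")
```

With these assumed factors, chip power demand grows 1.4x per generation at full utilization, so the active fraction drops to roughly half the chip after two shrinks; the remainder is "dark."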

Architectural support for modern applications:

Many modern applications rely on processing large and heterogeneous data sets on distributed and heterogeneous computing systems under tight power budgets. One of the biggest challenges in computer system design is therefore how to process data across a broad range of platforms in an energy-efficient manner. To address these issues, we can leverage 3D integration technologies to provide large memory capacity and high memory bandwidth, and add simple logic near the memory chips to support near-data computing for high-performance data processing. Beyond designing memory systems that efficiently process "big data", adding specialized hardware or redesigning the memory interface to accelerate specific applications, such as machine learning, computer vision, and image processing, is also worth studying.
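The appeal of near-data computing can be sketched by counting data movement alone. The per-byte off-chip transfer cost below is a hypothetical placeholder, and the compute energy on both sides is ignored to isolate the movement term: reducing a large array on the host drags every element across the off-chip link, while near-memory logic sends back only the result.

```python
# Illustrative data-movement comparison for near-data computing.
# The pJ/byte figure is an assumed placeholder, not a measurement.

PJ_PER_BYTE_OFFCHIP = 100.0  # assumed off-chip link transfer cost (pJ/byte)

def transfer_energy_pj(bytes_moved):
    """Energy spent just moving bytes across the off-chip link."""
    return bytes_moved * PJ_PER_BYTE_OFFCHIP

ARRAY_BYTES = 1 << 30   # reduce a 1 GiB array down to ...
RESULT_BYTES = 8        # ... a single 64-bit sum

host_side = transfer_energy_pj(ARRAY_BYTES)    # every element crosses the link
near_data = transfer_energy_pj(RESULT_BYTES)   # only the final sum crosses

print(f"data-movement ratio: {host_side / near_data:.1e}x")
```

For a reduction, the movement ratio is simply input size over result size, which is why operators with large inputs and small outputs are the natural first candidates for near-data offload.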

Leveraging application implications in architecture designs:

Hints provided by the user, the programming language, or the compiler and runtime can be exploited by hardware and the hardware/software interface to improve performance and energy efficiency when running applications. For example, stream-programming properties can be exploited to improve the performance of multi-core systems by throttling memory threads. The read/write access patterns explicitly or implicitly exposed by programming models and compilers can also be leveraged to develop energy-efficient data placement policies for hybrid memory systems built from traditional SRAM/DRAM and emerging non-volatile memories. By designing cooperative software/hardware mechanisms, we can uncover many opportunities for performance improvement and energy savings in future computer systems.
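A minimal sketch of such a placement policy, assuming hypothetical page names, profile counts, and threshold: pages whose software-provided access hints mark them as write-intensive go to DRAM, where writes are cheap, while read-mostly pages go to NVM for its capacity and low leakage.

```python
# Minimal write-intensity-guided page placement sketch for a hybrid
# DRAM/NVM memory. Page names, counts, and the threshold are hypothetical
# illustrations of compiler- or profile-provided access hints.

def place_pages(pages, write_threshold=0.3):
    """pages: dict of page_id -> (reads, writes); returns page_id -> tier."""
    placement = {}
    for pid, (reads, writes) in pages.items():
        total = reads + writes
        write_ratio = writes / total if total else 0.0
        # Write-heavy pages avoid NVM's costly writes; read-mostly pages
        # exploit NVM's density and near-zero leakage.
        placement[pid] = "DRAM" if write_ratio >= write_threshold else "NVM"
    return placement

profile = {"code": (900, 10), "log_buffer": (50, 400), "dataset": (5000, 100)}
print(place_pages(profile))
# → {'code': 'NVM', 'log_buffer': 'DRAM', 'dataset': 'NVM'}
```

A real policy would of course migrate pages as access patterns change and account for migration cost, but the static version shows how a simple software hint becomes a hardware placement decision.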