Abstract
Beyond-5G (B5G) applications such as Augmented/Virtual/Mixed Reality (AR/VR/MR) are latency-critical due to their stringent Quality-of-Experience (QoE) requirements. Because of the limited battery life of AR/VR devices, edge-assisted AR/VR, where the user equipment offloads heavy computational tasks to an edge server, is becoming the de facto approach.
Although today’s 5G mmWave deployments can offer up to 3.5 Gbps throughput in line-of-sight (LoS), static scenarios, their performance is often degraded by sporadic coverage and inefficient handover procedures. Additionally, while 5G mmWave promises sub-millisecond latency over the air, today’s edge servers are attached to the mobile packet core network, resulting in much longer RTTs (on the order of 15 ms). This, combined with server processing delays, makes it extremely challenging to support high-frame-rate AR/VR applications.
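To see why, consider the per-frame time budget (a back-of-the-envelope sketch; the frame rates and the 8 ms processing time below are illustrative assumptions, and only the 15 ms core-network RTT comes from the figures above):

```python
# Rough per-frame latency budget check (illustrative assumptions only).
FRAME_RATES_HZ = [60, 90, 120]      # target AR/VR frame rates (assumed)
CORE_RTT_MS = 15.0                  # RTT to an edge server behind the packet core (from the abstract)
SERVER_PROCESSING_MS = 8.0          # assumed per-frame processing time at the edge

for fps in FRAME_RATES_HZ:
    budget_ms = 1000.0 / fps                        # time available per frame
    spent_ms = CORE_RTT_MS + SERVER_PROCESSING_MS   # network + compute cost per offloaded frame
    slack_ms = budget_ms - spent_ms
    print(f"{fps:3d} fps: budget {budget_ms:5.1f} ms, "
          f"RTT+processing {spent_ms:5.1f} ms, slack {slack_ms:+6.1f} ms")
```

At 60 fps the budget is roughly 16.7 ms, so a ~15 ms core-network RTT alone nearly exhausts it before any rendering or inference time is counted.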
In this talk, we describe a latency-aware algorithmic framework designed to address critical limitations of present 5G mmWave networks and edge infrastructure. The framework consists of algorithms that (1) jointly optimize content caching and request routing over general multi-hop edge computing networks to meet latency requirements, and (2) optimally trade off the computation and storage resources of a distributed, hierarchical computing/storage infrastructure inside the cellular B5G network, through joint request routing, computation placement, and caching, to minimize end-to-end latency. We show how these algorithms significantly reduce the processing and data-movement latency of heavy computational tasks, thereby bringing B5G applications such as high-quality edge-assisted AR/VR closer to reality.
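The talk's algorithms are not reproduced here; as a minimal sketch of the flavor of joint caching and request routing, the toy greedy routine below places items at capacity-limited edge nodes to maximize the expected latency saved, and routes each request to the lowest-latency node holding the requested item. All node names, latencies, request rates, and capacities are made-up assumptions.

```python
from itertools import product

# Toy setting: per-item request rates and per-node retrieval latencies (ms).
# All values are illustrative assumptions, not results from the talk.
REQUEST_RATE = {"objA": 40.0, "objB": 25.0, "objC": 10.0}      # requests per second
NODE_LATENCY_MS = {"edge1": 5.0, "edge2": 9.0, "cloud": 40.0}  # latency to fetch from each node
CACHE_CAPACITY = {"edge1": 1, "edge2": 1, "cloud": 3}          # items each node can hold

def route_latency(item, placement):
    """Route a request to the lowest-latency node that holds the item."""
    nodes = [n for n, items in placement.items() if item in items]
    return min(NODE_LATENCY_MS[n] for n in nodes)

def greedy_placement():
    # Start with every item held only in the cloud.
    placement = {n: set() for n in NODE_LATENCY_MS}
    placement["cloud"] = set(REQUEST_RATE)
    used = {n: len(items) for n, items in placement.items()}
    while True:
        best_gain, best_move = 0.0, None
        for item, node in product(REQUEST_RATE, NODE_LATENCY_MS):
            if item in placement[node] or used[node] >= CACHE_CAPACITY[node]:
                continue
            # Gain = total reduction in expected latency if this copy is added.
            old = route_latency(item, placement)
            new = min(old, NODE_LATENCY_MS[node])
            gain = REQUEST_RATE[item] * (old - new)
            if gain > best_gain:
                best_gain, best_move = gain, (item, node)
        if best_move is None:
            return placement
        item, node = best_move
        placement[node].add(item)
        used[node] += 1

if __name__ == "__main__":
    for node, items in greedy_placement().items():
        print(node, sorted(items))
```

The same greedy structure extends naturally from placing content to placing computation, which is the joint routing/placement/caching trade-off described above.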
Bio
Edmund Yeh received the B.S. degree in electrical engineering from Stanford University in 1994, the M.Phil. degree in engineering from Cambridge University in 1995 on the Winston Churchill Scholarship, and the Ph.D. degree in electrical engineering and computer science from MIT in 2001, under Prof. Robert Gallager. He is Professor and Chair of Electrical and Computer Engineering at Northeastern University. He was previously an Assistant and Associate Professor of Electrical Engineering, Computer Science, and Statistics at Yale University. He is an IEEE Communications Society Distinguished Lecturer. He was a recipient of the Alexander von Humboldt Research Fellowship, the Army Research Office Young Investigator Award, the Winston Churchill Scholarship, the National Science Foundation and Office of Naval Research Graduate Fellowships, the Barry M. Goldwater Scholarship, the Frederick Emmons Terman Engineering Scholastic Award, and the President’s Award for Academic Excellence (Stanford University). He has received four best paper awards, including awards from the 2023 International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt), the 2017 ACM Conference on Information Centric Networking (ICN), and the 2015 IEEE International Conference on Communications (ICC) Communication Theory Symposium. He served as TPC Co-Chair for ACM MobiHoc 2021, and as both Treasurer and Secretary of the Board of Governors of the IEEE Information Theory Society. He served as General Chair for ACM Sigmetrics 2020, an Area Editor for IEEE TRANSACTIONS ON INFORMATION THEORY, and an Associate Editor for IEEE/ACM TRANSACTIONS ON NETWORKING, IEEE TRANSACTIONS ON MOBILE COMPUTING, and IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING. He served as Guest Editor-in-Chief of the Special Issue on Wireless Networks for Internet Mathematics, and as a Guest Editor for the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS Special Series on Smart Grid Communications. He also received the Phi Beta Kappa Award.