C.-H. Kuo, H.-Y. Chang, R. Y. Chang, and W.-H. Chung
Unsupervised Learning Based Hybrid Beamforming with Low-Resolution Phase Shifters for MU-MIMO Systems
IEEE International Conference on Communications (ICC)
May 2022
Millimeter wave (mmWave) is a key technology for fifth-generation (5G) and beyond communications. Hybrid beamforming has been proposed for large-scale antenna systems in mmWave communications. Existing hybrid beamforming designs based on infinite-resolution phase shifters (PSs) are impractical due to hardware cost and power consumption. In this paper, we propose an unsupervised-learning-based scheme to jointly design the analog precoder and combiner with low-resolution PSs for multiuser multiple-input multiple-output (MU-MIMO) systems. We transform the analog precoder and combiner design problem into a phase classification problem and propose a generic neural network architecture, termed the phase classification network (PCNet), capable of producing solutions of various PS resolutions. Simulation results demonstrate the superior sum-rate and complexity performance of the proposed scheme, as compared to state-of-the-art hybrid beamforming designs for the most commonly used low-resolution PS configurations.
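The phase-classification view can be illustrated with a toy sketch (the function names and the nearest-level rule here are illustrative assumptions, not PCNet itself): with b-bit phase shifters only 2^b phases are realizable, so choosing each analog precoder entry reduces to picking one of 2^b classes.

```python
import numpy as np

def quantize_phases(theta, bits):
    """Map continuous phases (radians) to the nearest b-bit PS level.

    With b-bit phase shifters there are 2**b realizable phases, uniformly
    spaced over [0, 2*pi); selecting a level is thus a classification
    over 2**b classes, which is the viewpoint the paper adopts.
    """
    levels = 2 ** bits
    step = 2 * np.pi / levels
    # Class index of the nearest quantized phase for each entry.
    cls = np.round(np.mod(theta, 2 * np.pi) / step).astype(int) % levels
    return cls, cls * step

# A unit-modulus analog precoder column built from quantized phases.
theta = np.array([0.10, 1.70, 3.30, 5.90])
cls, q = quantize_phases(theta, bits=2)   # 2-bit PS: 4 realizable phases
w = np.exp(1j * q) / np.sqrt(len(q))      # constant-modulus constraint
```

A neural classifier then outputs a softmax over the 2^b classes per antenna instead of a continuous phase, which is what makes the design generic across PS resolutions.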
K. M. Chen and R. Y. Chang
Semi-Supervised Learning with GANs for Device-Free Fingerprinting Indoor Localization
IEEE Global Communications Conference (GLOBECOM)
December 2020
Device-free wireless indoor localization is a key enabling technology for the Internet of Things (IoT). Fingerprint-based indoor localization techniques are a commonly used solution. This paper proposes a semi-supervised, generative adversarial network (GAN)-based device-free fingerprinting indoor localization system. The proposed system uses a small amount of labeled data and a large amount of unlabeled data (i.e., it is semi-supervised), thus considerably reducing the expensive data labeling effort. Experimental results show that, compared to a state-of-the-art supervised scheme, the proposed semi-supervised system achieves comparable performance when both are given the same sufficient amount of labeled data, and significantly superior performance when both are given the same highly limited amount of labeled data. Moreover, the proposed semi-supervised system retains its performance over a broad range of labeled data amounts. The interactions between the generator, discriminator, and classifier models of the proposed GAN-based system are visually examined and discussed. A mathematical description of the proposed system is also presented.
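One common mechanism behind such semi-supervised GAN classifiers is the K+1-class formulation of Salimans et al., where the discriminator's real/fake output is derived directly from the K class logits; the sketch below illustrates that derivation (it is not the paper's code):

```python
import numpy as np

def real_prob(class_logits):
    """Probability a sample is 'real', derived from K class logits.

    With Z = sum_k exp(l_k), the semi-supervised GAN discriminator uses
    D(x) = Z / (Z + 1): confident K-way classification (large Z) implies
    'real', so one K-way classifier doubles as the discriminator and can
    be trained on unlabeled data.
    """
    m = class_logits.max(axis=-1)
    # log Z via the log-sum-exp trick for numerical stability.
    log_z = m + np.log(np.exp(class_logits - m[..., None]).sum(axis=-1))
    return 1.0 / (1.0 + np.exp(-log_z))   # sigmoid(log Z) = Z / (Z + 1)
```

Unlabeled fingerprints supervise the real/fake output while the small labeled set supervises the K location classes, which is why the labeling effort drops.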
Chih-Kai Kang, Hashan Roshantha Mendis, Chun-Han Lin, Ming-Syan Chen, and Pi-Cheng Hsiu
Everything Leaves Footprints: Hardware Accelerated Intermittent Deep Inference
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
November 2020
Current peripheral execution approaches for intermittently-powered systems require full access to the internal hardware state for checkpointing, or rely on application-level energy estimation for task partitioning to make correct forward progress. Both requirements present significant practical challenges for energy-harvesting, intelligent edge IoT devices, which perform hardware-accelerated DNN inference. Sophisticated compute peripherals may have inaccessible internal state, and the complexity of DNN models makes it difficult for programmers to partition the application into suitably sized tasks that fit within an estimated energy budget. This paper presents the concept of inference footprinting for intermittent DNN inference, where accelerator progress is accumulatively preserved across power cycles. Our middleware stack, HAWAII, tracks and restores inference footprints efficiently and transparently to make inference forward progress, without requiring access to the accelerator's internal state or application-level energy estimation. Evaluations were carried out on a Texas Instruments device under varied energy budgets and network workloads. Compared to a variety of task-based intermittent approaches, HAWAII improves inference throughput by 5.7% to 95.7%, achieving particularly high performance on heavily accelerated DNNs.
Han-Yi Lin, Pi-Cheng Hsiu, Tei-Wei Kuo, and Yen-Yu Lin
Spatiotemporal Super-Resolution with Cross-Task Consistency and Its Semi-supervised Extension
International Joint Conference on Artificial Intelligence (IJCAI)
July 2020
Spatiotemporal super-resolution (SR) aims to upscale both the spatial and temporal dimensions of input videos, and produces videos with higher frame resolutions and rates. It involves two essential sub-tasks: spatial SR and temporal SR. We design a two-stream network for spatiotemporal SR in this work. One stream contains a temporal SR module followed by a spatial SR module, while the other stream has the same two modules in the reverse order. Based on the interchangeability of performing the two sub-tasks, the two network streams are supposed to produce consistent spatiotemporal SR results. Thus, we present a cross-stream consistency to enforce the similarity between the outputs of the two streams. In this way, the training of the two streams is correlated, which allows the two SR modules to share their supervisory signals and improve each other. In addition, the proposed cross-stream consistency does not consume labeled training data and can guide network training in an unsupervised manner. We leverage this property to carry out semi-supervised spatiotemporal SR. It turns out that our method makes the most of training data, and can derive an effective model with few high-resolution and high-frame-rate videos, achieving the state-of-the-art performance.
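The cross-stream consistency idea can be sketched with toy, non-learned SR operators (all names below are illustrative stand-ins for the paper's neural modules):

```python
import numpy as np

def spatial_sr(video, s=2):
    """Toy 2x spatial upscaling by nearest-neighbor replication
    (stand-in for the learned spatial SR module)."""
    return video.repeat(s, axis=1).repeat(s, axis=2)

def temporal_sr(video):
    """Toy 2x frame-rate upscaling by midpoint frame interpolation
    (stand-in for the learned temporal SR module)."""
    mids = 0.5 * (video[:-1] + video[1:])
    out = np.empty((2 * len(video) - 1,) + video.shape[1:])
    out[0::2] = video        # keep original frames
    out[1::2] = mids         # insert interpolated frames
    return out

def cross_stream_consistency(video):
    """L1 gap between the two stream orderings S∘T and T∘S; the training
    loss penalizes this quantity and needs no labels, which is what
    enables the semi-supervised extension."""
    a = spatial_sr(temporal_sr(video))
    b = temporal_sr(spatial_sr(video))
    return np.abs(a - b).mean()
```

With these linear stand-ins the two orderings commute exactly and the gap is zero; learned SR modules do not commute, and minimizing this gap couples the two streams so they share supervisory signals without labels.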
Chao-Lun Wu, Te-Chuan Chiu, Chih-Yu Wang, and Ai-Chun Pang
Mobility-Aware Deep Reinforcement Learning with Glimpse Mobility Prediction in Edge Computing
IEEE International Conference on Communications (ICC)
June 2020
Mobile/multi-access edge computing (MEC) has been developed to support upcoming AI-aware mobile services, which require low latency and intensive computation resources at the edge of the network. One of the most challenging issues in MEC is service provision with mobility consideration. It is known that offloading and migration decisions need to be jointly handled to maximize the utility of networks within latency constraints, which is challenging when users are mobile. In this paper, we propose the Mobility-Aware Deep Reinforcement Learning (M-DRL) framework for mobile service provision problems in the MEC system. M-DRL is composed of two parts: a DRL component specialized for joint training over multiple users, and glimpse, a seq2seq model customized for mobility prediction, which predicts a sequence of locations as a "glimpse" of the future. By integrating the proposed DRL component and the glimpse mobility prediction model, the M-DRL framework handles the service provision problem in MEC with acceptable computation complexity and near-optimal performance.
Po-Yu Chou, Wei-Yu Chen, Chih-Yu Wang, Ren-Hung Hwang, and Wen-Tsuen Chen
Deep Reinforcement Learning for MEC Streaming with Joint User Association and Resource Management
IEEE International Conference on Communications (ICC)
June 2020
Mobile Edge Computing (MEC) is a promising technique in the 5G era to improve the Quality of Experience (QoE) of online video streaming due to its ability to reduce backhaul transmission by caching certain content. However, it still takes effort to address the joint user association and video quality selection problem under the limited resources of MEC to fully support the low-latency demand of live video streaming. We formulate the optimization problem as a non-linear integer program, for which a globally optimal solution cannot be obtained in polynomial time. In this paper, we first reformulate this problem as a Markov Decision Process (MDP) and then develop a Deep Deterministic Policy Gradient (DDPG) based algorithm exploiting the supply-demand interpretation of the Lagrange dual problem. Simulation results show that our proposed approach achieves significant QoE improvement compared to other baselines, especially in scenarios with scarce wireless resources and many users.
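The supply-demand interpretation of the Lagrange dual can be seen in miniature with a subgradient price update (a toy sketch under assumed utilities and costs, not the paper's DDPG algorithm):

```python
import numpy as np

def dual_ascent(costs, utils, capacity, steps=200, lr=0.01):
    """Supply-demand view of the Lagrange dual for quality selection.

    lam prices one unit of wireless resource. Each user best-responds by
    picking the quality level k maximizing utils[k] - lam * costs[k];
    the price then moves with excess demand (a subgradient step), rising
    when demand exceeds supply and falling otherwise.
    """
    lam = 0.0
    for _ in range(steps):
        choice = [int(np.argmax(u - lam * c)) for u, c in zip(utils, costs)]
        demand = sum(c[k] for c, k in zip(costs, choice))
        lam = max(0.0, lam + lr * (demand - capacity))
    return lam, choice

# Two users, quality levels 0 (opt out), 1, 2; 2 resource units available.
costs = [np.array([0.0, 1.0, 2.0])] * 2
utils = [np.array([0.0, 1.0, 1.5])] * 2
lam, choice = dual_ascent(costs, utils, capacity=2.0)
```

The price settles where aggregate demand matches the wireless supply; the DDPG agent in the paper learns this pricing dynamic rather than iterating it explicitly.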
Yi-Hsuan Hung, Chih-Yu Wang, and Ren-Hung Hwang
Optimizing Social Welfare of Live Video Streaming Services in Mobile Edge Computing
IEEE Transactions on Mobile Computing
April 2020
Live video streaming services suffer from the limited backhaul capacity of the cellular core network and occasional congestion due to their cloud-based architecture. Mobile Edge Computing (MEC) brings services from the centralized cloud to the nearby network edge to improve the Quality of Experience (QoE) of cloud services, such as live video streaming. Nevertheless, resources at edge devices are still limited and should be allocated in an economically efficient manner. In this paper, we propose Edge Combinatorial Clock Auction (ECCA) and Combinatorial Clock Auction in Stream (CCAS), two auction frameworks to improve the QoE of live video streaming services in an edge-enabled cellular system. The edge system is the auctioneer, which decides the backhaul capacity and caching space allocation; streamers are the bidders, who request backhaul capacity and caching space to improve the video quality their audiences can watch. There are two key subproblems: caching space value evaluation and allocation. We show that both can be solved by the proposed dynamic programming algorithms. The truth-telling property is guaranteed in both ECCA and CCAS. Simulation results show that the overall system utility can be significantly improved by the proposed system.
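The allocation subproblem has the shape of a classic resource-allocation dynamic program; the generic sketch below (the unit granularity, value tables, and names are illustrative, not the paper's exact formulation) splits indivisible resource units among bidders:

```python
def allocate(values, budget):
    """Max total value from dividing `budget` indivisible units
    (e.g. caching-space blocks) among bidders.

    values[i][k] is bidder i's value for receiving k units. best[b]
    holds the optimal value of b units over the bidders seen so far,
    so each bidder adds one O(budget^2) relaxation pass.
    """
    best = [0.0] * (budget + 1)
    for v in values:
        best = [max(v[k] + best[b - k] for k in range(min(len(v), b + 1)))
                for b in range(budget + 1)]
    return best[budget]

# Two bidders, values for 0/1/2 units; best split is (1,1) or (0,2).
total = allocate([[0, 3, 4], [0, 2, 5]], budget=2)
```

In an auction setting the same table also supplies the marginal values needed to price the clock rounds.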
Zhan-Lun Chang, Chun-Yen Lee, Chia-Hung Lin, Chih-Yu Wang, and Hung-Yu Wei
Game-Theoretic Intrusion Prevention System Deployment for Mobile Edge Computing
IEEE Global Communications Conference (GLOBECOM)
December 2021
Network attacks such as Distributed Denial-of-Service (DDoS) attacks can be critical to latency-critical systems such as Mobile Edge Computing (MEC), as they significantly increase the response delay of the victim service. An intrusion prevention system (IPS) is a promising solution to defend against such attacks, but there is a trade-off between IPS deployment and application resource reservation, as deploying an IPS reduces the computation resources available to MEC applications. In this paper, we propose a game-theoretic framework to study joint computation resource allocation and IPS deployment in the MEC architecture. We study the pricing strategy of the MEC platform operator (MPO) and the purchase strategies of the application service providers (ASPs), given the expected attack strength and end-user demands. The best responses of both the MPO and the ASPs are derived theoretically to identify the Stackelberg equilibrium. Simulation results confirm that the proposed solutions significantly increase the social welfare of the system.
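The leader-follower structure behind a Stackelberg equilibrium can be sketched by backward induction over a price grid (the linear demand and price range are toy assumptions; the paper derives best responses analytically):

```python
def stackelberg(prices, follower_br, leader_utility):
    """Single-leader Stackelberg equilibrium by backward induction.

    The leader (here, the platform operator setting a resource price)
    anticipates the follower's best response to each candidate price,
    then picks the price maximizing its own utility.
    """
    p_star = max(prices, key=lambda p: leader_utility(p, follower_br(p)))
    return p_star, follower_br(p_star)

# Toy follower: buys q = 10 - p resource units while that is profitable.
br = lambda p: max(0.0, 10.0 - p)
p, q = stackelberg(range(11), br, lambda p, q: p * q)
```

The same anticipate-then-optimize pattern applies with multiple followers and an attack-strength parameter shifting the followers' demand curves.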
Yu-Tai Lin, Yu-Cheng Hsiao, and Chih-Yu Wang
Enabling Mobile Edge Computing for Battery-less Intermittent IoT Devices
IEEE Global Communications Conference (GLOBECOM)
December 2021
Intermittent computing enables battery-less systems to support complex tasks such as face recognition through energy harvesting, without an installed battery. Nevertheless, latency requirements may not be satisfied due to the limited computing power. Integrating mobile edge computing (MEC) with intermittent computing is a desirable solution to reduce latency and increase computation efficiency. In this work, we investigate the joint optimization problem of bandwidth allocation and computation offloading for multiple battery-less intermittent devices in a wireless MEC network. We provide a comprehensive analysis of the expected offloading efficiency and then propose the Greedy Adaptive Balanced Allocation and Offloading (GABAO) algorithm, which considers the energy arrival distributions, remaining task load, and available computing/communication resources. Simulation results show that the proposed system can significantly reduce latency in a multi-user MEC network with battery-less devices.
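The greedy flavor of such allocation can be sketched as marginal-gain bandwidth assignment (a generic greedy sketch in the spirit of, but not identical to, GABAO; `latency_fn`, the workloads, and the device names are illustrative):

```python
import heapq

def greedy_bandwidth(latency_fn, devices, units):
    """Allocate bandwidth units one at a time to the device with the
    largest marginal latency reduction.

    latency_fn(d, b) is device d's latency with b bandwidth units; the
    greedy rule is optimal when latency is convex and decreasing in b.
    """
    alloc = {d: 0 for d in devices}
    # Max-heap via negated gains: gain of giving a device its next unit.
    heap = [(-(latency_fn(d, 0) - latency_fn(d, 1)), d) for d in devices]
    heapq.heapify(heap)
    for _ in range(units):
        _, d = heapq.heappop(heap)
        alloc[d] += 1
        b = alloc[d]
        heapq.heappush(heap, (-(latency_fn(d, b) - latency_fn(d, b + 1)), d))
    return alloc

# Toy workloads: latency = remaining work / (1 + bandwidth units).
work = {'a': 8.0, 'b': 2.0}
alloc = greedy_bandwidth(lambda d, b: work[d] / (b + 1), ['a', 'b'], units=3)
```

In the full problem, `latency_fn` would additionally fold in each device's energy arrival distribution and remaining task load, which is what makes the balancing adaptive.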
Hashan Roshantha Mendis, Chih-Kai Kang, and Pi-Cheng Hsiu
Intermittent-Aware Neural Architecture Search
ACM Transactions on Embedded Computing Systems
September 2021
The increasing paradigm shift towards intermittent computing has made it possible to intermittently execute deep neural network (DNN) inference on edge devices powered by ambient energy. Recently, neural architecture search (NAS) techniques have achieved great success in automatically finding DNNs with high accuracy and low inference latency on the deployed hardware. We make a key observation: NAS attempts to improve inference latency primarily by maximizing data reuse, but the derived solutions, when deployed on intermittently-powered systems, may be inefficient, such that the inference may not satisfy an end-to-end latency requirement and, more seriously, may be unsafe given an insufficient energy budget. This work proposes iNAS, which introduces intermittent execution behavior into NAS to find accurate network architectures with corresponding execution designs that can safely and efficiently execute under intermittent power. An intermittent-aware execution design explorer is presented, which finds the right balance between data reuse and the costs related to intermittent inference, and incorporates a preservation design search space into NAS while ensuring the power-cycle energy budget is not exceeded. To assess an intermittent execution design, an intermittent-aware abstract performance model is presented, which formulates the key costs related to progress preservation and recovery during intermittent inference. We implement iNAS on top of an existing NAS framework and evaluate the solutions found for various datasets, energy budgets, and latency requirements on a Texas Instruments device. Compared to NAS solutions that can safely complete the inference, the iNAS solutions reduce intermittent inference latency by 60% on average while achieving comparable accuracy, with an average 7% increase in search overhead.
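The role of an abstract performance model can be illustrated with a toy version (the energy numbers, the greedy packing rule, and the save/restore accounting below are illustrative assumptions, not iNAS's actual model):

```python
def intermittent_cost(op_energy, op_time, budget, e_save, t_save, t_restore):
    """Toy cost model for intermittent inference.

    Packs ops into power cycles of energy `budget`, reserving `e_save`
    per cycle for progress preservation; each extra cycle adds save and
    restore latency. Returns (cycles, total_latency), or None when some
    op can never complete within one cycle (unsafe under this budget).
    """
    cycles, used, latency = 1, 0.0, 0.0
    for e, t in zip(op_energy, op_time):
        if e + e_save > budget:
            return None                      # unsafe: op can never finish
        if used + e + e_save > budget:       # start a new power cycle
            cycles += 1
            used = 0.0
            latency += t_save + t_restore
        used += e
        latency += t
    return cycles, latency
```

A model of this shape lets the search score a candidate execution design without deploying it, and reject designs whose per-cycle energy need exceeds the budget.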