Mobile/multi-access edge computing (MEC) has been developed to support upcoming AI-aware mobile services, which require low latency and intensive computation resources at the edge of the network. One of the most challenging issues in MEC is service provision under user mobility. It is known that offloading and migration decisions need to be handled jointly to maximize network utility within latency constraints, which is particularly challenging when users are mobile. In this paper, we propose the Mobility-Aware Deep Reinforcement Learning (M-DRL) framework for the mobile service provision problem in the MEC system. M-DRL is composed of two parts: a DRL component specialized for joint training over multiple users, and glimpse, a seq2seq model customized for mobility prediction that forecasts a sequence of locations, like a "glimpse" of the future. By integrating the proposed DRL component with the glimpse mobility prediction model, the M-DRL framework handles the service provision problem in MEC with acceptable computational complexity and near-optimal performance.
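As a rough illustration of the mobility prediction component, the following is a minimal encoder-decoder sketch, not the paper's actual implementation: a GRU encoder summarizes past (x, y) locations and a GRU decoder rolls out a short "glimpse" of future locations. The class name, layer sizes, and prediction horizon are illustrative assumptions.

```python
# Minimal seq2seq mobility predictor sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn


class GlimpsePredictor(nn.Module):
    def __init__(self, hidden_dim: int = 64, horizon: int = 5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.decoder = nn.GRUCell(input_size=2, hidden_size=hidden_dim)
        self.head = nn.Linear(hidden_dim, 2)   # predicts the next (x, y)

    def forward(self, past: torch.Tensor) -> torch.Tensor:
        # past: (batch, T, 2) sequence of observed locations
        _, h = self.encoder(past)              # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        step = past[:, -1, :]                  # start decoding from the last observation
        outputs = []
        for _ in range(self.horizon):
            h = self.decoder(step, h)
            step = self.head(h)                # predicted next location, fed back in
            outputs.append(step)
        return torch.stack(outputs, dim=1)     # (batch, horizon, 2)


if __name__ == "__main__":
    model = GlimpsePredictor()
    past_traj = torch.randn(8, 10, 2)          # 8 users, 10 past locations each
    print(model(past_traj).shape)              # torch.Size([8, 5, 2])
```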
Mobile Edge Computing (MEC) is a promising technique in the 5G era to improve the Quality of Experience (QoE) of online video streaming, owing to its ability to reduce backhaul transmission by caching content at the edge. However, it remains challenging to address the user association and video quality selection problem under the limited resources of MEC so as to fully support the low-latency demands of live video streaming. We show that the optimization problem is a non-linear integer program, for which a globally optimal solution cannot be obtained in polynomial time. In this paper, we first reformulate the problem as a Markov Decision Process (MDP) and then develop a Deep Deterministic Policy Gradient (DDPG) based algorithm that exploits the supply-demand interpretation of the Lagrange dual problem. Simulation results show that our approach achieves significant QoE improvement over other baselines, especially in scenarios with scarce wireless resources and many users.
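To illustrate the supply-demand interpretation of the Lagrange dual, the sketch below treats the multiplier as a resource price updated by dual ascent: raised when aggregate demand for wireless resources exceeds supply and lowered otherwise. The demand function stands in for what the learned DDPG policy would produce; it and all constants are illustrative assumptions, not the paper's algorithm.

```python
# Dual ascent on a "price" for wireless resources (illustrative sketch).
import numpy as np

CAPACITY = 10.0      # total wireless resource units (supply)
STEP = 0.05          # dual (price) step size
rng = np.random.default_rng(0)


def demand(price: float) -> float:
    """Aggregate resource demand of users at the current price.

    In the actual framework this would come from the learned DDPG policy;
    here we use a noisy decreasing function of price as a placeholder.
    """
    return max(0.0, 20.0 - 4.0 * price + rng.normal(0.0, 0.5))


price = 1.0
for _ in range(200):
    excess = demand(price) - CAPACITY
    # Raise the price when demand exceeds supply, lower it otherwise.
    price = max(0.0, price + STEP * excess)

print(f"equilibrium price ~ {price:.2f}, demand ~ {demand(price):.2f}")
```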
Live video streaming services suffer from the limited backhaul capacity of the cellular core network and occasional congestion due to their cloud-based architecture. Mobile Edge Computing (MEC) brings services from the centralized cloud to the nearby network edge to improve the Quality of Experience (QoE) of cloud services such as live video streaming. Nevertheless, resources at edge devices remain limited and should be allocated in an economically efficient way. In this paper, we propose Edge Combinatorial Clock Auction (ECCA) and Combinatorial Clock Auction in Stream (CCAS), two auction frameworks that improve the QoE of live video streaming in the edge-enabled cellular system. The edge system is the auctioneer, which decides the backhaul capacity and caching space allocation; streamers are the bidders, who request backhaul capacity and caching space to improve the video quality their audiences can watch. There are two key subproblems: evaluating the value of caching space and allocating it. We show that both can be solved by the proposed dynamic programming algorithms. The truth-telling property is guaranteed in both ECCA and CCAS. Simulation results show that the overall system utility is significantly improved by the proposed system.
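As an illustration of the allocation subproblem, the sketch below shows a knapsack-style dynamic program that splits a shared caching space among bidders so as to maximize total declared value. The bid table is hypothetical, and the actual ECCA/CCAS mechanisms add clock rounds and payment rules on top of this kind of DP.

```python
# Dynamic program for splitting caching space among bidders (illustrative sketch).

def allocate_cache(bids, capacity):
    """bids[i][c] = declared value of streamer i for c units of cache (0 <= c <= capacity)."""
    n = len(bids)
    # best[i][c] = max total value using streamers 0..i-1 and c units of cache
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    choice = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            for give in range(c + 1):           # units granted to streamer i-1
                value = best[i - 1][c - give] + bids[i - 1][give]
                if value > best[i][c]:
                    best[i][c], choice[i][c] = value, give
    # Trace back one optimal allocation.
    alloc, c = [0] * n, capacity
    for i in range(n, 0, -1):
        alloc[i - 1] = choice[i][c]
        c -= choice[i][c]
    return best[n][capacity], alloc


if __name__ == "__main__":
    # Three streamers, 4 units of caching space; bids[i][c] is nondecreasing in c.
    bids = [
        [0, 3, 5, 6, 6],
        [0, 2, 4, 7, 9],
        [0, 4, 4, 5, 5],
    ]
    # Prints the maximum total declared value and one optimal per-streamer allocation.
    print(allocate_cache(bids, 4))
```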
Intermittent computing enables battery-less systems to support complex tasks, such as face recognition, purely through harvested energy. Nevertheless, latency requirements may not be met due to the limited computing power of such devices. Integrating mobile edge computing (MEC) with intermittent computing is a promising way to reduce latency and increase computation efficiency. In this work, we investigate the joint optimization of bandwidth allocation and computation offloading for multiple battery-less intermittent devices in a wireless MEC network. We provide a comprehensive analysis of the expected offloading efficiency and then propose the Greedy Adaptive Balanced Allocation and Offloading (GABAO) algorithm, which accounts for the energy arrival distributions, remaining task loads, and available computing/communication resources. Simulation results show that the proposed system significantly reduces latency in a multi-user MEC network with battery-less devices.
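The following is a minimal greedy sketch in the spirit of balanced allocation and offloading: bandwidth units are handed out one at a time to whichever device currently gains the most latency reduction from one more unit. The latency model and numbers are illustrative placeholders, not the GABAO analysis.

```python
# Greedy bandwidth allocation across battery-less devices (illustrative sketch).

def offload_latency(task_load: float, bw_units: int) -> float:
    """Toy model: transmission time shrinks with bandwidth, compute time is fixed."""
    if bw_units == 0:
        return float("inf")          # cannot offload without any bandwidth
    return task_load / bw_units + 0.2 * task_load


def greedy_bandwidth_allocation(task_loads, total_bw_units):
    alloc = [0] * len(task_loads)
    for _ in range(total_bw_units):
        # Marginal latency reduction if device i receives one more unit.
        gains = [
            offload_latency(load, alloc[i]) - offload_latency(load, alloc[i] + 1)
            for i, load in enumerate(task_loads)
        ]
        best = max(range(len(task_loads)), key=lambda i: gains[i])
        alloc[best] += 1
    return alloc


if __name__ == "__main__":
    loads = [4.0, 1.0, 2.5]          # remaining task load per device
    print(greedy_bandwidth_allocation(loads, 6))
```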
Network attacks such as Distributed Denial-of-Service (DDoS) attacks can be critical to latency-critical systems such as Mobile Edge Computing (MEC), since they significantly increase the response delay of the victim service. An intrusion prevention system (IPS) is a promising defense against such attacks, but there is a trade-off between IPS deployment and application resource reservation, as deploying an IPS reduces the computation resources available to MEC applications. In this paper, we propose a game-theoretic framework to study joint computation resource allocation and IPS deployment in the MEC architecture. We study the pricing strategy of the MEC platform operator (MPO) and the purchasing strategies of the application service providers (ASPs), given the expected attack strength and end-user demands. The best responses of both the MPO and the ASPs are derived theoretically to identify the Stackelberg equilibrium. Simulation results confirm that the proposed solutions significantly increase the social welfare of the system.
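As a numerical illustration of the Stackelberg structure (not the paper's model), the sketch below lets the MPO, as leader, post a unit price for computation resources; each ASP best-responds with a purchase quantity derived from an assumed concave utility, and the leader searches a price grid for the revenue-maximizing price.

```python
# Leader-follower (Stackelberg) pricing sketch with assumed ASP utilities.
import numpy as np


def asp_best_response(price: float, value_coeff: float) -> float:
    """Maximize value_coeff*sqrt(x) - price*x over x >= 0 (closed form)."""
    return (value_coeff / (2.0 * price)) ** 2 if price > 0 else np.inf


def mpo_revenue(price: float, value_coeffs, capacity: float) -> float:
    demand = sum(asp_best_response(price, v) for v in value_coeffs)
    return price * min(demand, capacity)     # cannot sell more than capacity


value_coeffs = [2.0, 3.0, 1.5]               # heterogeneity across ASPs (assumed)
prices = np.linspace(0.1, 5.0, 200)
best_price = max(prices, key=lambda p: mpo_revenue(p, value_coeffs, capacity=10.0))
print(f"leader price ~ {best_price:.2f}, follower demands: "
      f"{[round(asp_best_response(best_price, v), 2) for v in value_coeffs]}")
```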
Mobile Edge Computing (MEC) is a promising paradigm to ease the computation burden of Internet-of-Things (IoT) devices by leveraging computing capabilities at the network edge. With the growing demand for resource provision from IoT devices, the queueing delay at the edge nodes not only impedes a satisfactory quality of experience (QoE) for the IoT devices but also erodes the benefits of the edge nodes owing to escalating energy expenditure. Moreover, since the service providers may differ, the computing services of computationally competent entities should entail economic compensation for the incurred energy expenditure and capital investment. Therefore, a workload allocation mechanism that considers both flat-rate and dynamic pricing schemes in the multi-layer edge computing structure is much needed. We use a Stackelberg game to capture the inherent hierarchy and interdependence between the second-layer edge node (SLEN) and the first-layer edge nodes (FLENs). A truthful admission control mechanism, grounded in the optimal workload allocation, is designed for FLENs without violating end-to-end (E2E) latency requirements. We prove that a Stackelberg equilibrium with the E2E latency guarantee and truthfulness exists and can be reached through the proposed algorithm. Simulation results confirm the effectiveness of our scheme and illustrate several insights.
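The sketch below illustrates one ingredient, latency-aware admission control at a FLEN, using an M/M/1 queueing estimate: request rates are admitted only while the estimated queueing delay plus a fixed transmission delay stays within the E2E bound. The queueing model and parameter values are assumptions for illustration; the truthfulness and pricing aspects of the actual mechanism are omitted.

```python
# Latency-aware admission control at an edge node (illustrative M/M/1 sketch).

def e2e_latency(arrival_rate: float, service_rate: float, tx_delay: float) -> float:
    """M/M/1 sojourn time plus a fixed transmission delay; infinite if overloaded."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate) + tx_delay


def admit(requests, service_rate, tx_delay, latency_bound):
    """Greedily admit request rates while the E2E latency bound still holds."""
    admitted, load = [], 0.0
    for rate in requests:
        if e2e_latency(load + rate, service_rate, tx_delay) <= latency_bound:
            admitted.append(rate)
            load += rate
    return admitted, load


if __name__ == "__main__":
    requests = [1.0, 0.5, 2.0, 0.8, 1.2]    # per-user request rates (req/s)
    print(admit(requests, service_rate=6.0, tx_delay=0.05, latency_bound=0.4))
```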
Intermittent systems enable batteryless devices to operate through energy harvesting by leveraging the complementary characteristics of volatile (VM) and non-volatile memory (NVM). Unfortunately, alternate and frequent accesses to heterogeneous memories for accumulative execution across power cycles can significantly hinder computation progress. The progress impediment is mainly due to more CPU time being wasted on slow NVM accesses than on fast VM accesses. This paper explores how to leverage heterogeneous cores to mitigate the progress impediment caused by heterogeneous memories. In particular, a delegable and adaptive synchronization protocol is proposed to allow memory accesses to be delegated between cores and to dynamically adapt to diverse memory access latency. Moreover, our design guarantees task serializability across multiple cores and maintains data consistency despite frequent power failures. We integrated our design into FreeRTOS running on a Cypress device featuring heterogeneous dual cores and hybrid memories. Experimental results show that, compared to recent approaches that assume single-core intermittent systems, our design improves computation progress by at least 1.8x and up to 33.9x by leveraging core heterogeneity.
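As a simplified illustration of the delegation decision (written in Python purely for exposition), the sketch below delegates an access to the peer core whenever the peer's access latency plus the delegation overhead is lower than executing the access locally. The latency numbers are invented, and the actual protocol additionally handles task serializability and data consistency across power failures.

```python
# Cost-model sketch of the delegate-or-local decision (illustrative values only).

ACCESS_LATENCY_US = {
    # (core, memory) -> access latency in microseconds (assumed numbers)
    ("fast_core", "vm"): 1.0,
    ("fast_core", "nvm"): 12.0,
    ("slow_core", "vm"): 2.0,
    ("slow_core", "nvm"): 6.0,
}
DELEGATION_OVERHEAD_US = 3.0   # assumed inter-core mailbox round trip


def should_delegate(local_core: str, peer_core: str, memory: str) -> bool:
    local_cost = ACCESS_LATENCY_US[(local_core, memory)]
    remote_cost = ACCESS_LATENCY_US[(peer_core, memory)] + DELEGATION_OVERHEAD_US
    return remote_cost < local_cost


if __name__ == "__main__":
    for mem in ("vm", "nvm"):
        decision = "delegate" if should_delegate("fast_core", "slow_core", mem) else "local"
        print(mem, "->", decision)
```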
The increasing paradigm shift towards intermittent computing has made it possible to intermittently execute deep neural network (DNN) inference on edge devices powered by ambient energy. Recently, neural architecture search (NAS) techniques have achieved great success in automatically finding DNNs with high accuracy and low inference latency on the deployed hardware. We make a key observation: NAS attempts to improve inference latency primarily by maximizing data reuse, but the derived solutions, when deployed on intermittently-powered systems, may be inefficient, such that the inference may not satisfy an end-to-end latency requirement and, more seriously, may be unsafe given an insufficient energy budget. This work proposes iNAS, which introduces intermittent execution behavior into NAS to find accurate network architectures with corresponding execution designs that can safely and efficiently execute under intermittent power. An intermittent-aware execution design explorer is presented, which finds the right balance between data reuse and the costs related to intermittent inference, and incorporates a preservation design search space into NAS while ensuring that the power-cycle energy budget is not exceeded. To assess an intermittent execution design, an intermittent-aware abstract performance model is presented, which formulates the key costs related to progress preservation and recovery during intermittent inference. We implement iNAS on top of an existing NAS framework and evaluate the solutions respectively found by iNAS and the existing NAS for various datasets, energy budgets and latency requirements on a Texas Instruments device. Compared to those NAS solutions that can safely complete the inference, the iNAS solutions reduce the intermittent inference latency by 60% on average while achieving comparable accuracy, at the cost of an average 7% increase in search overhead.
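The sketch below gives a back-of-the-envelope latency estimate in the spirit of an intermittent-aware performance model: raw compute time plus progress preservation, recharging, and recovery costs paid per power cycle, where the number of power cycles follows from the per-cycle energy budget. All constants are illustrative assumptions, not measurements from the paper.

```python
# Back-of-the-envelope intermittent inference latency estimate (illustrative).
import math


def intermittent_latency(compute_s, energy_per_inference_j, cycle_budget_j,
                         preserve_s, recover_s, charge_s):
    if cycle_budget_j <= 0:
        raise ValueError("per-cycle energy budget must be positive")
    cycles = math.ceil(energy_per_inference_j / cycle_budget_j)
    # Between consecutive power cycles the device preserves progress, recharges,
    # and recovers after reboot; the first cycle needs no recovery and the last
    # needs no preservation, so these costs are paid (cycles - 1) times.
    return compute_s + (cycles - 1) * (preserve_s + charge_s + recover_s)


if __name__ == "__main__":
    print(intermittent_latency(compute_s=2.0, energy_per_inference_j=0.5,
                               cycle_budget_j=0.05, preserve_s=0.08,
                               recover_s=0.05, charge_s=1.0))
```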
In this paper, we propose a convolutional neural network (CNN) model for device-free fingerprinting indoor localization based on Wi-Fi channel state information (CSI). In addition, we develop an interpretation framework to understand the representations learned by the model. By quantifying and visualizing the CNN in comparison with a fully-connected feedforward deep neural network (DNN, or multilayer perceptron), we observe that each model automatically identifies location-specific patterns, which however differ across models and are linked to the respective performance of each model. Furthermore, we quantify how features deemed relevant or irrelevant by the adopted metrics (i.e., relevance scores calculated by relevance propagation techniques) determine or affect the performance results. Interpreting learning models for wireless applications is challenging due to the lack of human sensory intuition and reference. The results presented in this paper provide visually perceivable evidence and plausible explanations for the performance advantages of the CNN in this important application.
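For concreteness, the following is a minimal sketch, with assumed tensor shapes rather than the paper's exact architecture, of a CNN classifier for CSI fingerprints: amplitudes over antennas and subcarriers are treated as a 2-D input and mapped to one of several candidate locations.

```python
# Minimal CNN for CSI-based location classification (assumed shapes and sizes).
import torch
import torch.nn as nn


class CsiCnn(nn.Module):
    def __init__(self, n_locations: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.classifier = nn.Linear(32, n_locations)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # csi: (batch, 1, n_antennas, n_subcarriers) amplitude fingerprints
        return self.classifier(self.features(csi).flatten(1))


if __name__ == "__main__":
    model = CsiCnn()
    batch = torch.randn(4, 1, 3, 30)   # 4 samples, 3 antennas, 30 subcarriers
    print(model(batch).shape)          # torch.Size([4, 16]) location logits
```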
Device-free indoor localization is a key enabling technology for many Internet of Things (IoT) applications. Deep neural network (DNN)-based location estimators achieve high-precision localization by automatically learning discriminative features from noisy wireless signals without much human intervention. However, the inner workings of DNNs are not transparent and not adequately understood, especially in wireless localization applications. In this paper, we conduct visual analyses of DNN-based location estimators trained with Wi-Fi channel state information (CSI) fingerprints in a real-world experiment. Using visualization techniques, we address questions such as 1) how well the DNN has learned and been trained, and 2) what critical features the DNN has learned to distinguish different classes. The results provide plausible explanations and allow for a better understanding of the mechanism of DNN-based wireless indoor localization.
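One common visual analysis, shown below as an assumption rather than the exact technique used in the paper, projects a hidden-layer representation of CSI fingerprints to 2-D with t-SNE and colors points by location label to see how well classes separate after training; the activations here are synthetic stand-ins.

```python
# t-SNE visualization of (synthetic) hidden-layer activations, colored by location.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_locations, per_class, hidden_dim = 4, 50, 64

# Stand-in for hidden activations of a trained DNN on CSI samples:
# each location forms a noisy cluster in the hidden space.
features = np.vstack([
    rng.normal(loc=3.0 * k, scale=1.0, size=(per_class, hidden_dim))
    for k in range(n_locations)
])
labels = np.repeat(np.arange(n_locations), per_class)

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE of hidden-layer activations (synthetic example)")
plt.savefig("tsne_hidden.png", dpi=150)
```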