Beamforming is regarded as a promising technique for future wireless communication systems. In this regard, codebook-based beamforming offers satisfactory performance with acceptable computational complexity; however, it incurs a high beam-sweeping overhead and the associated power consumption. To balance the beam overhead against the achieved spectral efficiency, we propose a super-resolution-based scheme using a hierarchical codebook. We cast beam sweeping as an inference problem, where low-resolution beam radiating responses serve as the input and high-resolution beam sweeping responses are the output. Simulation results confirm that the proposed scheme achieves an excellent performance-overhead tradeoff compared with state-of-the-art codebook-based beamforming designs.
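As a minimal sketch of this inference formulation, the toy PyTorch model below maps coarse-codebook beam responses to predicted fine-codebook responses; the codebook sizes, layer widths, and input format are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class BeamSuperResolver(nn.Module):
        """Hypothetical network: coarse-beam responses in, fine-beam responses out."""
        def __init__(self, n_coarse=8, n_fine=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_coarse, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_fine),   # predicted high-resolution responses
            )
        def forward(self, coarse_rss):
            return self.net(coarse_rss)

    model = BeamSuperResolver()
    coarse = torch.randn(32, 8)           # a batch of low-resolution sweep measurements
    fine_hat = model(coarse)              # only the coarse beams are swept over the air
    best_beam = fine_hat.argmax(dim=1)    # narrow beam chosen without sweeping it

The overhead saving comes from the last line: the fine-grained beam is selected from predicted responses rather than from an exhaustive high-resolution sweep.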
Vehicle positioning is a key component of autonomous driving. The global positioning system (GPS) is currently the most widely used vehicle positioning system. However, its accuracy is affected by environmental differences and thus fails to meet meter-level accuracy requirements. We consider a coordinate neighboring vehicle positioning system (CNVPS) based on GPS, omnidirectional radar, and vehicle-to-vehicle (V2V) communication, which obtains additional information from neighboring vehicles to improve GPS positioning accuracy in various environments. We further apply transfer learning (TL), designing an adversarial mechanism that eliminates cross-environment deviations so that a single model can optimize vehicle positioning accuracy across multiple environments. Simulation results show that, compared with existing methods, the proposed system architecture not only improves positioning performance but also effectively reduces the amount of training data required.
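The adversarial mechanism for removing environment-specific deviations can be sketched in the style of domain-adversarial training with a gradient-reversal layer; the encoder, heads, feature sizes, and losses below are illustrative assumptions rather than the CNVPS implementation.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips the gradient sign so the shared
        encoder learns environment-invariant features."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    features = nn.Sequential(nn.Linear(16, 64), nn.ReLU())  # shared encoder (toy sizes)
    regressor = nn.Linear(64, 2)    # position head, e.g. (x, y)
    env_clf = nn.Linear(64, 3)      # environment discriminator (3 toy environments)

    x = torch.randn(8, 16)          # fused GPS/radar/V2V measurements (hypothetical)
    z = features(x)
    pos_loss = regressor(z).pow(2).mean()   # placeholder for the true position loss
    env_loss = nn.functional.cross_entropy(
        env_clf(GradReverse.apply(z, 1.0)), torch.randint(0, 3, (8,)))
    (pos_loss + env_loss).backward()        # encoder receives reversed env gradients

Because the environment gradients are reversed, minimizing env_loss for the discriminator simultaneously pushes the encoder to hide which environment a sample came from, which is the deviation-elimination effect described above.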
Device-free wireless indoor localization is an essential technology for the Internet of Things (IoT), and fingerprint-based methods are widely used. A common challenge for fingerprint-based methods is the cost of data collection and labeling. This paper proposes a few-shot transfer learning system that uses only a small amount of labeled data from the current environment and reuses a large amount of existing labeled data previously collected in other environments, thereby significantly reducing the data collection and labeling cost for localization in each new environment. The core method lies in graph neural network (GNN) based few-shot transfer learning and its modifications. Experiments conducted in real-world environments show that the proposed system achieves performance comparable to a convolutional neural network (CNN) model with 40 times fewer labeled samples.
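A minimal sketch of GNN-based few-shot classification over fingerprint embeddings: each node is a support or query sample, and a soft adjacency is built from pairwise embedding similarity. The layer design and dimensions are assumptions for illustration, not the paper's exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GNNLayer(nn.Module):
        """One message-passing step: similarity-based soft adjacency,
        followed by a learned update of node features."""
        def __init__(self, dim):
            super().__init__()
            self.update = nn.Linear(2 * dim, dim)
        def forward(self, h):
            sim = -torch.cdist(h, h)          # (N, N) similarity logits
            adj = F.softmax(sim, dim=-1)      # soft adjacency over the episode graph
            msg = adj @ h                     # aggregate neighbor features
            return F.relu(self.update(torch.cat([h, msg], dim=-1)))

    embed = nn.Linear(128, 64)                # fingerprint encoder (toy sizes)
    gnn = nn.Sequential(GNNLayer(64), GNNLayer(64))
    head = nn.Linear(64, 5)                   # 5 candidate locations (toy)

    episode = torch.randn(10, 128)            # few labeled support + query samples
    logits = head(gnn(embed(episode)))        # per-node location predictions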
Millimeter wave (mmWave) is a key technology for fifth-generation (5G) and beyond communications. Hybrid beamforming has been proposed for large-scale antenna systems in mmWave communications. Existing hybrid beamforming designs based on infinite-resolution phase shifters (PSs) are impractical due to hardware cost and power consumption. In this paper, we propose an unsupervised-learning-based scheme to jointly design the analog precoder and combiner with low-resolution PSs for multiuser multiple-input multiple-output (MU-MIMO) systems. We transform the analog precoder and combiner design problem into a phase classification problem and propose a generic neural network architecture, termed the phase classification network (PCNet), capable of producing solutions of various PS resolutions. Simulation results demonstrate the superior sum-rate and complexity performance of the proposed scheme, as compared to state-of-the-art hybrid beamforming designs for the most commonly used low-resolution PS configurations.
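The phase-classification view can be sketched as follows: a B-bit PS realizes 2^B discrete phases, so the network outputs a class distribution per antenna element and the constant-modulus analog precoder is assembled from the selected phases. The toy model below is an assumption for illustration; training would use a differentiable relaxation in place of the hard argmax shown here for inference.

    import torch
    import torch.nn as nn

    B = 2                                   # PS resolution in bits
    K = 2 ** B                              # number of realizable phase states
    phases = 2 * torch.pi * torch.arange(K) / K

    class PCNetSketch(nn.Module):
        """Maps channel features to one phase class per antenna element."""
        def __init__(self, n_feat=64, n_ant=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_feat, 256), nn.ReLU(),
                                     nn.Linear(256, n_ant * K))
            self.n_ant = n_ant
        def forward(self, h_feat):
            logits = self.net(h_feat).view(-1, self.n_ant, K)
            cls = logits.argmax(dim=-1)       # chosen phase class per antenna
            theta = phases[cls]
            return torch.exp(1j * theta) / self.n_ant ** 0.5  # constant-modulus precoder

    f = PCNetSketch()(torch.randn(4, 64))     # batch of analog precoding vectors

Changing B alone changes the number of output classes, which is why a single architecture of this kind can serve various PS resolutions.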
Device-free wireless indoor localization is a key enabling technology for the Internet of Things (IoT), and fingerprint-based techniques are a commonly used solution. This paper proposes a semi-supervised, generative adversarial network (GAN)-based device-free fingerprinting indoor localization system. The proposed system uses a small amount of labeled data together with a large amount of unlabeled data (i.e., semi-supervised learning), thus considerably reducing the expensive data labeling effort. Experimental results show that, compared to a state-of-the-art supervised scheme, the proposed semi-supervised system achieves comparable performance with an equal and sufficient amount of labeled data, and significantly superior performance with an equal but highly limited amount of labeled data. Moreover, the proposed semi-supervised system retains its performance over a wide range of labeled data amounts. The interactions between the generator, discriminator, and classifier models of the proposed GAN-based system are visually examined and discussed, and a mathematical description of the proposed system is also presented.
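The discriminator-as-classifier idea can be sketched with the standard (K+1)-class semi-supervised GAN objective, where K location classes are augmented with a "fake" class; the network shapes and data below are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    K = 16                                    # number of location classes (toy)
    D = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, K + 1))
    G = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 128))

    x_lab = torch.randn(8, 128)               # few labeled fingerprints
    y_lab = torch.randint(0, K, (8,))
    x_unl = torch.randn(64, 128)              # many unlabeled fingerprints
    x_fake = G(torch.randn(64, 32))           # generated fingerprints

    # Supervised term: labeled data classified into one of the K locations.
    loss_sup = F.cross_entropy(D(x_lab)[:, :K], y_lab)
    # Unsupervised terms: unlabeled data is "real" (any of the K classes),
    # generated data should fall into the (K+1)-th "fake" class.
    p_real_unl = 1 - F.softmax(D(x_unl), dim=1)[:, K]
    p_fake = F.softmax(D(x_fake), dim=1)[:, K]
    loss_unsup = -(torch.log(p_real_unl + 1e-8).mean()
                   + torch.log(p_fake + 1e-8).mean())
    loss_D = loss_sup + loss_unsup            # discriminator/classifier objective

The unlabeled data shapes the decision boundaries through loss_unsup even though it carries no location labels, which is where the labeling-effort saving comes from.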
Current peripheral execution approaches for intermittently-powered systems require full access to the internal hardware state for checkpointing, or rely on application-level energy estimation for task partitioning to make correct forward progress. Both requirements present significant practical challenges for energy-harvesting, intelligent edge IoT devices that perform hardware-accelerated DNN inference: sophisticated compute peripherals may have inaccessible internal state, and the complexity of DNN models makes it difficult for programmers to partition the application into suitably sized tasks that fit within an estimated energy budget. This paper presents the concept of inference footprinting for intermittent DNN inference, where accelerator progress is cumulatively preserved across power cycles. Our middleware stack, HAWAII, tracks and restores inference footprints efficiently and transparently to make inference forward progress, without requiring access to the accelerator's internal state or application-level energy estimation. Evaluations were carried out on a Texas Instruments device under varied energy budgets and network workloads. Compared to a variety of task-based intermittent approaches, HAWAII improves inference throughput by 5.7% to 95.7%, achieving particularly high performance on heavily accelerated DNNs.
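The footprinting idea can be sketched, in Python for brevity, as a middleware loop that atomically persists a small progress record after each completed accelerator job and resumes from it after a power failure; the record layout, the file standing in for nonvolatile memory, and the job granularity are assumptions, not HAWAII's actual design.

    import json, os

    FOOTPRINT = "footprint.json"   # stands in for nonvolatile memory (e.g., FRAM)

    def save_footprint(layer, tile):
        """Persist progress atomically after each completed accelerator job."""
        tmp = FOOTPRINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump({"layer": layer, "tile": tile}, f)
        os.replace(tmp, FOOTPRINT)  # atomic commit: footprint is never half-written

    def load_footprint():
        if os.path.exists(FOOTPRINT):
            fp = json.load(open(FOOTPRINT))
            return fp["layer"], fp["tile"]
        return 0, 0                 # fresh start

    def run_inference(n_layers=4, tiles_per_layer=8):
        layer0, tile0 = load_footprint()  # resume where the last power cycle ended
        for layer in range(layer0, n_layers):
            for tile in range(tile0 if layer == layer0 else 0, tiles_per_layer):
                pass                          # dispatch one accelerator job here
                save_footprint(layer, tile + 1)  # progress survives power loss

    run_inference()

Note that only the job index is preserved, never the accelerator's internal registers, which mirrors the claim that no access to the accelerator's internal state is required.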
Spatiotemporal super-resolution (SR) aims to upscale both the spatial and temporal dimensions of input videos, producing videos with higher frame resolutions and frame rates. It involves two essential sub-tasks: spatial SR and temporal SR. In this work, we design a two-stream network for spatiotemporal SR. One stream contains a temporal SR module followed by a spatial SR module, while the other stream has the same two modules in the reverse order. Since the two sub-tasks can be performed in either order, the two network streams are expected to produce consistent spatiotemporal SR results. We therefore introduce a cross-stream consistency loss to enforce similarity between the outputs of the two streams. In this way, the training of the two streams is correlated, allowing the two SR modules to share their supervisory signals and improve each other. In addition, the proposed cross-stream consistency consumes no labeled training data and can guide network training in an unsupervised manner. We leverage this property to carry out semi-supervised spatiotemporal SR. Our method thus makes the most of the training data and can derive an effective model from only a few high-resolution, high-frame-rate videos, achieving state-of-the-art performance.
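The cross-stream consistency can be sketched as an unsupervised loss penalizing disagreement between the two processing orders; the SR operators below are linear placeholders (so the two streams agree almost exactly), whereas with learned modules the residual provides the training signal.

    import torch
    import torch.nn.functional as F

    def spatial_sr(x):     # placeholder: 2x spatial upscaling per frame
        b, t, c, h, w = x.shape
        return F.interpolate(x.flatten(0, 1), scale_factor=2, mode="bilinear",
                             align_corners=False).view(b, t, c, 2 * h, 2 * w)

    def temporal_sr(x):    # placeholder: ~2x frame rate via frame interpolation
        mid = 0.5 * (x[:, :-1] + x[:, 1:])
        out = torch.stack([x[:, :-1], mid], dim=2).flatten(1, 2)
        return torch.cat([out, x[:, -1:]], dim=1)

    video = torch.randn(2, 4, 3, 16, 16)          # (batch, frames, C, H, W), unlabeled
    stream_a = spatial_sr(temporal_sr(video))     # temporal SR first
    stream_b = temporal_sr(spatial_sr(video))     # spatial SR first
    consistency = F.l1_loss(stream_a, stream_b)   # needs no ground-truth HR/HFR video

Because the loss is computed purely between the two streams' outputs, any unlabeled low-resolution, low-frame-rate video contributes to training, which enables the semi-supervised setting described above.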
Mobile/multi-access edge computing (MEC) has been developed to support upcoming AI-aware mobile services, which require low latency and intensive computation resources at the edge of the network. One of the most challenging issues in MEC is service provision under user mobility: offloading and migration decisions must be handled jointly to maximize network utility within latency constraints, which is difficult when users are mobile. In this paper, we propose the Mobility-Aware Deep Reinforcement Learning (M-DRL) framework for mobile service provision problems in MEC systems. M-DRL is composed of two parts: a DRL component specialized for joint training of multiple users, and glimpse, a seq2seq model customized for mobility prediction that predicts a sequence of future locations, a "glimpse" of the future. By integrating the proposed DRL component and the glimpse mobility prediction model, the M-DRL framework handles the service provision problem in MEC with acceptable computational complexity and near-optimal performance.
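The glimpse component can be sketched as a GRU-based seq2seq model that encodes a past trajectory and autoregressively decodes a sequence of future locations; the hidden size, horizon, and coordinate format are illustrative assumptions.

    import torch
    import torch.nn as nn

    class Glimpse(nn.Module):
        """Seq2seq location predictor: past trajectory in, future trajectory out."""
        def __init__(self, hid=64, horizon=5):
            super().__init__()
            self.enc = nn.GRU(2, hid, batch_first=True)  # (x, y) coordinates
            self.dec = nn.GRUCell(2, hid)
            self.out = nn.Linear(hid, 2)
            self.horizon = horizon
        def forward(self, past):                  # past: (batch, T, 2)
            _, h = self.enc(past)
            h = h.squeeze(0)
            step, preds = past[:, -1], []
            for _ in range(self.horizon):         # autoregressive decoding
                h = self.dec(step, h)
                step = self.out(h)
                preds.append(step)
            return torch.stack(preds, dim=1)      # (batch, horizon, 2)

    future = Glimpse()(torch.randn(8, 10, 2))     # predicted locations feed the DRL state

In the integrated framework, the predicted trajectory would augment the DRL state so that offloading and migration decisions anticipate where users are heading rather than reacting to where they are.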
Mobile Edge Computing (MEC) is a promising technique in the 5G era to improve the Quality of Experience (QoE) of online video streaming, owing to its ability to reduce backhaul transmission by caching content at the edge. However, the user association and video quality selection problem under limited MEC resources must still be addressed to fully support the low-latency demands of live video streaming. The resulting optimization problem is a non-linear integer program, for which a globally optimal solution cannot be obtained in polynomial time. In this paper, we first reformulate the problem as a Markov Decision Process (MDP) and then develop a Deep Deterministic Policy Gradient (DDPG)-based algorithm exploiting the supply-demand interpretation of the Lagrange dual problem. Simulation results show that the proposed approach achieves significant QoE improvement over other baselines, especially in scenarios with scarce wireless resources and many users.
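The supply-demand interpretation of the Lagrange dual can be sketched as a subgradient update of the multiplier, read as a resource price that rises when aggregate demand exceeds supply; the demand model standing in for the DDPG actor below is a toy assumption.

    import numpy as np

    capacity = 10.0          # supply: available wireless resource units (toy)
    lam, step = 0.0, 0.1     # Lagrange multiplier ("price") and step size

    rng = np.random.default_rng(0)
    for t in range(200):
        # Stand-in for the DDPG actor: a higher price makes users request less.
        demand = np.clip(rng.uniform(2, 5, size=4) - lam, 0, None).sum()
        # Subgradient ascent on the dual: the price moves to balance
        # aggregate demand against the fixed supply.
        lam = max(0.0, lam + step * (demand - capacity))

    print(f"equilibrium price ~ {lam:.2f}")

In the full algorithm the policy and the multiplier are learned jointly, but the same price-adjustment mechanism drives the resource allocation toward feasibility.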
Live video streaming services suffer from the limited backhaul capacity of the cellular core network and occasional congestion caused by the cloud-based architecture. Mobile Edge Computing (MEC) brings services from the centralized cloud to the nearby network edge to improve the Quality of Experience (QoE) of cloud services such as live video streaming. Nevertheless, resources at edge devices remain limited and should be allocated in an economically efficient manner. In this paper, we propose Edge Combinatorial Clock Auction (ECCA) and Combinatorial Clock Auction in Stream (CCAS), two auction frameworks that improve the QoE of live video streaming in edge-enabled cellular systems. The edge system is the auctioneer, deciding the backhaul capacity and caching space allocation; streamers are the bidders, requesting backhaul capacity and caching space to improve the video quality their audiences can watch. Two key subproblems arise: evaluating the value of caching space and allocating it. We show that both can be solved by the proposed dynamic programming algorithms, as sketched below. The truth-telling property is guaranteed in both ECCA and CCAS. Simulation results show that overall system utility can be significantly improved by the proposed system.
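The caching-space evaluation and allocation subproblems have a knapsack-like structure; a minimal dynamic programming sketch under an assumed per-streamer value table is given below (the value table and capacity are toy numbers, not the paper's model).

    def allocate_cache(values, total):
        """values[i][s] = streamer i's value when given s units of caching space.
        Returns (max total value, per-streamer allocation)."""
        n = len(values)
        dp = [0.0] * (total + 1)      # dp[c] = best value within capacity c so far
        choice = [[0] * (total + 1) for _ in range(n)]
        for i in range(n):
            new = [0.0] * (total + 1)
            for c in range(total + 1):
                for s in range(min(c, len(values[i]) - 1) + 1):
                    v = dp[c - s] + values[i][s]
                    if v > new[c]:
                        new[c], choice[i][c] = v, s
            dp = new
        alloc, c = [0] * n, total
        for i in range(n - 1, -1, -1):  # backtrack the chosen splits
            alloc[i] = choice[i][c]
            c -= alloc[i]
        return dp[total], alloc

    toy = [[0, 3, 4], [0, 2, 5], [0, 4, 4]]  # value of 0/1/2 cache units per streamer
    print(allocate_cache(toy, 4))            # -> (12.0, [1, 2, 1])

The same table-filling pattern serves both subproblems: with bid prices in the value table it evaluates what each bundle of caching space is worth, and with reported valuations it computes the utility-maximizing allocation.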