Mobile/multi-access edge computing (MEC) has been developed to support emerging AI-aware mobile services, which require low latency and intensive computation resources at the edge of the network. One of the most challenging issues in MEC is service provision under user mobility. It is well known that the offloading decision and resource allocation must be handled jointly to optimize service provision efficiency within latency constraints, which becomes especially challenging when users are mobile. In this paper, we propose the Mobility-Aware Deep Reinforcement Learning (M-DRL) framework for mobile service provision in the MEC system. M-DRL is composed of two parts: glimpse, a seq2seq model customized for mobility prediction that predicts a sequence of locations like a “glimpse” of the future, and a DRL model specialized for offloading decisions and resource allocation in MEC. By integrating the proposed DRL model with the glimpse mobility predictor, the M-DRL framework handles MEC service provision with an average performance improvement of 70%.
Speech quality estimation has recently undergone a paradigm shift from human-hearing expert designs to machine-learning models. However, current models rely mainly on supervised learning, for which label collection is time-consuming and expensive. To solve this problem, we propose VQScore, a self-supervised metric for evaluating speech based on the quantization error of a vector-quantized variational autoencoder (VQ-VAE). The training of the VQ-VAE relies on clean speech; hence, large quantization errors can be expected when the speech is distorted. To further improve the correlation with real quality scores, domain knowledge of speech processing is incorporated into the model design. We found that the vector quantization mechanism can also be used for self-supervised speech enhancement (SE) model training. To improve the robustness of the encoder for SE, a novel self-distillation mechanism combined with adversarial training is introduced. In summary, the proposed speech quality estimation method and enhancement models require only clean speech for training, without any label requirements. Experimental results show that the proposed VQScore and enhancement model are competitive with supervised baselines.
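The core idea behind VQScore can be illustrated with a toy sketch: a codebook fit to clean inputs yields small quantization errors on clean data and larger ones on distorted data. The codebook and features below are random stand-ins, not the actual VQ-VAE.

```python
import numpy as np

def quantization_error(features, codebook):
    """Mean squared distance from each frame-level feature to its nearest
    codeword. A codebook trained only on clean speech yields small errors
    for clean inputs and larger errors for distorted ones."""
    # Pairwise squared distances: (n_frames, n_codes)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))           # stand-in for learned VQ codewords
clean = codebook[rng.integers(0, 16, 100)]    # frames that match the codebook
noisy = clean + rng.normal(scale=1.0, size=clean.shape)

e_clean = quantization_error(clean, codebook)
e_noisy = quantization_error(noisy, codebook)
```

The gap between `e_clean` and `e_noisy` is what lets quantization error serve as a label-free quality proxy.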
Reconfigurable intelligent surface (RIS) is a revolutionary passive radio technique that facilitates capacity enhancement beyond current massive multiple-input multiple-output (MIMO) transmission. However, potential hardware impairment (HWI) of the RIS usually causes inevitable performance degradation and amplifies the effect of imperfect channel state information (CSI); these impacts have not been fully investigated in RIS-assisted wireless networks. This paper develops a robust joint RIS and transceiver design algorithm that minimizes the worst-case mean square error (MSE) of the received signal under HWI and imperfect CSI in the RIS-assisted multi-user MIMO (MU-MIMO) wireless network. Specifically, since the robust joint design problem is non-convex under severe HWI, an iterative three-step convex algorithm is developed to approach optimality via relaxation and convex transformation. Compared with state-of-the-art baselines that ignore HWI, the proposed robust algorithm mitigates the degradation caused by HWI and effectively reduces the worst-case MSE in several numerical simulations. Moreover, owing to the properties of HWI, the performance loss becomes notable as the number of reflecting elements grows in the RIS-assisted MU-MIMO wireless network.
Vehicle-to-everything (V2X) communication is one of the key technologies of 5G New Radio to support emerging applications such as autonomous driving. Due to the high density of vehicles, Remote Radio Heads (RRHs) will be deployed as Road Side Units to support V2X. Nevertheless, activating all RRHs during low-traffic off-peak hours may waste energy. Proper activation of RRHs and association between vehicles and RRHs, while maintaining the required service quality, are the keys to reducing energy consumption. In this work, we first formulate the problem as an Integer Linear Programming optimization problem and prove that the problem is NP-hard. Then, we propose two novel algorithms, referred to as “Least Delete (LD)” and “Largest-First Rounding with Capacity Constraints (LFRCC).” The simulation results show that the proposed algorithms achieve significantly better performance than existing solutions and are competitive with the optimal solution. Specifically, the LD and LFRCC algorithms reduce the number of activated RRHs by 86% and 89%, respectively, in low-density scenarios. In high-density scenarios, the LD algorithm reduces the number of activated RRHs by 90%. In addition, the solution of LFRCC is within 7% of the optimal solution on average.
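The underlying optimization problem can be sketched on a toy instance: activate the fewest RRHs such that every vehicle is assignable to an active, covering RRH within capacity. The brute-force search below illustrates the problem itself (it does not reproduce the LD or LFRCC heuristics, and the instance is hypothetical).

```python
from itertools import combinations

def feasible(active, coverage, capacity):
    """Backtracking check: can every vehicle be assigned to an active,
    covering RRH without exceeding any RRH's capacity?"""
    load = {r: 0 for r in active}
    vehicles = list(coverage)
    def assign(i):
        if i == len(vehicles):
            return True
        for r in coverage[vehicles[i]] & set(active):
            if load[r] < capacity[r]:
                load[r] += 1
                if assign(i + 1):
                    return True
                load[r] -= 1
        return False
    return assign(0)

def min_active_rrhs(coverage, capacity):
    """Exhaustively search subsets by size; the first feasible subset is
    an optimal (minimum-cardinality) activation. Exponential time, which
    is consistent with the problem being NP-hard."""
    rrhs = sorted(capacity)
    for k in range(1, len(rrhs) + 1):
        for active in combinations(rrhs, k):
            if feasible(active, coverage, capacity):
                return set(active)
    return None

# Toy instance: 3 vehicles, 3 candidate RRHs.
coverage = {0: {"a", "b"}, 1: {"a"}, 2: {"b", "c"}}
capacity = {"a": 2, "b": 2, "c": 1}
best = min_active_rrhs(coverage, capacity)
```

Heuristics such as LD and LFRCC are needed precisely because this exhaustive search is intractable at realistic scale.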
Dissecting low-level malware behaviors into human-readable reports, such as cyber threat intelligence, is time-consuming and requires expertise in systems and cybersecurity. This work combines dynamic analysis with AI-driven generative transformation for malware report generation, providing detailed technical insights and articulating malware intentions.
Designing intelligent, tiny devices with limited memory is immensely challenging, exacerbated by the additional memory requirement of residual connections in deep neural networks. In contrast to existing approaches that eliminate residuals to reduce peak memory usage at the cost of significant accuracy degradation, this paper presents DERO, which reorganizes residual connections by leveraging insights into the types and interdependencies of operations across residual connections. Evaluations were conducted across diverse model architectures designed for common computer vision applications. DERO consistently achieves peak memory usage comparable to plain-style models without residuals, while closely matching the accuracy of the original models with residuals.
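The extra footprint that residual connections impose, and that DERO targets, can be illustrated with a toy accounting of live activations during layer-by-layer execution. The size model and function below are illustrative only, not DERO's actual analysis.

```python
def peak_activation_memory(sizes, skips=()):
    """Peak live-activation footprint of a network executed layer by layer.

    sizes[i]  -- size of the activation produced at stage i (sizes[0] = input)
    skips     -- (s, e) pairs: the stage-s activation must stay alive until
                 it is added back at stage e (a residual connection)
    """
    peak = 0
    for i in range(1, len(sizes)):
        live = sizes[i - 1] + sizes[i]  # input and output of the current layer
        # A skip tensor is extra live memory while the layers between its
        # endpoints execute (unless it is already the current layer's input).
        live += sum(sizes[s] for s, e in skips if s < i - 1 and i <= e)
        peak = max(peak, live)
    return peak

plain = peak_activation_memory([4, 4, 4])               # no residuals
residual = peak_activation_memory([4, 4, 4], [(0, 2)])  # one skip over two layers
```

The skip raises peak memory by the size of the held tensor; reorganizing where residuals start and end, as DERO does, aims to bring the peak back toward the plain-model level.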
This study investigates importance sampling in the setting of mini-batch stochastic gradient descent, and our contributions are twofold. First, theoretically, we develop a neat tilting formula, which can be regarded as a general device for asymptotically optimal importance sampling. Second, practically, guided by the formula, we present an effective importance sampling algorithm that accounts for the effects of mini-batches and leverages the Markovian property of the gradients between iterations. Experiments conducted on artificial data confirm that our algorithm consistently delivers superior variance reduction. Furthermore, experiments carried out on real-world data demonstrate that our method, when paired with relatively straightforward models such as multilayer perceptrons (MLP) and convolutional neural networks (CNN), outperforms baselines in terms of training loss and testing error.
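The variance-reduction mechanism can be illustrated on a toy scalar example: sampling examples with probability proportional to their gradient magnitude, and reweighting by the inverse sampling probability, keeps the mini-batch estimator unbiased while shrinking its variance. This is the classical optimal-sampling construction, shown as a sketch; it is not the paper's tilting formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Heavy-tailed per-example "gradient magnitudes" (a common regime in SGD).
g = rng.lognormal(mean=0.0, sigma=2.0, size=n)

def minibatch_estimates(p, batch=32, reps=2000):
    """Unbiased estimates of mean(g) from mini-batches drawn under p,
    using importance weights 1 / (n * p_i)."""
    idx = rng.choice(n, size=(reps, batch), p=p)
    w = 1.0 / (n * p[idx])
    return (w * g[idx]).mean(axis=1)

uniform = np.full(n, 1.0 / n)
tilted = g / g.sum()   # sampling probability proportional to |gradient|

est_uniform = minibatch_estimates(uniform)
est_tilted = minibatch_estimates(tilted)
```

For this scalar, non-negative case the tilted estimator is in fact exact (each weighted term equals `g.sum() / n`), which is the extreme form of the variance reduction the study pursues in the general setting.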
This study examines a downlink multiple-input single-output (MISO) system, where a base station (BS) with multiple antennas sends data to multiple single-antenna users with the help of a reconfigurable intelligent surface (RIS) and a half-duplex decode-and-forward (DF) relay. The system's sum rate is maximized through joint optimization of active beamforming at the BS and DF relay and passive beamforming at the RIS. The conventional alternating optimization algorithm for handling this complex design problem is suboptimal and computationally intensive. To overcome these challenges, this letter proposes a two-phase graph neural network (GNN) model that learns the joint beamforming strategy by exchanging and updating relevant relational information embedded in the graph representation of the transmission system. The proposed method demonstrates superior performance compared to existing approaches, robustness against channel imperfections and variations, generalizability across varying user numbers, and notable complexity advantages.
In the rapidly evolving realm of machine learning, algorithm effectiveness often faces limitations due to data quality and availability. Traditional approaches grapple with data sharing due to legal and privacy concerns. The federated learning framework addresses this challenge. Federated learning is a decentralized approach in which model training occurs on the client side, preserving privacy by keeping data localized. Instead of sending raw data to a central server, only model updates are exchanged, enhancing data security. We apply this framework to Sparse Principal Component Analysis (SPCA) in this work. SPCA aims to attain sparse component loadings while maximizing data variance for improved interpretability. Besides the ℓ1-norm regularization term in conventional SPCA, we add a smoothing function to facilitate gradient-based optimization methods. Moreover, to improve computational efficiency, we introduce a least-squares approximation to the original SPCA. This enables analytic solutions in the optimization process, leading to substantial computational improvements. Within the federated framework, we formulate SPCA as a consensus optimization problem, which can be solved using the Alternating Direction Method of Multipliers (ADMM). Our extensive experiments involve both IID and non-IID random features across various data owners. Results on synthetic and public datasets affirm the efficacy of our federated SPCA approach.
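The consensus-ADMM structure described above can be sketched on a simpler ℓ1-regularized least-squares problem: each client updates a local variable against its own data, while the server only averages and soft-thresholds, so raw data never leaves the clients. This is a minimal illustration of the communication pattern, not the paper's SPCA formulation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def federated_admm(As, bs, lam=0.1, rho=1.0, iters=200):
    """Consensus ADMM for sum_k 0.5*||A_k x - b_k||^2 + lam*||z||_1.
    Clients solve local ridge subproblems analytically; the server only
    averages (x_k + u_k) and soft-thresholds the result."""
    d, K = As[0].shape[1], len(As)
    us = [np.zeros(d) for _ in range(K)]
    z = np.zeros(d)
    # Pre-factor each client's local system (A_k^T A_k + rho I).
    invs = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in As]
    Atbs = [A.T @ b for A, b in zip(As, bs)]
    for _ in range(iters):
        xs = [invs[k] @ (Atbs[k] + rho * (z - us[k])) for k in range(K)]
        z = soft_threshold(np.mean([x + u for x, u in zip(xs, us)], axis=0),
                           lam / (rho * K))
        us = [u + x - z for u, x in zip(us, xs)]
    return z

rng = np.random.default_rng(0)
x_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])      # sparse ground truth
As = [rng.normal(size=(40, 5)) for _ in range(3)]  # three clients' local features
bs = [A @ x_true for A in As]                      # noise-free local targets
z = federated_admm(As, bs)
```

The analytic per-client update mirrors the computational benefit the abstract attributes to the least-squares approximation: each round is a matrix-vector product rather than an inner optimization loop.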
Sparse grid imputation (SGI) is a challenging problem: its goal is to infer the values of the entire grid from a limited number of cells with known values. Traditionally, the problem is solved using regression methods such as KNN and kriging, whereas in the real world there is often extra information---usually imprecise---that can aid inference and yield better performance. In the SGI problem, in addition to the limited number of fixed grid cells with precise target-domain values, there are contextual data and imprecise observations over the whole grid. To solve this problem, we propose a distribution estimation theory for the whole grid and realize the theory via a composition of the Target-Embedding and Contextual CycleGAN architectures, trained with contextual information and imprecise observations. Contextual CycleGAN is structured as two generator-discriminator pairs and uses different types of contextual loss to guide the training. We consider the real-world problem of fine-grained PM2.5 inference under realistic settings: a few (less than 1%) grid cells with precise PM2.5 data, and all grid cells with contextual weather information and imprecise observations from satellites and microsensors. The task is to infer reasonable values for all grid cells. As there is no ground truth for empty cells, out-of-sample MSE (mean squared error) and JSD (Jensen--Shannon divergence) measurements are used in the empirical study. The results show that Contextual CycleGAN supports the proposed theory and outperforms the comparison methods.
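For reference, the classical regression baseline mentioned above can be sketched in a few lines: KNN imputation fills each empty grid cell from its nearest observed cells. This is illustrative only; unlike the proposed Contextual CycleGAN, it uses no contextual data or imprecise observations.

```python
import numpy as np

def knn_grid_impute(grid, mask, k=3):
    """Fill unobserved cells (mask == False) with the mean of the k nearest
    observed cells, using Euclidean distance in grid coordinates."""
    obs = np.argwhere(mask)          # coordinates of observed cells
    vals = grid[mask]                # their values, in the same order
    out = grid.astype(float).copy()
    for i, j in np.argwhere(~mask):
        d = np.hypot(obs[:, 0] - i, obs[:, 1] - j)
        out[i, j] = vals[np.argsort(d)[:k]].mean()
    return out

grid = np.array([[1.0, 2.0],
                 [3.0, 0.0]])        # 0.0 is a placeholder, not an observation
mask = np.array([[True, True],
                 [True, False]])     # only three cells are observed
filled = knn_grid_impute(grid, mask, k=3)
```

Such baselines use only the sparse precise cells, which is exactly the limitation the contextual approach is designed to overcome.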