Low earth orbit (LEO) satellite-enabled orthogonal frequency division multiple access (OFDMA) systems will play a pivotal role in future integrated satellite-terrestrial networks to realize ubiquitous high-throughput communication. However, the high mobility of LEO satellites and the use of Ku/Ka and millimeter wave (mmWave) bands introduce wide-range Doppler shifts, which are especially detrimental to OFDMA-based systems. Existing Doppler shift compensation methods are limited by their need for prior user location information and/or the high computational complexity of searching across broad Doppler shift ranges. In this work, we propose a multi-stage method to compensate for wide-range Doppler shifts in downlink LEO satellite OFDMA systems over the Ku/Ka to mmWave bands. The proposed method consists of three stages: incorporating the phase-differential (PD) operation into the extended Kalman filter (EKF) to widen the estimation range, refining the compensation with a repetition training sequence, and exploiting the cyclic prefix (CP) for fine estimation. Simulation results demonstrate the proposed method's effectiveness in handling Doppler shifts in LEO satellite communications over different channels and frequency bands. Moreover, the method attains the maximum estimation range and exhibits high accuracy with low complexity, irrespective of the Doppler shift range, making it an effective, practical, and easily implementable solution for LEO satellite communication.
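The CP-based fine-estimation stage builds on a classic observation: the cyclic prefix is a copy of the symbol tail, so any residual carrier frequency offset appears as a fixed phase rotation between the two copies. A minimal sketch of standard CP correlation (not the paper's full three-stage algorithm; all parameter values are illustrative):

```python
import numpy as np

def cp_cfo_estimate(rx, n_fft, cp_len):
    """Estimate the fractional CFO (in subcarrier spacings) from one OFDM
    symbol by correlating its cyclic prefix with the symbol tail."""
    corr = np.sum(np.conj(rx[:cp_len]) * rx[n_fft:n_fft + cp_len])
    return np.angle(corr) / (2 * np.pi)

# Synthetic check: apply a known CFO to a random OFDM symbol.
rng = np.random.default_rng(0)
n_fft, cp_len, eps = 256, 32, 0.12            # CFO = 0.12 subcarrier spacings
sym = np.fft.ifft(rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft))
tx = np.concatenate([sym[-cp_len:], sym])     # prepend cyclic prefix
rx = tx * np.exp(2j * np.pi * eps * np.arange(tx.size) / n_fft)
print(round(cp_cfo_estimate(rx, n_fft, cp_len), 3))   # prints 0.12
```

This estimator alone is unambiguous only within ±0.5 subcarrier spacings, which is why the abstract's earlier PD-EKF and repetition-sequence stages are needed to resolve the wide-range part of the Doppler shift first.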
With the advances of machine learning, edge computing, and wireless communications, split inference has attracted increasing attention as a versatile inference paradigm. Split inference accelerates large-scale deep neural network (DNN) inference on resource-limited edge devices by partitioning a DNN between the edge device and the cloud server over advanced wireless links such as B5G/6G and WiFi 6. We investigate the U-shape partitioning inference system, in which both the raw input data and the output inference results are kept on the edge device. We use image semantic segmentation as an exemplary application in our experiments. The experimental results show that an honest-but-curious (HbC) server can launch a bidirectional privacy attack to reconstruct the raw data and steal the inference results, even when only the middle partition of the model is visible to it. To ensure bidirectional privacy and a good user experience in the U-shape partitioning inference system, a privacy- and latency-aware partitioning strategy is needed to balance the trade-off between service latency and data privacy. We compare our proposed framework to other inference paradigms, including conventional split inference and inference entirely on the edge device or the server. We analyze their inference latencies under various wireless technologies and quantitatively measure their levels of privacy protection. The results show that the U-shape partitioning inference system is advantageous over inference entirely on the edge device or the server.
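As a back-of-the-envelope illustration of the latency side of this trade-off, the end-to-end latency of U-shape inference can be modeled as head, cloud, and tail compute time plus the time to ship intermediate features both ways over the wireless link. All numbers below are hypothetical, not measurements from the paper:

```python
def ushape_latency(t_head, t_cloud, t_tail, up_bits, down_bits, rate_bps, rtt=0.0):
    """End-to-end latency of U-shape split inference: the edge device runs
    the head and tail partitions locally; only intermediate features (never
    raw inputs or final results) cross the link to and from the server."""
    t_link = (up_bits + down_bits) / rate_bps + rtt
    return t_head + t_cloud + t_tail + t_link

# Hypothetical numbers: 5 ms head, 20 ms cloud body, 4 ms tail,
# 2 Mb of features each way over a 100 Mb/s WiFi 6 link, 10 ms RTT.
t = ushape_latency(0.005, 0.020, 0.004, 2e6, 2e6, 100e6, rtt=0.010)
print(f"{t * 1e3:.1f} ms")   # prints 79.0 ms
```

Moving the split points changes both the compute split and the feature sizes, which is exactly the knob a privacy- and latency-aware partitioning strategy has to tune.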
State-of-the-art (SOTA) semi-supervised learning techniques, such as FixMatch and its variants, have demonstrated impressive performance in classification tasks. However, these methods are not directly applicable to regression tasks. In this paper, we present RankUp, a simple yet effective approach that adapts existing semi-supervised classification techniques to enhance the performance of regression tasks. RankUp achieves this by converting the original regression task into a ranking problem, trained by an auxiliary ranking classifier concurrently with the original regression objective. The ranking classifier outputs a classification result, thus enabling integration with existing semi-supervised classification methods. Moreover, we introduce regression distribution alignment (RDA), a complementary technique that further enhances RankUp's performance by refining pseudo-labels through distribution alignment. Despite its simplicity, RankUp, with or without RDA, achieves SOTA results across a range of regression benchmarks, including computer vision, audio, and natural language processing tasks.
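The core conversion, from regression targets to a ranking problem, can be sketched as pairwise comparisons: each pair of samples yields a binary label indicating which one has the larger target. This toy sketch shows only the label construction, not RankUp's actual training loss, batching, or RDA:

```python
import numpy as np

def pairwise_rank_labels(y):
    """Convert regression targets into binary pairwise-ranking labels:
    label[i, j] = 1 if y[i] > y[j], else 0. Because the auxiliary ranking
    head outputs a classification result, confidence-based semi-supervised
    tricks (e.g. FixMatch-style pseudo-labeling) can be applied to it."""
    y = np.asarray(y)
    return (y[:, None] > y[None, :]).astype(int)

print(pairwise_rank_labels([0.3, 1.2, 0.7]))
# [[0 0 0]
#  [1 0 1]
#  [1 0 0]]
```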
Hepatocellular carcinoma (HCC), the most common type of liver cancer, poses significant challenges in detection and diagnosis. Medical imaging, especially computed tomography (CT), is pivotal in non-invasively identifying this disease but requires substantial expertise to interpret. This research introduces an innovative strategy that integrates two-dimensional (2D) and three-dimensional (3D) deep learning models within a federated learning (FL) framework for precise segmentation of liver and tumor regions in medical images. The study utilized 131 CT scans from the Liver Tumor Segmentation (LiTS) challenge and demonstrated the superior efficiency and accuracy of the proposed Hybrid-ResUNet model, with a Dice score of 0.9433 and an AUC of 0.9965, compared to ResNet and EfficientNet models. The FL approach enables large-scale clinical trials while safeguarding patient privacy across healthcare settings, and it facilitates active engagement in problem-solving, data collection, model development, and refinement. The study also addresses data imbalance in the FL context, demonstrating resilience and highlighting the robust performance of local models. Future research will concentrate on refining federated learning algorithms and incorporating them into continuous integration and deployment (CI/CD) processes for AI system operations, emphasizing the dynamic involvement of clients. We recommend a collaborative human-AI endeavor to enhance feature extraction and knowledge transfer. These improvements are intended to foster equitable and efficient data collaboration across various sectors in practical scenarios, offering a guide for forthcoming research in medical AI.
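The FL framework trains segmentation models locally and aggregates only parameters, so CT scans never leave the hospital. A minimal sketch of the standard FedAvg-style weighted aggregation (illustrative only; the paper's exact aggregation rule may differ):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size-weighted average
    (standard FedAvg). Each element of client_weights is a list of arrays,
    one array per model layer; client_sizes holds local dataset sizes."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical clients with one-parameter "models" and dataset sizes 30 / 70.
agg = fedavg([[np.array([1.0])], [np.array([3.0])]], [30, 70])
print(agg[0])   # weighted mean: 0.3*1 + 0.7*3 = 2.4
```

The size weighting is also the point where the abstract's data-imbalance concern enters: clients with very different dataset sizes pull the global model unevenly.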
The utilization of face masks is an essential healthcare measure, particularly during pandemics, yet it can hinder communication in daily life. To address this problem, we propose the human-in-the-loop StarGAN (HL–StarGAN) face-masked speech enhancement method. HL–StarGAN comprises a discriminator, a classifier, a metric assessment predictor, and an attention-based generator. The metric assessment predictor, referred to as MaskQSS, incorporates human participants in its development and serves as a “human-in-the-loop” module during the learning process of HL–StarGAN. The overall HL–StarGAN model is trained with an unsupervised learning strategy that simultaneously focuses on reconstructing the original clean speech and optimizing human perception. To implement HL–StarGAN, we created a face-masked speech database named “FMVD,” which comprises recordings from 34 speakers in three distinct face-masked scenarios and one clean condition. We conducted subjective and objective tests on the proposed HL–StarGAN using this database. The results are as follows: (1) MaskQSS successfully predicted the quality scores of face-masked voices, outperforming several existing speech assessment methods. (2) Integrating the MaskQSS predictor enhanced the ability of HL–StarGAN to transform face-masked voices into high-quality speech; this enhancement is evident in both objective and subjective tests, outperforming conventional StarGAN- and CycleGAN-based systems.
This letter explores energy efficiency (EE) maximization in a downlink multiple-input single-output (MISO) reconfigurable intelligent surface (RIS)-aided multiuser system employing rate-splitting multiple access (RSMA). The optimization task entails base station (BS) and RIS beamforming and RSMA common rate allocation with constraints. We propose a graph neural network (GNN) model that learns beamforming and rate allocation directly from the channel information using a unique graph representation derived from the communication system. The GNN model outperforms existing deep neural network (DNN) and model-based methods in terms of EE, demonstrating low complexity, resilience to imperfect channel information, and effective generalization across varying user numbers.
Mobile/multi-access edge computing (MEC) is being developed to support upcoming AI-aware mobile services, which require low latency and intensive computation resources at the network edge. One of the most challenging issues in MEC is service provision under user mobility. The offloading decision and resource allocation must be handled jointly to optimize service provision efficiency within latency constraints, which is especially challenging when users are mobile. In this paper, we propose a Mobility-Aware Deep Reinforcement Learning (M-DRL) framework for mobile service provision in MEC systems. M-DRL is composed of two parts: glimpse, a seq2seq model customized for mobility prediction that predicts a sequence of future locations, like a “glimpse” of the future, and a DRL agent specialized in offloading decisions and resource allocation in MEC. By integrating the proposed DRL agent with the glimpse mobility prediction model, the M-DRL framework handles MEC service provision with an average performance improvement of 70%.
Speech quality estimation has recently undergone a paradigm shift from human-hearing expert designs to machine-learning models. However, current models rely mainly on supervised learning, which is time-consuming and expensive for label collection. To solve this problem, we propose VQScore, a self-supervised metric for evaluating speech based on the quantization error of a vector-quantized-variational autoencoder (VQ-VAE). The training of VQ-VAE relies on clean speech; hence, large quantization errors can be expected when the speech is distorted. To further improve correlation with real quality scores, domain knowledge of speech processing is incorporated into the model design. We found that the vector quantization mechanism could also be used for self-supervised speech enhancement (SE) model training. To improve the robustness of the encoder for SE, a novel self-distillation mechanism combined with adversarial training is introduced. In summary, the proposed speech quality estimation method and enhancement models require only clean speech for training without any label requirements. Experimental results show that the proposed VQScore and enhancement model are competitive with supervised baselines.
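The key observation, that a VQ-VAE trained on clean speech yields small quantization error on clean inputs and larger error on distorted ones, can be illustrated with a toy Euclidean codebook (random latents, not the paper's trained model or its exact score definition):

```python
import numpy as np

def vq_error(z, codebook):
    """Mean distance from encoder latents z (T, D) to their nearest
    codebook vectors (K, D). Latents resembling the (clean-speech) training
    data land close to the codebook; distortion inflates this error, so a
    low quantization error indicates high speech quality."""
    d = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
codebook = rng.standard_normal((64, 8))
clean = codebook[rng.integers(0, 64, 100)]            # latents on the codebook
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
print(vq_error(clean, codebook) < vq_error(noisy, codebook))   # prints True
```

In the real system the encoder, decoder, and codebook are learned jointly from clean speech, which is what makes the error a label-free quality proxy.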
Reconfigurable intelligent surface (RIS) is a revolutionary passive radio technique that enables capacity enhancement beyond current massive multiple-input multiple-output (MIMO) transmission. However, hardware impairment (HWI) of the RIS causes inevitable performance degradation and amplifies the effects of imperfect channel state information (CSI); these impacts have not been fully investigated in RIS-assisted wireless networks. This paper develops a robust joint RIS and transceiver design algorithm that minimizes the worst-case mean square error (MSE) of the received signal under HWI and imperfect CSI in the RIS-assisted multi-user MIMO (MU-MIMO) wireless network. Specifically, since the robust joint RIS and transceiver design problem is non-convex under severe HWI, an iterative three-step convex algorithm is developed to approach optimality via relaxation and convex transformation. Compared with state-of-the-art baselines that ignore HWI, the proposed robust algorithm suppresses the degradation caused by HWI and effectively reduces the worst-case MSE in several numerical simulations. Moreover, owing to the properties of the HWI, the performance loss becomes notable as the number of reflecting elements grows in the RIS-assisted MU-MIMO wireless network.
Vehicle-to-everything (V2X) communication is one of the key technologies of 5G New Radio to support emerging applications such as autonomous driving. Due to the high density of vehicles, Remote Radio Heads (RRHs) will be deployed as Road Side Units to support V2X. Nevertheless, activating all RRHs during low-traffic off-peak hours wastes energy. Properly activating RRHs and associating vehicles with RRHs while maintaining the required service quality are the keys to reducing energy consumption. In this work, we first formulate the problem as an Integer Linear Programming optimization problem and prove that it is NP-hard. Then, we propose two novel algorithms, referred to as “Least Delete (LD)” and “Largest-First Rounding with Capacity Constraints (LFRCC).” The simulation results show that the proposed algorithms achieve significantly better performance than existing solutions and are competitive with the optimal solution. Specifically, the LD and LFRCC algorithms reduce the number of activated RRHs by 86% and 89%, respectively, in low-density scenarios. In high-density scenarios, the LD algorithm reduces the number of activated RRHs by 90%. In addition, the LFRCC solution exceeds the optimal solution by no more than 7% on average.