Efficient resource orchestration is essential for ensuring high Quality of Service (QoS) and reliability in Software-Defined Networks (SDNs). This paper introduces an optimization-based algorithm that integrates Lagrangian Relaxation (LR) and queueing theory to enhance admission control and priority scheduling in SDNs. The proposed approach overcomes the limitations of traditional binary admission control by enabling Partial Admission Control (PAC), which allows more flexible resource allocation. System performance is further improved through non-preemptive and preemptive priority scheduling, while LR techniques effectively manage complex network conditions. Specifically, the proposed Bisection-Search (B-S) heuristic leverages the Lagrangian multipliers generated during the optimization process to guide resource allocation, consistently producing high-quality feasible solutions ( ). These solutions are validated against the theoretical bound ( ) provided by the LR method, demonstrating a provably small duality gap. The proposed algorithm is evaluated through extensive simulations across diverse network scales, traffic loads, and delay constraints, demonstrating substantial improvements in network performance and service differentiation. The results provide a comprehensive analysis of the framework's performance envelope, highlighting the trade-offs among solution quality, computational complexity, and network scale. The study offers an adaptive, mathematically grounded solution that is effective in complex, high-contention networking environments.
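The multiplier-guided search can be illustrated with a toy model. The sketch below is not the paper's actual B-S heuristic: it assumes a single link of fixed capacity and a per-unit value for each flow, bisects on the Lagrange multiplier (interpreted as a congestion price) until the admitted load fits the capacity, and admits the marginal flow fractionally in the spirit of PAC.

```python
def partial_admission(flows, capacity, iters=60):
    """Toy multiplier-guided admission control on a single link.
    flows: list of (per_unit_value, demand) pairs."""
    def load(lam):
        # Traffic admitted in full at price lam: flows whose
        # per-unit value strictly exceeds the multiplier.
        return sum(d for v, d in flows if v > lam)

    lo, hi = 0.0, max(v for v, _ in flows) + 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if load(mid) > capacity:
            lo = mid   # price too low: demand exceeds capacity
        else:
            hi = mid   # feasible price: search downward
    lam = hi
    residual = capacity - load(lam)
    admitted = []
    for v, d in flows:
        if v > lam:
            admitted.append(d)        # fully admitted
        elif residual > 0 and abs(v - lam) < 1e-6:
            take = min(d, residual)   # partial admission of marginal flow
            admitted.append(take)
            residual -= take
        else:
            admitted.append(0.0)
    return lam, admitted
```

With three flows of values 5, 3, 1 and demands of 4 each on a link of capacity 6, the price converges to the marginal flow's value (3), the highest-value flow is fully admitted, and the marginal flow receives the residual capacity.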
Wet electrodes with conductive gel are the gold standard for recording EEG signals because of their low scalp–electrode impedance. However, their lengthy preparation before data collection and the cleaning required afterward make them impractical for real-world Brain-Computer Interface (BCI) applications. Recent advances in semi-dry electrodes, which use a minimal amount of conductive material while achieving signal-to-noise quality comparable to wet electrodes, offer an alternative to dry electrodes for continuous EEG monitoring. Our prior study introduced a potential solution to the challenges of hair-layer penetration and dose control through 3D-printed, watermill-shaped EEG electrodes. Building on those promising results, this study prototypes three designs of watermill-shaped EEG electrodes and refines the fabrication process to scale production and accommodate diverse hairstyles in real-world scenarios. Eight wig styles, made of either human or synthetic hair, were tested in offline experiments to evaluate hair-layer penetration and gel application efficiency. In the real-world experiment, 15 participants with varying hairstyles were recruited for neurophysiological experiments. Statistical analysis revealed that the watermill electrodes consumed significantly less gel than wet electrodes (p<0.001), with the star electrode requiring the fewest mean rolls to reach target impedance (1.94 rolls). The results demonstrate that the watermill-shaped electrode works effectively across different hairstyles, ensuring consistent hair-layer penetration and controlled application of conductive material. These findings establish the proposed electrode as a viable semi-dry solution for real-world BCI applications.
The floral export industry, particularly the export of Phalaenopsis orchids, plays a pivotal role in the agricultural economy. Double-spike orchids hold significantly higher commercial value than single-spike varieties. Small-scale farms, however, face limited data volume and availability, making it highly challenging to apply Machine Learning (ML) or Deep Learning (DL) techniques to improve the accuracy of double-spike predictions. This study presents an innovative approach to predicting and enhancing the double-spike rates of Phalaenopsis orchids, addressing critical challenges in the floriculture industry. By leveraging ML, DL, and Federated Learning (FL) frameworks, the research integrates horticultural trait extraction with predictive modeling to optimize orchid cultivation practices. The methodology is a multi-stage process that uses You Only Look Once version 8 (YOLOv8) to extract key features from orchid images, such as leaf dimensions and count, which are combined with historical spike data to train models including Extreme Gradient Boosting (XGBoost), Deep Neural Network (DNN), TabNet, and Gated Adaptive Network for Deep Automated Learning of Features (GANDALF). The results demonstrate that FL effectively resolves the limited data availability and privacy concerns of small-scale farms, enabling secure data collaboration and improving model performance. Continual learning further enhances predictive accuracy by dynamically incorporating new data, ensuring sustained adaptability and relevance. The proposed framework significantly improves growers' ability to identify and prioritize double-spike orchids, a highly valuable trait in export markets, increasing their competitiveness and profitability.
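The federated setting can be sketched with the standard FedAvg aggregation step. This is an illustrative assumption rather than the study's specific FL framework: each farm trains locally, and a server averages the model parameters weighted by local dataset size, so raw orchid data never leaves the farm.

```python
def fed_avg(client_weights, client_sizes):
    """One FedAvg round: average each model parameter across clients,
    weighting by local dataset size; raw data stays on the client.
    client_weights: list of parameter vectors, one per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

For example, averaging two clients holding 1 and 3 samples weights the second client's parameters three times as heavily.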
Biometric recognition plays an increasingly pivotal role in cybersecurity, where the CIA triad (Confidentiality, Integrity, and Availability) forms the cornerstone of information security, with authentication as a critical yet challenging component. This paper presents the Biometric Multi-modal Authentication System using Geometric Programming (BMMA-GPT), tailored for deployment in Fast IDentity Online (FIDO/FIDO2)-enabled environments and Zero Trust Architectures (ZTA). The system employs a dual-threshold mechanism integrated with Defense-in-Depth (DiD) strategies to simultaneously enhance accuracy, efficiency, and security. The underlying optimization problem is formulated as a mathematical programming task and reformulated as a Geometric Programming (GP) model to efficiently compute optimal biometric permutations and verification thresholds under constrained estimation errors. BMMA-GPT enables flexible integration of multiple biometric modalities, allowing dynamic adjustment to both individual user profiles and organizational security requirements. It achieves a high Area Under the Curve (AUC) of approximately 0.99 while keeping authentication latency under 1.5 seconds. This design supports Chief Information Security Officers (CISOs) in configuring tailored authentication processes at minimal computational cost, enhancing resilience against spoofing attacks while ensuring a seamless user experience. By aligning biometric verification with DiD principles and GP-based optimization, the proposed framework offers a scalable and robust solution for identity authentication in complex digital ecosystems.
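For reference, the standard form into which such a threshold-selection problem must be cast is a geometric program over posynomials; the specific objective and constraints of BMMA-GPT are not reproduced here, so the symbols below are generic.

```latex
\begin{aligned}
\min_{x \succ 0} \quad & f_0(x) \\
\text{s.t.} \quad & f_i(x) \le 1, \quad i = 1, \dots, m, \\
& g_j(x) = 1, \quad j = 1, \dots, p,
\end{aligned}
\qquad
f_i(x) = \sum_{k} c_{ik}\, x_1^{a_{1k}} \cdots x_n^{a_{nk}},
\quad c_{ik} > 0,
```

where each $f_i$ is a posynomial and each $g_j$ a monomial; the substitution $x = e^{y}$ turns the program into a convex one, which is what makes GP reformulations computationally efficient.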
A multitude of interconnected risk events---ranging from regulatory changes to geopolitical tensions---can trigger ripple effects across firms. Identifying inter-firm risk relations is thus crucial for applications such as portfolio management and investment strategy. Traditionally, such assessments rely on expert judgment and manual analysis, which are subjective, labor-intensive, and difficult to scale. To address this, we propose a systematic method for extracting inter-firm risk relations using Form 10-K filings---authoritative, standardized financial documents---as our data source. Leveraging recent advances in natural language processing, our approach captures implicit and abstract risk connections through unsupervised fine-tuning based on chronological and lexical patterns in the filings. This enables the development of a domain-specific financial encoder with deeper contextual understanding and introduces a quantitative risk relation score for transparent, interpretable analysis. Extensive experiments demonstrate that our method outperforms strong baselines across multiple evaluation settings.
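A toy illustration of a quantitative risk relation score: assuming each firm's risk-factor section has already been embedded by the fine-tuned encoder (the paper's actual scoring function is not given here), cosine similarity between embeddings can be rescaled to [0, 1] so it reads as a relation score.

```python
import math

def risk_relation_score(vec_a, vec_b):
    """Cosine similarity between two firms' risk-factor embeddings,
    rescaled from [-1, 1] to [0, 1] to serve as a relation score."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(x * x for x in vec_a))
    norm_b = math.sqrt(sum(y * y for y in vec_b))
    cosine = dot / (norm_a * norm_b)
    return (cosine + 1.0) / 2.0
```

Identical embeddings score 1.0; orthogonal (unrelated) embeddings score 0.5; opposed embeddings score 0.0.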
In this paper, we examine the existence of the Rényi divergence between two time-invariant hidden Markov models with arbitrary positive initial distributions. Using a Markov chain representation of the probability distribution of the hidden Markov model and the eigenvalue of the associated Markovian operator, we obtain, under some regularity conditions, convergence of the Rényi divergence. Using this device, we also characterize the Rényi divergence and obtain the Kullback–Leibler divergence as a limit of the Rényi divergence. Several examples, including classical finite-state hidden Markov models, Markov switching models, and recurrent neural networks, are given for illustration. Moreover, we develop a non-Monte Carlo method that computes the Rényi divergence of two-state Markov switching models via the underlying invariant probability measure, which is characterized by a Fredholm integral equation.
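For reference, the Rényi divergence of order $\alpha$ between distributions $P$ and $Q$ (the standard definition, stated for generic measures with $P \ll Q$ rather than for the paper's HMM-specific construction) and its Kullback–Leibler limit:

```latex
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \int \left( \frac{dP}{dQ} \right)^{\alpha} dQ,
\qquad \alpha > 0,\ \alpha \neq 1,
```

```latex
\lim_{\alpha \to 1} D_\alpha(P \,\|\, Q)
  = D_{\mathrm{KL}}(P \,\|\, Q)
  = \int \log \frac{dP}{dQ} \, dP .
```

The second identity is the sense in which the Kullback–Leibler divergence arises as a limit of the Rényi divergence.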
The prevalence of hearing aids is increasing. However, optimizing their amplification remains challenging because traditional methods must integrate multiple separately tuned components. To address this, we present NeuroAMP, a novel deep neural network for end-to-end, personalized amplification in hearing aids. NeuroAMP takes spectral features and the listener's audiogram as inputs, and we explore four architectures: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Convolutional Recurrent Neural Network (CRNN), and Transformer. We also introduce Denoising NeuroAMP, an extension that integrates noise reduction with amplification for improved real-world performance. To enhance generalization, we employed a comprehensive data augmentation strategy during training on diverse speech (TIMIT, TMHINT) and music (Cadenza Challenge MUSIC) datasets. Evaluation using the Hearing Aid Speech Perception Index (HASPI), Hearing Aid Speech Quality Index (HASQI), and Hearing Aid Audio Quality Index (HAAQI) shows that the Transformer-based NeuroAMP achieves the best performance, with Spearman rank correlation coefficient (SRCC) scores of 0.9927 (HASQI) and 0.9905 (HASPI) on TIMIT, and 0.9738 (HAAQI) on the Cadenza dataset. Notably, the augmentation strategy maintains robust performance on unseen datasets (e.g., VoiceBank-DEMAND, MUSDB18-HQ). Furthermore, Denoising NeuroAMP outperforms both the conventional NAL-R+WDRC method and a two-stage baseline on the VoiceBank-DEMAND dataset, achieving a HASPI score of 0.90 and a HASQI score of 0.59. These results highlight the strong potential of NeuroAMP and Denoising NeuroAMP as a novel and effective framework for personalized hearing aid amplification.
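The conventional pipeline that NeuroAMP replaces can be caricatured in two steps: prescribe a per-band gain from the audiogram, then apply it in the spectral domain. The sketch below uses the classic half-gain rule of thumb as a deliberately crude linear stand-in; it is not the NAL-R+WDRC baseline itself.

```python
def prescribe_gain(audiogram_db, ratio=0.5):
    """Half-gain rule: prescribe gain (in dB) equal to half the hearing
    loss in each band -- a crude stand-in for NAL-R-style prescriptions."""
    return [hl * ratio for hl in audiogram_db]

def amplify(band_magnitudes, gains_db):
    """Apply per-band gains (in dB) to spectral magnitude values."""
    return [m * 10 ** (g / 20.0) for m, g in zip(band_magnitudes, gains_db)]
```

An end-to-end network like NeuroAMP instead learns this audiogram-to-gain mapping jointly with the spectral processing, rather than fixing it by rule.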
Tags play a critical role in enhancing product discoverability, optimizing search results, and enriching recommendation systems on e-commerce platforms. Despite recent advancements in large language models (LLMs), which have shown proficiency in processing and understanding textual information, their application to tag generation remains an under-explored yet complex challenge. To this end, we introduce a novel method for automatic product tagging that uses LLMs to create behavior-enhanced tags (BETags). Specifically, our approach begins by generating base tags with an LLM. These base tags are then refined into BETags by incorporating user behavior data. This aligns the tags with users' actual browsing and purchasing behavior, enhancing their accuracy and relevance to user preferences. By personalizing the base tags with user behavior data, BETags capture deeper behavioral insights, which are essential for understanding nuanced user interests and preferences in e-commerce environments. Moreover, since BETags are generated offline, they impose no real-time computational overhead and can be seamlessly integrated into downstream tasks commonly associated with recommendation systems and search optimization. Our evaluation of BETag across three datasets---Amazon (Scientific), MovieLens-1M, and FreshFood---shows that our approach significantly outperforms both human-annotated tags and other automated methods. These results highlight BETag as a scalable and efficient solution for personalized automated tagging, advancing e-commerce platforms by creating more tailored and engaging user experiences.
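A minimal sketch of the behavior-refinement step. The actual BETag procedure is more involved; here we simply assume the base tags come from an LLM and re-rank them by how often they appear on items the same users also interacted with.

```python
from collections import Counter

def behavior_enhance(base_tags, co_interacted_tags, top_k=3):
    """Re-rank an item's base (LLM-generated) tags by co-interaction
    frequency: a base tag scores higher when users who engaged with this
    item also engaged with other items carrying the same tag."""
    counts = Counter()
    for tags in co_interacted_tags:   # tag lists of co-interacted items
        for t in tags:
            if t in base_tags:
                counts[t] += 1
    # Stable sort: ties keep the LLM's original ordering.
    return sorted(base_tags, key=lambda t: -counts[t])[:top_k]
```

For example, if users who bought the item also engaged most with items tagged "b", that tag is promoted ahead of less behaviorally supported base tags.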
This paper tackles key challenges in Software-Defined Networking (SDN) by proposing a novel approach to optimizing resource allocation and dynamic priority assignment using OpenFlow's priority field. The proposed Lagrangian Relaxation (LR)-based algorithms significantly reduce network delay, achieving performance management with dynamic priority levels while demonstrating adaptability and efficiency in a sliced network. The algorithms' effectiveness was validated through computational experiments, highlighting strong potential for QoS management across diverse industries. Compared to the Same Priority baseline, the proposed methods (RPA, AP–1, and AP–2) exhibited notable performance improvements, particularly under strict delay constraints. For future applications, the study recommends extending the algorithms to larger networks and integrating them with artificial intelligence technologies for proactive resource optimization. The proposed methods also lay a solid foundation for addressing the unique demands of 6G networks, particularly base station mobility (Low-Earth Orbit, LEO), ultra-low latency, and multi-path transmission strategies.
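As one concrete instance of the queueing-theoretic delay models that underlie priority assignment, the mean queueing delay of class-$k$ traffic in a non-preemptive M/G/1 priority queue is given by Cobham's formula (stated here as standard background, not as the paper's exact model):

```latex
W_k = \frac{\sum_{i} \lambda_i\, \mathbb{E}[S_i^2] / 2}
           {\bigl(1 - \sigma_{k-1}\bigr)\bigl(1 - \sigma_k\bigr)},
\qquad
\sigma_k = \sum_{i=1}^{k} \rho_i
         = \sum_{i=1}^{k} \lambda_i\, \mathbb{E}[S_i],
\qquad \sigma_0 = 0,
```

with class 1 the highest priority, $\lambda_i$ the class-$i$ arrival rate, and $S_i$ its service time. Assigning a flow a higher priority (smaller $k$) shrinks the $\sigma$ terms in the denominator and hence its delay, which is exactly the lever a priority-assignment algorithm exploits.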