A multitude of interconnected risk events---ranging from regulatory changes to geopolitical tensions---can trigger ripple effects across firms. Identifying inter-firm risk relations is thus crucial for applications like portfolio management and investment strategy. Traditionally, such assessments rely on expert judgment and manual analysis, which are subjective, labor-intensive, and difficult to scale. To address this, we propose a systematic method for extracting inter-firm risk relations using Form 10-K filings---authoritative, standardized financial documents---as our data source. Leveraging recent advances in natural language processing, our approach captures implicit and abstract risk connections through unsupervised fine-tuning based on chronological and lexical patterns in the filings. This enables the development of a domain-specific financial encoder with a deeper contextual understanding and introduces a quantitative risk relation score for transparent, interpretable analysis. Extensive experiments demonstrate that our method outperforms strong baselines across multiple evaluation settings.
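As a rough sketch of how such a score might be computed, the snippet below turns encoder embeddings of two firms' 10-K risk-factor sections into a pairwise score; the encoder call is a hypothetical stand-in, not the paper's actual fine-tuned model.

```python
import numpy as np

def risk_relation_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two firms' risk-section embeddings,
    rescaled to [0, 1] so that higher values suggest a stronger risk relation."""
    cos = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return 0.5 * (cos + 1.0)

# Hypothetical usage with a domain-specific financial encoder:
# emb_a = encoder.encode(firm_a_risk_section)  # 'encoder' is assumed
# emb_b = encoder.encode(firm_b_risk_section)
# print(risk_relation_score(emb_a, emb_b))
```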
The fusion of tiny energy harvesting devices with deep neural networks (DNNs) optimized for intermittent execution is vital for sustainable intelligent applications at the edge. However, current intermittent-aware neural architecture search (NAS) frameworks overlook the inherent intermittency management overhead (IMO) of DNNs, leading to under-performance upon deployment. Moreover, we observe that straightforward IMO minimization within NAS may degrade solution accuracy. This work explores the relationship between DNN architectural characteristics, IMO, and accuracy, uncovering varying sensitivity toward IMO across different DNN characteristics. Inspired by these insights, we present two guidelines for leveraging IMO sensitivity in NAS. First, the overall architecture search space can be reduced by excluding parameters with low IMO sensitivity, and second, the search can focus primarily on network blocks with high IMO sensitivity, facilitating the discovery of highly accurate networks with low IMO. We incorporate these guidelines into TiNAS, which integrates cutting-edge tiny NAS and intermittent-aware NAS frameworks. Evaluations are conducted across various datasets and latency requirements, as well as deployment experiments on a Texas Instruments device under different intermittent power profiles. Compared to two variants, one minimizing IMO and the other disregarding it, TiNAS achieves up to 38% higher accuracy and 33% lower IMO, respectively, with greater improvements for larger datasets. Its deployed solutions also achieve up to a 1.33 times inference speedup, especially under fluctuating power conditions.
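To make the two guidelines concrete, here is a minimal sketch of pruning and re-weighting a NAS search space by IMO sensitivity; the block names, sensitivity values, and threshold are invented for illustration and are not taken from TiNAS itself.

```python
# Hypothetical per-block IMO sensitivity (accuracy change per unit of
# intermittency management overhead); all values are illustrative.
imo_sensitivity = {
    "stem_conv": 0.02,  # low sensitivity: fix its parameters, drop from search
    "block_1":   0.45,
    "block_2":   0.71,  # high sensitivity: concentrate the search here
    "head":      0.05,
}
THRESHOLD = 0.10  # illustrative cutoff

# Guideline 1: shrink the search space by excluding low-sensitivity parts.
search_space = {b: s for b, s in imo_sensitivity.items() if s >= THRESHOLD}

# Guideline 2: allocate search effort (e.g., sampling probability)
# in proportion to each remaining block's IMO sensitivity.
total = sum(search_space.values())
search_budget = {b: s / total for b, s in search_space.items()}
print(search_budget)  # {'block_1': 0.39, 'block_2': 0.61} (approximately)
```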
Tiny battery-free devices running deep neural networks (DNNs) embody intermittent TinyML, a paradigm at the intersection of intermittent computing and deep learning, bringing sustainable intelligence to the extreme edge. This paper, as an overview of a special session at Embedded Systems Week (ESWEEK) 2025, presents four tales from diverse research backgrounds, sharing experiences in addressing unique challenges of efficient and reliable DNN inference despite the intermittent nature of ambient power. The first explores enhancing inference engines for efficient progress accumulation in hardware-accelerated intermittent inference and designing networks tailored for such execution. The second investigates computationally light, adaptive algorithms for faster, energy-efficient inference, and emerging computing-in-memory architectures for power failure resiliency. The third addresses battery-free networking, focusing on timely neighbor discovery and maintaining synchronization despite spatio-temporal energy dynamics across nodes. The fourth leverages modern nonvolatile memory fault behavior and DNN robustness to save energy without significant accuracy loss, with applicability to intermittent inference on nano-satellites. Collectively, these early efforts advance intermittent TinyML research and promote future cross-domain collaboration to tackle open challenges.
Guaranteeing reliable deep neural network (DNN) inference despite intermittent power is the cornerstone of enabling intelligent systems in energy-harvesting environments. Existing intermittent inference approaches support static neural networks with deterministic execution characteristics, accumulating progress across power cycles. However, dynamic neural networks adapt their structures at runtime. We observe that because intermittent inference approaches are unaware of this non-deterministic execution behavior, they suffer from incorrect progress recovery, degrading inference accuracy and performance. This work proposes non-deterministic inference progress accumulation to enable dynamic neural network inference on intermittent systems. Our middleware, NodPA, realizes this methodology by strategically selecting additional progress information to capture the non-determinism of the power-interrupted computation while preserving only the changed portions of the progress information to maintain low runtime overhead. Evaluations are conducted on a Texas Instruments device with both static and dynamic neural networks under time-varying power sources. Compared to intermittent inference approaches reliant on determinism, NodPA is less prone to inference non-termination and achieves an average inference speedup of 1.57 times without compromising accuracy, with greater improvements for highly dynamic networks under weaker power.
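A minimal sketch of the idea, under assumed names (NodPA's actual on-device data layout is not described at this level): alongside the usual progress counter, the checkpoint records the runtime branch decisions that make execution non-deterministic, and only the portions that changed are rewritten.

```python
# Illustrative only: 'nvm' stands in for nonvolatile memory.
nvm = {"layer_idx": 0, "branch_trace": []}

def checkpoint(layer_idx: int, branch_decision=None):
    """Persist inference progress plus the runtime decision that selected
    the current dynamic path; write only the fields that changed."""
    if nvm["layer_idx"] != layer_idx:
        nvm["layer_idx"] = layer_idx
    if branch_decision is not None:
        nvm["branch_trace"].append(branch_decision)

def resume():
    """After a power failure, replay recorded branch decisions so recovery
    follows the same dynamic path instead of assuming a fixed network."""
    return nvm["layer_idx"], list(nvm["branch_trace"])
```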
In this paper, we examine the existence of the Rényi divergence between two time-invariant hidden Markov models with arbitrary positive initial distributions. By making use of a Markov chain representation of the probability distribution for the hidden Markov model and the eigenvalue of the associated Markovian operator, we obtain, under some regularity conditions, convergence of the Rényi divergence. Using this device, we also characterize the Rényi divergence and obtain the Kullback–Leibler divergence as a limiting case of the Rényi divergence. Several examples, including classical finite-state hidden Markov models, Markov switching models, and recurrent neural networks, are given for illustration. Moreover, we develop a non-Monte Carlo method that computes the Rényi divergence of two-state Markov switching models via the underlying invariant probability measure, which is characterized by a Fredholm integral equation.
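For reference, the order-α Rényi divergence between probability measures P and Q (with P absolutely continuous with respect to Q), and its Kullback–Leibler limit, are:

```latex
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \int \left( \frac{dP}{dQ} \right)^{\alpha} dQ,
  \qquad \alpha > 0,\ \alpha \neq 1,
\qquad\text{with}\qquad
\lim_{\alpha \to 1} D_\alpha(P \,\|\, Q) = D_{\mathrm{KL}}(P \,\|\, Q).
```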
The prevalence of hearing aids is increasing. However, optimizing their amplification remains challenging due to the complexity of integrating multiple components in traditional methods. To address this, we present NeuroAMP, a novel deep neural network for end-to-end, personalized amplification in hearing aids. NeuroAMP leverages spectral features and the listener’s audiogram as inputs, and we explore four architectures: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Convolutional Recurrent Neural Network (CRNN), and Transformer. We also introduce Denoising NeuroAMP, an extension that integrates noise reduction with amplification for improved real-world performance. To enhance generalization, we employed a comprehensive data augmentation strategy during training on diverse speech (TIMIT, TMHINT) and music (Cadenza Challenge MUSIC) datasets. Evaluation using the Hearing Aid Speech Perception Index (HASPI), Hearing Aid Speech Quality Index (HASQI), and Hearing Aid Audio Quality Index (HAAQI) shows that the Transformer-based NeuroAMP achieves the best performance, with SRCC scores of 0.9927 (HASQI) and 0.9905 (HASPI) on TIMIT, and 0.9738 (HAAQI) on the Cadenza dataset. Notably, the augmentation strategy maintains robust performance on unseen datasets (e.g., VoiceBank-DEMAND, MUSDB18-HQ). Furthermore, Denoising NeuroAMP outperforms both the conventional NAL-R+WDRC method and a two-stage baseline on the VoiceBank-DEMAND dataset, achieving HASPI and HASQI scores of 0.90 and 0.59, respectively. These results highlight the strong potential of NeuroAMP and Denoising NeuroAMP as a novel and effective framework for personalized hearing aid amplification.
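As a hedged sketch of the input conditioning (not the paper's exact architecture), the toy PyTorch module below concatenates per-frame spectral features with an embedding of the listener's audiogram and predicts per-band amplification gains; all dimensions and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ToyNeuroAMP(nn.Module):
    """Illustrative only: audiogram-conditioned per-band gain prediction."""
    def __init__(self, n_bands=64, audiogram_dim=8, hidden=128):
        super().__init__()
        self.audiogram_fc = nn.Linear(audiogram_dim, hidden)
        self.rnn = nn.LSTM(n_bands + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_bands)

    def forward(self, spec, audiogram):
        # spec: (batch, frames, n_bands); audiogram: (batch, audiogram_dim)
        a = self.audiogram_fc(audiogram)                 # (batch, hidden)
        a = a.unsqueeze(1).expand(-1, spec.size(1), -1)  # tile over frames
        h, _ = self.rnn(torch.cat([spec, a], dim=-1))
        return self.out(h)                               # per-band gains

gains = ToyNeuroAMP()(torch.randn(2, 100, 64), torch.randn(2, 8))
```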
Tags play a critical role in enhancing product discoverability, optimizing search results, and enriching recommendation systems on e-commerce platforms. Despite recent advancements in large language models (LLMs), which have shown proficiency in processing and understanding textual information, their application to tag generation remains an under-explored yet complex challenge. To this end, we introduce a novel method for automatic product tagging that uses LLMs to create behavior-enhanced tags (BETags). Specifically, our approach begins by generating base tags using an LLM. These base tags are then refined into BETags by incorporating user behavior data. This aligns the tags with users' actual browsing and purchasing behavior, enhancing their accuracy and relevance to user preferences. By personalizing the base tags with user behavior data, BETags capture deeper behavioral insights, which are essential for understanding nuanced user interests and preferences in e-commerce environments. Moreover, since BETags are generated offline, they impose no real-time computational overhead and can be seamlessly integrated into downstream tasks commonly associated with recommendation systems and search optimization. Our evaluation of BETag across three datasets---Amazon (Scientific), MovieLens-1M, and FreshFood---shows that our approach significantly outperforms both human-annotated tags and other automated methods. These results highlight BETag as a scalable and efficient solution for personalized automated tagging, advancing e-commerce platforms by creating more tailored and engaging user experiences.
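A minimal sketch of the two-step pipeline under stated assumptions: the LLM call is left as a placeholder, and behavior refinement is reduced to re-ranking an item's base tags by how often they appear on items co-interacted with it; the actual BETag refinement is richer than this.

```python
from collections import Counter

def generate_base_tags(product_description: str) -> list[str]:
    """Placeholder for the LLM call that proposes base tags for a product."""
    raise NotImplementedError("replace with an actual LLM prompt")

def refine_to_betags(base_tags: dict[str, list[str]],
                     histories: list[list[str]]) -> dict[str, list[str]]:
    """Offline step: re-rank each item's base tags by how often the same
    tags occur on items users browsed or purchased alongside it."""
    co = {item: Counter() for item in base_tags}
    for history in histories:
        for item in history:
            if item not in base_tags:
                continue
            for other in history:
                if other == item or other not in base_tags:
                    continue
                for tag in base_tags[other]:
                    if tag in base_tags[item]:
                        co[item][tag] += 1
    return {item: [t for t, _ in co[item].most_common()] or tags
            for item, tags in base_tags.items()}
```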
This paper tackles key challenges in Software-Defined Networking (SDN) by proposing a novel approach for optimizing resource allocation and dynamic priority assignment using OpenFlow's priority field. The proposed Lagrangian relaxation (LR)-based algorithms significantly reduce network delay, achieving effective performance management with dynamic priority levels while demonstrating adaptability and efficiency in a sliced network. The algorithms' effectiveness was validated through computational experiments, highlighting their strong potential for QoS management across diverse industries. Compared to the Same Priority baseline, the proposed methods (RPA, AP-1, and AP-2) exhibited notable performance improvements, particularly under strict delay constraints. For future applications, the study recommends extending the algorithms to larger networks and integrating them with artificial intelligence technologies for proactive resource optimization. Additionally, the proposed methods lay a solid foundation for addressing the unique demands of 6G networks, particularly in areas such as base station mobility (Low-Earth Orbit, LEO), ultra-low latency, and multi-path transmission strategies.
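The abstract does not spell out the LR formulation, but the underlying machinery is standard: dualize the coupling delay constraints with Lagrange multipliers and update the multipliers by subgradient ascent. The sketch below shows that generic pattern; the subproblem solver is a placeholder, not the paper's RPA/AP procedures.

```python
def lagrangian_relaxation(solve_subproblem, constraint_violation,
                          n_iters=100, step0=1.0):
    """Generic subgradient loop for one relaxed constraint g(x) <= 0:
    maximize the dual L(lam) = min_x [ f(x) + lam * g(x) ] over lam >= 0."""
    lam, best = 0.0, None
    for k in range(1, n_iters + 1):
        x = solve_subproblem(lam)      # easy once the constraint is dualized
        g = constraint_violation(x)    # g(x) is a subgradient of the dual
        lam = max(0.0, lam + (step0 / k) * g)  # diminishing step size
        if g <= 0 and best is None:    # keep the first feasible solution
            best = x
    return best, lam
```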
Low Earth orbit (LEO) satellite-enabled orthogonal frequency division multiple access (OFDMA) systems will play a pivotal role in future integrated satellite-terrestrial networks to realize ubiquitous high-throughput communication. However, the high mobility of LEO satellites and the utilization of Ku-Ka and millimeter wave (mmWave) bands introduce wide-range Doppler shifts, which are especially detrimental to OFDMA-based systems. Existing Doppler shift compensation methods are limited by the requirement for prior user location information and/or the high computational complexity of searching across broad Doppler shift ranges. In this work, we propose a multi-stage Doppler shift compensation method for wide-range Doppler shifts in downlink LEO satellite OFDMA systems over Ku-Ka to mmWave bands. The proposed method consists of three stages: incorporating the phase-differential (PD) operation into the extended Kalman filter (EKF) to widen the estimation range, enhancing compensation using a repetition training sequence, and utilizing the cyclic prefix (CP) for fine estimation. Simulation results demonstrate the proposed method's effectiveness in handling Doppler shifts in LEO satellite communication (SatCom) over different channels and frequency bands. Moreover, the proposed method attains the maximum estimation range and exhibits high accuracy with low complexity, irrespective of the Doppler shift range, making it an effective, practical, and easily implementable solution for LEO satellite communication.
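For the cyclic-prefix fine-estimation stage, the textbook CP-correlation estimator (in the spirit of van de Beek et al.) recovers the fractional carrier frequency offset from the phase of the correlation between the CP and the symbol tail it copies. The sketch below implements that standard estimator, not the paper's full multi-stage pipeline.

```python
import numpy as np

def cp_cfo_estimate(rx: np.ndarray, n_fft: int, n_cp: int) -> float:
    """Fractional CFO (in units of subcarrier spacing) of one OFDM symbol,
    estimated from the CP's correlation with the samples it duplicates."""
    cp = rx[:n_cp]                   # cyclic prefix
    tail = rx[n_fft:n_fft + n_cp]    # the tail the CP was copied from
    corr = np.vdot(cp, tail)         # sum(conj(cp) * tail) = |.| * e^{j*2*pi*eps}
    return float(np.angle(corr)) / (2 * np.pi)
```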