Self-powered intermittent systems waste considerable I/O energy because volatile I/O modules repeatedly reissue identical operations across power failures, and because they rely on an inefficient I/O stack originally developed for battery-powered systems. This paper presents the concept, design, and implementation of autonomous I/O, which can accumulatively and transparently complete I/O operations regardless of power stability. We define its two essential functionalities, separate the general I/O stack to make accumulatively completed I/O operations transparent to application tasks, and propose an access protocol that provides energy efficiency while remaining compatible with the general I/O stack. To evaluate its efficacy, we implement our design and conduct extensive experiments on a Texas Instruments device with commodity sensor and Wi-Fi modules. Experimental results show that autonomous I/O achieves 1.8 times the throughput of nonvolatile I/O when the power supply is relatively steady, while reducing the completion time of individual I/O operations by at least 34% when the power is relatively unstable.
Graphics-intensive mobile games place different and varying levels of demand on CPUs and GPUs. In contrast to the workload variability that characterizes games, the design of the energy governors employed by current mobile systems appears outdated. In this work, we review the energy-saving mechanism implemented in an Android system under graphics-intensive gaming workloads from three perspectives: user perception, application status, and the interplay between the CPU and GPU. We observe information gaps in the current system that may result in unnecessary energy waste. To resolve this problem, we propose an online user-centric CPU-GPU governing framework. To bridge the identified information gaps, we classify rendered game frames as redundant or changing to capture user demand, categorize an application into GPU-sensitive and GPU-insensitive phases to understand application demand, and determine the frequency-scaling intents of the CPU and GPU to capture processor demand. In response to the measured demand, the framework employs a required-workload estimator, a unified policy selector, and a frequency-scaling intent communicator to save energy. The proposed framework was implemented on an LG Nexus 5X smartphone, and extensive experiments with real-world 3D gaming applications were conducted. According to the experimental results, for an application with low interactivity and infrequent phase changes, the proposed framework reduces energy consumption by 25.3% and 39% compared with our previous work and the Android governors, respectively, while maintaining user experience.
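To make the governing loop concrete, the following minimal Python sketch shows how the three signals might be combined; the class names, thresholds, and frequency tables are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a user-centric CPU-GPU governing loop; all
# thresholds and frequency tables are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Sample:
    frame_delta: float   # fraction of frame content changed vs. last frame
    cpu_util: float      # CPU utilization in [0, 1]
    gpu_util: float      # GPU utilization in [0, 1]

def classify_frame(s: Sample, delta_thresh: float = 0.02) -> str:
    """A redundant frame adds no perceivable change, so rendering effort
    (and hence frequency) can be reduced without hurting user experience."""
    return "redundant" if s.frame_delta < delta_thresh else "changing"

def classify_phase(s: Sample, ratio_thresh: float = 1.2) -> str:
    """GPU-sensitive phases are dominated by rendering load; otherwise the
    CPU (game logic, physics) is the bottleneck."""
    return "gpu_sensitive" if s.gpu_util > ratio_thresh * s.cpu_util else "gpu_insensitive"

def select_frequencies(s: Sample, cpu_freqs, gpu_freqs):
    """Unified policy: resolve the CPU and GPU scaling intents jointly
    instead of letting two independent governors fight each other."""
    if classify_frame(s) == "redundant":
        return cpu_freqs[0], gpu_freqs[0]                        # nothing new to show
    if classify_phase(s) == "gpu_sensitive":
        return cpu_freqs[len(cpu_freqs) // 2], gpu_freqs[-1]     # boost GPU, cap CPU
    return cpu_freqs[-1], gpu_freqs[len(gpu_freqs) // 2]         # boost CPU, cap GPU

if __name__ == "__main__":
    cpu_freqs = [600, 1200, 1800]   # MHz, illustrative levels
    gpu_freqs = [180, 400, 600]
    print(select_frequencies(Sample(0.30, 0.35, 0.80), cpu_freqs, gpu_freqs))
```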
Vehicular fog computing (VFC) is a promising approach to provide ultra-low-latency services to vehicles and end users by extending fog computing to conventional vehicular networks. Parked vehicle assistance (PVA), a critical technique in VFC, can be integrated with smart parking to exploit its full potential. In this paper, we propose a smart VFC system that combines PVA and smart parking. A VFC-aware parking reservation auction guides on-the-move vehicles to available parking places with less effort while exploiting the fog capability of parked vehicles to assist delay-sensitive computing services, with monetary rewards compensating their service cost. The proposed allocation rule maximizes the aggregate utility of the smart vehicles, and the proposed payment rule guarantees incentive compatibility, individual rationality, and budget balance. We further provide an observation stage with dynamic offload-pricing updates to improve the offload efficiency and the profit of the fog system. Simulation results confirm a win–win performance enhancement for the fog node controller, the smart vehicles, and the parking places.
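The allocation step of such an auction can be stated, in an illustrative form that is not the paper's exact rule, as an assignment problem in which each vehicle i reports a valuation b_{ij} for parking place j:

```latex
% Illustrative allocation formulation (notation assumed, not the paper's):
% match vehicles to parking places to maximize aggregate reported utility.
\begin{align*}
\max_{x_{ij}\in\{0,1\}} \quad & \sum_{i}\sum_{j} b_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j} x_{ij} \le 1 \quad \forall i, \qquad
                    \sum_{i} x_{ij} \le 1 \quad \forall j.
\end{align*}
```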
Perceptual similarity measurement allows mobile applications to eliminate unnecessary computations without compromising the visual experience. Existing pixel-wise measures incur significant overhead as display resolutions and frame rates increase. This paper presents an ultra-lightweight similarity measure called LSIM, which assesses the similarity between frames based on the transformation matrices of graphics objects. To evaluate its efficacy, we integrate LSIM into the Open Graphics Library (OpenGL) and conduct experiments on an Android smartphone with various mobile 3D games. The results show that LSIM is highly correlated with SSIM, the most widely used pixel-wise measure, yet is three to five orders of magnitude faster. We also apply LSIM to a CPU-GPU governor to suppress the rendering of similar frames, further reducing computational energy consumption by up to 27.3% while maintaining satisfactory visual quality.
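The following Python sketch conveys the spirit of a transformation-matrix-based measure under stated assumptions; the actual LSIM formula and its OpenGL integration are more involved.

```python
# Hedged sketch of a transformation-matrix-based frame similarity measure;
# the tolerance and scoring rule are assumptions, not LSIM's exact formula.
import numpy as np

def matrix_similarity(prev_mats, curr_mats, tol=1e-3):
    """Compare per-object 4x4 transformation matrices between consecutive
    frames. If every object's transform is (nearly) unchanged, the rendered
    frames are deemed similar without ever touching pixel data."""
    if len(prev_mats) != len(curr_mats):
        return 0.0                    # objects added/removed: treat as dissimilar
    if not prev_mats:
        return 1.0                    # both frames empty
    unchanged = sum(
        np.linalg.norm(p - c) < tol for p, c in zip(prev_mats, curr_mats)
    )
    return unchanged / len(prev_mats)

identity = np.eye(4)
moved = identity.copy()
moved[0, 3] = 0.5                     # translate one object along the x-axis
print(matrix_similarity([identity, identity], [identity, moved]))   # 0.5
```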
Self-powered intermittent systems enable accumulative execution in unstable power environments, where checkpointing is often adopted to achieve data consistency and system recovery across power failures. However, existing approaches based on the checkpointing paradigm normally require system suspension and logging at runtime. This paper presents a design that enables failure-resilient intermittently powered systems without runtime checkpointing. By leveraging the characteristics of data accessed in hybrid memory, our design enforces the consistency and serializability of concurrent data access while maximizing computation progress, and allows instant system recovery after power resumption. We integrated the design into FreeRTOS running on a Texas Instruments device. Experimental results show that our design achieves up to 11.8 times the computation progress of checkpointing-based approaches, while reducing the recovery time by nearly 90%.
For rate optimization in interference-limited networks, improper Gaussian signaling has been shown to outperform conventional proper Gaussian signaling. In this work, we study a weighted sum-rate maximization problem with improper Gaussian signaling for the multiple-input multiple-output interference broadcast channel (MIMO-IBC). To solve this nonconvex, NP-hard problem, we propose an effective algorithm that optimizes the covariance and pseudo-covariance matrices separately. For the covariance optimization, a weighted minimum mean square error (WMMSE) algorithm is adopted; for the pseudo-covariance optimization, an alternating optimization (AO) algorithm is proposed, which guarantees convergence to a stationary solution and ensures a sum-rate improvement over proper Gaussian signaling. An alternating direction method of multipliers (ADMM)-based multi-agent distributed algorithm is proposed to solve an AO subproblem to global optimality in a parallel and scalable fashion. The proposed scheme exhibits favorable convergence, optimality, and complexity properties for future large-scale networks. Simulation results demonstrate the superior sum-rate performance of the proposed algorithm compared with existing schemes employing proper as well as improper Gaussian signaling under various network configurations.
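The general shape of such a problem can be written as follows, with notation assumed for illustration: user k's transmit signal has covariance C_k = E[x_k x_k^H] and pseudo-covariance \tilde{C}_k = E[x_k x_k^T], and proper signaling is the special case \tilde{C}_k = 0.

```latex
% Illustrative problem shape (not the paper's exact formulation): maximize
% the weighted sum rate over covariance and pseudo-covariance matrices,
% subject to per-user power budgets and valid augmented covariances.
\begin{align*}
\max_{\{\mathbf{C}_k,\,\widetilde{\mathbf{C}}_k\}} \quad
  & \sum_{k} w_k\, R_k\big(\{\mathbf{C}_k,\widetilde{\mathbf{C}}_k\}\big) \\
\text{s.t.} \quad
  & \operatorname{tr}(\mathbf{C}_k) \le P_k \quad \forall k, \\
  & \begin{bmatrix} \mathbf{C}_k & \widetilde{\mathbf{C}}_k \\
      \widetilde{\mathbf{C}}_k^{*} & \mathbf{C}_k^{*} \end{bmatrix}
    \succeq 0 \quad \forall k.
\end{align*}
```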
In this paper, multi-stream transmission in interference networks aided by multiple amplify-and-forward (AF) relays in the presence of direct links is considered. The objective is to minimize the sum power of the transmitters and relays through beamforming optimization under per-stream signal-to-interference-plus-noise-ratio (SINR) constraints. The transmit beamforming problem is a well-known nonconvex quadratically constrained quadratic program (QCQP) that is NP-hard to solve. After semidefinite relaxation (SDR), the problem can be solved optimally via an alternating direction method of multipliers (ADMM) algorithm suitable for distributed implementation. Analytical and extensive numerical analyses demonstrate that the proposed ADMM solution converges to the optimal centralized solution and outperforms existing solutions in convergence rate, computational complexity, and message-exchange load. Furthermore, by approximating the SINR at the relay side, a distributed joint transmit and relay beamforming optimization is also proposed that further improves the total power saving at the cost of increased complexity.
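A simplified form of the underlying QCQP, with assumed notation and without the relay amplification terms, looks as follows; w_s is the beamformer for stream s, h_s its effective channel, and gamma_s its SINR target.

```latex
% Illustrative sum-power minimization sketch (direct links only; the
% paper's full problem additionally includes relay power and AF chains).
\begin{align*}
\min_{\{\mathbf{w}_s\}} \quad & \sum_{s} \mathbf{w}_s^{H}\mathbf{w}_s \\
\text{s.t.} \quad &
\frac{|\mathbf{h}_s^{H}\mathbf{w}_s|^2}
     {\sum_{t \ne s} |\mathbf{h}_s^{H}\mathbf{w}_t|^2 + \sigma^2}
\ \ge\ \gamma_s \quad \forall s.
\end{align*}
```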
Most existing or currently developing Internet of Things (IoT) communication standards assume that IoT services require only low-data-rate transmission and can therefore be supported by limited resources such as narrow-band channels. This assumption rules out IoT services with bursty traffic, critical missions, and low-latency requirements. In this paper, we propose to utilize idle devices in mission-critical IoT networks to boost the transmission data rate for critical tasks through multiple concurrent transmissions. This approach virtually extends existing narrow-band IoT protocols beyond their bandwidth limitation to provide low-latency services for critical tasks. Within this approach, we propose the task-balance method and the first-link descending order to determine the relay order and data partition for a given relay set. We theoretically prove that, given a sufficient number of available channels, the proposed algorithms derive in polynomial time the optimal relay configuration that minimizes the uploading latency in the single-source scenario. We also propose a greedy algorithm that approximates the optimal solution with a performance lower bound of 1/2 in general scenarios. Simulation results show that the proposed approach can reduce the latency of critical tasks by up to 76% compared with traditional approaches.
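The greedy flavor of such relay selection can be sketched as follows; the latency model with a per-relay coordination overhead is a hypothetical stand-in, and the paper's task-balance and first-link ordering rules are richer.

```python
# Hedged sketch of greedy relay selection under an assumed latency model:
# parallel upload across chosen relays plus a per-relay coordination cost.
def greedy_relay_selection(task_bits, relay_rates, num_channels, overhead=0.5):
    chosen = []
    remaining = sorted(relay_rates, reverse=True)   # fastest links first

    def latency(relays):
        if not relays:
            return float("inf")
        return overhead * len(relays) + task_bits / sum(relays)

    while remaining and len(chosen) < num_channels:
        candidate = chosen + [remaining[0]]
        if latency(candidate) < latency(chosen):
            chosen.append(remaining.pop(0))
        else:
            break   # one more relay's coordination cost outweighs its rate gain
    return chosen, latency(chosen)

relays, t = greedy_relay_selection(task_bits=8e6,
                                   relay_rates=[2e6, 1.5e6, 0.5e6],
                                   num_channels=3)
print(relays, round(t, 3))   # picks the two fast relays; the slow one hurts
```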
Wireless body area networks (WBANs) have recently emerged to provide health monitoring for chronic patients. In a WBAN, the patient's smartphone is an appropriate sink to help forward the sensed data to back-end servers. Through a real-world case study, we observe that temporary disconnections between sensors and the associated smartphone can occur frequently due to postural changes, causing a significant amount of data to be lost forever. In this paper, we propose a scheme that parasitizes the data in surrounding Wi-Fi networks whenever a temporary disconnection occurs. Specifically, we model data parasitizing as an optimization problem with the objective of maximizing the system lifetime without any data loss. We then propose an optimal offline algorithm to solve the problem, as well as an online algorithm suitable for practical implementation. We have also implemented an Arduino-based prototype system with the online algorithm as its underlying technique. To evaluate our scheme, we conduct a series of experiments with the prototype system in controlled and real-world environments. The results show that the system lifetime is prolonged 100-fold, and it could be further doubled if the health monitoring application tolerates a few packet losses.
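One plausible shape for the online decision during a disconnection is sketched below; the buffer-overflow trigger and energy figures are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: buffer locally while space remains; only when the buffer
# would overflow, spend radio energy to parasitize the oldest sample into
# a surrounding Wi-Fi network, so that no data is ever dropped.
def on_new_sample(buffer, sample, buffer_cap, battery_mj, tx_cost_mj):
    buffer.append(sample)
    if len(buffer) > buffer_cap:
        victim = buffer.pop(0)       # oldest sample would otherwise be lost
        battery_mj -= tx_cost_mj     # energy paid to park `victim` in a nearby AP
        # ... transmit `victim` to the surrounding Wi-Fi network here ...
    return buffer, battery_mj

buf, energy = [], 1000.0
for t in range(10):
    buf, energy = on_new_sample(buf, ("sample", t), buffer_cap=4,
                                battery_mj=energy, tx_cost_mj=2.5)
print(len(buf), energy)   # 4 samples still buffered, 985.0 mJ remaining
```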
Future mobile systems will increasingly feature advanced organic light-emitting diode (OLED) displays, whose power consumption is highly dependent on the displayed image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for reduced power consumption, while some attempt to enhance image quality by employing a compound objective function. In this paper, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefit and cost of potential image enhancement and power reduction, and we introduce algorithms that guarantee the transformation of images into quality-enhanced, power-saving versions. The win-win scheme is then extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized in a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database, and on a smartphone with real-world videos, are very encouraging and provide valuable insights for future research and practice.
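The win-win acceptance test can be illustrated with the minimal sketch below; the power proxy only assumes that OLED power grows with subpixel intensity (a monotonicity assumption rather than an exact panel model), and the quality-gain input stands in for the paper's benefit metric.

```python
# Hedged sketch of a win-win acceptance test; the benefit/cost metrics in
# the paper are more sophisticated than this illustrative version.
import numpy as np

def power_proxy(img):
    """Relative OLED power estimate: brighter subpixels draw more current
    (monotonicity assumption, not an exact power model)."""
    return float(np.asarray(img, dtype=np.float64).sum())

def is_win_win(original, candidate, quality_gain):
    """Accept a transformation only if quality strictly improves AND the
    (proxy) power consumption strictly drops."""
    return quality_gain > 0 and power_proxy(candidate) < power_proxy(original)

original = np.full((2, 2, 3), 200, dtype=np.uint8)    # bright test image
candidate = (original * 0.85).astype(np.uint8)        # dimmer variant
print(is_win_win(original, candidate, quality_gain=0.02))   # True
```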