In this paper, we investigate the wireless network deployment problem, which seeks the best deployment of a given limited number of wireless routers. We find that many network deployment objectives, such as maximizing the number of covered users or areas, or the total network throughput, can be modeled as submodular set functions. Specifically, given a set of routers, the goal is to find a set of locations S, each equipped with a router, such that S maximizes a predefined submodular set function. However, this deployment problem is more difficult than traditional submodular maximization problems, e.g., the maximum coverage problem, because it requires all the deployed routers to form a connected network. In addition, deploying a router at different locations may incur different costs. To address these challenges, this paper introduces two approximation algorithms, one for homogeneous deployment costs and the other for heterogeneous deployment costs. Our simulations, using synthetic data and real census traces from Taipei, show that the proposed algorithms outperform other heuristics.
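To make the submodular-maximization viewpoint concrete, the sketch below shows the classic greedy algorithm for maximizing a coverage-style submodular function under a cardinality budget. It is only an illustration of the underlying objective, not the paper's connectivity-aware or cost-aware algorithms; the site and user data are hypothetical.

```python
# Minimal greedy sketch for maximizing a submodular coverage function under a
# cardinality budget (the classic (1 - 1/e) greedy). It ignores the paper's
# connectivity constraint and deployment costs; names and data are made up.

def greedy_coverage(candidate_sites, covered_users, budget):
    """candidate_sites: list of site ids.
    covered_users: dict mapping site id -> set of user ids it covers.
    budget: number of routers to place."""
    chosen, covered = [], set()
    for _ in range(budget):
        # Pick the site with the largest marginal coverage gain.
        best_site, best_gain = None, 0
        for s in candidate_sites:
            if s in chosen:
                continue
            gain = len(covered_users[s] - covered)
            if gain > best_gain:
                best_site, best_gain = s, gain
        if best_site is None:   # no site adds new coverage
            break
        chosen.append(best_site)
        covered |= covered_users[best_site]
    return chosen, covered

# Example: 4 candidate sites, 6 users, budget of 2 routers.
sites = ["a", "b", "c", "d"]
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
print(greedy_coverage(sites, cover, budget=2))   # -> (['a', 'c'], {1, 2, 3, 4, 5, 6})
```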
Learning-based approaches for image super-resolution (SR) have attracted considerable attention from researchers in recent years. In this paper, we present a novel self-learning approach for SR. In our proposed framework, we advance support vector regression (SVR) with image sparse representation, which offers excellent generalization in modeling the relationship between images and their associated SR versions. Unlike most prior SR methods, our framework does not require the collection of low- and high-resolution training image data in advance, nor do we assume the reoccurrence (or self-similarity) of image patches within an image or across image scales. With theoretical support from Bayes decision theory, we verify that our SR framework learns and selects the optimal SVR model when producing an SR image, which results in the minimum SR reconstruction error. We evaluate our method on a variety of images and obtain very promising SR results. In most cases, our method quantitatively and qualitatively outperforms bicubic interpolation and state-of-the-art learning-based SR approaches.
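As a rough illustration of the regression component only, the toy sketch below trains an SVR model to predict a high-resolution pixel value from a flattened low-resolution patch. It is a generic example of SVR-based SR with synthetic data; it does not include the paper's sparse representation, self-learning, or Bayesian model selection.

```python
# Toy sketch of support vector regression (SVR) for super-resolution:
# regress the high-resolution center value of a patch from the flattened
# low-resolution patch around it. Synthetic data, generic illustration only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patches, patch = 500, 5                 # hypothetical training-set size / patch width

# Fake training data: flattened 5x5 LR patches and a noisy "HR" center value.
X = rng.random((n_patches, patch * patch))
y = X[:, (patch * patch) // 2] + 0.05 * rng.standard_normal(n_patches)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# Predict the HR center value of a new LR patch.
new_patch = rng.random((1, patch * patch))
print(model.predict(new_patch))
```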
Adapting the modulation and transmission bit-rates for video multicast in a multi-rate wireless network is challenging because of network dynamics, variable video bit-rates, and heterogeneous clients who may expect differentiated video qualities. Prior work on leader-based schemes selects the transmission bit-rate that provides reliable transmission for the node with the worst channel condition. However, this may penalize other nodes that could achieve higher throughput by receiving at a higher rate. In this work, we investigate a rate-adaptive video multicast scheme that provides heterogeneous clients with differentiated visual quality matching their channel conditions. We first propose a rate scheduling model that selects the optimal transmission bit-rate for each video frame to maximize the total visual quality of a multicast group subject to a minimum-visual-quality guarantee. We then present a practical and easy-to-implement protocol, called QDM, which constructs a cluster-based structure to characterize node heterogeneity and adapts the transmission bit-rate to network dynamics based on the video quality perceived by representative cluster heads. Since QDM selects the rate with a sample-based technique, it is suitable for real-time streaming without any preprocessing. We show that QDM adapts efficiently to network dynamics and variable video bit-rates, and yields a gain of 2-5 dB in average video quality compared with the leader-based approach.
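The per-frame selection rule can be sketched as a small optimization: among the candidate bit-rates, keep only those meeting the minimum-quality guarantee for every cluster, then pick the one with the largest total quality. The sketch below uses a hypothetical per-cluster quality table; it is not the QDM protocol itself.

```python
# Toy sketch of per-frame rate selection: choose the transmission bit-rate that
# maximizes total visual quality over all receiver clusters, subject to every
# cluster meeting a minimum quality floor. The quality table is hypothetical.

def pick_rate(rates, quality, min_quality):
    """rates: candidate bit-rates (Mbps).
    quality[r]: list of expected per-cluster qualities if rate r is used.
    min_quality: quality floor every cluster must reach."""
    best_rate, best_total = None, float("-inf")
    for r in rates:
        q = quality[r]
        if min(q) < min_quality:        # violates the minimum-quality guarantee
            continue
        if sum(q) > best_total:
            best_rate, best_total = r, sum(q)
    return best_rate

rates = [6, 12, 24]                     # Mbps
quality = {                             # per-cluster expected PSNR (dB), made up
    6:  [34.0, 33.5, 33.0],
    12: [37.0, 35.0, 30.5],
    24: [39.0, 32.0, 24.0],
}
print(pick_rate(rates, quality, min_quality=30.0))   # -> 12
```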
TBA
Reducing the energy consumption of the emerging genre of smart handheld devices while continuing to support mobile applications and services is a major challenge. This work is inspired by an observation on the resource usage patterns of mobile applications. In contrast to existing DVFS scheduling algorithms and history-based prediction techniques, we propose a resource-driven DVFS scheme in which resource state machines model resource usage patterns online to guide DVFS. We have implemented the proposed scheme on Android smartphones and conducted experiments with real-world applications. The results are very encouraging and demonstrate the efficacy of the proposed scheme.
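The idea of letting a resource state machine drive frequency selection can be illustrated with a minimal sketch: classify each resource sample into a state and map each state to a CPU frequency. The states, thresholds, and frequency table below are hypothetical, not the paper's models.

```python
# Minimal sketch of a resource-driven DVFS policy: a small state machine
# classifies the latest resource sample (CPU utilization, pending I/O) into a
# state, and each state maps to a CPU frequency. All values are illustrative.

FREQ_FOR_STATE = {            # kHz, hypothetical frequency steps
    "idle":      300_000,
    "io_wait":   600_000,     # I/O-bound: the CPU can slow down while waiting
    "cpu_burst": 1_500_000,   # compute-bound: run at the highest step
}

def next_state(cpu_util, io_pending):
    if io_pending and cpu_util < 0.5:
        return "io_wait"
    if cpu_util >= 0.5:
        return "cpu_burst"
    return "idle"

def dvfs_step(cpu_util, io_pending):
    state = next_state(cpu_util, io_pending)
    return state, FREQ_FOR_STATE[state]

# Replay a short, made-up trace of (utilization, io_pending) samples.
trace = [(0.1, False), (0.2, True), (0.9, False), (0.05, False)]
for sample in trace:
    print(dvfs_step(*sample))
```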
In this paper, we study a coalitional game approach to resource allocation in a multi-channel cooperative cognitive radio network with multiple primary users (PUs) and secondary users (SUs). We propose to form the grand coalition by grouping all PUs and SUs into one set, in which each PU can lease its spectrum to all SUs in a time-division manner while the SUs in return assist the PUs' data transmissions as relays. We use the solution concept of the core to analyze the stability of the grand coalition, and the solution concept of the Shapley value to fairly divide the payoffs among the users. Due to the convexity of the proposed game, the Shapley value is shown to be in the core. We derive the optimal strategy for each SU, i.e., transmitting its own data or serving as a relay, that maximizes the sum rate of all PUs and SUs. The payoff allocations according to the core and the Shapley value are illustrated by an example, which demonstrates the benefits of forming the grand coalition as compared with non-coalition and other coalition schemes.
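For readers unfamiliar with the solution concepts, the sketch below computes the Shapley value of a tiny coalitional game exactly, by averaging each player's marginal contribution over all join orders. The three-player characteristic function is a hypothetical example, not the paper's PU/SU rate model.

```python
# Sketch: exact Shapley value of a small coalitional game, obtained by
# averaging marginal contributions over all permutations of the players.
# The characteristic function v is a made-up 3-player example.
from itertools import permutations

players = ["PU", "SU1", "SU2"]
v = {                                  # value of each coalition (e.g., sum rate)
    frozenset(): 0.0,
    frozenset({"PU"}): 2.0, frozenset({"SU1"}): 0.0, frozenset({"SU2"}): 0.0,
    frozenset({"PU", "SU1"}): 5.0, frozenset({"PU", "SU2"}): 4.0,
    frozenset({"SU1", "SU2"}): 0.0,
    frozenset({"PU", "SU1", "SU2"}): 8.0,
}

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = frozenset()
    for p in order:
        shapley[p] += v[coalition | {p}] - v[coalition]   # marginal contribution
        coalition = coalition | {p}
for p in shapley:
    shapley[p] /= len(orders)
print(shapley)   # payoffs sum to v(grand coalition) = 8.0
```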
In this paper, we consider a two-way relay network in which two users exchange messages through a single relay using a physical-layer network coding (PNC) based protocol. The protocol comprises two phases of communication. In the multiple access (MA) phase, two users transmit their modulated signals concurrently to the relay, and in the broadcast (BC) phase, the relay broadcasts a network-coded (denoised) signal to both users. Nonbinary and binary network codes are considered for uniform and nonuniform pulse amplitude modulation (PAM) adopted in the MA phase, respectively. We examine the effect of different choices of symbol mapping (i.e., mapping from the denoised signal to the modulation symbols at the relay) and bit mapping (i.e., mapping from the modulation symbols to the source bits at the user) on the system error-rate performance. A general optimization framework is proposed to determine the optimal symbol/bit mappings with joint consideration of noisy transmissions in both communication phases. Complexity-reduction techniques are developed for solving the optimization problems. It is shown that the optimal symbol/bit mappings depend on the signal-to-noise ratio (SNR) of the channel and the modulation scheme. A general strategy for choosing good symbol/bit mappings is also presented based on a high-SNR analysis, which suggests using a symbol mapping that aligns the error patterns in both communication phases and Gray and binary bit mappings for uniform and nonuniform PAM, respectively.
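The simplest instance of the MA-phase denoising can be sketched with BPSK (2-PAM): both users transmit simultaneously, and the relay maps the superimposed signal directly to the XOR of the two source bits. This illustrates only the basic binary mapping, not the optimized symbol/bit mappings or nonuniform PAM studied in the paper; the SNR value is arbitrary.

```python
# Minimal sketch of the multiple-access phase of physical-layer network coding
# with BPSK (2-PAM): the relay "denoises" the superimposed signal to the XOR
# of the two source bits. Threshold at |y| = 1 is a simple rule, not
# necessarily the maximum-likelihood threshold.
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 100_000, 0.4                 # symbols, noise std (hypothetical SNR)

b1 = rng.integers(0, 2, n)              # user 1 bits
b2 = rng.integers(0, 2, n)              # user 2 bits
x1, x2 = 1 - 2 * b1, 1 - 2 * b2         # BPSK: bit 0 -> +1, bit 1 -> -1

y = x1 + x2 + sigma * rng.standard_normal(n)   # superimposed signal at relay

# |y| near 0 means the bits differ (XOR = 1); |y| near 2 means they agree.
s_hat = (np.abs(y) < 1.0).astype(int)
s_true = b1 ^ b2
print("relay XOR symbol error rate:", np.mean(s_hat != s_true))
```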
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach requires neither training data nor user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories in terms of the calculated foreground feature point probability. Our foreground feature point probability is able to determine candidate foreground feature points in each frame, without the need for user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method can be generalized to foreground object detection in dynamic backgrounds, and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, which are shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, and its use achieves excellent foreground object detection results on real-world video data.
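The trajectory-building step starts from frame-to-frame SIFT correspondences; the hedged sketch below shows how such correspondences could be obtained with OpenCV (cv2.SIFT_create is available in OpenCV 4.4 and later). Chaining matches over time into trajectories, the foreground probability, and the CFOT are not implemented here; the file names are placeholders.

```python
# Sketch: SIFT correspondences between two consecutive frames with OpenCV.
# Matched keypoints could then be chained over time into trajectories.
import cv2

def match_sift(frame_a, frame_b, ratio=0.75):
    sift = cv2.SIFT_create()
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)

    matcher = cv2.BFMatcher()                       # brute-force L2 matcher
    matches = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test keeps only distinctive correspondences.
    pairs = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:
            pairs.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return pairs

# Usage (paths are placeholders):
# a, b = cv2.imread("frame_000.png"), cv2.imread("frame_001.png")
# print(len(match_sift(a, b)), "correspondences")
```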
Multimedia Broadcast/Multicast Service (MBMS) is a bandwidth-efficient broadcast scheme for multimedia communications. To support prioritized transmissions, unequal error protection (UEP) for multi-resolution multimedia sources can be realized through MBMS. Nevertheless, enhancing the transmission fidelity of the base layer typically sacrifices the fidelity of the enhancement layers. Herein, a novel dual diversity space-time coding (DDSTC) is proposed to exploit the intrinsic UEP capability of space-time codes by utilizing a constellation mapping duo over two consecutive transmission periods in multiple-input multiple-output (MIMO) systems. Compared with Alamouti coding, the DDSTC achieves coding gains in the transmission error rates of the base layer without significant degradation of the enhancement layers. When the transmission rates of the base and enhancement layers both equal 2 bits per transmission, the DDSTC obtains 1.3 dB and 3.0 dB coding gains for the base layer in $2 \times 2$ and $2 \times 3$ MIMO systems, respectively. Moreover, analysis of the symbol error probabilities verifies that a 6 dB asymptotic coding gain is attainable in rich transmit-diversity scenarios. While attaining these considerable improvements in error rates, the DDSTC avoids high decoding complexity by adopting our proposed decoding schemes. Simulation results show that the DDSTC outperforms conventional UEP schemes based on hierarchical modulation or power allocation.
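For context, the sketch below shows the standard Alamouti space-time block code used as the comparison baseline; the proposed DDSTC and its constellation mapping duo are not reproduced. The QPSK symbols are only illustrative.

```python
# Sketch of standard 2-antenna Alamouti space-time block coding (the baseline
# against which the DDSTC is compared). Symbols are illustrative QPSK points.
import numpy as np

def alamouti_encode(s1, s2):
    """Return the 2x2 transmission matrix: rows are time slots,
    columns are transmit antennas."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

# Two unit-energy QPSK symbols.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
X = alamouti_encode(s1, s2)
print(X)

# Orthogonality of the code matrix: X^H X = (|s1|^2 + |s2|^2) * I.
print(np.allclose(X.conj().T @ X, (abs(s1)**2 + abs(s2)**2) * np.eye(2)))
```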
Linear discriminant analysis (LDA) is a popular supervised dimension reduction algorithm that projects the data into an effective low-dimensional linear subspace while improving the separation between the projected data from different classes. Although this subspace is typically determined by solving a generalized eigenvalue decomposition problem, the high computational cost prohibits the use of LDA, especially when the scale and dimensionality of the data are large. Building on the recent success of least squares LDA (LSLDA), we propose a novel rank-one update method with a simplified class indicator matrix. Using the proposed algorithm, we are able to derive the LSLDA model efficiently. Moreover, our LSLDA model can be extended to address concept drift, in which the recently received data exhibit gradual or abrupt changes in distribution. In other words, our LSLDA is able to observe and model changes in the data distribution, while the dependence on outdated data is suppressed. The proposed LSLDA benefits streaming data classification and mining applications, and it can recognize data with newly added class labels during the learning process. Experimental results on both synthetic and real datasets (with and without concept drift) confirm the effectiveness of our proposed LSLDA.
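The least-squares view of LDA that LSLDA builds on can be illustrated with a two-class toy example: regress a class-indicator target on the centered data, and the resulting weight vector gives a discriminant direction. This sketch uses a common indicator encoding on synthetic data; the paper's simplified indicator matrix, rank-one updates, and concept-drift handling are not reproduced.

```python
# Sketch of the least-squares view of LDA for two classes: regress a class
# indicator on the centered data; the weights span a discriminant direction.
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes in 2-D (toy data).
X0 = rng.standard_normal((100, 2)) + [0, 0]
X1 = rng.standard_normal((100, 2)) + [3, 1]
X = np.vstack([X0, X1])
labels = np.array([0] * 100 + [1] * 100)

Xc = X - X.mean(axis=0)                        # center the data
# Indicator target: +n1/n for class 0, -n0/n for class 1 (a common choice).
n0, n1, n = 100, 100, 200
y = np.where(labels == 0, n1 / n, -n0 / n)

w, *_ = np.linalg.lstsq(Xc, y, rcond=None)     # least-squares solution
w = w / np.linalg.norm(w)                      # discriminant direction
print("discriminant direction:", w)
print("projected class means:", (X0 @ w).mean(), (X1 @ w).mean())
```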