Title: Bayesian Message-Passing Recovery for Compressed Sensing: Algorithm Construction and Performance
Degree: Ph.D.
Author: Jaewook Kang
Advisor: Kiseon Kim
Graduation Date: 2016.02.25

Compressed sensing (CS) is a signal processing framework comprising a growing body of techniques that undersample sparse signals and yet reconstruct them precisely. These techniques allow us to sample sparse signals below the rate that the traditional Shannon-Nyquist sampling theory demands. That is, information about the signal is compressively acquired and preserved in measurements sampled at less than the Nyquist rate. However, compared to the traditional sampling framework, in which signals can be recovered with a simple low-pass filter, the CS recovery task is complicated. The CS recovery problem is naturally posed as an l_0-norm minimization, but solving the l_0-norm task directly is impractical because the problem is NP-hard. Therefore, several alternative recovery algorithms have been developed to circumvent the computational infeasibility of the l_0-norm task, such as l_1-norm minimization and greedy approaches; there is a long and rich history of such approaches improving both recovery performance and computational speed. Another line of approaches is based on the Bayesian philosophy. In the Bayesian framework, the sparse recovery problem is described as a maximum-a-posteriori (MAP) or minimum-mean-square-error (MMSE) estimation problem, and the sparse solution is sought by imposing a suitable sparsifying prior probability density function on the target signal. In this dissertation, we are interested in a class of Bayesian recovery algorithms that apply belief-propagation or message-passing strategies, which are known for their strong recovery ability and low computational cost. We study Bayesian message-passing algorithms that handle two types of sparsity: direct sparsity and sparsity in finite differences. In addition, we extend our scope to the case where the target signal is a mixture of two sparse signals.
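
For reference, the optimization and estimation problems mentioned above can be summarized in the following standard form, assuming the generic noisy measurement model y = Ax + n; this is a textbook summary rather than a quotation from the dissertation:

    % Noisy measurement model: y = A x + n, with A an M x N matrix, M < N.
    \hat{x}_{\ell_0} = \arg\min_{x} \|x\|_0 \quad \text{subject to} \quad \|y - Ax\|_2 \le \epsilon
    % Convex relaxation via l_1-norm minimization:
    \hat{x}_{\ell_1} = \arg\min_{x} \|x\|_1 \quad \text{subject to} \quad \|y - Ax\|_2 \le \epsilon
    % Bayesian alternatives under a sparsifying prior p(x):
    \hat{x}_{\mathrm{MAP}} = \arg\max_{x} p(x \mid y), \qquad
    \hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid y]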

The first part of this dissertation discusses a low-computational Bayesian algorithm for direct sparsity, called BHT-BP. In this framework, we consider LDPC-like measurement matrices, which have a tree-structured property, and additive white Gaussian noise. BHT-BP has a joint detection-and-estimation structure consisting of a sparse support detector and a nonzero estimator. The support detector is designed under the criterion of minimum detection error probability, using nonparametric belief propagation (nBP) and composite binary hypothesis tests. The nonzeros are then estimated in the linear MMSE sense, utilizing the support detection result. The strength of BHT-BP lies in noise-robust support detection, which effectively removes the quantization errors caused by the uniform-sampling-based nBP. Therefore, in direct-sparsity recovery, BHT-BP has advantages over CS-BP, an existing nBP algorithm, and is comparable to other recent CS solvers in several respects. In addition, we examine the impact of the minimum nonzero value x_{min} of directly sparse signals on BHT-BP, building on the results of Wainwright et al. and Fletcher et al. Our empirical results show that variation of x_{min} is reflected in the recovery performance as an SNR shift.
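
To make the joint detection-and-estimation structure concrete, the following is a minimal Python sketch, not the actual BHT-BP implementation: it assumes the nBP stage has already produced per-coefficient posterior support probabilities (the argument p_support below), and the function name, threshold, and variance parameters are illustrative.

    import numpy as np

    def detect_and_estimate(y, A, p_support, sigma_n2, sigma_x2, threshold=0.5):
        """Joint detection-and-estimation sketch in the spirit of BHT-BP.

        y          : (M,) noisy measurements, y = A x + n
        A          : (M, N) measurement matrix
        p_support  : (N,) posterior probabilities P(x_i != 0 | y), e.g. from nBP
        sigma_n2   : noise variance
        sigma_x2   : prior variance of the nonzero coefficients
        threshold  : detection threshold of the binary hypothesis test
        """
        # Support detection: per-coefficient binary hypothesis test on the posterior.
        support = p_support > threshold

        # Nonzero estimation: linear MMSE restricted to the detected support.
        x_hat = np.zeros(A.shape[1])
        A_s = A[:, support]
        G = A_s.T @ A_s + (sigma_n2 / sigma_x2) * np.eye(A_s.shape[1])
        x_hat[support] = np.linalg.solve(G, A_s.T @ y)
        return x_hat, support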

The second part proposes a fast approximate message-passing (AMP) algorithm for solving CS recovery problems with 1D finite-difference sparsity in the sense of MMSE estimation. The proposed algorithm, named ssAMP-BGFD, is computationally light, with fast convergence and a cheap per-iteration cost, and provides a phase transition that nearly approaches the state of the art. The ssAMP-BGFD algorithm originates from a sum-product message-passing rule that applies a Bernoulli-Gaussian (BG) prior to seek an MMSE solution. The algorithm construction includes the conventional AMP technique for the measurement fidelity and, in addition, a simplified message-passing method to promote signal sparsity in finite differences. Furthermore, we provide an EM-tuning methodology to learn the BG prior parameters and suggest how to use practical measurement matrices satisfying the RIP requirement in ssAMP-BGFD recovery. Extensive empirical results confirm the performance of the ssAMP-BGFD algorithm in phase transition, convergence speed, and CPU runtime, in comparison with recent algorithms.
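
As an illustration of the AMP structure with a Bernoulli-Gaussian prior, the Python sketch below implements a generic BG-AMP iteration for direct sparsity. It is not ssAMP-BGFD itself (which promotes sparsity in finite differences through a simplified message-passing rule); all function names and parameter defaults are illustrative.

    import numpy as np

    def gauss(x, var):
        return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    def bg_denoiser(r, tau, lam, sigma_x2):
        """MMSE denoiser for r = x + N(0, tau), x ~ (1-lam)*delta_0 + lam*N(0, sigma_x2)."""
        post_nz = lam * gauss(r, sigma_x2 + tau)
        post_z = (1 - lam) * gauss(r, tau)
        pi = post_nz / (post_nz + post_z)        # posterior P(x != 0 | r)
        m = sigma_x2 * r / (sigma_x2 + tau)      # posterior mean of the nonzero component
        v = sigma_x2 * tau / (sigma_x2 + tau)    # posterior variance of the nonzero component
        x_hat = pi * m                           # MMSE estimate E[x | r]
        var_hat = pi * (v + m**2) - x_hat**2     # posterior variance Var(x | r)
        return x_hat, var_hat

    def bg_amp(y, A, lam=0.1, sigma_x2=1.0, n_iter=30):
        """Generic AMP iteration with a Bernoulli-Gaussian prior (direct sparsity)."""
        M, N = A.shape
        x = np.zeros(N)
        z = y.copy()
        tau = np.mean(z**2)
        for _ in range(n_iter):
            r = x + A.T @ z                              # pseudo-data from the linear step
            x, var = bg_denoiser(r, tau, lam, sigma_x2)  # componentwise MMSE denoising
            onsager = (N / M) * z * np.mean(var) / tau   # Onsager correction term
            z = y - A @ x + onsager                      # corrected residual
            tau = np.mean(z**2)                          # empirical noise-level update
        return x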

In the third part, we propose a fast AMP algorithm, referred to as MixAMP, to solve the L_1-Split-Analysis problem for the separation of two-component sparsity mixtures. We develop MixAMP based on factor graph modeling and min-sum message-passing. We then examine MixAMP for the separation of two types of sparsity mixtures: the direct-and-group sparsity mixture and the direct-and-finite-difference sparsity mixture. These case studies show that MixAMP offers computational advantages over a conventional first-order method, TFOCS.
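
For context, one common form of the L_1-Split-Analysis problem for a two-component mixture x = x_1 + x_2 is shown below; the analysis operators D_1, D_2 and the weight \lambda are generic placeholders and may differ in detail from the formulation used in the dissertation:

    \min_{x_1, x_2} \; \|D_1 x_1\|_1 + \lambda \|D_2 x_2\|_1
    \quad \text{subject to} \quad \|y - A(x_1 + x_2)\|_2 \le \epsilon
    % e.g., D_1 = I for a direct-sparse component and D_2 = the 1D
    % finite-difference operator for a finite-difference-sparse component.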
