Title: Bayesian Message-Passing Recovery for Compressed Sensing: Algorithm Construction and Performance/
Degree: Ph.D.
Author: Jaewook Kang
Advisor: Kiseon Kim
Graduation Date: 2016.02.25
Date: 2019-02-25 10:06
Compressed sensing (CS) is a signal processing framework that encompasses many techniques for undersampling sparse signals and then recovering them precisely. CS techniques make it possible to preserve and reconstruct the information in sparse signals at sampling rates below the limit demanded by the traditional Shannon-Nyquist sampling theory. However, unlike the traditional approach that recovers signals with a low-pass filter, the CS recovery problem of reconstructing the original signal from the measured samples is highly complex. The CS recovery problem can naturally be defined as an l_0-norm minimization, but l_0-norm minimization is NP-complete and therefore severely limited in practice. Hence, various alternative algorithms, such as l_1-norm minimization and greedy approaches, have been proposed to relieve the computational difficulty of l_0-norm minimization. Such alternative algorithms can also be built on the Bayesian philosophy, in which the sparse signal recovery problem is described as a maximum-a-posteriori (MAP) or minimum-mean-square-error (MMSE) estimation problem. This dissertation studies CS recovery algorithms that solve the Bayesian MAP/MMSE problem by applying message-passing methods, which offer powerful yet low-complexity recovery. We consider CS frameworks in which the target signal has direct sparsity or finite-difference sparsity. Furthermore, this dissertation also studies the case in which the target signal is a mixture of two different sparse signals.

Compressed sensing (CS) is a signal processing framework which includes a growing body of techniques that undersample sparse signals and yet reconstruct them precisely. Such techniques allow us to sample sparse signals beyond what the traditional Shannon-Nyquist sampling theory demands. Namely, information on the signals is compressively acquired and preserved in sampled measurements where the sampling rate is less than the Nyquist rate. However, as compared to the traditional sampling framework, which can recover signals by using a simple low-pass filter, the CS recovery problem is complicated. The CS recovery problem is naturally defined as an l_0-norm minimization. Solving the l_0-norm task is very limited in practice because it is an NP-complete problem. Therefore, several alternative recovery algorithms have been developed to relax the computational infeasibility of the l_0-norm task, such as l_1-norm minimization and greedy approaches; there has been a long and rich history of such approaches improving the recovery performance and the computational speed.

Another line of approaches is based on the Bayesian philosophy. In the Bayesian framework, the sparse recovery problem is described as a maximum-a-posteriori (MAP) or minimum-mean-square-error (MMSE) estimation problem, and its sparse solution is then sought by imposing a proper sparsifying prior probability density function on the target signal. In this dissertation, we are interested in a class of Bayesian recovery algorithms applying the strategy of belief propagation or message-passing, which is known for its strong recovery ability and low computational cost. We study Bayesian message-passing algorithms to handle two types of sparsity: direct sparsity and sparsity in finite differences. In addition, we extend our scope to a case where the target signal is a mixture of two sparse signals.
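For concreteness, these formulations can be written in a standard form; the notation below (y for the measurement vector, \Phi for the measurement matrix, \epsilon for a noise tolerance) is generic rather than fixed by this abstract:

    \hat{x}_{\ell_0} = \arg\min_{x} \|x\|_0 \quad \text{subject to} \quad \|y - \Phi x\|_2 \le \epsilon
    \hat{x}_{\ell_1} = \arg\min_{x} \|x\|_1 \quad \text{subject to} \quad \|y - \Phi x\|_2 \le \epsilon
    \hat{x}_{\mathrm{MAP}} = \arg\max_{x} f_{X \mid Y}(x \mid y), \qquad \hat{x}_{\mathrm{MMSE}} = \mathbb{E}[X \mid Y = y]

The l_1 problem is the convex relaxation of the l_0 problem, while the Bayesian estimators encode sparsity through the choice of the prior on X.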

The first part of this dissertation discusses a low-computational Bayesian algorithm for direct sparsity, called BHT-BP. In this framework, we consider LDPC-like measurement matrices, which have a tree-structured property, and additive white Gaussian noise. BHT-BP has a joint detection-and-estimation structure consisting of a sparse support detector and a nonzero estimator. The support detector is designed under the criterion of the minimum detection error probability using nonparametric belief propagation (nBP) and composite binary hypothesis tests. The nonzeros are estimated in the linear MMSE sense, where the support detection result is utilized. BHT-BP has its strength in noise-robust support detection, effectively removing quantization errors caused by the uniform sampling-based nBP. Therefore, in direct sparsity recovery, BHT-BP has advantages over CS-BP, an existing nBP algorithm, and is comparable to other recent CS solvers in several aspects. In addition, we examine the impact of the minimum nonzero value of directly sparse signals via BHT-BP, on the basis of the results of Wainwright et al. and Fletcher et al. Our empirical result shows that variation of x_{min} is reflected in the recovery performance in the form of an SNR shift.
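As a rough illustration of this joint detection-and-estimation structure, the sketch below is not the BHT-BP implementation itself: the nBP-based support detector with composite hypothesis tests is replaced by a hypothetical correlation-threshold placeholder, and only the linear MMSE estimation of the nonzeros on a detected support is spelled out, assuming i.i.d. Gaussian nonzeros and additive white Gaussian noise with known variances.

import numpy as np

def detect_support_placeholder(y, A, threshold):
    # Hypothetical stand-in for the BHT-BP support detector; the real detector
    # runs nonparametric BP over the LDPC-like matrix and applies composite
    # binary hypothesis tests to minimize the detection error probability.
    score = np.abs(A.T @ y)                      # crude per-coordinate activity evidence
    return np.flatnonzero(score > threshold)

def lmmse_on_support(y, A, support, sigx2, sigw2):
    # Linear MMSE estimate of the nonzeros restricted to the detected support,
    # assuming i.i.d. N(0, sigx2) nonzeros and N(0, sigw2) measurement noise.
    As = A[:, support]
    G = As.T @ As + (sigw2 / sigx2) * np.eye(len(support))
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = np.linalg.solve(G, As.T @ y)
    return x_hat

# Joint structure: detect the support first, then estimate the values on it, e.g.
# x_hat = lmmse_on_support(y, A, detect_support_placeholder(y, A, thr), sigx2, sigw2)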

The second part proposes a fast approximate message-passing (AMP) algorithm for solving CS recovery problems with 1D finite-difference sparsity in terms of MMSE estimation. The proposed algorithm, named ssAMP-BGFD, is low-computational owing to its fast convergence and cheap per-iteration cost, providing a phase transition nearly approaching the state-of-the-art. The ssAMP-BGFD algorithm originates from a sum-product message-passing rule, applying a Bernoulli-Gaussian (BG) prior and seeking an MMSE solution. The algorithm construction not only includes the conventional AMP technique for the measurement fidelity, but also suggests a simplified message-passing method to promote the signal sparsity in finite differences. Furthermore, we provide an EM-tuning methodology to learn the BG prior parameters, and suggest how to use some practical measurement matrices satisfying the RIP requirement under ssAMP-BGFD recovery. Extensive empirical results confirm the performance of the ssAMP-BGFD algorithm in phase transition, convergence speed, and CPU runtime, compared to recent algorithms.
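To illustrate the AMP-with-Bernoulli-Gaussian-prior idea in its simplest setting, the sketch below handles the direct-sparsity case rather than the finite-difference sparsity targeted by ssAMP-BGFD, whose simplified message-passing update for finite differences is not reproduced here. It combines a scalar BG MMSE denoiser with the conventional AMP residual update and Onsager correction; the parameter values and random matrix are purely illustrative.

import numpy as np

def bg_mmse_denoiser(r, tau2, q, sigx2):
    # Posterior mean E[x | r] and its derivative for the scalar model
    # r = x + N(0, tau2), with prior x ~ (1 - q) * delta_0 + q * N(0, sigx2).
    v = sigx2 + tau2
    log_ratio = (np.log(q / (1.0 - q))            # log-odds of the 'nonzero' hypothesis
                 + 0.5 * np.log(tau2 / v)
                 + 0.5 * r**2 * (1.0 / tau2 - 1.0 / v))
    pi = 1.0 / (1.0 + np.exp(-log_ratio))         # posterior nonzero probability
    gain = sigx2 / v
    eta = pi * gain * r                           # posterior mean
    dpi = pi * (1.0 - pi) * r * sigx2 / (tau2 * v)
    deta = gain * (pi + r * dpi)                  # derivative, used by the Onsager term
    return eta, deta

def amp_recover(y, A, q=0.1, sigx2=1.0, n_iter=30):
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = np.mean(z**2)                      # effective noise level of the pseudo-data
        r = x + A.T @ z
        x, deta = bg_mmse_denoiser(r, tau2, q, sigx2)
        z = y - A @ x + (N / M) * np.mean(deta) * z   # residual with Onsager correction
    return x

# Toy usage with a random Gaussian measurement matrix (illustrative sizes only).
rng = np.random.default_rng(0)
N, M, K = 400, 160, 40
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0 + 1e-2 * rng.standard_normal(M)
print("NMSE:", np.sum((amp_recover(y, A) - x0)**2) / np.sum(x0**2))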

In the third part, we propose a faster AMP, referred to as MixAMP, to solve the L_1-Split-Analysis for 2D sparsity separation. We develop MixAMP based on factor graph modeling and min-sum message-passing. Then, we examine MixAMP for the separation of two types of sparsity mixtures: the direct-and-group sparsity mixture and the direct-and-finite-difference sparsity mixture. This case study shows that the MixAMP method offers computational advantages over the conventional first-order method, TFOCS.
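One common way to write an L_1-Split-Analysis separation problem is shown below; the notation is generic, and the exact weighting and data-fidelity constraint used in the dissertation may differ:

    \min_{x_1, x_2} \ \|\Psi_1 x_1\|_1 + \lambda \|\Psi_2 x_2\|_1 \quad \text{subject to} \quad \|y - \Phi (x_1 + x_2)\|_2 \le \epsilon

Here \Psi_1 and \Psi_2 are analysis operators that sparsify the two mixture components, for instance the identity for direct sparsity and a finite-difference operator for finite-difference sparsity, and \lambda balances the two penalties.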