Speech dereverberation remains an open problem after more than three decades of research. The most challenging step in speech dereverberation is blind channel identification (BCI). Although many BCI approaches have been developed, their performance is still far from satisfactory for practical applications. The main difficulty in BCI lies in finding an appropriate acoustic model that not only effectively resolves the solution degeneracies arising from the unknown source, but also robustly models real acoustic environments. This paper proposes a sparse acoustic room impulse response (RIR) model for BCI; that is, an acoustic RIR is modeled as a sparse FIR filter. Under this model, we show how to formulate the BCI of a single-input multiple-output (SIMO) system as an l1-norm regularized least squares (LS) problem, which is convex and can be solved efficiently with guaranteed global convergence. The sparseness of the solutions is controlled by the l1-norm regularization parameters. We propose a sparse learning scheme that infers the optimal l1-norm regularization parameters directly from the microphone observations under a Bayesian framework. Our results show that the proposed approach is effective and robust, and it yields source estimates in real acoustic environments with high fidelity to anechoic chamber measurements.
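To illustrate the kind of formulation described in the abstract, the following is a minimal sketch (not the authors' code) of a two-channel cross-relation BCI with l1-regularized least squares, solved here by a simple iterative soft-thresholding (ISTA) loop. The function names (`sparse_bci_two_channel`, `conv_matrix`), the fixed leading-tap constraint, and the hand-set regularization weight `lam` are illustrative assumptions; the paper instead infers the regularization parameters from the microphone observations under a Bayesian framework.

```python
import numpy as np
from scipy.linalg import toeplitz


def conv_matrix(x, L):
    """N x L convolution matrix: conv_matrix(x, L) @ h gives the first N samples of np.convolve(x, h)."""
    row = np.zeros(L)
    row[0] = x[0]
    return toeplitz(x, row)


def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def sparse_bci_two_channel(x1, x2, L, lam=1e-3, n_iter=2000):
    """Estimate length-L sparse RIRs h1, h2 from x1 = s*h1 and x2 = s*h2 by minimizing
    0.5 * ||X2 h1 - X1 h2||^2 + lam * ||[h1; h2]||_1   (cross-relation residual),
    with h1[0] pinned to 1 to exclude the trivial all-zero solution (a simple heuristic)."""
    X1 = conv_matrix(np.asarray(x1, float), L)
    X2 = conv_matrix(np.asarray(x2, float), L)
    A = np.hstack([X2, -X1])                  # residual A @ [h1; h2] = X2 h1 - X1 h2
    h = np.zeros(2 * L)
    h[0] = 1.0
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the smooth term
    for _ in range(n_iter):
        grad = A.T @ (A @ h)                  # gradient of the least-squares term
        h = soft_threshold(h - step * grad, step * lam)
        h[0] = 1.0                            # re-impose the normalization each iteration
    return h[:L], h[L:]
```

Given two microphone signals and an assumed RIR length, e.g. `h1_hat, h2_hat = sparse_bci_two_channel(x1, x2, L=256)`, the recovered filters could then be used for equalization in a dereverberation pipeline; the sparsity level is governed entirely by `lam` in this sketch.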
Author Information
Yuanqing Lin (University of Pennsylvania)
Jingdong Chen
Youngmoo E Kim (Drexel University)
Daniel Lee (Samsung Research/Cornell University)
Related Events (a corresponding poster, oral, or spotlight)
- 2007 Poster: Blind channel identification for speech dereverberation using l1-norm sparse learning »
  Wed. Dec 5th 06:30 -- 06:40 PM
More from the Same Authors
- 2017 : Poster Session (encompasses coffee break) »
  Beidi Chen · Borja Balle · Daniel Lee · iuri frosio · Jitendra Malik · Jan Kautz · Ke Li · Masashi Sugiyama · Miguel A. Carreira-Perpinan · Ramin Raziperchikolaei · Theja Tulabandhula · Yung-Kyun Noh · Adams Wei Yu
- 2017 Poster: Generative Local Metric Learning for Kernel Regression »
  Yung-Kyun Noh · Masashi Sugiyama · Kee-Eung Kim · Frank Park · Daniel Lee
- 2016 Poster: Efficient Neural Codes under Metabolic Constraints »
  Zhuo Wang · Xue-Xin Wei · Alan A Stocker · Daniel Lee
- 2016 Poster: Maximizing Influence in an Ising Network: A Mean-Field Optimal Solution »
  Christopher W Lynn · Daniel Lee
- 2014 Workshop: Novel Trends and Applications in Reinforcement Learning »
  Csaba Szepesvari · Marc Deisenroth · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez
- 2013 Poster: Optimal Neural Population Codes for High-dimensional Stimulus Variables »
  Zhuo Wang · Alan A Stocker · Daniel Lee
- 2012 Poster: Optimal Neural Tuning Curves for Arbitrary Stimulus Distributions: Discrimax, Infomax and Minimum $L_p$ Loss »
  Zhuo Wang · Alan A Stocker · Daniel Lee
- 2012 Poster: Diffusion Decision Making for Adaptive k-Nearest Neighbor Classification »
  Yung-Kyun Noh · Frank Park · Daniel Lee
- 2010 Poster: Learning via Gaussian Herding »
  Yacov Crammer · Daniel Lee
- 2010 Poster: Generative Local Metric Learning for Nearest Neighbor Classification »
  Yung-Kyun Noh · Byoung-Tak Zhang · Daniel Lee
- 2008 Poster: Extended Grassmann Kernels for Subspace-Based Learning »
  Jihun Hamm · Daniel Lee