
Gradient Information for Representation and Modeling
Jie Ding · Robert Calderbank · Vahid Tarokh

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #238

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information. These measures serve as surrogates for classical information measures, such as those based on logarithmic loss, Kullback-Leibler divergence, and directed information, in many data-processing scenarios of interest, and they often provide significant computational advantages as well as improved stability and robustness. As an example, we apply these measures to the Chow-Liu tree algorithm, and demonstrate remarkable performance and significant computational reduction using both synthetic and real data.
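For context on the application mentioned above, the sketch below shows the classical Chow-Liu procedure that the paper modifies: estimate pairwise mutual information from data, then take a maximum-weight spanning tree over those weights. This is a minimal plug-in version and does not reproduce the paper's gradient-information surrogate; the function names and the plug-in estimator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in (empirical) mutual information between two discrete arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))  # joint frequency
            if pxy > 0:
                px, py = np.mean(x == a), np.mean(y == b)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Edges of the Chow-Liu tree for the columns of `data`:
    a maximum-weight spanning tree over pairwise mutual information."""
    d = data.shape[1]
    weights = {(i, j): mutual_information(data[:, i], data[:, j])
               for i, j in combinations(range(d), 2)}
    # Prim's algorithm on the complete graph over the d variables
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        best = max(((i, j) for i, j in weights
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: weights[e])
        edges.append(best)
        in_tree.update(best)
    return edges
```

The paper's contribution replaces the mutual-information weights with gradient-information surrogates, which avoids the cost and instability of estimating log-density ratios; the tree-construction step itself is unchanged.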

Author Information

Jie Ding (University of Minnesota)
Robert Calderbank (Duke University)
Vahid Tarokh (Duke University)
