
Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections
Raanan Yehezkel Rohekar · Yaniv Gurwicz · Shami Nisimov · Gal Novik

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #45

Modeling uncertainty in deep neural networks, despite recent important advances, is still an open problem. Bayesian neural networks are a powerful solution, where the prior over network weights is a design choice, often a normal distribution or another distribution encouraging sparsity. However, this prior is agnostic to the generative process of the input data, which might lead to unwarranted generalization on out-of-distribution test data. We suggest the presence of a confounder for the relation between the input data and the discriminative function, given the target label. We propose an approach for modeling this confounder by sharing neural connectivity patterns between the generative and discriminative networks. This approach leads to a new deep architecture in which networks are sampled from the posterior of local causal structures and coupled into a compact hierarchy. We demonstrate that sampling networks from this hierarchy, proportionally to their posterior, is efficient and enables estimating various types of uncertainties. Empirical evaluations of our method demonstrate significant improvement over state-of-the-art calibration and out-of-distribution detection methods.
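The abstract's core recipe, sampling many networks from a posterior and aggregating their predictions, underlies a standard uncertainty decomposition. The sketch below is illustrative only and is not the authors' architecture: it mocks "sampled networks" as linear classifiers with weights drawn around a posterior mean (an assumption for brevity), then splits predictive entropy into aleatoric and epistemic parts.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's structure posterior: each "network"
# is a linear classifier whose weights are perturbed around a posterior mean.
n_samples, n_features, n_classes = 50, 8, 3
posterior_mean = rng.normal(size=(n_features, n_classes))
x = rng.normal(size=(n_features,))

probs = []
for _ in range(n_samples):
    W = posterior_mean + 0.1 * rng.normal(size=posterior_mean.shape)
    probs.append(softmax(x @ W))          # prediction of one sampled network
probs = np.stack(probs)                   # shape: (n_samples, n_classes)

mean_p = probs.mean(axis=0)               # ensemble (predictive) distribution
total = -(mean_p * np.log(mean_p)).sum()  # total uncertainty: predictive entropy
aleatoric = -(probs * np.log(probs)).sum(axis=1).mean()  # expected entropy
epistemic = total - aleatoric             # mutual information (model uncertainty)

print(total, aleatoric, epistemic)
```

High epistemic uncertainty on an input flags it as likely out-of-distribution, which is the mechanism behind the detection results reported in the abstract.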

Author Information

Raanan Yehezkel Rohekar (Intel AI Lab)
Yaniv Gurwicz (Intel AI Lab)
Shami Nisimov (Intel AI Lab)
Gal Novik (Intel AI Lab)