The Many Faces of Responsible AI
Conventional machine learning paradigms often rely on binary distinctions between positive and negative examples, disregarding the nuanced subjectivity that permeates real-world tasks and content. This simplistic dichotomy has served us well so far, but because it obscures the inherent diversity of human perspectives and opinions, as well as the inherent ambiguity of content and tasks, it limits how well model performance can align with real-world expectations. This becomes even more critical when we study the impact and the potential multifaceted risks of adopting emerging generative AI capabilities across different cultures and geographies. To address this, we argue that achieving robust and responsible AI systems requires shifting our focus away from a single point of truth and weaving a diversity of perspectives into the data used by AI systems, so as to ensure the trust, safety and reliability of model outputs.
In this talk, I present a number of data-centric use cases that illustrate the inherent ambiguity of content and the natural diversity of human perspectives, which cause unavoidable disagreement that needs to be treated as signal, not noise. This leads to a call to action: establishing culturally aware and society-centered research on the impacts of data quality and data diversity for training and evaluating ML models, and for fostering responsible AI deployment in diverse sociocultural contexts.
Coherence statistics, self-generated experience and why young humans are much smarter than current AI.
The world presents massive amounts of data for learning, but the data relevant to any one thing or event is sparse. I will present evidence from the egocentric experiences of infants and young children in their daily lives at home that demonstrates this sparsity, focusing on the case of early visual object recognition and object name learning. I will show how the statistics of infants' self-generated experiences present solutions to the problem: learner control and optimization of the input, a developmentally constrained curriculum of spatial and temporal properties of the input, and the coherence statistics of individual episodes of experience. I will present evidence with respect to both low-level visual statistics and higher-level semantic categories. I will conclude with a discussion of the alliance between the neural mechanisms that generate the statistics at any point in development and the neural mechanisms that do the learning, and discuss the implications of the findings for artificial intelligence, including studies using infant egocentric experiences as training data.
Test of Time Award Talk: Distributed Representations of Words and Phrases and their Compositionality
The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several improvements that make the Skip-gram model more expressive and enable it to learn higher quality vectors more rapidly. We show that by subsampling frequent words we obtain significant speedup, and also learn higher quality representations as measured by our tasks. We also introduce Negative Sampling, a simplified variant of Noise Contrastive Estimation (NCE) that learns more accurate vectors for frequent words compared to the hierarchical softmax. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple and efficient method for finding phrases, and show that their vector representations can be accurately learned by the Skip-gram model.
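As a rough illustration of the techniques named in this abstract, the following is a minimal sketch (not from the talk or the paper) that trains Skip-gram vectors with negative sampling, frequent-word subsampling, and corpus-driven phrase detection using the gensim library. The toy corpus, the hyperparameter values, and the gensim 4.x API names are assumptions made purely for illustration.

# Minimal sketch, assuming the gensim library (4.x API): Skip-gram with negative
# sampling, subsampling of frequent words, and automatic phrase detection.
# The corpus and hyperparameter values below are toy placeholders, not the
# settings used in the paper.
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

# Toy corpus: a list of tokenized sentences standing in for real training text.
sentences = [
    ["air", "canada", "flies", "to", "toronto"],
    ["air", "canada", "is", "an", "airline"],
    ["the", "new", "york", "times", "is", "a", "newspaper"],
] * 100

# Detect frequent collocations from corpus statistics, e.g. "air canada" -> "air_canada".
# (On a tiny repeated corpus like this the statistics are degenerate and extra
# phrases may also be merged; the step is only meant to show the mechanics.)
bigram = Phraser(Phrases(sentences, min_count=5, threshold=0.05))
phrased = [bigram[s] for s in sentences]

# sg=1 selects the Skip-gram architecture; negative=5 draws five negative samples
# per positive (word, context) pair instead of the hierarchical softmax (hs=0);
# sample=1e-3 subsamples very frequent words.
model = Word2Vec(
    phrased,
    vector_size=100,
    window=5,
    sg=1,
    negative=5,
    sample=1e-3,
    hs=0,
    min_count=2,
    epochs=20,
)

# If the phrase was detected, it has its own vector, learned like any single word.
if "air_canada" in model.wv.key_to_index:
    print(model.wv.most_similar("air_canada", topn=3))

The design point the sketch tries to surface is that negative sampling replaces the full softmax over the vocabulary with a handful of binary classifications per (word, context) pair, which is where the speedup and the gains for frequent words described in the abstract come from.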
Creative AI Session 1
How to Know your Market Value in AI
Faith and AI
Space, Foundation Models, and Earth System Predictability (ESP) Social Event
Bridging the gap: the future of AI policy and academic contribution