The accelerating pace of intelligent systems research and real-world deployment presents three clear challenges for producing "good" intelligent systems: (1) the research community lacks incentives and venues for results centered on social impact, (2) deployed systems often produce unintended negative consequences, and (3) there is little consensus on public policy that maximizes "good" social impacts while minimizing the likelihood of harm. As a result, researchers often find themselves without a clear path to positive real-world impact.
The Workshop on AI for Social Good addresses these challenges by bringing together machine learning researchers, social impact leaders, ethicists, and public policy leaders to present their ideas and applications for maximizing the social good. This workshop is a collaboration of three formerly separate lines of research (i.e., it is a "joint" workshop), bringing together researchers in applications-driven AI research, applied ethics, and AI policy. These research areas are unified into a three-track framework that promotes the exchange of ideas among the practitioners of each track.
We hope that this gathering of research talent will inspire the creation of new approaches and tools, foster the development of intelligent systems that benefit all stakeholders, and converge on public policy mechanisms for encouraging these goals.
Sat 8:00 a.m. - 8:05 a.m.
|
Opening remarks
(
Opening remarks
)
link »
Speaker bio: Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, CIFAR Program co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand principles of learning that yield intelligence. He supervises a large group of graduate students and post-docs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an H-index over 120, and rising fast). |
Yoshua Bengio 🔗 |
Sat 8:05 a.m. - 8:25 a.m.
|
Computational Sustainability: Computing for a Better World and a Sustainable Future
(
Invited Talk Track 1
)
link »
Computational sustainability is a new interdisciplinary research field with the overarching goal of developing computational models, methods, and tools to help manage the balance of environmental, economic, and societal needs for a sustainable future. I will provide a short overview of computational sustainability, with examples ranging from wildlife conservation and biodiversity to evaluating the impacts of hydropower dam proliferation in the Amazon basin. Our research leverages recent artificial intelligence (AI) advances in deep learning, reasoning, and decision making. I will highlight cross-cutting computational themes, how AI enriches the sustainability sciences, and, conversely, how sustainability questions enrich AI and computer science. Speaker bio: Carla Gomes is a Professor of Computer Science and the Director of the Institute for Computational Sustainability at Cornell University. Her research area is artificial intelligence with a focus on large-scale constraint-based reasoning, optimization and machine learning. She is noted for her pioneering work in developing computational methods to address challenges in sustainability. |
Carla Gomes 🔗 |
Sat 8:25 a.m. - 8:45 a.m.
|
Translating AI Research into operational impact to achieve the Sustainable Development Goals
(
Invited Talk Track 1
)
link »
In September 2015, Member States of the United Nations adopted the Sustainable Development Goals: a set of goals to end poverty, protect the planet and ensure prosperity for all as part of a new global agenda. To achieve the SDGs by 2030, governments, the private sector, civil society and academia must work together. In this talk, I will present my journey working almost a decade at UN Global Pulse, an innovation initiative of the UN Secretary-General, researching and developing real applications of data innovation and AI for sustainable development, humanitarian action and peace. The work of the UN includes providing food and assistance to 80 million people, supplying vaccines to 45% of the world's children and assisting 65 million people fleeing war, famine and persecution. Examples of innovation projects include understanding perceptions of refugees from social data; mapping population movements in the aftermath of natural disasters; understanding recovery from shocks with financial transaction data; using satellite data to inform humanitarian operations in conflict zones; and monitoring public radio to give voice to citizens in unconnected areas. Based on these examples, the session will discuss operational realities and the global policy environment, as well as challenges and opportunities for the research community to ensure that its groundbreaking discoveries are used responsibly and can be translated into social impact for all. Speaker bio: Dr. Miguel Luengo-Oroz is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. He is the head of the data science teams across the network of Pulse labs in New York, Jakarta & Kampala. Over the last decade, Miguel has built and directed teams bringing data and AI to operations and policy through innovation projects with international organizations, governments, the private sector & academia. He has worked in multiple domains including poverty, food security, refugees & migrants, conflict prevention, human rights, economic indicators, gender, hate speech and climate change. |
Miguel Luengo-Oroz 🔗 |
Sat 8:45 a.m. - 9:15 a.m.
|
Sacred Waveforms: An Indigenous Perspective on the Ethics of Collecting and Usage of Spiritual Data for Machine Learning
(
Invited Talk Track 3
)
link »
This talk is an introduction to the intersection of revitalizing sacred knowledge and the exploitation of this data. For centuries, Indigenous Peoples of the Americas have resisted the loss of their land, technology, and cultural knowledge. This resistance has been enabled by vibrant cultural protocols, unique to each tribal nation, which control the sharing of, and limit access to, sacred knowledge. Technology has made preserving cultural data easy, but there is a natural tension between reigniting ancient knowledge and mediums that allow uncontrollable exploitation of this data. Easy-to-access ML opens a new path toward creating new Indigenous technology, such as ASR, but creating AI using Indigenous heritage requires care. Speaker bios: Michael Running Wolf was raised in a rural village in Montana with intermittent water and electricity; naturally, he now has a Master of Science in Computer Science. Though he is a published poet, he is a computer nerd at heart. His lifelong goal is to pursue endangered indigenous language revitalization using Augmented Reality and Virtual Reality (AR/VR) technology. He was raised with a grandmother who only spoke his tribal language, Cheyenne, which like many other indigenous languages is near extinction. By leveraging his advanced Master's degree in Computer Science and his technical skills, Michael hopes to strengthen the ecology of thought represented by indigenous languages through immersive technology. Caroline Running Wolf, née Old Coyote, is an enrolled member of the Apsáalooke Nation (Crow) in Montana, with a Swabian (German) mother and also Pikuni, Oglala, and Ho-Chunk heritage. Thanks to her genuine interest in people and their stories, she is a multilingual Cultural Acclimation Artist dedicated to supporting Indigenous language and culture vitality. Together with her husband, Michael Running Wolf, she creates virtual and augmented reality experiences to advocate for Native American voices, languages and cultures. Caroline has a Master's degree in Native American Studies from Montana State University in Bozeman, Montana. She is currently pursuing her PhD in anthropology at the University of British Columbia in Vancouver, Canada. |
Michael Running Wolf · Caroline Running Wolf 🔗 |
Sat 9:15 a.m. - 9:20 a.m.
|
Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data
(
Contributed Talk Track 1
)
link »
From financial loans and humanitarian aid to medical diagnosis and criminal justice, consequential decisions in society increasingly rely on machine learning. In most cases, the machine learning algorithms used in these contexts are trained to optimize a single metric of performance; however, most real-world decisions exist in a multi-objective setting that requires the balance of multiple incentives and outcomes. To this end, we develop a methodology for optimizing multi-objective decisions. Building on the traditional notion of Pareto optimality, we focus on understanding how to balance multiple objectives when those objectives are measured noisily or not directly observed. We believe this regime of imperfect information is far more common in real-world decisions, where one cannot easily measure the social consequences of an algorithmic decision. To show how the multi-objective framework can be used in practice, we present results using data from roughly 40,000 videos promoted by YouTube’s recommendation algorithm. This illustrates the empirical trade-off between maximizing user engagement and promoting high-quality videos. We show that multi-objective optimization could produce substantial increases in average video quality at the expense of almost negligible reductions in user engagement. (A toy scalarization sketch follows this entry.) Speaker bio: Esther Rolf is a fourth-year Ph.D. student in the Computer Science department at the University of California, Berkeley, advised by Benjamin Recht and Michael I. Jordan. She is an NSF Graduate Research Fellow and a fellow in the Global Policy Lab in the Goldman School of Public Policy at UC Berkeley. Esther’s research targets machine learning algorithms that interact with society. Her current focus lies in two main domains: the field of algorithmic fairness, which aims to design and audit black-box decision algorithms to ensure equity and benefit for all individuals, and machine learning for environmental monitoring, where abundant sources of temporally recurrent data provide an exciting opportunity to make inferences and predictions about our planet. |
Esther Rolf 🔗 |
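To make the engagement-versus-quality trade-off described above concrete, here is a minimal, self-contained sketch of a scalarization sweep over two objectives, one of which is only observed with noise. All of the data, the noise level, and the `select` helper are synthetic illustrations, not the authors' method or their YouTube dataset.

```python
# Hypothetical two-objective selection problem: promote k items that score well
# on a convex combination of an observed objective (engagement) and a noisily
# measured one (quality). Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
engagement = rng.uniform(0, 1, n)                      # directly observed objective
true_quality = rng.uniform(0, 1, n)                    # latent objective of interest
noisy_quality = true_quality + rng.normal(0, 0.1, n)   # imperfect measurement

def select(alpha, k=100):
    """Promote the k items maximizing alpha*engagement + (1-alpha)*quality."""
    score = alpha * engagement + (1 - alpha) * noisy_quality
    return np.argsort(score)[-k:]

# Sweeping the trade-off weight traces an empirical Pareto frontier.
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    chosen = select(alpha)
    print(f"alpha={alpha:.2f}  "
          f"engagement={engagement[chosen].mean():.3f}  "
          f"quality={true_quality[chosen].mean():.3f}")
```

Regions of the sweep where measured quality rises sharply while engagement barely drops correspond to the favorable trade-offs the talk reports.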
Sat 9:20 a.m. - 9:25 a.m.
|
Dilated LSTM with ranked units for classification of suicide notes
(
Contributed Talk Track 1
)
link »
Recent statistics in suicide prevention show that people are increasingly posting their last words online, and with the unprecedented availability of textual data from social media platforms, researchers have the opportunity to analyse such data. This work focuses on distinguishing suicide notes from other types of text in a document-level classification task, using a hierarchical recurrent neural network to uncover linguistic patterns in the data. (A toy architecture sketch follows this entry.) Speaker bio: Annika Marie Schoene is a third-year PhD candidate in Natural Language Processing at the University of Hull and is affiliated with IBM Research UK. The main focus of her work lies in investigating recurrent neural networks for fine-grained emotion detection in social media data. She also has an interest in mental health issues on social media, where she looks at how to identify suicidal ideation in textual data. |
Annika Schoene 🔗 |
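As a rough illustration of the architecture family named in the title, the following PyTorch sketch stacks LSTM layers that read progressively subsampled sequences, one simple way to realize temporal dilation for document-level classification. The class name, dimensions, and subsampling scheme are assumptions for exposition; the paper's "ranked units" component and its exact model are not reproduced here.

```python
# Toy dilated-LSTM document classifier: each layer keeps every d-th step of the
# previous layer's output, widening the temporal receptive field. Illustrative only.
import torch
import torch.nn as nn

class DilatedLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128,
                 dilations=(1, 2, 4), n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        dims = [embed_dim] + [hidden] * len(dilations)
        self.layers = nn.ModuleList(
            [nn.LSTM(dims[i], hidden, batch_first=True) for i in range(len(dilations))]
        )
        self.dilations = dilations
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens)
        for lstm, d in zip(self.layers, self.dilations):
            x = x[:, ::d, :]                    # dilation by subsampling
            x, _ = lstm(x)
        return self.head(x[:, -1, :])           # classify from the final state

model = DilatedLSTMClassifier()
logits = model(torch.randint(0, 5000, (8, 120)))  # 8 documents, 120 tokens each
print(logits.shape)                               # torch.Size([8, 2])
```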
Sat 9:25 a.m. - 9:30 a.m.
|
Speech in Pixels: Automatic Detection of Offensive Memes for Moderation
(
Contributed Talk Track 1
)
link »
This work addresses the challenge of hate speech detection in Internet memes and attempts to use visual information to detect hate speech automatically, unlike previous works that have focused on language. (A schematic fusion sketch follows this entry.) Speaker bio: Xavier Giro-i-Nieto is an associate professor at the Universitat Politecnica de Catalunya (UPC) in Barcelona and a visiting researcher at the Barcelona Supercomputing Center (BSC). He obtained his doctoral degree from UPC in 2012 under the supervision of Prof. Ferran Marques (UPC) and Prof. Shih-Fu Chang (Columbia University). His research interests focus on deep learning applied to multimedia and reinforcement learning. |
Xavier Giro-i-Nieto 🔗 |
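The talk's central move, bringing visual information into a task usually handled with text alone, can be pictured as a late-fusion classifier: embed the image and the overlaid text separately, concatenate, and classify. The sketch below is schematic; the random tensors stand in for real image/text encoders, and `MemeClassifier` is a hypothetical name, not the authors' architecture.

```python
# Schematic late-fusion meme classifier: concatenate image and text embeddings
# and apply a small MLP head. Random tensors stand in for encoder outputs.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, n_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, img_feat, txt_feat):
        return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))

model = MemeClassifier()
out = model(torch.randn(4, 512), torch.randn(4, 256))  # 4 memes
print(out.shape)                                       # torch.Size([4, 2])
```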
Sat 9:30 a.m. - 9:35 a.m.
|
Towards better healthcare: What could and should be automated?
(
Contributed Talk Track 1
)
link »
While artificial intelligence (AI) and other automation technologies might lead to enormous progress in healthcare, they may also have undesired consequences for people working in the field. In this interdisciplinary study, we capture empirical evidence of not only what healthcare work could be automated, but also what should be automated. We quantitatively investigate these research questions by utilizing probabilistic machine learning models trained on thousands of ratings, provided by both healthcare practitioners and automation experts. Based on our findings, we present an analytical tool (Automatability-Desirability Matrix) to support policymakers and organizational leaders in developing practical strategies on how to harness the positive power of automation technologies, while accompanying change and empowering stakeholders in a participatory fashion. Speaker bio: Wolfgang Frühwirt is an Associate Member of the Oxford-Man Institute (University of Oxford, Engineering Department), where he works with the Machine Learning Research Group. |
Wolfgang Fruehwirt 🔗 |
Sat 9:35 a.m. - 9:45 a.m.
|
All Tracks Poster Session ( Poster Session ) link » | 🔗 |
Sat 9:45 a.m. - 10:30 a.m.
|
Break / All Tracks Poster Session ( Poster Session ) link » | 🔗 |
Sat 10:30 a.m. - 11:30 a.m.
|
Towards a Social Good? Theories of Change in AI
(
Panel Discussion Track 3
)
link »
Considerable hope and energy are put into AI, and its critique, under the assumption that the field will make the world a “better” place by maximizing social good. Will it? For whom? At what time scale? Most importantly, who defines "social good"? This panel invites dissent. Leading voices will pick apart these questions by sharing their own competing theories of social change in relation to AI. In addition to answering audience questions, they will share how they decide on trade-offs between pragmatism and principles and how they resist elements of AI research that are known to be dangerous and/or socially degenerative, particularly in relation to surveillance, criminal justice and privacy. We undertake a probing and genuine conversation around these questions. Audience members are invited to submit questions at: https://app.sli.do/event/yndumbf3/live/questions. Facilitator bios: Dhaval Adjodah is a research scientist at the MIT Media Lab. His research investigates the current limitations of generalization in machine learning as well as how to move beyond them by leveraging the social cognitive adaptations humans evolved to collaborate effectively at scale. Beyond pushing the limits of modern machine learning, he is also interested in improving institutions by using online human experiments to better understand the cognitive limits and biases that affect everyday individual economic, political, and social decisions. During his PhD, Dhaval was an intern in Prof. Yoshua Bengio's group at MILA, a member of the Harvard Berkman Assembly on Ethics and Governance in Artificial Intelligence, and a fellow at the Dalai Lama Center for Ethics and Transformative Values. He has a B.S. in Physics from MIT and an M.S. in Technology Policy from the MIT Institute for Data, Systems, and Society. Natalie Saltiel (MIT). Speaker bios: Dr. Prem Natarajan is a Vice President in Amazon’s Alexa unit, where he leads the Natural Understanding (NU) organization within Alexa AI. NU is a multidisciplinary science and engineering organization that develops, deploys, and maintains state-of-the-art conversational AI technologies including natural language understanding, intelligent dialog systems, entity linking and resolution, and associated worldwide runtime operations. Dr. Natarajan joined Amazon from the University of Southern California (USC), where he was Senior Vice Dean of Engineering in the Viterbi School of Engineering, Executive Director of the Information Sciences Institute (a 300-person R&D organization), and Research Professor of computer science with distinction. Prior to that, as Executive VP at Raytheon BBN Technologies, he led the speech, language, and multimedia business unit, which included research and development operations and commercial products for real-time multimedia monitoring, document analysis, and information extraction. During his tenure at USC and at BBN, Dr. Natarajan directed R&D efforts in speech recognition, natural language processing, computer vision, and other applications of machine learning. While at USC, he directly led nationally influential DARPA- and IARPA-sponsored research efforts in biometrics/face recognition, OCR, NLP, media forensics, and forecasting. Most recently, he helped to launch the Fairness in AI (FAI) program, a collaborative effort between NSF and Amazon for funding fairness-focused research efforts in US universities.
Rashida Richardson: As Director of Policy Research, Rashida designs, implements, and coordinates AI Now’s research strategy and initiatives on the topics of law, policy, and civil rights. Rashida joins AI Now after working as Legislative Counsel at the American Civil Liberties Union of New York (NYCLU), where she led the organization’s work on privacy, technology, surveillance, and education issues. Prior to the NYCLU, she was a staff attorney at the Center for HIV Law and Policy, where she worked on a wide range of HIV-related legal and policy issues nationally, and she previously worked at Facebook Inc. and HIP Investor in San Francisco. Rashida currently serves on the Board of Trustees of Wesleyan University, the Advisory Board of the Civil Rights and Restorative Justice Project, and the Board of Directors of the College & Community Fellowship, and she is an affiliate and Advisory Board member of the Center for Critical Race + Digital Studies. She received her BA with honors in the College of Social Studies at Wesleyan University and her JD from Northeastern University School of Law. Sarah T. Hamid is an abolitionist and organizer in Southern California, working to build community defense against carceral technologies. She's built and worked on campaigns against predictive policing, risk assessment technologies, public/private surveillance partnerships, electronic monitoring, and automated border screening. In March 2019, she co-founded the Prison Tech Research Group (PTR-Grp), a coalition of abolitionists working on the intersection of technology/innovation and the prison-industrial complex. PTR-Grp focuses on private-public research partnerships deployed under the guise of prison reform, which stage the prison as a site for technological innovation and low-cost testing. The project centers the needs and safety of incarcerated and directly impacted people who face the violently expropriative data science industry with few safety nets. Sarah also facilitates the monthly convening of the Community Defense Syllabus, during which activists of color from all over the country work to theorize the intersection of race and carceral computing. In 2020, she will lead the launch and roll-out of the Carceral Tech Resistance Network, a community archive and knowledge-sharing project that seeks to amplify the capacity of community organizations to resist the encroachment and experimentation of harmful technologies. |
Natalie Saltiel · Rashida Richardson · Sarah T. Hamid 🔗 |
Sat 11:30 a.m. - 11:35 a.m.
|
Hard Choices in AI Safety
(
Contributed Talk Track 2
)
link »
As AI systems become prevalent in high-stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms caused by systems to stakeholders in complex social contexts, and how to address these, remain unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development. Speaker bios: Roel Dobbe’s research addresses the development, analysis, integration and governance of data-driven systems. His PhD work combined optimization, machine learning and control theory to enable monitoring and control of safety-critical systems, including energy & power systems and cancer diagnosis and treatment. In addition to research, Roel has experience in industry and public institutions, where he has served as a management consultant for AT Kearney, a data scientist for C3 IoT, and a researcher for the National ThinkTank in The Netherlands. His diverse experiences led him to examine the ways in which values and stakeholder perspectives are represented in the process of designing and deploying AI and algorithmic decision-making and control systems. Roel founded Graduates for Engaged and Extended Scholarship around Computing & Engineering (GEESE), a student organization stimulating graduate students across all disciplines studying or developing technologies to take a broader lens on their field of study and engage across disciplines. Roel has published his work in various journals and conferences, including Automatica, the IEEE Conference on Decision and Control, the IEEE Power & Energy Society General Meeting, IEEE/ACM Transactions on Computational Biology and Bioinformatics, and NeurIPS. Thomas Krendl Gilbert is an interdisciplinary Ph.D. candidate in Machine Ethics and Epistemology at UC Berkeley. With prior training in philosophy, sociology, and political theory, Tom researches the various technical and organizational predicaments that emerge when machine learning alters the context of expert decision-making. In particular, he is interested in how different algorithmic learning procedures (e.g. reinforcement learning) reframe classic ethical questions, such as the problem of aggregating human values and interests. In his free time he enjoys sailing and creative writing. Yonatan Mintz is a Postdoctoral Research Fellow at the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology; previously, he completed his PhD at the Department of Industrial Engineering and Operations Research at the University of California, Berkeley. His research interests focus on human-sensitive decision making, in particular the application of machine learning and optimization methodology to personalized healthcare and fair and accountable decision making. Yonatan's work has been published in many journals and conferences across the machine learning, operations research, and medical fields. |
Roel Dobbe · Thomas Gilbert · Yonatan Mintz 🔗 |
Sat 11:35 a.m. - 11:40 a.m.
|
The Effects of Competition and Regulation on Error Inequality in Data-driven Markets
(
Contributed Talk Track 3
)
link »
Much work has documented instances of unfairness in deployed machine learning models, and significant effort has been dedicated to creating algorithms that take into account issues of fairness. Our work highlights an important but understudied source of unfairness: market forces that drive differing amounts of firm investment in data across populations. We develop a high-level framework, based on insights from learning theory and industrial organization, to study this phenomenon. In a simple model of this type of data-driven market, we first show that a monopoly will invest unequally in the groups. There are two natural avenues for preventing this disparate impact: promoting competition and regulating firm behavior. We show first that competition, under many natural models, does not eliminate incentives to invest unequally, and can even exacerbate them. We then consider two avenues for regulating the monopoly (requiring it to ensure that each group’s error rate is low, or forcing the groups’ error rates to be similar to each other) and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, not only technological ones. Speaker bio: Hadi Elzayn is a fourth-year PhD candidate in Applied Math and Computational Science at the University of Pennsylvania, advised by Michael Kearns. He is interested in the intersection of computer science and economics, and in the particular topics of how algorithmic learning interacts with social concerns like fairness, privacy, and markets (and how to design algorithms respecting those concerns). He received his BA from Columbia University in Mathematics and Economics. He has interned at Microsoft Research, and previously worked at the consulting firm TGG. |
Hadi Elzayn 🔗 |
Sat 11:40 a.m. - 11:45 a.m.
|
Learning Fair Classifiers in Online Stochastic Setting
(
Contributed Talk Track 3
)
link »
One thing that differentiates policy-driven machine learning is that new public policies are often implemented in a trial-and-error fashion, as data might not be available upfront. In this work, we try to accomplish approximate group fairness in an online decision-making process where examples are sampled i.i.d. from an underlying distribution. Our work follows from the classical learning-from-experts scheme, extending the multiplicative weights algorithm by keeping separate weights for label classes as well as groups. Although accuracy and fairness are often conflicting goals, we try to mitigate the trade-offs using an optimization step and demonstrate the performance on a real data set. (A toy multiplicative-weights sketch follows this entry.) Speaker bio: Yi (Alicia) Sun is a PhD candidate in the Institute for Data, Systems and Society at MIT. Her research interests are in designing algorithms that are robust and reliable, as well as aligned with societal values. |
Yi Sun 🔗 |
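For readers unfamiliar with the learning-from-experts scheme the abstract builds on, below is a minimal sketch of the standard multiplicative-weights (Hedge) update, extended to keep one weight vector per group. The synthetic per-group expert accuracies and the hyperparameters are assumptions; the paper's algorithm additionally keeps separate weights per label class and adds a fairness-aware optimization step, both omitted here.

```python
# Multiplicative weights with per-group expert weights: each arriving example
# belongs to a group, and only that group's weights are updated. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_experts, n_groups, eta, T = 4, 2, 0.2, 5000
weights = np.ones((n_groups, n_experts))                  # one weight vector per group
accuracy = rng.uniform(0.2, 0.8, (n_groups, n_experts))   # experts differ by group

for _ in range(T):
    g = rng.integers(n_groups)                    # group of the arriving example
    correct = rng.random(n_experts) < accuracy[g]
    losses = 1.0 - correct                        # 0/1 loss for each expert
    weights[g] *= np.exp(-eta * losses)           # Hedge update within group g
    weights[g] /= weights[g].sum()                # renormalize for stability

print(np.round(weights, 3))  # each group's weights concentrate on its best expert
```

Keeping separate weights per group lets the learner track a different best expert for each group, which is what makes group-level error guarantees possible in this scheme.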
Sat 11:45 a.m. - 11:50 a.m.
|
Fraud detection in telephone conversations for financial services using linguistic features
(
Contributed Talk Track 2
)
link »
In collaboration with linguistics experts and expert interrogators, we present an approach for fraud detection in transcribed telephone conversations. The proposed approach exploits the syntactic and semantic information of the transcription to extract both the linguistic markers and the sentiment of the customer's response. The results of the proposed approach are demonstrated on real-world financial services data using efficient, robust and explainable classifiers such as Naive Bayes, Decision Tree, Nearest Neighbours, and Support Vector Machines. (A toy classification pipeline follows this entry.) Speaker bio: Nikesh Bajaj is a Postdoctoral Research Fellow at the University of East London, working on the Innovate UK-funded project "Automation and Transparency across Financial and Legal Services," in collaboration with Intelligent Voice Ltd. and Strenuus Ltd. The project includes working with machine learning researchers, data scientists, linguistics experts and expert interrogators to model human behaviour for deception detection. He completed his PhD at Queen Mary University of London in a joint program with the University of Genova. His PhD work focused on predictive analysis of auditory attention using physiological signals (e.g. EEG, PPG, GSR). In addition to research, Nikesh has 5+ years of teaching experience. His research interests focus on signal processing, machine learning, deep learning, and optimization. |
Nikesh Bajaj 🔗 |
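As a schematic companion to the abstract, the snippet below fits one of the named classifier families (Naive Bayes) on bag-of-n-grams features of toy transcripts using scikit-learn. The four-utterance "dataset" and its labels are invented, and the hand-crafted linguistic markers and sentiment features described in the talk are omitted.

```python
# Minimal transcript-classification pipeline: TF-IDF n-grams + Multinomial
# Naive Bayes. Toy data; a real system would add linguistic-marker and
# sentiment features and far more training examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

transcripts = [
    "I do not recall making that transaction, maybe someone else did",
    "yes that was me, I bought it on Tuesday at the usual shop",
    "honestly, to be fair, I would never ever do such a thing",
    "I made the payment myself and can confirm the amount",
]
labels = [1, 0, 1, 0]  # 1 = flagged as potentially deceptive (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(transcripts, labels)
print(clf.predict(["to be fair I do not recall that payment at all"]))
```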
Sat 11:50 a.m. - 11:55 a.m.
|
A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices
(
Contributed Talk Track 2
)
link »
What tools are available to guide the ethically aligned research, development and deployment of intelligent systems? We construct a typology to help practically-minded developers ‘apply ethics’ at each stage of the AI development pipeline, and to signal to researchers where further work is needed. Speaker bio: Libby Kinsey is lead technologist for AI at Digital Catapult, the UK's advanced digital technology innovation centre, where she works with a multi-disciplinary team to support organisations in building their AI capabilities responsibly. She spent her early career in technology venture capital before returning to university to study machine learning in 2014. |
Libby Kinsey 🔗 |
Sat 11:55 a.m. - 12:00 p.m.
|
AI Ethics for Systemic Issues: A Structural Approach
(
Contributed Talk Track 3
)
link »
Much of the discourse on AI ethics has focused on technical improvements and holding individuals accountable to prevent accidents and malicious use of AI. While this is useful and necessary, such an “agency-focused” approach does not cover all the harmful outcomes caused by AI. In particular, it ignores the more indirect and complex risks resulting from AI’s interaction with the socio-economic and political context. A “structural” approach is needed to account for such broader negative impacts where no individual can be held accountable. This is particularly relevant for AI applied to systemic issues such as climate change. This talk explains why a structural approach is needed in addition to the existing agency approach to AI ethics, and offers some preliminary suggestions for putting this into practice. Speaker bio: Agnes Schim van der Loeff: Hi, my name is Agnes and I do ethics and policy research at Cervest, which is developing Earth Science AI to quantify climate uncertainty and inform decisions on more sustainable land use. As part of Cervest’s research residency programme earlier this year, I started exploring the ethical implications of such use of AI, which resulted in this NeurIPS paper! Now I am developing a framework to ensure all steps in the development, distribution and use of Cervest’s AI-driven platform are ethical and prevent any harmful outcomes. I hold a first-class Honours degree in Arabic and Development Studies from SOAS University of London. Having studied the intersection of social, economic and political aspects of development, I am interested in how dilemmas around AI reflect wider debates on power relations in society, and I want to explore how AI could be a vehicle for transformative social change. I am particularly passionate about climate justice, which I have engaged with academically and through campaigning. |
Agnes Schim van der Loeff 🔗 |
Sat 12:00 p.m. - 2:00 p.m.
|
Lunch - on your own
|
🔗 |
Sat 2:00 p.m. - 2:05 p.m.
|
ML system documentation for transparency and responsible AI development - a process and an artifact
(
Invited Talk Track 2
)
link »
One large-scale multistakeholder effort to implement the values of the Montreal Declaration, as well as other AI ethical principles, is ABOUT ML, a recently launched project led by the Partnership on AI. ABOUT ML aims to synthesize and advance existing research by bringing PAI's Partner community and beyond into a public conversation, and to catalyze the building of a set of resources that allow more organizations to experiment with pilots. Eventually, ABOUT ML aims to surface research-driven best practices and to aid in translating those into new industry norms. This talk will be an overview of the work to date and ways to get involved moving forward. Speaker bio: Jingying Yang is a Program Lead on the Research team at the Partnership on AI, where she leads a portfolio of collaborative multistakeholder projects on the topics of safety, fairness, transparency, and accountability, including the ABOUT ML project to set new industry norms on ML documentation. Previously, she worked in Product Operations at Lyft, for the state of Massachusetts on health care policy, and in management consulting at Bain & Company. |
Jingying Yang 🔗 |
Sat 2:05 p.m. - 2:30 p.m.
|
Beyond Principles and Policy Proposals: A framework for the agile governance of AI
(
Invited Talk Track 2
)
link »
The mismatch between the speed at which innovative technologies are deployed and the slow traditional implementation of ethical/legal oversight requires creative, agile, multi-stakeholder, and cooperative approaches to governance. Agile governance must go beyond hard law and regulations to accommodate soft law, corporate self-governance, and technological solutions to challenges. This presentation will summarize the concepts, insights, and creative approaches to AI oversight that have led to the 1st International Congress for the Governance of AI, which will convene in Prague on April 16-18, 2020. Speaker bio: Wendell Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. He is a consultant, an ethicist, and a scholar at Yale University’s Interdisciplinary Center for Bioethics, where he chairs the working research group on technology and ethics. He is co-author (with Colin Allen) of Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called machine ethics, machine morality, computational morality, and friendly AI. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Wallach is the principal investigator of a Hastings Center project on the control and responsible innovation in the development of autonomous machines. |
Wendell Wallach 🔗 |
Sat 2:30 p.m. - 2:45 p.m.
|
Untangling AI Ethics: Working Toward a Root Issue
(
Invited Talk Track 2
)
link »
Given myriad issues in AI ethics as well as many competing frameworks/declarations, it may be useful to step back to see if we can find a root or common issue, which may help to suggest a broad solution to the complex problem. This involves returning to first principles: what is the nature of AI? I will suggest that AI is the power of increasing omniscience, which is not only generally disruptive to society but also a threat to our autonomy. A broad solution, then, is to aim at restoring that autonomy. Speaker bio: Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics—including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017)—human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas. |
Patrick Lin 🔗 |
Sat 2:45 p.m. - 3:00 p.m.
|
AI in Healthcare: Working Towards Positive Clinical Impact
(
Invited Talk Track 2
)
link »
Artificial intelligence (AI) applications in healthcare hold great promise, aiming to empower clinicians to diagnose and treat medical conditions earlier and more effectively. To ensure that AI solutions deliver on this promise, it is important to approach the design of prototype solutions with clinical applicability in mind, envisioning how they might fit within existing clinical workflows. Here we provide a brief overview of how we are incorporating this thinking in our research projects, while highlighting challenges that lie ahead. Speaker bio: Nenad Tomasev: My research interests lie at the intersection of theory and impactful real-world AI applications, with a particular focus on AI in healthcare, which I have been pursuing at DeepMind since early 2016. In our most recent work, published in Nature in July 2019, we demonstrate how deep learning can be used for accurate early predictions of patient deterioration from electronic health records and alerting that opens possibilities for timely interventions and preventative care. Prior to moving to London, I had been involved with other applied projects at Google, such as Email Intelligence and the Chrome Data team. I obtained my PhD in 2013 from the Artificial Intelligence Laboratory at JSI in Slovenia, where I was working on better understanding the consequences of the curse of dimensionality in instance-based learning in many dimensions. |
Nenad Tomasev 🔗 |
Sat 3:00 p.m. - 3:30 p.m.
|
Implementing Responsible AI
(
Panel Discussion Track 2
)
link »
This panel will discuss practical solutions for encouraging and implementing responsible AI. There will be time for audience Q&A. Audience members are invited to submit questions at https://app.sli.do/event/kfdhmkbd/live/questions. Facilitator bio: Brian Patrick Green is Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. His interests include AI and ethics, the ethics of space exploration and use, the ethics of technological manipulation of humans, the ethics of catastrophic risk, and the intersection of human society and technology, including religion and technology. Green teaches AI ethics in the Graduate School of Engineering and is co-author of the Ethics in Technology Practice corporate technology ethics resources. Speaker bios: Wendell Wallach, Patrick Lin, Nenad Tomasev, Jingying Yang, and Libby Kinsey; see their bios under their respective talks earlier in the program. |
Brian Green · Wendell Wallach · Patrick Lin · Nenad Tomasev · Jingying Yang · Libby Kinsey 🔗 |
Sat 3:30 p.m. - 4:15 p.m.
|
Break / All Tracks Poster Session ( Poster Session ) link » | 🔗 |
Sat 4:15 p.m. - 4:20 p.m.
|
"Good" isn't good enough
(
Contributed Talk Track 3
)
link »
Despite widespread enthusiasm among computer scientists to contribute to “social good,” the field's efforts to promote good lack a rigorous foundation in politics or social change. There is limited discourse regarding what “good” actually entails, and instead a reliance on vague notions of what aspects of society are good or bad. Moreover, the field rarely considers the types of social change that result from algorithmic interventions, instead following a “greedy algorithm” approach of pursuing technology-centric incremental reform at all points. In order to reason well about doing good, computer scientists must adopt language and practices to reason reflexively about political commitments, a praxis that considers the long-term impacts of technological interventions, and an interdisciplinary focus that no longer prioritizes technical considerations as superior to other forms of knowledge. Speaker bio: Ben Green is a PhD Candidate in Applied Math at Harvard, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard, and a Research Fellow at the AI Now Institute at NYU. He studies the social and policy impacts of data science, with a focus on algorithmic fairness, municipal governments, and the criminal justice system. His book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019 by MIT Press. |
Ben Green 🔗 |
Sat 4:20 p.m. - 4:50 p.m.
|
Automated Quality Control for a Weather Sensor Network
(
Invited Talk Track 1
)
link »
TAHMO (the Trans-African Hydro-Meteorological Observatory) is a growing network of more than 500 automated weather stations. The eventual goal is to operate 20,000 stations covering all of sub-Saharan Africa and providing ground truth for weather and climate models. Because sensors fail and go out of calibration, some form of quality control is needed to detect bad values and determine when a technician needs to visit a station. We are deploying a three-layer architecture that consists of (a) fitted anomaly detection models, (b) probabilistic diagnosis of broken sensors, and (c) spatial statistics to detect extreme weather events (that may exonerate flagged sensors). (A toy sketch of layer (a) follows this entry.) Speaker bio: Dr. Dietterich is Distinguished Emeritus Professor of computer science at Oregon State University and currently pursues interdisciplinary research at the boundary of computer science, ecology, and sustainability policy. |
Thomas Dietterich 🔗 |
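A stripped-down illustration of layer (a) of the pipeline described above: predict each station's readings from its neighbors and flag large residuals. The synthetic readings, the median-of-neighbors predictor, and the z-score threshold are all assumptions for exposition, not TAHMO's deployed models.

```python
# Toy sensor quality control: flag timesteps where a station deviates strongly
# from a spatial prediction built from neighboring stations. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
T, n_neighbors = 500, 4
base = 25 + 3 * np.sin(np.linspace(0, 20, T))             # shared weather signal
neighbors = base[:, None] + rng.normal(0, 0.5, (T, n_neighbors))
station = base + rng.normal(0, 0.5, T)
station[300:310] += 8.0              # inject a miscalibrated-sensor fault

pred = np.median(neighbors, axis=1)  # cheap spatial prediction of the station
resid = station - pred
z = (resid - resid.mean()) / resid.std()
flags = np.abs(z) > 3                # layer (a): anomaly flags
print("flagged timesteps:", np.flatnonzero(flags))
```

Layers (b) and (c) would then diagnose whether a flagged sensor is actually broken or whether a genuine extreme weather event exonerates it.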
Sat 4:50 p.m. - 5:50 p.m.
|
AI and Sustainable Development
(
Panel Discussion Track 1
)
link »
The focus of this panel is the use of AI for Sustainable Development; it will explore the many opportunities this technology presents to improve lives around the world, as well as address the challenges and barriers to its application. While there is much outstanding work being done to apply AI to such situations, too often this research is not deployed, and there is a disconnect between the research and industry communities and the public sector actors. With leading researchers and practitioners from across the academic, public, UN and private sectors, this panel brings a diversity of experience to address these important issues. Audience members are invited to submit questions at: https://app.sli.do/event/skexhgej/live/questions Facilitator bio: Fei Fang is an Assistant Professor in the Institute for Software Research in the School of Computer Science at Carnegie Mellon University. Speaker bios: for Carla Gomes, Miguel Luengo-Oroz, and Thomas G. Dietterich, see their bios under their respective talks earlier in the program. Julien Cornebise is an Honorary Associate Professor at University College London. He focuses on putting machine learning firmly into the hands of nonprofits, governments, NGOs, and UN agencies: those who actually work on tackling our societies' biggest problems. He built, and until recently was a Director of Research of, Element AI's AI for Good team, and was head of its London office. Prior to this, Julien was at DeepMind (later acquired by Google) as an early employee, where he led several fundamental research projects used in early demos and fundraising, then co-created its Health Research team. Since leaving DeepMind in 2016, he has been working with Amnesty International, Human Rights Watch, and other actors. Julien holds an MSc in Computer Engineering, an MSc in Mathematical Statistics, and a PhD in Mathematics, specialized in Computational Statistics, from University Paris VI Pierre and Marie Curie and Telecom ParisTech. He received the 2010 Savage Award in Theory and Methods from the International Society for Bayesian Analysis for his PhD work. |
Fei Fang · Carla Gomes · Miguel Luengo-Oroz · Thomas Dietterich · Julien Cornebise 🔗 |
Sat 5:50 p.m. - 6:00 p.m.
|
Open announcement and Best Papers/Posters Award link » | 🔗 |
Author Information
Fei Fang (Carnegie Mellon University)
Joseph Aylett-Bullock (UN Global Pulse and Durham University)
Joseph is a Research Associate at UN Global Pulse, an innovation initiative in the Executive Office of the UN Secretary-General to harness emerging technologies for humanitarian good. His research focuses on mathematical modelling and machine learning for crisis response and humanitarian development. His work has included: the development of an AI-based satellite image analysis tool for refugee camp mapping, damage detection and flood mapping; NLP applications to multilingual social media analysis and topic exploration; and an assessment of the political and social risks of automated text generation. Most recently, Joseph has been leading a team of academics and UN experts in modelling epidemic spread in refugee and IDP settlements. Joseph is also an Industry Research Associate at the RiskEcon Lab, part of the Courant Institute of Mathematical Sciences at New York University. He has worked with companies in the medical, utility, and insurance industries and spoken widely on the use of AI in the humanitarian and health sectors, as well as on Applied Ethics in Data Analytics.
Marc-Antoine Dilhac (Université de Montréal)
Brian Green (Santa Clara University)
Natalie Saltiel (MIT)
Dhaval Adjodah (MIT)
Dhaval is a 4th-year PhD candidate at the Media Lab doing research in AI and finance. He previously worked in finance after completing his master's in the MIT Technology Policy Program and his undergraduate degree in Physics, also at MIT. He is interested in understanding how to optimally organize networks of human and AI agents, and how large groups of people can sense new information and take action collectively. His work is relevant to improving deep reinforcement learning algorithms, improving financial trading, rewiring collaboration networks, crowd-sourcing, voting, and innovation. He grew up in Mauritius, and greatly enjoys cooking and running.

PhD thesis abstract: I first investigate the cognitive limits humans suffer from, the inductive biases we employ to overcome these limits, and how to build systems to improve our learning and decision-making in the face of these biases and limits. Secondly, I show how some of these inductive biases can be applied to enable modern machine learning to be more interpretable, robust and sample-efficient. Humans have been shown to employ various inductive biases in how they learn: we show preference for certain approaches to improve sample complexity and minimize communication cost. Previous work has documented in detail when such inductive biases fail. However, when learning is distributed or when data is sparse, such inductive biases can lead to improved learning.

In a first study, I observe that humans in a large online experiment make strong assumptions about the distributional properties of the data, and that they prefer to learn from other people's compressed estimates rather than from actual data. I then create a novel metric, a measure of social learning, to identify individuals who improve the accuracy of the group. In another domain, where hundreds of thousands of traders have to decide which other traders to learn strategies from to make their trades, I observe strong Dunbar cognitive limits. Specifically, I observe a tradeoff between the frequency of data updates and the number of people to follow, and test recommender bots that improve trader performance without increasing cognitive load.

Although inductive biases can lead to sub-optimal behavior, there is an increasing movement to imbue more inductive biases into modern machine learning to make it more sample-efficient, robust and interpretable. I show, for example, that the inductive bias of humans to use certain network topologies to perform decentralized optimization can be applied to modern deep learning: I observe strong improvements in large-scale deep reinforcement learning by forcing gradient updates to be communicated over synthetic optimized random graphs. In another project, I create a novel 'relational unit' and demonstrate that it strongly outperforms other state-of-the-art reinforcement approaches (MLP, multi-head attention, partitioned DQN) due to the imbued relational inductive bias of our model. I also observe that the relations learned are clearly interpretable. Such work not only provides tangible contributions to making human learning more accurate in more realistic settings and helping machine learning be more sample-efficient on harder problems, but also provides suggestions as to how to make human-AI collaboration more effective.
Jack Clark (OpenAI)
Sean McGregor (Syntiant and XPRIZE)

Sean McGregor is a machine learning PhD, founder of the Responsible AI Collaborative, lead technical consultant for the IBM Watson AI XPRIZE, and consulting researcher with the neural accelerator startup Syntiant. His current focus is the development of the AI Incident Database as an index of harms or near harms experienced in the real world, which builds on his experience in AI safety and interpretability for deep and reinforcement learning as applied to wildfire suppression policy, speech, and heliophysics. Outside his paid work, Sean's open source development work has earned media attention in the Atlantic, Der Spiegel, Mashable, Wired, Venture Beat, Vice, and O'Reilly while his technical publications have appeared in a variety of machine learning, HCI, ethics, and application-centered proceedings.
Margaux Luck (MILA)
Jonathan Penn (University of Cambridge)
Author, technologist, and historian. Interested in the societal implications of AI over time. PhD candidate in the History and Philosophy of Science Department at the University of Cambridge. Studies the history of AI in the twentieth century. Currently a visiting scholar at MIT. Prior Google Technology Policy Fellow and Assembly Fellow at the MIT Media Lab/Berkman Klein Center. Holds degrees from the University of Cambridge and McGill University.
Tristan Sylvain (MILA)
Geneviève Boucher (IRIC)
Sydney Swaine-Simon (District 3)
Girmaw Abebe Tadesse (University of Oxford)
Myriam Côté (Mila)
Anna Bethke (Intel)
Yoshua Bengio (Mila)
Yoshua Bengio is Full Professor in the computer science and operations research department at U. Montreal, scientific director and founder of Mila and of IVADO, 2018 Turing Award recipient, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. He pioneered deep learning, and in 2018 he received the most citations per day among all computer scientists worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada, was awarded the Killam Prize, the Marie-Victorin Prize and the Radio-Canada Scientist of the Year award in 2017, and is a member of the NeurIPS advisory board, co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, as well as to favour the development of AI for the benefit of all.
More from the Same Authors
-
2021 Spotlight: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization »
Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish -
2021 : Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning »
Nan Rosemary Ke · Aniket Didolkar · Sarthak Mittal · Anirudh Goyal · Guillaume Lajoie · Stefan Bauer · Danilo Jimenez Rezende · Yoshua Bengio · Chris Pal · Michael Mozer -
2021 : Deep Gaussian Processes for Preference Learning »
Rex Chen · Norman Sadeh · Fei Fang -
2021 : Long-Term Credit Assignment via Model-based Temporal Shortcuts »
Michel Ma · Pierluca D'Oro · Yoshua Bengio · Pierre-Luc Bacon -
2021 : A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning »
Mingde Zhao · Zhen Liu · Sitao Luan · Shuyuan Zhang · Doina Precup · Yoshua Bengio -
2021 : Effect of diversity in Meta-Learning »
Ramnath Kumar · Tristan Deleu · Yoshua Bengio -
2021 : Learning Neural Causal Models with Active Interventions »
Nino Scherrer · Olexa Bilaniuk · Yashas Annadani · Anirudh Goyal · Patrick Schwab · Bernhard Schölkopf · Michael Mozer · Yoshua Bengio · Stefan Bauer · Nan Rosemary Ke -
2021 : Multi-Domain Balanced Sampling Improves Out-of-Distribution Generalization of Chest X-ray Pathology Prediction Models »
Enoch Tetteh · David Krueger · Joseph Paul Cohen · Yoshua Bengio -
2022 Poster: Discrete Compositional Representations as an Abstraction for Goal Conditioned Reinforcement Learning »
Riashat Islam · Hongyu Zang · Anirudh Goyal · Alex Lamb · Kenji Kawaguchi · Xin Li · Romain Laroche · Yoshua Bengio · Remi Tachet des Combes -
2022 : Posterior samples of source galaxies in strong gravitational lenses with score-based priors »
Alexandre Adam · Adam Coogan · Nikolay Malkin · Ronan Legin · Laurence Perreault-Levasseur · Yashar Hezaveh · Yoshua Bengio -
2022 : Indexing AI Risks with Incidents, Issues, and Variants »
Sean McGregor · Kevin Paeth · Khoa Lam -
2022 : Designing Biological Sequences via Meta-Reinforcement Learning and Bayesian Optimization »
Leo Feng · Padideh Nouri · Aneri Muni · Yoshua Bengio · Pierre-Luc Bacon -
2022 : Bayesian Dynamic Causal Discovery »
Alexander Tong · Lazar Atanackovic · Jason Hartford · Yoshua Bengio -
2022 : Object-centric causal representation learning »
Amin Mansouri · Jason Hartford · Kartik Ahuja · Yoshua Bengio -
2022 : Equivariance with Learned Canonical Mappings »
Oumar Kaba · Arnab Mondal · Yan Zhang · Yoshua Bengio · Siamak Ravanbakhsh -
2022 : Interventional Causal Representation Learning »
Kartik Ahuja · Yixin Wang · Divyat Mahajan · Yoshua Bengio -
2022 : Multi-Objective GFlowNets »
Moksh Jain · Sharath Chandra Raparthy · Alex Hernandez-Garcia · Jarrid Rector-Brooks · Yoshua Bengio · Santiago Miret · Emmanuel Bengio -
2022 : PhAST: Physics-Aware, Scalable, and Task-specific GNNs for accelerated catalyst design »
ALEXANDRE DUVAL · Victor Schmidt · Alex Hernandez-Garcia · Santiago Miret · Yoshua Bengio · David Rolnick -
2022 : Efficient Queries Transformer Neural Processes »
Leo Feng · Hossein Hajimirsadeghi · Yoshua Bengio · Mohamed Osama Ahmed -
2022 : Rethinking Learning Dynamics in RL using Adversarial Networks »
Ramnath Kumar · Tristan Deleu · Yoshua Bengio -
2022 : Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions »
Chanakya Ekbote · Moksh Jain · Payel Das · Yoshua Bengio -
2022 : A Multi-Level Framework for the AI Alignment Problem »
Betty L Hou · Brian Green -
2022 : A General-Purpose Neural Architecture for Geospatial Systems »
Martin Weiss · Nasim Rahaman · Frederik Träuble · Francesco Locatello · Alexandre Lacoste · Yoshua Bengio · Erran Li Li · Chris Pal · Bernhard Schölkopf -
2023 Workshop: NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems »
Rasika Bhalerao · Mark Roth · Kai Jeggle · Jorge Montalvo Arvizu · Shiva Madadkhani · Yoshua Bengio -
2023 Workshop: AI for Science: from Theory to Practice »
Yuanqi Du · Max Welling · Yoshua Bengio · Marinka Zitnik · Carla Gomes · Jure Leskovec · Maria Brbic · Wenhao Gao · Kexin Huang · Ziming Liu · Rocío Mercado · Miles Cranmer · Shengchao Liu · Lijing Wang -
2023 Workshop: Computational Sustainability: Promises and Pitfalls from Theory to Deployment »
Suzanne Stathatos · Christopher Yeh · Laura Greenstreet · Tarun Sharma · Katelyn Morrison · Yuanqi Du · Chenlin Meng · Sherrie Wang · Fei Fang · Pietro Perona · Yoshua Bengio -
2022 Workshop: Tackling Climate Change with Machine Learning »
Peetak Mitra · Maria João Sousa · Mark Roth · Jan Drgona · Emma Strubell · Yoshua Bengio -
2022 Spotlight: Lightning Talks 2A-4 »
Sarthak Mittal · Richard Grumitt · Zuoyu Yan · Lihao Wang · Dongsheng Wang · Alexander Korotin · Jiangxin Sun · Ankit Gupta · Vage Egiazarian · Tengfei Ma · Yi Zhou · Yishi Xu · Albert Gu · Biwei Dai · Chunyu Wang · Yoshua Bengio · Uros Seljak · Miaoge Li · Guillaume Lajoie · Yiqun Wang · Liangcai Gao · Lingxiao Li · Jonathan Berant · Huang Hu · Xiaoqing Zheng · Zhibin Duan · Hanjiang Lai · Evgeny Burnaev · Zhi Tang · Zhi Jin · Xuanjing Huang · Chaojie Wang · Yusu Wang · Jian-Fang Hu · Bo Chen · Chao Chen · Hao Zhou · Mingyuan Zhou -
2022 Spotlight: Is a Modular Architecture Enough? »
Sarthak Mittal · Yoshua Bengio · Guillaume Lajoie -
2022 : Invited Keynote 1 »
Yoshua Bengio -
2022 : FL Games: A Federated Learning Framework for Distribution Shifts »
Sharut Gupta · Kartik Ahuja · Mohammad Havaei · Niladri Chatterjee · Yoshua Bengio -
2022 : Panel Discussion »
Cheng Zhang · Mihaela van der Schaar · Ilya Shpitser · Aapo Hyvarinen · Yoshua Bengio · Bernhard Schölkopf -
2022 Workshop: AI for Science: Progress and Promises »
Yi Ding · Yuanqi Du · Tianfan Fu · Hanchen Wang · Anima Anandkumar · Yoshua Bengio · Anthony Gitter · Carla Gomes · Aviv Regev · Max Welling · Marinka Zitnik -
2022 Poster: Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints »
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi · Yoshua Bengio · Simon Lacoste-Julien -
2022 Poster: MAgNet: Mesh Agnostic Neural PDE Solver »
Oussama Boussif · Yoshua Bengio · Loubna Benabbou · Dan Assouline -
2022 Poster: Neural Attentive Circuits »
Martin Weiss · Nasim Rahaman · Francesco Locatello · Chris Pal · Yoshua Bengio · Bernhard Schölkopf · Erran Li Li · Nicolas Ballas -
2022 Poster: Weakly Supervised Representation Learning with Sparse Perturbations »
Kartik Ahuja · Jason Hartford · Yoshua Bengio -
2022 Poster: Trajectory balance: Improved credit assignment in GFlowNets »
Nikolay Malkin · Moksh Jain · Emmanuel Bengio · Chen Sun · Yoshua Bengio -
2022 Poster: Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning »
Aniket Didolkar · Kshitij Gupta · Anirudh Goyal · Nitesh Bharadwaj Gundavarapu · Alex Lamb · Nan Rosemary Ke · Yoshua Bengio -
2022 Poster: Is a Modular Architecture Enough? »
Sarthak Mittal · Yoshua Bengio · Guillaume Lajoie -
2022 : Keynote talk: A Deep Learning Journey »
Yoshua Bengio -
2021 : (Live) Panel Discussion: Cooperative AI »
Kalesha Bullard · Allan Dafoe · Fei Fang · Chris Amato · Elizabeth M. Adams -
2021 : Live Q&A Session 2 with Susan Athey, Yoshua Bengio, Sujeeth Bharadwaj, Jane Wang, Joshua Vogelstein, Weiwei Yang »
Susan Athey · Yoshua Bengio · Sujeeth Bharadwaj · Jane Wang · Weiwei Yang · Joshua T Vogelstein -
2021 : Live Q&A Session 1 with Yoshua Bengio, Leyla Isik, Konrad Kording, Bernhard Scholkopf, Amit Sharma, Joshua Vogelstein, Weiwei Yang »
Yoshua Bengio · Leyla Isik · Konrad Kording · Bernhard Schölkopf · Joshua T Vogelstein · Weiwei Yang -
2021 Workshop: Tackling Climate Change with Machine Learning »
Maria João Sousa · Hari Prasanna Das · Sally Simone Fobi · Jan Drgona · Tegan Maharaj · Yoshua Bengio -
2021 : General Discussion 1 - What is out of distribution (OOD) generalization and why is it important? with Yoshua Bengio, Leyla Isik, Max Welling »
Yoshua Bengio · Leyla Isik · Max Welling · Joshua T Vogelstein · Weiwei Yang -
2021 : AI X Discovery »
Yoshua Bengio -
2021 : Panel Discussion 2 »
Susan L Epstein · Yoshua Bengio · Lucina Uddin · Rohan Paul · Steve Fleming -
2021 : Desiderata and ML Research Programme for Higher-Level Cognition »
Yoshua Bengio -
2021 Workshop: Causal Inference & Machine Learning: Why now? »
Elias Bareinboim · Bernhard Schölkopf · Terrence Sejnowski · Yoshua Bengio · Judea Pearl -
2021 Poster: Dynamic Inference with Neural Interpreters »
Nasim Rahaman · Muhammad Waleed Gondal · Shruti Joshi · Peter Gehler · Yoshua Bengio · Francesco Locatello · Bernhard Schölkopf -
2021 Poster: Gradient Starvation: A Learning Proclivity in Neural Networks »
Mohammad Pezeshki · Oumar Kaba · Yoshua Bengio · Aaron Courville · Doina Precup · Guillaume Lajoie -
2021 Poster: A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning »
Mingde Zhao · Zhen Liu · Sitao Luan · Shuyuan Zhang · Doina Precup · Yoshua Bengio -
2021 Poster: Neural Production Systems »
Anirudh Goyal · Aniket Didolkar · Nan Rosemary Ke · Charles Blundell · Philippe Beaudoin · Nicolas Heess · Michael Mozer · Yoshua Bengio -
2021 Poster: Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation »
Emmanuel Bengio · Moksh Jain · Maksym Korablyov · Doina Precup · Yoshua Bengio -
2021 Poster: The Causal-Neural Connection: Expressiveness, Learnability, and Inference »
Kevin Xia · Kai-Zhan Lee · Yoshua Bengio · Elias Bareinboim -
2021 Poster: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization »
Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish -
2021 Poster: Discrete-Valued Neural Communication »
Dianbo Liu · Alex Lamb · Kenji Kawaguchi · Anirudh Goyal · Chen Sun · Michael Mozer · Yoshua Bengio -
2020 : Panel discussion 2 »
Danielle S Bassett · Yoshua Bengio · Cristina Savin · David Duvenaud · Anna Choromanska · Yanping Huang -
2020 : Invited Talk Yoshua Bengio »
Yoshua Bengio -
2020 : Invited Talk #7 »
Yoshua Bengio -
2020 : Panel #1 »
Yoshua Bengio · Daniel Kahneman · Henry Kautz · Luis Lamb · Gary Marcus · Francesca Rossi -
2020 : Yoshua Bengio - Incentives for Researchers »
Yoshua Bengio -
2020 Workshop: Tackling Climate Change with ML »
David Dao · Evan Sherwin · Priya Donti · Lauren Kuntz · Lynn Kaack · Yumna Yusuf · David Rolnick · Catherine Nakalembe · Claire Monteleoni · Yoshua Bengio -
2020 Poster: Untangling tradeoffs between recurrence and self-attention in artificial neural networks »
Giancarlo Kerg · Bhargav Kanuparthi · Anirudh Goyal · Kyle Goyette · Yoshua Bengio · Guillaume Lajoie -
2020 Poster: Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling »
Tong Che · Ruixiang ZHANG · Jascha Sohl-Dickstein · Hugo Larochelle · Liam Paull · Yuan Cao · Yoshua Bengio -
2020 Poster: Hybrid Models for Learning to Branch »
Prateek Gupta · Maxime Gasse · Elias Khalil · Pawan K Mudigonda · Andrea Lodi · Yoshua Bengio -
2020 Poster: Language Models are Few-Shot Learners »
Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei -
2020 Oral: Language Models are Few-Shot Learners »
Tom B Brown · Benjamin Mann · Nick Ryder · Melanie Subbiah · Jared Kaplan · Prafulla Dhariwal · Arvind Neelakantan · Pranav Shyam · Girish Sastry · Amanda Askell · Sandhini Agarwal · Ariel Herbert-Voss · Gretchen M Krueger · Tom Henighan · Rewon Child · Aditya Ramesh · Daniel Ziegler · Jeffrey Wu · Clemens Winter · Chris Hesse · Mark Chen · Eric Sigler · Mateusz Litwin · Scott Gray · Benjamin Chess · Jack Clark · Christopher Berner · Sam McCandlish · Alec Radford · Ilya Sutskever · Dario Amodei -
2019 : Panel Session: A new hope for neuroscience »
Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli -
2019 : AI and Sustainable Development »
Fei Fang · Carla Gomes · Miguel Luengo-Oroz · Thomas Dietterich · Julien Cornebise -
2019 : Implementing Responsible AI »
Brian Green · Wendell Wallach · Patrick Lin · Nenad Tomasev · Jingying Yang · Libby Kinsey -
2019 : Yoshua Bengio - Towards compositional understanding of the world by agent-based deep learning »
Yoshua Bengio -
2019 : Invited talk: Fei Fang (CMU) »
Fei Fang -
2019 : Lunch Break and Posters »
Xingyou Song · Elad Hoffer · Wei-Cheng Chang · Jeremy Cohen · Jyoti Islam · Yaniv Blumenfeld · Andreas Madsen · Jonathan Frankle · Sebastian Goldt · Satrajit Chatterjee · Abhishek Panigrahi · Alex Renda · Brian Bartoldson · Israel Birhane · Aristide Baratin · Niladri Chatterji · Roman Novak · Jessica Forde · YiDing Jiang · Yilun Du · Linara Adilova · Michael Kamp · Berry Weinstein · Itay Hubara · Tal Ben-Nun · Torsten Hoefler · Daniel Soudry · Hsiang-Fu Yu · Kai Zhong · Yiming Yang · Inderjit Dhillon · Jaime Carbonell · Yanqing Zhang · Dar Gilboa · Johannes Brandstetter · Alexander R Johansen · Gintare Karolina Dziugaite · Raghav Somani · Ari Morcos · Freddie Kalaitzis · Hanie Sedghi · Lechao Xiao · John Zech · Muqiao Yang · Simran Kaur · Qianli Ma · Yao-Hung Hubert Tsai · Ruslan Salakhutdinov · Sho Yaida · Zachary Lipton · Daniel Roy · Michael Carbin · Florent Krzakala · Lenka Zdeborová · Guy Gur-Ari · Ethan Dyer · Dilip Krishnan · Hossein Mobahi · Samy Bengio · Behnam Neyshabur · Praneeth Netrapalli · Kris Sankaran · Julien Cornebise · Yoshua Bengio · Vincent Michalski · Samira Ebrahimi Kahou · Md Rifat Arefin · Jiri Hron · Jaehoon Lee · Jascha Sohl-Dickstein · Samuel Schoenholz · David Schwab · Dongyu Li · Sang Keun Choe · Henning Petzka · Ashish Verma · Zhichao Lin · Cristian Sminchisescu -
2019 : Climate Change: A Grand Challenge for ML »
Yoshua Bengio · Carla Gomes · Andrew Ng · Jeff Dean · Lester Mackey -
2019 : Towards a Social Good? Theories of Change in AI »
Natalie Saltiel · Rashida Richardson · Sarah T. Hamid -
2019 Workshop: Tackling Climate Change with ML »
David Rolnick · Priya Donti · Lynn Kaack · Alexandre Lacoste · Tegan Maharaj · Andrew Ng · John Platt · Jennifer Chayes · Yoshua Bengio -
2019 : Opening remarks »
Yoshua Bengio -
2019 : Poster Session »
Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · Shiyu Liu · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis -
2019 : Approaches to Understanding AI »
Yoshua Bengio · Roel Dobbe · Madeleine Elish · Joshua Kroll · Jacob Metcalf · Jack Poulson -
2019 : Invited Talk »
Yoshua Bengio -
2019 Workshop: Retrospectives: A Venue for Self-Reflection in ML Research »
Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier -
2019 Poster: How to Initialize your Network? Robust Initialization for WeightNorm & ResNets »
Devansh Arpit · Víctor Campos · Yoshua Bengio -
2019 Poster: Wasserstein Dependency Measure for Representation Learning »
Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet -
2019 Poster: Unsupervised State Representation Learning in Atari »
Ankesh Anand · Evan Racah · Sherjil Ozair · Yoshua Bengio · Marc-Alexandre Côté · R Devon Hjelm -
2019 Poster: Variational Temporal Abstraction »
Taesup Kim · Sungjin Ahn · Yoshua Bengio -
2019 Poster: Gradient based sample selection for online continual learning »
Rahaf Aljundi · Min Lin · Baptiste Goujaud · Yoshua Bengio -
2019 Poster: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis »
Kundan Kumar · Rithesh Kumar · Thibault de Boissiere · Lucas Gestin · Wei Zhen Teoh · Jose Sotelo · Alexandre de Brébisson · Yoshua Bengio · Aaron Courville -
2019 Invited Talk: From System 1 Deep Learning to System 2 Deep Learning »
Yoshua Bengio -
2019 Poster: On Adversarial Mixup Resynthesis »
Christopher Beckham · Sina Honari · Alex Lamb · Vikas Verma · Farnoosh Ghadiri · R Devon Hjelm · Yoshua Bengio · Chris Pal -
2019 Poster: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input »
Maxence Ernoult · Julie Grollier · Damien Querlioz · Yoshua Bengio · Benjamin Scellier -
2019 Poster: Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics »
Giancarlo Kerg · Kyle Goyette · Maximilian Puelma Touzel · Gauthier Gidel · Eugene Vorontsov · Yoshua Bengio · Guillaume Lajoie -
2019 Oral: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input »
Maxence Ernoult · Julie Grollier · Damien Querlioz · Yoshua Bengio · Benjamin Scellier -
2018 : Exploiting data and human knowledge for predicting wildlife poaching »
Fei Fang -
2018 : Rural Infrastructure Health Monitoring System: Using AI to Increase Rural Water Supply Reliability »
Girmaw Abebe Tadesse -
2018 : Opening remarks »
Yoshua Bengio -
2018 Workshop: AI for social good »
Margaux Luck · Tristan Sylvain · Joseph Paul Cohen · Arsene Fansi Tchango · Valentine Goddard · Aurelie Helouis · Yoshua Bengio · Sam Greydanus · Cody Wild · Taras Kucherenko · Arya Farahi · Jonathan Penn · Sean McGregor · Mark Crowley · Abhishek Gupta · Kenny Chen · Myriam Côté · Rediet Abebe -
2018 Poster: Image-to-image translation for cross-domain disentanglement »
Abel Gonzalez-Garcia · Joost van de Weijer · Yoshua Bengio -
2018 Poster: MetaGAN: An Adversarial Approach to Few-Shot Learning »
Ruixiang ZHANG · Tong Che · Zoubin Ghahramani · Yoshua Bengio · Yangqiu Song -
2018 Poster: Bayesian Model-Agnostic Meta-Learning »
Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn -
2018 Poster: Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding »
Nan Rosemary Ke · Anirudh Goyal · Olexa Bilaniuk · Jonathan Binas · Michael Mozer · Chris Pal · Yoshua Bengio -
2018 Spotlight: Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding »
Nan Rosemary Ke · Anirudh Goyal · Olexa Bilaniuk · Jonathan Binas · Michael Mozer · Chris Pal · Yoshua Bengio -
2018 Spotlight: Bayesian Model-Agnostic Meta-Learning »
Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn -
2018 Poster: Dendritic cortical microcircuits approximate the backpropagation algorithm »
João Sacramento · Rui Ponte Costa · Yoshua Bengio · Walter Senn -
2018 Oral: Dendritic cortical microcircuits approximate the backpropagation algorithm »
João Sacramento · Rui Ponte Costa · Yoshua Bengio · Walter Senn -
2017 : Yoshua Bengio »
Yoshua Bengio -
2017 : From deep learning of disentangled representations to higher-level cognition »
Yoshua Bengio -
2017 : More Steps towards Biologically Plausible Backprop »
Yoshua Bengio -
2017 : A3T: Adversarially Augmented Adversarial Training »
Aristide Baratin · Simon Lacoste-Julien · Yoshua Bengio · Akram Erraqabi -
2017 : Competition III: The Conversational Intelligence Challenge »
Mikhail Burtsev · Ryan Lowe · Iulian Vlad Serban · Yoshua Bengio · Alexander Rudnicky · Alan W Black · Shrimai Prabhumoye · Artem Rodichev · Nikita Smetanin · Denis Fedorenko · CheongAn Lee · EUNMI HONG · Hwaran Lee · Geonmin Kim · Nicolas Gontier · Atsushi Saito · Andrey Gershfeld · Artem Burachenok -
2017 Poster: Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net »
Anirudh Goyal · Nan Rosemary Ke · Surya Ganguli · Yoshua Bengio -
2017 Demonstration: A Deep Reinforcement Learning Chatbot »
Iulian Vlad Serban · Chinnadhurai Sankar · Mathieu Germain · Saizheng Zhang · Zhouhan Lin · Sandeep Subramanian · Taesup Kim · Michael Pieper · Sarath Chandar · Nan Rosemary Ke · Sai Rajeswar Mudumba · Alexandre de Brébisson · Jose Sotelo · Dendi A Suhubdy · Vincent Michalski · Joelle Pineau · Yoshua Bengio -
2017 Poster: GibbsNet: Iterative Adversarial Inference for Deep Graphical Models »
Alex Lamb · R Devon Hjelm · Yaroslav Ganin · Joseph Paul Cohen · Aaron Courville · Yoshua Bengio -
2017 Poster: Plan, Attend, Generate: Planning for Sequence-to-Sequence Models »
Caglar Gulcehre · Francis Dutil · Adam Trischler · Yoshua Bengio -
2017 Poster: Z-Forcing: Training Stochastic Recurrent Networks »
Anirudh Goyal · Alessandro Sordoni · Marc-Alexandre Côté · Nan Rosemary Ke · Yoshua Bengio -
2016 : Yoshua Bengio – Credit assignment: beyond backpropagation »
Yoshua Bengio -
2016 : From Brains to Bits and Back Again »
Yoshua Bengio · Terrence Sejnowski · Christos H Papadimitriou · Jakob H Macke · Demis Hassabis · Alyson Fletcher · Andreas Tolias · Jascha Sohl-Dickstein · Konrad P Koerding -
2016 : Yoshua Bengio : Toward Biologically Plausible Deep Learning »
Yoshua Bengio -
2016 : Panel on "Explainable AI" (Yoshua Bengio, Alessio Lomuscio, Gary Marcus, Stephen Muggleton, Michael Witbrock) »
Yoshua Bengio · Alessio Lomuscio · Gary Marcus · Stephen H Muggleton · Michael Witbrock -
2016 : Yoshua Bengio: From Training Low Precision Neural Nets to Training Analog Continuous-Time Machines »
Yoshua Bengio -
2016 Symposium: Deep Learning Symposium »
Yoshua Bengio · Yann LeCun · Navdeep Jaitly · Roger Grosse -
2016 Poster: Architectural Complexity Measures of Recurrent Neural Networks »
Saizheng Zhang · Yuhuai Wu · Tong Che · Zhouhan Lin · Roland Memisevic · Russ Salakhutdinov · Yoshua Bengio -
2016 Poster: Professor Forcing: A New Algorithm for Training Recurrent Networks »
Alex M Lamb · Anirudh Goyal · Ying Zhang · Saizheng Zhang · Aaron Courville · Yoshua Bengio -
2016 Poster: On Multiplicative Integration with Recurrent Neural Networks »
Yuhuai Wu · Saizheng Zhang · Ying Zhang · Yoshua Bengio · Russ Salakhutdinov -
2016 Poster: Binarized Neural Networks »
Itay Hubara · Matthieu Courbariaux · Daniel Soudry · Ran El-Yaniv · Yoshua Bengio -
2015 : RL for DL »
Yoshua Bengio -
2015 : Learning Representations for Unsupervised and Transfer Learning »
Yoshua Bengio -
2015 Symposium: Deep Learning Symposium »
Yoshua Bengio · Marc'Aurelio Ranzato · Honglak Lee · Max Welling · Andrew Y Ng -
2015 Poster: Attention-Based Models for Speech Recognition »
Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio -
2015 Poster: Equilibrated adaptive learning rates for non-convex optimization »
Yann Dauphin · Harm de Vries · Yoshua Bengio -
2015 Spotlight: Equilibrated adaptive learning rates for non-convex optimization »
Yann Dauphin · Harm de Vries · Yoshua Bengio -
2015 Spotlight: Attention-Based Models for Speech Recognition »
Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio -
2015 Poster: A Recurrent Latent Variable Model for Sequential Data »
Junyoung Chung · Kyle Kastner · Laurent Dinh · Kratarth Goel · Aaron Courville · Yoshua Bengio -
2015 Poster: BinaryConnect: Training Deep Neural Networks with binary weights during propagations »
Matthieu Courbariaux · Yoshua Bengio · Jean-Pierre David -
2015 Tutorial: Deep Learning »
Geoffrey E Hinton · Yoshua Bengio · Yann LeCun -
2014 Workshop: Second Workshop on Transfer and Multi-Task Learning: Theory meets Practice »
Urun Dogan · Tatiana Tommasi · Yoshua Bengio · Francesco Orabona · Marius Kloft · Andres Munoz · Gunnar Rätsch · Hal Daumé III · Mehryar Mohri · Xuezhi Wang · Daniel Hernández-Lobato · Song Liu · Thomas Unterthiner · Pascal Germain · Vinay P Namboodiri · Michael Goetz · Christopher Berlind · Sigurd Spieckermann · Marta Soare · Yujia Li · Vitaly Kuznetsov · Wenzhao Lian · Daniele Calandriello · Emilie Morvant -
2014 Workshop: Deep Learning and Representation Learning »
Andrew Y Ng · Yoshua Bengio · Adam Coates · Roland Memisevic · Sharanyan Chetlur · Geoffrey E Hinton · Shamim Nemati · Bryan Catanzaro · Surya Ganguli · Herbert Jaeger · Phil Blunsom · Leon Bottou · Volodymyr Mnih · Chen-Yu Lee · Rich M Schwartz -
2014 Workshop: OPT2014: Optimization for Machine Learning »
Zaid Harchaoui · Suvrit Sra · Alekh Agarwal · Martin Jaggi · Miro Dudik · Aaditya Ramdas · Jean Lasserre · Yoshua Bengio · Amir Beck -
2014 Poster: How transferable are features in deep neural networks? »
Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson -
2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization »
Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio -
2014 Poster: Generative Adversarial Nets »
Ian Goodfellow · Jean Pouget-Abadie · Mehdi Mirza · Bing Xu · David Warde-Farley · Sherjil Ozair · Aaron Courville · Yoshua Bengio -
2014 Poster: On the Number of Linear Regions of Deep Neural Networks »
Guido F Montufar · Razvan Pascanu · Kyunghyun Cho · Yoshua Bengio -
2014 Demonstration: Neural Machine Translation »
Bart van Merriënboer · Kyunghyun Cho · Dzmitry Bahdanau · Yoshua Bengio -
2014 Oral: How transferable are features in deep neural networks? »
Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson -
2014 Poster: Iterative Neural Autoregressive Distribution Estimator NADE-k »
Tapani Raiko · Yao Li · Kyunghyun Cho · Yoshua Bengio -
2013 Workshop: Deep Learning »
Yoshua Bengio · Hugo Larochelle · Russ Salakhutdinov · Tomas Mikolov · Matthew D Zeiler · David Mcallester · Nando de Freitas · Josh Tenenbaum · Jian Zhou · Volodymyr Mnih -
2013 Workshop: Output Representation Learning »
Yuhong Guo · Dale Schuurmans · Richard Zemel · Samy Bengio · Yoshua Bengio · Li Deng · Dan Roth · Kilian Q Weinberger · Jason Weston · Kihyuk Sohn · Florent Perronnin · Gabriel Synnaeve · Pablo R Strasser · Julien Audiffren · Carlo Ciliberto · Dan Goldwasser -
2013 Poster: Multi-Prediction Deep Boltzmann Machines »
Ian Goodfellow · Mehdi Mirza · Aaron Courville · Yoshua Bengio -
2013 Poster: Generalized Denoising Auto-Encoders as Generative Models »
Yoshua Bengio · Li Yao · Guillaume Alain · Pascal Vincent -
2013 Poster: Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs »
Yann Dauphin · Yoshua Bengio -
2012 Workshop: Deep Learning and Unsupervised Feature Learning »
Yoshua Bengio · James Bergstra · Quoc V. Le -
2011 Workshop: Big Learning: Algorithms, Systems, and Tools for Learning at Scale »
Joseph E Gonzalez · Sameer Singh · Graham Taylor · James Bergstra · Alice Zheng · Misha Bilenko · Yucheng Low · Yoshua Bengio · Michael Franklin · Carlos Guestrin · Andrew McCallum · Alexander Smola · Michael Jordan · Sugato Basu -
2011 Workshop: Deep Learning and Unsupervised Feature Learning »
Yoshua Bengio · Adam Coates · Yann LeCun · Nicolas Le Roux · Andrew Y Ng -
2011 Oral: The Manifold Tangent Classifier »
Salah Rifai · Yann N Dauphin · Pascal Vincent · Yoshua Bengio · Xavier Muller -
2011 Poster: Shallow vs. Deep Sum-Product Networks »
Olivier Delalleau · Yoshua Bengio -
2011 Poster: The Manifold Tangent Classifier »
Salah Rifai · Yann N Dauphin · Pascal Vincent · Yoshua Bengio · Xavier Muller -
2011 Poster: Algorithms for Hyper-Parameter Optimization »
James Bergstra · Rémi Bardenet · Yoshua Bengio · Balázs Kégl -
2011 Poster: On Tracking The Partition Function »
Guillaume Desjardins · Aaron Courville · Yoshua Bengio -
2010 Workshop: Deep Learning and Unsupervised Feature Learning »
Honglak Lee · Marc'Aurelio Ranzato · Yoshua Bengio · Geoffrey E Hinton · Yann LeCun · Andrew Y Ng -
2009 Poster: Slow, Decorrelated Features for Pretraining Complex Cell-like Networks »
James Bergstra · Yoshua Bengio -
2009 Poster: An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism »
Aaron Courville · Douglas Eck · Yoshua Bengio -
2009 Session: Debate on Future Publication Models for the NIPS Community »
Yoshua Bengio -
2007 Poster: Augmented Functional Time Series Representation and Forecasting with Gaussian Processes »
Nicolas Chapados · Yoshua Bengio -
2007 Poster: Learning the 2-D Topology of Images »
Nicolas Le Roux · Yoshua Bengio · Pascal Lamblin · Marc Joliveau · Balázs Kégl -
2007 Spotlight: Augmented Functional Time Series Representation and Forecasting with Gaussian Processes »
Nicolas Chapados · Yoshua Bengio -
2007 Poster: Topmoumoute Online Natural Gradient Algorithm »
Nicolas Le Roux · Pierre-Antoine Manzagol · Yoshua Bengio -
2006 Poster: Greedy Layer-Wise Training of Deep Networks »
Yoshua Bengio · Pascal Lamblin · Dan Popovici · Hugo Larochelle -
2006 Talk: Greedy Layer-Wise Training of Deep Networks »
Yoshua Bengio · Pascal Lamblin · Dan Popovici · Hugo Larochelle