Competition
ROAD-R 2023: the Road Event Detection with Requirements Challenge
Eleonora Giunchiglia · Mihaela C. Stoian · Salman Khan · Reza Javanmard alitappeh · Izzeddin A M Teeti · Adrian Paschke · Fabio Cuzzolin · Thomas Lukasiewicz
Room 353
In recent years, there has been increasing interest in exploiting readily available background knowledge to obtain neural models that (i) are able to learn from less data, and/or (ii) are guaranteed to be compliant with the background knowledge, which corresponds to requirements on the model. In this challenge, we focus on the autonomous driving domain and provide our participants with the recently proposed ROAD-R dataset, which consists of 22 long videos annotated with road events, together with a set of requirements expressing well-known facts about the world (e.g., “a traffic light cannot be red and green at the same time”). The participants will face two challenging tasks. In the first, they will have to develop the best-performing model with only a subset of the annotated data, which in turn encourages them to exploit the requirements to facilitate training on the unlabelled portion of the dataset. In the second, we ask them to create systems whose predictions are compliant with the requirements. This is the first competition addressing the open questions: (i) if limited annotated data is available, is background knowledge useful to obtain good performance, and if so, how can it be injected into deep learning models? And (ii) how can we design effective deep-learning-based systems that are compliant with a set of requirements? As a consequence, this challenge is expected to bring together people from different communities, especially those interested in the general topic of Safe AI and in the autonomous driving application domain, as well as researchers working on neuro-symbolic AI, semi-supervised learning, and action recognition.
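To make the notion of requirement-compliant predictions concrete, here is a minimal sketch of how a mutual-exclusion requirement such as “a traffic light cannot be red and green at the same time” could be enforced as a post-processing step on multi-label scores. The label names, threshold, and tie-breaking rule are illustrative assumptions, not the ROAD-R specification or any participant's method.

```python
# Hypothetical post-processing sketch: enforce mutual-exclusion requirements
# on multi-label prediction scores. Labels and threshold are illustrative.

THRESHOLD = 0.5

# Each requirement lists a group of labels that may not be active together.
MUTUAL_EXCLUSION = [("traffic-light-red", "traffic-light-green")]

def enforce_requirements(scores):
    """Given a dict of label -> probability, keep only the highest-scoring
    label in each mutually exclusive group and zero out the others."""
    fixed = dict(scores)
    for group in MUTUAL_EXCLUSION:
        active = [label for label in group if fixed[label] > THRESHOLD]
        if len(active) > 1:  # requirement violated: resolve the conflict
            keep = max(active, key=lambda label: fixed[label])
            for label in active:
                if label != keep:
                    fixed[label] = 0.0
    return fixed
```

A real system would more likely enforce such constraints during training (e.g., via a logic-based loss) or inside the decoding step, but the sketch shows the basic shape of turning a stated requirement into a guarantee on the output.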
Schedule
Fri 11:30 a.m. - 11:45 a.m. | Challenge Overview (Presentation) | Eleonora Giunchiglia
Fri 11:45 a.m. - 12:15 p.m. | Invited Talk: Sustainable AI (Presentation) | Artur Garcez

Despite all of AI's recent success, various amusing examples of AI failure have become popular on the internet, whether it is a misbehaving self-driving car or a hallucinating large language model. This is not good for the progress of the field, with various stakeholders now calling for a pause in frontier AI development until it becomes trustworthy, fair and reliable. A common thread across many of the failure cases points to an inability of current AI to handle exceptions. The so-called out-of-distribution and multi-hop problems are another manifestation of the same limitation. Purely data-driven AI, also known as machine learning, could be said to be incapable of handling exceptions by its very definition as the improvement of generalization performance from examples. By contrast, neurosymbolic AI combines neural network learning with knowledge representation and reasoning to address the above issues of trust, fairness and reliability. In neurosymbolic AI, exceptions can be expressed in a formally defined logical language, and expert analysis of trained networks together with intervention can produce compact representations satisfying logical constraints. In this talk, I will review recent progress in neurosymbolic AI and earlier theoretical results. I will argue that, to be sustainable in the broad sense of the word, AI will need to: (1) learn compressed models from fewer examples, incorporating general rules and exceptions; (2) produce descriptions of what has been learned, allowing validation of results and sound reasoning; (3) enable direct model user intervention without the need for reinforcement learning with human feedback. In the next five years, neurosymbolic AI is expected to scale up to enable AI that can not only answer but also ask questions, make conjectures and check its understanding, towards trustworthy, reliable and safer AI.
Fri 12:15 p.m. - 12:25 p.m. | Task 1: 1st-Placed Team Talk (Presentation)
Fri 12:25 p.m. - 12:35 p.m. | Task 1: 2nd-Placed Team Talk (Presentation)
Fri 12:35 p.m. - 12:45 p.m. | Task 1: 3rd-Placed Team Talk (Presentation)
Fri 12:45 p.m. - 1:15 p.m. | Invited Talk: Efficient and Scalable Behavior Models (Presentation) | Rami Al-Rfou
Fri 1:15 p.m. - 1:25 p.m. | Task 2: 1st-Placed Team Talk (Presentation)
Fri 1:25 p.m. - 1:35 p.m. | Task 2: 2nd-Placed Team Talk (Presentation)
Fri 1:35 p.m. - 1:45 p.m. | Task 2: 3rd-Placed Team Talk (Presentation)
Fri 1:45 p.m. - 2:15 p.m. | Invited Talk: Neuro-Symbolic AI with Tractable Deep Generative Models (Presentation) | Guy Van den Broeck

This talk will overview recent developments in combining symbolic reasoning algorithms with deep generative models. We will use probabilistic circuits as the architecture that bridges learning and reasoning. These circuits represent joint distributions as deep computation graphs. They move beyond other deep generative models and probabilistic graphical models by guaranteeing tractable exact probabilistic and logical inference for certain classes of queries: marginal probabilities, symbolic conditioning, expectations, entropies, causal effects, etc. Probabilistic circuit models are now also effectively learned from data at scale, and achieve state-of-the-art results in constrained sampling from both language models and natural image distributions, as well as other neuro-symbolic tasks.
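To give a flavour of the tractable inference the talk describes, here is a toy probabilistic circuit over two binary variables: a sum node mixing two product nodes over independent leaves. Marginal queries are answered in a single bottom-up pass by evaluating marginalized leaves as 1. The structure and all parameters below are invented for illustration; they do not come from the talk or any real system.

```python
# Toy smooth, decomposable probabilistic circuit:
#   P(A, B) = 0.4 * P1(A) P1(B) + 0.6 * P2(A) P2(B)
# All numbers are illustrative.

LEAVES = {
    # leaf name -> (variable, P(variable = 1))
    "A1": ("A", 0.9), "B1": ("B", 0.2),
    "A2": ("A", 0.1), "B2": ("B", 0.8),
}

def leaf_value(name, evidence):
    """Likelihood of a leaf given (possibly partial) evidence.
    A marginalized variable contributes 1, which is what makes
    marginal queries a single bottom-up evaluation."""
    var, p_true = LEAVES[name]
    if evidence.get(var) is None:
        return 1.0
    return p_true if evidence[var] == 1 else 1.0 - p_true

def circuit(evidence):
    """One bottom-up pass: a sum node over two product nodes."""
    prod1 = leaf_value("A1", evidence) * leaf_value("B1", evidence)
    prod2 = leaf_value("A2", evidence) * leaf_value("B2", evidence)
    return 0.4 * prod1 + 0.6 * prod2
```

For example, `circuit({"A": 1})` computes the exact marginal P(A = 1) without enumerating assignments to B, and the same pass structure scales to circuits with millions of nodes, which is the source of the tractability guarantees mentioned in the abstract.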
Fri 2:15 p.m. - 2:30 p.m. | Closing Remarks (Presentation) | Eleonora Giunchiglia