Competition: AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale Wed 7 Dec 07:00 a.m.
As more areas beyond the traditional AI domains (e.g., computer vision and natural language processing) seek to take advantage of data-driven tools, the need for ML systems that can adapt to a wide range of downstream tasks in an efficient and automatic way continues to grow. The AutoML Decathlon competition aims to catalyze research in this area and establish a benchmark for the current state of automated machine learning. Unlike previous challenges that focus on a single class of methods, such as non-deep-learning AutoML, hyperparameter optimization, or meta-learning, this competition proposes to (1) evaluate automation on a diverse set of small- and large-scale tasks, and (2) allow the incorporation of the latest methods, such as neural architecture search and unsupervised pretraining. To this end, we curate 20 datasets that represent a broad spectrum of practical applications in scientific, technological, and industrial domains. Participants are given a set of 10 development tasks selected from these datasets and are required to develop automated programs that perform well on as many problems as possible and generalize to the remaining private test tasks. To ensure efficiency, the evaluation will be conducted under a fixed computational budget. To ensure robustness, the performance profiles methodology is used to determine the winners. The organizers will provide computational resources to the participants as needed and monetary prizes to the winners.
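For concreteness, here is a minimal sketch of how Dolan-Moré performance profiles aggregate results across tasks; the competition's exact scoring rules are defined by the organizers, and the `scores` matrix and tau grid below are purely illustrative.

```python
import numpy as np

def performance_profile(scores: np.ndarray, taus: np.ndarray) -> np.ndarray:
    """Dolan-More performance profiles.

    scores[p, s] is the loss of submission s on task p (lower is better).
    Returns rho[s, i], the fraction of tasks on which submission s is within
    a factor taus[i] of the best submission for that task.
    """
    best = scores.min(axis=1, keepdims=True)  # best loss per task
    ratios = scores / best                    # performance ratios r_{p,s} >= 1
    return np.stack([(ratios <= tau).mean(axis=0) for tau in taus], axis=1)

# Toy example: 3 submissions on 4 tasks.
scores = np.array([[1.0, 1.2, 2.0],
                   [0.5, 0.6, 0.5],
                   [3.0, 2.9, 3.5],
                   [1.1, 1.0, 1.4]])
taus = np.array([1.0, 1.1, 1.5, 2.0])
print(performance_profile(scores, taus))
```

A submission whose profile dominates the others across the tau grid is robustly good on many tasks rather than excellent on only a few.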
Competition: Multimodal Single-Cell Integration Across Time, Individuals, and Batches Wed 7 Dec 07:00 a.m.
In this workshop, we will hear presentations from winners and competitors in the Multimodal Single-Cell Integration Challenge. For more information about the competition, see our competition page: https://www.kaggle.com/competitions/open-problems-multimodal/
Schedule - all times UTC
| Presentation | Start | Stop | Team | Presenter |
| --- | --- | --- | --- | --- |
| Competition Overview | 13:00 | 13:10 | Hosts | Daniel Burkhardt |
| First place winner | 13:10 | 13:30 | Shuji Suzuki | Shuji Suzuki |
| Third place winner | 13:30 | 13:50 | Makotu | makoto hyodo |
| Fifth place | 13:50 | 14:00 | Lucky Shake | Jeroen Cerpentier |
| Second place winner | 14:00 | 14:20 | senkin & tmp | Jin Zhan |
| Fourth place | 14:20 | 14:30 | Oliver Wang | Guoxuan Wang |
| Seventh place | 14:30 | 14:40 | chromosom | Yury Shapchyts |
| Eighth place | 14:40 | 14:50 | vialactea | Fernando Goncalves |
| Hosts' choice | 14:50 | 15:00 | Kha \| MT \| B \| Ambros | Ambros Marzetta |
| Top Shake-up | 15:00 | 15:15 | One&Only | Tianyu Liu |
| Top Shake-up | 15:15 | 15:30 | DANCE | Hongzhi Wen |
| Hosts' choice | 15:30 | 15:45 | sB2 | Alexander Chervov |
| Wrap Up | 15:45 | 15:50 | Hosts | Daniel Burkhardt |
Competition: Reconnaissance Blind Chess: An Unsolved Challenge for Multi-Agent Decision Making Under Uncertainty Wed 7 Dec 07:00 a.m.
Reconnaissance Blind Chess (RBC) is like chess, except that a player cannot, in general, see her opponent's pieces. Instead, each turn every player chooses a 3x3 square of the board to observe privately. State-of-the-art algorithms, including those used to create agents for games like chess, Go, and poker, break down in Reconnaissance Blind Chess for several reasons, including the imperfect information, the absence of obvious abstractions, and the lack of common knowledge. Build the best bot for this challenge of making strong decisions in competitive multi-agent scenarios in the face of uncertainty!
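As a rough illustration of the sensing mechanic, here is a sketch using the python-chess package of what a 3x3 sense centered on a square would reveal; `sense_result` is a hypothetical helper, and the official reconchess framework defines the actual bot interface.

```python
import chess

def sense_result(board: chess.Board, center: chess.Square):
    """Return (square, piece-or-None) pairs that a 3x3 sense centered on
    `center` would reveal, mimicking the RBC sensing action."""
    file, rank = chess.square_file(center), chess.square_rank(center)
    revealed = []
    for df in (-1, 0, 1):
        for dr in (-1, 0, 1):
            f, r = file + df, rank + dr
            if 0 <= f < 8 and 0 <= r < 8:       # clip the window at the edges
                sq = chess.square(f, r)
                revealed.append((sq, board.piece_at(sq)))
    return revealed

board = chess.Board()
for sq, piece in sense_result(board, chess.E2):
    print(chess.square_name(sq), piece)
```

A bot must decide each turn both where to sense and what to play, maintaining a belief over the opponent's hidden position from these partial observations.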
Competition: The CityLearn Challenge 2022 Wed 7 Dec 07:00 a.m.
Reinforcement learning has gained popularity as a model-free and adaptive controller for the built environment in demand-response applications. However, a lack of standardization in previous research has made it difficult to compare different RL algorithms with each other. It is also unclear how much effort is required to solve each specific problem in the building domain and how well a trained RL agent will scale to new environments. The CityLearn Challenge 2022 provides an avenue to address these problems by leveraging CityLearn, an OpenAI Gym environment for the implementation of RL agents for demand response. The challenge utilizes operational electricity demand data to develop an equivalent digital twin model of 20 buildings. Participants are to develop energy management agents for battery charge and discharge control in each building, with the goal of minimizing electricity demand from the grid, electricity bills, and greenhouse gas emissions. We provide a baseline rule-based control (RBC) agent for the evaluation of the RL agents' performance and rank participants according to their solutions' ability to outperform the baseline.
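To make the setup concrete, here is a toy hour-of-day rule in the spirit of an RBC baseline, driving a Gym-style loop. This is illustrative only: the actual CityLearn observation/action spaces and the official baseline's rules are defined by the competition starter kit, and `rbc_action`, `env`, and `hour_of` below are hypothetical.

```python
def rbc_action(hour: int) -> float:
    """Toy hour-of-day battery rule: charge overnight, discharge through the
    daytime/evening peak. The returned value is the fraction of battery
    capacity to charge (positive) or discharge (negative) this hour."""
    return -0.08 if 9 <= hour <= 21 else 0.091

# Gym-style rollout (hypothetical: `env` stands in for the CityLearn
# environment, and `hour_of(obs)` for however the hour is read out of a
# building's observation vector).
# observations = env.reset()
# done = False
# while not done:
#     actions = [[rbc_action(hour_of(obs))] for obs in observations]
#     observations, reward, done, info = env.step(actions)
```

An RL agent replaces the fixed rule with a learned policy; the challenge scores how far it improves on the baseline's grid, cost, and emissions metrics.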
Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas Wed 7 Dec 07:00 a.m.
AmericasNLP aims to encourage and increase the visibility of research on machine learning approaches for Indigenous languages of the Americas, which have until recently often been overlooked by researchers. For the Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas, we ask participants to develop or contribute to the development of speech-to-text translation systems for five Indigenous languages of the Americas (Bribri, Guaraní, Kotiria, Quechua and Wa’ikhana), for which available resources are extremely limited. The main task of this competition is speech-to-text translation, and we additionally invite submissions to its two subtasks: automatic speech recognition and text-to-text machine translation.
EURO Meets NeurIPS 2022 Vehicle Routing Competition Wed 7 Dec 07:00 a.m.
Solving vehicle routing problems (VRPs) is an essential task for many industrial applications. While VRPs have been traditionally studied in the operations research (OR) domain, they have lately been the subject of extensive work in the machine learning (ML) community. Both the OR and ML communities have begun to integrate ML into their methods, but in vastly different ways. While the OR community mostly relies on simplistic ML methods, the ML community generally uses deep learning, but fails to outperform OR baselines. To address this gap, this competition, a joint effort of several previous competitions, brings together the OR and ML communities to solve a challenging VRP variant on real-world data provided by ORTEC, a leading provider of vehicle routing software. The challenge focuses on both a `classic' deterministic VRP with time windows (VRPTW) and a dynamic version in which new orders arrive over the course of a day. As a baseline, we will provide a state-of-the-art VRPTW solver and a simple strategy to use it to solve the dynamic variant, thus ensuring that all competition participants have the tools necessary to solve both versions of the problem. We anticipate that the winning method will significantly advance the state-of-the-art for solving routing problems, therefore providing a strong foundation for further research in both the OR and ML communities, as well as a practical impact on the real-world solving of VRPs.
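For intuition about the static problem, here is a toy nearest-feasible-neighbor construction heuristic for the VRPTW. This is not the competition baseline (which is a state-of-the-art solver); `greedy_vrptw` and its instance format are hypothetical, and the sketch ignores the return trip to the depot.

```python
import math

def greedy_vrptw(depot, customers, capacity):
    """Build routes by repeatedly extending with the closest customer that is
    still feasible under the vehicle capacity and the customer's time window.

    depot: (x, y); customers: list of tuples
    (x, y, demand, ready_time, due_time, service_time).
    """
    unserved = set(range(len(customers)))
    routes = []
    while unserved:
        route, load, time, pos = [], 0.0, 0.0, depot
        while True:
            best, best_dist = None, math.inf
            for c in unserved:
                x, y, demand, ready, due, _service = customers[c]
                d = math.dist(pos, (x, y))
                # Feasible if capacity allows and we can arrive by the due time.
                if load + demand <= capacity and time + d <= due and d < best_dist:
                    best, best_dist = c, d
            if best is None:
                break
            x, y, demand, ready, due, service = customers[best]
            time = max(time + best_dist, ready) + service  # wait if early, then serve
            load += demand
            pos = (x, y)
            route.append(best)
            unserved.remove(best)
        if not route:
            raise ValueError("instance infeasible for this simple heuristic")
        routes.append(route)
    return routes

# routes = greedy_vrptw((0, 0), [(5, 0, 1, 0, 100, 1), (0, 5, 1, 0, 100, 1)], capacity=10)
```

In the dynamic variant, a strategy like this would have to be re-run (or replaced by an anticipatory policy) each time new orders arrive during the day.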
Competition: Weakly Supervised Cell Segmentation in Multi-modality High-Resolution Microscopy Images Wed 7 Dec 07:00 a.m.
Cell segmentation is usually the first step in downstream single-cell analysis for microscopy image-based biology and biomedical research. Deep learning has been widely used for image segmentation, but it is hard to collect the large number of labelled cell images needed to train models because manually annotating cells is extremely time-consuming and costly. Furthermore, the datasets used are often limited to a single modality and lacking in diversity, leading to poor generalization of trained models. This competition aims to benchmark cell segmentation methods that can be applied to various microscopy images across multiple imaging platforms and tissue types. We frame cell segmentation as a weakly supervised learning task to encourage models that use a limited number of labelled images together with many unlabelled images, since unlabelled images are relatively easy to obtain in practice. We will implement a U-Net model as a baseline owing to its established success in biomedical image segmentation. This competition could serve as an important step toward universal and fully automatic cell image analysis tools, greatly accelerating the rate of discovery in image-based biological and biomedical research.
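As an illustration of the kind of baseline involved, here is a minimal two-level U-Net sketch in PyTorch; the organizers' actual baseline architecture and output convention may differ, and `TinyUNet` (with its three-class head) is a hypothetical toy model.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with a skip connection."""
    def __init__(self, in_ch=3, n_classes=3):
        super().__init__()
        self.enc = block(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # e.g. background/interior/boundary

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 3, 256, 256)
```

A weakly supervised entry would train such a network on the few labelled images while exploiting the many unlabelled ones, e.g. via pseudo-labelling or consistency regularization.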
Competition: Inferring Physical Properties of Exoplanets From Next-Generation Telescopes Wed 7 Dec 07:05 a.m.
The study of extra-solar planets, or simply exoplanets, i.e. planets outside our own Solar System, is fundamentally a grand quest to understand our place in the Universe. Discoveries in the last two decades have re-defined what we know about planets, and helped us comprehend the uniqueness of our very own Earth. In recent years, however, the focus has shifted from planet detection to planet characterisation, where key planetary properties are inferred from telescope observations using Monte Carlo-based methods. Yet the efficiency of sampling-based methodologies is put under strain by the high-resolution observational data from next-generation telescopes, such as the James Webb Space Telescope and the Ariel Space Mission. We propose to host a regular competition with the goal of identifying a reliable and scalable method to perform planetary characterisation. Depending on the chosen track, participants will provide either quartile estimates or the approximate distribution of key planetary properties. They will have access to synthetic spectroscopic data generated from the official simulators for the ESA Ariel Space Mission. The aims of the competition are three-fold: 1) to offer a challenging application for comparing and advancing conditional density estimation methods; 2) to provide a valuable contribution towards reliable and efficient analysis of spectroscopic data, enabling astronomers to build a better picture of planetary demographics; and 3) to promote interaction between ML and exoplanetary science.
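As a sketch of how the quartile track could be approached, the quantile (pinball) loss trains one regression head per requested quantile; this is illustrative only, and the competition defines its own targets and metrics.

```python
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    """Quantile (pinball) loss: minimizing it drives `pred` toward the
    q-quantile of the target distribution, so one head per q in
    {0.25, 0.5, 0.75} yields quartile estimates of a planetary property."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

# e.g. a model with three output heads predicting Q1/median/Q3 from a spectrum:
# loss = sum(pinball_loss(pred[:, i], y, q)
#            for i, q in enumerate((0.25, 0.5, 0.75)))
```

The distribution track would instead call for full conditional density estimation, e.g. normalizing flows or mixture density networks conditioned on the spectrum.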
Spotlight: Featured Papers Panels 3A Wed 7 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 64941 ] Collaborative Learning by Detecting Collaboration Partners
- [ 64942 ] Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
- [ 64944 ] Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class
- [ 64946 ] LBD: Decouple Relevance and Observation for Individual-Level Unbiased Learning to Rank
- [ 64947 ] Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
- [ 64949 ] Unified Optimal Transport Framework for Universal Domain Adaptation
- [ 64951 ] Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 64952 ] Online Neural Sequence Detection with Hierarchical Dirichlet Point Process
- [ 64953 ] Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
- [ 64954 ] A2: Efficient Automated Attacker for Boosting Adversarial Training
- [ 64955 ] Versatile Multi-stage Graph Neural Network for Circuit Representation
- [ 64956 ] Product Ranking for Revenue Maximization with Multiple Purchases
- [ 64957 ] Robust Semi-Supervised Learning when Not All Classes have Labels
- [ 64958 ] Meta-Complementing the Semantics of Short Texts in Neural Topic Models
- [ 64959 ] Byzantine-tolerant federated Gaussian process regression for streaming data
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 64984 ] PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points
- [ 64985 ] Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction
- [ 64986 ] Hand-Object Interaction Image Generation
- [ 64988 ] Coordinates Are NOT Lonely - Codebook Prior Helps Implicit Neural 3D representations
- [ 64989 ] Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
- [ 64991 ] Geometry-aware Two-scale PIFu Representation for Human Reconstruction
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 64995 ] ElasticMVS: Learning elastic part representation for self-supervised multi-view stereopsis
- [ 64996 ] Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images
- [ 64997 ] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding
- [ 64999 ] Conditional Diffusion Process for Inverse Halftoning
- [ 65000 ] Object-Category Aware Reinforcement Learning
- [ 65001 ] Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 3B Wed 7 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 64963 ] Conditional Meta-Learning of Linear Representations
- [ 64965 ] Supervising the Multi-Fidelity Race of Hyperparameter Configurations
- [ 64966 ] Leveraging the Hints: Adaptive Bidding in Repeated First-Price Auctions
- [ 64968 ] Deep Combinatorial Aggregation
- [ 64969 ] When to Update Your Model: Constrained Model-based Reinforcement Learning
- [ 64970 ] HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
- [ 64972 ] Multi-Sample Training for Neural Image Compression
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 64973 ] Provable Generalization of Overparameterized Meta-learning Trained with SGD
- [ 64974 ] Elucidating the Design Space of Diffusion-Based Generative Models
- [ 64975 ] HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences
- [ 64977 ] Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes
- [ 64978 ] Fast Instrument Learning with Faster Rates
- [ 64980 ] Multi-view Subspace Clustering on Topological Manifold
- [ 64981 ] Distributional Reinforcement Learning for Risk-Sensitive Policies
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65007 ] Contrastive Graph Structure Learning via Information Bottleneck for Recommendation
- [ 65008 ] Debugging and Explaining Metric Learning Approaches: An Influence Function Based Perspective
- [ 65009 ] Revisiting Heterophily For Graph Neural Networks
- [ 65010 ] Understanding the Failure of Batch Normalization for Transformers in NLP
- [ 65011 ] A Unified Model for Multi-class Anomaly Detection
- [ 65013 ] A Closer Look at Offline RL Agents
- [ 65014 ] Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65015 ] Get More at Once: Alternating Sparse Training with Gradient Correction
- [ 65016 ] Semi-Supervised Video Salient Object Detection Based on Uncertainty-Guided Pseudo Labels
- [ 65019 ] Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation
- [ 65020 ] MExMI: Pool-based Active Model Extraction Crossover Membership Inference
- [ 65021 ] Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation
- [ 65022 ] Multivariate Time-Series Forecasting with Temporal Polynomial Graph Neural Networks
- [ 65023 ] Optimal Positive Generation via Latent Transformation for Contrastive Learning
- [ 65025 ] Tenrec: A Large-scale Multipurpose Benchmark Dataset for Recommender Systems
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 3C Wed 7 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
Expo Workshop: PyTorch: New advances for large-scale training and performance optimizations Wed 7 Dec 11:30 a.m.
[ protected link dropped ]
Large language models and generative AI have been key drivers of new innovations in large-scale training and performance optimizations. In this workshop, we will dive deeper into new features and solutions in PyTorch that enable training and performance optimizations at scale.
The following topics will be covered by the PyTorch team in this workshop. The sessions are divided over two days: the Nov 28th session will cover the PyTorch Distributed and Profiling topics, and the Dec 5th session will cover the PyTorch Compiler-based solutions.
## Part 1: Nov 28 (Hybrid, in-person and remote), 9:30a-12:30p CST (UTC-6), Room # 291
1. FSDP Production Readiness, Speakers: Rohan Varma, Andrew Gu
We will dive deep into recent advances in FSDP which have enabled better throughput, memory savings, and extensibility. These improvements have unblocked the use of FSDP for models of different modalities and varying sizes (both model and data). We will share best practices for applying these features to specific use cases such as XLMR, FLAVA, ViT, DHEN, and GPT-3-style models.
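As a minimal sketch of the basic FSDP usage this session builds on (assuming a torchrun launch so the process-group environment variables are set; the session's recipes add auto-wrap policies, mixed precision, activation checkpointing, and more):

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Transformer(d_model=512, num_encoder_layers=6).cuda()
model = FSDP(model)                      # parameters sharded across ranks

optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
src = torch.randn(10, 8, 512, device="cuda")   # (seq, batch, d_model) toy data
tgt = torch.randn(10, 8, 512, device="cuda")
loss = model(src, tgt).sum()
loss.backward()                          # gradients reduce-scattered by FSDP
optim.step()
```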
2. Automated Pipeline Parallelism for PyTorch, Speaker: Ke Wen
PiPPy is a library that provides automated pipeline parallelism for PyTorch models. PiPPy consists of a compiler stack capable of automatically splitting a model into stages without requiring intrusive code changes to the model. It also provides a distributed runtime that helps users distribute the split stages across multiple devices and multiple hosts, and that orchestrates micro-batch execution in an overlapped fashion. We will demonstrate the use of PiPPy for Hugging Face models in the cloud.
3. PyTorch Profiler, Speaker: Taylor Robie
Dive into recent enhancements to the PyTorch Profiler's capabilities (Python function tracing, data flow capture, and memory profiling) and how they enable previously impossible performance analysis.
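A small sketch exercising the profiler features mentioned above, i.e. shape recording, memory profiling, and Python stack tracing (the toy model, schedule, and sort key are illustrative):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),  # skip 1, warm up 1, record 3
    record_shapes=True,
    profile_memory=True,
    with_stack=True,        # Python function tracing
) as prof:
    for _ in range(5):
        model(x).sum().backward()
        prof.step()         # advance the profiler schedule each iteration

print(prof.key_averages().table(sort_by="self_cuda_memory_usage", row_limit=5))
```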
4. Profiling Distributed Training Workloads, Speaker: Anupam Bhatnagar
We will present Holistic Trace Analysis (HTA), a tool to identify computation, communication and memory bottlenecks in distributed training. HTA identifies these bottlenecks by analyzing the traces collected using the PyTorch Profiler.
5. TorchBench, Speaker: Xu Zhao
In this talk we present PyTorch Benchmark (TorchBench), a benchmarking suite that provides quick and stable performance signals to hold the line of performance in PyTorch development. TorchBench identifies performance regressions and provides CI services for PyTorch developers to test their PRs. It can also be used to profile specific models and identify optimization opportunities.
## Part 2: Dec 5 (Virtual), 9:30a - 11:30a PST (UTC-8) / 11:30a - 1:30p CST (UTC-6)
6. A deep dive into TorchDynamo, Speaker: Animesh Jain
This talk presents a deep dive into TorchDynamo. TorchDynamo is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It rewrites Python bytecode in order to extract sequences of PyTorch operations into a graph, which is then just-in-time compiled with a customizable backend. It is designed to mix Python execution with compiled backends to get the best of both worlds: usability and performance.
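A minimal sketch of the user-facing entry point, assuming the `torch.compile` API that dispatches to TorchDynamo for graph capture (in the nightlies of this era, `torch._dynamo.optimize(...)` was the equivalent spelling):

```python
import torch

def fn(x, y):
    a = torch.sin(x)
    b = torch.cos(y)
    return a + b

# Dynamo captures the PyTorch ops in `fn` into an FX graph and hands it to
# the chosen backend; unsupported Python falls back to eager execution.
compiled_fn = torch.compile(fn, backend="inductor")

x, y = torch.randn(1024), torch.randn(1024)
torch.testing.assert_close(compiled_fn(x, y), fn(x, y))  # same results, faster
```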
7. A deep dive into TorchInductor, Speakers: Bin Bao, Natalia Gimelshein
This talk presents a deep dive into the design principles of TorchInductor, the PyTorch compiler backend; the lowering stack it uses to transform PyTorch programs; and the optimization techniques and codegen technologies it employs.
8. How backends integrate into the PyTorch compiler stack, Speaker: Sherlock Huang
This talk dives deep into the backend integration points in the PyTorch compiler stack. It will explain the three types of IR used across the stack: Torch IR produced by Dynamo, Aten IR produced by AOTAutograd, and the loop-level IR used in Inductor. It will introduce the infrastructure and utilities available for backend integration, including an IR-agnostic pattern matcher and a graph partitioner.
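As a sketch of the integration point: a custom Dynamo backend is just a callable that receives the captured FX `GraphModule` (Torch IR) plus example inputs and returns a callable. `inspect_backend` below is a toy that prints the graph and runs it unmodified; real backends lower it further (e.g. via AOTAutograd to Aten IR) before returning compiled code.

```python
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    """Toy Dynamo backend: show the captured Torch IR, then run it as-is."""
    gm.graph.print_tabular()
    return gm.forward

@torch.compile(backend=inspect_backend)
def fn(x):
    return torch.relu(x) * 2

fn(torch.randn(8))  # prints the FX graph on first call, then executes it
```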
Spotlight: Featured Papers Panels 4B Wed 7 Dec 07:00 p.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65048 ] Autoinverse: Uncertainty Aware Inversion of Neural Networks
- [ 65049 ] Meta-Auto-Decoder for Solving Parametric Partial Differential Equations
- [ 65050 ] Bayesian Optimistic Optimization: Optimistic Exploration for Model-based Reinforcement Learning
- [ 65051 ] PhysGNN: A Physics-Driven Graph Neural Network Based Model for Predicting Soft Tissue Deformation in Image-Guided Neurosurgery
- [ 65052 ] Inverse Design for Fluid-Structure Interactions using Graph Network Simulators
- [ 65054 ] Learning Physical Dynamics with Subequivariant Graph Neural Networks
- [ 65055 ] PALBERT: Teaching ALBERT to Ponder
- [ 65056 ] Towards Practical Control of Singular Values of Convolutional Layers
- [ 65057 ] Accelerated Linearized Laplace Approximation for Bayesian Deep Learning
- [ 65058 ] TA-GATES: An Encoding Scheme for Neural Network Architectures
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65060 ] LieGG: Studying Learned Lie Group Generators
- [ 65062 ] Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality
- [ 65066 ] Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65092 ] Contact-aware Human Motion Forecasting
- [ 65093 ] Towards Hard-pose Virtual Try-on via 3D-aware Global Correspondence Learning
- [ 65094 ] Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation
- [ 65096 ] SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration
- [ 65097 ] CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation
- [ 65098 ] AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars
- [ 65099 ] TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition
- [ 65100 ] Forecasting Human Trajectory from Scene History
- [ 65101 ] Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields
- [ 65102 ] Language Conditioned Spatial Relation Reasoning for 3D Object Grounding
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65103 ] Audio-Driven Co-Speech Gesture Video Generation
- [ 65104 ] Learning Equivariant Segmentation with Instance-Unique Querying
- [ 65105 ] Grounded Video Situation Recognition
- [ 65106 ] Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech
- [ 65108 ] MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
- [ 65109 ] BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
- [ 65112 ] Non-Monotonic Latent Alignments for CTC-Based Non-Autoregressive Machine Translation
- [ 65113 ] APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 4A Wed 7 Dec 07:00 p.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65026 ] Online Frank-Wolfe with Arbitrary Delays
- [ 65028 ] Fine-Grained Analysis of Stability and Generalization for Modern Meta Learning Algorithms
- [ 65030 ] Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox
- [ 65031 ] Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret
- [ 65034 ] Dynamic Pricing with Monotonicity Constraint under Unknown Parametric Demand Model
- [ 65035 ] Coresets for Wasserstein Distributionally Robust Optimization Problems
- [ 65036 ] Exploitability Minimization in Games and Beyond
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65038 ] Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback
- [ 65039 ] Exact Shape Correspondence via 2D graph convolution
- [ 65040 ] Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor
- [ 65041 ] Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning
- [ 65042 ] Learning-Augmented Algorithms for Online Linear and Semidefinite Programming
- [ 65043 ] Distributed Online Convex Optimization with Compressed Communication
- [ 65044 ] Composition Theorems for Interactive Differential Privacy
- [ 65045 ] Toward Equation of Motion for Deep Neural Networks: Continuous-time Gradient Descent and Discretization Error Analysis
- [ 65046 ] Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65070 ] GAGA: Deciphering Age-path of Generalized Self-paced Regularizer
- [ 65071 ] S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning
- [ 65072 ] Differentiable hierarchical and surrogate gradient search for spiking neural networks
- [ 65075 ] Online Training Through Time for Spiking Neural Networks
- [ 65076 ] DivBO: Diversity-aware CASH for Ensemble Learning
- [ 65079 ] Earthformer: Exploring Space-Time Transformers for Earth System Forecasting
- [ 65080 ] Does Momentum Change the Implicit Regularization on Separable Data?
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65081 ] Provable General Function Class Representation Learning in Multitask Bandits and MDP
- [ 65082 ] Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization
- [ 65083 ] Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition
- [ 65088 ] Toward Robust Spiking Neural Network Against Adversarial Perturbation
- [ 65089 ] The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning
- [ 65090 ] Fuzzy Learning Machine
- [ 65091 ] BLOX: Macro Neural Architecture Search Benchmark and Algorithms
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 4C Wed 7 Dec 07:00 p.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15-minute block is over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
Competition: NL4Opt: Formulating Optimization Problems Based on Their Natural Language Descriptions Wed 7 Dec 07:00 p.m.
We propose a competition for extracting the meaning and formulation of an optimization problem from its text description. For this competition, we have created the first dataset of linear programming (LP) word problems. A deep understanding of the problem description is an important first step towards generating the problem formulation. We therefore present two challenging sub-tasks for the participants. For the first sub-task, the goal is to recognize and label the semantic entities that correspond to the components of the optimization problem. For the second sub-task, the goal is to generate a meaning representation (i.e., a logical form) of the problem from its description and its problem entities. This intermediate representation of an LP problem will be converted to a canonical form for evaluation. The proposed task is attractive because of its compelling application, the low barrier to entry of the first sub-task, and the new set of challenges the second sub-task brings to semantic analysis and evaluation. The goal of this competition is to increase the accessibility and usability of optimization solvers, allowing non-experts to solve important problems from various industries. In addition, this new task will promote the development of novel machine learning applications and datasets for operations research.
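To illustrate the intended pipeline end-to-end, here is a sketch that converts a toy meaning representation into canonical LP form and solves it with scipy.optimize.linprog; the `ir` schema below is hypothetical, as the competition defines its own logical form and canonical-form converter.

```python
from scipy.optimize import linprog

# Hypothetical meaning representation for: "maximize 3x + 2y subject to
# x + y <= 10 and x <= 6" (with the usual non-negativity bounds).
ir = {
    "objective": {"direction": "max", "coeffs": {"x": 3.0, "y": 2.0}},
    "constraints": [
        {"coeffs": {"x": 1.0, "y": 1.0}, "op": "<=", "rhs": 10.0},
        {"coeffs": {"x": 1.0}, "op": "<=", "rhs": 6.0},
    ],
}

variables = sorted({v for con in ir["constraints"] for v in con["coeffs"]}
                   | set(ir["objective"]["coeffs"]))
sign = -1.0 if ir["objective"]["direction"] == "max" else 1.0  # linprog minimizes
c = [sign * ir["objective"]["coeffs"].get(v, 0.0) for v in variables]
A = [[con["coeffs"].get(v, 0.0) for v in variables] for con in ir["constraints"]]
b = [con["rhs"] for con in ir["constraints"]]

res = linprog(c, A_ub=A, b_ub=b)          # canonical form: min c^T x, Ax <= b
print(dict(zip(variables, res.x)))         # {'x': 6.0, 'y': 4.0}
```

Evaluating submissions on the canonical form (rather than on surface text) makes equivalent formulations with different variable names or constraint orderings score identically.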