NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Bala Cynwyd (Philadelphia Area), Pennsylvania, United States
Overview
Susquehanna is expanding the Machine Learning group and seeking exceptional researchers to join our dynamic team. As a Machine Learning Researcher, you will apply advanced ML techniques to a wide range of forecasting challenges, including time series analysis, natural language understanding, and more. Your work will directly influence our trading strategies and decision-making processes.
This is a unique opportunity to work at the intersection of cutting-edge research and real-world impact, leveraging one of the highest-quality financial datasets in the industry.
What You’ll Do
- Conduct research and develop ML models to enhance trading strategies, with a focus on deep learning and scalable deployment
- Collaborate with researchers, developers, and traders to improve existing models and explore new algorithmic approaches
- Design and run experiments using the latest ML tools and frameworks
- Develop automation tools to streamline research and system development
- Apply rigorous scientific methods to extract signals from complex datasets and shape our understanding of market behavior
- Partner with engineering teams to implement and test models in production environments
What We’re Looking For
We’re seeking research scientists with a proven track record of applying deep learning to solve complex, high-impact problems. The ideal candidate will have a strong grasp of diverse machine learning techniques and a passion for experimenting with model architectures, feature engineering, and hyperparameter tuning to produce resilient, high-performing models.
- PhD in Computer Science, Machine Learning, Mathematics, Physics, Statistics, or a related field
- Strong track record of applying ML in academic or industry settings, with 5+ years of experience building impactful deep learning systems
- A strong publication record in top-tier conferences such as NeurIPS, ICML, or ICLR
- Strong programming skills in Python and/or C++
- Practical knowledge of ML libraries and frameworks, such as PyTorch or TensorFlow, especially in production environments
- Hands-on experience applying deep learning to time series data
- Strong foundation in mathematics, statistics, and algorithm design
- Excellent problem-solving skills with a creative, research-driven mindset
- Demonstrated ability to work collaboratively in team-oriented environments
- A passion for solving complex problems and a drive to innovate in a fast-paced, competitive environment
Location: Dallas-Fort Worth Metroplex
Description At ServiceLink, we believe in pushing the limits of what’s possible through innovation. We’re looking for a high-achieving AI expert to lead ground-breaking initiatives that redefine our mortgage industry. As our Lead AI Engineer, you’ll harness cutting-edge technologies—from advanced machine learning and deep learning to generative AI, Large Language Models, and Agentic AI—to create production-ready systems that solve real-world challenges. This is your opportunity to shape strategy, mentor top talent, and turn ambitious ideas into transformative solutions in an environment that champions bold thinking and continuous innovation.
Applicants must be currently authorized to work in the United States on a full-time basis and must not require sponsorship for employment visa status now or in the future.
A DAY IN THE LIFE
In this role, you will…
- Transform complex business challenges into innovative AI solutions that leverage deep learning, LLMs, and autonomous Agentic AI frameworks.
- Lead projects end-to-end, from ideation and data gathering to model design, fine-tuning, deployment, and continuous improvement using full MLOps practices.
- Collaborate closely with business stakeholders, Data Engineering, Product, and Infrastructure teams to ensure our AI solutions are powerful, secure, and scalable.
- Drive both research and production by designing experiments, publishing state-of-the-art work in high-impact journals, and protecting strategic intellectual property.
- Mentor and inspire our next generation of data scientists and AI engineers, sharing insights on emerging trends and best practices in AI.
WHO YOU ARE
You are…
- A visionary leader with an advanced degree (Master’s or Ph.D.) in Computer Science, Engineering, or a related field, backed by 7+ years of progressive experience in AI and data science.
- A technical powerhouse with a solid track record in statistical analysis, machine learning, deep learning, and building production-grade models using transformer architectures and Agentic AI systems.
- Someone who has re-engineered conventional workflows leveraging AI technologies, achieving measurable business outcomes.
- Proficient in Python, comfortable with other modern programming environments, and armed with real-world experience in cloud platforms (preferably Microsoft Azure) and end-to-end AI development (CRISP-DM and MLOps).
- An exceptional communicator who can distill complex technical ideas into strategic insights for diverse audiences, from the boardroom to the lab.
- A proactive problem solver and collaborative team player who thrives in a fast-paced, interdisciplinary setting, ready to balance innovative risk with practical execution.
Work Location: Toronto, Ontario, Canada
Job Description
We are currently seeking talented individuals for a variety of positions, ranging from mid to senior levels, and will evaluate your application in its entirety.
Layer 6 is the AI research centre of excellence for TD Bank Group. We develop and deploy industry-leading machine learning systems that impact the lives of over 27 million customers, helping more people meet their financial goals and needs. Our research broadly spans the field of machine learning, with areas such as deep learning and generative AI, time series forecasting, and the responsible use of AI. We have access to massive financial datasets and actively collaborate with world-renowned academic faculty. We are always looking for people driven to be at the cutting edge of machine learning in research, engineering, and impactful applications.
Day-to-day as a Technical Product Owner:
- Translate broad business problems into sharp data science use cases, and craft use cases into product visions
- Own machine learning products from vision to backlog, prioritizing features and defining minimum viable releases to maximize the value your products generate and the ROI of your pod
- Guide Agile pods on continuous improvement, ensuring that each sprint is delivered better than the previous
- Work closely with stakeholders to identify, refine, and (occasionally) reject opportunities to build machine learning products; collaborate with support functions such as risk, technology, and model risk management, and incorporate interfacing features
- Facilitate the professional and technical development of your colleagues through mentorship and feedback
- Anticipate resource needs as solutions move through the model lifecycle, scaling pods up and down as models are built, perform, degrade, and need to be rebuilt
- Champion model development standards, industry best practices, and rigorous testing protocols to ensure model excellence
- Self-direct, with the ability to identify meaningful work in down times and to prioritize effectively in busy times
- Drive value through product, feature, and release prioritization, maximizing ROI and modelling velocity
- Be an exceptional collaborator in a high-interaction environment
Job Requirements
- Minimum five years of experience delivering major data science projects in large, complex organizations
- Strong communication, business acumen, and stakeholder management competencies
- Strong technical skills: machine learning, data engineering, MLOps, cloud solution architecture, software development practices
- Strong coding proficiency: Python, R, SQL, and/or Scala; cloud architecture
- Certified Scrum Product Owner and/or Certified Scrum Master, or equivalent experience
- Familiarity with cloud solution architecture; Azure a plus
- Master’s degree in data science, artificial intelligence, computer science, or equivalent experience
Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.
Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale. This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.
Now’s a great time to join Handshake. Here’s why:
- Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
- Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
- World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
- Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.
About the Role
We’re expanding our team and seeking an experienced Engineering Manager to lead the development of high-impact products that empower our Fellows in driving AI innovation. This is a unique opportunity to play a pivotal role in a fast-growing space, working directly with leading AI labs and tech companies to develop solutions that enhance research workflows, model evaluation processes, and domain-specific AI applications. You will lead our Annotations Team, responsible for building and maintaining the tools that power our data creation pipeline; this team develops products used by operators and Fellows to create the data researchers use to train the latest state-of-the-art language models. As an Engineering Manager, you'll play a key role in shaping technical direction, fostering engineering excellence, and ensuring your team delivers reliable, scalable, and user-centric solutions. You’ll manage a talented group of engineers focused on improving workflow efficiency, quality assurance, and scalability across annotation systems. This is a highly cross-functional role that sits at the intersection of engineering, operations, and product, ensuring our data creation platform is robust, intuitive, and ready to meet evolving research needs.
Location: San Francisco, CA | 5 days/week in-office
- Lead and grow a high-performing engineering team, fostering a culture of ownership, inclusion, and technical excellence.
- Partner with product and design teams to define roadmaps, scope projects, and deliver end-to-end solutions aligned with user needs.
- Drive technical architecture and decision-making, ensuring systems are scalable, maintainable, and performant.
- Mentor engineers across experience levels, supporting their growth through regular feedback, coaching, and development opportunities.
- Oversee day-to-day execution, ensuring high-quality code, effective reviews, and healthy team velocity.
Desired Capabilities
- 2–4+ years of experience in an engineering leadership role (team lead, tech lead, or manager), in addition to prior experience as a hands-on engineer.
- Strong technical background in full-stack development, particularly with ReactJS, TypeScript, and backend technologies like Node.js or Python.
- Experience leading teams building end-user products with a focus on quality, usability, and performance.
- Solid understanding of system architecture, relational databases (e.g., PostgreSQL), and cloud infrastructure (e.g., AWS, GCP).
- Proven track record of successfully shaping product direction and delivering results in a fast-paced environment.
- Excellent communication skills.
Who we are:
Peripheral is developing spatial intelligence, starting in live sports and entertainment. Our models generate interactive, photorealistic 3D reconstructions of sporting events, building the future of live media. We’re solving key research challenges in 3D computer vision, creating the foundations for the next generation of robotic perception and embodied intelligence.
We’re backed by Tier-1 investors and working with some of the biggest names in sports. Our team includes top robotics and machine learning researchers from the University of Toronto, advised by Dr. Steven Waslander and Dr. Igor Gilitshenski.
Our team is ambitious and looking to win. We’re seeking a Machine Learning Engineer to develop our motion capture models through synthetic data curation, model training, and inference-time optimization.
What you’ll be doing:
- Developing our data flywheel to autolabel and generate synthetic data
- Improving our motion capture accuracy by fine-tuning existing models on our domain
- Optimizing inference time through model distillation and quantization
What we’d want to see:
- Prior experience with 3D computer vision and training new ML models
- Strong understanding of GPU optimization methods (profiling, quantization, model distillation)
- Proficiency in Python and real-time ML inference backends
Ways to stand out from the crowd:
- Previous experience in architecting and optimizing 3D computer vision systems
- Strong understanding of CUDA and kernel programming
- Familiarity with state-of-the-art research in VLMs
- Top publications at conferences like NeurIPS, ICLR, ICML, CVPR, WACV, CoRL, and ICRA
Why join us:
- Competitive equity as an early team member
- $80–120K CAD + bonuses, flexible based on experience
- Exclusive access to the world’s biggest sporting events and venues
- Work on impactful projects, developing the future of 3D media and spatial intelligence
To explore additional roles, please visit: www.peripheral.so
Location: Toronto, ON, Canada
London
Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals to contribute to the team (or teams) of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.
At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.
Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.
We are looking for Senior AI Engineers with expertise in, and a passion for, Information Retrieval, search technologies, Natural Language Processing, and Generative AI to join our AI Experiences team. Our teams are working on exciting initiatives such as:
- Developing and deploying robust Retrieval-Augmented Generation (RAG) systems, curating high-quality data for model training and evaluation, and building evaluation frameworks to enable rapid iteration and continuous improvement based on real-world user interactions.
- Designing and implementing tools that enable LLM-powered search agents to effectively handle complex client queries, shaping Bloomberg's generative AI ecosystem, and scaling these innovative solutions to support thousands of users.
- Leveraging both traditional ML approaches and Generative AI to prototype, build, and maintain high-performing, client-facing search and streaming applications that deliver timely and relevant financial insights.
- Building robust APIs to facilitate search across diverse collections of data, ensuring highly relevant results and maintaining system stability and reliability.
You'll have the opportunity to:
- Collaborate closely with cross-functional teams, including product managers and engineers, to integrate AI solutions into client-facing products, enhance analytical capabilities, and improve user experience.
- Architect, develop, and deploy production-quality search systems powered by LLMs, emphasizing both ML innovation and solid software engineering practices.
- Continuously identify areas for improvement within our search systems, proactively experiment with new ideas, and rapidly implement promising solutions, even when improvements rely purely on engineering without direct ML involvement.
- Design, train, test, and iterate on models and algorithms while taking ownership of the entire lifecycle, from idea inception to robust deployment and operationalization.
- Stay at the forefront of research in IR, NLP, and Generative AI, incorporating relevant innovations into practical, impactful solutions.
- Represent Bloomberg at industry events, scientific conferences, and within open-source communities.
Remote Internship Opportunities at UIUC ScaleML Lab
Location
University of Illinois Urbana-Champaign, Illinois, United States
Introduction
UIUC ScaleML Lab (https://scaleml.github.io/people) covers a wide range of research topics, including machine learning theory, optimization algorithms, reinforcement learning algorithms, generative models, and agents.
Research Topics
- Large language models and AI agents
- Diffusion language models
- Multimodal models
- Reinforcement learning
- Optimization algorithms
- Other topics in generative modeling
Requirements
- Understanding of fundamental concepts in deep learning, NLP, and large language models; familiarity with relevant frameworks and tools such as PyTorch, Hugging Face, LMFlow, LLaMA Factory, verl, etc.
- Strong engineering skills, a high sense of responsibility, and strong self-motivation
- Ability to commit to at least a 3-month internship
- Preferred qualifications include publications at top-tier conferences, industry internship/work experience, and high-impact open-source projects
Contact
If you are interested, please email your CV to ruip4@illinois.edu ;-)
Location: Seattle, WA; New York, NY; Palo Alto, CA (USA)
Description The Sponsored Products and Brands (SPB) team at Amazon Ads is reimagining the advertising landscape through generative AI, revolutionizing how millions of customers discover products and engage with brands on Amazon and beyond. We are at the forefront of redefining advertising experiences—bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle, from ad creation and optimization to performance measurement and customer insights.
We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance advertiser needs, enhance the shopping experience, and strengthen the Amazon marketplace. If you are energized by solving complex challenges and pushing the boundaries of what’s possible with AI, join us in shaping the future of advertising.
The role
We seek an experienced AI/ML Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges, and manage large-scale AI deployments involving hundreds to thousands of GPUs.
You’re welcome to work remotely from the United States or Canada.
Your responsibilities will include:
- Designing customer-centric solutions that maximize business value and align with strategic goals.
- Building and maintaining long-term relationships to foster trust and ensure customer satisfaction.
- Delivering technical presentations, producing whitepapers, creating manuals, and hosting webinars for audiences with varying technical expertise.
- Collaborating with engineering and product teams to effectively prioritize and relay customer feedback.
We expect you to have:
- 7–10+ years of experience with cloud technologies in MLOps engineering, Machine Learning engineering, or similar roles.
- Strong understanding of ML ecosystems, including models, use cases, and tooling.
- Proven experience in setting up and optimizing distributed training pipelines across multi-node and multi-GPU environments.
- Hands-on knowledge of frameworks like PyTorch or JAX.
- Excellent verbal and written communication skills.
It will be an added bonus if you have:
- Expertise in deploying inference infrastructure for production workloads.
- Ability to transition ML pipelines from POC to scalable production systems.
Preferred tooling:
- Programming Languages – Python, Go, Java, C++
- Orchestration – Kubernetes (K8s), Slurm
- DevOps Tools – Git, Docker, Helm
- Infrastructure as Code (IaC) – Terraform
- ML Frameworks and Libraries – PyTorch, TensorFlow, JAX, HuggingFace, Scikit-learn
We are Bagel Labs - a distributed machine learning research lab working toward open-source superintelligence.
Role Overview
We encourage curiosity-driven research and welcome bold, untested concepts.
You will push the boundaries of diffusion models and distributed learning systems, testing hypotheses at the intersection of generative AI and scalable infrastructure.
We love novel, provocative, and untested ideas that challenge conventional paradigms.
Key Responsibilities
- Prototype AI methodologies that can redefine distributed machine learning.
- Pioneer next-generation diffusion architectures including rectified flows, EDM variants, and latent consistency models that scale across distributed infrastructures.
- Develop novel sampling algorithms, guidance mechanisms, and conditioning strategies that unlock new capabilities in controllable generation.
- Partner with cryptographers and economists to embed secure, incentive-aligned protocols into model pipelines.
- Publish papers at top-tier ML venues, organize workshops, and align our roadmap with the latest academic advances.
- Share insights through internal notes, external blog posts, and conference-grade write-ups (for example, blog.bagel.com).
- Contribute to open-source code and stay active in the ML community.
Who You Might Be
You are extremely curious and motivated by discovery.
You actively consume the latest ML research - scanning arXiv, attending conferences, dissecting new open-source releases, and integrating breakthroughs into your own experimentation.
You thrive on first-principles reasoning, see potential in unexplored ideas, and view learning as a perpetual process.
Desired Skills (Flexible)
- Deep expertise in modern diffusion models, score matching, flow matching, consistency training, and distillation techniques.
- Hands-on experience with distributed training frameworks such as FairScale, DeepSpeed, Megatron-LM, or custom tensor and pipeline parallelism implementations.
- Strong mathematical foundation in SDEs, ODEs, optimal transport, and variational inference for designing novel generative objectives.
- Clear and concise communication skills.
- Bonus: experience with model quantization (QLoRA, GPTQ), knowledge distillation for diffusion models, or cryptographic techniques for secure distributed training.
What We Offer
- Top-of-market compensation and time to pursue open-ended research
- A deeply technical culture where bold ideas are debated, stress-tested, and built
- Remote flexibility within North American time zones
- Ownership of work shaping decentralized AI
- Paid travel to leading ML conferences worldwide
Apply now - help us build the infrastructure for open-source superintelligence.