NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Search Opportunities
Successful candidates will contribute to building and deploying AI-powered systems, including automated code generation, smart agents, retrieval-augmented generation (RAG) frameworks, and tools that integrate cutting-edge AI with scientific software and machine learning research. These systems aim to support drug discovery programs, increase research productivity, and improve the quality and efficiency of ML model training through intelligent data workflows and feedback loops. Candidates should have a strong interest in artificial intelligence (specifically, generative and agentic AI), with responsibilities spanning end-to-end system design: from idea conception and rapid prototyping to production-scale deployment. They should be comfortable working in a fast-paced environment where innovation, experimentation, and rigorous software engineering are all valued, but specific knowledge of any of these areas is less critical than intellectual curiosity, versatility, and a track record of achievement and innovation in the field of AI. For more information, visit www.DEShawResearch.com.
Please apply using this link:
https://apply.deshawresearch.com/careers/Register?pipelineId=923&source=NeurIPS_1
The expected annual base salary for this position is USD 250,000 – USD 600,000. Our compensation package also includes variable compensation in the form of sign-on and year-end bonuses, and generous benefits, including relocation and immigration assistance. The applicable annual base salary paid to a successful applicant will be determined based on multiple factors including the nature and extent of prior experience and educational background. We follow a hybrid work schedule, in which employees work from the office on Tuesday through Thursday, and have the option of working from home on Monday and Friday.
D. E. Shaw Research, LLC is an equal opportunity employer.
Location: Beijing, China
Description
Program and Vision: BAAI launches its "Rising Star" Researcher Program, designed to recruit exceptional young scholars who have demonstrated outstanding research potential in AI and related fields. We provide a world-class research platform and robust development support, enabling you to launch your academic career from a strong starting point, collaborate with leading scientists, and rapidly grow into a future leader in your field.
Qualifications:
- A record of notable early-career research achievements in AI, Computer Science, Mathematics, or related interdisciplinary fields, demonstrating significant potential.
- An outstanding soon-to-graduate Ph.D. candidate, a postdoctoral fellow, or an early-career scholar with a genuine passion for scientific inquiry and innovation.
- Strong independent research capabilities and a collaborative spirit.
We Offer:
- A market-competitive salary and benefits package with a clear path for career advancement.
- Ample start-up research funding and shared access to top-tier computing resources.
- A clear career development path, with support to grow into an independent researcher.
- Access to subsidized talent apartments and support for Beijing residency registration.
- A comprehensive supplementary health insurance plan.
How to Apply: Please send your full CV, representative publications, and reference letters (or contact information for references) to recruiting@baai.ac.cn, using the email subject line: "Researcher Application - [Name] - [Specific Research Focus]".
Bala Cynwyd (Philadelphia Area), Pennsylvania United States
Overview
We’re looking for a Machine Learning Systems Engineer to help build the data infrastructure that powers our AI research. In this role, you'll develop reliable, high-performance systems for handling large and complex datasets, with a focus on scalability and reproducibility. You’ll partner with researchers to support experimental workflows and help translate evolving needs into efficient, production-ready solutions. The work involves optimizing compute performance across distributed systems and building low-latency, high-throughput data services. This role is ideal for someone with strong engineering instincts, a deep understanding of data systems, and an interest in supporting innovative machine learning efforts.
What You’ll Do
- Design and implement high-performance data pipelines for processing large-scale datasets, with an emphasis on reliability and reproducibility
- Collaborate with researchers to translate their requirements into scalable, production-grade systems for AI experimentation
- Optimize resource utilization across our distributed computing infrastructure through profiling, benchmarking, and systems-level improvements
- Implement low-latency, high-throughput sampling for models
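The sampling responsibility above can be illustrated with a minimal sketch: classic reservoir sampling (Algorithm R), which draws a uniform fixed-size sample from a stream too large to hold in memory. The function name and parameters here are illustrative only and are not part of Susquehanna's actual stack.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Uniformly sample k items from a stream of unknown length (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Keep each later item with probability k/(i+1) by
            # overwriting a uniformly chosen reservoir slot.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Draw 5 items from a million-element stream in a single pass.
sample = reservoir_sample(range(1_000_000), k=5, seed=42)
```

A single pass and O(k) memory are what make this pattern suitable for high-throughput pipelines; production systems typically batch the draws and parallelize across shards.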
What we're looking for
- Experience building and maintaining data pipelines and ETL systems at scale
- Experience with large-scale ML infrastructure and familiarity with training and inference workflows
- Strong understanding of best practices in data management and processing
- Knowledge of systems-level programming and performance optimization
- Proficiency in software engineering in Python
- Understanding of AI/ML workloads, including data preprocessing, feature engineering, and model evaluation
Why Join Us?
Susquehanna is a global quantitative trading firm that combines deep research, cutting-edge technology, and a collaborative culture. We build most of our systems from the ground up, and innovation is at the core of everything we do. As a Machine Learning Systems Engineer, you’ll play a critical role in shaping the future of AI at Susquehanna — enabling research at scale, accelerating experimentation, and helping unlock new opportunities across the firm.
The role We seek an experienced Senior ML Solutions Architect to support customers leveraging Nebius Token Factory's serverless inference platform for open-source LLMs across multiple modalities. In this role, you will collaborate with clients to design and implement customized LLM-based solutions, architect scalable AI applications using our served models, and work with our backend team to improve the platform to meet clients' needs.
You’re welcome to work remotely from the United States or Canada.
Your responsibilities will include:
- Design and implement LLM-based solutions using Nebius Token Factory’s inference services to drive business value and support customer goals.
- Build production-ready applications leveraging our serverless LLM APIs, including multimodal models (text, vision, audio) and domain-specific models.
- Provide technical expertise in prompt engineering, RAG architectures, model selection, and inference optimization.
- Collaborate with product and engineering teams to surface customer feedback and shape the platform roadmap.
- Guide customers in scaling from POC to production with a focus on performance, reliability, and cost efficiency.
We expect you to have:
- 5+ years of experience in ML/AI systems, with at least 2 years focused on LLMs and generative AI.
- Deep knowledge of the LLM ecosystem, including model architectures and fine-tuning approaches.
- Hands-on experience with:
  - Prompt engineering and LLM pipeline development, including evaluation.
  - Agentic frameworks such as LangChain, LangSmith, smolagents, or equivalent.
  - Vector databases and RAG implementation patterns.
  - Deploying LLM-powered applications using APIs from OpenAI, Anthropic, or open-source models.
- Strong Python programming skills.
- Excellent communication skills, with the ability to clearly explain technical concepts to diverse audiences.
It will be an added bonus if you have:
- Experience with inference frameworks and libraries (e.g., vLLM, SGLang, TensorRT-LLM, Transformers).
- Familiarity with inference optimization techniques such as quantization, batching, caching, and routing.
- Experience working with multimodal AI models (e.g., vision-language, speech).
- Proficiency with DevOps tools (Docker, Kubernetes).
- Contributions to open-source ML/AI projects.
Preferred tooling:
- Programming languages: Python
- ML frameworks and libraries: vLLM, SGLang, TensorRT-LLM, Transformers, OpenAI/Anthropic SDKs
- Agentic pipeline frameworks: LangChain, LangSmith, smolagents, or equivalent
- API and web frameworks: FastAPI, Flask
- MLOps and DevOps tools: Kubernetes (K8s), Docker, Git
- Cloud platforms: AWS (SageMaker, Bedrock), GCP (Vertex AI), Azure (Azure ML)
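As a hedged illustration of the "vector databases and RAG implementation patterns" requirement, the sketch below shows the retrieval half of a minimal RAG loop using brute-force cosine similarity over toy embeddings. All names here (`retrieve`, `corpus`, the 3-d vectors) are hypothetical; a production system would use model-generated embeddings, a vector database, and the platform's served LLM APIs rather than a plain dict.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Return the top_k (score, doc_id) pairs, highest similarity first."""
    scored = [(cosine(query_vec, vec), doc_id) for doc_id, vec in corpus.items()]
    return sorted(scored, reverse=True)[:top_k]

# Toy 3-d "embeddings" standing in for a real vector store.
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.7, 0.7, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
hits = retrieve([1.0, 0.1, 0.0], corpus, top_k=2)
# The retrieved passages would then be inlined into the LLM prompt.
```

The generation half of the loop simply concatenates the retrieved passages into the prompt sent to the inference API; the retrieval step above is where vector databases replace the brute-force scan at scale.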
San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.
Pinterest helps more than half a billion users discover new ideas to design their lives. Our users come to Pinterest to explore ideas, running more than 6 billion search queries every month. Many of these queries are broad and express exploratory search intent, which means the search system must deeply understand that intent, help people explore content, personalize results, harness visual signals effectively, and surface the most engaging content first. The team also owns the query refinements and modules that help Pinners narrow broad exploratory queries down to more specific results. All of this makes Pinterest search a unique challenge, quite unlike other search systems, and an opportunity to innovate on a product that only Pinterest can build.
The Closeup Relevance team owns Closeup recommendations (a.k.a. Related Pins), which help Pinners explore topics they are interested in and find continuous inspiration. This is the largest recommendation surface on Pinterest, and we work with some of the largest datasets in the world, creating unique experiences for hundreds of millions of Pinners. Our system serves billions of daily impressions and is critical to the company's business. We are looking for a highly motivated Staff ML Engineer to work as a cross-team technical leader.
What you will do:
- Work in a cross-team environment with many talented ML engineers.
- Use a state-of-the-art recommendation tech stack and lead innovation and/or redesign of the full recommendation funnel.
- Act as a leader in ML innovations whose impact is felt across the organization.
- Work closely with other engineering teams at Pinterest, such as ATG, ML Platform, Closeup Product, Content Quality, and Core Infrastructure, to bring a superior Closeup experience to our users.
- Develop an inspiring technical vision and ambitious but grounded strategy for the team, and deliver outstanding results.
- Provide visibility for senior leadership into the team’s global impact.
- Partner with stakeholders to expand impact across the company, including product management and data scientists.
- Mentor and grow senior engineers on the team.
- Build a culture of innovation and excellence.
What we are looking for:
- 7+ years of professional experience in Machine Learning.
- 5+ years of proposing and delivering innovations from original research or adopting cutting edge ML research.
- 3+ years of experience in leading large-scale and mature ML recommendation systems teams, in an end-to-end fashion.
- Bachelor’s/Master’s degree in a relevant field such as computer science, or equivalent experience.
- Thrives in ambiguity; skilled in defining and exploring open-ended problems.
- Experience in setting and delivering technical directions at both team and organizational level.
- Expertise in machine learning modeling and infrastructure.
- Adept in statistics and in backend, batch, and real-time processing systems.
- Ability to drive the team roadmap end to end.
- A knack for product and impact on users of a consumer product.
Successful hires will support our machine learning team by processing and organizing scientific datasets relevant to drug discovery and development. Candidates should have an undergraduate degree in a STEM field, hands-on experience in a Linux environment, and familiarity with Python and SQL. Experience handling large-scale datasets, structuring data from diverse sources, and cataloging metadata to facilitate data discovery and maintain accurate data provenance is highly desirable. This is a two-year position with full benefits. For more information, visit www.DEShawResearch.com.
Please apply using the link below:
https://apply.deshawresearch.com/careers/Register?pipelineId=909&source=NeurIPS_1
The expected annual base salary for this role is USD 160,000 - USD 200,000. Our compensation package includes variable compensation in the form of sign-on and year-end bonuses, as well as generous benefits, including relocation and immigration assistance. The final salary will be determined based on multiple factors, including prior experience and educational background. We follow a hybrid work schedule, with employees working in-office from Tuesday through Thursday and having the option to work remotely on Monday and Friday.
D. E. Shaw Research, LLC is an equal opportunity employer.
Location: Beijing, China
Description
About Us: The Beijing Academy of Artificial Intelligence (BAAI), established in November 2018, is a non-profit research institute dedicated to becoming a global leader in AI innovation. We strive to create the world's premier ecosystem for academic and technological advancement, tackling the most fundamental and critical challenges in the field. BAAI aims to be the source of academic thought, foundational theory, top talent, industrial innovation, and policy for artificial intelligence, fostering sustainable development for humanity, our environment, and intelligence itself.
Open Research Tracks:
- Multimodal Large Model Researcher:
- Focus on exploring next-generation vision and multimodal foundation models (e.g., the Emu series). You will research novel algorithms and data systems, aiming to solve core challenges in multimodal perception and generation.
- Embodied AI Researcher:
- Research and develop Vision-Language-Action (VLA) models and hierarchical architectures. You will work on the full pipeline from simulation and synthetic data to real-world deployment, aiming to build powerful embodied AI base models with exceptional generalization capabilities, enabling robots to understand and execute long-horizon, complex instructions in novel environments.
- Researcher (AI for Science):
- Leverage AI methods to solve cutting-edge problems in the life sciences. You will design and develop new models and algorithms, participate in world-class scientific collaborations, and pioneer zero-to-one breakthroughs in biological computation.
We Are Looking For:
- A Ph.D. or outstanding Master's degree in Computer Science, Artificial Intelligence, Electronic Engineering, Life Sciences, or related fields.
- Solid foundation and research experience in at least one of the following areas:
- Multimodal: Deep understanding of mainstream large models and strong algorithm implementation skills.
- Embodied AI: Familiarity with VLA models, mainstream simulators, and experience with pre-training, fine-tuning, or real-world deployment.
- AI for Science: Strong mathematical foundation and machine learning knowledge, with a passion for solving life science problems.
- Proven Research Excellence: A track record of publications at top-tier conferences such as NeurIPS, ICML, ICLR, CVPR, ICRA, and RSS, or experience leading high-impact open-source projects.
What We Offer:
- Work on the Cutting Edge: Confront the field's most challenging problems. Your work will directly contribute to breakthroughs in next-generation AI.
- Mentorship & Collaboration: Work alongside and receive guidance from renowned scientists and senior researchers within a world-class team.
- Freedom & Resources: Enjoy an atmosphere of academic freedom and access to abundant, state-of-the-art computational resources to support your ambitious research ideas.
- Global Impact: Publish your research at leading global conferences and see it potentially transformed into projects that advance industry and science.
How to Apply: Please send your CV, representative papers, or project portfolio to Zstar@baai.ac.cn, using the email subject line: "NeurIPS - Z star - [Your Desired Track] - [Your Name]" (e.g., "NeurIPS - Z star - Multimodal Large Model - Xiao Zhi").
JR2003228
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on experience with one of our industry-leading Generative AI teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
What you will be doing: Design and implement algorithms that push the boundaries of generative AI, computer vision, robotics, and other technology domains relevant to NVIDIA’s business.
Collaborate with other team members, teams, and/or external researchers.
Transfer your research to product groups to enable new products or types of products. Deliverable results include prototypes, patents, products, and/or publishing original research.
What we need to see: Must be actively enrolled in a university pursuing a PhD degree in Computer Science, Electrical Engineering, or a related field, for the entire duration of the internship.
Depending on the internship, prior experience or knowledge requirements could include the following programming skills and technologies: Python, C++, CUDA, and deep learning frameworks (PyTorch, JAX, TensorFlow, etc.).
Strong background in research with publications at top conferences.
Excellent communication and collaboration skills.
Experience with large-scale model training is a plus.
Potential internships require research experience in at least one of the following areas:
- Multimodal Foundation Models
- Diffusion Models
- World Models
- Image, Video, or Audio Generation
- Large Language Models
- Vision-Language Models
- Action-Based Transformers
- Long Context Methods
- Physics-Based Simulation
- Flow-Based Generative Models
- Synthetic Data Generation
- AI for Science
- Protein/Molecule Generation
- Climate Modeling and Weather Forecasting
- Partial Differential Equations (PDEs)
Our internship hourly rates are standard pay based on the position, your location, year in school, degree, and experience. The hourly rate for our interns is USD 30 – USD 94.
You will also be eligible for Intern benefits.
Applications are accepted on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Abu Dhabi, UAE
The Department of Machine Learning at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) has faculty openings at all ranks (full/associate/assistant professor). Qualified candidates are invited to apply. Applicants are expected to conduct outstanding research and be committed to teaching. Successful applicants will be provided with an attractive remuneration package and allocated generous research funding each year. Long-term research on big problems is particularly encouraged.
More details about the positions and the submission are available at https://apply.interfolio.com/176242
Those attending NeurIPS are welcome to talk to us to learn more about the university and the positions.
Contact: Chih-Jen Lin (chihjen.lin@mbzuai.ac.ae)
The role We are seeking a highly skilled and customer-focused professional to join our team as a Cloud Solutions Architect specializing in cloud infrastructure and MLOps. As a Cloud Solutions Architect, you will play a pivotal role in designing and implementing cutting-edge solutions for our clients, leveraging cloud technologies for ML/AI teams and serving as a trusted technical advisor as they build their pipelines.
You’re welcome to work remotely from the US or Canada.
Your responsibilities will include:
- Act as a trusted advisor to our clients, providing technical expertise and guidance throughout the engagement. Conduct PoCs, workshops, presentations, and training sessions to educate clients on GPU cloud technologies and best practices.
- Collaborate with clients to understand their business requirements and develop solution architectures that align with their needs: design and document Infrastructure-as-Code solutions, documentation, and technical how-tos in collaboration with support engineers and technical writers.
- Help customers optimize pipeline performance and scalability to ensure efficient utilization of cloud resources and services powered by Nebius AI.
- Act as a single point of expertise on customer scenarios for the product, technical support, and marketing teams.
- Assist the Marketing department during events (hackathons, conferences, workshops, webinars, etc.).
We expect you to have:
- 5-10+ years of experience as a cloud solutions architect, system/network engineer, developer, or a similar technical role with a focus on cloud computing
- Strong hands-on experience with IaC and configuration management tools (preferably Terraform/Ansible) and Kubernetes, plus the ability to write Python code
- Solid understanding of GPU computing practices for ML training and inference workloads, including GPU software stack components such as drivers and libraries (e.g., CUDA, OpenCL)
- Excellent communication skills
- Customer-centric mindset
It will be an added bonus if you have:
- Hands-on experience with HPC/ML orchestration frameworks (e.g., Slurm, Kubeflow)
- Hands-on experience with deep learning frameworks (e.g., TensorFlow, PyTorch)
- Solid understanding of the cloud ML tools landscape from industry leaders (NVIDIA, AWS, Azure, Google)