

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Pinterest helps Pinners discover and do what they love. Core Engineering touches every surface the Pinner sees across the app and forms the front and center of the Pinterest experience for 500M+ Pinners every month. The Core team’s mission is to recommend inspiring & engaging pins for all our Pinners. We are looking for a Machine Learning Engineering Manager II who can drive the team’s technical direction, strategic planning, and execution. You'll have the opportunity to lead a team that works cross-functionally on innovative new product experiences, builds large-scale, low-latency systems and state-of-the-art machine learning models, and delivers great impact to our Pinners and business metrics.


What you’ll do:

  • Be responsible for major areas of search, recommendations, notifications, etc. for more than 500 million monthly active Pinterest users. Potential areas of impact include ML-based retrieval, multi-domain ranking, L1 modeling, candidate generators, sequence modeling, relevance modeling, and infrastructure efficiency and scalability.
  • Deeply understand the Pinterest product and drive the vision for the team, ensuring the team’s work directly contributes to the company’s goals.
  • Manage and mentor a team of Machine Learning engineers (L13–L16), providing technical guidance and support to help them grow their careers. Identify team needs and hire strong candidates.
  • Collaborate closely with other engineering teams at Pinterest, including Advanced Technology Group, Infrastructure, Content Understanding, and User Understanding, to enhance the experience for users.
  • Provide visibility to senior leadership regarding the team’s global impact.
  • Partner with stakeholders across the company, including product management, data scientists, and design, to shape the future of the content ecosystem and personalization at Pinterest.
  • Build a culture of excellence and expertise within the team.

What we’re looking for:

  • Degree in Computer Science, ML, NLP, Statistics, Information Sciences, related field, or equivalent experience.
  • Experience leading and working on a large-scale production recommendation, e-commerce, search or ads system that is based on state-of-the-art machine learning and big data technology.
  • Strong experience in related fields such as recommendation systems and applied machine learning is required; natural language processing and computer vision are a bonus.
  • Demonstrated ability to define and drive the strategic roadmap for scalable, production-quality systems from concept to execution.
  • Strong focus on product impact and user experience within a consumer-focused environment.
  • Minimum of 1 year of experience managing a high-performing machine learning engineering team.
  • 8+ years of experience in software development, with a proven track record of delivering impactful solutions.

We are Bagel Labs - a distributed machine learning research lab working toward open-source superintelligence.

Role Overview

We encourage curiosity-driven research and welcome bold, untested ideas that challenge conventional paradigms.
You will push the boundaries of diffusion models and distributed learning systems, testing hypotheses at the intersection of generative AI and scalable infrastructure.


Key Responsibilities

  • Prototype AI methodologies that can redefine distributed machine learning.
  • Pioneer next-generation diffusion architectures including rectified flows, EDM variants, and latent consistency models that scale across distributed infrastructures.
  • Develop novel sampling algorithms, guidance mechanisms, and conditioning strategies that unlock new capabilities in controllable generation (a toy guidance sketch follows this list).
  • Partner with cryptographers and economists to embed secure, incentive-aligned protocols into model pipelines.
  • Publish papers at top-tier ML venues, organize workshops, and align our roadmap with the latest academic advances.
  • Share insights through internal notes, external blog posts, and conference-grade write-ups (for example, blog.bagel.com).
  • Contribute to open-source code and stay active in the ML community.
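
As a toy illustration of one widely used guidance mechanism referenced above, here is a minimal classifier-free guidance step; the `eps_model` callable and all names are hypothetical placeholders, not Bagel's actual method or codebase.

    # Toy classifier-free guidance at a single denoising step.
    # `eps_model(x_t, t, cond)` is a hypothetical noise-prediction network.
    import torch

    def guided_eps(eps_model, x_t: torch.Tensor, t: torch.Tensor,
                   cond, uncond, guidance_scale: float = 5.0) -> torch.Tensor:
        """Blend conditional and unconditional noise predictions."""
        eps_cond = eps_model(x_t, t, cond)
        eps_uncond = eps_model(x_t, t, uncond)
        # Larger guidance_scale pushes samples toward the conditioning signal.
        return eps_uncond + guidance_scale * (eps_cond - eps_uncond)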

Who You Might Be

You are extremely curious and motivated by discovery.
You actively consume the latest ML research - scanning arXiv, attending conferences, dissecting new open-source releases, and integrating breakthroughs into your own experimentation.
You thrive on first-principles reasoning, see potential in unexplored ideas, and view learning as a perpetual process.


Desired Skills (Flexible)

  1. Deep expertise in modern diffusion models, score matching, flow matching, consistency training, and distillation techniques.
  2. Hands-on experience with distributed training frameworks such as FairScale, DeepSpeed, Megatron-LM, or custom tensor and pipeline parallelism implementations.
  3. Strong mathematical foundation in SDEs, ODEs, optimal transport, and variational inference for designing novel generative objectives.
  4. Clear and concise communication skills.
  5. Bonus: experience with model quantization (QLoRA, GPTQ), knowledge distillation for diffusion models, or cryptographic techniques for secure distributed training.

What We Offer

  • Top-of-market compensation and time to pursue open-ended research
  • A deeply technical culture where bold ideas are debated, stress-tested, and built
  • Remote flexibility within North American time zones
  • Ownership of work shaping decentralized AI
  • Paid travel to leading ML conferences worldwide

Apply now - help us build the infrastructure for open-source superintelligence.

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Pinterest helps Pinners discover and do what they love. Homefeed is the first surface Pinners see when they open the app, and it forms the front and center of the Pinterest experience for 400M+ Pinners every month. The Homefeed Relevance team’s mission is to recommend inspiring & engaging pins for all our Pinners. We are looking for a Tech Lead Architect who can drive cross-team engineering efforts to ship ML-driven product experiences to our Pinners. You'll have the opportunity to work on innovative new product experiences, build large-scale, low-latency systems and state-of-the-art machine learning models, and deliver great impact to our Pinners and business metrics.


What you'll do:

  • Improve relevance and the user experience on Homefeed.
  • Work on state-of-the-art large-scale applied machine learning projects.
  • Improve the efficiency and reliability of large-scale data processing and ML inference pipelines.
  • Improve our engineering systems to reduce latency and infrastructure cost while increasing capacity and stability.

What we're looking for:

  • Languages: Python, Java.
  • Machine Learning: PyTorch, TensorFlow.
  • Big data processing: Spark, Hive, MapReduce.
  • 7+ years’ experience with recommender systems or user modeling, implementing production ML systems at scale.
  • 7+ years’ experience with large-scale distributed backend services.
  • Experience working with deep learning and generative AI models.
  • Experience closely collaborating with product managers/designers to ship ML-driven user-facing products.
  • Bachelor’s in computer science or equivalent experience.

Stellenbosch University, South Africa


The Department of Mathematical Sciences at Stellenbosch University (South Africa) has a 2-year postdoctoral position available in the area of machine learning for wildlife monitoring and conservation. The project will look at:

  • zero-shot capabilities of foundation models on challenging real-world datasets typical in African wildlife and environment monitoring (e.g., camera trap imagery);
  • few-shot learning and generative modelling to deal with these large, unlabelled, long-tailed, noisy image sets.

Applicants must have obtained a PhD degree within the last 4 years, in a field related to the project's themes. The fellowship must commence by 1 March 2026 (preferably sooner).

Applications and supporting documents can be submitted through this online form.

Applications close 15 December 2025.

Enquiries: Prof. Willie Brink (wbrink@sun.ac.za).

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Within the Ads Delivery team, we try to connect the dots between the aspirations of Pinners and the products offered by our partners. We are looking for a Machine Learning Engineer/Economist with a strong theoretical and data-analysis background who understands market design concepts and has the engineering skills to bring them to market. We are looking for an economist who can get their hands dirty and work side by side with other engineers to advance the efficiency of the Pinterest Marketplace. The nature of the projects within this team requires a deep understanding of trade-offs, founded on both economic theory and data analysis, from the ideation phase all the way to launch review.


What you’ll do:

  • Build statistical models and production systems to improve marketplace design and operations for Pinners, Partners, and Pinterest.
  • Tune marketplace parameters (e.g., utility function), optimize ad diversity and load, implement auctions (a toy auction sketch follows this list), and model long‑term effects to reduce ad fatigue and improve advertiser outcomes.
  • Define and implement experiments to understand long term Marketplace effects.
  • Develop strategies to balance long and short term business objectives.
  • Drive multi-functional collaboration with peers and partners across the company to improve knowledge of marketplace design and operations.
  • Work across application areas such as marketplace performance analysis, advertiser churn/retention modeling, promotional bandwidth allocation, ranking/pricing/mechanism design, bidding/budgeting innovation, and anticipating second‑order effects for new ad offerings.
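
To ground the "implement auctions" point above, here is a purely pedagogical single-slot second-price auction; it is not Pinterest's actual mechanism, and the bids are made up for illustration (real ad auctions rank by bid times predicted quality, among other signals).

    # Toy single-slot second-price auction: highest bid wins, pays the runner-up bid.
    def second_price_auction(bids: dict[str, float], reserve: float = 0.0) -> tuple[str, float]:
        """Return (winner, clearing price)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        price = max(ranked[1][1], reserve) if len(ranked) > 1 else reserve
        return winner, price

    # Advertiser B wins and pays 2.0 (the second-highest bid), not their own 3.5 bid.
    print(second_price_auction({"A": 2.0, "B": 3.5, "C": 1.0}))  # ('B', 2.0)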

What we’re looking for:

  • Degree in Computer Science, Machine Learning, Economics, Operations Research, Statistics or a related field.
  • Industry experience in applying economics or machine learning to real products (e.g., ads auctions, pricing, marketplaces, or large‑scale recommendation/search systems).
  • Knowledge in auction theory, market design, and econometrics with excellent data analysis skills.
  • Strong software engineering and mathematical skills and proficiency with statistical methods.
  • Experience with online experimentation and causal inference (A/B testing, long‑running experiments, or similar) in large‑scale systems.
  • Practical understanding of machine learning algorithms and techniques.
  • Impact‑driven, highly collaborative, and an effective communicator; prior ads or two‑sided marketplace experience strongly preferred.

San Francisco


About this role

We’re looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact Data team, you’ll define the architectural foundation and tooling for our end-to-end data ecosystem.

You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that enable data-driven decisions across the company. If you are passionate about data infrastructure and solving complex data problems, we want to hear from you!

Tech stack

Core tools: Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment
Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions

What you’ll do

Data infrastructure:

  • Design efficient and reusable data models optimized for analytical and operational workloads.
  • Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources.
  • Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security.
  • Define and implement data governance frameworks — schema management, lineage tracking, and access control.

Data orchestration:

  • Build and manage robust ETL workflows using dbt and orchestration tools (e.g., Airflow, Prefect); a minimal sketch follows below.
  • Implement monitoring, alerting, and logging to ensure pipeline observability and reliability.
  • Lead automation initiatives to reduce manual operations and improve data workflow efficiency.

Data quality:

  • Develop comprehensive data validation, testing, and anomaly detection systems.
  • Establish SLAs for key data assets and proactively address pipeline or data quality issues.
  • Implement versioning, modularity, and performance best practices within dbt and SQL.

Collaboration & leadership:

  • Partner with product and engineering teams to deliver data solutions that align with downstream use cases.
  • Establish data engineering best practices and serve as a subject matter expert on our data pipelines, models, and systems.
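
As a minimal sketch of the dbt-plus-orchestration work described above (assuming Airflow 2.4+; the DAG id, schedule, and project path are hypothetical and not this team's actual setup):

    # Hypothetical daily Airflow DAG that builds and tests dbt models.
    # All identifiers (dag_id, path, target) are placeholders for illustration.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_build",
        start_date=datetime(2025, 1, 1),
        schedule="0 6 * * *",  # once a day at 06:00 UTC (Airflow 2.4+ `schedule` arg)
        catchup=False,
    ) as dag:
        # `dbt build` runs models and their tests, so data-quality failures
        # surface as a failed task with alerting attached to the DAG.
        dbt_build = BashOperator(
            task_id="dbt_build",
            bash_command="cd /opt/analytics/dbt_project && dbt build --target prod",
        )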

What we’re looking for

  • 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment.
  • Expert-level proficiency in SQL, dbt, and Python.
  • Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows.
  • Deep understanding of cloud-based data architectures (AWS, GCP) — including networking, IAM, and security best practices.
  • Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus.
  • Strong grasp of data modeling principles, warehouse optimization, and cost management.
  • Passionate about data reliability, testing, and monitoring — you treat pipelines like production software.
  • Thrive in ambiguous, fast-moving environments and enjoy building systems from the ground up.

AI Platform Engineer

Location: Boston (US) / Barcelona (Spain)

Position Overview

As an AI Platform Engineer, you are the bridge between AI research and production software. You will:

  • Build and maintain AI infrastructure: model serving, vector databases, embedding pipelines
  • Enable AI developers to deploy their work reproducibly and safely
  • Design APIs for AI inference, prompt management, and evaluation
  • Implement MLOps pipelines: versioning, monitoring, logging, experimentation tracking
  • Optimize performance: latency, cost, throughput, reliability
  • Collaborate with backend engineers to integrate AI capabilities into the product

Key Responsibilities

AI Infrastructure

  • Deploy and serve LLMs (OpenAI, Anthropic, HuggingFace, fine-tuned models)
  • Optimize inference latency and costs
  • Implement caching, rate limiting, and retry strategies
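
As a concrete illustration of the caching and retry point directly above, here is a hedged sketch of an exponential-backoff wrapper with a tiny in-memory cache; `call_llm` is a placeholder for whichever provider client is actually used.

    # Sketch only: retries with exponential backoff plus a naive in-memory cache
    # around an LLM call. `call_llm` is a hypothetical provider client function.
    import hashlib
    import time

    _cache: dict[str, str] = {}

    def cached_completion(prompt: str, call_llm, max_retries: int = 4) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in _cache:          # repeated prompts skip the network entirely
            return _cache[key]
        for attempt in range(max_retries):
            try:
                result = call_llm(prompt)
                _cache[key] = result
                return result
            except Exception:      # e.g. rate-limit or transient 5xx errors
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)   # back off 1s, 2s, 4s, ...
        raise RuntimeError("unreachable")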

MLOps & Pipelines

  • Version models, prompts, datasets, and evaluation results
  • Implement experiment tracking (Weights & Biases); a minimal logging sketch follows this list
  • Build CI/CD pipelines for model deployment
  • Monitor model performance and drift
  • Set up logging and observability for AI services
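
A minimal experiment-tracking sketch with Weights & Biases, as referenced above; the project name, config values, and logged metric are placeholders, not an existing project.

    # Minimal Weights & Biases logging sketch; names and values are illustrative.
    import wandb

    run = wandb.init(project="model-deployment-evals",
                     config={"model": "placeholder-llm", "temperature": 0.2})
    for step, val_loss in enumerate([0.91, 0.84, 0.80]):   # dummy metric values
        wandb.log({"val_loss": val_loss}, step=step)
    run.finish()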

API Development

  • Design and implement APIs (FastAPI); a minimal endpoint sketch follows this list
  • Create endpoints for prompt testing, model selection, and evaluation
  • Integrate AI services with backend application
  • Ensure API reliability, security, and performance
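
As a minimal sketch of the FastAPI point above (the route, schemas, and behavior are hypothetical; a real endpoint would call the serving layer behind auth, retries, and rate limits):

    # Hypothetical FastAPI inference endpoint; request/response schemas and the
    # route are placeholders, and the handler echoes input to stay runnable.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class InferenceRequest(BaseModel):
        prompt: str
        model: str = "default"

    class InferenceResponse(BaseModel):
        completion: str

    @app.post("/v1/inference", response_model=InferenceResponse)
    async def run_inference(req: InferenceRequest) -> InferenceResponse:
        # A production handler would dispatch to the model-serving layer
        # (with caching, rate limiting, and observability hooks).
        return InferenceResponse(completion=f"[{req.model}] echo: {req.prompt}")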

Collaboration & Enablement

  • Work with AI developers to productionize their experiments aimed at improving user workflows
  • Define workflows: notebook/test repository → PR → staging → production
  • Document AI infrastructure and best practices
  • Review code and mentor AI developers on software practices

Required Skills & Experience

Must-Have

  • 7+ years of software engineering experience (Python preferred)
  • Experience with LLMs and AI/ML in production: OpenAI API, HuggingFace, LangChain, or similar
  • Understanding of vector databases (Pinecone, Chroma, Weaviate, FAISS)
  • Cloud infrastructure experience: GCP (Vertex AI preferred) or AWS (SageMaker)
  • API development: FastAPI, REST, async programming
  • CI/CD and DevOps: Docker, Terraform, GitHub Actions
  • Monitoring and observability
  • Problem-solving mindset: comfortable debugging complex distributed systems
  • Operating experience with AI deployment in enterprise environments

Nice-to-Have

  • Experience fine-tuning or training models
  • Familiarity with LangChain, Pydantic AI or similar frameworks
  • Knowledge of prompt engineering and evaluation techniques
  • Experience with real-time inference and streaming responses
  • Background in data engineering or ML engineering
  • Understanding of RAG architectures
  • Contributions to open-source AI/ML projects

Tech Stack

Current Stack:

  • Languages: Python (primary), Bash
  • AI/ML: OpenAI API, Anthropic, HuggingFace, LangChain, Pydantic AI
  • Vector DBs: Pinecone, Chroma, Weaviate, or FAISS
  • Backend: FastAPI, SQLAlchemy, Pydantic
  • Cloud: GCP (Vertex AI, Cloud Run), Terraform
  • CI/CD: GitHub Actions
  • Experiment Tracking: MLflow, Weights & Biases, or custom
  • Containers: Docker, Kubernetes (optional)

What we offer:

  • Competitive compensation
  • Stock Options Plan: Empowering you to share in our success and growth.
  • Cutting-Edge Tools: Access to state-of-the-art tools and collaborative opportunities with leading experts in artificial intelligence, physics, hardware, and electronic design automation.
  • Work-Life Balance: Flexible work arrangements in one of our offices, with potential options for remote work.
  • Professional Growth: Opportunities to attend industry conferences, present research findings, and engage with the global AI research community.
  • Impact-Driven Culture: Join a passionate team focused on solving some of the most challenging problems at the intersection of AI and hardware.

About the multiple postdoctoral fellowship positions: Join the Deep Learning for Precision Health Lab at the University of Texas Southwestern to build next-generation AI for medicine, with direct access to large, deeply phenotyped datasets and clinical partners across UT Southwestern Medical Center, Children’s Medical Center, and Parkland Hospital. The roles are ideal for researchers who have recently completed (or will soon complete) a PhD (typically ≤3 years from the degree). Based in Dallas, one of the largest, most vibrant, and fastest-growing cities in the U.S., fellows work closely with Prof. Albert Montillo, PhD (Associate Professor, tenured; Fellow of IEEE / MICCAI / ISMRM / OHBM / SPIE / ASNR) and collaborate with neurologists, radiologists, psychiatrists, and neuroscientists on clinically grounded problems, aiming at high-impact publications and deployable methods.

Project tracks (pick one or blend across):

  1. Deep multimodal fusion models & GNNs: Integrate multi-contrast MRI & PET with electrophysiology, EHR/clinical data, and multi-omics via deep fusion and graph learning to predict disease trajectories and treatment response (opportunities in Parkinson’s, AD, ASD, epilepsy, depression).
  2. Image foundation models (FMs): Pretrain & fine-tune on very large medical image datasets (10k–100k+ subjects) for site-generalizable transfer to downstream tasks with per-subject explainability.
  3. Bayesian causal discovery method development: Combine neuroimaging, interventional data, and priors to infer effective brain connectivity and mechanisms in developmental disorders (epilepsy, ASD).
  4. Reinforcement learning to guide neuromodulation therapy: Fuse computational neuroscience models with data-driven FMs, optimizing neuromodulation under uncertainty.
  5. Speech + imaging for early dementia: Build multimodal FMs over voice (audio), language (linguistics/NLP), and neuroimaging for the earliest, most accurate dementia diagnosis.

Required Qualifications:

  1. We will only consider scholars who hold (or will soon hold) a PhD degree in CS, ECE, Applied Math, Computational Physics, BME, Bioinformatics, Statistics, or a related field, with machine learning and signal/audio/text or omics analysis experience (e.g., MRI/CT/PET; MEG/EEG; speech/voice; NLP/clinical text; genomics/proteomics).
  2. Proficient in DL programming in Python (PyTorch/TensorFlow), with strong mathematical training for fast DL prototyping.
  3. Major contributions in peer-reviewed publications at top venues: NeurIPS/ICLR/ICML/AAAI, MICCAI, CVPR/ICCV, journals such as TPAMI, TMI, MedIA, Nature Communications, and related high-impact outlets.

Appointment and support:

Full-time position with competitive salary & benefits, based in Dallas, TX, USA. The initial appointment is for 1 year, renewable; fellows should plan for a minimum 2-year commitment. US citizens are strongly encouraged to apply; visa sponsorship is available for exceptional international candidates. Start window: early 2026; later starts considered.

For consideration:

Reach out for an in-person meeting in San Diego at NeurIPS 2025 (or virtually afterwards) via email to Albert.Montillo@UTSouthwestern.edu with the subject “Postdoc-Applicant-MM/FM-NeurIPS” and include: (1) CV, (2) contact info for 3 references, (3) up to 3 representative publications, and (4) your preferred track(s) + start window. Positions open until filled; review begins immediately.

USA or International


The Renaissance Philanthropy Engineering Hub provides on-demand technical development assistance in support of grant-funded educational technology projects. Our support allows mission-driven teams to overcome hurdles and achieve their technical and impact goals. As a Research Engineer, you will provide technical strategy and hands-on guidance to product and research teams by developing proofs of concept that explore new applications of generative AI and validate the viability of emerging approaches. Visit the URL above to learn more!

About Handshake AI

Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired. Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale. This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now’s a great time to join Handshake. Here’s why:

  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role

As a Staff Research Scientist, you will play a pivotal role in shaping the future of large language model (LLM) alignment by leading research and development at the intersection of data quality and post-training techniques such as RLHF, preference optimization, and reward modeling. You will operate at the forefront of model alignment, with a focus on ensuring the integrity, reliability, and strategic use of supervision data that drives post-training performance. You’ll set research direction, influence cross-functional data standards, and lead the development of scalable systems that diagnose and improve the data foundations of frontier AI.

You will:

  • Lead high-impact research on data quality frameworks for post-training LLMs — including techniques for preference consistency, label reliability, annotator calibration, and dataset auditing.
  • Design and implement systems for identifying noisy, low-value, or adversarial data points in human feedback and synthetic comparison datasets (a toy sketch follows this list).
  • Drive strategy for aligning data collection, curation, and filtering with post-training objectives such as helpfulness, harmlessness, and faithfulness.
  • Collaborate cross-functionally with engineers, alignment researchers, and product leaders to translate research into production-ready pipelines for RLHF and DPO.
  • Mentor and influence junior researchers and engineers working on data-centric evaluation, reward modeling, and benchmark creation.
  • Author foundational tools and metrics that connect supervision data characteristics to downstream LLM behavior and evaluation performance.
  • Publish and present research that advances the field of data quality in LLM post-training, contributing to academic and industry best practices.
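
As a toy sketch of the data-point filtering bullet above (the agreement threshold and example votes are illustrative only; real pipelines combine many such signals):

    # Toy check that flags preference comparisons with low annotator agreement.
    from collections import Counter

    def low_agreement(votes: list[str], threshold: float = 0.7) -> bool:
        """Return True if the majority label's share falls below `threshold`."""
        top_count = Counter(votes).most_common(1)[0][1]
        return top_count / len(votes) < threshold

    print(low_agreement(["A", "A", "B"]))   # True: 2/3 agreement is below 0.7
    print(low_agreement(["A", "A", "A"]))   # False: unanimous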

Desired Capabilities

  • PhD or equivalent experience in machine learning, NLP, or data-centric AI, with a track record of leadership in LLM post-training or data quality research.
  • 5 years of academic or industry experience post-PhD.
  • Deep expertise in RLHF, preference data pipelines, reward modeling, or evaluation systems.
  • Demonstrated experience designing and scaling data quality infrastructure — from labeling frameworks and validation metrics to automated filtering and dataset optimization.
  • Strong engineering proficiency in Python, PyTorch, and ecosystem tools for large-scale training and evaluation.
  • A proven ability to define, lead, and execute complex research initiatives with clear business and technical impact.
  • Strong communication and collaboration skills, with experience driving strategy across research, engineering, and product teams.