

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


San Jose, CA, USA


We are seeking a creative and technically skilled Prompt Engineer to enhance large language model (LLM) performance across business-critical workflows. This position centers on designing, testing, and integrating prompting strategies that drive intelligent agents and enterprise use cases. You will work closely with AI engineers, product teams, and domain experts to ensure scalable, safe, and high-accuracy AI applications.

What You'll Do

  • Prompt Strategy & Design: Develop prompt templates and multi-step chains tailored to specific business functions (e.g., sales enablement, support, knowledge management). Design few-shot, zero-shot, and hybrid prompting patterns for enhanced reasoning and context retention. Maintain prompt libraries for reuse and version control.

  • Function Calling & Tool Use: Implement LLM function calling to trigger APIs, databases, or internal tools (see the sketch after this list). Build tool-use pipelines within agent workflows for complex task automation.

  • Conversation Flow & Persona Design: Define and build agent personas, roles, and behaviors for domain-specific applications. Manage multi-turn conversations, memory handling, and contextual continuity.

  • Enterprise-grade Optimization: Tailor prompts for performance in enterprise environments, prioritizing accuracy, privacy, fairness, and compliance. Collaborate with legal and security teams to mitigate hallucination, bias, and misuse risks.

  • Testing & Evaluation: Design evaluation methods and test cases to measure prompt accuracy, robustness, and safety on enterprise tasks, and iterate on prompts based on the results.

  • Deployment & Integration: Partner with AI Agent Engineers to integrate prompts into agent workflows and orchestration pipelines. Maintain documentation and workflows for deployment in production environments.
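
For context, here is a minimal, vendor-agnostic sketch of the function-calling pattern referenced above. It is illustrative only and not part of the posting: the `get_order_status` tool, its JSON schema, and the shape of the model-emitted tool call are all assumptions for illustration.

```python
import json

# Hypothetical tool schema in the JSON-schema style most LLM APIs accept.
ORDER_STATUS_TOOL = {
    "name": "get_order_status",
    "description": "Look up the fulfillment status of a customer order.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def get_order_status(order_id: str) -> dict:
    """Stand-in for a real API or database call."""
    return {"order_id": order_id, "status": "shipped"}

# Registry mapping tool names to Python callables.
TOOL_REGISTRY = {"get_order_status": get_order_status}

def dispatch_tool_call(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function.

    Assumes the model returns JSON like:
    {"name": "get_order_status", "arguments": {"order_id": "A-123"}}
    """
    call = json.loads(tool_call_json)
    fn = TOOL_REGISTRY[call["name"]]
    result = fn(**call["arguments"])
    # The serialized result would be appended to the conversation for the next model turn.
    return json.dumps(result)

if __name__ == "__main__":
    print(dispatch_tool_call('{"name": "get_order_status", "arguments": {"order_id": "A-123"}}'))
```

In practice the tool schema is passed to the model provider's API, and the returned call is validated against the schema before dispatch.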

What You Need to Succeed
  • 3+ years of experience in NLP, AI/ML product development, or application scripting
  • Strong grasp of LLM capabilities and limitations (e.g., OpenAI, Claude, Mistral, Cohere)
  • Experience crafting prompts and evaluation methods for enterprise tasks
  • Familiarity with frameworks like LangChain, Semantic Kernel, or AutoGen
  • Strong Python and API integration skills
  • Excellent written communication and structured thinking

Preferred Qualifications
  • Experience with LLM function calling, custom tool integration, and agent workflows
  • Background in UX writing, human-computer interaction, or instructional design
  • Understanding of enterprise compliance (e.g., SOC 2, GDPR) in AI systems
  • Bachelor's degree or equivalent experience in Computer Science, Computational Linguistics, Cognitive Science, or a related field

Toronto or remote

Mission: We are seeking a highly skilled Machine Learning Engineer to join our advanced model development team. This role focuses on pre-training, continued training, and post-training of models, with a particular emphasis on draft model optimization for speculative decoding and quantization-aware training (QAT). The ideal candidate has deep experience with training methodologies, open-weight models, and performance-tuning for inference.

Responsibilities & opportunities in this role: Lead pre-training and post-training efforts for draft models tailored to speculative decoding architectures. Conduct continued training and post-training of open-weight models for non-draft (standard) inference scenarios. Implement and optimize quantization-aware training pipelines to enable low-precision inference with minimal accuracy loss. Collaborate with model architecture, inference, and systems teams to evaluate model readiness across training and deployment stages. Develop tooling and evaluation metrics for training effectiveness, draft model fidelity, and speculative hit-rate optimization. Contribute to experimental designs for novel training regimes and speculative decoding strategies.
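
For readers unfamiliar with the draft-model work this role describes, below is a highly simplified, greedy-verification sketch of speculative decoding; it is not Groq's implementation. The `draft_logits_fn` and `target_logits_fn` callables are hypothetical stand-ins for a small draft model and a large target model, each returning next-token logits for a token-id prefix.

```python
import torch

def speculative_step(draft_logits_fn, target_logits_fn, prefix, k=4):
    """One speculative decoding step, greedy-verification variant.

    draft_logits_fn / target_logits_fn: hypothetical callables mapping a
    list of token ids to a 1-D tensor of next-token logits.
    Returns the tokens appended during this step.
    """
    # 1. The cheap draft model proposes k tokens autoregressively.
    proposed, ctx = [], list(prefix)
    for _ in range(k):
        tok = int(torch.argmax(draft_logits_fn(ctx)))
        proposed.append(tok)
        ctx.append(tok)

    # 2. The target model verifies position by position: accept while it
    #    agrees with the draft, otherwise emit its own token and stop.
    accepted, ctx = [], list(prefix)
    for tok in proposed:
        target_tok = int(torch.argmax(target_logits_fn(ctx)))
        accepted.append(target_tok)
        ctx.append(target_tok)
        if target_tok != tok:  # speculative "miss": stop accepting draft tokens
            break
    return accepted

if __name__ == "__main__":
    # Toy demo: both "models" prefer (last_token + 1) mod vocab, so every draft
    # token is accepted, illustrating a 100% speculative hit rate.
    vocab = 16
    def toy_logits(ctx):
        logits = torch.zeros(vocab)
        logits[(ctx[-1] + 1) % vocab] = 1.0
        return logits
    print(speculative_step(toy_logits, toy_logits, prefix=[0]))  # -> [1, 2, 3, 4]
```

In a production system the target model scores all proposed positions in a single batched forward pass (which is where the speedup comes from), and acceptance typically uses rejection sampling against the draft distribution rather than greedy agreement; the fraction of draft tokens accepted is the speculative hit rate referenced above.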

Ideal candidates have/are: 5+ years of experience in machine learning, with a strong focus on model training. Proven experience with transformer-based architectures (e.g., LLaMA, Mistral, Gemma). Deep understanding of speculative decoding and draft model usage. Hands-on experience with quantization-aware training, including PyTorch QAT workflows or similar frameworks. Familiarity with open-weight foundation models and continued/pre-training techniques. Proficient in Python and ML frameworks such as PyTorch, JAX, or TensorFlow.
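
As a concrete point of reference for the "PyTorch QAT workflows" mentioned above, here is a minimal eager-mode sketch using `torch.ao.quantization`; it is a toy illustration, not this team's actual pipeline.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert
)

class TinyNet(nn.Module):
    """Toy model standing in for a much larger network."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()       # quantizes the float input
        self.fc1 = nn.Linear(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(32, 4)
        self.dequant = DeQuantStub()   # returns float outputs

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)  # inserts fake-quant observers

# Short training loop with fake quantization active, so weights adapt to
# low-precision rounding during training.
optimizer = torch.optim.SGD(qat_model.parameters(), lr=1e-3)
for _ in range(10):
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    loss = nn.functional.cross_entropy(qat_model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Convert to a real int8 model for low-precision inference.
qat_model.eval()
int8_model = convert(qat_model)
print(int8_model(torch.randn(1, 16)))
```

The fake-quant observers expose the model to int8 rounding effects during training, which is what keeps accuracy loss small after conversion.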

Preferred Qualifications: Experience optimizing models for fast inference and sampling in production environments. Exposure to distributed training, low-level kernel optimizations, and inference-time system constraints. Publications or contributions to open-source ML projects.

Attributes of a Groqster:
  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all; we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - No-limit thinking, fueling informed risk taking

Location: Hybrid, 2-3 days a week on-site in San Mateo, CA.


BigHat is opening an ML Fellowship. We've got an awesome high-throughput wetlab that pumps proprietary data into custom data and ML Ops infra to power our weekly design-build-train loop. Come solve hard-enough-to-be-fun problems in protein engineering in service of helping patients!

Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.

Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale. This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now's a great time to join Handshake. Here's why:
  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world's leading AI research labs.
  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role

We're expanding our team and seeking an experienced Engineering Manager to lead our Annotations Team, responsible for building and maintaining the tools that power our data creation pipeline. This is a unique opportunity to play a pivotal role in a fast-growing space, working directly with leading AI labs and tech companies to build high-impact products that empower our Fellows in driving AI innovation and that enhance research workflows, model evaluation processes, and domain-specific AI applications. The team develops products used by operators and Fellows to create data that researchers use to train the latest state-of-the-art language models. As an Engineering Manager, you'll play a key role in shaping technical direction, fostering engineering excellence, and ensuring your team delivers reliable, scalable, and user-centric solutions. You'll manage a talented group of engineers focused on improving workflow efficiency, quality assurance, and scalability across annotation systems. This is a highly cross-functional role that sits at the intersection of engineering, operations, and product, ensuring our data creation platform is robust, intuitive, and ready to meet evolving research needs.

Location: San Francisco, CA | 5 days/week in-office

  • Lead and grow a high-performing engineering team, fostering a culture of ownership, inclusion, and technical excellence.
  • Partner with product and design teams to define roadmaps, scope projects, and deliver end-to-end solutions aligned with user needs.
  • Drive technical architecture and decision-making, ensuring systems are scalable, maintainable, and performant.
  • Mentor engineers across experience levels, supporting their growth through regular feedback, coaching, and development opportunities.
  • Oversee day-to-day execution, ensuring high-quality code, effective reviews, and healthy team velocity.

Desired Capabilities
  • 2–4+ years of experience in an engineering leadership role (team lead, tech lead, or manager), in addition to prior experience as a hands-on engineer.
  • Strong technical background in full-stack development, particularly with ReactJS, TypeScript, and backend technologies like Node.js or Python.
  • Experience leading teams building end-user products with a focus on quality, usability, and performance.
  • Solid understanding of system architecture, relational databases (e.g., PostgreSQL), and cloud infrastructure (e.g., AWS, GCP).
  • Proven track record of successfully shaping product direction and delivering results in a fast-paced environment.
  • Excellent communication skills.

Redwood City, CA


Biohub is leading the new era of AI-powered biology to cure or prevent disease through its 501c3 medical research organization, with the support of the Chan Zuckerberg Initiative.

The Team

Biohub supports the science and technology that will make it possible to help scientists cure, prevent, or manage all diseases by the end of this century. While this may seem like an audacious goal, in the last 100 years, biomedical science has made tremendous strides in understanding biological systems, advancing human health, and treating disease.

Achieving our mission will only be possible if scientists are able to better understand human biology. To that end, we have identified four grand challenges that will unlock the mysteries of the cell and how cells interact within systems — paving the way for new discoveries that will change medicine in the decades that follow:

  • Building an AI-based virtual cell model to predict and understand cellular behavior
  • Developing novel imaging technologies to map, measure, and model complex biological systems
  • Creating new tools for sensing and directly measuring inflammation within tissues in real time, to better understand inflammation, a key driver of many diseases
  • Harnessing the immune system for early detection, prevention, and treatment of disease

The Opportunity

At Biohub, we are generating unprecedented scientific datasets that drive biological modeling innovation:

  • Billions of standardized cells of single-cell transcriptomic data, with a focus on measuring genetic and environmental perturbations
  • Tens of thousands of donor-matched DNA & RNA samples
  • PB-scale static and dynamic imaging datasets
  • TB-scale mass spectrometry datasets
  • Diverse, large multi-modal biological datasets that enable biological bridges across measurement types and facilitate multi-modal model training to define how cells act

After model training, we make all data products available through public resources like CELLxGENE Discover and the CryoET Portal, used by tens of thousands of scientists monthly to advance understanding of genetic variants, disease risk, drug toxicities, and therapeutic discovery.

As a Senior Staff Data Scientist, you'll lead the creation of groundbreaking imaging datasets that decode cellular function at the molecular level, describe development, and predict responses to genetic or environmental changes. Working at the intersection of data science, biology, and AI, you'll define data needs, format standards, analysis approaches, quality metrics, and our technical strategy, creating systems to ingest, transform, validate, and deploy data products.

Success for this role means delivering high-quality, usable datasets that directly address modeling challenges and accelerate scientific progress. Join us in building the data foundation that will transform our understanding of human biology and move us along the path to curing, preventing, and managing all disease.

Toronto or Remote from US


Mission: As Senior Staff Compiler Engineer, you will be responsible for defining and developing compiler optimizations for our state-of-the-art compiler, targeting Groq's revolutionary LPU, the Language Processing Unit.

In this role you will drive the future of Groq's LPU compiler technology. You will be in charge of architecting new passes, devising innovative scheduling techniques, and developing new front-end language dialects to support the rapidly evolving ML space. You will also benchmark and monitor key performance metrics to ensure that the compiler produces efficient mappings of neural network graphs to the Groq LPU.

Ideal candidates have experience with LLVM and MLIR; knowledge of functional programming languages is an asset. Knowledge of ML frameworks such as TensorFlow and PyTorch, and of portable graph models such as ONNX, is also desired.

Responsibilities & opportunities in this role:
  • Compiler Architecture & Optimization: Lead the design, development, and maintenance of Groq's optimizing compiler, building new passes and techniques that push the performance envelope on the LPU.
  • IR Expansion & ML Enablement: Extend Groq's intermediate representation dialects to capture emerging ML constructs, portable graph models (e.g., ONNX; see the sketch after this list), and evolving deep learning frameworks.
  • Performance & Benchmarking: Benchmark compiler outputs, diagnose inefficiencies, and drive enhancements to maximize quality-of-results on LPU hardware.
  • Cross-Disciplinary Collaboration: Partner with hardware architects and software leads to co-design compiler and system improvements that deliver measurable acceleration gains.
  • Leadership & Mentorship: Mentor junior engineers, review contributions, and guide large-scale, multi-geo compiler projects to completion.
  • Innovation & Impact: Publish novel compilation techniques and contribute thought leadership to top-tier ML, compiler, and computer architecture conferences.
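
To make the "portable graph models" reference concrete, here is a small illustrative sketch (not Groq tooling) that exports a toy PyTorch module to ONNX and walks the resulting graph: roughly the kind of representation a compiler front end ingests before lowering to its own IR dialects. It uses `torch.onnx.export` and the `onnx` package; the model and file name are placeholders.

```python
import torch
import torch.nn as nn
import onnx

# Toy network to export as a portable graph.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
torch.onnx.export(net, torch.randn(1, 8), "toy.onnx", opset_version=17)

# Load the exported graph and list its operators -- the kind of inspection a
# compiler front end performs before lowering each op into an IR dialect.
model = onnx.load("toy.onnx")
onnx.checker.check_model(model)
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```

A front end typically consumes a graph like this, lowers each operator into its IR, and then applies the optimization and scheduling passes described above.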

Ideal candidates have/are:
  • 8+ years of experience in computer science/engineering or a related area
  • 5+ years of direct experience with C/C++ and LLVM or other compiler frameworks
  • Knowledge of spatial architectures such as FPGAs or CGRAs an asset
  • Knowledge of functional programming an asset
  • Experience with ML frameworks such as TensorFlow or PyTorch desired
  • Knowledge of ML IR representations such as ONNX and other deep learning graph models desired

Additionally nice to have:
  • Strong initiative and personal drive, able to self-motivate and drive projects to closure
  • Keen attention to detail and high levels of conscientiousness
  • Strong written and oral communication; ability to write clear and concise technical documentation
  • Team-first attitude, no egos
  • Leadership skills and ability to motivate peers
  • Optimistic outlook, coaching and mentoring ability

Attributes of a Groqster:
  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all; we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - No-limit thinking, fueling informed risk taking

Location: Bay Area or remote


Description

Goaly AI is hiring multiple AI research & AI infra positions, full-time and intern!

=== About Goaly === We are an early-stage stealth mode AI startup located in Silicon Valley, founded by ex-FAANG AI engineers & researchers who have led multiple GenAI products from research to production, powering billion-user products. We are backed by accredited investors and AI leadership from top tech firms, primarily serving rising AI labs and enterprise clients. We are on a mission to make frontier AI accessible and affordable to every business.

=== Why This Matters for Your Research === You will own end-to-end LLM/SLM development, working on cutting-edge model architectures and novel optimization techniques. Your research is backed by abundant GPU clusters, and we offer full support to publish breakthrough results at top conferences, including NeurIPS, ICML, and CVPR.

We are a super fun team that works and plays hard together. We are actively hiring AI researchers and AI infra engineers (intern and full-time). Our job postings: https://goaly.ai/jobs?job_function=research
Send your cool projects / resume to recruiting@goaly.ai and let's YOLO together!

Abu Dhabi, UAE


The Department of Machine Learning at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) has faculty openings at all ranks (full/associate/assistant professors). Qualified candidates are invited to apply. Applicants are expected to conduct outstanding research and be committed to teaching. Successful applicants will be provided an attractive remuneration package and generous annual research funding. Long-term research on big problems is particularly encouraged.

More details about the positions and the submission are available at https://apply.interfolio.com/176242

Those attending NeurIPS are welcome to talk to us to learn more about the university and the positions.

Contact: Chih-Jen Lin (chihjen.lin@mbzuai.ac.ae)

Postdoctoral Researcher – AI/ML for Scientific Discovery

The Center for Artificial Intelligence in Science and Engineering (ARTISAN) at Georgia Tech is seeking a highly motivated Postdoctoral Researcher to advance AI/ML methods for large-scale scientific challenges—including protein structure prediction, molecular dynamics, quantum chemistry, and computational neuroscience.

The immediate project centers on protein structure prediction and molecular dynamics. The emphasis is on creating innovative AI approaches grounded in protein biophysics, paired with interpretability methods, to advance foundational scientific theories in protein science. The position is ideal for candidates excited by frontier questions at the intersection of AI, scientific computing, and molecular science.

Georgia Tech’s ARTISAN center offers a rich, interdisciplinary environment that brings together computational scientists, biologists, data scientists, and AI engineers. The candidate will be supported by a strong engineering team and have access to top-tier computing resources, including new AI4Science–dedicated HPC frameworks and GPU clusters.

Contact: For questions or to submit materials, contact artisan-hiring at groups.gatech.edu. If you are at NeurIPS, please reach out to Giri Krishnan (giri@gatech.edu) to meet in person and discuss.

Cupertino, California


Horizon Robotics (HKEX: 9660) is a leading provider of Smart Driving solutions for passenger vehicles, empowered by our proprietary software and hardware technologies. Our solutions combine cutting-edge algorithms, purpose-built software, and processing hardware, providing the core technologies for smart driving that enhance the safety and experience of drivers and passengers. Horizon Robotics is a key enabler of the smart vehicle transformation and commercialization, with our integrated solutions deployed at scale.

The Silicon Valley Applied Research Lab is a research team located in Silicon Valley, dedicated to developing advanced algorithms and models for Advanced Driver Assistance Systems (ADAS), Autonomous Driving (AD), and other generic robotics systems.

If you are looking for a role to explore, develop and innovate machine learning algorithms for AD and robotics technologies, you are welcome to join us.

Your Daily Practice

As an applied research scientist, you will take part in:

  • Devising novel deep-learning based algorithms and converting them to prototypes of autonomous driving solutions, including but not limited to training foundation models, post-training with reinforcement learning, and world models.
  • Practicing end-to-end machine learning skills, including data pipelining, model construction and fine-tuning, and comprehensive performance testing (see the sketch after this list).
  • Following closely with academia, identifying the latest trends and extending them to real industrial development.
  • Publishing research papers in top notch conferences in machine learning domain.
  • Showcasing the work via presentation in internal and external talks, conferences, and workshops.
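
As a minimal illustration of the end-to-end workflow named above (a generic sketch, not Horizon's stack): a tiny PyTorch fine-tuning loop over synthetic data, covering data pipelining, model construction, training, and a quick evaluation. The data and model are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic "perception" data standing in for a real driving dataset.
X, y = torch.randn(256, 32), torch.randint(0, 3, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Small classifier head standing in for a pretrained backbone being fine-tuned.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for epoch in range(3):
    for xb, yb in loader:
        loss = nn.functional.cross_entropy(model(xb), yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# "Comprehensive performance testing" reduced here to a single accuracy check.
with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"train accuracy: {acc:.2f}")
```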

What You Must Have

  • PhD / MS degree in computer vision, machine learning or a related field with multiple research publications in top conferences or journals; alternatively equivalent years of industry experience solving CV problems which do not have readily available solutions.
  • Expertise in at least one of these specific areas: deep learning-based perception, prediction and planning, vision-language models, and reinforcement learning.
  • Track record of driving ML research projects from start to completion, including conception, problem definition, experimentation, iteration, and publication or productization.
  • Strong programming skills in Python and/or C++.
  • Extensive experience with ML frameworks such as PyTorch and Jax.
  • Strong verbal and written communication skills.

Bonus Points!
  • Track record of innovative solutions to real-world problems in the machine learning domain.
  • Experience with closed-loop methods for solving prediction and planning problems.

This is a hybrid role with the expectation of working at least 3 days per week in our Cupertino office. The base pay range for this full-time Applied Research Scientist position is between $150,000 and $300,000/year plus equity incentive, depending on your experience, qualifications, education, skills and other related factors.

This position is also eligible for an annual performance bonus and a competitive benefits package. Employees have day one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, Paid Holidays, Sick Days and Personal Time Off. We also sponsor H-1B visas and green card petitions.

Horizon Robotics is committed to being an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other prescribed category set forth in federal or state regulations.