NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Search Opportunities
USA – Austin, Seattle
Job Overview
At Arm, we’re enabling the next wave of AI innovation - from cloud to edge, data center to device. Our AI Product Managers play a pivotal role in turning cutting-edge research and engineering into real-world solutions that scale across billions of devices. As part of a globally trusted ecosystem, you’ll define and shape products that power the future of intelligent, energy-efficient computing.
We’re looking for AI-focused Product Managers who thrive at the intersection of technology, strategy, and customer need - individuals who can align market trends with technical innovation, and help bring transformative AI products to life.
Responsibilities
As an AI Product Manager at Arm, your role may include:
- Defining and owning product roadmaps for AI/ML software, hardware, tools, or platforms
- Identifying emerging AI market opportunities and customer needs across domains
- Working closely with engineering, research, and design teams to guide product development
- Collaborating with business development and partner teams to support go-to-market strategy
- Ensuring delivery of impactful, scalable solutions aligned with Arm’s long-term vision
Required Skills and Experience
- Demonstrated experience in product management, technical program management, or product strategy
- Familiarity with AI/ML technologies, platforms, or development workflows
- Strong ability to synthesize market trends, customer feedback, and technical input into clear product direction
- Excellent cross-functional collaboration and communication skills
- Ability to work across a range of stakeholders, from engineers to executives
- A strategic mindset with a drive to build products that solve real problems at scale
“Nice to Have” Skills and Experience
- Experience with AI deployment in edge, embedded, cloud, or mobile environments
- Exposure to AI frameworks (e.g., TensorFlow, PyTorch), ML compilers, or hardware accelerators
- Background in developer tooling, ML model optimization, or platform product management
- Prior involvement in launching or scaling AI or infrastructure products
San Francisco
We are seeking a talented software engineer with generative AI experience who is deeply proficient in Python and TypeScript to join our dynamic and growing team at Writer. As a key member of our engineering team, you will play a crucial role in building our generative AI software. Your primary focus will be on developing a state-of-the-art platform that harnesses generative AI technologies to deliver seamless, scalable solutions. You will work closely with cross-functional teams to design, implement, and maintain features that enhance the user experience, drive product growth, establish best practices, and integrate cutting-edge AI capabilities.
Your responsibilities
- Design and develop robust and scalable generative AI services using Python and open source frameworks such as Writer Agent Builder, LangChain, and n8n.
- Implement responsive and user-friendly frontend interfaces, leveraging technologies like React, TypeScript, and modern web frameworks.
- Work with cloud platforms such as AWS, GCP, or Azure to deploy and scale applications.
- Develop and integrate high-performance, low-latency APIs for AI-driven features.
- Ensure code quality through testing, peer reviews, and continuous integration.
- Collaborate with the team to build and maintain generative AI agents.
- Participate in architectural design discussions and promote engineering best practices.
- Continuously improve the application’s performance, scalability, and maintainability.
Is This You?
- 5+ years of software engineering experience, with expert-level Python.
- Experience building web applications using FastAPI and asyncio (an illustrative sketch follows this list).
- Experience building generative AI applications in production environments.
- Expertise with microservices architecture and RESTful APIs.
- Solid understanding of database technologies such as PostgreSQL, and of vector databases such as Elasticsearch, Pinecone, Weaviate, or similar.
- Familiarity with cloud platforms (AWS, GCP, etc.) and containerized environments (Docker, Kubernetes).
- Familiarity with MCP, developer tooling, and AI agents, or contributions to open source.
- You are committed to writing clean, maintainable, and scalable code, following best practices in software development.
- You enjoy solving complex problems and continuously improving the performance and scalability of systems.
- You thrive in collaborative environments, working closely with cross-functional teams to build impactful features.
- Proven ability to help teams adopt technical best practices.
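As a rough illustration of the low-latency, AI-driven API work described above, here is a minimal sketch of an async FastAPI service. The endpoint path, request/response models, and the `generate_text` helper are hypothetical placeholders rather than Writer's actual stack; a real implementation would call a generative AI backend.

```python
# Illustrative sketch only: a minimal async FastAPI service exposing a text-generation endpoint.
# The model call is stubbed out; in practice it would hit an LLM provider or in-house model server.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    text: str

async def generate_text(prompt: str, max_tokens: int) -> str:
    # Placeholder for a real async call to a generative model.
    await asyncio.sleep(0)  # yield to the event loop, keeping the service responsive
    return f"(up to {max_tokens} tokens continuing: {prompt[:40]}...)"

@app.post("/v1/generate", response_model=GenerateResponse)
async def generate(req: GenerateRequest) -> GenerateResponse:
    text = await generate_text(req.prompt, req.max_tokens)
    return GenerateResponse(text=text)
```

Served with any ASGI server (e.g., uvicorn), this pattern keeps request handling non-blocking while the model call is awaited.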
Amsterdam
Flow Traders is committed to leveraging the most recent advances in machine learning, computer science, and AI to generate value in the financial markets. We are looking for Quantitative Researchers to join this challenge.
As a Quantitative Researcher at Flow Traders, you are an expert in mathematics and statistics. You are passionate about translating challenging problems into equations and models, and have the ability to optimize them using cutting-edge computational techniques. You collaborate with a global team of researchers and engineers to design, build, and optimize our next generation of models and trading strategies.
Are you at the top of your quantitative, modeling, and coding game, and excited by the prospect of demonstrating these skills in competitive live markets? Then this opportunity is for you.
IMC Trading is seeking researchers with a proven track record to apply state-of-the-art machine learning and deep learning to solve challenging trading problems. This role is part of a central ML research team that collaborates across trading teams at IMC. The ideal candidate will have experience working with other researchers and engineers to build and continuously improve models, systems, and research tooling. We firmly believe that success for research-driven efforts lies in bringing together skills in ML, statistics, and trading intuition, as well as a problem-solving mindset and pragmatism. This is an opportunity to dive deep into feature engineering and alpha research, applying a wide range of ML models as well as researching custom model designs.
Position: Data Science Intern
Location: 660 5th Avenue, New York, NY
Viking Global Investors (“Viking”) is a global investment firm founded in 1999, managing over $53 billion in capital across public and private investments. With offices in Stamford, New York, Hong Kong, London, and San Francisco, Viking is registered with the U.S. Securities and Exchange Commission. For more information, visit www.vikingglobal.com.
Internship Opportunity
The Data Science Intern will collaborate with the Data Science team, Investment Analysts, and Data Engineers to analyze and expand Viking’s alternative data assets, generating actionable investment insights. This role is ideal for analytical, creative problem solvers eager to apply their data science skills to pressing research questions. Interns will work both independently and alongside quantitative professionals, with flexibility in duration, start dates, and full-time/part-time options.
Informational Webinar: October 30, 6:00–7:00pm ET
Register here
Responsibilities
- Develop and deliver predictive analytics on companies, sectors, and macroeconomic trends
- Generate investment insights from alternative data analysis
- Create methodologies to identify and evaluate private company investment opportunities
- Identify and assess new data sources
- Streamline data lifecycle, operating models, and processes
- Test and evaluate new technologies for the big data platform
- Build centralized, automated analyses and processes
- Share information and insights to support Viking’s research efforts
Qualifications
- Currently enrolled in a Master’s or PhD program (3rd year+) in Data Science, Economics, Finance, Statistics, or related quantitative fields
- Strong communication skills, with the ability to explain complex ideas to non-technical audiences
- Independent thinker, capable of leading research projects with partial supervision
- Proficient in Python, statistical libraries, SQL, BI tools (e.g., Tableau), and cloud technologies
- Sound judgment and big-picture perspective
- Passionate about research, proactive, and self-motivated
- Committed to excellence
Application
Submit your resume and a 1–2 page supplement describing a recent quantitative research project via the Viking career site.
Supplement must include:
- Research question
- Data used
- Approach and statistical methodologies
- Findings
- Computational environment (language, main libraries, etc.)
Application Deadline: November 11, 2025 (11:59 PM EST)
Interviews: Conducted virtually in December
Compensation & Benefits
- Base Salary Range (NYC): $175,000 – $250,000 annually
- Actual compensation determined by skill set, experience, education, and qualifications
Equal Opportunity Employer
Viking is an equal opportunity employer. For questions or accommodation requests, contact:
Viking Campus Recruiting Team
campusrecruiting@vikingglobal.com
San Jose, CA, USA
We are looking for a hands-on, systems-oriented AI Agent Engineer to design, build, and maintain intelligent agents that drive automation and business impact across the enterprise. This role is responsible for the full lifecycle of agent development — from design to versioning, orchestration, and continuous learning.
You’ll contribute directly to scaling our AI strategy by engineering reusable components, optimizing agent workflows, and ensuring real-world performance in production environments.
What You'll Do
- Agent Development: Build and fine-tune specialized AI agents for targeted customer experience use cases such as discovery, support, and lead qualification. Implement prompt engineering strategies, memory handling, resource management, and tool-calling integrations (a simplified sketch of such a loop follows this list).
- Multi-Agent Communication: Adopt agent-to-agent communication protocols and handoff mechanisms to enable cooperative task execution and delegation. Build orchestrated workflows across agents using frameworks like LangChain, AutoGen, or Semantic Kernel.
- Templates & Reusability: Create reusable agent templates and modular components to accelerate deployment across business units. Build plug-and-play configurations for domain-specific requirements.
- Lifecycle Management & Monitoring: Track and improve conversation quality, task success rate, user satisfaction, and performance metrics. Automate monitoring of agent behavior using observability tools (e.g., Arize, LangSmith, custom dashboards).
- Continuous Improvement: Implement learning workflows, including human-in-the-loop feedback and automatic retraining. Refine prompts and model behavior through structured experimentation and feedback loops.
- Maintenance & Governance: Handle knowledge base updates, drift detection, performance degradation, and integration of new business logic. Ensure agents stay aligned with evolving enterprise data sources and compliance requirements.
- Deployment: Manage agent versioning, testing pipelines (unit, regression, UX), and controlled rollout processes. Collaborate with DevOps, QA, and infrastructure teams to ensure scalable deployments.
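As an illustration of the tool-calling integrations mentioned in the list above, here is a framework-agnostic sketch of an agent loop. The `call_llm` stub, the `lookup_order` tool, and the message format are hypothetical stand-ins; a production agent would use a real LLM API and a framework such as LangChain, AutoGen, or Semantic Kernel.

```python
# Illustrative sketch only: a minimal tool-calling agent loop with a stubbed LLM client.
import json
from typing import Any, Callable

def lookup_order(order_id: str) -> dict[str, Any]:
    # Hypothetical business tool exposed to the agent.
    return {"order_id": order_id, "status": "shipped"}

TOOLS: dict[str, Callable[..., Any]] = {"lookup_order": lookup_order}

def call_llm(messages: list[dict[str, str]]) -> dict[str, Any]:
    # Placeholder for a real chat-completion call; a real model decides whether to request a tool.
    if any(m["role"] == "tool" for m in messages):
        return {"tool": None, "arguments": None, "content": "Your order A-123 has shipped."}
    return {"tool": "lookup_order", "arguments": {"order_id": "A-123"}, "content": None}

def run_agent(user_message: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.get("tool"):  # the model asked to call a tool
            result = TOOLS[reply["tool"]](**reply["arguments"])
            messages.append({"role": "tool", "content": json.dumps(result)})
            continue
        return reply.get("content") or ""
    return "Stopped after reaching the step limit."

print(run_agent("Where is my order A-123?"))
```

The same loop structure (model call, optional tool execution, result fed back as context) underlies most agent frameworks; memory handling and observability hooks would wrap around it in production.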
What you need to succeed
- 3–5+ years of experience in AI/ML engineering, NLP systems, or backend development
- Strong proficiency with LLM frameworks (e.g., OpenAI APIs, LangChain, RAG pipelines)
- Experience building conversational agents or workflow bots in production environments
- Familiarity with cloud platforms (AWS/GCP/Azure), REST APIs, Python, and containerization (Docker, K8s)
- Comfort with prompt design, vector databases, and memory handling strategies
Preferred Qualifications
- Experience with multi-agent frameworks or agent orchestration systems
- Familiarity with observability tools, data labeling workflows, or synthetic data generation
- Background in conversational design or dialogue management systems
- Degree in Computer Science, Data Science, Engineering, or a related field
San Jose, CA, USA
Adobe is looking for a Senior Machine Learning Engineer to help shape the future of agentic AI in the enterprise. In this role, you will design, build, and scale cutting-edge platforms and products that redefine how enterprises create and optimize customer experiences and marketing campaigns. You will play a key role in advancing AEP Agent Orchestrator, a foundational platform layer that manages and connects Adobe and third-party agents. You will work in a fast-moving, high-impact environment with a team of talented engineers and applied scientists, where creativity, collaboration, and data-driven innovation come together to make a real difference.
What you'll do
- Design and develop state-of-the-art agentic AI systems and platforms powered by generative AI, including engineering work such as defining APIs, integrating with UIs, deploying cloud services, and CI/CD, as well as implementing ML- and LLM-Ops best practices and delivering high-quality, production-ready code.
- Design and build ML workflows for enterprise-scale model customization, serving, and ecosystem integration.
- Partner with researchers and applied scientists on productization of innovations.
- Engage in the full product lifecycle: design, deployment, and production operations.
What you need to succeed
The ideal candidate will have the following background:
- PhD or MS degree in Computer Science or a related field (required).
- 5+ years of experience in machine learning, including production-scale deployments.
- Experience with agile development and short release cycles.
- Good understanding of statistical modeling, machine learning, or analytics concepts; ability to quickly learn new skills and work in a fast-paced team.
- Proficiency in one or more programming languages such as Python and Java; familiarity with cloud development on Azure/AWS.
- Experience using relational (MySQL, Postgres) and NoSQL datastores (Redis, Elasticsearch, Snowflake), along with data access patterns and strategies.
- Experience working with at least one deep learning framework such as TensorFlow or PyTorch.
- Experience with LLMs, including prompt/context engineering, modern LLM APIs, fine-tuning, etc.
- Experience working with both research and product teams.
- Excellent problem-solving and analytical skills.
- Excellent communication and relationship-building skills.
San Francisco
About this role
We’re looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact Data team, you’ll define the architectural foundation and tooling for our end-to-end data ecosystem.
You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that enable data-driven decisions across the company. If you are passionate about data infrastructure, and solving complex data problems, we want to hear from you!
Tech stack
Core tools: Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment
Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions
What you’ll do
Data infrastructure:
- Design efficient and reusable data models optimized for analytical and operational workloads.
- Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources.
- Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security.
- Define and implement data governance frameworks: schema management, lineage tracking, and access control.
Data orchestration:
- Build and manage robust ETL workflows using dbt and orchestration tools (e.g., Airflow, Prefect); an illustrative sketch follows this section.
- Implement monitoring, alerting, and logging to ensure pipeline observability and reliability.
- Lead automation initiatives to reduce manual operations and improve data workflow efficiency.
Data quality:
- Develop comprehensive data validation, testing, and anomaly detection systems.
- Establish SLAs for key data assets and proactively address pipeline or data quality issues.
- Implement versioning, modularity, and performance best practices within dbt and SQL.
Collaboration & leadership:
- Partner with product and engineering teams to deliver data solutions that align with downstream use cases.
- Establish data engineering best practices and serve as a subject matter expert on our data pipelines, models, and systems.
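As a rough illustration of the orchestrated dbt workflows described above, here is a minimal sketch using Prefect. The flow name, model selections, and retry settings are hypothetical; an Airflow DAG or another orchestrator could play the same role.

```python
# Illustrative sketch only: a minimal Prefect flow that runs dbt steps in sequence.
# Project layout, selections, and scheduling are placeholders.
import subprocess
from prefect import flow, task

@task(retries=2, retry_delay_seconds=60)
def run_dbt(command: str) -> None:
    # Shell out to the dbt CLI; raises on failure so the orchestrator can retry or alert.
    subprocess.run(["dbt", *command.split()], check=True)

@flow(name="daily-warehouse-refresh")
def daily_warehouse_refresh() -> None:
    run_dbt("run --select staging")  # rebuild staging models first
    run_dbt("run --select marts")    # then downstream marts
    run_dbt("test")                  # validate the build with dbt tests

if __name__ == "__main__":
    daily_warehouse_refresh()
```

Deployed on a schedule, a flow like this provides retries, logging, and run history for the pipeline rather than an opaque cron job.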
What we’re looking for
- 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment.
- Expert-level proficiency in SQL, dbt, and Python.
- Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows.
- Deep understanding of cloud-based data architectures (AWS, GCP) — including networking, IAM, and security best practices.
- Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus.
- Strong grasp of data modeling principles, warehouse optimization, and cost management.
- Passionate about data reliability, testing, and monitoring — you treat pipelines like production software.
- Thrive in ambiguous, fast-moving environments and enjoy building systems from the ground up.
Austin, TX
About the Team
Avride builds autonomous solutions from the ground up, using machine learning as the core of our navigation pipeline. We are evolving our stack to support the next generation of self-driving, leveraging efficient CNNs, Transformers, and MLLMs to solve complex perception and planning challenges. Our goal is to apply the right approach to the right problem, laying the groundwork for unified, data-driven approaches.
About the Role
We are seeking a Machine Learning Engineer to build the infrastructure and ML foundations for advanced autonomous behaviors. You won't just optimize isolated models; you will architect scalable training workflows and high-fidelity components.
This is a strategic position: You will contribute to the critical infrastructure that paves the way for future end-to-end capabilities. You will translate relevant research ideas into production-ready improvements when they prove beneficial, helping prepare our stack for a transition toward unified, learned behaviors.
What You'll Do
- Strengthen Core Modules: Design and refine models for perception, prediction, or planning, enhancing reliability to support future holistic learning approaches.
- Architect Data Foundations: Build scalable pipelines for multimodal datasets, ensuring they support both current needs and future large-scale E2E experiments.
- Advance Training Infra: Develop distributed training workflows capable of handling massive model architectures for next-gen foundation models (a brief sketch follows this list).
- Bridge Research & Production: Analyze research in relevant fields, identifying specific opportunities to introduce these techniques into our production stack.
- System Integration: Collaborate with engineering teams to ensure individual ML improvements translate into better system-level performance.
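For the distributed training workflows mentioned above, here is a minimal PyTorch DistributedDataParallel sketch. The toy model, batch shapes, and launch command are hypothetical placeholders; the actual stack could equally be JAX, TensorFlow, or Ray-based.

```python
# Illustrative sketch only: a minimal PyTorch DDP training loop.
# Launch with: torchrun --nproc_per_node=<gpus> train.py (requires CUDA and NCCL).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 10).cuda(local_rank)  # stand-in for a perception model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):
        x = torch.randn(32, 512, device=f"cuda:{local_rank}")        # stand-in for a fused multimodal batch
        y = torch.randint(0, 10, (32,), device=f"cuda:{local_rank}")
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # gradients are all-reduced across ranks by DDP
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```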
What You'll Need
- Strong ML Fundamentals: Mastery of processing and fusing self-driving modalities (multiview camera, sparse LiDAR, vector maps).
- Architectural Expertise: Deep knowledge of modern architectures like Transformers and Attention Mechanisms.
- Applied Experience: 5+ years of combined experience in industry or applied research settings, with a strong grasp of the full lifecycle from data to deployment.
- Technical Proficiency: Python, PyTorch/JAX/TensorFlow, and distributed computing (PySpark, Ray).
- Systems Mindset: Ability to visualize how modular systems evolve into end-to-end learners and the practical challenges of deploying them.
- Research Capability: Ability to distill complex papers into practical engineering roadmaps.
Nice to Have
- Advanced degree in CS, ML, Robotics, or related field.
- Familiarity with World Models, Occupancy Networks, or Joint Perception-Planning.
- Experience with inference optimization (Triton, TensorRT) and embedded hardware.
New York
Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals who will be responsible for contributing to the team (or teams) of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.
At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 1 billion proprietary and third-party data points published daily -- across all asset classes -- searchable, discoverable, and actionable.
Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.
We are looking for Senior MLOps Engineers with strong expertise and passion for building and maintaining AI systems to join our team.
As a Senior MLOps Engineer you will design and build tools to improve the efficiency of our Model Development Life Cycle (MDLC), automate ML processes, enhance the performance of our systems and more.
Join the AI Group as a Senior MLOps Engineer and you will have the opportunity to:
- Architect, build, and diagnose production AI applications and systems
- Collaborate with colleagues on production systems and write, test, and maintain production-quality code
- Define and provide strong SLAs around latency, throughput, and resource (memory / disk / network / CPU / GPU) usage
- Work closely with AI Platform teams to operationalize continuous model training, inference, and monitoring workflows
We are looking for a Senior MLOps engineer with:
- 4+ years of experience working with an object-oriented programming language (Python, Go, etc.)
- A degree in Computer Science, Engineering, Mathematics, a similar field of study, or equivalent work experience
- An understanding of Computer Science fundamentals such as data structures and algorithms
- An honest approach to problem-solving, and the ability to collaborate with peers, stakeholders, and management
- Industry experience with machine learning teams
- Working knowledge of common ML frameworks such as PyTorch, ONNX, DeepSpeed, etc.
- Prior experience with cloud-native technologies like Kubernetes, Argo Workflows, Buildpacks, etc.
- Experience with cloud providers such as AWS, GCP, or Azure
- A track record of collaboration with colleagues to achieve repeatable, high-quality outcomes