

Responsible AI Researcher, AI.x

Charles Schwab

Software Engineering, Data Science
San Francisco, CA, USA
USD 180k-270k / year
Posted on Dec 26, 2025

Responsible AI Researcher, AI.x

Job Locations: US-CA-San Francisco
Requisition ID: 2025-117961
Posted Date: 12/26/2025 5:05 AM
Category: Data, Analytics, Business Insights & Strategy
Salary Range: USD $180,000.00 - $270,000.00 / Year
Application Deadline: 1/20/2026
Position Type: Full time

Your Opportunity

At Schwab, you will build a rewarding career while making a difference in the lives of our millions of clients. Here, innovative thinking meets creative problem solving as we work together to challenge the status quo. We believe in the power of collaboration and value being together in the office, which is why this role is based on-site in our San Francisco office. Joining Schwab means joining a company committed to transforming the financial industry and putting clients at the center of everything we do.

Schwab’s AI Strategy & Transformation team, known as AI.x, is the central hub for Artificial Intelligence at Schwab. We are an integrated product, engineering, strategy, and risk team, all based in San Francisco. We help set the enterprise vision for AI, invest in the most promising opportunities, and accelerate delivery across the company. We also build the research platform that powers AI at scale and explore next-generation GenAI efforts that will redefine how we serve our clients.

As a Responsible AI Researcher, you will bridge the gap between regulation, ethical principles, and technical innovation, shaping how AI is safely and ethically deployed across our products and services. You will translate complex and often ambiguous regulatory requirements into actionable technical guardrails, evaluation frameworks, and continuous monitoring benchmarks. In addition to interpreting existing standards, you will advocate for best practices, anticipate where regulations may be heading, and help position Schwab to maintain the trust that is central to our business.

In this highly visible and cross-functional role, you will ensure Schwab’s AI systems set industry benchmarks for safety, fairness, and transparency. You’ll work closely with teams across research, engineering, product, legal, compliance, and risk to address complex and emerging challenges in responsible AI. This role offers significant opportunities to innovate—designing and implementing solutions for problems where established best practices may not yet exist—and to help shape the technical and organizational standards that guide Schwab’s AI systems. Your work will directly inform executive decision-making, regulatory engagement, and the development of trusted AI products at scale.

What You’ll Do

  • Design and implement innovative methods for bias detection and develop technical guardrails aligned with responsible AI principles.
  • Collaborate with cross-functional teams—research, product, legal, compliance, and risk—to ensure Schwab’s AI systems are safe, fair, and transparent.
  • Build and maintain monitoring systems for AI models, integrating human-in-the-loop and automated metrics for compliance at scale.
  • Influence executive decision-making and regulatory engagement while driving trusted AI solutions for millions of clients.

If you’re passionate about responsible AI and eager to innovate in a dynamic environment, this role is for you.

What You Have

We’re looking for someone who thrives in ambiguity, is passionate about AI ethics, and enjoys solving complex challenges where no playbook exists.

Required Qualifications

  • Master’s degree in Computer Science, Engineering, Data Science, Social/Applied Sciences, or a related field, or equivalent experience.
  • 6+ years in AI ethics, AI research, security, trust & safety, or similar roles (academic doctoral experience counts).
  • Expertise in fairness, alignment, adversarial robustness, or interpretability/explainability.
  • Experience with responsible generative AI challenges and risk mitigations.
  • Strong analytical and communication skills for technical and non-technical audiences.

Preferred Qualifications

  • 7+ years in AI/ML research and development using Python.
  • Familiarity with regulatory frameworks (AI-specific or financial sector) and responsible AI standards.
  • Published research in AI safety, alignment, or governance (e.g., FAccT, NeurIPS).
  • Experience with LLMs and deploying LLM-powered applications.
  • Skills in adversarial testing, red-teaming, and risk assessment for AI deployments.
  • Strong data science fundamentals (SQL, pandas, visualization) and robust pipeline development.
  • Financial domain knowledge is a plus.

We welcome applicants from diverse backgrounds and encourage you to apply even if you don’t meet every requirement.

In addition to the salary range, this role is eligible for bonus or incentive opportunities.


Why Work for Us?

At Schwab, we’re committed to empowering our employees’ personal and professional success. Our purpose-driven, supportive culture and focus on your development mean you’ll get the tools you need to make a positive difference in the finance industry.


We offer a competitive benefits package to our full-time employees that takes care of the whole you – both today and in the future:

  • 401(k) with company match and Employee stock purchase plan
  • Paid time for vacation, volunteering, and 28-day sabbatical after every 5 years of service for eligible positions
  • Paid parental leave and family building benefits
  • Tuition reimbursement
  • Health, dental, and vision insurance