Job Title:
Research Scientist - Post-training, Inference, & Safety and Security
Company: Virtue AI
Location: San Francisco, CA
Created: 2026-04-19
Job Type: Full Time
Job Description:
About Virtue AI
Virtue AI sets the standard for advanced AI security platforms. Built on decades of foundational and award-winning research in AI security, its AI-native architecture unifies automated red-teaming, real-time multimodal guardrails, and systematic governance for enterprise apps and agents. Deploy in minutes, across any environment, to keep your AI protected and compliant. We are a well-funded, early-stage startup founded by industry veterans, and we're looking for passionate builders to join our core team.

What You'll Do
As a Research Scientist, you will play a key role in developing production-ready, cutting-edge agent and ML security techniques. Your work will directly advance our products and services and drive innovation within the industry.

You will:
- Develop our core techniques for agent and model red-teaming, including designing new red-teaming methods and optimizing the overall testing platform
- Develop and train our core guardrail models for agents and different input modalities
- Conduct comprehensive model evaluations and analyze the results
- Optimize model training pipelines and infrastructure
- Apply efficient inference methods to reduce model latency
- Lead research projects, collaborate across the entire team, and contribute to the full technical stack

What Makes You a Great Fit
You'll thrive in this role if you're excited by new technology, love solving customer problems, and can comfortably bridge the business and technical worlds.

Required qualifications:
- Degree (BS required; MS or PhD preferred) in Machine Learning, Security, or a related field
- Proficiency in programming languages such as Python, along with expertise in LLM libraries like PyTorch and Hugging Face
- Experience in LLM fine-tuning for different modalities using packages like LlamaFactory, verl, or Slime
- Experience in optimizing LLM inference with SGLang or vLLM
- Experience in building LLM-based agents for various applications using agent development kits (ADKs) such as those from Google, OpenAI, and LangChain
- Experience in conducting large-scale red-teaming for LLMs and agents
- Strong problem-solving skills and effective communication abilities

Preferred qualifications:
- Hands-on experience with Docker and Kubernetes for containerization and deployment
- Hands-on experience in back-end engineering (Go, C/C++) and front-end development (TypeScript, React)
- Enthusiasm for thriving in a fast-paced startup environment

Why Join Virtue AI
- Competitive base salary + equity, commensurate with skills and experience
- Impact at scale: help define the category of AI security and partner with Fortune 500 enterprises on their most strategic AI initiatives
- Work on the frontier: engage with bleeding-edge AI/ML and deploy AI security solutions for use cases that don't yet exist anywhere else
- Collaborative culture: join a mission-driven, collaborative team of builders, problem-solvers, and innovators
- Opportunity for growth: shape not only our customer engagements but also the processes and culture of an early, lean team with plans to scale

Equal Opportunity Employment
Virtue AI is an Equal Opportunity Employer. We welcome and celebrate diversity and are committed to creating an inclusive workplace for all employees. Employment decisions are made without regard to race, color, religion, sex, gender identity or expression, sexual orientation, marital status, national origin, ancestry, age, disability, medical condition, veteran status, or any other status protected by law. We also provide reasonable accommodations for applicants and employees with disabilities or sincerely held religious beliefs, consistent with legal requirements.