Job Title: Senior AI Security Researcher
Company: remoterocketship
Location: New York City, NY
Created: 2026-05-14
Job Type: Full Time
Job Description:
- Develop and answer open-ended AI security research questions that help NVIDIA understand, measure, and reduce risk in frontier models, agentic systems, AI platforms, and AI-enabled products.
- Develop practical methods, prototypes, evaluations, or tools that reveal how AI systems can fail under adversarial conditions and how those risks can be mitigated.
- Explore a range of AI security problems, such as LLM and agent security, adversarial testing, model evaluation, cyber-defense automation, vulnerability discovery, secure deployment, or autonomous response.
- Translate research into usable outcomes for engineering and security teams, including proof-of-concept demonstrations, benchmarks, technical guidance, mitigations, and secure-by-design recommendations.
- Collaborate across offensive security, product security, AI research, platform, cloud, and infrastructure teams to connect research insights with NVIDIA's highest-impact security priorities.
- Help shape NVIDIA's AI-security research strategy by mentoring others, identifying emerging risks, and building repeatable practices for evaluating and defending AI systems.

Requirements:
- 12+ years of experience in AI security, cybersecurity research, applied ML research, offensive security, cyber defense, or related technical fields.
- Demonstrated record of original research and practical impact, such as deployed security ML systems, AI-security evaluations, CVEs, patents, publications, conference talks, open-source tools, production mitigations, or funded research programs.
- Hands-on ability to build working research systems in Python and modern ML/data tooling such as PyTorch, JAX, TensorFlow, scikit-learn, Pandas, NumPy, Spark, BigQuery, or comparable platforms.
- Experience with one or more AI-security areas: LLM security, adversarial ML, model evaluation, agent security, prompt injection, model backdoors, data poisoning, model abuse, secure RAG, synthetic data, or AI-enabled security automation.
- Strong cybersecurity foundation, including threat modeling, adversary simulation, exploit or vulnerability research, malware analysis, network defense, threat hunting, detection engineering, digital forensics, secure code review, or incident-response automation.
- Ability to work across ambiguous research problems and practical product constraints, translating findings into prioritized recommendations and measurable security outcomes.
- Bachelor's degree or equivalent experience in Computer Science, Machine Learning, Cybersecurity, or a related field.
- Experience leading AI-security research for major models, AI platforms, security products, or large-scale production systems.

Benefits: Equity benefits