AI Alignment Research Engineer (AI Labs) Job at Krutrim, Palo Alto, CA

  • Krutrim
  • Palo Alto, CA

Job Description

Principal Research Scientist, AI Alignment (Reinforcement Learning, Red Teaming, Explainability)

Location: Palo Alto (CA, US)

About Us:

Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI computing infrastructure, AI Cloud, multilingual and multimodal foundation models, and AI-powered end applications. We are India’s first AI unicorn and built the country’s first foundation model.

Our AI stack empowers consumers, startups, enterprises, and scientists across India and the world to build their own AI applications and models. While we are building foundational models across text, voice, and vision relevant to our focus markets, we are also developing AI training and inference platforms that enable AI research and development across industry domains.

The platforms being built by Krutrim have the potential to impact millions of lives in India, across income and education strata, and across languages.

Job Description:

We are seeking an experienced and visionary Principal Research Scientist to lead our AI Alignment efforts, encompassing Trust and Safety, Interpretability, and Red Teaming. In this critical role, you will oversee teams dedicated to ensuring our AI systems are safe, ethical, interpretable, and reliable. You will work at the intersection of cutting-edge AI research and practical implementation, guiding the development of AI technologies that positively impact millions of lives while adhering to the highest standards of safety and transparency.

Responsibilities:

  1. Provide strategic leadership for the AI Alignment division, encompassing Trust and Safety, Interpretability, and Red Teaming teams.
  2. Oversee and coordinate the efforts of the Lead AI Trust and Safety Research Scientist and Lead AI Interpretability Research Scientist, ensuring alignment of goals and methodologies.
  3. Develop and implement comprehensive strategies for AI alignment, including safety measures, interpretability techniques, and robust red teaming protocols.
  4. Drive the integration of advanced safety and interpretability techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) across our AI development pipeline (a minimal sketch of one such technique follows this list).
  5. Establish and maintain best practices for red teaming exercises to identify potential vulnerabilities and ensure our models do not generate harmful or undesirable outputs.
  6. Collaborate with product and research teams to define and implement safety and interpretability aspects that ensure our AI models deliver helpful, honest, and transparent outputs.
  7. Lead cross-functional initiatives to integrate safety measures and interpretability throughout the AI development lifecycle.
  8. Stay at the forefront of AI ethics, safety, and interpretability research, fostering a culture of continuous learning and innovation within the team.
  9. Represent the company in industry forums, conferences, and regulatory discussions related to AI alignment and ethics.
  10. Manage resource allocation, budgeting, and strategic planning for the AI Alignment division.
  11. Mentor and develop team members, fostering a collaborative and innovative research environment.
  12. Liaise with executive leadership to communicate progress, challenges, and strategic recommendations for AI alignment efforts.
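
Item 4 names several preference-optimization methods; as a point of reference, the sketch below shows the core of one of them, Direct Preference Optimization (DPO), as a minimal PyTorch loss over per-sequence log-probabilities. It is an illustrative sketch only, assuming plain tensors as inputs; the tensor names are hypothetical placeholders, not a description of Krutrim's actual training pipeline.

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # Each input is a tensor of per-sequence log-probabilities
        # (response log-prob summed over tokens): the preferred
        # ("chosen") and dispreferred ("rejected") completions, scored
        # by the trainable policy and by a frozen reference model.
        chosen_margin = policy_chosen_logps - ref_chosen_logps
        rejected_margin = policy_rejected_logps - ref_rejected_logps
        # Logistic loss that pushes the policy to prefer the chosen
        # completion more strongly than the reference model does.
        logits = beta * (chosen_margin - rejected_margin)
        return -F.logsigmoid(logits).mean()

    # Toy usage: random numbers stand in for real model log-probs.
    base = torch.randn(4)
    loss = dpo_loss(base + 0.5, base - 0.5, torch.zeros(4), torch.zeros(4))
    print(loss.item())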

Qualifications

  1. Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety, ethics, and interpretability.
  2. 7+ years of experience in AI research and development, with at least 3 years in a leadership role overseeing multiple AI research teams.
  3. Demonstrated expertise in AI safety, interpretability, and red teaming methodologies for large language models and multimodal systems.
  4. Strong understanding of advanced techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and attention-based methods for AI safety and interpretability (see the worked Shapley-value example after this list).
  5. Proven track record of leading teams working on models with tens to hundreds of billions of parameters.
  6. Experience in designing and overseeing comprehensive red teaming exercises for AI systems.
  7. Deep knowledge of ethical considerations in AI development and deployment, including relevant regulatory frameworks and industry standards.
  8. Strong publication record in top-tier AI conferences and journals, specifically in areas related to AI safety, ethics, and interpretability.
  9. Excellent communication and presentation skills, with the ability to convey complex technical concepts to diverse audiences, including executive leadership and non-technical stakeholders.
  10. Demonstrated ability to manage and mentor diverse teams of researchers and engineers.
  11. Strong project management skills with experience in resource allocation and budgeting for large-scale research initiatives.
  12. Visionary mindset with the ability to anticipate future trends and challenges in AI alignment and ethics.
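
For the interpretability methods in item 4, the snippet below computes exact Shapley values for a toy two-feature function; this is the quantity that the SHAP library approximates for real models. The function, inputs, and baseline are invented purely for illustration.

    from itertools import permutations

    def f(x1, x2):
        # Toy model with an interaction term, so feature credit
        # depends on the order in which features are introduced.
        return 2 * x1 + x1 * x2

    def shapley_values(x, baseline):
        # Exact Shapley values: average each feature's marginal
        # contribution over all orderings in which features are
        # switched from the baseline to their actual values.
        n = len(x)
        phi = [0.0] * n
        orderings = list(permutations(range(n)))
        for order in orderings:
            current = list(baseline)
            for i in order:
                before = f(*current)
                current[i] = x[i]
                phi[i] += f(*current) - before
        return [p / len(orderings) for p in phi]

    x, baseline = [1.0, 2.0], [0.0, 0.0]
    phi = shapley_values(x, baseline)
    print(phi)                              # per-feature attributions
    print(sum(phi), f(*x) - f(*baseline))   # attributions sum to the output change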

Impact:

As the Principal Research Scientist of AI Alignment, you will play a pivotal role in shaping the future of responsible AI development. Your leadership will ensure that our AI systems are not only powerful and innovative but also safe, interpretable, and aligned with human values. By fostering collaboration between Trust and Safety, Interpretability, and Red Teaming efforts, you will create a holistic approach to AI alignment that sets new industry standards. Your work will be instrumental in building public trust in AI technologies and positioning our company as a leader in ethical and responsible AI development.
