AI Alignment Research Engineer (AI Labs) Job at Krutrim, Palo Alto, CA

  • Krutrim
  • Palo Alto, CA

Job Description

Principal Research Scientist, AI Alignment (Reinforcement Learning, Red Teaming, Explainability)

Location: Palo Alto (CA, US)

About Us:

Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI computing infrastructure, an AI Cloud, multilingual and multimodal foundation models, and AI-powered end applications. We are India’s first AI unicorn and built the country’s first foundation model.

Our AI stack empowers consumers, startups, enterprises, and scientists across India and the world to build their own end applications or AI models. While we build foundation models across text, voice, and vision relevant to our focus markets, we are also developing AI training and inference platforms that enable AI research and development across industry domains.

The platforms being built by Krutrim have the potential to impact millions of lives in India, across income and education strata, and across languages.

Job Description:

We are seeking an experienced and visionary Principal Research Scientist to lead our AI Alignment efforts, encompassing Trust and Safety, Interpretability, and Red Teaming. In this critical role, you will oversee teams dedicated to ensuring our AI systems are safe, ethical, interpretable, and reliable. You will work at the intersection of cutting-edge AI research and practical implementation, guiding the development of AI technologies that positively impact millions of lives while adhering to the highest standards of safety and transparency.

Responsibilities:

  1. Provide strategic leadership for the AI Alignment division, encompassing Trust and Safety, Interpretability, and Red Teaming teams.
  2. Oversee and coordinate the efforts of the Lead AI Trust and Safety Research Scientist and Lead AI Interpretability Research Scientist, ensuring alignment of goals and methodologies.
  3. Develop and implement comprehensive strategies for AI alignment, including safety measures, interpretability techniques, and robust red teaming protocols.
  4. Drive the integration of advanced safety and interpretability techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) across our AI development pipeline (an illustrative DPO sketch follows this list).
  5. Establish and maintain best practices for red teaming exercises to identify potential vulnerabilities and ensure our models do not generate harmful or undesirable outputs.
  6. Collaborate with product and research teams to define and implement safety and interpretability aspects that ensure our AI models deliver helpful, honest, and transparent outputs.
  7. Lead cross-functional initiatives to integrate safety measures and interpretability throughout the AI development lifecycle.
  8. Stay at the forefront of AI ethics, safety, and interpretability research, fostering a culture of continuous learning and innovation within the team.
  9. Represent the company in industry forums, conferences, and regulatory discussions related to AI alignment and ethics.
  10. Manage resource allocation, budgeting, and strategic planning for the AI Alignment division.
  11. Mentor and develop team members, fostering a collaborative and innovative research environment.
  12. Liaise with executive leadership to communicate progress, challenges, and strategic recommendations for AI alignment efforts.
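
To ground item 4 for candidates, the sketch below shows the core of one of the named techniques, Direct Preference Optimization. It is a minimal, hypothetical illustration in PyTorch, assuming batched log-probabilities from a trainable policy and a frozen reference model; it does not describe Krutrim's actual training pipeline.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO objective (Rafailov et al., 2023), averaged over a batch.

    Each input holds summed per-token log-probabilities of the chosen or
    rejected response under the trainable policy or the frozen reference.
    """
    # Implicit reward: how much more likely the policy makes a response
    # than the reference model does, scaled by beta.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Maximize the log-sigmoid of the gap, preferring chosen responses
    # without ever training an explicit reward model.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

In practice this objective is usually consumed through a library wrapper (for example, DPOTrainer in Hugging Face's TRL); the point of the sketch is that preference pairs replace an explicit reward model.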

Qualifications

  1. Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety, ethics, and interpretability.
  2. 7+ years of experience in AI research and development, with at least 3 years in a leadership role overseeing multiple AI research teams.
  3. Demonstrated expertise in AI safety, interpretability, and red teaming methodologies for large language models and multimodal systems.
  4. Strong understanding of advanced techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and attention-based methods for AI safety and interpretability (a brief SHAP usage sketch follows this list).
  5. Proven track record of leading teams working on models with tens to hundreds of billions of parameters.
  6. Experience in designing and overseeing comprehensive red teaming exercises for AI systems.
  7. Deep knowledge of ethical considerations in AI development and deployment, including relevant regulatory frameworks and industry standards.
  8. Strong publication record in top-tier AI conferences and journals, specifically in areas related to AI safety, ethics, and interpretability.
  9. Excellent communication and presentation skills, with the ability to convey complex technical concepts to diverse audiences, including executive leadership and non-technical stakeholders.
  10. Demonstrated ability to manage and mentor diverse teams of researchers and engineers.
  11. Strong project management skills with experience in resource allocation and budgeting for large-scale research initiatives.
  12. Visionary mindset with the ability to anticipate future trends and challenges in AI alignment and ethics.
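
For candidates less familiar with the interpretability tooling named in item 4, the snippet below shows what a basic SHAP workflow looks like. It is an illustrative sketch using the open-source shap package on a small scikit-learn model, not a reflection of any in-house tooling or model.

```python
# pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# A small stand-in model; any tree ensemble works the same way.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: per-feature attributions across 100 predictions.
shap.summary_plot(shap_values, X.iloc[:100])
```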

Impact:

As the Principal Research Scientist for AI Alignment, you will play a pivotal role in shaping the future of responsible AI development. Your leadership will ensure that our AI systems are not only powerful and innovative but also safe, interpretable, and aligned with human values. By fostering collaboration between Trust and Safety, Interpretability, and Red Teaming efforts, you will create a holistic approach to AI alignment that sets new industry standards. Your work will be instrumental in building public trust in AI technologies and positioning our company as a leader in ethical and responsible AI development.
