
Frontier Model Forum

About
The Frontier Model Forum is an industry non-profit dedicated to advancing the safe development and deployment of frontier AI systems. It focuses on identifying best practices and supporting standards development for frontier AI safety, advancing independent research in this area, and facilitating information sharing among government, academia, civil society, and industry. The Forum's members include prominent AI companies like Anthropic, Google, Microsoft, OpenAI, Amazon, and Meta. Their work centers on addressing risks to public safety and critical infrastructure posed by advanced AI, while also promoting the beneficial applications of this technology. The Forum is committed to collaborating with global stakeholders to create safe and responsible AI systems.
Platform
Task
Features
• collaborating across sectors
• identifying best practices
• advancing AI safety research
• helping AI meet society's greatest challenges
Job Opportunities
Head of AI Safety
The Frontier Model Forum is an industry non-profit focused on advancing the safe development and deployment of frontier AI systems through research, best practice identification, and collaboration.
Education Requirements:
PhD or advanced degree in a STEM field or computational social science
Experience Requirements:
significant experience in program management and driving forward collaborative projects
strong writing and editorial skills
clear track record of managing collaborative writing projects from conception to publication
strong communication skills, both written and verbal, and an ability to develop constructive relationships with key partners and stakeholders
experience supporting teams and leadership in fast-paced and constantly changing environments
Other Requirements:
clear passion for advancing frontier AI safety and a drive to actively seek high-impact opportunities to push the field forward
extensive experience carrying out or documenting safety evaluations on advanced general-purpose AI models, including automated benchmarks, red-teaming exercises, and uplift studies
extensive familiarity and experience with natural language processing, computer vision, causal reasoning, and/or multi-modal models
Responsibilities:
Design and execute AI safety workstreams, working with Forum leadership to develop relevant workshops and outputs
Act as a key partner to Forum leadership, helping to inform and shape the Forum’s strategy for AI safety
Organize, moderate and lead multiple AI safety working groups, workstreams, and other research-oriented initiatives
Oversee the development of issue briefs for publication on the FMF website and/or circulation among member firms and key stakeholders and partners
Independently represent the FMF’s strategy and narrative with external AI safety stakeholders and partners
Head of AI Security
Education Requirements:
advanced degree or extensive professional experience in AI and/or cybersecurity
Experience Requirements:
significant experience in program management and driving forward collaborative projects
strong writing and editorial skills
clear track record of managing collaborative writing projects on cybersecurity from conception to publication
strong communication skills, both written and verbal, and an ability to develop constructive relationships with key partners and stakeholders
experience supporting teams and leadership in fast-paced and constantly changing environments
Other Requirements:
clear passion for advancing frontier AI safety and security, and a drive to actively seek opportunities to expand your domain knowledge
extensive familiarity and experience with cybersecurity standards, protocols, and information-sharing mechanisms
extensive experience carrying out and/or documenting cyber evaluations, including automated benchmarks, red-teaming, cyber ranges, and CTF exercises
Responsibilities:
Lead AI security and AI-cyber workstream direction and implementation, working with Forum leadership to develop a portfolio of research workshops and outputs
Manage AI security and AI-cyber outputs from initial draft through publication, ensuring timely delivery
Organize, moderate and lead multiple working groups and research-oriented initiatives, delivering against a range of research objectives
Research and develop white papers for use as structured read-aheads for working group meetings and for potential publication on the FMF’s website
Act as a key partner to Forum leadership, helping to inform and shape the team’s research strategy on AI and cybersecurity
AI Safety Manager
Education Requirements:
MS or advanced training in a STEM field or computational social science
Experience Requirements:
strong writing skills
experience supporting teams and leadership in fast-paced and constantly changing environments
Other Requirements:
clear passion for advancing frontier AI safety and a drive to actively seek opportunities to expand your domain knowledge
experience documenting or analyzing safety evaluations on advanced general-purpose AI models, including automated benchmarks, red-teaming exercises, and uplift studies
familiarity with modern deep learning architectures, including natural language processing, computer vision, and/or multi-modal models
Responsibilities:
Work with Forum leadership to implement a portfolio of timely research workshops and outputs
Help organize multiple working groups and research-oriented initiatives, delivering against a range of research objectives
Research and draft memos for use as structured read-aheads for working group meetings and for potential publication on the FMF’s website
Conduct and write comprehensive literature reviews on various AI safety research topics
Draft and compile research surveys on various topics for circulation within expert networks
Alternatives

Center for AI Safety
The Center for AI Safety (CAIS) is a non-profit dedicated to reducing societal-scale risks from AI through research, field-building, and advocacy.