
Center for AI Safety

About
The Center for AI Safety (CAIS) is a non-profit organization dedicated to reducing societal-scale risks from artificial intelligence. It conducts research, builds the field of AI safety researchers, and advocates for safety standards. CAIS's work includes a statement on AI risk signed by leading AI experts and a compute cluster for AI safety research. Its website provides information on the organization's mission, research, resources, and career opportunities.
FAQs
What does CAIS do?
CAIS’ mission is to reduce societal-scale risks from AI. We do this through research and field-building.
What does CAIS mean by field-building?
By field-building, we mean expanding the research field of AI safety by providing funding, research infrastructure, and educational resources. Our goal is to create a thriving research ecosystem that will drive progress towards safe AI. You can see examples of our projects on our field-building page.
How does CAIS choose which projects it works on?
Our work is driven by three main pillars: advancing safety research, building the safety research community, and promoting safety standards. We understand that technical work will not solve AI safety alone, and prioritize having a real-world positive impact. You can see more on our mission page.
Where is CAIS located?
CAIS’ main offices are located in San Francisco, California.
Where can I learn more about the research CAIS is doing?
As a technical research laboratory, CAIS develops foundational benchmarks and methods which concretize the problem and progress towards technical solutions. You can see examples of our work on our research page.
What is the statement on AI risk?
On May 30, 2023, CAIS released a statement signed by a historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists. You can read the statement and find the list of signatories here.
Are the signatories verified?
Yes! We send a verification email to each signatory and verify each person before adding their name to the statement.
How did the statement come together?
In recent months, more AI experts and public figures have been speaking out about AI as an existential risk. David Krueger, an academic at the University of Cambridge, came up with the idea of a one-sentence statement (as opposed to a longer letter) to draw broader consensus, and we thought the time was right to make this happen. The statement was created independently of any input from the AI industry.
How will my donations be used?
Donations to CAIS support our operating expenses and allow us to scale up our research and field-building efforts as an independent lab. Learn more about our ongoing work.
Will my donations be tax deductible?
The Center for AI Safety is a US federally recognized 501(c)(3) non-profit organization. US donors can take tax deductions for donations to CAIS to the extent permitted by law. If you need our organization number (EIN) for your tax return, it’s 88-1751310.
Is CAIS an independent organization?
CAIS is a nonprofit organization. We do not accept funding from stakeholders that would compromise our mission of reducing AI risk. We are currently bottlenecked by funding and seeking to diversify our funding sources, so please consider donating here.
How does CAIS prioritize research directions?
CAIS focuses on research that will have a large impact on the field and contribute toward reducing societal-scale risks. For further information, see this paper or this paper.
Why does CAIS focus on research benchmarks and metrics?
Machine learning research progresses through well-defined metrics that track progress toward well-defined goals. Once a goal is empirically defined, tractable, and properly incentivized, the field is well-equipped to make progress toward it. We focus on benchmarks and metrics to help concretize research directions in ML safety and to enable others to build on our research.
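
To make this concrete, here is a minimal sketch of a benchmark-driven evaluation loop: a fixed dataset paired with a single scalar metric makes progress toward a goal directly comparable across methods. The toy benchmark and models below are hypothetical placeholders, not CAIS benchmarks.

from typing import Callable, Sequence, Tuple

def evaluate(model: Callable[[str], str],
             benchmark: Sequence[Tuple[str, str]]) -> float:
    """Return accuracy of `model` on (input, expected_output) pairs."""
    correct = sum(model(x) == y for x, y in benchmark)
    return correct / len(benchmark)

# Two toy "models" scored against the same fixed benchmark.
benchmark = [("2+2", "4"), ("3+3", "6"), ("5+5", "10")]
always_four = lambda x: "4"              # trivial baseline
arithmetic = lambda x: str(eval(x))      # actually computes the answer
print(evaluate(always_four, benchmark))  # 0.333...
print(evaluate(arithmetic, benchmark))   # 1.0

Because the dataset and metric are fixed, any new method can be slotted in and compared directly, which is what makes an empirically defined goal tractable for the field.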
How can I get support for my research from CAIS?
To apply for compute support, see the compute cluster page. Professors looking for funding may be interested in this NSF grant opportunity.
What is AI capable of doing today?
AI has numerous and increasing capabilities today, including generating original images, successfully completing the bar exam, and determining the structures of hundreds of millions of proteins.
Job Opportunities
Operations Associate
The Center for AI Safety (CAIS) is a non-profit dedicated to reducing societal-scale risks from AI through research, field-building, and advocacy.
Benefits:
Health insurance for you and your dependents
401K plan + 4% matching
Unlimited PTO
Lunch and dinner at the office
Annual Professional Development Stipend
Experience Requirements:
Experience in operational support, especially in dynamic environments like startups
Other Requirements:
Have strong organizational standards, high attention to detail, and excellent problem-solving skills
Are able to manage multiple projects and meet deadlines
Are comfortable with technology
Are open to new ideas and feedback, and honest with yourself and others; at CAIS, we prioritize getting to the best answer, even when it’s slightly uncomfortable
Are motivated by work that is focused on AI safety and risk
Responsibilities:
Oversee general office management, including ordering supplies, managing food and snack orders, handling mail and deliveries, and ensuring the office is equipped for daily operations
Coordinate employee onboarding and offboarding processes, including managing contracts, account setups, and workspace arrangements to ensure a seamless first and last day for team members
Manage software access and general technical assistance for all employees
Support financial processes, including expense reviews, reimbursements, and audit preparation
Assist with legal processes, such as visa management, corporate filings, and other compliance tasks
Policy Lead
Benefits:
Health, dental, vision insurance for you and your dependents
Competitive PTO
401(k) plan with 4% matching
Personalized ergonomic technology set-up
Education Requirements:
Bachelor's degree in public policy, law, or a STEM field; advanced degree preferred
Experience Requirements:
5+ years of experience in policy development and implementation
Experience working closely with technical teams and the ability to understand technical issues
Other Requirements:
Strong communication and collaboration skills
Ability to balance competing priorities and make sound decisions in a fast-paced environment
Responsibilities:
Help develop the strategy to achieve CAIS AF's policy priorities and goals in partnership with the Director of Government Relations and Public Policy
Monitor and analyze legislation, administrative or regulatory activity, and other government-led initiatives at all levels that could affect CAIS AF
Draft policy analysis documents for policymakers
Prioritize and respond to government RFIs and to public or inbound requests from policymakers and others
Liaise with CAIS to ensure alignment on advocacy, policy priorities, and related activities between the organizations
Research Engineer Intern
Other Requirements:
Are able to read an ML paper, understand the key result, and understand how it fits into the broader literature
Are comfortable setting up, launching, and debugging ML experiments
Are familiar with relevant frameworks and libraries (e.g., PyTorch)
Communicate clearly and promptly with teammates
Take ownership of your individual part in a project
Have co-authored an ML paper at a top conference
Responsibilities:
Work very closely with our researchers on projects in fields such as Trojans, Adversarial Robustness, Power Aversion, Machine Ethics, and Out-of-Distribution Detection (a minimal sketch of one such direction appears after this list)
Plan and run experiments
Conduct code reviews
Work in a small team to create a publication with outsized impact
Leverage our internal compute cluster to run experiments at scale on large language models
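
As a concrete illustration of one direction named above, here is a minimal sketch of the maximum-softmax-probability baseline for out-of-distribution detection: flag inputs whose top softmax probability falls below a threshold. The classifier, inputs, and threshold are random stand-ins for illustration, not CAIS code.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
classifier = torch.nn.Linear(16, 10)   # stand-in for a trained network

def msp_score(x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability; lower values suggest OOD inputs."""
    logits = classifier(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values

inputs = torch.randn(4, 16)     # stand-in batch of feature vectors
scores = msp_score(inputs)
threshold = 0.5                 # in practice, tuned on held-out data
print(scores)
print(scores < threshold)       # True marks inputs flagged as OOD

In a real experiment, scores on in-distribution and out-of-distribution test sets would be compared with a threshold-free metric such as AUROC rather than a fixed cutoff.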
Alternatives

Frontier Model Forum
The Frontier Model Forum is an industry non-profit focused on advancing the safe development and deployment of frontier AI systems through research, best practice identification, and collaboration.