US AI Safety Consortium Launches with Tech Titans: A Coalition for Responsible AI Advancements

  1. US AI Safety Institute Consortium announcement: The U.S. Department of Commerce has created the AI Safety Institute Consortium to support the development of red-teaming and risk management processes for AI. The consortium brings together industry leaders, academia, and government agencies, aiming to harness AI's potential in healthcare while ensuring its safety (Source: iapp.org/news/a/us-announces-ai-safety-institute-consortium/).

  2. National Institute of Standards and Technology (NIST) perspective: NIST has established the U.S. Artificial Intelligence Safety Institute (USAISI) and its associated consortium (AISIC). The consortium will work to equip and empower U.S. AI practitioners with the tools needed to develop AI responsibly and safely. Founding members include Stability AI, Stanford University, and Cornell University, and members have welcomed the U.S. government's leadership in uniting industry, civil society, and government to accelerate AI safety efforts (Source: nist.gov/artificial-intelligence/artificial-intelligence-safety-institute/aisic-member-perspectives).

  3. Leading AI companies join the consortium: Major AI companies, including Microsoft, Alphabet's Google, Apple, Meta (Facebook), and OpenAI, have joined the AI Safety Institute Consortium to support the safe development and deployment of generative AI. The newly formed consortium aims to develop guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. Under the president's executive order, the group will work to ensure data privacy during training and the responsible development of AI (Source: thehill.com/policy/technology/4456050-ai-companies-join-new-us-safety-consortium/).

  4. U.S. AI Safety Institute Consortium membership: More than 200 organizations from big tech, academia, local government, and non-profits have joined the newly formed AI safety alliance (AISIC) announced by the U.S. Department of Commerce. Google, Microsoft, NVIDIA, and OpenAI are among the inaugural member cohort. The consortium will address red-teaming, capability evaluations, risk management, safety and security, and other AI guardrails (Source: ciodive.com/news/AI-safety-consortium-Biden-Google/707030/).

  5. MongoDB becomes a founding member: MongoDB has announced its founding membership in the U.S. Artificial Intelligence Safety Institute Consortium (AISIC). The consortium focuses on assessing the risks and impacts of state-of-the-art AI systems. Through its membership, MongoDB will contribute to setting safety standards and protecting the innovation ecosystem (Source: prnewswire.com/news-releases/mongodb-announces-founding-membership-in-the-us-artificial-intelligence-safety-institute-consortium-302057706.html).
