- Harris outlines stringent standards for federal agencies' use of AI.
- The US government mandates annual AI risk assessments and transparency measures.
- The administration aims to recruit 100 AI professionals into government roles by summer 2024.
- A comparison between the EU’s AI Act and the US approach highlights differences in regulation focus and scope.
In a recent development, the Biden-Harris administration has unveiled a robust framework to regulate the use of artificial intelligence across federal agencies. Vice President Kamala Harris announced the new standards during a White House press call on March 28. The regulations require federal agencies to implement rigorous safeguards for AI applications that could impact the rights or safety of Americans. The directives also require agencies to appoint a chief AI officer within 60 days, disclose how they use AI, and establish protective frameworks to ensure responsible AI usage.
This initiative aligns with President Joe Biden’s executive order on AI issued in October 2023, which underscores the government’s commitment to harnessing AI’s potential while mitigating associated risks.
US Government Mandates Annual AI Risk Assessments
Vice President Harris emphasized the administration's vision of leveraging AI to advance public interests while prioritizing transparency and accountability. Under the new regulations, US government agencies must conduct annual assessments of their AI systems, evaluate the associated risks, and set out risk management strategies. This move reflects a broader government effort to foster a culture of responsibility and oversight in AI deployment, ensuring alignment with national values and public welfare. Additionally, the administration aims to bolster the federal workforce by recruiting 100 AI professionals by summer 2024, signaling a commitment to deepening AI expertise within government ranks.
Comparison with EU’s AI Act
While the US approach to AI regulation favors broad coverage, leaning on disclosure requirements rather than outright prohibitions, the EU's AI Act adopts a more targeted, risk-based framework. The EU legislation concentrates on high-risk AI applications, requiring stringent assessments and approvals before deployment. The US strategy, by contrast, prioritizes preventing harm without imposing bans, allowing for a more flexible regulatory environment. These distinctions underscore differing perspectives on AI governance and risk mitigation.