March 26, 2024

Artificial Intelligence Briefing: UN Unanimously Adopts Landmark AI Resolution

The United Nations unanimously adopts a landmark resolution mapping a path for international cooperation on AI, and the Financial Stability Oversight Council announces a two-day conference exploring the benefits and risks of AI in the financial sector. Meanwhile, six U.S. states have adopted the NAIC AI model bulletin — and more are on the horizon. We’re diving into these developments and more in the latest briefing.

Regulatory, Legislative and Litigation Developments

  • EU Parliament Approves AI Act. The EU Artificial Intelligence Act (the AI Act) was approved on March 13 by a significant majority of the European Parliament. Pending approval by the European Council (expected in April), minor corrections, and translation into the EU’s official languages, the AI Act will enter into force in substantially the same form as the final draft summarized on our website earlier this year. Some obligations under the AI Act will take effect six months from the date of final publication (expected in the summer of 2024), while others will be phased in over a longer period. Affected businesses should begin assessing the risks and implementing compliance programs as soon as possible.
  • UN General Assembly Unanimously Adopts Landmark AI Resolution. On March 21, 2024, the 193 United Nations General Assembly Member States unanimously adopted a U.S.-led resolution titled “Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development.” The text of the resolution, which recognizes AI’s potential to support sustainable development, calls on Member States “to develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy artificial intelligence systems.” According to the White House, the resolution “lays out a path for international cooperation on AI, including to promote equitable access, take steps to manage the risks of AI, protect privacy, guard against misuse, prevent exacerbated bias and discrimination.”
  • Financial Stability Oversight Council to Host a Two-Day Conference on Financial Stability and AI. The United States Department of the Treasury announced that on June 6 and 7, 2024, the Financial Stability Oversight Council (FSOC) will host a two-day conference on the benefits and risks of AI in the financial sector. The conference will be conducted in partnership with the Brookings Institution, with FSOC hosting the first day and Brookings hosting the second. While an agenda has yet to be released, FSOC indicates that the conference will cover the rapid growth of AI in the financial sector, the potential systemic risks that AI may pose, and approaches to mitigating and providing effective oversight of those risks. A livestream will be available to the public on both days; in-person attendance is by invitation only. Day one will run from 1:00 to 5:00 p.m. ET, and day two from 9:30 a.m. to 3:00 p.m. ET.
  • FTC Finalizes Rule on AI-Enabled Scam Calls. On March 7, the FTC announced a final rule extending to businesses existing consumer protections against deceptive and abusive telemarketing. Specifically, the Telemarketing Sales Rule (TSR) was updated to prohibit deceptive and abusive practices in all business-to-business calls, including material misrepresentations and false or misleading statements in business-to-business telemarketing. The changes to the rule will allow the FTC to take action against telemarketers employing AI robocalls and other emerging technologies. The rule also updates recordkeeping requirements to require tracking of call detail records, records of consent and records of compliance with the Do Not Call (DNC) Registry, reflecting advancements in technology and the marketplace. The new rule is part of the FTC’s overarching review of the TSR and DNC Registry rules and provisions.
  • FDA Publishes New AI and Medical Products White Paper. On March 15, the FDA published a new white paper entitled “Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together.” The white paper is intended to shed light on how FDA centers are coordinating to update regulation of the medical product life cycle. This includes promoting the development of standards and supporting research for evaluating and monitoring AI performance. The paper does not outline specifically how FDA will execute or delegate these tasks but highlights the agency’s commitment to responsible deployment of AI and its plan to tailor regulatory approaches to ensure both safety and innovation. Learn more about the white paper in Faegre Drinker's recent client alert.
  • States Adopt NAIC Bulletin. To date, six states have adopted the National Association of Insurance Commissioners’ AI model bulletin, which sets forth regulators’ expectations for insurers that use AI systems. The list includes Alaska, Connecticut, Illinois, New Hampshire, Rhode Island and Vermont, with more states on the way.
  • NAIC Begins Consideration of Third-Party Models. The NAIC’s Third-Party Data and Models Task Force, led by Commissioners Michael Conway (Chair – CO) and Michael Yaworsky (Vice Chair – FL), held its inaugural meeting on March 16. The task force will spend 2024 exploring various efforts to regulate third-party data and predictive models. Work on a potential model could begin in 2025.
  • Preliminary Rulemaking on Proposed Automated Decision-Making Technology. On March 8, the California Privacy Protection Agency (CPPA) Board voted 3-2 to move forward with initiating formal rulemaking on the use of Automated Decision-Making Technology (ADMT) based on the December 2023 discussion draft. The CPPA highlighted several specific definitions and associated examples in the draft proposals, including “ADMT,” “Profiling,” “Significant decision” and “Behavioral advertising.” These definitions and proposed examples are designed in large part to address scenarios in which businesses may attempt to circumvent the regulations. The CPPA staff anticipates completing the necessary paperwork for the ADMT proposal around July 2024; after that, a version of the proposed amendments will be open for public comment for 45 days.
  • Deputy Attorney General Announces Justice AI. In a recent speech, Deputy Attorney General of the United States Lisa Monaco introduced the Department of Justice’s new AI initiative, “Justice AI.” The initiative will “convene individuals from across civil society, academia, science, and industry” to help understand AI’s impact on the Department’s mission. Monaco’s remarks also highlighted the Department’s current deployment of AI, other existing government efforts regarding AI’s impact, and ongoing efforts to establish “effective guardrails for AI uses that impact rights and safety.” Monaco also announced that to “deepen accountability and exert deterrence,” Department prosecutors will be able to seek enhanced punishment “for offenses made significantly more dangerous” due to the misuse of AI technology.
