The time to regulate AI is now

Last month’s Senate Judiciary subcommittee hearing on oversight of AI offered glimmers of hope that policymakers are ready to tackle the regulatory challenge posed by a rapidly advancing frontier of AI capabilities.

We saw a remarkable degree of consensus across Democrats, Republicans, representatives from an industry stalwart (IBM) and a hot new trailblazer (OpenAI), and in Gary Marcus, a critical voice around AI hype. Microsoft and Google swiftly released their own overlapping policy recommendations, and an impressive array of academics, AI scientists and tech executives signed onto a statement that “mitigating the risk of extinction from AI” should be a global priority.

That’s not to say we didn’t see the usual theatrics play out — senators pressing witnesses for soundbites, veering toward pet topics, and occasionally having their reach exceed their grasp as they referenced technological details. But past the theatrics, three areas of emerging agreement were particularly notable.  

First, the magnitude of AI's regulatory challenge likely necessitates a new regulatory body. An urgent question now is to delineate the exact remit of this proposed regulator.

OpenAI CEO Sam Altman proposed a focus on the most computationally intensive models, such as the largest “foundation” models that power systems like ChatGPT, trained with thousands to billions of times more computation than most other models. This approach has merit — it’s a practical line to draw in the sand, it will capture the systems with the most unpredictable and potentially transformative capabilities, and it will not capture the vast majority of AI systems, whose use and effects can likely be handled within existing regulatory structures. Altman also suggested increased scrutiny for models that demonstrate capabilities in national security-relevant domains, such as discovering and manufacturing chemical and biological agents.

Defining these thresholds will be challenging, yet the significant risks associated with the wide proliferation of such models justify regulatory attention before they are deployed and distributed. Once someone has the source file of an AI system, it can be copied and distributed just like any other piece of software, making it effectively impossible to limit proliferation after the fact.
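
To make the notion of a compute threshold concrete, here is a minimal sketch, in Python, of how a training run's estimated compute might be checked against a regulatory cutoff. The 6 * parameters * tokens approximation is a common rule of thumb for dense transformer training, and the specific cutoff value is a hypothetical figure chosen for illustration, not one proposed at the hearing.

```python
# Illustrative sketch of a compute-based regulatory threshold.
# The 6 * N * D FLOP approximation is a common rule of thumb for
# dense transformer training; the threshold value is hypothetical.

ILLUSTRATIVE_THRESHOLD_FLOP = 1e26  # hypothetical regulatory cutoff


def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_training_tokens


def requires_frontier_oversight(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training run exceeds the illustrative cutoff."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= ILLUSTRATIVE_THRESHOLD_FLOP


if __name__ == "__main__":
    # A small model (hundreds of millions of parameters) falls far below the line,
    # while a hypothetical frontier-scale run lands above it.
    print(requires_frontier_oversight(3e8, 1e11))   # False
    print(requires_frontier_oversight(2e12, 1e13))  # True
```

Any real rule would also have to settle how training compute is measured and reported, which is part of the threshold-definition challenge described above.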

Second, it’s high time that policymakers examined how current liability rules apply to potential harms from AI and whether changes are needed to accommodate the unique difficulties posed by the current frontier of systems. These challenges include opaque internal logic, an expanding ecosystem of autonomous decision-making by cutting-edge internet-connected models, wide availability to users who vary in their means to compensate any victims of AI-fueled harms, and a lack of consensus on what constitutes reasonable care in developing and deploying systems that are scarcely understood even by their creators.

Third, the notion of a blanket pause on scaling up AI systems has turned out to be a non-starter. Even Marcus, a signatory of the Future of Life Institute’s Pause Giant AI Experiments open letter, acknowledged greater support for its spirit than its letter. Instead, discussion quickly coalesced around establishing standards, auditing and licenses for responsible scaling up of future systems. (The letter also proposed many of these measures but was overshadowed by the call for a pause). This approach would set the “rules of the road” for building the largest, most capable models and provide early warnings when it’s time to apply the brakes.

These are necessary steps, but they alone won't guarantee that the benefits of advanced AI systems outweigh the risks. Democratic values such as transparency, privacy and fairness are essential components of responsible AI development, but current technical solutions to ensure them are insufficient. Licensing and auditing measures alone can’t ensure adherence to these principles without further development of effective technical approaches. Policymakers, industry and researchers need to work together to ensure that efforts to develop trustworthy and steerable AI keep pace with overall AI capabilities.

There are some signs the White House is beginning to grasp the magnitude of the challenge ahead. After the vice president’s meeting with Altman and other frontier lab CEOs, the administration announced that these labs had signed onto a public red-teaming of their systems, and that the National Science Foundation had allocated $140 million to establish new AI research institutes.

But such efforts need to extend beyond merely nibbling at the edges of the research challenge; a significant portion should involve working with the largest and most capable systems, with the goal of laying the groundwork for powerful AI to eventually exhibit trustworthy characteristics with a high degree of confidence, along the lines of the NSF’s $20 million Safe Learning-Enabled Systems solicitation.  

Promisingly, among its many priorities, the new National AI R&D Strategic Plan acknowledged the need for “further research […] to enhance the validity, reliability, security and resilience of these large models,” and articulated the challenge of determining “what level of testing is sufficient to ensure the safety and security of non-deterministic and/or not fully explainable systems.” With billions of dollars flowing into the labs developing these systems, these priorities now need to be matched with proportionate focus and direction of the research ecosystem.

The emerging consensus around the need for regulation has not been accepted uncritically. Senators were amused to see a Silicon Valley executive all but pleading for more regulation. Some expressed concerns over the potential for regulatory capture or stifled innovation. Some commentators went further and characterized Altman’s pleas as a cynical attempt to erect barriers to potential competition. Policymakers who find themselves skeptical of Altman should call his bluff and, as he requested, focus the most stringent regulatory attention on the most advanced models. At present, these regulations would apply to only a few very well-resourced labs like OpenAI.

Policymakers should also be under no illusion that a light regulatory touch will somehow prevent a degree of concentration at AI’s frontier. The cost of training the most advanced models — now in the tens of millions of dollars for computation alone — has been rising rapidly, all but ensuring that smaller players are priced out.
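
As a rough back-of-the-envelope illustration of how a frontier training run reaches this cost range, the sketch below multiplies an assumed total compute budget by an assumed accelerator throughput and rental price; every input is an assumption chosen for illustration rather than a reported figure.

```python
# Back-of-the-envelope estimate of frontier training cost.
# All inputs are illustrative assumptions, not reported figures.

total_training_flop = 5e25            # assumed size of a frontier training run
effective_flop_per_gpu_second = 1e15  # assumed sustained throughput of one accelerator
price_per_gpu_hour_usd = 2.0          # assumed rental price per accelerator-hour

gpu_seconds = total_training_flop / effective_flop_per_gpu_second
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * price_per_gpu_hour_usd

print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost_usd / 1e6:.0f} million in compute")
# With these assumptions: ~13,888,889 GPU-hours, roughly $28 million.
```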

To be an effective regulator, the government will need to develop its expertise in understanding and stress-testing these cutting-edge models — along with an associated ecosystem of credible third-party evaluators and auditors — so that it can go toe-to-toe with these leading labs. Likewise, as Congress continues to grapple with the issues raised in last month’s hearing, it should maintain a similar level of bipartisanship and expert engagement, so it can swiftly get a coherent and effective regulatory framework in place for the most powerful and transformative AI systems.

Caleb Withers is a researcher at the Center for a New American Security, focusing on AI safety and stability.
