OpenAI Dissolves ‘Superalignment Team,’ Distributes AI Safety Efforts Across Organization

OpenAI has reportedly dissolved its “superalignment team,” which was dedicated to ensuring the safety of future advanced artificial intelligence systems.

The decision came in the wake of the departure of the team’s leaders, Bloomberg reported Friday (May 17).

Rather than maintaining the team as a separate entity, OpenAI has chosen to integrate its members into the company’s overall research efforts. The move is aimed at helping OpenAI achieve its safety goals while developing advanced AI technologies, the company told Bloomberg, per the report.

The superalignment team was formed less than a year ago and was led by Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, another experienced member of OpenAI, according to the report.

However, recent departures from OpenAI, including those of both Sutskever and Leike, have raised questions about the organization’s approach to balancing speed and safety in AI development, the report said.

Sutskever announced his departure after disagreements with OpenAI CEO Sam Altman regarding the pace of AI development. Leike announced his resignation shortly after, citing disagreements with the company, per the report.

Sutskever’s departure was the final straw for Leike, who had been facing challenges in securing resources for the superalignment team, the report said.

Other members of the superalignment team have also left OpenAI in recent months, further highlighting the challenges faced by the team, per the report. OpenAI has named John Schulman, a co-founder specializing in large language models, as the scientific lead for the organization’s alignment work moving forward.

In addition to the superalignment team, OpenAI has other employees dedicated to AI safety across various teams within the organization, the report said. The company also has individual teams focused solely on safety, including a preparedness team that analyzes and mitigates potential catastrophic risks associated with AI systems.

Speaking on the “All-In” podcast May 10, Altman expressed support for establishing an international agency to regulate AI, citing concerns about the potential for “significant global harm.”

Altman also emphasized the need for a balanced approach to regulation, cautioning against both excessive and insufficient oversight.


The post OpenAI Dissolves ‘Superalignment Team,’ Distributes AI Safety Efforts Across Organization appeared first on PYMNTS.com.
