OpenAI just dragged its own brand

It sounds like a brag-worthy business coup: not just snagging a high-profile client, but doing so just after your chief rival’s deal with that same client unraveled in a brutally public way. But artificial intelligence pioneer OpenAI’s Pentagon deal didn’t end up being a brand-halo event. On the contrary, “it just looked opportunistic and sloppy”—and that’s the judgment of OpenAI’s own CEO, Sam Altman.

Given widespread concerns about the potential downsides of AI, ranging from mass layoffs to robot overlords, “opportunistic and sloppy” are just about the last attributes OpenAI wants to be associated with, perhaps especially in the context of a Department of War partnership. But this isn’t just an image headache; the brand backlash has included a surge of signups for the rival OpenAI seemed to have bested, Anthropic, whose Claude AI leapt past OpenAI’s ChatGPT to the top of the app charts.

Some of that surge can be attributed to Anthropic’s behavior and rhetoric matching up to its brand image as a thoughtful steward of AI that’s mindful of its possible consequences. It’s a brand image that was tested recently when Anthropic wanted to add some caveats to the Pentagon’s desire to use its tech for “all legal purposes.” 

Anthropic’s Claude, then the only AI agent cleared for use in classified operations, had already been used to plan the recent military action against Venezuela (and was used in preparing for the attack on Iran). But this evidently harmonious relationship snagged on Anthropic seeking guardrails that would prevent its technology from being used to enable mass surveillance or autonomous lethality. The Pentagon pushed back, and over a few weeks, this spiraled into an acrimonious and very public split that included petulant criticism from the president. The Department of War not only signaled it wanted more compliance as it added AI partners, but threatened to kneecap Anthropic by labeling it a “supply chain risk.”

In sticking to its guns, so to speak, Anthropic stayed true to its brand as the serious, non-reckless AI company. In general, Silicon Valley seemed to rally around Anthropic, with employees at Google, Microsoft, and Amazon circulating petitions and open letters urging corporate leadership to follow Anthropic’s example and “hold the line” against objectionable government uses of AI.   

That was the backdrop when OpenAI’s deal with the Pentagon was announced. While the Department of War had already been in talks with various AI firms to add them to classified use cases, the timing of the announcement came across as if OpenAI was effectively replacing Anthropic. While Altman promised the company had the same “red lines” as Anthropic, it agreed to Pentagon language that permits the technology’s use for “all lawful purposes.” OpenAI insists the contract details establish guardrails, and Altman has said Anthropic should be offered the same deal, and should not be tagged as a security risk.

But the timing and what some observers saw as capitulation led to a backlash. Aside from online sniping at OpenAI, the results were plain enough in the app charts, as Anthropic downloads and paid subscriptions spiked. The big-tech Information Technology Industry Council, whose members include Nvidia and Apple, weighed in with a letter of concern about “the Department of War’s consideration of imposing a supply-chain risk designation in response to a procurement dispute.” Research firm Sensor Tower found ChatGPT mobile uninstalls jumped 295%. It was almost the Anthropic vs. Pentagon story run in reverse: Instead of a client battle oddly burnishing a brand, a prestigious new-client deal seemed to blow up in a brand’s face.

Altman has called the backlash “really painful,” and the result of poor optics rather than any substantial capitulation or opportunism. He reportedly told an all-hands meeting that the deal was a “complex” decision with “extremely difficult brand consequences” in the short term, but ultimately the correct decision. And this may prove right in the long run. 

Anthropic is back in talks with the Pentagon about salvaging their relationship. And its investors reportedly want to see more diplomacy and less ego from the company; the brand won’t mean much without clients. Meanwhile, there’s still plenty of room for OpenAI to be opportunistic—it just needs to do a better job of not looking opportunistic, because the best way to avoid “difficult brand consequences” is to anticipate them.
