This Week in AI: OpenAI moves away from safety

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023 in San Francisco, California.
Image Credits: Justin Sullivan / Getty Images

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI once again dominated the news cycle (despite Google’s best efforts) with not only a product launch, but also with some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded a team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.

The dismantling of the team generated a lot of headlines, predictably. Reporting — including ours — suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is more theoretical than real at this point; it’s not clear when — or whether — the tech industry will achieve the breakthroughs necessary to create AI capable of accomplishing any task a human can. But this week’s coverage would seem to confirm one thing: that OpenAI’s leadership — in particular CEO Sam Altman — has increasingly chosen to prioritize products over safeguards.

Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first dev conference last November. And he’s said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former member of OpenAI’s board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board.

Over the past year or so, OpenAI has let its chatbot store fill up with spam and has (allegedly) scraped data from YouTube in violation of that platform’s terms of service, all while voicing ambitions to let its AI generate depictions of porn and gore. Safety certainly seems to have taken a back seat at the company — and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.

Here are some other AI stories of note from the past few days:

  • OpenAI + Reddit: In more OpenAI news, the company reached an agreement with Reddit to use the social site’s data for AI model training. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google’s AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of AI products. We rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He’ll oversee both the company’s consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models — so long as they follow certain rules. Notably, rivals like Google disallow their AI from being built into apps aimed at younger users.
  • AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from AI but from the more human elements.

More machine learnings

AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is plowing onward with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying and, hopefully, preventing runaway capabilities — the risk doesn’t have to be AGI; it could be a malware generator gone mad or the like.

Image Credits: Google DeepMind

The framework has three steps: (1) Identify potentially harmful capabilities in a model by simulating its paths of development; (2) evaluate models regularly to detect when they have reached known “critical capability levels”; and (3) apply a mitigation plan to prevent exfiltration (by another or itself) or problematic deployment. There’s more detail here. It may sound kind of like an obvious series of actions, but it’s important to formalize them or everyone is just kind of winging it. That’s how you get the bad AI.
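To make the shape of that loop a little more concrete, here is a toy sketch in Python of what periodically evaluating a model against critical capability levels and triggering a mitigation plan might look like. This is not DeepMind’s actual code; the capability names, thresholds, and mitigations below are invented purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical illustration only: capability names, thresholds, and
# mitigation steps are invented, not taken from the Frontier Safety Framework.

@dataclass
class CriticalCapabilityLevel:
    name: str               # e.g. "cyber-offense", "autonomous replication"
    threshold: float        # eval score above which the capability counts as reached
    mitigations: List[str]  # plan to apply if the threshold is crossed

def run_safety_evals(model_eval: Callable[[str], float],
                     ccls: List[CriticalCapabilityLevel]) -> Dict[str, List[str]]:
    """Evaluate a model against each critical capability level and
    return the mitigations that should be applied."""
    required: Dict[str, List[str]] = {}
    for ccl in ccls:
        score = model_eval(ccl.name)  # run the eval suite for this capability
        if score >= ccl.threshold:
            required[ccl.name] = ccl.mitigations
    return required

# Example usage with a stand-in evaluation function.
ccls = [
    CriticalCapabilityLevel("cyber-offense", 0.7,
                            ["restrict weights access", "delay deployment"]),
    CriticalCapabilityLevel("autonomous replication", 0.5,
                            ["lock down exfiltration paths"]),
]
fake_eval = lambda cap: {"cyber-offense": 0.4, "autonomous replication": 0.6}[cap]
print(run_safety_evals(fake_eval, ccls))
# -> {'autonomous replication': ['lock down exfiltration paths']}
```

The point of formalizing it this way is that the thresholds and mitigations are written down in advance, rather than decided ad hoc once a model is already out the door.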

A rather different risk has been identified by Cambridge researchers, who are rightly concerned at the proliferation of chatbots that one trains on a dead person’s data in order to provide a superficial simulacrum of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could be used in grief management and other scenarios if we are careful. The problem is we are not being careful.

Image Credits: Cambridge University / T. Hollanek

“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous potential scams and both good and bad outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less creepy applications of AI, physicists at MIT are looking at a useful (to them) tool for predicting a physical system’s phase or state, normally a statistical task that can grow onerous with more complex systems. But train a machine learning model on the right data and ground it with some known material characteristics of the system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
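As a rough illustration of that pattern — and emphatically not the MIT group’s actual method — here is a minimal sketch in which a classifier predicts a toy system’s phase from physically meaningful features. The data is synthetic, and the critical temperature is only used to generate the toy labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each row is one configuration of a toy system,
# labeled with its phase (1 = ordered, 0 = disordered). In a real study these
# would come from simulation or experiment.
rng = np.random.default_rng(0)
n = 2000
temperature = rng.uniform(0.5, 3.5, n)
magnetization = np.where(temperature < 2.27,           # toy critical point, used only to build labels
                         rng.normal(0.8, 0.1, n),      # ordered: large magnetization
                         rng.normal(0.05, 0.05, n))    # disordered: near zero
phase = (temperature < 2.27).astype(int)

# "Grounding" the model here just means feeding it physically meaningful
# features (temperature, magnetization) rather than raw configurations.
X = np.column_stack([temperature, magnetization])
X_train, X_test, y_train, y_test = train_test_split(X, phase, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"phase-prediction accuracy: {clf.score(X_test, y_test):.2f}")
```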

Over at CU Boulder, they’re talking about how AI can be used in disaster management. The tech may be useful for quickly predicting where resources will be needed, mapping damage, even helping train responders, but people are (understandably) hesitant to apply it in life-and-death scenarios.

Attendees at the workshop.
Image Credits: CU Boulder

Professor Amir Behzadan is trying to move the ball forward on that, saying, “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They’re still at the workshop phase, but it’s important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.

Lastly, some interesting work out of Disney Research on diversifying the output of diffusion image generation models, which can produce similar-looking results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply could not put it better myself.

Image Credits: Disney Research

The result is a much wider diversity in angles, settings, and general look in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
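For the curious, here is roughly what that annealing trick looks like in code. This is a paraphrase sketch rather than Disney Research’s implementation, and the function names and the commented sampling loop are made up for illustration.

```python
import torch

def anneal_conditioning(cond: torch.Tensor, step: int, total_steps: int,
                        init_sigma: float = 1.0) -> torch.Tensor:
    """Add scheduled, monotonically decreasing Gaussian noise to the
    conditioning vector: lots of noise early in sampling (more diversity),
    essentially none at the end (better alignment with the prompt)."""
    sigma = init_sigma * (1.0 - step / total_steps)  # linear decay; other schedules work too
    return cond + sigma * torch.randn_like(cond)

# Inside a (hypothetical) diffusion sampling loop:
# for step in range(total_steps):
#     noisy_cond = anneal_conditioning(cond_embedding, step, total_steps)
#     x = denoise_step(model, x, noisy_cond, step)
```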
