
This week in AI: Companies voluntarily submit to AI guidelines — for now

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, we saw OpenAI, Anthropic, Google, Inflection, Microsoft, Meta and Amazon voluntarily commit to pursuing shared AI safety and transparency goals ahead of a planned Executive Order from the Biden administration.

As my colleague Devin Coldewey writes, there’s no rule or enforcement being proposed here — the practices agreed to are purely voluntary. But the pledges indicate, in broad strokes, the AI regulatory approaches and policies that each vendor might find amenable in the U.S. as well as abroad.

Among other commitments, the companies volunteered to conduct security tests of AI systems before release, share information on AI risk mitigation techniques and develop watermarking techniques that make AI-generated content easier to identify. They also said that they would invest in cybersecurity to protect private AI data and facilitate the reporting of vulnerabilities, as well as prioritize research on societal risks like systemic bias and privacy issues.

The commitments are an important step, to be sure — even if they’re not enforceable. But one wonders if there are ulterior motives on the part of the undersigners.

Reportedly, OpenAI drafted an internal policy memo that shows the company supports the idea of requiring government licenses from anyone who wants to develop AI systems. CEO Sam Altman first raised the idea at a U.S. Senate hearing in May, during which he backed the creation of an agency that could issue licenses for AI products — and revoke them should anyone violate set rules.

In a recent interview with press, Anna Makanju, OpenAI’s VP of global affairs, insisted that OpenAI wasn’t “pushing” for licenses and that the company only supports licensing regimes for AI models more powerful than OpenAI’s current GPT-4. But government-issued licenses, should they be implemented in the way that OpenAI proposes, set the stage for a potential clash with startups and open source developers who may see them as an attempt to make it more difficult for others to break into the space.

Devin said it best, I think, when he described it to me as “dropping nails on the road behind them in a race.” At the very least, it illustrates the two-faced nature of AI companies who seek to placate regulators while shaping policy to their favor (in this case putting small challengers at a disadvantage) behind the scenes.

It’s a worrisome state of affairs. But, if policymakers step up to the plate, there’s hope yet for sufficient safeguards without undue interference from the private sector.

Here are other AI stories of note from the past few days:

  • OpenAI’s trust and safety head steps down: Dave Willner, an industry veteran who was OpenAI’s head of trust and safety, announced in a post on LinkedIn that he’s left the job and transitioned to an advisory role. OpenAI said in a statement that it’s seeking a replacement and that CTO Mira Murati will manage the team on an interim basis.
  • Customized instructions for ChatGPT: In more OpenAI news, the company has launched custom instructions for ChatGPT users so that they don’t have to write the same instruction prompts to the chatbot every time they interact with it.
  • Google news-writing AI: Google is testing a tool that uses AI to write news stories and has started demoing it to publications, according to a new report from The New York Times. The tech giant has pitched the AI system to The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp.
  • Apple tests a ChatGPT-like chatbot: Apple is developing AI to challenge OpenAI, Google and others, according to a new report from Bloomberg’s Mark Gurman. Specifically, the tech giant has created a chatbot that some engineers are internally referring to as “Apple GPT.”
  • Meta releases Llama 2: Meta unveiled a new family of AI models, Llama 2, designed to drive apps along the lines of OpenAI’s ChatGPT, Bing Chat and other modern chatbots. Meta claims that Llama 2, trained on a mix of publicly available data, performs significantly better than the previous generation of Llama models.
  • Authors protest against generative AI: Generative AI systems like ChatGPT are trained on publicly available data, including books — and not all content creators are pleased with the arrangement. In an open letter signed by more than 8,500 authors of fiction, non-fiction and poetry, the tech companies behind large language models like ChatGPT, Bard, LLaMa and more are taken to task for using their writing without permission or compensation.
  • Microsoft brings Bing Chat to the enterprise: At its annual Inspire conference, Microsoft announced Bing Chat Enterprise, a version of its Bing Chat AI-powered chatbot with business-focused data privacy and governance controls. With Bing Chat Enterprise, chat data isn’t saved, Microsoft can’t view a customer’s employee or business data and customer data isn’t used to train the underlying AI models.

More machine learnings

Technically this was also a news item, but it bears mentioning here in the research section. Fable Studios, which previously made CG and 3D short films for VR and other media, showed off an AI model it calls Showrunner that (it claims) can write, direct, act in and edit an entire TV show — in their demo, it was South Park.

I’m of two minds on this. On one hand, I think pursuing this at all, let alone during a huge Hollywood strike that involves issues of compensation and AI, is in rather poor taste. Though CEO Edward Saatchi said he believes that the tool puts power in the hands of creators, the opposite is also arguable. At any rate it was not received particularly well by people in the industry.

On the other hand, if someone on the creative side (which Saatchi is) does not explore and demonstrate these capabilities, then they will be explored and demonstrated by others with less compunction about putting them to use. Even if the claims Fable makes are a bit expansive for what they actually showed (which has serious limitations), it is like the original DALL-E in that it prompted discussion and indeed worry even though it was no replacement for a real artist. AI is going to have a place in media production one way or the other — but for a whole sack of reasons it should be approached with caution.

On the policy side, a little while back we had the National Defense Authorization Act going through with (as usual) some really ridiculous policy amendments that have nothing to do with defense. But among them was one addition that the government must host an event where researchers and companies can do their best to detect AI-generated content. This kind of thing is definitely approaching “national crisis” levels, so it’s probably good this got slipped in there.

Over at Disney Research, they’re always trying to find a way to bridge the digital and the real — for park purposes, presumably. In this case they have developed a way to map virtual movements of a character or motion capture (say for a CG dog in a film) onto an actual robot, even if that robot is a different shape or size. It relies on two optimization systems each informing the other of what is ideal and what is possible, sort of like a little ego and super-ego. This should make it much easier to make robot dogs act like regular dogs, but of course it’s generalizable to other stuff as well.
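The two-optimizer idea — one pass asking what’s ideal, the other asking what’s possible — can be caricatured in a few lines. This is purely a toy illustration of alternating optimization, not Disney’s actual algorithm; the function name, the single joint limit, and the blend weight are all my own assumptions:

```python
# Toy sketch (NOT Disney's method): alternate between a feasibility pass that
# clamps a desired joint trajectory to what the robot can do, and an "ideality"
# pass that pulls the result back toward the original target motion.

def retarget(target, joint_min, joint_max, steps=50, blend=0.5):
    """Pull a desired joint trajectory toward the nearest feasible one."""
    traj = list(target)
    for _ in range(steps):
        # "What is possible": clamp each pose to the robot's joint limits.
        feasible = [max(joint_min, min(joint_max, q)) for q in traj]
        # "What is ideal": blend back toward the original target motion.
        traj = [blend * t + (1 - blend) * f for t, f in zip(target, feasible)]
    # Final answer must be executable, so end on a feasibility pass.
    return [max(joint_min, min(joint_max, q)) for q in traj]

# A pose within limits passes through; out-of-range poses land on the limits.
print(retarget([0.2, 1.5, -2.0], joint_min=-1.0, joint_max=1.0))
# → [0.2, 1.0, -1.0]
```

The real system, of course, negotiates far richer constraints (dynamics, balance, differing morphology) than a scalar joint clamp, but the back-and-forth structure — each optimizer informing the other — is the part the researchers describe.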

And here’s hoping AI can help us steer the world away from sea-bottom mining for minerals, because that is definitely a bad idea. A multi-institutional study put AI’s ability to sift signal from noise to work predicting the location of valuable minerals around the globe. As they write in the abstract:

In this work, we embrace the complexity and inherent “messiness” of our planet’s intertwined geological, chemical, and biological systems by employing machine learning to characterize patterns embedded in the multidimensionality of mineral occurrence and associations.

The study actually predicted and verified locations of uranium, lithium, and other valuable minerals. And how about this for a closing line: the system “will enhance our understanding of mineralization and mineralizing environments on Earth, across our solar system, and through deep time.” Awesome.



source https://techcrunch.com/2023/07/22/this-week-in-ai-companies-voluntarily-submit-to-ai-guidelines-for-now/
