
Would a pause in AI development allow us to develop a modest regulatory requirement?

Implementing regulations around AI may seem like a logical first step, but it still raises too many questions

The open letter from the Future of Life Institute (FLI) has had one clear and immediate impact: everybody is talking about it! But what should we, as a society, do in the face of this technological explosion? Let’s be concrete. Let’s be pragmatic.

Imagine that the motion is adopted and that we have a 6-month pause in the development of giant AI models. What could be the first regulation that we would adopt? 

Let’s start with something simple: one immediately workable first step that regulators could pursue would be to require that all media (text, audio, images, video) generated by AI be clearly labeled as such when used in a commercial or political context.

What questions would we need to answer to enact such a regulation?

What would its exact scope be?

Would “commercial context” apply strictly to selling a product, or also to a professional building an audience in order to sell a product later?

Would “political context” apply only to politicians, or to anyone engaged in a political fight?

How would we technically enforce it? 

There are currently reasonable ways to detect images generated by freely available tools, but that may very well no longer be true in six months. Detecting generated text is already a significant challenge. How would we enforce the regulation if detection proves too much of a challenge?

One way to fix this problem would be to transfer some responsibility upstream to the main technology providers: they would need to add marks to the content they generate. 
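To make the idea concrete, here is a minimal sketch of what “marking at the source” could look like, assuming Python with the Pillow library: the provider attaches a machine-readable provenance label to each image it generates. The field names (“ai-generated”, “ai-generator”) are hypothetical, and a plain metadata chunk like this is trivially stripped; real-world schemes would rely on robust watermarks or signed provenance metadata, but the division of responsibility is the same.

```python
# Minimal sketch: a provider labels its own outputs with provenance metadata.
# Uses plain PNG text chunks via Pillow; field names are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, model_name: str) -> None:
    """Save an image with a machine-readable 'generated by AI' label."""
    meta = PngInfo()
    meta.add_text("ai-generated", "true")        # hypothetical field name
    meta.add_text("ai-generator", model_name)    # hypothetical field name
    image.save(path, pnginfo=meta)

def read_ai_label(path: str) -> dict:
    """Return any provenance text chunks found in a PNG file."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))

if __name__ == "__main__":
    # Stand-in for a generated image; a real provider would label its actual outputs.
    generated = Image.new("RGB", (256, 256), color="gray")
    save_with_ai_label(generated, "output.png", model_name="example-model")
    print(read_ai_label("output.png"))
    # {'ai-generated': 'true', 'ai-generator': 'example-model'}
```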


How would we enforce it across the globe? 

Like the GDPR, such a regulation would only make sense if it could be applied across borders, for example a European AI content regulation applying to any company doing business with European citizens. But how would we test compliance?

What would the subtler limits be?

To some extent, a lot of content creation tools *are* already leveraging AI. For instance, even though this text is written by me, a human (pinkie promise), I’m using Google Docs auto-completion here and there to complete some words, which is (very arguably) a form of AI. So where does such a regulation start, and where does it end?

How much time would it take to put such a regulation in place?

If we look at comparable regulations, probably 5 to 10 years, which is rather more than 6 months.

Labeling requirements are commonplace around the world, be it for the ingredients in packaged food or the country of manufacture on the clothes you wear. Regulators have ensured that consumers are provided with the essential information needed to make informed decisions. If we look at regulations around AI, a labeling requirement could be one of the first things we do, but even this simple, incremental approach raises too many questions.