Regulating AI Won’t Save Us — It May Just Kill the Open Internet

What the heck are you talking about, Tony?

Tony Beltramelli
7 min read · Nov 23, 2023


The major problem with the direction taken by governments and regulators right now is that they want to regulate AI as a set of methods and tools rather than targeting specific domains and applications. In effect, authorities are set on restricting access to an entire new class of technology instead of homing in on the truly dangerous applications, or establishing frameworks to punish crimes committed using such technologies.

This is ridiculous…

Regulating AI as a whole because it can be used for nefarious applications would be the equivalent of regulating particle physics because it can be used to develop nuclear weapons.

We already have established regulations in critical domains such as pharma and defense. All we need now is to update these regulations to account for AI within these sensitive sectors.

Regulating AI as a whole, rather than regulating sensitive applications and use cases, will make it practically infeasible, if not impossible, to apply AI to domains that are obviously for the greater good: personalized healthcare, efficient drug discovery, science and engineering, etc.

Make no mistake: regulations are innately designed as barriers to entry that restrict access to a given industry. Who has enough cash, enough specialized human resources, and enough time to meet heavy regulations? Big tech companies and perhaps a few AI research labs at elite universities. And that’s about it.

Independent researchers, solo entrepreneurs, early-stage startups, and SMBs will no longer be able to build AI-first products or engage in AI research. As Andrew Ng has stated, tech giants are lobbying for heavy regulations to monopolize AI and effectively suppress the emergence of competitors.

Side note: As insane as it may sound, DNA & RNA synthesizers are available for purchase by practically anyone right now. Despite the fact that humanity is coming out of a pandemic that cost millions of lives and billions of dollars, there are zero regulations on DNA & RNA synthesizers… Apparently, AI poses a greater immediate threat… I.N.S.A.N.E.

Hold on, what’s AI again?

Another fundamental issue with current AI regulation debates is that no one even agrees on how to define “AI”. What people used to call AI in the 70s is simply viewed as computation today (a.k.a. the “AI effect”: once it works, it’s just computation).

What we call AI today (i.e. deep neural networks such as LLMs and diffusion models) will be considered mere algorithms and software tomorrow.

AI is just software: it’s another computing paradigm, a set of methods and techniques, a set of tools. Letting governments regulate AI is quite literally letting governments regulate software.

Even Nick Bostrom, the author of Superintelligence, the seminal book highlighting the risks of AGI, believes regulators are now taking things too far, threatening to destroy technological innovation and slow down scientific progress.

Quite frankly, the only point on which I’ve ever agreed with deep-learning-troll-in-chief Gary Marcus is that deep neural networks aren’t all that smart… Don’t get me wrong: neural networks are game-changers, they are transforming entire industries, and they have brought us closer to AGI than we’ve ever been. But we’re still far from it.

GPT-4 is incredibly useful, but it’s far from being “intelligent” the way chimps and humans are “intelligent”.

Even the great Geoffrey Hinton, despite his own concerns about current AI development, has highlighted numerous times that we need completely novel ideas to reach AGI.

But regulations are here to safeguard against rogue AGI!

Here’s another flaw in current conversations: regulators and AI doomers are conflating the very real impact of deepfakes and misinformation with the theoretical existential risks associated with rogue AGI.

It’s a deeply counterproductive mess…

The immediate targets for regulation seem to be companies building large foundation models and the cloud computing providers where large AI models are trained. This seems like a sensible approach at first, given that today’s advanced AI models can realistically only be trained on large, expensive GPU clusters.

But then consider the fact that computing will obviously continue to improve and become cheaper. Tomorrow, it will be possible to train large AI models on small clusters of commercial off-the-shelf laptops. If a motivated criminal organization wanted to use AI, it would be fairly straightforward to circumvent regulations on cloud computing providers by building home-made GPU clusters, the same way crypto grifters did to mine cryptocurrencies.

Not so far in the future, it will be possible to train large AI models on a single computer or even on a smartphone.

What would regulators do then?

Will they force electronics manufacturers to artificially limit the number of GPUs in future laptops and smartphones? Will they artificially cap the number of FLOPS on commercially available machines? Will they keep computing hardware prices artificially high?

Any of these actions would detrimentally impact everyone without effectively deterring malicious actors from utilizing AI.

Much like speed limits don’t stop drivers from speeding, such regulations won’t prevent ill-intentioned parties from securing computing power and amassing GPUs.

Regulations work to prevent the spread of nuclear weapons because it’s non-trivial to access large quantities of uranium or plutonium, and it’s non-trivial and very expensive to turn them into the appropriate isotopes.

By contrast, access to compute is, and will continue to be, trivial and cheap — and so is access to AI.

Alright, I’m still here, but what does this have to do with the open internet?

Once an AI model is trained, it’s essentially just software, it’s information, it’s data.

A rogue AI confined to an isolated machine isn’t the real threat; rather, it’s a superintelligent AI released onto the internet that could hypothetically “take over” the world…

So by the time it’s possible to train large AI models on any widely available machine, the last realm for AI regulators would become the very network on which a so-called rogue AI could spread: the internet.

Hidden behind the specter of “AI risk”, authorities would gather public support through fear to completely control the internet — at last.

In time, authorities would get to choose what is allowed on the network and what isn’t, from information to services. The open internet as we know it would die, alongside free speech and democracy.

Regulating AI as a computing paradigm, rather than focusing on sensitive industries, will give governments ever-increasing control over technology instead of safeguarding us all against real, nefarious, malicious AI use cases.

If you genuinely believe in free speech and democracy, can you really support the current proposals for AI regulation?

But I feel safer if only a few companies can do AI research!

The truth is: even if AI is heavily regulated, this wouldn’t prevent an AI lab from screwing things up by accident.

Picture it: the year is 2030, and the leading research team at ClosedAI is about to run the latest iteration of their new model architecture. After five years of work and painstaking regulatory compliance, they press the button to run the model. It self-improves within seconds, leading to an intelligence explosion within minutes. The AI agent has outsmarted the safeguard mechanisms and is now free to spread across the internet.

Regulations or not, the damage is done.

If humanity finds itself in that very scenario, would you rather live in a world where there are millions of AI researchers, AI engineers, and AI companies able to develop countermeasures? Or would you rather live in a world where over-regulation left just a few labs with a few hundred people able to respond?

Wouldn’t humanity stand a better chance against a rogue AI with a wealth of AI practitioners, AI labs, and AI companies at our disposal?

If the early internet had faced the same regulatory blockers authorities currently propose for AI on the basis of theoretical risks, it would have prevented millions of people from learning how to code and becoming developers (amongst many other things, obviously). So instead of having millions of ethical hackers and security researchers today, we would rely on a few big tech companies for protection against cyber-criminals. That surely would be pretty disastrous… Imagine AOL, Yahoo, AltaVista, and Netscape (no offense, Marc Andreessen) being the only providers of countermeasures against hackers…

More AI practitioners mean more brains and hands to create safeguards and countermeasures, and to fight back should an actual rogue AI break loose.

AI research and development must remain as open and accessible as possible.

I do agree with Yann LeCun and Clement Delangue that open source can lead to a safer AI future. However, in my eyes, it’s not because of the virtue of open source itself, but because open source enables more developers to venture into AI and become skilled practitioners.

Side note: if you want to learn more about machine learning and AI, check out Fast.ai by Jeremy Howard and DeepLearning.ai by Andrew Ng.

So what do we do then?

I am in favor of AI regulations, but not the kind that regulate AI as a computing paradigm.

Regulating AI as a whole will drastically slow down innovation and concentrate the power of AI in the hands of a few, while failing to prevent malevolent actors from using AI for criminal activities or terrorism — effectively leaving humanity ill-prepared to deal with nefarious AI scenarios.

Instead, we need pragmatic, domain-specific regulations in high-risk areas like defense, pharma, finance, etc. These sectors are already under regulatory scrutiny for good reasons, and it’s not rocket science to understand that this is where AI oversight is most necessary.

I firmly believe the threat to human existence from nuclear war, natural disasters, climate change, cosmic impact, or another pandemic is far greater than that from AI.

Ironically, AI can help us overcome these very existential risks rather than being one of them.

AI constitutes a formidable set of tools to augment human intelligence and help us tackle the greatest challenges of our time, improve the lives of billions of people, unify general relativity and quantum mechanics, and maybe even help us start to better understand consciousness and the nature of reality.

The benefits of AI vastly outweigh its hypothetical dangers.

Let’s be courageous, let’s be bold, let’s be brave, let’s keep building.

Story first posted on X here.


Tony Beltramelli

Co-Founder & CEO uizard.io | building design and dev tools using AI and machine learning | Forbes 30u30