AI Regulation and the Potential of a Regulatory Splinternet
I hope that you are having a good summer, although in the UK so far that has meant a near-constant few months of rain!
In this Substack we’re touching on the potential impact if democratic countries build their AI regulation in an uncoordinated way. Are we already seeing how this could create a Splinternet even within democratic nations? And how much will this damage the consumer experience and the ability of SMEs to operate across national borders?
I’d love to hear your thoughts on this. Please drop me a line david@artemonstrategy.com and check out the website at www.artemonstrategy.com
In IP 12 we talked about the potential for a tech-driven disruption in the televising of sport. The Athletic has a fascinating piece this week on whether Lionel Messi’s move to US football team Inter Miami might mark the beginning of a bigger play by Apple in sports rights. One to keep a close eye on.
It’s also worth having a look at this fascinating database put together by the excellent Jonathan Tanner and colleagues, who analyse the impact of each amendment to the EU’s AI Bill.
If you like this newsletter, do please subscribe and share. And please click the ♡ symbol at the top of the post - Substack’s algorithms love the like symbol.
If you’re a free subscriber, please do think about upgrading to become paid - one month is about the price of a pint of beer in London!
AI Regulation and a Regulatory Splinternet?
Differing models of emerging AI regulation could lead to a regulation-driven Splinternet within democratic nations. Consumer experiences could diverge considerably across geographies as both the technology and the regulation develop, and friction could increase for SMEs operating across national borders. Non-tech businesses need to take this situation as seriously as tech companies do. A push for constructive engagement with government as regulation develops, and for a reasonable degree of baseline regulatory coherence, could be an important business goal.
The Splinternet remains one of the most important issues facing global companies, particularly with the continuing divide between the autocratic and the democratic vision of the internet and continuing discussion of decoupling or “de-risking” between the West and China. The Great Firewall of China and the experiments with the Russian Ru-net and Iranian “Halal Internet” are all examples of autocratic regimes trying to cut themselves off from the global internet. This is an element of the Splinternet phenomenon that we’ll explore in more detail in later newsletters.
But another form of Splinternet, driven by differences in regulation, might also be emerging within democratic countries. This could make a substantial difference to how consumers in different democracies experience emerging technology and substantially increase the complexity faced by companies operating across national boundaries.
The beginnings of a regulatory Splinternet?
One of the defining features of the age of big global internet companies has been the ability to access the same services in different locations. When I worked for a big tech company, one of the biggest clamours around new product launches (or more often updates to existing products and platforms) was a desire for the product or update to be launched in as many markets as possible. People generally hated reading about new products that weren’t available in their country. But in those circumstances, full global rollouts would often happen within weeks.
But that world of ubiquity of tech services could be changing. And it might be driven as much by regulation as by geopolitics.
Three of the most eye-catching new product launches of the past year could well be the sign of things to come.
ChatGPT was, of course, the product launch that shifted the public conversation around generative AI. But for a period this spring, ChatGPT was unavailable in Italy because of privacy and regulatory concerns.
In March, Google launched its AI-powered chatbot, Bard. It was initially available only in the UK and the US, before rolling out to 180 other countries over the following months. Even so, it took the best part of five months before Bard was available in the European Union, with concerns being raised about its compliance with European data protection legislation.
Most recently, Meta launched their text-based social media app Threads (more about this here) in a variety of markets, but not in the European Union. The reason given by Meta is “upcoming regulatory uncertainty.” One MEP even said that the “fact that Threads is still not available for EU citizens shows that EU regulation works.”
You don’t have to look far to see areas in which this trend might be accelerated. Meta has started blocking news from its platforms in Canada due to new revenue sharing legislation. Montana has banned TikTok within the state, with app stores liable to a fine if they offer the app in the state. A number of messaging providers have warned that the UK’s Online Safety Bill could weaken end-to-end encryption and impact their ability to provide services.
AI regulation could accelerate the divide and affect businesses well outside the tech sector
Countries are still struggling to come to terms with exactly how they plan to regulate generative AI. Part of the issue is, of course, that countries are looking to regulate before the future shape of the technology is clear. Much commentary about the impact of AI has seen dystopian sci-fi talk of “AI ending the world” eclipse the more important discussion about maximising the benefits of generative AI and minimising the risks. Dystopian commentary has made sober, agile and future-facing regulation more difficult to achieve.
It already seems clear that different geographies are heading for different types of AI regulation. The European Union has adopted a risk-based approach. The UK has said that it aims to pursue a “pro innovation” approach, whilst also saying that it wants to be a global leader in AI safety. Japan has moved from a light-touch approach towards a greater expression of privacy concerns. These are just three examples of geographies wrestling with what AI regulation should look like. And it’s not difficult to imagine events, media pressure or controversies pushing politicians towards tougher regulation than initially anticipated.
And it isn’t just at country level that this patchwork quilt is developing. In the absence of federal AI regulation, a number of American states are pushing ahead with their own legislation. New York, for example, has introduced an AI hiring law, and a number of other states are progressing AI legislation.
It’s not hard to see how we could be heading for a patchwork quilt of AI regulation that bewilders users and international companies alike.
For consumers, this would be likely to mean a massively differentiated user experience by geography, with some products not even being available in the most regulated geographies.
For companies operating across borders, this would be likely to mean a confusing mosaic of different regulatory models.
For SMEs or companies looking to scale up geographical operations, such a range of AI regulation could prove to be a bureaucratic nightmare.
Unlike the sci-fi dystopianism, a regulatory Splinternet is already emerging. And this is a future that we should prepare for now.
What does a regulatory Splinternet mean for business?
Given the transformative nature of AI, all businesses need to take the nature of AI regulation more seriously
As we noted recently, every company is now a tech company and every company needs an AI strategy. That applies to regulation as much as it does to every other element of AI strategy. Fundamentally, the nature of regulation will dictate how AI can and cannot be used in various geographies. Given that the use of AI will extend far beyond traditional tech companies (it is already being widely used in financial services, insurance and hiring, for example), the emerging shape of AI regulation will impact businesses in a massive range of sectors. Businesses need to start considering the impact of AI regulation in the markets in which they operate and the markets they wish to operate in. They should use scenario planning and other foresight tools to gain a deeper understanding of business impact and actions they can take now.
Consumers might expect constructive engagement between tech and regulators to ensure continuity of user experience
Tech companies operate in an environment of global, freely accessible knowledge of emerging products. Just look at the traffic to specialist websites such as Macworld, the buzz on social media around new product launches and the growing coverage of tech news and product announcements on sites ranging from the BBC to the New York Times to the Daily Mail.
At the same time, regulatory differences might mean that consumers are deprived of access to emerging products. Potentially transformative technologies in fields like healthcare and tackling climate change might also be stunted by over-zealous legislation.
Given the nature of emerging technology and emerging regulation, a different approach might be needed from previous phases of tech regulation. The next phase could be marked by engagement rather than top-down regulation. For tech companies (and other companies likely to use AI extensively) this might mean engaging with regulators about how AI works and how it might develop. For regulators, it could mean sharing emerging thinking and enabling a two-way dialogue, so that both innovation and the user experience are impacted as little as possible while proper guardrails are put in place.
Regulatory coherence and reducing friction should remain an important goal
A world of patchwork regulation is a world in which small businesses and consumers will inevitably be the losers. It is in the interests of consumers, small, medium and large businesses and civil society that the internet remains as frictionless as possible. It is, of course, important that business and consumer groups make their voices heard in this discussion, and also use scenarios and other tools to build a narrative of the impact of disjointed regulation. Although governments are not going to agree on every detail of AI regulation (surveillance seems an obvious stumbling block), it shouldn’t be impossible for democratic nations to agree a baseline of principles for AI regulation (potentially through the G7 or the London AI Safety summit) to help guide regulation in a way that minimises surprises for product developers and businesses.