IP11: Could AI Accelerate The Splinternet?
Thanks so much for subscribing and for all of the feedback on IP10 about the future of news. It’s the meaty topic of the Splinternet this week, so I’ll get straight into it. You can reach me at david@artemonstrategy.com and read more about Artemon on our site here.
If you like this newsletter, do please subscribe and share. And please click the ♡ symbol at the top of the post - Substack’s algorithms love the like symbol.
Generative AI and the acceleration of the Splinternet
In a nutshell: The internet has been splintering for many years, but the advance of generative AI threatens to accelerate this trend considerably, through divergence on regulation, skills and AI products, as well as deeper divides around geopolitics and the use of AI for digital authoritarianism. Increased friction and divergence are damaging for business and consumers; businesses need to use scenarios and foresight to prepare for such divergence, whilst making the case for greater coherence.
“Now there's no question China has been trying to crack down on the Internet. Good luck! That's sort of like trying to nail Jell-O to the wall.” Bill Clinton’s remark in 2000 was possibly the apotheosis of the liberal-utopian worldview, with a bit of tech utopianism thrown in for good measure.
Just as we’re clearly at the “end of the end of history”, we’re also at a time when the dream of a borderless global internet has run into ever greater roadblocks, such as techno-nationalism and digital authoritarianism. The age of AI, with competing visions for regulation and adoption, alongside AI’s pivotal role in geopolitics, could accelerate the Balkanisation of the internet and make the Splinternet an even greater reality.
The Splinternet has been one of the defining trends in tech in recent years. The Great Firewall of China has effectively cut China off from the global internet, and its protected market has enabled the growth of massive Chinese tech firms. Geopolitical shocks, such as the Russian invasion of Ukraine, have caused regional disruption of the internet. Elsewhere, data localisation laws, propelled by growing techno-nationalism and a desire for “digital sovereignty”, have further limited the free flow of data, just as growing digital authoritarianism has limited the free flow of information. And divergence in internet regulation and regulatory decisions has threatened to fragment the web into a patchwork of incompatible regimes.
Just as the rise of Generative AI looks set to cause disruption across multiple industries, it also seems likely to press fast forward on the splintering of the web. All of the factors that have contributed to the increasingly patchwork nature of the global web risk being accelerated. This matters both for the global web and for all of the businesses who depend on it. The potential acceleration of an already worrying trend is a threat to be taken very seriously.
Splintering by AI regulation
Even aside from the field of AI, this year has already seen growing divergence in regulatory regimes. In the UK, the Competition and Markets Authority’s blocking of Microsoft’s acquisition of Activision Blizzard was a sign that the post-Brexit regulatory environment would be one of greater regulatory difference. The Digital Markets Act and Digital Services Act represented landmark pieces of regulation within the European Union. Tougher tech regulation in India and Japan has represented further divergence from the global internet. A Splinternet of apps looks set to develop, with several countries mulling a ban on TikTok, and the BRICS countries have mulled establishing their own payments system.
And this was all before countries worked out how to regulate AI…
We’ve already seen very different responses to the rise of Generative AI. The Italians were first out of the blocks, banning ChatGPT because of privacy concerns. Germany was also said to be considering a ban, although this seems to have been based on a misunderstanding.
These knee-jerk bans have been replaced with more systematic discussions about regulating AI, and there’s every chance that these will compound the sense of regulatory incoherence.
🇪🇺The EU is pushing ahead with legislation to regulate AI based on perceived levels of risk.
🇺🇸 The United States has convened tech companies to discuss guardrails. The National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework and the White House has proposed a Blueprint for an AI Bill of Rights, but the US has no AI-specific legislation on the books.
🇬🇧The UK’s White Paper on AI promoted an innovation-led approach, based on the potential of AI to “turbocharge growth.”
🇸🇬Last year, Singapore launched AI Verify - “the world’s first AI Governance Testing Framework and Toolkit for companies who want to demonstrate responsible AI in an objective and verifiable manner.”
🇮🇳Perhaps surprisingly, given broader attempts to regulate big tech in the country, the Indian government has said that it has no plans to introduce specific AI regulation, saying that “AI is a kinetic enabler of the digital economy and innovation ecosystem.”
These all represent differing (sometimes subtle, sometimes stark) approaches to how to regulate AI. In a recent study comparing the EU and the US, Brookings said that:
“specifics of these AI risk management regimes have more differences than similarities. Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment.”
As the details of implementation, rather than regulatory principle, become clearer, any divergence could become even greater. There are also clear scenarios (as symbolised by Italy’s approach to ChatGPT) in which a country suddenly attempts to apply the brakes to generative AI’s momentum. These might include:
A Dangerous Dogs Act scenario (there’s a good explainer here of why this has become shorthand in the UK for ill-thought-through, rushed legislation). As generative AI develops, some open-source models might emerge that lack some of the “guardrails” that big tech has been careful to put in place. This might lead some governments to feel that “something must be done”, resulting in hasty regulation that tightens existing laws without consideration of what other countries are doing.
Responding to a white-collar revolt. The concept of a “white-collar jobs Armageddon” might be overblown, but if a disruption of white-collar jobs is accompanied by a white-collar political revolt, that might well result in rising pressure for governments to act. Professionals represent the backbone of both left- and right-wing electorates in the West and, crucially, of active party supporters, so such pressure could prove irresistible for politicians worried about re-election prospects.
Successful producer pressure. The “techlash” of recent years has many causes, but one of the most important is the political influence of industries that have been disrupted by the internet. The ACCC Platform Reform Regulations in Australia and the Korean Telecommunications Act are both examples of regulation driven in part by powerful industrial players. Industries that might be further disrupted by generative AI (publishers are already ringing alarm bells) might decide to act early, particularly around copyright and privacy issues, leading to a variety of models of AI regulation.
Efforts to gain some kind of regulatory coherence, such as the discussions at the G7, the EU-US Trade and Technology Council and various AI Ethics agreements are important, but it’s easy to see how the existing regulatory incoherence could quickly accelerate into regulatory Balkanisation.
Splintering by geopolitics
“The battle we’re fighting is one of sovereignty… if we don’t build our own champions in all areas - digital, artificial intelligence - our choices will be dictated by others.” President Macron
“Technology is the mainstay of making the country self-reliant.” President Modi
“The construction of a cyber superpower through indigenous innovation.” President Xi
Winning the race for dominance in AI will become one of the defining features of geopolitics in the coming decades. As Eric Schmidt sets out in a recent Foreign Affairs essay, being a leader in AI doesn’t just matter for economic reasons, it also has a clear impact on military capability. Being an AI laggard could also make a country more vulnerable to cyberattacks.
Such a desire to be an AI leader, not a laggard, could lead to greater tech-cooperation, but it seems more likely to lead to greater tech-competition. The CHIPS Act and the Inflation Reduction Act in the United States are examples of how the geopolitical importance of AI leadership is having clear policy consequences. China has also set out its ambition to be the global leader in AI by 2030, with Chinese tech companies now more explicitly operating as “national champions” towards that goal. Controversial Nikkei research even suggested that China might be outstripping the US in AI research; a similar case was made by Dan Wang in Foreign Affairs.
But the geopolitical splintering impact of AI might well be more multipolar than many acknowledge.
Within this context of AI’s increasing importance in geopolitics is a desire to have some form of “digital sovereignty” or strategic autonomy. This, for example, is a central pillar of the European Union’s ambitions, described by the European Commission’s President as “the capability that Europe must have to make its own choices, based on its own values, respecting its own rules.” Similar expressions are seen in different geographies. President Modi in India, for example, puts technology at the heart of his vision of a “self-reliant India” saying that “technology is the mainstay of making the country self-reliant.”
There has also been a push for the development of a “sovereign AI capability” in various countries. British think tank Onward, for example, called for a “sovereign Great British GPT”. Tony Blair has said that “there may only be a few months for [sovereign AI] policy that will enable domestic firms and our public sector to catch up.” Such an idea has also been raised in Australia and other countries.
The geopolitical importance of AI is undeniable, and it might well result in further splintering of the web.
Splintering by different models of AI
Different political cultures are likely to produce varying AI models and products, and this is likely to be exaggerated by different models of AI regulation. As was noted with the unveiling of what some wags called ‘Chat CCP’ by Baidu, a Large Language Model (LLM) built in a country with severe censorship of speech will look very different from an LLM built in countries with a free press, the free flow of information and a lack of censorship. Indeed, the Chinese government has already decreed that generative AI models in the country need to “reflect the core values of socialism and should not subvert state power.”
Compute power, the ability to ship AI products and the capacity to maximise the potential of AI clearly depend on companies’ access to cutting-edge semiconductors. And the growth of sanctions between the United States and China, symbolised by last year’s CHIPS Act, means that companies in different countries are likely to have differing levels of access to semiconductors.
The division in types of AI regulation noted above will also shape the AI-based products available in different jurisdictions. Google Bard, for example, is available in 180 countries but not in the European Union, with some analysts (and apparently Bard itself) suggesting that this is due to European privacy and data protection legislation. If regulation becomes more divergent, it’s entirely plausible that product availability and the user AI experience will also begin to diverge.
Splintering by authoritarianism
When the EU released the latest draft of its AI regulation last week, what was most notable was what had been added to it: the prohibition of a number of “biometric surveillance” and other surveillance techniques. This was a stark recognition that AI could amplify the growth of what has been described as “digital authoritarianism”.
Whereas the growth of the internet and technology was widely seen, in democracies and beyond, as promoting freedom and access to information, many countries have used technology to monitor, coerce and censor their populations in a way that was previously much harder to achieve (digital communications are far easier to monitor than letters and clandestine meetings). Steven Feldstein argues that the internet has allowed some regimes to use six techniques as part of techno-authoritarianism: “surveillance, censorship, social manipulation and harassment, cyberattacks, internet shutdowns, and targeted persecution against online users.” China has also used its Belt and Road and Digital Silk Road initiatives to export models of external firewalls and internal coercion and surveillance.
The onward march of AI technology will make such systems even more powerful, including through facial recognition and other forms of surveillance. The latest EU AI Act makes it clear that many countries will find such a use of AI unacceptable, whilst others are likely to keenly embrace it.
Splintering by skills
The digital skills gap was a major issue before the rapid advance in generative AI and is even more of an issue now. Recent research shows that only 54% of people of working age in the EU have even basic digital skills, compared to a target of 80% by 2030.
The EU estimates that the bloc needs 20 million ICT specialists by 2030 to meet even the basic digital skills that business requires. At the moment, there are only 9 million such specialists. UK government research suggests that the digital skills gap is costing the country up to £63 billion a year in lost GDP.
This skills gap will grow even more concerning with the growth of generative AI. In short, those who are able to develop the skills to maximise the potential of AI are likely to prosper, whilst those who don’t are at risk of falling behind. This is particularly the case given that all of the “growth” jobs are likely to be linked to AI.
The skills (and social mobility) issue is one that we’ll return to in future Inflection Points. But it’s abundantly clear that the ability of both the public and private sectors to ensure that people have the skills needed to succeed in an age of generative AI could be a further cause of an AI Splinternet. Put simply, countries are likely to diverge based on their success in skills policy.
Responding to the threat to the global web
Global businesses need to prepare for an increasingly fragmented environment.
A splinternet and fragmentation matters hugely to business because it inserts both friction and uncertainty into business operations. Organisations need to ensure that they prepare adequately for what such fragmentation might mean, using scenarios (to consider what this might mean for their business), wargaming (to expose any organisational gaps) and other such foresight tools.
Business and civic society needs to make the case for regulatory cohesion.
The case for making the internet as frictionless as possible is an important one for business and civic society groups. Without this, conducting business across borders will become increasingly difficult and expensive, potentially shutting out small or emerging companies. It’s important that business makes its voice heard, making clear that consumers and small businesses will be the losers from fragmentation and that governments should push for as much cohesion as possible.
Time to revisit the T-12 concept?
Some steps have already been made towards a global agreement on AI regulation. But, as noted above, given fundamental differences over issues such as surveillance and privacy, a truly global agreement is unlikely to be found. For democratic countries, however, there is both business and geopolitical sense in reaching some form of regulatory alignment. In 2020, Jared Cohen and Richard Fontaine advocated a T-12 organisation of “technodemocracies”. Such an idea should be revisited as a potential forum to agree the basics of AI regulation and prevent a schism between democracies that might have a lasting impact.
And finally…
A few additional thoughts for the week.
The Singapore and Hong Kong rivalry
I’m writing this from Singapore, which very much feels like it’s booming and like one of the world’s top-tier business cities. There’s a fascinating Banyan column in this week’s Economist arguing that “a winner has emerged” in the rivalry between Singapore and Hong Kong. The columnist notes that:
Recently the balance shifted. There is clearly no contest anymore. It is game over in favour of Singapore. As a consequence, Banyan has moved there.
And Banyan is clearly not alone. As the piece notes:
Some 200,000 expatriates have left Hong Kong in the past three years, along with even more Hong Kongers. By contrast, in 2022 the number of foreign professionals in Singapore grew by 16%.
The chart below, via Mike Bird on Twitter, uses household income data to neatly sum up the changing fortunes.
Thoughts on Google’s I/O
I’ve watched plenty of Google I/Os, sometimes because it was my job and sometimes out of general tech interest. This year’s did genuinely seem like the most interesting and exciting for years. There’s a great Verge summary video here. There have been radical improvements in services like Translate using AI technology in the past few years, but much of the focus has been on incremental change. This year represented the use of AI to shift already impressive products to the next level. It’s probably the most transformative and important set of product announcements for some time.