2025 is shaping up to be a fascinating year in tech and more broadly.
As is now tradition (in other words it has been done more than once), I’m using this first full Substack of the year to consider what some of the major tech trends might be this year.
Please let me know what you think.
Techno-optimism will gain more of a cross-politics audience
Last year, I made the case that “techno-optimism” should have a place across the political spectrum. The fact that people like Elon Musk and David Sacks are about to be in the orbit of the White House might suggest that techno-optimism is increasingly becoming the domain of the political right. And the preoccupation of many on the left with “taking on big tech” might suggest that this polarisation around tech policy is growing.
But, under the radar, things might slowly be changing. Many on the centre-left and centre-right are now beginning to think about ends rather than means. They’re also aware that their present position as “the party of no” when it comes to tech policy basically allows the right to “own” a tech-driven vision of the future.
This means that they are thinking in terms of policy goals and how to achieve them. As I noted in IP23, better use of tech is an essential part of building stronger public services, improving healthcare and tackling global warming. A “progressive” politics that goes beyond being performative and is serious about achieving its goals clearly has to be a politics that takes the potential of tech more seriously. This was a point made by Tony Blair in his book On Leadership, where he discussed the potential of technology to remake the state around the citizen.
If anything is going to rocket boost this trend this year, it’s likely to be the publication of Ezra Klein and Derek Thompson’s book called Abundance: How We Build A Better Future.
Klein has gone out of his way to talk about the potentially positive impact that tech can make, and on his year-end podcast he spoke of his ambition to “put invention and innovation and technology much more at the center of any kind of social justice agenda.” Recently, Klein described his tech-optimism as “political realism” and an understanding that technology is the only way of achieving the progressive goals of what he calls his “abundance liberalism”. 2024 demonstrated Klein’s ability to change the political weather, and the impact of Jonathan Haidt’s The Anxious Generation showed how a high-profile book can alter the terms of a debate. In 2025, Klein and Thompson could help techno-optimism cross the political spectrum.
The link between bad tech regulation and poor economic performance will come under greater scrutiny
Tech regulation is facing increasing scrutiny on both sides of the Atlantic. In the US, the appointment of "techno-optimists" to key government positions, coupled with the establishment of the Department of Government Efficiency (DOGE), signals an acceleration in the debate surrounding the merits of regulation, particularly in the tech sector. The AI Executive Order is widely expected to be revised or replaced.
The potential impact of poorly designed tech regulation on economic growth and dynamism is a growing concern, especially for economies lacking the robust growth and history of creating trillion-dollar tech companies seen in the United States. This is particularly relevant for Europe, which has historically prioritised tech regulation, exemplified by the "Brussels Effect" and legislation like GDPR, DMA, DSA, and the forthcoming AI Act. Post-Brexit UK, with its Online Safety Act, follows a similar trajectory.
The discussion surrounding tech regulation's impact on growth has intensified, fuelled by the diverging economic trajectories of the US and Europe and a sense that a lack of sufficient innovation is holding back European economies. A recent analysis by Tyler Cowen in Bloomberg highlights this stark contrast: in 2013, the EU economy was 91% the size of the US economy (in dollar value); now, it's approximately 65%. On a per capita basis, US GDP is now more than double that of the EU. Given differing demographic trends, this economic disparity is likely to widen.
As Europe grapples with stagnant GDP and productivity growth, the role of tech regulation is coming under increasing scrutiny. The Draghi report on European competitiveness identified "inconsistent and restrictive regulations" as a key obstacle for scaling innovative companies within Europe. Persistent "Eurosclerosis" and projected continued underperformance relative to the US are likely to further intensify the debate about the impact of extensive tech regulation, particularly as it leads to product unavailability for EU consumers. This debate will also extend to countries currently considering tech regulation, as they weigh the potential trade-offs between regulatory oversight and economic growth.
There might be an increasing recognition that getting public policy around tech right is essential to the future prosperity of a country.
The year of the robots? AI use cases will become more broadly ingrained in the economy
The past few years have witnessed the rise of increasingly powerful AI models and a growing number of real-world applications. 2025 has the potential to mark a significant acceleration in the scale and impact of these applications.
One area poised for significant disruption is scientific discovery. Projects like DeepMind's AlphaFold, which predicts protein folding with unprecedented accuracy, and the GNoME project, which has discovered millions of new materials, demonstrate AI's capacity to address long-standing scientific challenges. As discussed in this DeepMind paper, more powerful AI models could usher in a new era of scientific breakthroughs in 2025 and beyond, accelerating progress in fields such as medicine, materials science, and climate modelling. For example:
Drug Discovery: AI could significantly speed up the identification of potential drug candidates by analysing vast datasets of biological information and predicting drug interactions.
Materials Science: AI could design novel materials with specific properties for diverse applications, from more efficient batteries to stronger, lighter construction materials.
Another sector ripe for transformation is manufacturing. As discussed in IP25, the combination of advanced AI models with increasingly sophisticated robotics and improved data sources has the potential to revolutionise manufacturing processes and boost productivity. 2025 could see these technologies converge to deliver a substantial real-world impact. The development of humanoid robots will further amplify this impact, extending AI's reach into other sectors such as warehousing, logistics, and even healthcare. For example:
Automated Assembly: Humanoid robots could perform complex assembly tasks on production lines, increasing efficiency and reducing human error.
Predictive Maintenance: AI algorithms analysing sensor data from machinery could predict potential failures, enabling proactive maintenance and minimising downtime.
Beyond these industrial applications, we are likely to see increased personalisation, and perhaps even "hyper-personalisation," powered by AI across various sectors. This trend is already emerging in digital advertising, where AI facilitates experimentation at scale, and this is expected to accelerate in 2025. This could extend to:
Healthcare: AI could generate personalised treatment plans and risk analyses based on individual patient data, leading to more effective and targeted care.
Retail: The online retail experience could become ultra-personalised, with AI tailoring product recommendations, offers, and even the user interface to individual customer preferences.
Education: AI tutors could provide customised learning experiences, adapting to each student's pace and learning style.
Fitness: Personalised fitness plans and coaching could be delivered through AI-powered apps and devices, optimising workouts and tracking progress.
Increased use of agents, in particular, will be notable. And regulators will want to be involved.
AI agents, systems capable of autonomously performing tasks for users, are poised to become a major AI application this year. Last year, Sam Altman described agents as the "killer app for AI," envisioning them as "super-competent colleagues" intimately familiar with every aspect of a user's life and work, yet retaining a distinct identity. In a recent influential essay, Altman further stated:
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
Google's Gemini 2.0 is positioned as "a new AI model for the agentic era," building upon existing agentic models that "can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision."
This increased personalisation and autonomy of AI agents will undoubtedly attract regulatory attention. Concerns surrounding privacy, data protection, and other societal impacts will be paramount. Furthermore, it remains unclear how horizontal regulations, such as the EU AI Act, will classify the risk posed by agents and their real-world deployments.
The current ambiguity surrounding the regulatory treatment of agentic AI underscores the inherent risk of premature regulation, which can quickly become obsolete in the face of rapid technological advancement. This risk is particularly acute when regulations are poorly designed, lacking agility and foresight.
Digital identity will be an increasingly important factor
Digital identity will be a crucial factor in 2025, addressing both general identity verification for regulatory compliance and the more complex challenge of proving personhood to maximise the benefits of AI.
The increasing prevalence of age-appropriate design regulations in several US states and Australia, as previously discussed, is accelerating the demand for technologies that can reliably verify a user's identity and age. This is likely to drive growth in solutions such as AI-driven facial analysis, document verification, and third-party verification services. A key challenge for these services in the coming year will be balancing regulatory requirements with user experience, ensuring both frictionless verification and robust privacy protection.
The second, and arguably more significant, challenge is proving personhood in the age of increasingly sophisticated AI. This has become critical for several reasons. Firstly, as AI becomes more adept at mimicking human behaviour, robust methods are needed to distinguish real individuals from bots and fake accounts. Secondly, proof of personhood has the potential to enhance financial inclusion and broaden access to essential online services, particularly for underserved populations. Thirdly, reliable personhood verification can play a vital role in preventing cyberattacks, including Sybil attacks, which can undermine the integrity of online systems.
2025 is likely to see further advancements in addressing this complex issue. Various approaches are being explored, including biometric solutions like those offered by Worldcoin (using iris scanning) and Yoti (employing facial recognition), social graph-based methods such as BrightID, and a range of blockchain-based techniques.
The data competitive market: could data become more important than compute?
In April 2023, we discussed the potential for a training data shortage to impede the continued rapid progress of AI. A recent article in Nature underscores this ongoing concern, arguing that “some specialists say that we are now approaching the limits of scaling. That’s in part because of the ballooning energy requirements for computing. But it’s also because LLM developers are running out of the conventional data sets used to train their models.”
One projection (see above) suggests that the supply of high-quality training data could be exhausted as early as 2028. This potential scarcity coincides with increased scrutiny and restrictions from news publishers, academic journals, and other copyright holders regarding the use of their content for AI training.
This situation has given rise to what is being termed the “data competitive market,” where AI labs and companies are vying for a seemingly finite supply of high-quality training data. This market is characterised by several key activities: direct data collection from users through products and services, strategic data partnerships for sharing resources, and data acquisition from third-party brokers.
In this environment, where access to high-quality data is crucial for maintaining AI development momentum, 2025 could be the year that data becomes as strategically important as compute power for the future of AI models.