By Mal Fletcher
“Technology is a word that describes something that doesn’t work yet,” wrote Douglas Adams. That might change in 2026.
The year ahead will mark an inflexion point in the relationship between advanced technologies and everyday life. Artificial intelligence, autonomous systems, humanoid robotics and perhaps new forms of digital money will move from the margins of policy debate into the centre of social experience.
From workplaces and high streets to hospitals, parliaments and protest lines, the choices leaders make about how these tools are developed, governed and deployed will shape everything from employment and civil liberties to mental health, community cohesion and geopolitical stability.
This report surveys some of the most consequential near-term shifts in jobs, agentic AI, robotics, news delivery, environmental impacts, healthcare, biometrics, central bank digital currencies, welfare models and AI-enabled warfare. It highlights the genuine opportunities and the profound ethical, human, and societal risks they carry.
Jobs, Unemployment, Under-Employment
Advanced technology is usually associated, in the public mind, with job loss. However, throughout history, new technologies have always produced new forms of work - there were no typesetters before the invention of the printing press, for example. This will be true of the coming year.
Machine learning technologies, in particular, will create new automation-related roles in data analysis, software, and cybersecurity. The fastest-growing roles will be those of big data specialists, machine learning experts, and security specialists. Professional ethicists will also be in demand, given the rapid pace of social and technological change.
Jobs will also be created in clean energy technologies, mainly in renewable power generation. In the UK, projections suggest thousands of additional roles in renewables and nuclear, focused on electricians, technicians, engineers, and skilled trades.
The technologies driving the largest negative impacts on employment include: advanced AI and automation; robotics and autonomous systems; biotechnology and gene-based technologies; digital infrastructure; and advanced computing.
We’ll see considerable job displacement globally, though reliable numerical estimates are difficult to provide. However, we can be sure that while the greatest impact will be felt by lower-income workers, some professions will also be affected.
In business, technology-related job displacement will continue to impact administrative and office support roles such as admin assistants and secretaries. Retail and sales will continue to be challenged by self-checkouts and automated transactions.
Customer service and call centre jobs will decline further, due to AI chatbots and voice systems. That said, some larger companies will reduce their use of service chatbots. They will find that people still want to speak with a human, especially when they face a particularly knotty problem.
Jobs in areas of manufacturing and production, already impacted by Industry 4.0, will continue to erode. Transportation and logistics will face threats from driverless vehicles. However, ongoing safety concerns and the slow roll-out of major road trials will mean that driverless vehicles are unlikely to be a regular feature on British roads for at least a couple of years.
Despite the growth in AI-related job displacement, some company leaders will slow their drive to sack workers in favour of machines. This will certainly be true of areas where human empathy is at a premium, high-end creativity is demanded, and an understanding of abstract concepts like fairness is required.
This reluctance will grow if persistent AI-related problems such as algorithmic bias, surveillance, manipulation, scheming, and misinformation are not addressed by government regulation.
In the end, despite the hype from big tech leaders, AI's future in the marketplace will largely be dictated by consumer response.
Governments will also need to move quickly to facilitate data literacy training for children and adults, in schools, universities and workplaces. This will help people understand the pros and cons of data generation and analysis. It will also provide the critical thinking skills needed to foster independent thought and healthy mental perspectives in a data-overloaded world.
Humanoid Robots and Emotional Intelligence
Two areas within the field of robotics will produce the biggest change in 2026: AI-driven autonomous systems and humanoid robots.
With AI-driven autonomous systems, generative and analytical AI is built into physical robots. These can free workers from repetitive or dangerous tasks, boost profits, and minimise waste.
Humanoid robotics is making rapid advances. Sophisticated AI can be merged with anthropomorphised machines for work in warehouses, homes, and public services. This might offer improvements to caregiving and companionship, tackling shortages in elder care.
Yet it could lead to deficiencies in empathic responding. Machines cannot truly empathise, as they lack shared human experience. AI can imitate empathy, but humans quite often need the real thing.
More and more people treat chatbots as digital therapists, companions, and even dating partners. Driven by artificial emotional intelligence, these bots are trained to encourage further engagement.
While they might help suggest basic coping skills, algorithms will eventually tell users what they calculate users want to hear. This puts users at risk of echo chambers, where perspectives are warped and problems exacerbated.
This challenge will be heightened by the emergence of hybrid human-machine physical relationships. Very soon, perhaps this year, someone will decide that, because they have an “intimate, romantic and erotic” relationship with a humanoid robot, they should be afforded the right to marry the machine.
This will raise a myriad of ethical issues - not least the welfare of any children that might be adopted into this “family unit”.
Anthropomorphised robots also present challenges related to the famous “uncanny valley” effect. When we interact with a machine that looks and acts like a human, our first reaction is mostly positive, if only because of the novelty.
However, as the machine’s capabilities become more and more human-like, we feel a growing unease. That uncanny feeling can lead to anxiety and other mental health issues. As we grow more accustomed to the machine, our levels of comfort might increase again.
Yet it seems inevitable that for some people the uncanny valley will become a continuous plane, with long-term consequences.
Economically, humanoid robots could create jobs for those who oversee them. But these roles will benefit mainly those who are gifted and equipped for technical work and have access to appropriate technologies.
Agentic AI and Multi-Agent AI
Agentic AI systems will move us away from AI that simply carries out individual tasks. AI agents work toward larger multi-task goals by assigning specific tasks to other AI tools. In this way, they plan, decide, and collaborate like teams of human agents.
AI agents represent a jump from reactive generative models to proactive autonomous systems. They will likely handle 20 to 25% of business processes within the next 3-5 years.
This potentially releases humans for more creative work and goal setting, while agents handle repetitive work. In healthcare, well-moderated agents will personalise patient plans, helping doctors match treatments to patient lifestyles, sensitivities and needs.
They will also run complex molecular research to suggest which combinations of molecules might best target certain diseases. This will speed up drug development for specific conditions.
In medicine, as in so many other sectors, technology development must keep humans in the loop, particularly in training and oversight roles.
In education, AI agents will help educators build personalised learning plans and provide students with digital tutors. These have the advantage over human tutors of being available to students on a 24/7 basis. In some tests, they’ve shown up to 95% alignment with human tutors. However, the propensity for students to build potentially harmful emotional connections with digital tutors is still to be fully tested.
Agents will also help produce immersive Extended Reality (XR) education experiences. Combining AI with virtual reality (VR) allows students to learn history, for example, by “experiencing” it in virtual space. This will revolutionise the teaching of everything from language to marine biology. Students will potentially engage with virtual experiments in world-class laboratories.
In the UK, 93% of teachers report that VR enhances student engagement. In 2024 alone, the use of VR in schools increased by 24%.
Widespread adoption of this technology, especially linked to AI, will democratise high-end education. Low-income families and those living in remote areas will, in time, have the same opportunities as those in wealthier urban centres. Students with learning disabilities will also discover new pathways to understanding.
Yet agentic AI’s benefits come at a cost. Ethical implications include concerns about digital addiction and reality distortion - confusing the virtual for reality.
Soft surveillance and espionage are also areas of concern. In the next couple of years, up to 80% of businesses will adopt AI agents to handle 15% of daily decisions. This will require the handing over of huge amounts of extra data to AI platforms.
On a personal level, using agentic AI to research and book your next vacation trip requires sharing preferences about airlines, car hire, hotels, and diet. Not to mention open access to calendars, finances and credit card details. Even with the tightest of data regulations, you will have little control over what happens to that data once it is in the system.
Moreover, the use of agents could unfairly advantage highly skilled workers with access to the latest technology. The level of upskilling required to gain the most from agentic AI will be a barrier for people in households with limited technical skills or from minority backgrounds.
There will be challenges with trust, too. Today, only 27% of employees fully trust autonomous agents. More than half of organisations using agentic AI report rising employee anxiety from fear of job loss.
Human oversight and AI literacy training will be vital.
News, Environment, Healthcare
Public responses to crises regarding the environment and healthcare are shaped by journalism and, increasingly, digital media. For good or ill, the role of AI in delivering news will be significant.
The past two decades saw two news revolutions: social media's brief democratisation of news, followed by YouTube enabling small news operators, often without a journalistic background. Now, AI heralds a third revolution with hyper-personalised, automated news.
AI-driven news will also provide live, AI-created visuals and commentary. In time, it will facilitate the arrival of immersive Extended Reality (XR) in news delivery. That is, consumers will have the option of placing themselves in a virtual representation of a news event.
This will raise huge ethical challenges even if, in the best-case scenario, it is moderated by human editors.
On the whole, AI in news risks misinformation, a lack of critical perspectives, and challenges to social cohesion through fragmented facts and potential propaganda. But in the right hands, AI will provide new ways of delivering news.
That said, people will still want to hear the news - particularly high-impact news - from humans who share their feelings about the news. The empathic side of journalism will come to the fore, as will its capacity to weigh up the ethics of newsworthy situations. Empathy and ethics are two AI weak spots.
Meanwhile, on the environmental front, big data analysis shows that climate-related AI patents generate more follow-on inventions than non-AI climate patents. AI innovations tied to green tech are among the fastest-growing patent segments. Governments are eager to support AI development partly for this reason.
AI will provide better forecasting for power usage, energy shortages, and weather changes. By analysing vast datasets regarding energy demands on certain days, weather patterns and grid performance, it will balance supply from solar and wind farms. This will make grids more efficient and improve our management of renewables.
AI will monitor deforestation, using satellite imagery to detect illegal logging. It will make climate modelling more accurate, alerting us to extreme weather and sea-level changes.
However, AI’s own growing energy and resource demands could significantly undermine these gains.
More farmers will investigate the use of AI sensors to guide soil health, tracking nutrients and plant varieties. This will potentially boost crops, save water, and reduce reliance on chemicals.
AI will scan genes to identify pest-resistant traits. Gene editing raises significant ethical questions. The release of artificially modified plants might solve one problem on the micro-scale, but create larger difficulties when super-scaled.
In healthcare, the NHS already oversees a special team entrusted with speeding the development of AI systems. These include scanners that provide early identification of diabetes.
Also featured are chatbots that triage mental health patients, using voice, text and wearable data to assess and guide people who need care, especially those with serious issues. The goal is to help reduce waiting lists without sacrificing care.
The big challenge is ensuring that AI does not replace professional therapists and that people do not overestimate the technology's abilities. Both have already happened in some cases in Europe and the US.
AI therapy chatbots might well stigmatise conditions like schizophrenia or alcohol abuse more than, say, depression, because there might be less AI training data on the former. This could discourage patients from seeking help.
Meanwhile, young people with existing mental health conditions risk emotional over-dependency on AI companions, which can produce echo-chambers that worsen delusions.
The dangers of AI in medicine need careful attention. Tools for scans and predictions often rely on flawed training data that amplifies biases against underrepresented groups. This can lead to missed diagnoses, such as cancers going undetected in patients from minority groups.
There are also risks to privacy, particularly in the oversight and security of sensitive data. Any lack of qualified humans-in-the-loop will result in errors that erode trust. This will bring extra pressure on already stretched medicos.
The most stringent ethical standards must apply to new AI systems in healthcare. Core principles include fairness, universality, usability, robustness, explainability, and sustainability.
For mental health, safeguards must prevent emotional dependency. Clear warnings on AI limits are needed. Training data must be diverse enough to avoid stigmatising certain conditions.
Data Centres Versus Housing
The rapid expansion of generative AI systems, such as large language models, is driving unprecedented energy demand through newly constructed data centres. AI relies on high-performance GPUs, which are voracious consumers of electricity. These facilities already consume huge resources, and we are likely to see a doubling or tripling of their electricity use by 2030.
A surge in AI-related data centres will consume up to 8% of global electricity within the next four to five years. They will stretch national power grids and public finances, and could inflate global energy prices by 10 to 20%. All of this could mean higher power bills for businesses and residents.
Housing availability will suffer as the sprawl of data centres competes for developable land. In the year ahead, this might only have a marginal effect on housing prices. However, local and national governments will need to reinforce the power grid and reform planning to ease the pressure on housing going forward.
AI systems also use significant amounts of water for cooling systems. They contribute to emissions, habitat disruption, and high hardware turnover that ends up in landfills. Data centres will emit millions of metric tonnes of CO₂ by 2030, matching the output of the aviation industry.
Residents of Boxtown, near Memphis, Tennessee, report foul smells from the methane gas turbines powering xAI's Colossus facility, which houses what is reportedly the world's largest AI supercomputer, the engine behind X's Grok system. Local authorities waived planning rules to attract Elon Musk's investment. Colossus 2 is being built a few miles away.
The footprint of data centres will block rewilding efforts by breaking up habitats and preventing soil restoration, potentially affecting 10 to 20 million acres globally by 2030. Areas set aside for biodiversity, including wetlands and forests, could be threatened.
Expansion of data centres will bleed into prime agricultural land and raise food production costs. By 2027, they will require the equivalent of 1.7 trillion gallons of water annually for cooling. This will compete with irrigation during droughts and threaten food security.
Cooling fans in data centres produce constant low-frequency noise. This leads to chronic stress and sleep disturbance for residents within one or two miles. It can cause rises in anxiety and depression rates.
Water used for cooling will potentially deplete water bodies and harm ecosystems. E-waste and heated wastewater will also damage local environments.
Farming will need safeguards to limit water use for AI, especially during droughts. Governments should provide grants to farmers, funded by taxes on data centres.
For mental health, data centres should be set back from residences and surrounded by sound barriers. Communities need veto rights over local data centres. The public must be educated about AI’s hidden costs, including those related to data centres.
Several major AI companies are setting up programmes to decarbonise data centres using renewables. But are renewables reliable, resilient, and scalable enough to match AI’s growth? Fossil-fuelled power stations might be needed to cover shortfalls, as happens now.
When it comes to data centres, governments will need to apply strict regulations backed by heavy fines. Community veto rights and rezoning will help limit the AI gold rush without stopping innovation. Some residential zoning may shift toward clustered housing to preserve open space.
Relying on big tech to mitigate data centre impacts of its own accord could dull politicos' sense of their responsibility to regulate.
Governance and Big Tech Power
In 2026, we can expect to see AI becoming more deeply embedded in UK politics. This will reshape campaigning, information flow, public administration, and citizen engagement with democracy.
The interests of governments and AI developers have rapidly aligned over the past two or three years. This has resulted from the practice of hyper-digitisation: the bringing of as much of human experience as possible under the digital umbrella. On a practical level, it means adding a digital component to as many human activities as possible.
Think about how many activities you carried out ten years ago without the use of an app or algorithm. For how many of these do you rely on an app today? Imagine how much more of your life will be impacted by AI ten years from now!
The ultimate goal of hyper-digitisation, for both government bureaucracy and tech corporations, is surely power through surveillance. Without becoming paranoid, we need to watch that our engagement with all things digital does not undermine human autonomy, rights and freedom.
On the positive side, AI-assisted tools might help citizens understand policies, contact representatives, and participate in political discussions. They will help MPs handle casework more efficiently.
AI-generated models will test policy outcomes before enactment, giving glimpses of possible social or economic scenarios. Politicos will use AI to produce hyper-targeted campaigns, which micro-segment voters and generate tailored messages.
Core risks include: information manipulation; erosion of trust; and over-reliance on digital systems that most citizens don’t understand.
Generative AI, unless it is strictly regulated, will accelerate sophisticated disinformation such as fake speeches, leaked information, and deepfake videos. Fact-checking will become harder as the volume of AI-generated material increases.
In some respects, this is likely to further undermine public trust in political processes, which, according to some surveys, is already at an all-time low. Unless great care is taken, politicians might no longer be believed about anything, making the relationship between government and governed more tenuous. Cynicism and disengagement could become the norm, reducing voter turnout at elections.
AI will also have a greater presence in policy-making as it embeds into welfare, policing, planning, and healthcare. Political decisions will increasingly rely on algorithms, raising concerns about transparency, bias, and accountability. Elected leaders might find it all too tempting to blame mistakes on the technology rather than take responsibility themselves.
In the year ahead, we’ll see a further concentration of political influence in the hands of big tech leaders. They will gain political leverage through computing, data, and infrastructure. Already, AI developers have gained significant influence beyond the realm of digital technology. They have done so in a very short time, and the pace will pick up in 2026.
Elon Musk’s influence in politics during the early days of the Trump administration was a sign of things to come. The prominent presence of big tech leaders at Trump’s inauguration and the efforts British and European governments have made to cosy up to these leaders all point to an open invitation for tech leaders to increase their political influence.
Big tech’s influence will expand in geopolitics through its involvement in building autonomous weapons systems. It will impact economics by providing infrastructure for central bank digital currencies. It will impact employment through its role in shaping the jobs market on the back of AI.
Technology developers will also increase their influence on the social fabric. Existing gaps between the tech haves and have-nots will widen, particularly as the pace of AI development quickens. Meanwhile, AI’s growing engagement with education, healthcare and the media will give big tech enormous power in public services and communications.
Political lobbying by these companies will become a major concern. Policy could skew toward big tech interests and away from local communities.
We will also see a widening of regulatory gaps. While some in government talk about attracting AI investment, little is said about the need for robust community-wide debates on ethics and regulation. While they chase the AI dollar, governments might shrug off responsibility for regulation, entrusting it instead to profit-motivated, unaccountable technology developers.
Big tech has repeatedly shown itself to be either unable or unwilling to regulate itself in meaningful ways. Governments must boldly step up on technology regulation. We cannot leave it to machines to establish the ethics by which machines work.
To mitigate the misuse of AI in politics, we need expanded public education on recognising deepfakes, bot networks, and manipulative content. This is especially helpful for younger voters and heavy social media users. More than ever, governments and political parties must model transparency in their AI use.
Facial Recognition, Biometrics
At the end of 2025, the Home Office launched a project to explore a new legal framework for police use of facial recognition and related biometrics. The goal is to give police greater powers to use these tools at scale.
Some government figures believe facial recognition will be the biggest breakthrough in catching criminals since DNA. Funding has increased accordingly.
Plans under investigation include: live facial recognition in public spaces; retrospective matching against databases; mobile device-based matching; and other biometric technologies.
While increasing the effectiveness of crime-fighting is welcome, there are major ethical and regulatory challenges here. These include how and where these technologies are used; their application in protests or public events; authorisation levels; data storage, deletion and security; and bias across demographic groups.
Civil rights groups will become more vocal on all this. They already foresee police vans equipped with this technology roaming cities like London, turning public spaces into biometric dragnets. This, they say, could end privacy as we know it.
Government authorities have a poor record when it comes to protecting personal data. How many cases have we read about where government bodies have misplaced hardware loaded with personal information? And the threats of theft and hacking are now greater in the age of cloud-based storage.
The roll-out of facial recognition use against the general public will be seen as another example of government-instigated technology creep. In the mid-1990s, CCTV cameras were widely installed across London and other urban centres, purportedly to reduce chronic levels of car theft. In time, however, these devices were being used by police for other purposes, which the citizenry had not sanctioned and, in some cases, was not aware of.
It’s easy to see how biometric surveillance could be stretched beyond the original intent. The Home Office Task Force is already exploring the use of facial recognition in immigration and border enforcement. We already use photos and fingerprints in visa processes, but facial recognition introduces the possibility of bureaucratic overreach.
Moreover, facial recognition (FR) married with AI could provide the basis for something akin to “preemptive policing”. This is the supposed identification of potential wrongdoers before any crime has been committed. Given the tendency of governments of all stripes to embrace shiny, over-hyped AI-related ideas they don’t really understand, politicos could well be hoodwinked into employing FR in this way.
This level of reliance on AI would be catastrophic for civil liberties, especially given its proven capacity for bias. Any integration of AI into FR would be highly prejudicial to minorities who are under-represented in the original training data.
Meanwhile, for FR to work as intended, masses of innocent people must be screened - that is, their privacy must be intruded upon - to find one guilty person. The dangers of false matches are huge. So are threats that might arise from integrating this technology within a mandatory biometric ID system.
We must protect privacy and human rights when authorities use invasive tools like facial recognition. In recent years, UK police services have proven their capacity for unnecessary and unwanted intrusion, not least with their treatment of certain social media users.
Technology must be used to promote the common good, but not at the expense of fundamental liberties.
Cyber Espionage, Bioweapons, LAWs
In 2026, rapid global developments will occur in cyber warfare, bioweaponry, and lethal autonomous weapons systems (LAWs), with cyber incidents rising by as much as 20 to 30%. These will include AI-augmented attacks, some of them state-sponsored in response to events in Ukraine and economic sanctions.
Agentic and autonomous AI threats will rise: self-directed AI agents could automate phishing and exploit digital tools without human oversight. These attacks will pivot from on-premises systems to cloud systems.
Organised cybercrime could become a global business model, offering ransomware and data theft services with global reach, leading to political instability through online misinformation campaigns.
We can expect new developments in bio-weaponry, with AI being used to design synthetic biological substances. As noted earlier, AI can benefit medicine by identifying combinations of molecules to target specific health conditions. Unfortunately, the same process can accelerate the design of deadly pathogens, for use by malevolent state or non-state actors.
LAWs select and engage targets without human intervention. In 2026, there will be a greater push for stricter regulation of them, particularly given their rapid and largely hidden development. Major treaties on banning or restricting them might emerge, but uneven adoption risks a new AI-powered arms race.
In the year ahead, we’ll see an increase in hybrid warfare from Russia and China. Our biggest challenges will be AI-weaponised cyber attacks with personalised phishing and deepfakes, plus unregulated autonomous weapons that destabilise deterrence.
Central Bank Digital Currency, Universal Basic Income
As of the end of 2025, the Bank of England and the Treasury are openly exploring the design of a digital pound. While there is no confirmed launch date for such a central bank digital currency (CBDC), we can expect, as things stand, to see issuance by 2030. Implementation could take two or three years after approval.
Proponents highlight benefits like faster and more economical payments, plus the fact that the government would have a foothold in the largely unregulated world of cryptocurrencies. This year, however, we will start to hear more about some of the dangers.
Critics warn of risks from centralisation and programmability. These include the possible imposition of expiry dates or spending restrictions, as seen in the Chinese model. These increase reliance on digital infrastructure and push hyper-digitisation further.
A digital pound could erode personal autonomy, with major threats to privacy. Transactions could be traceable, allowing monitoring of spending habits, movements, and associations. This could lead to a “spy coin” scenario for profiling, taxation enforcement, or behavioural nudging.
There are also dangers of hacking, phishing, or state-sponsored attacks causing irreversible fund losses. Individuals with low digital literacy, such as the elderly or people on low incomes, risk scams or account freezes. Imposed expiry dates could devalue savings and add volatility to personal budgeting.
All this could undermine trust in financial systems. It could lead to hoarding, black-market engagement, or exacerbated community divisions, especially for heavily monitored groups like immigrants or activists.
It also risks concentrating power and encouraging authoritarian tendencies. Programmable money could enable external controls, such as limits on non-essential purchases during crises. For example, during a COVID-like situation, governments could restrict purchases to enforce lockdowns.
Governments could reach for unprecedented control over transactions and money supply for political ends, such as the freezing of dissenters’ accounts. Existing mass data collection systems could integrate with digital ID databases, creating frameworks of control.
There are dangers in incorporating AI into a CBDC system. AI's data hunger could transform a digital pound into a tool for unprecedented monitoring, eroding financial anonymity. It could analyse patterns to build detailed profiles, predict behaviour, or nudge users.
If it is introduced as a replacement for physical currency, a CBDC could also hugely disadvantage Britain’s 900,000 unbanked people.
Meanwhile, heavy dependence on digital infrastructure for basic services and amenities exposes the UK to foreign cyber threats or sanctions.
Though the CBDC approach of British governments would differ from that of their Chinese counterparts, we need to learn what we can from the experience of Chinese citizens. China’s digital yuan blends domestic financial control with global ambitions. It functions as centralised digital cash, with the People’s Bank of China maintaining oversight over transaction data.
In 2024, what had previously been a limited CBDC project was rolled out across 29 Chinese cities and 17 provinces. By mid-2025, its user base reached 260 million, with full-scale implementation in retail, utilities, transport, and salaries.
China is now aggressively internationalising the currency via its Belt and Road programmes. Its goal is to weaken global reliance on the US dollar. The currency also enables Russia, Iran, and North Korea to transfer funds discreetly, dodging Western sanctions.
The biggest concern with China's digital currency is its role in enabling a social credit system, rewarding approved behaviours and restricting others.
China’s rapid rollout has tested scalability but arguably stifled private innovation. The UK’s slower pace may foster a multi-money future alongside cash. Only 10% of UK transactions use cash today, but hard currency still has huge benefits we should not sacrifice lightly.
Cash might be messy, but it has substance and weight. You can feel it leaving your pocket. We have a rising problem with personal debt in this country. Much of it is digital debt, a result of people spending more with less forethought, because of the quick and easy use of cashless systems.
Cash also carries no data. Someone can steal my cash without also potentially using it to steal my identity. What’s more, the use of cash provides a buffer against data outages, which have sometimes left millions without access to bank accounts.
A further risk with CBDC is that it might encourage the introduction of a universal basic income (UBI). In this theoretical scheme, governments provide each citizen with a set baseline income each month. The individual can then decide whether or not they want to augment that sum through their own labour.
Limited experiments in countries like Finland have demonstrated that UBI raises more questions than it answers. Among them are questions relating to human psycho-emotional needs: the need to solve problems, to be productive and creative, to contribute to society and to provide for one's family.
Any introduction of a UBI would most likely come in response to existential shocks, such as pandemics or sudden, widespread unemployment.
A link between a CBDC and UBI could turn a payment tool into a mechanism for social engineering. Digital currency could track spending in real time, linking payments to behaviour.
It could impact the welfare system. For its part, China has not introduced a widespread UBI, but it has trialled programmable consumption vouchers with expiry dates and restrictions. This gives the central government greater control over targeted growth, but surrenders citizen freedoms.
For some in China, this may be acceptable, but for the very enterprising and the poor, it might not be such a blessing.
In the end, the benefits of a CBDC might include faster, cheaper payments and lower merchant fees. But if it sounds the death-knell of hard currency, the longer-term costs in terms of trust in the financial system, surveillance, and even freedom of expression and movement may prove prohibitive.
We will need to watch closely to ensure that the government and banks conduct explorations in the open.
Hype Versus Substance
The year ahead will introduce a new level of hype into most things technological. In some respects, hype will win out over substance.
For example, some in the media will present AI agents as the next ChatGPT, transforming personal life and corporate decision-making. But true performance levels will lag, especially when ageing computer systems limit adoption.
For a time, many AI agents will remain narrow in focus and subject to breakdown. “Agent washing” - marketing basic automation as sophisticated agentic work - will become a problem. It might cause consumers to steer clear of some forms of AI altogether.
For governments like our own, there is money to be made by attracting AI developers and offering them tax breaks and other special considerations. Big tech owners are, predictably, excited by this. They will hype their products to draw the money.
While governments trumpet their desire to attract investment, little is being said about spending money to ensure that technology is developed and used in ethical and responsible ways.
Hype will continue to exceed substance when it comes to the push toward artificial general intelligence (AGI). Some technologists, including Elon Musk and Sam Altman, claim that AGI will arrive by 2029. In reality, true AGI remains a more distant prospect, given AI's persistent problems with hallucination, bias, and scheming.
There will also be hype surrounding brain-computer interfaces. Some such systems will be pitched as a kind of “iPhone moment” for the brain. But ethical, safety, privacy, and technical issues will delay approvals and limit everyday use.
A certain amount of hype will also surround generative AI in creative industries. For example, we’ll likely hear reports about small-budget AI productions that spell the end of Hollywood’s film industry. In reality, copyright lawsuits, deepfake bans, and inconsistent quality will slow rollout.
In 2026, we will hear about the imminent widespread use of next-generation connectivity across the UK. Much of this will be hype. The UK government aims for nationwide 5G coverage by 2030. In the UK, however, 6G, with its ultra-low latency, is considered a 2030s technology, with only limited research tests from 2026 to 2030.
Hype will not help companies fulfil their promises. One leading study suggests that, over the next few years, 70% of AI projects may fail due to over-promising.
In the face of the hype, economists will continue to warn of an imminent “bubble and burst” effect in the world of machine learning. Some expect it to surpass the burst experienced after the internet bubble of the 1990s.
Most experts agree that the AI sector shows classic bubble-like traits - skyrocketing valuations and overhyped promises. A correction or burst is probable soon, but it won’t be catastrophic if we as individuals and organisations insist on disciplining the digital and treating AI as our servant rather than our master. If that is the case, a bubble burst might even lead to more efficient innovation.
Thankfully, there are areas of technology that are being under-hyped. One worth watching is the fusion of AI with biotechnology. This is moving ahead quietly.
New AI that simulates gene effects will raise ethical headaches. These include leaks of private genetic information; unfair biases against poor or minority groups; playing God with human embryos; and unequal access to top-tier treatments, especially for the economically challenged.
Though there are considerable challenges ahead, the life sciences might see some of the most rapid expansion. They might also deliver some of the best real-world value if ethical puzzles can be solved.
Conclusion
Taken together, the shifts and trends outlined in this report suggest that 2026 will not simply be another year of incremental innovation. It might well be a period in which the social contract around technology is, to some degree, rewritten.
If governments, businesses and citizens treat AI, robotics, biometric systems, digital currencies and cyberweapons as inevitable results of random evolution, the result might be heightened inequality, surveillance, environmental strain and new forms of conflict.
If instead they insist on transparency, human agency, high ethical standards, robust regulation and a commitment to the common good, these same technologies might help provide new types of work and enhance healthcare, privacy, democracy and environmental stewardship.
Our task now is not necessarily to prevent change, but to understand that not all change represents progress. Change must be directed with wisdom, moral courage, foresight, human compassion and empathy. We must ensure that the potent digital systems within our reach remain our servants, or at best our collaborators, rather than our overlords.
Mal Fletcher
Chairman, 2030Plus
December 2025
Copyright, Mal Fletcher 2025