2023 is and will be significant in many ways, from both a (geo)political and a technological perspective. One development currently stands above all: generative AI and its integration into our everyday lives. Generative AI, that is, artificial intelligence capable of producing content, be it text, image or video, presents a new reality and will soon influence our democracies and social structures in ways we cannot yet fully imagine. It therefore requires immediate attention and action from policy-makers and experts alike.
A new revolution
Since the late 2010s, many thinkers, scientists and experts have compared AI to other great human inventions to demonstrate its impact on society and its significance, both in terms of advantages and threats. Sundar Pichai, Google's CEO, has said AI would be more significant for humanity than fire. Elon Musk has warned AI could be more dangerous than nuclear warheads. And after the arrival of ChatGPT, Geoffrey Hinton, sometimes called the "godfather" of AI, a scientist who recently resigned from Google while warning about AI's potential detrimental effects on society, compared it to the invention of the wheel or electricity.
It is now clear we are living amidst another technological revolution. While many of us cannot wait for improvements in healthcare, government services and efficiency through reduced administrative burden at work, the speed of this revolution should also be a cause for concern, especially for democracies already struggling with polarization and a lack of trust.
The integration of generative AI into our lives will be completely different from any previous technological revolution, mostly due to the scarcity of time we have to adapt. Some 150 years passed between Gutenberg's invention of the printing press and the publication of the first mass-produced newspaper. Turning the first prototype of a TV into a device owned by half of US households took 30 years. Getting half of the world's population to use the internet took 27 years from its invention. Facebook needed 4 years to reach 100 million active users; ChatGPT reached that number in 2 months. With every invention, the window for society to understand it and implement it properly shrinks dramatically. We are no longer thinking in decades or years but in months and days.
Nothing is ready
Unfortunately, neither our brains nor our legislative and education systems seem ready to take on this challenge. Neuroscientists have long suggested that information overload and digital media, especially social media, affect our brains, emotions and social behavior. Information overload makes our brains incapable of selecting and remembering the most important information, while the engagement-driven design of social media can lead to addiction, difficulties in socializing, loneliness and mental health issues. Although our brains are incredibly adaptable, we have not yet learned how to speed them up to catch up with technology.
Education systems usually do not facilitate this adaptability; on the contrary, they seem to slow it down. Education largely based on past centuries' styles of teaching, with restrictive curricula emphasising learning "by heart" instead of encouraging creative thinking, problem solving and media literacy, is another part of the problem.
The same applies to our ability to address new developments through regulation. The regulatory system is still set up to serve the needs of the 20th century, when stakeholders had the luxury of months, even years, to debate, draft and implement new legislation. That luxury is long gone, but lawmakers do not seem to fully grasp it.
Social media are a prime example of the gap between what regulators need to address and what they are able to address. Over the past 15 years, social media have repeatedly been shown to influence democracies and societies worldwide, yet the transatlantic community still cannot agree on how to regulate them. The EU has taken the initiative with legislation such as the Digital Services Act, which came into effect last year, but it is not sufficient, as it does not adequately address the core of the problem: social media's use of simple AI, namely recommendation systems that select content and advertising for users based on their personal data.
AI vs. democracy
Having witnessed social media's ability to reshape information spaces and power dynamics in democracies, we can expect generative AI to do the same on steroids. In societies suffering from polarization or fragmentation, the simple AI algorithms social media use to recommend content have exacerbated these trends. Tailored content selection based on one's "interests" reinforces users' existing assumptions, contributes to creating alternative realities for different groups of people and can lead to radicalization. This can be witnessed in the US, where Democrats and Republicans increasingly view each other as evil, but also in Slovakia, where a part of the population, shaped by a few opposition parties and the sources supportive of them, categorically distrusts everything said by quality media, the president or government officials.
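To see how such a feedback loop works in principle, consider the following minimal sketch in Python. It is a toy model, not any platform's actual algorithm; the topic names and weights are illustrative assumptions. The point it demonstrates is simple: if content a user engages with is weighted more heavily in the next round of recommendations, a small initial lean compounds until one topic dominates the feed.

```python
# Toy model of an engagement-driven recommender (illustrative only;
# not any real platform's system - all names and numbers are assumed).
import random
from collections import Counter

TOPICS = ["politics", "sports", "conspiracy", "science", "culture"]

def recommend(profile, n=5):
    """Pick n items to show, weighted by the user's past engagement per topic."""
    weights = [profile[t] for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

def simulate(days=30):
    # Start nearly neutral, with only a slight lean toward one topic.
    profile = Counter({t: 1.0 for t in TOPICS})
    profile["conspiracy"] = 1.5

    for _ in range(days):
        for topic in recommend(profile):
            # Feedback loop: each item shown from a topic makes that
            # topic more likely to be shown again the next day.
            profile[topic] += 0.5

    total = sum(profile.values())
    return {t: round(profile[t] / total, 2) for t in TOPICS}

print(simulate())
```

In runs of this toy model, the slightly favored topic typically ends up with an outsized share of the simulated feed after a few weeks, which is the dynamic behind the "alternative realities" described above.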
Advanced forms of AI will be able to elevate content recommendation to content creation. Articles, images and videos will perfectly cater to our own needs, fears and assumptions about the world around us, which has an objective potential to further mess with our minds.
Consider Slovakia, a society particularly prone to believing disinformation and suffering from historically low trust in institutions, according to GLOBSEC Trends polls. Based on the konspiratori.sk database, there are currently around 280 Czech- and Slovak-language websites producing problematic content in the form of conspiracy theories, disinformation or extremism. At the moment, people spend time writing these texts and creating shocking clickbait titles, images, collages or subtitles for videos. Given that these are low-quality materials mostly based on content already written elsewhere, AI can quite easily take over this task or make it easier for the so-called authors to produce such content. That would not only accelerate the production of manipulative, low-quality content but also free up more time for perfecting the tactics used to target vulnerable groups.
Not to paint only a negative picture: AI can also help journalists and experts produce quality information much more quickly. But since conscience, journalistic standards and ethical requirements compel them to produce trustworthy content, their production will likely remain slower than that of conspiracy theory producers, as is the case today.
Loss of trust
If the information space is flooded with AI-generated texts, images and videos that mimic political representatives, leaders and experts, and exploit vulnerable groups' frustrations and fears, we will face yet another wave of distrust and confusion about what is true and what is false. It could lead to movements, protests or political turmoil based on completely made-up information created by AI. Yuval Harari, one of today's great philosophers, even suggests in his latest essay that new cults could emerge, and political manifestos and campaigns could be written and run, by AI.
In short, democracies might face complete disruption if we approach generative AI as laxly and as late as we approached social media. Democracies that find themselves in an already vulnerable state may not survive it.
Preventing disruption
While we cannot be sure these catastrophic scenarios are avoidable, we should do everything in our power to prevent them.
First, we should return to the good old quote popularized by the Spider-Man movie 20 years ago: "With great power comes great responsibility." Top decision-makers at big tech companies have powers comparable to those of the leaders of the world's largest states. Their decisions influence masses of people irrespective of state borders; they have the ability to facilitate genocides, assist or ignore autocracies, and help or destroy democracies. While the balance of power and responsibility works reasonably well in the political sphere, it has been mostly absent among big tech leaders, whose power has not been matched by responsibility for their (in)action.
Efforts to motivate big tech to self-regulate for the "greater good" of reducing malign content, such as the Code of Practice on Disinformation, are proving ineffective. This is mostly due to a lack of enforcement, which can only be secured through regulation. Naturally, that is not an easy task, as it requires drawing red lines and boundaries in a space where they do not yet exist. But quality, flexible legislation that leaves room for adjustments as the technology advances can be drawn up if top thinkers from think tanks and academia take part in drafting it. Efforts are already under way at the EU level with the AI Act, which aims to create rules grounded in protecting human rights and safety, but buy-in and understanding at the national level are necessary too. Given the scarcity of expertise currently available, regulators must be able to bring such experts on board and, crucially, afford to pay them.
Those leading the race for ever more powerful AI must take responsibility for the power they hold over our information spaces and over us as humans. This can only be achieved if AI receives immediate attention, dedication and resources from all democratic political leaders. They must be ready to put AI at the centre of every discussion and future decision. It is the only viable way to protect democracy.
Author: Dominika Hajdu