With ongoing Russian aggression against Ukraine jeopardising post-war peace in Europe, talks about strengthening the European Union’s energy security have come to the fore and are capturing media headlines. At the same time, however, European legislators are putting Europe’s digital security at risk, leaving the continent vulnerable to disinformation spread by malign actors.
Russia’s attack on Ukraine has shattered the idea that peace in Europe can be taken for granted. The EU’s response was swift, and the sudden cohesiveness of its member states has been a positive surprise. Among the sanctions was the suspension of the broadcasting activities of the Russian state-controlled outlets Sputnik and RT/Russia Today in the EU. Some countries in central Europe, a region particularly targeted by Russian propaganda and disinformation, went even further in protecting their information space.
The Czech top-level domain administrator CZ.NIC blocked eight selected disinformation websites allegedly jeopardising state security. Similarly, the Slovak parliament very quickly passed amendments to the Cybersecurity Act, giving power to the Slovak National Security Authority to block websites with "harmful content." This resulted in several online news portals with alleged ties to Russia being shut down.
In light of the current geopolitical crisis in which Europe has been caught, there is a natural demand for acting quickly and with resolve. Yet, even as many governments, politicians and NGOs call on digital platforms to remove harmful content quickly, legislative activity seems to be going in a different direction. The EU’s landmark content moderation law – the Digital Services Act (DSA) – is slowly taking shape in negotiations known as trilogues. Although there is still a lot of work to be done, the current path of the negotiations points to a concerning outcome for Europe’s resilience against disinformation spread by malign actors.
In the quest to increase transparency and consumer protection, Article 15 of the DSA states that providers of hosting services should inform service recipients with a specific statement of reasons every time they "remove or disable access to specific items of information provided by the recipients." In practice, this means that a platform such as YouTube would have to notify the creator of an uploaded video every time their content was "demoted", whether through a reduction in visibility or a change in ranking recommendations. This would result in a huge volume of notifications every day.
We are all familiar with cookie notifications on websites, but this notification requirement could far outweigh them in terms of impact on user experience. Moreover, other DSA articles deal with user redress mechanisms such as internal complaint-handling systems and out-of-court dispute settlements. While these seem benign enough, they have the potential to affect Europe’s security and deserve closer scrutiny.
Firstly, disinformation spreads quickly, and lengthy notification and redress procedures might impair the ability of platforms and governments to react to it. Just imagine that hosting services could not take down clearly fake videos spreading disinformation about the war in Ukraine before properly informing their creators and going through a lengthy internal or external dispute settlement that could take weeks or months. Blocking measures such as those taken by the Czech or Slovak authorities might raise questions about freedom of speech, but they underline that there is no room for hesitation where fake news has the potential to endanger national security.
Secondly, the transparency required by Article 15 could do more harm than good. Malign actors are already working diligently on ways to circumvent platforms’ content moderation mechanisms, for instance by posting several very similar versions of the same video to see which ones get taken down and which ones escape a blocking decision. Providing them with a detailed statement of reasons every time content is removed or limited would effectively tell them how to spread their fake news more efficiently next time.
Thirdly, redress mechanisms could be misused by third parties (potentially funded by hostile governments) to flood platforms’ complaint-handling capacities with unsubstantiated requests, paralysing their ability to deal with legitimate user appeals. Recent amendments to the DSA aim to strengthen the complaint-handling mechanisms by granting users wider appeal rights (e.g. the right to contest a mere reduction in the visibility of content) or requiring human review. These requirements could become overly onerous if there is a flood of illegal content from malign actors, ultimately limiting the ability of platforms to react quickly and efficiently.
Despite all of this, there are potential solutions. For instance, the DSA provisions on the statement of reasons or redress mechanisms could focus solely on "hard content" restrictions, while excluding decisions related to ranking or recommendations. Further carve-outs should be provided for high-volume commercial spam or deliberately organised influence operations to protect the efficiency of redress mechanisms. Put simply, the EU must take the new security situation into account and avoid making itself unnecessarily vulnerable in the name of transparency and consumer protection on the road towards a fair Digital Single Market.
Filip Kužma is a research analyst at the EPPP – European Public Policy Partnership.