By Ruairí Harrison and Anne Koopmans

As the intrigue into the final contours of the Digital Services Act (DSA) gives way to anticipation of the Act’s impact on both the user experience and platform transparency, the emphasis on social media platforms is hard to avoid. Yet as the dust likewise settles on the Joe Rogan-Neil Young controversy for Spotify, how will the DSA affect Europe’s leading streaming service? Further, is the DSA sufficiently ‘future proof’ to address Spotify’s increasingly diversified business model, or is focused regulatory action on digital streaming services necessary?
The Digital Services Act (DSA) and Streaming Services
EU lawmakers recently agreed upon the DSA, the EU’s most ambitious digital rulebook to date – a rulebook expected to fundamentally reshape how Europe regulates its digital landscape. Aiming to create a safe, digital environment where the fundamental rights of users are respected, the DSA includes concrete measures that establish liability standards for online platforms based on their size, role and impact in the digital ecosystem.
At the outset, it must be stated explicitly that Spotify and similar streaming services will be regulated by the DSA. Although media coverage has largely focused on the major social media and eCommerce platforms, the Act is horizontal: it applies across the board to all ‘Online Platforms’ (Article 2(h)). Moreover, given its 134 million monthly active users (MAUs) in Europe, it can safely be assumed that Spotify will be considered a ‘Very Large Online Platform’ (‘VLOP’) under the DSA (Article 25). VLOP status, triggered at 45+ million MAUs in the EU, will come with increased obligations including systemic risk exposure assessments, risk mitigation measures and heightened transparency reporting requirements.
We think it is appropriate that Spotify is included in this scope given how commonplace it is for citizens to use podcasts to access information on matters of public interest. This should come with increased EU-level oversight, because from a systemic risk perspective Spotify is dangerous in a different way from social media giants such as Facebook: the potential for harmful and illegal content to proliferate on podcast-streaming services is unique for a number of reasons.
As the Joe Rogan Covid-19 disinformation debacle demonstrated, citation of sources and impartiality are hard to interweave into a winding conversation about global public health matters. On top of this, unlike the relative simplicity of moderating, for example, an identifiably antisemitic Instagram meme, a podcast may contain two hours of harmless content with a brief monologue of vicious antisemitism embedded within it. Yet as the withdrawal of Neil Young and fellow artists from Spotify demonstrates, these are no longer valid excuses for a platform of Spotify’s considerable size and influence to moderate its service inadequately.
This brings forth both qualitative and quantitative issues: firstly, how effectively can existing automation and human moderation tackle this content, and how varied and culturally adapted are those content moderators? Secondly, are sufficient resources being invested in content moderation at streaming services to survey the endless hours of audio content?
This is where the DSA comes in. Its emphasis on increased transparency obligations for VLOPs should help address the black-box elements of Spotify’s algorithmic processing and allow far greater transparency as to how the platform is meeting its risk exposure and risk mitigation obligations. But how will this transparency function in practice under the DSA?
Spotify’s Voluntary Step towards Transparency – A Wholehearted Effort?
Prior to the DSA’s legislative intervention, Spotify claims to have made several attempts to become a more transparent streaming platform. For instance, in 2021 Spotify launched the Loud & Clear web portal, presenting explicit data on how Spotify streams are determined and paid out. With this initiative, the platform acknowledged that artists and users deserve more clarity on how the music streaming economy handles their data. The Loud & Clear campaign may have been a step in the right direction, as it offers insight into the monetary aspect of the platform, but it remains suspiciously vague about exactly how much data is collected from users.
This is striking, since an analysis of Spotify’s terms and conditions reveals that the platform extracts ever-growing amounts of data: beyond data on pay-outs, photos, location, voice data and personal information are being collected. While the length of Spotify’s terms of service has grown to over 10,000 words since 2013, the underlying code driving its algorithms remains undisclosed; with no legal obligation to share it, the platform offers users alarmingly little transparency.
Intervention by regulatory instruments such as the DSA may be a solid solution to initiate a more transparent cyberspace. The DSA highlights the need for platforms to be held accountable for their societal impact and outlines specific obligations for VLOPs including:
- Assessing the systemic risks on the platform caused by, for instance, illegal content (Art. 26),
- Ensuring measures have been put in place to mitigate the above risks (Art. 27), and
- Improving the user’s experience and understanding of recommender systems (Art. 29).
In this way, the DSA hopes to demystify the secrecy around the algorithmic operations of powerful platforms like Spotify. Once users know how much data is collected from them, a lack of transparency from companies is unlikely to be accepted. This is one point that has brought co-legislators and stakeholders together, with calls from the European Parliament for stronger mandatory risk assessments. It is evident that the strengthened obligations proposed by the DSA should shed light on Spotify’s transparency issues; the Joe Rogan scandal has simply accelerated conversations on why Spotify must be reined in in the same way as the Parliament’s familiar foes – tech giants such as Meta, Google, Amazon and Apple.
The DSA’s Impact on Spotify – A Future-Proof Regulation?
As is typical of the difficulties of regulating inherently innovative industries, the ‘future-proof’ nature of safety and transparency rules for digital platforms is hoped for rather than guaranteed. For one thing, we simply do not know how automation and artificial intelligence will affect, for instance, content moderation practices. Furthermore, as a horizontal regulation, the DSA is not intended to deal with the specificities of different types of digital platforms. Thus, although Spotify will be considered a VLOP under the DSA, certain features of the platform that make it both popular and unique are not catered for within the rulebook. Let us look at three current and future sources of discussion for Spotify regarding future-proofing and the potential for further regulatory action after the DSA.
Firstly, the DSA relies upon monthly active users in establishing its regulatory metrics. As noted above, this is appropriate considering the Act’s horizontal nature. However, in the case of Spotify’s fight against disinformation, the audience reach of specific podcasters is a far more useful metric for assessing where systemic risks exist. With over 3.2 million podcasts on Spotify (2021) and counting, ridding the platform of all harmful content is clearly too arduous a task. Instead, resources must be bolstered so that, at the very minimum, the most popular and influential podcast channels are held to more rigorous standards. One could also point to Spotify’s immense resources to suggest that such standards should not be limited to the largest purveyors of information. The sheer size and consistency of a listenership such as Joe Rogan’s means that, whether he likes it or not, Rogan currently holds more sway over public debate than most national broadcasters.
This undoubtedly has a real-life impact on the quality of the information space. If we choose to hold broadcast media to higher regulatory standards, should this not also be the case for globally popular podcasts with tens of millions of listeners? Increased transparency as to how the most influential podcasters are being moderated – be it the BBC, the New York Times or an independent political commentator – appears to be something which will become more urgent for Spotify and its rivals in the months and years to come.
The Metaverse & Content Moderation
The second issue that Spotify may encounter regarding content moderation concerns its recent tentative move into the Metaverse via livestreaming virtual concerts. Of course, content moderation in ‘real time’ is not exactly a new phenomenon (think Twitch, Club Penguin, or YouTube livestream comment sections). We see this move as tentative because Spotify has been keen thus far to avoid a more interactive user-to-user or community experience. Yet as will be discussed below, Spotify is eager to diversify its revenue streams. If the Spotify Island metaverse amounts to another instance of revenue diversification (now that it has a stable position in both the podcast and music streaming market), then Spotify should look to platforms such as Twitch which have been forthcoming thus far about their approach to online safety.
Through participation in knowledge-sharing online safety initiatives such as the Internet Commission’s Accountability Report 2.0, other platforms can learn from industry leaders. For instance, Twitch prioritises partially ‘devolved’ community moderation, includes high-profile online creators in its Safety Advisory Council, and enforces stricter guidelines for promoted content. As the DSA does not seek to regulate content moderation practices directly, knowledge sharing by industry leaders is crucial at this point for practically effective regulation. Whether this self-regulatory approach to moderation practices will suffice in future remains to be seen.
The Broader Societal Implications of Business Pivots
The final issue exposed by the Joe Rogan saga is the conflict between tech platforms’ profit-driven rationale and the responsibility which they must hold where this rationale leads to business pivots with stark fundamental rights implications. Nowhere has this been more evident than Spotify’s pivot into podcasting. Although such an expansion does not fall directly within the ambit of EU Competition law or the DSA Package, Spotify has learned via prominent artists withdrawing music from the platform that streaming podcasts is far more challenging than streaming music from a public relations perspective. Although this pivot led to Spotify’s self-assured entry into the information space, this entry must come with greater responsibilities. This is because Spotify currently streams innumerable conversations concerning topics with real-life consequences such as public health and public security.
Of course, platforms amend and diversify their core services continuously: see Instagram’s relatively recent entry into online retail services. Yet some business pivots are more consequential than others. The pervading view across Europe seems to be that once a platform enters the information space, it must accept the societal responsibilities that accompany this. Will the DSA’s systemic risk assessment obligation (Art. 26) be sufficient to force Spotify to acknowledge said responsibilities?
Where to Next?
This blog has reflected upon how Spotify is expected to be impacted by the DSA and where we expect significant issues to emerge for the platform in the coming months and years. With Spotify’s bottom line conflicting with its growing societal responsibilities regarding the quality of (and access to) information for public debate, this is by no means the end of the road. If anything, this may just be the beginning of a challenging period for Spotify.
Despite this, a digital streaming service-focused EU rulebook seems surplus to current requirements given that both the DSA and the recently ‘strengthened’ EU Code of Practice on Disinformation have not yet had time to filter down and impact the user experience. In the meantime, we believe it is imperative that platforms continue to do all in their power to redesign their services in line with DSA requirements and engage in industry-wide knowledge-sharing regarding best practices for content moderation.
Anne Koopmans is a law student at Utrecht University. After finishing her bachelor’s thesis on the regulation of content moderation by hosting providers in the forthcoming Digital Services Act, she joined Utrecht University as a student-assistant at the European Law department. With a particular interest in the intersection of law and technology, she is starting her LLM in European Law, track Law and Technology, at Utrecht University in September 2022.