This blogpost is part of a series of short commentaries on the European Commission’s proposals for a Digital Markets Act and a Digital Services Act, released on 15 December 2020. Stay tuned for more.
Digital advertising has become the bread and butter of digital platforms providing content and services online. The highly anticipated DMA and DSA proposals include provisions that jointly tackle issues that have surfaced in these opaque markets. The overarching goals are far-reaching, with rules aimed at curbing structural market concerns caused by gatekeeping platforms and at strengthening online users’ rights. When it comes to advertising-specific rules, the overarching theme in both documents is transparency.
Having accurate, complete information is important for making good decisions. However, people’s behaviour is influenced by environmental constraints and internal triggers (such as moods, emotions and psychological inclinations), sometimes leading to poor decisions despite the information at hand. Looking at the problems in the digital advertising markets, increased transparency is an obvious starting point. Looking at the actual behaviour of (business) users, however, doubts arise as to whether the DMA and DSA do, in fact, offer a solution with teeth.
Digital advertising as hypernudging?
Digital platforms with ad-funded business models operate on multi-sided markets. They match and connect advertisers and publishers with the desired audiences. Some gatekeeping platforms hold a systemic position within the digital advertising value chain (on both the advertiser- and user-facing sides of the market), which allows them to collect, combine and process (business) users’ data and to target individual users with dynamically personalised digital ads across their many business domains.
This granular picture of users’ context allows targeting the right consumer, with the right message, by the right means, at the right time. As the norm in modern marketing is making everything about ‘you’ as an individual, the question arises whether targeted advertising as experienced on gatekeeping platforms goes beyond enticement and has stronger, potentially more problematic, qualities with the power to recalibrate users’ behaviour.
Established behavioural economics research shows that in complex decision-making environments people often rely on mental rules of thumb to simplify their assessment of options. Rationality is limited, and decision-making can be affected by moods, emotions and psychological inclinations, possibly leading to systematic errors in thinking that result in poor or unfavourable outcomes (see concrete examples here).
Thus, the ordering of choices and the delivery of relevant information matter, as the way people perceive their options will impact their choices. In other words, with information about a specific user’s characteristics, a platform may nudge them towards predictable outcomes that suit its profit-driven ends. As information-rich digital environments are particularly complex, the way options are sorted and presented is important in influencing users’ decisions.
With their immense data and AI capabilities, gatekeeping platforms are uniquely positioned to engage in hypernudging, one of the most sophisticated data-driven nudging techniques that has recently emerged in the literature. Looking at the characteristics of targeted advertising (dynamic, personalised, deductive/predictive, etc.), the initial inkling is that some forms of targeted advertising are attempts to hypernudge. If that is true, increased transparency obligations may fall short in addressing the negative effects of hypernudging felt by individuals and by the market as a whole.
Emerging concerns: what does transparency have to do with it?
Digital advertising markets are complex and opaque. On the advertising side of the market, a handful of gatekeeping platforms is able to impose non-transparent conditions on business customers that impede their ability to assess the value and quality of the services provided. On the user-facing side, the current opacity of the digital advertising markets creates challenges by enabling the spread of misleading, harmful or outright false ads in commercial and political domains. Users do not clearly understand the extent to which their data is being used, why certain ads are shown, and what goals they serve.
The solutions proposed in the DMA and DSA are elegantly simple: imposing obligations that require (gatekeeping) platforms providing digital advertising services to give information that helps to address the abovementioned concerns. The strength of the proposed measures lies in reinforcing the accountability of powerful platforms that for a long time were shielded from outside scrutiny.
Their Achilles’ heel, however, is that (business) users remain in a weak position vis-à-vis the gatekeeper: despite knowing of unfavourable conditions, advertisers and publishers may have no better service-provider alternatives; similarly, due to users’ behavioural inclinations, providing more information about specific ads does not necessarily translate into better decisions.
Moreover, the dangers of hypernudging are broader than merely keeping advertisers and consumers in the dark. These techniques are designed to bypass users’ rationality; they can be used to subvert autonomous choice and potentially manipulate consumers into unwanted outcomes. When gatekeepers engage in such practices in a large-scale, systemic manner, we come closer to digital market manipulation scenarios.
Of course, not all is doom and gloom for targeted advertising: research suggests its welfare effects are generally ambiguous, and the effectiveness of digital ads in shaping consumers’ preferences is, so far, limited. Still, with fast technological developments and the growing possibilities for engaging in potent hypernudging techniques, now is the right time to take future-proof steps to stop the emerging harms, and increased transparency in digital advertising is only one of them.
This post also features on the blog of the Utrecht University focus area Governing the Digital Society.