Democratic Platform Governance
On resilient institutions for bottom-up decision making and user agency in consumer social platforms.
This post was originally published in The Startup on Medium.
Headlines and online commentary in the current news cycle have seen their fair share of coverage of, and responses to, the recent contrasting decisions by Facebook and Twitter regarding political advertising: the former permitting such ads and reaffirming its commitment to existing policy, the latter banning such advertising altogether. While much has been said and written about these policies from a wide range of perspectives (and there are certainly many perspectives surrounding the issue), far less attention has been paid to the manner in which the decisions were made: the procedures by which policies with numerous far-reaching, and sometimes unforeseen, potential consequences were designed, advocated for, and enacted. In both cases, the policies appear to have been made internally by the two companies, likely factoring in inputs and pressures from a variety of sources as well as the political zeitgeist, and were then articulated by their executives to the public at large. Following implementation, users can expect to be subject to the policies and their effects, intended or inadvertent. Political actors and public figures, meanwhile, have certainly been vocal, sometimes aggressively so, about political advertising policies and the effect of social media platforms on issue advocacy and electoral politics in general. It seems, then, that the stakeholders with the least input into these decisions are the intended recipients of such advertising and, one would assume, the group through which it is translated into electoral reality: the users themselves.
Any discussion of decision-making procedures on social media platforms should of course acknowledge, as many often forget, that giant social media platform companies are just that: companies. Companies may respond to public, user, employee, regulatory, and a variety of other types of pressure, but the ultimate decision-making process is a centralized one that occurs internally, with consideration given to the company’s various goals and interests. There are thus few explicit obligations for such companies to design their platforms as forums for public discourse if doing so is ineffective or counterproductive in reaching their objectives.
That being said, these platforms, as meta-communities of users along with the somewhat self-organizing online communities they host, should also be examined from the perspective of governance and social organization, given that they derive much of their value from the activity of and interactions between their users. If these platforms can be considered communities, or places of social aggregation and interaction, there are inevitably questions of social organization, distributions of decision-making power, and mechanisms for consensus within the platforms as social entities. Axios has highlighted that “Facebook’s scale and power have often made it seem more a kind of quasi-sovereign nation than a traditional company”, pointing out the various domains typically reserved for the state in which the company has asserted these quasi-sovereign tendencies, including speech regulation, money, conflict and security, and taxation in the form of data collection. Mark Zuckerberg himself has been quoted as saying that “Facebook is more like a government than a traditional company”, and that the platform has “this large community of people, and more than other technology companies [they’re] really setting policies”. Such comparisons of platforms to sovereign entities should inevitably raise the question of platform governance: if these platforms function as quasi-states, then their decision-making power and governance structures appear to be incredibly centralized, with authority both nominally and practically held by the platform company’s executive. Zuckerberg himself has questioned, in a conversation with Ezra Klein, whether “with a community of more than 2 billion people all around the world, in every different country, where there are wildly different social and cultural norms”, a California-based team should be making these policies; he added that he had been thinking about how one can “set up a more democratic or community-oriented process that reflects the values of people around the world?”
The word “process” is key here. It alludes to a focus not on outcomes or the substance of policy changes, but rather on the procedures and mechanisms that are followed or enacted in order to reach decisions concerning these policies. Often, efforts to address the impacts of technology platforms on other domains (political discourse, policy, etc.) focus on remedying the undesirable effects themselves or, at best, attempting to fix the diagnosed root of the perceived problems. Such efforts are largely reactive (a problem arises, and some combination of policymakers and executives advocates ways to solve it) and, while they may be effective to some extent in addressing the changes in question, will generally always be playing catch-up to emerging and unforeseen effects of such platforms. Furthermore, these efforts often hinge upon decisions made by the platform companies themselves, with few formal mechanisms through which input from external parties can be factored in (beyond, of course, formal policy or massive coordinated user exit, both of which have long and intensive cycles of development). How, then, can we move beyond this type of reactive, ad-hoc policy, made in a centralized manner in response to external pressures through unclear channels? A potential solution lies in viewing our real-world institutions for social organization and decision-making as an analogue for platform governance, suggesting a need for resilient institutions that both set the fundamental ‘rules of the game’ for governance and establish mechanisms for flexible adaptation to changing situations. These institutions should be broadly democratic in spirit, perhaps contributing to robust norms of democratic decision-making for critical platform decisions, and should emphasize user agency, participation, and input in a formal way that alleviates concerns of hyper-centralized platform governance. I’ll address each of these aspects of this form of platform governance in turn.
Primacy of Procedures and the Rules of the Game
Firstly, why is it important to focus on decision-making procedures, rather than purely making responsive policy to create the desired platform-society relationship? Given that it’s impossible to foresee every possible consequence of any emergent technology, why is the ideal strategy not to just make plans and policies to address issues as new effects emerge? A focus on governance procedures with established and appropriately broad definitions, as well as effective mechanisms for both enforcement and reinterpretation, holds a number of advantages.
Limiting unintended effects and enabling flexibility. A responsive policymaking approach to platforms whose full societal impact is not yet clear, lacking defined and consistent procedural rules, runs the risk of becoming a patchwork quilt of ad-hoc decisions that may be contradictory, unpredictable, and limited by the information available at the time of each decision. Given time lags and longer feedback loops in the policy sphere, an inconsistent (both substantively and temporally) policy approach may be at best ineffective and at worst give rise to consequences unforeseen due either to limited information or to interaction effects between different responsive approaches. A consistent set of sufficiently broad rules and institutions that focuses on how platforms are governed and on the processes for decision-making, on the other hand, is less prone to these risks: it does not prescribe an outcome deemed desirable at a particular point in time, but rather governs how decisions relating to that outcome are made, an approach less likely to have substantive unintended effects. Furthermore, establishing how platforms are governed may actually lend itself to more flexible policymaking, as it establishes procedures within which policies can be made instead of creating these policies directly as the top layer of platform governance, thereby allowing for more responsive policymaking that is better able to factor in changing situational information.
Stable institutions and rules of the game can be more conducive to innovation. With a robust framework for how platform governance will be conducted and big policy decisions will be made, creators and builders may be more encouraged to experiment with new ideas, technologies, and business models. Instead of facing a constant risk that ad-hoc, responsive policy will render particular innovations untenable, creators can rely on consistent, predictable procedures through which policies affecting their innovations will be made, enabling a more innovation-conducive environment both on platforms and in the platform space more broadly. Furthermore, institutions with effective channels for input will ensure that creators (along with other stakeholders) have, at minimum, a way to formally provide feedback and input on decisions that affect their innovations.
Long-term relevance. Situational factors relating to platforms, and to emergent technologies in general, are likely to change fairly rapidly in tandem with the pace of technological development itself, with some time lag. The factors that shape the ideal procedures by which platform policies are deliberated, considered, and made, however, are likely to change much more slowly. In other words, how we would like platforms to be governed and platform policy decisions to be made is likely to be much more consistent in the long term than what we would like those decisions to be. As such, creating institutions around governance and decision-making procedures for platforms is likely to have more long-term relevance and applicability than responsive, substantive policymaking.
Formal mechanisms for stakeholder input. In the status quo of platform governance, policies are often made by platform companies in response to a variety of pressures from stakeholders. These inputs, however, tend to be channeled through informal mechanisms such as public pressure, which may prove unreliable for capturing relevant stakeholder input consistently and repeatedly as new issues arise. Creating institutions that establish governance procedures with formal mechanisms for stakeholder input can thus lessen the reliance on executive goodwill and responsiveness to external or internal pressures. It allows for a repeatable procedure through which stakeholders can provide input and be assured it will be factored into decisions, and a predictable process by which executives and top-level decision makers can access this input and incorporate it into policy decisions.
Baseline consensus for more constructive policy discourse. Finally, having established rules and norms surrounding platform governance may have the desirable effect of enabling more constructive policy discourse in the space in the future. If such rules and norms are agreed upon via broad consensus, this establishes a baseline level of agreement and common principles from which policy positions can be developed and further debate and discussion can occur. Having this common consensus as the baseline for specific, substantive debates may lend itself to more constructive policy discourse: because all positions start from a set of procedures and rules of the game agreeable to all, agreeable policy outcomes may be more easily reached by all parties.
Democratic in Nature with a Deference to User Agency
Secondly, what should these rules and norms for platform governance emphasize? The institutions for platform governance and decision-making should be democratic in nature, placing a premium on user agency and on the ability of users to provide input on policies. While “democratic in nature” is a somewhat ambiguous phrase, I use it to refer to a platform governance system in which some portion of the decision-making power concerning the platform’s policies as a whole is dispersed across the community of its users. Some of the reasons for an institutional design along these lines are as follows.
Control risks of centralized decision-making power with institutional checks. One of the primary concerns that arise in any conversation around platform company regulation is the extraordinary level of discretionary power such companies exercise in making policy or product changes to the platform. Regardless of what these decisions entail substantively, the prospect of platform companies unilaterally making decisions that affect the user communities they host represents a significant level of centralized power. Such centralization is to be expected given that platforms are created, provided, and maintained by companies; nevertheless, it may present certain risks, particularly as the issues being decided veer into the domain of what is traditionally decided upon by public institutions. For instance, when platform companies make decisions on speech regulation, such as what qualifies as political speech, decisions that affect public discourse and that have traditionally been handled by courts are being made unilaterally by a single organization, with external input factored in via unclear mechanisms, which may increase the likelihood of uneven playing fields in the public sphere. When users, both as individuals and as a collective, hold some portion of this decision-making power, the total body of decision makers on a platform becomes more representative of the public that these decisions may affect (note, however, that it is very unlikely for the community of users on any platform to be fully representative of the public at large).
Furthermore, it is important to remember that these organizations consist of individual actors. Executive turnover, therefore, represents a significant risk with this highly centralized allocation of decision-making power. When high-level decision making personnel within the organization change, it presents the possibility of drastic breaks in continuity with regards to policies and governance of various aspects of the platform. The collective body of platform users, on the other hand, represents a certain level of continuity in the aggregate that would suggest that platform governance policies are likely to be more consistent over time and less dependent on executive goodwill than if all such decisions were made internally by the governing organization (the platform company).
Centralized decision making may also make it difficult for platforms to maintain high levels of user trust, as noted in this article examining Facebook as a state through the lens of political economy, particularly given current trends of increased user scrutiny of such platforms. Greater user trust would be conducive to more effective platform governance, and one way of achieving it would be to allow users to exercise greater individual agency, where possible, and to participate more broadly in policy decisions within the platform ecosystem. Users may be more inclined to trust the platform if its governance procedures included formal mechanisms for user input. Such mechanisms also imply greater transparency in governance policies (users would need to be made aware of various decisions in order to provide input), which would further compound the improvements to user trust from a democratic platform governance structure.
Governance that is more likely to be substantively neutral, allowing for diverse communities. Another concern that often arises when companies make platform policies unilaterally and in a centralized fashion is that these decisions, particularly with respect to political content and access, may unfairly privilege certain perspectives or opinions at the expense of others, regardless of the intent of the content policies. Granting users, as a diverse, multifaceted set of individuals and communities, a portion of the control of the platform serves as a partial check against this kind of imbalance, whether intentional or accidental, both by allowing users a greater say in platform content and by permitting them to exercise individual preferences in how they experience on-platform content.
Easing tensions between consumer social platforms and public discourse. How this potential effect might work is less clear. One possible pathway: if the collective body of decision makers on the platform more closely reflects the collective body of decision makers in real life along the near-infinite set of qualities that could define a community, then tensions between on-platform content and public discourse, in the form of distorted public opinion and increased polarization, might be lessened, as the two communities are more closely aligned. For instance, if the platform operator is the sole decision maker, the output content will be the user-created input content run through whatever algorithms and policies the platform operator decides upon. If, however, the platform operator were one of many members of the decision-making body, the output content might more closely resemble the input discourse, which in turn might more closely represent public sentiment.
Envisioning Platform Governance
What, then, might these institutions for platform governance that focus on the rules of the game and emphasize user-driven decision making look like? While specific designs could be discussed and debated ad nauseam, some foundational qualities might make this kind of platform governance structure — one that establishes agreeable rules of the game and allows for user agency — work better for the reasons previously described. These qualities represent an ideal case and might form the beginnings of a kind of rubric that such a system could be assessed against, though they do not necessarily provide insight as to how each of these qualities could be achieved. This governance structure should ideally be:
Enforceable. There must be technical, social, or other mechanisms that ensure compliance with the established rules and procedures for making decisions on platform policy, as well as with the means of altering those rules and procedures, such that policies made outside these agreed-upon rules are technically or otherwise untenable on the platform.
Incentive-compatible. To complement the enforcement of the rules and norms surrounding the decision-making process, an ideal platform governance system should be designed such that the rules of decision making lead to outcomes that are consistent with the incentives of each party at the individual level (individual users, the platform provider, other stakeholders).
Consensus-driven with individual user choice. The rules and procedures determining how platform decisions are made should drive the decision-making process toward consensus among the community of participants. Because one particular party, the platform provider, starts off with significantly greater influence than any other, decision-making procedures that simply account for greater volume, simple or absolute majority rule, or other competitive methods of reaching a decision may recreate the overcentralization seen in the status quo (i.e. if the platform provider holds 51% of decision-making power and a competitive simple-majority structure is implemented, the system is effectively equivalent to complete centralization; a brief sketch after this list illustrates the arithmetic). For decisions that affect only a single user, without any kind of externality on other stakeholders on the platform (if such decisions arise), users should be permitted to make these decisions themselves in determining their own experience of the platform.
Open-ended and interpretable. Given that an important reason for a focus on decision making procedure and platform governance institutions is the potential to account for unforeseen changes in relevant factors, these rules of the game should be sufficiently open-ended such that their interpretation can evolve with other factors in the platform environment. Overly specific or targeted rules of the game defeat the purpose of focusing on procedures instead of specific platform policy outcomes themselves.
Iterative with mechanisms for change. It is incredibly unlikely that any design of platform governance institutions will achieve all of the desired effects on the first try. In the spirit of allowing for changing and unforeseen circumstances, these platform governance institutions should allow for iterative changes to be made, with mechanisms that establish an orderly, agreeable system for making changes to the rules and norms surrounding platform decision making.
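To make the arithmetic behind the consensus point above concrete, here is a minimal sketch in Python, not drawn from any actual platform's voting mechanism, that uses hypothetical party names and weights to show why simple majority rule with a 51% stakeholder is effectively indistinguishable from unilateral control, while a higher consensus threshold forces broader agreement.

```python
# A minimal, hypothetical sketch: weighted voting with a dominant stakeholder.
# Party names and weights are illustrative assumptions, not real data.

from itertools import product

# Hypothetical decision-making weights: the platform provider holds 51%,
# the remaining 49% is spread across user blocs and other stakeholders.
weights = {
    "platform_provider": 0.51,
    "user_bloc_a": 0.20,
    "user_bloc_b": 0.20,
    "other_stakeholders": 0.09,
}

def passes(votes: dict[str, bool], threshold: float) -> bool:
    """Return True if the weighted share voting 'yes' meets the threshold."""
    yes_weight = sum(weights[party] for party, vote in votes.items() if vote)
    return yes_weight >= threshold

def provider_decides_alone(threshold: float) -> bool:
    """Check whether, for every possible combination of votes by the other
    parties, the outcome always matches the platform provider's own vote."""
    others = [p for p in weights if p != "platform_provider"]
    for provider_vote in (True, False):
        for combo in product((True, False), repeat=len(others)):
            votes = dict(zip(others, combo))
            votes["platform_provider"] = provider_vote
            if passes(votes, threshold) != provider_vote:
                return False
    return True

print(provider_decides_alone(threshold=0.51))  # True: simple majority collapses into centralization
print(provider_decides_alone(threshold=0.75))  # False: a supermajority requires broader agreement
```

The point of the sketch is only that the choice of decision rule, not just the distribution of weights, determines how centralized governance is in practice; a consensus-oriented threshold is one way to keep a dominant stakeholder from deciding alone, though, as the second case shows, it still leaves that stakeholder with an effective veto.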
The ideas described here represent overarching ways to think about the questions surrounding platform governance rather than any kind of specific design or policy. They are merely a basic set of suggestions as to why platform governance might be considered a question of decision-making procedure, and why it may be desirable to think about procedures that are more participatory and that take into account the role of users as stakeholders in the platform. Amid the considerable noise about how to apply policy and political institutions to tech, we ought to take a step back and ask whether we want these policies to be temporary fixes based on currently available information or long-term, adaptive institutions that are resilient in the face of new and future technologies. Instead of enacting reactive policymaking or retrofitting existing frameworks onto online participatory platforms, we should study our institutions of political economy and social organization to create resilient processes and norms for social media platforms and online communities, in a way that reflects democratic values, respects user agency, and embodies the democratizing spirit of the internet.