Why I Am Not A Technocrat
E. Glen Weyl
August 19, 2019
In the months leading up to the RadicalxChange conference in March, I wrote a series of critiques of prominent contemporary ideologies (capitalism, statism and nationalism) as well as an attempt to sketch the positive beliefs of the RxC movement. Since that time, however, it has become apparent that I omitted a critical contemporary ideology, perhaps the one with which RxC is most likely to be confused by outsiders (and to which most RxC participants previously subscribed): technocracy. I myself was socialized into a highly technocratic culture. In this blog post I try to fill this lacuna.
By technocracy, I mean the view that most of governance and policy should be left to some type of “experts”, distinguished by meritocratically-evaluated training in formal methods used to “optimize” social outcomes. Many technocrats are at least open to a degree of ultimate popular sovereignty over government, but believe that such democratic checks should operate at a quite high level, evaluating government performance on “final outcomes” rather than on the means of achieving them. They thus believe the intelligibility and evaluability of technocratic designs by the broader public is of little value. Within these broad outlines, technocracy comes in many flavors. Two notable and less democratic versions are the form adopted by the Chinese Communist Party and that of the “neoreactionary” movement, with its celebration of Lee Kuan Yew’s Singapore.
Yet perhaps the most prominent version, especially in democratic countries, is a belief in a technocracy based on a mixture of analytic philosophy, economic theory, computational power and high-volume statistical analysis, often using experimentation. This form of technocracy is widely held among the academic and high-technology elites, among the most powerful groups in the world today. I focus on this tendency because I assume it will be the form of technocracy most familiar and attractive to my readers, and because the neoreactionary and Chinese Communist technocracies have much in common with it, both conceptually and in their intellectual history. Some examples of more extreme versions of this view, likely to be popular among my readers, are common in the “rationalist” community and projects adjoining it such as effective altruism, mechanism design, artificial intelligence alignment and, to a lesser extent, humane design. I will critique each of these tendencies in detail as archetypes of technocracy.
Such rationalist projects are generally “outcome oriented” and utilitarian, and have great faith in formal and quantitative methods of analysis and measurement. Their standard operating procedure is to take abstract goals related to human welfare, derive from these a series of more easily-measurable target metrics (ranging from gross domestic product to specific village-level health outcomes) and use optimization tools and empirical analysis derived from economics, computer science and statistics to maximize these metrics. This process is imagined as taking place overwhelmingly outside the public eye and is viewed as technical in nature. The public is invited to judge final outcomes only, and to offer input into the process only through formalisms such as “likes”, bets, votes, etc. Constraints on this process based on democratic legitimacy or explicability, “common sense” restrictions on what should or shouldn’t be optimized, unstructured or verbal input into the process by those lacking formal training, etc. are all viewed as harmful noise at best and as destructive meddling by ill-informed politics at worst.
The fundamental problem with technocracy on which I will focus (as it is most easily understood within the technocratic worldview) is that formal systems of knowledge creation always have their limits and biases. They always leave out important considerations that are only discovered later and that often turn out to have a systematic relationship to the limited cultural and social experience of the groups developing them. They are thus subject to a wide range of failure modes that can be interpreted as reflecting a mixture of corruption and incompetence of the technocratic elite. Only systems that leave a wide range of latitude for broader social input can avoid these failure modes. Yet allowing such social input requires simplification, distillation, collaboration and a relative reduction in the social status and monetary rewards allocated to technocrats compared to the rest of the population, thereby running directly against the technocratic ideology. While technical knowledge, appropriately communicated and distilled, has potentially great benefits in opening social imagination, it can only achieve this potential if it understands itself as part of a broader democratic conversation.
My argument proceeds in six parts:
- Formal social systems intended to serve broad populations always have blind spots and biases that cannot be anticipated in advance by their designers.
- Historically, these blind spots often lead to disastrous outcomes if they are left unchecked by external input. If this input is left to the outcome stage, disasters must occur before the system is reconsidered rather than biases being caught during the process.
- Failures of technocracy in managing economic and computational systems today bear significant responsibility for widespread feelings of illegitimacy that threaten respect for the best-grounded science that technocrats believe is most important for the public to trust.
- Technical insights and designs are best able to avoid this problem when, whatever their analytic provenance, they can be conveyed in a simple and clear way to the public, allowing them to be critiqued, recombined, and deployed by a variety of members of the public outside the technical class.
- Technical experts therefore have a critical role precisely if they can make their technical insights part of a social and democratic conversation that stretches well beyond the role for democratic participation imagined by technocrats. Ensuring this role cannot be separated from the work of design.
- Technocracy divorced from the need for public communication and accountability is thus a dangerous ideology that distracts technical experts from the valuable role they can play by tempting them to assume undue, independent power and influence.
First, formal, technical knowledge systems, and the cultures that surround and police them, are necessarily narrow compared to the societies they inhabit. They must be, because formal systems and models that captured the full richness of the world they portray would accomplish nothing in terms of the simplification necessary for analysis, much as Jorge Luis Borges noted that a fully accurate map would need to be as large as the region it mapped.
Furthermore, such formal systems are never logically closed and thus never exist on their own but are supported by a community of scientists and engineers that police the boundaries of what is considered valid and valued work within such a knowledge system. Such a community inevitably forms a sociology of its own, replete with norms, standards of behavior, social networks and so forth. Given that such a culture inevitably makes up a small part of the society it aims to serve, and given that the demands of technical work make it very unlikely to be a “representative sample” in any meaningful sense, additional narrowness and bias will result from the social characteristics of those making up the technocratic class.
Some examples of these biases are:
- Failure to consider gaps in the formal framework that do not impact members of that social class.
- Implicit assumptions treated as purely technical matters often turn out to carry ideological significance, which the community ignores because it seems “obvious” within its own politics.
- A tendency to defend the actions of the technocratic class that serve personal or narrow class interests (“corruption”), and to obscure such actions using technocratic language.
The more insulated a technocratic class is from the rest of society, the more distant its formal language is from the language of the broader public. And the less it feels the need to justify its analysis and reasoning outside the technocratic class, the more likely are all of these failure modes. Especially when that insulation is severe, even a deeply “well-intentioned” technocratic class is likely to have severe failures along the corruption dimension. Such a class is likely to develop a strong culture of defending its distinctive expertise and status, and will be insulated from external concerns about the justification for that status.
Second, historically, these failures of technocracy are not simply theoretical possibilities. They are among the oldest and most persistent concerns in political thought and have led to many of the most extreme human catastrophes. In the Western intellectual tradition, the technocratic outlook stretches back at least as far as Plato, whose Republic advocates rule by an elite class of disinterested “philosopher kings”. Since that time, repeated attempts to establish such a class have been met with failure after failure (and a few great successes, which are, as we discuss below, the exceptions that prove the rule) along precisely the lines outlined above.
Many classic works of political philosophy, including much of F. A. Hayek’s writing, have focused on the dangers of technocracy. Perhaps the definitive treatment is James C. Scott’s Seeing Like a State, especially given that Scott highlights how a wide range of technocracies (whether corporate or governmental) can trigger such disaster.
Some leading examples, from Scott and beyond:
The Holodomor: One of the three largest genocides of all time was a man-made famine in Ukraine in the early 1930s. It resulted from a systematic policy of the Soviet government of the time aimed at extracting grain surpluses to fund industrialization. Yet most historical accounts do not suggest Stalin or other Soviet leaders ever intended, or even would have countenanced, the deaths of the millions whose lives the Holodomor took. This stands in sharp contrast to the other two largest genocides, those perpetrated by the Nazis, in which extermination was the explicit goal of policy. How did the Soviet state “accidentally” exterminate several million people?
Stalin and other top leaders were trying to tap, to the maximum sustainable extent, grain surpluses from Ukraine, the most fertile region of the Soviet Union, to fund industrialization. For obvious reasons, this policy was not particularly popular in the Ukrainian SSR. However, only a tiny fraction of the deaths in the Holodomor were associated with direct state imprisonments or executions; Soviet penetration into and expertise on Ukrainian society was sufficiently deep that direct resistance was difficult. The problem, instead, was that the Soviet leadership did not receive clear indications that, as a result of reduced incentives to grow and adverse weather conditions, its demands were cutting not merely into the surplus but into the food supplies and seed stores necessary to sustain the population, until it was too late.
The source of this communication failure was the extreme informational and trust isolation of Stalin and his planning circle. This group had become convinced that reports on the state of the Ukrainian population and seed stocks were being systematically distorted for political gain by Ukrainian nationalists and their foreign backers. They thus distrusted early warning signs and treated those providing them as tools of state enemies, often punishing those who conveyed such warnings. This in turn suppressed further signals of impending disaster. Cycles of this sort escalated until the evidence of famine had become overwhelming, at which point it was too late to avert catastrophe. Amartya Sen documents similar examples in his classic work on famines in authoritarian regimes. China’s Great Leap Forward under Mao Zedong is perhaps an even cleaner example (though one about which I know less), given that Mao clearly did not intend large-scale deaths.
“High Modernist” Urban Planning: Inspired by Baron Haussmann’s mid-nineteenth-century redesign of Paris, many mid-twentieth-century urbanists sought to design or redesign cities “from the top down”. These “high modernist” projects had limited citizen input, a highly specialized staff with extreme confidence in the power of science and technology to transform cities, and great power to sweep away existing structures. Prominent examples include Robert Moses’s interventions in New York City and the housing projects, throughout the developed and especially the Soviet worlds, led by Le Corbusier and his associates.
As observed by Jane Jacobs, the journalist who became the greatest urbanist of the twentieth century, these designs ignored, and thus ended up destroying, much of the microstructure of cities that made them safe and livable. They were based on abstract categories that saw cities “from above” and built a science around this vision. They missed the “on-the-street” sociology that Jacobs built up by walking through cities (especially New York City) and observing the life of residents.
The canonical example of such a high modernist failure was Brasilia, the de novo capital of Brazil, built in the mid-twentieth century to a plan by Lúcio Costa, with principal buildings by architect Oscar Niemeyer. Every space of the city was designed to impress when seen from above or viewed devoid of people. Every use was clearly demarcated, and the purposes of the State were marked on each space. As it turned out, these were the only virtues of the city, which was soon both extremely congested and yet a ghost town in its wide-open, structureless canyons. Astonishingly over budget and yet feared or resented by most who were forced by government jobs to move there, Brasilia stands as a monument in glass, steel and stone to the arrogance of technocracy. More contemporary examples are the ghost cities of China.
The Phillips Curve: A centerpiece of mid-twentieth-century political economy was Keynesian management of the business cycle by econometric experts. They used historical relationships between employment and inflation to adjust fiscal and, to a lesser extent, monetary policy toward full-employment targets also inferred from historical data. The basis of this analysis was the common assumption that the scientific observer and technologist is external to the system she controls, and that the relationships measured are stable over time. The “stagflation” experience of the 1970s, with simultaneously high unemployment and inflation, challenged these assumptions.
Economists and eventual Nobel laureates Robert Lucas and Christopher Sims argued that economic models cannot be separated from the world itself, because policy choices informed by economists’ work shape the behavior of the very economic relationships they model. This is most clearly stated in economist Charles Goodhart’s contemporaneous law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” Lucas argued that the large social distance that had grown between business leaders and entrepreneurs on the one hand and econometric policy planners and central bankers on the other led planners to underestimate the sophistication of businesses’ responses to policy changes. Once accounted for, these “rational” expectations undermined many of the potential benefits of active business cycle management and converted expansionary policies into primarily inflationary ones.
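To make this failure mode concrete, here is a minimal, hypothetical simulation (all parameters invented for illustration) of the Lucas/Goodhart point: a planner estimates a stable-looking inflation–unemployment trade-off from data generated while expectations were anchored, then finds the trade-off vanishes the moment the policy exploiting it is anticipated.

```python
import random

# Toy expectations-augmented Phillips curve (parameters invented):
# unemployment responds only to the *unanticipated* part of inflation.
U_STAR, B = 5.0, 1.5  # "natural" unemployment rate; surprise sensitivity

def unemployment(inflation, expected_inflation):
    return U_STAR - B * (inflation - expected_inflation) + random.gauss(0, 0.2)

# Historical regime: inflation wanders, expectations stay anchored at 2%.
pis = [random.uniform(0, 4) for _ in range(200)]
history = [(pi, unemployment(pi, expected_inflation=2.0)) for pi in pis]

# The planner fits u ~ a + slope * pi and sees a stable-looking trade-off.
n = len(history)
mean_pi = sum(pi for pi, _ in history) / n
mean_u = sum(u for _, u in history) / n
slope = (sum((pi - mean_pi) * (u - mean_u) for pi, u in history)
         / sum((pi - mean_pi) ** 2 for pi, _ in history))
print(f"estimated trade-off: {slope:+.2f} pts of unemployment per pt of inflation")

# Control regime: the planner targets 8% inflation to "buy" low unemployment,
# but agents now anticipate the policy, so the surprise (and the benefit) vanishes.
print(f"unemployment at anticipated 8% inflation: "
      f"{unemployment(8.0, expected_inflation=8.0):.2f}")  # back near U_STAR
```

The regression is perfectly sound as a description of the old regime; what fails is the assumption that the regularity survives being used as a lever.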
Eastern European “Shock Therapy”: The failures of “modernist” urban and economic planning of the types mentioned above precipitated fundamental changes in the ideologies of technocrats. Economic planning elites increasingly embraced a capitalist-oriented “neoliberal” ideology of deregulation, privatization and floating exchange rates. Yet partly because this ideology had itself been central to the critique of previous technocracy, it considered itself immune to technocratic failures: it would allow “free markets” to “take their course” and avoid top-down planning. As it turned out, historical experience with so rapid an introduction of capitalism was virtually non-existent, especially in the formerly Communist countries where this doctrine of “shock therapy” was most vigorously applied. Engineering such a transition therefore involved countless choices among techniques of privatization and accompanying auctions, regulations of the markets that resulted, collective bargaining structures, etc. Most of these choices were ultimately made by a tiny group of technocrats from Harvard with minimal democratic consultation or participation from the Polish and Russian publics on whom their designs were tested. The resulting large-scale profits for, and monopolization by, a group of oligarchs, many connected to the designers themselves, are widely blamed for the collapse of Eastern European economies, the decline of democracy in the region and the eventual ouster of one of the technocrats as President of Harvard University.
These are but a few examples of the historical failures of technocracy. It is important to recognize the fundamental issue at work here, illustrated most sharply by the last example. The problem is not usually or fundamentally that any given technocracy leaves out, or cannot be made to express, some particular critical insight that emerges from experience or “from below”. These days Jane Jacobs is at the core of the education of any urbanist, and economic technocracy has moved well past extreme neoliberalism to detailed randomized controlled trials, structural economic models and market design. The hope is that these new forms of technocracy have solved the limitations of those they replaced and will now, based on some mixture of attention to sociological detail, “rigorous research design” and an “engineering approach to market design”, avoid the mistakes of the past.
In some sense this is true: the precise failure modes of the past will not emerge. A common mistake of critics of current technocracy is to accuse it of precisely the same mistakes of technocracy of the past.
Yet there is a deeper failing that is simply unavoidable, probably for all time, but at least until we identify formal systems far more capable of capturing the full richness and human depth of the problems they try to cover. Keynesian planning and neoliberal privatization drives are superficially quite opposite tendencies. Yet deeper down they share the view that a thin formalism, based on aggregate statistics like inflation, GDP growth, output and interest rates as defined in the theory, is enough to process the wide range of social feedback necessary for sensible political and economic decision-making. While technocrats communicate amongst each other in the rich language of papers, seminar debates, etc., they want input from the broader public to fit into the formalism they have designed for that input, whether it be market demand, votes, survey responses, etc. Both camps believe the messy work of politics, social engagement, persuasion, democratic conversation, etc. is largely a distraction from, or a necessary evil in achieving, sensible economic planning, not a necessary check on and criterion for evaluating the success of technocratic plans. As such, when weaknesses in their formalism become prominent, they inevitably lead to large failures relative to the outcomes a more connected and open process would allow, leaving out both the information and the interests of those outside the class that can speak the formal language.
Third, these are precisely the kinds of problems we are facing with present technocracy. Current attempts to get beyond the limits of previous technocracy have solved the narrow problems of past paradigms but have failed to solve the fundamental problems of technocracy itself, at dramatic social cost.
A few examples will suffice:
- Technocratic management of the financial system was largely blind to the growing imbalances that were clear on the ground around the United States in the lead-up to the 2008 financial crisis. Capital regulators and economic experts were almost entirely isolated from these ground-level imbalances (viz. overstretched borrowers and bankers taking large risks), as they came from very different social classes than those most exposed to them; this helped lay the groundwork for the greatest global financial catastrophe since the Great Depression. At the same time, they ran in close social and intellectual circles with the very financial elites whose exploitation of these poorly designed controls helped precipitate the crisis.
- Antitrust policy, increasingly focused on elaborating a specific class of economic models, has largely ignored broad trends in the growth of market power, as well as categories of market power that had clear economic logic but did not fit the standard models, such as the power of institutional investors, the power of employers, and the dynamics of high-tech mergers. As a result, despite ever-increasing expertise, evidence increasingly suggests that market power is near an all-time high in US history. This rise of market power, largely unaddressed by existing antitrust institutions, appears to be an important component of rising inequality, wage stagnation and the growth of illiberal populism of the left and right.
- Experimental and adaptive “reinforcement learning” systems are increasingly used throughout business and policy to optimize quantitative metrics ranging from advertising engagement to health outcomes. For tractability, however, these experiments are usually conducted in carefully controlled environments with large numbers of observable units, making it difficult to judge the system-wide effects of an intervention; they also usually last only a short period, requiring the metrics used to be measurable over that horizon and thus quite distant from the eventual outcomes of interest. Heavily optimizing these metrics without reference to the broader social impact they fail to capture, but which is observed rapidly by those on the ground, has led to quite disastrous outcomes, from Facebook’s inciting of a genocide in Myanmar by maximizing engagement metrics to a series of failed development interventions that had succeeded in narrower trials. (A minimal sketch of this proxy-metric failure appears just after this list.)
- Market designers have, over the last 30 years, designed auctions, school choice mechanisms, medical matching procedures and other social institutions using tools like auction and matching theory, adapted to a variety of specific institutional settings by economic consultants. While the principles they use have an appearance of objectivity and fairness, they play out against the contexts of societies wildly different from those described in the models. Matching theory uses principles of justice intended to apply to an entire society as a template for designing the operation of a particular matching mechanism within, for example, a given school district, thereby in practice primarily shutting down crucial debates about desegregation, busing, taxes and other actions needed to achieve educational fairness, all with a semblance of formal truth. Auction theory, based on static models without product market competition, with absolute private property rights and assuming no coordination of behavior across bidders, is used to design auctions governing the incredibly dynamic world of spectrum allocation, creating holdout problems, reducing competition and producing huge payouts for those able to coordinate to game the auctions, often themselves market design experts friendly with the designers. The complexities that arise in the process serve to make such mass-scale privatizations, often primarily to the benefit of these connected players and at the expense of the taxpayer, appear the “objectively” correct and politically unimpeachable solution.
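As promised above, here is a minimal, hypothetical sketch (every number invented for illustration) of the proxy-metric failure: a system that greedily maximizes a short-horizon engagement measure drifts toward content that scores well on the proxy while destroying the welfare the proxy was meant to track.

```python
import random

# Toy model (all numbers invented): each item has a hidden "quality"
# that raises long-run welfare and a hidden "outrage" factor that
# grabs attention but lowers welfare. The experimenter can only
# measure short-horizon engagement.

def make_item():
    quality = random.uniform(0, 1)
    outrage = random.uniform(0, 1)
    engagement = 0.2 * quality + outrage  # measurable proxy
    welfare = quality - outrage           # true objective, observed only later
    return engagement, welfare

items = [make_item() for _ in range(10_000)]

# Baseline: an unoptimized feed serves items at random.
baseline = sum(w for _, w in random.sample(items, 1_000)) / 1_000

# "Optimized" feed: serve only the top 10% of items by measured engagement.
top = sorted(items, key=lambda item: item[0], reverse=True)[:1_000]
optimized = sum(w for _, w in top) / 1_000

print(f"average welfare, random feed:     {baseline:+.3f}")   # typically near 0
print(f"average welfare, engagement-max:  {optimized:+.3f}")  # typically well below 0
```

The point is not the particular numbers but the structure: the divergence is invisible to anyone who can observe only the proxy, and visible immediately to those on the ground who experience the welfare term directly.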
In short, we are very far from discovering formalisms capable of capturing and quantifying most of the critical inputs to policy and systems design for a decent society. So much of what we still need lives in, for example, low-income housing developments, the lived experiences of workers facing powerful corporations, the NGOs on the ground in Myanmar, and community educational justice groups. To the extent that technocracy is a practice of insulating policy makers and system designers from the need to justify themselves in the language of these highly informative channels, to explain their designs clearly to them and to maintain open lines of communication with them, it leads to large-scale failures, corruption, crises and justified political backlash and outrage.
Fourth, none of this should suggest that technical insights are of no value. Anyone who has followed my work knows the (partially) technical provenance of many of the social designs I advocate. I am also deeply impressed by work in technical fields like data science and humane design that seeks to develop new approaches to data analysis and new metrics of design success. In fact, I believe that the only chance we have of saving our current political economy from the oppression of capitalism and the nation-state runs through substantial advances of a technical sort that can provide us new systems of democratic input, value accounting and social imagination and experimentation. The work of designers on these projects is among the activities I most admire today.
Yet a critical element, one necessary for new technical design to play a productive social role, is missing from the forms of technocracy above. Designers must explicitly recognize, and design for, the fact that there is critical information necessary to make their designs succeed that a) lies in the minds of citizens outside the technocratic/designer class, b) will not be translated into the language of this class soon enough to avoid disastrous outcomes and c) does not fit into the thin formalism that designers allow for societal input.
The case for these points is overwhelming. After all, technocrats do not insist on communicating amongst themselves using the thin formalisms through which they solicit feedback from the public; they talk to each other in conversations, papers, seminars, etc., not through votes, likes, webpage views and market demands. They thus implicitly acknowledge the limits of the formal feedback mechanisms they design. Furthermore, it would require extreme blindness for technocrats to believe there are no systematic biases that filter relevant perspectives out of the technocratic class, or that the perspectives this class comes to take on through its training do not exclude relevant ones. As noted above, historical experience overwhelmingly shows this occurring.
So, if we take this perspective seriously, how should technical experts seek to design? First, they must constantly seek to create usable systems that account formally for critical pieces of information omitted from previous formal input procedures, not simply optimize further within the models of previous designs. Let us call this goal “fidelity”, as it tries to make the formal system as true to the world as possible, in contrast with “optimality”. Yet, at the same time, they must recognize that whatever they design will fail to capture critical elements of the world. For these failures to be corrected, the designed system must be comprehensible to those outside the formal community, so that they can incorporate the unformalized information through critique, reuse, recombination and broader conversation in informal language. Let us call this goal “legibility”.
There will in general be a trade-off between fidelity and legibility, just as both will have to be traded off against optimality. Systems that are true to the world tend to become complicated and thus illegible. While the essence of technocracy is to optimize, technocrats are often comfortable with increasing fidelity; legibility, however, they mostly or completely disregard. At the opposite extreme, existing systems that are highly familiar to the public have over time become legible as understanding of them has spread, but they do great violence to the world they are meant to describe and are very far from optimal; hence the widespread discontent with the failures of capitalism and the nation-state.
The right, intermediate path is “design in a democratic spirit”, which works to find designs for social feedback, optimization routines, public education and communication strategies, and clear standards for the evaluation of experts that make large improvements in fidelity (or optimality) in exchange for small and relatively quickly remediated losses of legibility. Democratic designers must thus constantly attend, on equal footing, in teams or individually, to both the technical and the communicative aspects of their work. They must view the broader non-technical public as at least as much the audience for their work as their technical colleagues. They must treat a lack of legitimacy of their designs with the relevant public as just as important as technical failures of the system. They must recognize and reward excellence in explicability and explication as much as in optimization, and view good performance on both dimensions as far more valuable than excellent performance on one paired with failure on the other. Because of the inherent tensions among optimality, fidelity and legibility, design and communication are not separable; communication cannot simply follow design. Design must aim for both.
No sophisticated technical system is ever as legible as familiar systems that have already been woven into the fabric of everyday life. Yet systems designed with an eye towards legibility are obvious when you interact with them, and contrast sharply with more sophisticated but illegible ones: early Apple v. Microsoft computers; the early internet v. contemporary platforms; real estate websites with posted prices v. combinatorial auctions for spectrum; the 99DOTS blister-pack-based system for ensuring adherence to prescription courses v. the endless, far more expensive systems of surveillance previously tried for the same purpose.
A deeply frustrating feature of designing legibly, for technically trained designers, is that legibility itself defies technical formalization. In fact, it is precisely the residual when everything that can be formalized has been stripped away. There have always been, and will likely always be, attempts to fully formalize legibility. Yet while these bring some additional features into the scope of the model, they never eliminate the unformalized element. For example, a recent proposal for designing economic mechanisms that are “obvious” to use has interesting merits, but would classify chess as obvious to play and any normal human conversation as extremely complicated; this is obviously not how humans experience them.
Yet the lack of technical formalizability does not imply that there is nothing systematic about achieving legibility. In fact, all the examples I described above share common characteristics. They all involve significant sacrifices of functionality and optimality (at least within contemporary formal models) relative to their more veracious but less legible contrasts. They all emerge from cultures where designers and engineers are required to work closely with creative, humanistic, and other less formalistically trained colleagues, and to be accountable for how their joint designs are perceived by a diverse range of end users and participants. These practices systematically reduce the domination and exclusive importance of technical and formal design practices and increase the weight on democratic conversation, public accountability and multidisciplinary collaboration. These are the systematic practices that allow for design in a democratic spirit.
Finally, the need for design in a democratic spirit is one that has become increasingly apparent to me with time. Many elements of my background are highly technocratic (I have a PhD in economics, I grew up in Silicon Valley as a child of technology executives, I worked for many years on antitrust policy very much in the paradigm I critiqued above, etc.). Most of my writing, most recently and notably my book Radical Markets with Eric Posner that helped launch the RadicalxChange movement, is in a quite technocratic mode. As a result, I have learned and gained much from, and RadicalxChange has attracted many leaders from, some of the more extreme technocratic communities. Thus, even more than with other ideologies I have flirted with (such as statism and capitalism), I consider myself an internal critic.
My goal here is to highlight how technocrats will fail in many ways, on their own terms and at their own goals, if they do not democratize the spirit of their work roughly as discussed above. To this end, I want to close by highlighting the dangers facing some of the technocratic communities in which I frequently participate and with which some would likely confuse RadicalxChange. These communities have great promise, and I have learned much from them; at the same time, I often find myself deeply worried by their technocratic operating style. In many ways it is that concern which led me to write this piece.
Technocracy involving elements of economic expertise, computational technology, analytic philosophy and behavioral psychology pervades global elites. However, there is a subcommunity of this elite which takes these perspectives to an extreme, is especially insular and has turned these views into a life philosophy: the self-styled “rationalist movement”, formed originally around a blog by Robin Hanson and Eliezer Yudkowsky. The ways in which technocratic practices waste valuable technical expertise and insight are most clearly manifest in various areas adjacent to this movement. I conclude by discussing four examples:
Mechanism Design: The field of mechanism design was founded in many ways on the principles laid out above: that relevant information is held diffusely and that social policies must be designed to incorporate this information. However, analysis in mechanism design overwhelmingly focuses on the exact optimality of designs and assumes models that allow for only very specific forms of diffuse information, even when other forms are known to exist in the literature. For example, the canonical model of auctions that is the basis of the common prescription of non-discriminatory ascending-price or second-price auctions assumes that the only private information is each bidder’s perfectly known personal valuation of the items. This ignores, among other things, the fact that bidders may not know their values ahead of time and may need to work to learn them (and may have private information about this process), and the fact that bidders may have preferences over others’ outcomes (as they compete or cooperate with others, with these patterns themselves possibly private information). In fact, these features are known in the literature to cause disastrous problems for the standard designs.
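To make the canonical model concrete, here is a minimal sketch in Python (a toy illustration, not any fielded design) of the second-price auction just described, alongside one of the flagged failure modes: the truthfulness guarantee holds under perfectly known private values and no preferences over rivals’ outcomes, and a simple collusive deviation already escapes it.

```python
def second_price_auction(bids):
    """Highest bidder wins and pays the second-highest bid."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return ranked[0], bids[ranked[1]]  # (winner index, price paid)

# Under the textbook assumptions (each bidder privately knows her own value
# and cares only about her own outcome), bidding one's true value is optimal.
values = [10.0, 7.0, 4.0]
winner, price = second_price_auction(values)
print(f"truthful bidding: bidder {winner} wins, pays {price}")   # pays 7.0

# But the guarantee is silent about bidders who coordinate. If bidders 0
# and 1 collude and bidder 1 shades her bid to zero, revenue collapses.
winner, price = second_price_auction([10.0, 0.0, 4.0])
print(f"collusive bidding: bidder {winner} wins, pays {price}")  # pays 4.0
```

Nothing inside the mechanism registers that the second run is collusive; detecting it requires exactly the kind of information that lives outside the formalism.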
More importantly, beyond the issues already identified, these highly optimized designs are extremely complicated and opaque, to the point where this complexity is almost celebrated by their designers. In fact, many in the field believe this complexity and inaccessibility to outsiders is a defining feature of what constitutes “real work in the field”. The shroud it throws over mechanism designs makes it essentially impossible for those outside the field to raise potential issues with these designs based on information they have that the designers do not; the outcomes of the recent spectrum auctions and school choice designs described above are leading examples.
Coupled with the field’s extreme focus on optimality over even fidelity, these features make me worry that, overall, mechanism design and its applications have harmed rather than improved outcomes. They have also allowed tremendous scope for corruption, in which experts who understand the designs, and are friends with the designers, are able to take advantage of them for profit as consultants, and cannot easily be criticized by those in the field given its close social relations and extremely hierarchical nature. All this even though, as my other work clearly shows, I think it is a field with great potential to contribute to a more just society. It is perhaps the leading example of the tremendous opportunity and potential that a technocratic attitude wastes or turns to harm.
Effective Altruism: The effective altruism movement, which largely grew directly out of the rationalist movement, seeks to maximize the efficacy with which charitable donations are directed, using standard rationalist methods. It is a tight-knit community that strongly privileges rationalist approaches over all other forms of knowledge-making (such as those of the humanities, continental philosophy or humanistic social sciences) and tends to dismiss input not formulated in rationalist terms. The community also has a strong and explicitly stated view that its activities uniquely contribute to the achievement of “the good”: of the top five most productive careers recommended by a leading community organization, two suggest being a researcher or support staff within the movement, and two others recommend working on the AI alignment problem (see the next point). Until recently, much of the analysis and funding emerging from the community has pointed towards a focus on extremely unlikely but potentially catastrophic risks, such as alien, asteroid or biological catastrophes.
Yet, interestingly, the conclusions of the analysis emerging from the community increasingly undermine these foci and the approach of the community more broadly. In particular, recent research in the community suggests that the greatest and most probable risks to be avoided are anthropogenic (climate change, nuclear war, the rise of a totalitarian regime, other environmental catastrophes etc.). Leaders in the community have in turn suggested that the most effective ways to avoid these are likely finding solutions to problems of political organization and legitimacy of social systems to help reduce the likelihood of conflict or inability to cooperate in the provision of critical global public goods.
Ironically, those “political” goals, social reforms and public confidence-building are precisely the sorts of activities that effective altruists have long viewed as “non-rigorous” and ineffectual. Worse, the extremely elitist, segregated and condescending approach to philanthropy encouraged by the community has created widespread public backlash. That backlash has closely tracked a broader populist reaction against technologically-driven globalist elites, and increasingly seems to be one of the largest risk factors for precisely the sorts of catastrophes effective altruists now see as the largest threats to their long-term view of universal welfare. In short, it increasingly seems that, after almost a decade of existence, a primary conclusion of the movement’s analysis may be that the movement itself is a significant part of the problem it is identifying. Earlier, more open dialog with a broader range of approaches and social classes might have illuminated this more quickly and avoided the associated waste of talent and resources, two things the movement greatly prizes.
Artificial Intelligence Alignment: Perhaps the greatest passion of the rationalist movement, and of Eliezer Yudkowsky, its most celebrated figure, is work on creating “good” artificial intelligence, usually called the “AI alignment project” (AIAP). The basic idea of the AIAP is just an extension of the problems of technocracy identified above. AIs, including and in many ways especially “good” AIs, are formal systems created by narrow social classes according to their conception of, and ability to formally specify, the good. Because AIs may become quite powerful, there is a great danger that they will “run out of control” in pursuit of a goal that does not fully specify what the human designer aims at. The classic example is a paper clip manufacturing machine that turns the whole world into paperclips. The AIAP aims to create very powerful AIs that will nonetheless be good (in some sense).
While the whole framing of the AIAP is obviously informed by many of the concerns above, its formulation and institutionalization seem to largely miss the fundamental point. If one believes that there will always be critical information necessary to the good which will not be formalizable in the language of a narrow community that can “solve” the AIAP or design a “friendly” AI, then the AIAP is inherently insoluble on the terms in which it has been posed.
It is a bit like posing the “genius dictator alignment problem”: how can we ensure that, if there is a brilliant dictator, he will serve the interests of the broader public? This question presupposes that we must accept such a dictator or that such a dictatorship would be desirable. Most supporters of democratic or otherwise decentralized societies would reject this premise; they would instead argue that dictatorship is inherently to be avoided and that the primary gauge of whether a dictator is friendly is the extent to which she sheds most of her powers to allow for decentralized intelligence. Similarly, if we want AIs that can play a productive role in society, our goal should not be exclusively or even primarily to align them with the goals of their creators or of the narrow rationalist community interested in the AIAP. Instead it should be to create a set of social institutions that ensures that no narrow oligarchy, and no small set of intelligences such as a “friendly” AI, can hold extremely disproportionate power. The institutions likely to achieve this are precisely the same sorts of institutions necessary to constrain extreme capitalist or state power.
A primary goal of AI design should be not just alignment but legibility, ensuring that the humans interacting with an AI know its goals and failure modes, allowing critique, reuse, constraint, etc. Such a focus, while largely alien to research on AI and on the AIAP, is a major focus in much of computer science, specifically the field of Human-Computer Interaction. Even some AI researchers have taken an interest in this, aiming to design AIs and robots that are less efficient but more legible. The general tendency of AI researchers to ignore, dismiss, view as “non-rigorous” or “non-rational”, or otherwise fail to focus on the distributed nature of the most valuable cognition is a severe limit on their ability to contribute to their own goal of designing technologies beneficial to the societies they aim to serve. It makes me deeply concerned that they will end up, unwittingly, being primary vectors for creating unfriendly AIs.
None of this requires one to take a “human exceptionalist” viewpoint or to view AIs as inherently non-human or inferior to humans. Just as we would not want the child or dynasty of any family to become dictator over the world, so too we should not want any AI to be. And just as non-anti-social human beings do a tremendous amount of work to make themselves legible to and cooperative with other human beings (those who don’t usually end up in jail), so too should AIs be legible.
Human Systems: Perhaps the most surprising outgrowth of the rationalist movement has been a significant part of the growing interest in “humane design” associated in the public mind with the Center for Humane Technology. I have deep sympathy for this work, whose precise trajectory is still very much in progress. However, one concerning potential future manifests itself in the work of Joe Edelman and the Human Systems (HS) group he founded.
Most of those familiar with their work are likely to be surprised, given my discussion above, that I see HS falling into a set of problems similar to technocracy’s. After all, HS is largely based around a critique of existing design approaches that shares much with my arguments above; in fact, much of the above analysis was partly inspired by their work. Yet the sympathy I have with their analysis is similar to the sympathy I have with the forms of technocracy critiqued above: all are based on moving past some flawed aspect of past technocracy, such as inattention to dispersed information, lack of focus on social good, or lack of attention to the unexpected destructive implications of powerful formal systems. HS focuses on the psychological-ethical foundations of technocratic standards and on trying to develop metrics that elicit and track “values”, as opposed to preferences, goals, pleasure, etc.
It remains to be seen whether it is possible to design systems that persuasively achieve this goal; the breadth of the ambition, and the lack of established disciplinary academic communities with formal training pursuing it, seem thus far to have hindered such designs, and I generally find myself skeptical of the merit, even on their own terms, of designs emerging from that community. Yet even if successful technocratic practices do emerge from this community, it shares with the other forms of technocracy I discuss above a fundamental lack of concern with external legibility and accountability. It is overwhelmingly focused on establishing tight connections among a community of designers who imagine that, by eliciting values using some procedure, they will identify the “true” maximands for design, which systems should then optimize. In fact, the range of obscure vocabulary and technical perspectives they aim to employ in “getting to the root” of what people truly value makes many other forms of technocracy seem legible by comparison, and the depth they seek to reach in getting at profound human truths makes what one must learn and accept to grasp their designs close to the tenets of a religion.
HS is thus certainly not headed for precisely the same failures as other technocracies. However, it does seem quite plausibly on a trajectory to create a sort of “priestly” class, above and detached from the rest of the public, interested in seeing into the souls of the broader public and paternalistically shaping their lives to serve these spiritual aims. Whether technocracy or theocracy is the better description of such a system is for my readers to judge, but in any case it seems likely (if it does not change its sociological orientation towards broader public legibility, engagement and respect for diverse communication styles) to suffer precisely the problems outlined above and shared between technocracy and theocracy. By contrast, if HS comes to see itself as contributing concrete technologies communicable and legible in a range of systems of meaning-making, it has, like the other traditions above, a great deal to contribute.
I could easily add to this list much of my own work in Radical Markets, which manifested many of the problematic technocratic attitudes I critique above. In fact, I plan soon to release a critique of the book partially along these lines. It is only through broad public conversations, and through beginning to see the consequences of some of the approaches I was taking, that I have come to fully appreciate the severe limits of technocracy. In that case, as in all those above, there is a severe danger of great technical minds being wasted on an arrogant pursuit of remaking the world in their image, rather than contributing to a broader conversation. If they do so, they will undermine the very goals they seek and be rightly discredited and attacked. I hope they will instead, as I have at this very late date, come to see the value of pursuing design in a democratic spirit, with all the challenges it entails. Personally, it has been the most rewarding experience of my life, as it has given me the chance to learn more, and more quickly, than I ever have in the past from a broad range of brilliant people, across walks of life and ways of thought, whom I would once have dismissed.
I would like to extend a thank you to Zoë Hitzig, Vitalik Buterin, Matt Prewitt, Jennifer Lyn Morone, Alisha Holland, and Danielle Allen for their invaluable insights that have helped shape and produce this lengthy critique.