Daniel Mügge is Professor of Political Arithmetic at the political science department of the UvA. (If you're wondering what political arithmetic is, you can find my own take on it under the Political Arithmetic tab below and, in proper paper form, here.)
Daniel's current research has two focal points - the European governance of artificial intelligence (AI), and the political economy of macroeconomic indicators. The first line of work investigates how the regulation of AI is organized. Specifically, it examines "AI diplomacy": the EU's external relations in the AI field, both with individual countries such as China and the USA and through its role in multilateral efforts to regulate AI.
Daniel's inaugural lecture summarizes the essence of the second research agenda, arguing for a Numeracy 2.0. This research is supported by the Netherlands Organisation for Scientific Research (NWO, Vidi project 016.145.395) and the European Research Council (ERC Starting grant 637683). You can find the work of the whole research team on the FickleFormulas project website.
In 2009, Daniel Mügge's dissertation on European financial markets was honoured with the ECPR Jean Blondel Prize as the best European political science dissertation of the year. He spent the first half of 2012 as a visiting scholar at the Center for European Studies at Harvard University and returned to spend the entire 2014/15 academic year there as well.
Until March 2016 Daniel was lead editor of the Review of International Political Economy. At present, he is also one of the co-directors of the program group Political Economy and Transnational Governance (PETGOV) at the UvA political science department. In 2021, he is the local organizer of the annual convention of the Society for the Advancement of Socio-Economics (SASE).
During the 2020/21 academic year, Daniel will be at the Otto-Suhr-Institut of the Freie Universität Berlin (his alma mater) as an Alexander von Humboldt Fellow.
The 17th century virtuoso William Petty was about as colorful as they come. Born to a family of modest means, he absorbed what knowledge his age had to offer with insatiable verve and improbable speed. In spite of his forays into musicology, medicine, shipbuilding and natural science experiments, he achieved lasting fame as the inventor of what he called Political Arithmetick.
A practical and ambitious man, Petty sought to rub shoulders with the rich and powerful early on. His hour came when the English rulers once more faced trouble in their Irish colony. Effective domination, Petty realized, required a colony that was charted and legible. It required a population that was counted and categorized, just as the land itself needed to be gauged and plotted.
Petty got his way, and in the 1650s he was charged with the infamous Down Survey: a census cum cartography of Ireland to bolster England’s grip. However imperfectly, Petty had made the island amenable to bureaucratic rule.
In the years that followed, Petty synthesized his lessons under the label Political Arithmetick, what we today would call evidence-based policy. Effective rule and governance - the line between the two was never quite clear - should shun superstitions, abstract musings, religious fervor and gut feelings and instead be approached with the same rational and empirical mindset as budding modern science.
Consonant with mainstream thinking at the time, Petty saw mostly the positive force of such rule. He had studied with Thomas Hobbes in France, who extolled the necessity of a Leviathan – a ruler perched on top of a society to enforce order, without which people could not live in peace. At worst, top-down rule was a necessary evil for stable societies, and Political Arithmetick could make it more effective, rational and less bloody. Much like Michel Foucault three centuries later, Petty saw both the restraining and productive dimensions of the power that political arithmetic would bestow upon its practitioners.
Petty saw Political Arithmetick as a transformative project – science not only with a mission but with a concrete goal. The point of putting societies under his scientific magnifying glass was to mold them. The social and economic statistics, of which many see him as the father, served a political project, rather than just being collected for their own sake or the idle entertainment of scientists.
This genealogy is revealing as we uncover the political fundaments of present-day macroeconomic indicators. We readily recognize the controversial core of statistical undertakings - from the Down Survey and colonial censuses in past centuries, to Soviet statistics used to squeeze citizens, to development statistics meant to civilize supposedly backward countries. Contemporary macroeconomic indicators are not overtly nefarious. But they too have political ancestors, intentions, and offspring.
As a label, Political Arithmetick just barely survived its inventor. In the 18th century, advisors to kings wielded it as a catch-all label for (mostly quantitative) data about populations, trade, economic stocks, and so on. But Political Arithmetick met a formidable critic in Adam Smith, who harbored many of the skepticisms about quantitative social science that we would recognize today: its pretense to objectivity could hide political motives; quantitative data could be patchier and less trustworthy than neat spreadsheets suggested. Smith proselytized instead for political economy, a much more deductive approach to studying economic life.
The empiricist, quantitative study of economic life survived, but mainly under the German label Statistik. In England, too, Political Arithmetick eventually blossomed under the novel flag of statistics. In the shadow of intellectual battles between grand thinkers such as Ricardo, Malthus, Marx and Mill, government statistics thrived in the 19th century, in France, Germany, the UK, and the USA in particular. From there, they conquered the rest of the world.
The headache that the Irish question was to English rulers in the 17th century, the social question was to their successors two centuries later. The swelling masses of urban workers generated social problems and political tensions. Quelling these required effective state intervention and the instruments to underpin it. Political Arithmetick as a project got a second lease on life.
First off, new forms of labor - compressed into factories - spawned employment statistics: governments needed to get a handle on an increasingly pressing 'social question'. Systematic surveys to gauge changes in these workers' cost of living - inflation indicators in their infancy - followed later that century, for similar reasons. The First World War entrenched cost-of-living measures even more deeply: the war wrecked 19th-century economic certainties, governments were more dependent on loyal subjects than ever, communism threatened to lure away disaffected workers, and so governments felt pressured to monitor and measure workers' cost of living systematically.
Estimates of national income, also pioneered by Petty in the 17th century, grew more and more sophisticated, and by the end of the Great War, almost 20 countries had produced such figures.
The decisive boost for statistics in macroeconomic policy arrived in the 1930s and 1940s. Disillusion with previous laissez-faire policy, popular Keynesian ideas about macroeconomic steering, and the needs of wartime economic planning - this time for World War II - all led governments to develop yet new policy instruments. Among these, statistics were indispensable to make “the economy” intelligible, legible and thus manageable.
After all, economic and political stability would go hand in hand. Simon Kuznets introduced a measure of American “national income” in 1934. Together with the Britons Colin Clark and Richard Stone, he created the fundaments of the now ubiquitous Gross Domestic Product (GDP). It perfectly fit the government-heavy economic reconstruction after the war. By the 1950s, macroeconomic indicators had become part and parcel of economic policy and politics. Political Arithmetick as a label had all but disappeared, but as a political project, it had become a success that William Petty would have found hard to imagine.
Why study political arithmetic today? (For ease of reading, I'll stick to the modern spelling from here on.) Political arithmetic had two distinguishing features: first, it was an empiricist (which in practice meant quantitative) approach to understanding social life. It eschewed abstract philosophy, whether normative or metaphysical. It also sharply diverged from the kind of deductive political economy that a David Ricardo would later propagate - an approach that started not with empirical observation but with first principles, and took things from there.
Second, political arithmetic was attached to political projects. Petty's emphasis on social transformation through political arithmetic foreshadows Marx's complaint that philosophers had only interpreted the world in various ways, while the point was to change it. The parallel is ironic, because Marx saw in Petty the ancestor of bourgeois political economy - the intellectual edifice against which he, Marx, railed.
We recognize both features - quantification and political motivation - in much research today; development economics may be the most extreme example. And we would equally recognize the criticisms that were hurled at political arithmetic in Petty's times and at quantitative social science in the centuries to follow: that it was reductionist, that it thrived off mock-objectivity, that it obscured uncertainty and shaky data sources, that it was prone to confuse correlation with causation, etc.
We also recognize the defenses that the empiricists erected: that theorizing without some reality check was pointless, if not dangerous; that scientists had a responsibility to benefit society and hence to aid public policy; and so on. The debate about the merits and demerits of different approaches to generating knowledge is still with us today.
In the meantime, real-world policy practice has moved on. It has embraced Petty's de facto plea for evidence-based policy - but with a twist. When contemporary policy is to be underpinned by empirical data, the aim is not only to make that policy more effective, but also to give it an objective basis. Hard data, so the idea goes, can de-politicize policy. It can help draw a clear line between political choices - whether by prince or parliament - and policy applications.
This division does not hold. However far you zoom in, you can rarely separate the measurement of social phenomena from our judgments and abstract ideas about them. How could you measure school performance, drug abuse, poverty, development, gender equality, standards of living and just about any other dimension of social life without making big definitional choices first? This problem becomes only more stubborn when we concentrate on economic variables - inflation, growth, unemployment - which might seem to have a more objective basis. There, too, the devil is in the detail. What counts as production? What is a quality improvement rather than a price increase? How do we distinguish between idleness as a choice and involuntary joblessness?
It is here that modern day evidence-based policy has forgotten - or consciously ignores - its roots in political arithmetic, an openly normative political project. Indeed, in contemporary democracies, we find a fundamental tension: as citizens we expect bureaucracies to be politically neutral and to serve the political aims of whomever we install, through elections, as political principals. So when branches of government publish evidence, either to assess past policies or to devise new ones, we demand that they not be colored already.
But they inevitably are colored, consciously or otherwise. This applies most immediately to quantitative indicators, and especially to those that are so ubiquitous that we take them for granted: inflation gauges, GDP, unemployment figures, trade statistics, government debt levels. After all, measuring abstract economic quantities is never straightforward, and the choice of one formula over another carries implications that are rarely understood beyond a narrow circle of experts, let alone communicated to the wider public. So when people discover these inevitable biases, they are incensed: here are hidden choices that have been kept out of the political limelight and the democratic process!
These contradictory expectations inevitably set us up for disappointment: we demand objectivity where that is ultimately impossible, and lament a lack of accountability and public ownership when these biases are revealed. You find this schizophrenic attitude everywhere. To give just one example, parents complain when the assessment of their child's school performance is hostage to the whims and sympathies of his or her teacher. Objective measures seem the answer. But as soon as (inevitably deficient) measures are introduced, those same parents complain that the yardsticks fail to do justice to their child's abilities.
In essence, evidence-based policy is political arithmetic in self-denial: policy instruments and their application are infused with political values, and these political charges belie the veneer of objectivity on which evidence-based policy prides itself.
Political arithmetic has left an enormous legacy, but it is rarely appreciated as such. What do we gain by resuscitating the label?
In his 1751 encyclopedia, Diderot defined political arithmetic as that kind of arithmetic "the operations of which have as a goal research that is useful to the art of governing the people". That mindset still informs many present-day social and economic statistics. They were mostly launched to address pressing social and economic problems, not to quench social scientists' thirst for data. An appreciation for political arithmetic challenges us to uncover their political underpinnings - both the downstream consequences of measuring a concept one way rather than another, and the upstream political roots of the choice for one formula over another.
There is no presupposition here that either the consequences of measurement or the intentions behind it are necessarily good or malign. Neither do those influences have to be conscious - indeed, in a world as complex as ours, they often will not be. It seems naive to hope that governments today could do their jobs, however imperfectly, without statistics. We should study modern-day political arithmetic not in an attempt to get rid of it, but to reflect on our own political condition, and the potential and limits of basing policy on scientific fundaments.
Political arithmetic also sharpens our view of this particular mode of governing. Techno-rationality applied to social life is not new, as scholars such as Michel Foucault, Timothy Mitchell and James Scott have shown. But the long-term view also reminds us of the potential for backlash. Evidence-based policy is already under fire: rule by experts is decried either because it is detached from "the real people", or because it is beholden to hidden forces or ideas, or simply because it suffers from tunnel vision. The rise of fact-free politics is the mirror image of relentlessly intensifying calls for evidence-based policy (with standards of objectivity that bureaucrats must inevitably violate).
Finally, the applied roots of modern social science itself should give its modern-day practitioners food for thought. It was obvious to Petty that his endeavors always served some political or social goal. These kinds of goals, and the predispositions derived from them, are often still baked into the social and economic statistics we use for our analyses. Debates about appropriate methods in the social sciences stretch far back into the past. But certainly in economics, they are increasingly forgotten and exorcised from undergraduate curricula. Rediscovering the gnarled roots of present-day social-scientific research will help its practitioners appreciate its limits and inevitable political baggage.
Ted McCormick's biography of Petty offers a revealing view on the man and his times; Patrick Carroll's Science, Culture and Modern State Formation puts political arithmetic in the context of the scientific revolution and its relation to state building and government rule. Alain Desrosières has written the classic study of the rise of statistical reasoning, and Ingrid Rima has chronicled the role of quantification in political economy and economics more generally. The history of GDP - the most ubiquitous of all macroeconomic indicators - has been told from a number of different angles. The most nuts-and-bolts versions of that story are probably Paul Studenski's The Income of Nations and Philipp Lepenies' The Power of a Single Number.