Defining dynamic social science microsimulation
What is microsimulation?
A useful way of defining simulation in the social sciences is to think of it as the purposeful use of a model. Going back one step, then, social science simulation is both a modeling exercise and the exercise of ‘running’, ‘playing’, or ‘experimenting’ with the model. The range of purposes is as broad as the reasons for doing research: solving problems, finding explanations, building theory, predicting the future, and raising consciousness. From a more practical point of view, we can also add training to this list. Pilots are trained on flight simulators; why should policy makers not be trained to improve their awareness through computer simulations of policy effects? And why should voters not have tools to study the effects of proposed policy measures? Social science simulation enables such visions.
Dynamic simulation includes time. How did we get where we are now, what changes can we expect in the future, what drives those changes, and how can we influence these processes? Most informed statements about the future are based on dynamic simulations of some kind. Some require complex computer simulations; others are the result of thought experiments. The exploration of future scenarios, and of how the future is shaped by our individual actions, is a core achievement of the human brain, closely linked to consciousness itself. Being able to predict the future state of a system improves the planning of our actions, both those influencing the outcome of the system and those affected by it. For example, weather forecasts are produced using complex computer simulations—and we have both fairly adequate forecasting models for the weather tomorrow (which we cannot influence) and much more controversial simulation models for long-term climate change (which we can influence). Dynamic simulation raises public awareness of potential future problems, be it the storm tomorrow or the effect of CO2 emissions over time. The same potential to raise awareness and improve the planning of our actions holds true in the social sciences for issues such as population aging, the concentration of wealth, or the sustainability of social security systems.
Dynamic microsimulation is a specific type of dynamic simulation. Unfortunately, microsimulation itself can be a confusing word because, despite the ‘micro’ prefix, we are still simulating a ‘macro’ system; the prefix refers to how we simulate that system. Many systems are made up of smaller-level units. Liquids consist of particles which change behaviour when heated, traffic systems are made up of cars driven on a network of roads, and societies and economies consist of people, households, firms, etc. All of these systems have one feature in common: macro-level changes result from the actions and interactions of the micro units. The idea of microsimulation is that the best way to simulate such a system is often to model and simulate the actions and interactions of its smaller-scale units and to obtain macro outcomes by aggregation.
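To make this concrete, the following minimal sketch (written in Python rather than Modgen, with invented transition probabilities) shows the pattern: every micro unit is updated by a simple stochastic rule, and the macro outcome—here an employment rate—is obtained purely by aggregating the micro states.

```python
import random

random.seed(1)

# Micro units: individuals characterized by an employment state.
population = [{"employed": random.random() < 0.9} for _ in range(100_000)]

P_JOB_LOSS = 0.05  # assumed annual probability of losing a job
P_JOB_FIND = 0.40  # assumed annual probability of finding one

for year in range(1, 11):
    for person in population:
        if person["employed"]:
            person["employed"] = random.random() >= P_JOB_LOSS
        else:
            person["employed"] = random.random() < P_JOB_FIND
    # The macro outcome is nothing but an aggregate of the micro states.
    rate = sum(p["employed"] for p in population) / len(population)
    print(f"year {year}: employment rate {rate:.3f}")
```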
Dynamic social science microsimulation can be perceived as experimenting with a virtual society of thousands, or even millions, of individuals who are created and whose life courses unfold in a computer. Depending on the purpose of the model, individuals (or ‘actors’) make education and career choices, form unions, give birth, work, pay taxes, receive benefits, get divorced, migrate, retire, receive pensions, and eventually die. Creating such a ‘scientific computer game’ involves various steps, the first being the modeling of individual behaviour. The dominant micro model types in microsimulation are statistical and econometric models. While the literature is rich in statistical microdata analysis, most research stops after the estimation of models of individual processes. With a microsimulation model, we go one step further: microsimulation adds synthesis to analysis. Accordingly, the second step of microsimulation, after the modeling of individual behaviour, is the programming of the various behavioural models so that we can run simulations of the whole system. Microsimulation can help us to understand the effect of different processes, and of changes in processes, on the total outcome. The larger the number of interdependent processes that have to be considered, the more difficult it becomes to identify and understand the contribution of individual factors to the macro outcome. Microsimulation provides a tool for studying exactly such systems.
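As a toy illustration of this ‘synthesis added to analysis’, the sketch below assumes an age-specific mortality hazard of a Gompertz-like shape—the kind of micro model that would be estimated from data in a real application—and then synthesizes complete biographies from it. All parameter values are invented for illustration.

```python
import random

def mortality_hazard(age: int) -> float:
    """Assumed annual probability of dying at a given age (illustrative only)."""
    return min(1.0, 0.0001 * 1.09 ** age)

def simulate_lifespan(rng: random.Random) -> int:
    """Run one biography: age forward until the death event fires."""
    age = 0
    while rng.random() >= mortality_hazard(age):
        age += 1
    return age

rng = random.Random(42)
lifespans = [simulate_lifespan(rng) for _ in range(50_000)]

# Synthesis: aggregate the simulated biographies into a macro statistic.
print(f"simulated life expectancy: {sum(lifespans) / len(lifespans):.1f} years")
```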
Modeling at the micro level facilitates policy simulations. Tax-benefit and social security regulations are defined at the individual or family level, which makes microsimulation a natural modeling approach and allows their simulation at any level of detail. As such rules are usually complex and depend in a nonlinear way on various characteristics, such as family composition or income (e.g. progressive taxes), microsimulation is often the only way to study the distributional impact and long-term sustainability of such systems. In policy analysis, part of the power of the microsimulation approach is already apparent in so-called static microsimulation models: models designed to study the cross-sectional effects of policy change, e.g. by identifying the immediate winners and losers of a policy reform. Dynamic microsimulation adds a whole new dimension to policy analysis, however, since it allows individuals to be followed over their entire life courses.
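The following hypothetical static-microsimulation fragment illustrates the point: two tax schedules (brackets, rates, and incomes all invented) are applied to the same microdata, and the winners and losers of the ‘reform’ are identified record by record. Because the schedules are nonlinear, this distributional impact could not be read off any aggregate.

```python
def tax_current(income: float) -> float:
    """Assumed current schedule: 0% up to 10k, 20% to 50k, 40% above."""
    return (0.20 * max(0.0, min(income, 50_000) - 10_000)
            + 0.40 * max(0.0, income - 50_000))

def tax_reform(income: float) -> float:
    """Assumed reform: higher exemption (15k), flat 30% above it."""
    return 0.30 * max(0.0, income - 15_000)

incomes = [8_000, 20_000, 35_000, 60_000, 120_000]  # toy microdata

for y in incomes:
    delta = tax_reform(y) - tax_current(y)
    label = "loser" if delta > 0 else "winner" if delta < 0 else "unchanged"
    print(f"income {y:>7}: change in tax {delta:>+10.2f} ({label})")
```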
In the social sciences, dynamic microsimulation goes back to Guy Orcutt's (1957) idea of mimicking natural experiments in economics. His proposed modeling approach corresponds to what can be labelled the empirical or data-driven stream of dynamic microsimulation models, i.e. models designed and used operationally for forecasting and policy recommendations (Klevmarken 1997). Associated with this type of microsimulation are microeconometric and statistical models as well as accounting routines. In contrast to this empirical stream is the theoretical stream, or the tradition of agent-based simulation (ABS). While ABS models constitute microsimulation models under our broad definition, ABS is frequently considered a separate branch of simulation, distinct from microsimulation. This view rests mostly on the different purpose of ABS modeling (mainly the exploration of theories) and on the different approaches ABS takes to modeling micro behaviour (rules based on theory and artificial intelligence). Unless otherwise stated, this discussion refers only to the data-driven stream of dynamic microsimulation models. (It should be noted, however, that the Modgen language has also been used successfully for ABS, as documented in Wolfson (1999).)
The main components of a typical data-driven microsimulation model can be summarized as follows. At its centre is a population microdatabase storing the characteristics of all members of the population. (From a more object-oriented perspective, the population database can also be viewed, and implemented, as decentralized individual ‘brains’, with actors possessing methods to report their states to a virtual statistician responsible for data collection and presentation.) This database is dynamically updated during a simulation run according to micro models of behaviour and policy rules (such as contribution and benefit rules in a pension model), all of which can be parameterized by the user. Simulation results consist of aggregated tables produced by output routines; additionally, output can consist of microdata files which can be analyzed with statistical software. Some models (including all of those generated with Modgen) also allow the graphing of individual biographies.
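The skeleton below sketches these components under assumed names—it is illustrative Python, not Modgen—with a population microdatabase, user-set parameters, behavioural and policy rules that update each record per period, a biography log, and an output routine that aggregates the micro records into a summary table.

```python
import random

# User-parameterized model inputs (all values invented for illustration).
PARAMETERS = {"p_death": 0.02, "pension_age": 65, "benefit": 12_000}

def create_population(n: int, rng: random.Random) -> list[dict]:
    """The population microdatabase: one record per individual."""
    return [{"id": i, "age": rng.randint(0, 90), "alive": True, "biography": []}
            for i in range(n)]

def update(person: dict, year: int, rng: random.Random) -> None:
    """Behavioural and policy rules applied to one record for one period."""
    if not person["alive"]:
        return
    if rng.random() < PARAMETERS["p_death"]:
        person["alive"] = False
        person["biography"].append((year, "death"))
        return
    person["age"] += 1
    if person["age"] == PARAMETERS["pension_age"]:
        person["biography"].append((year, "retirement"))

def output_table(pop: list[dict]) -> dict:
    """Output routine: aggregate micro records into a summary table."""
    alive = [p for p in pop if p["alive"]]
    retired = sum(p["age"] >= PARAMETERS["pension_age"] for p in alive)
    return {"alive": len(alive), "retired": retired,
            "pension_cost": retired * PARAMETERS["benefit"]}

rng = random.Random(0)
pop = create_population(10_000, rng)
for year in range(2025, 2035):
    for person in pop:
        update(person, year, rng)
print(output_table(pop))
```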
Orcutt’s vision and today’s reality
Dynamic microsimulation was first introduced into the social sciences in 1957 by Guy Orcutt’s landmark paper ‘A new type of socio-economic system’, a proposal for a new model type motivated mainly by frustration with existing macroeconomic projection models. In this paper, Orcutt addresses various shortcomings of macroeconomic models which can be overcome by microsimulation. The first is the “limited predictive usefulness” of macro models, especially regarding the effects of governmental action, since macro models are too abstract to allow for fine-grained policy simulations. The second is the focus on aggregates and the neglect of distributional aspects in macroeconomic studies and projections. Third, he stresses that macro models fail to capitalize on the available knowledge about elemental decision-making units. In contrast, microsimulation is not bound by the restrictive assumption of “absurdly simple relationships about elemental decision-making units” imposed in order to make aggregation possible. Modeling at the level at which decisions are taken not only makes models more understandable; in the presence of nonlinear relationships, “stable relationships at the micro level are quite consistent with the absence of stable relationships at the aggregate level”.
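This last point is easy to demonstrate numerically. In the invented example below, the micro relationship y = x² is perfectly stable, yet two populations with the same mean of x produce different means of y, so no stable relationship exists between the aggregates alone.

```python
import random

random.seed(3)

def micro(x: float) -> float:
    return x ** 2  # the same stable micro relationship everywhere

# Two populations with the same mean of x but different dispersion.
pop_a = [random.gauss(10, 1) for _ in range(100_000)]
pop_b = [random.gauss(10, 5) for _ in range(100_000)]

for name, pop in (("A (sd=1)", pop_a), ("B (sd=5)", pop_b)):
    mean_x = sum(pop) / len(pop)
    mean_y = sum(micro(x) for x in pop) / len(pop)
    print(f"{name}: mean x = {mean_x:.2f}, mean y = {mean_y:.2f}")
# Both populations have mean x ~ 10, but mean y ~ 101 vs ~ 125:
# knowing the aggregate of x alone cannot predict the aggregate of y.
```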
While these observations still hold true after half a century, some of his other remarks are a good illustration of how computers have altered research. A considerable part of his paper is devoted to justifying the use of expensive computer power for simulation, at a time when such work was widely regarded as the domain of mathematicians and of analytic solutions derivable on paper. Among its advantages, Orcutt notes that microsimulation “… is intelligible to people of only modest mathematical sophistication”.
While this proposed modeling approach was indeed visionary in 1957, when sufficient computer power and suitable data were still lacking, Orcutt was soon put in charge of developing the first large-scale American microsimulation model, DYNASIM. He later contributed to its offspring CORSIM, which in turn served as a template for the Canadian DYNACAN and the Swedish SVERIGE models. Today, dozens of large-scale general-purpose models and countless specialized smaller models can be found around the world (for a list, see Spielauer 2007). Nevertheless, microsimulation still faces the continued resistance of a mainstream economic profession “imbued with physics envy and ascribing the highest status to mathematical elegance rather than realism and usefulness” (Wolfson 2007). This front is increasingly broken up by the demands of policy makers concerned with distributional questions and confronted with the sustainability of policies in the context of demographic change. This holds especially true for pension models, which are a showcase both for the new demands of policy makers faced with population aging and questions of sustainability and intergenerational fairness, and for the power of microsimulation in addressing such issues. Because individual pension benefits depend on individual contribution histories as well as on family characteristics (e.g. survivor pensions), pension models require very detailed demographic and economic simulations. On one hand, this can make the models very complex; on the other, it enables them to serve very distinct purposes. Many models are designed as general-purpose models capable of studying various behaviours and policies, such as educational dynamics, the distributional impact of tax-benefit systems, and health care needs and arrangements. It is the increasing demand of policy makers for the more detailed projections necessary for planning purposes, together with advances in data collection and processing, that has triggered this development.