Strengths and drawbacks
The strengths of microsimulation unfold in three dimensions. Microsimulation is attractive from a theoretical point of view, as it supports innovative research embedded into modern research paradigms like the life course perspective. (In this respect, microsimulation is the logical next step following life course analysis.) Microsimulation is attractive from a practical point of view, as it can provide the tools for the study and projection of sociodemographic and socioeconomic dynamics of high policy relevance. And microsimulation is attractive from a technical perspective, since it is not restricted with respect to variable and process types, as is the case with cell-based models.
Strengths of microsimulation from a theoretical perspective
The massive social and demographic change of recent decades went hand in hand with tremendous technological progress. The ability to process large amounts of data has boosted data collection and enabled new survey designs and methods of data analysis. These developments were accompanied by a general paradigm shift in the social sciences, with many of the changes pointing in the same direction as Orcutt’s vision. Among them is the general shift from macro to micro, moving individuals within their context into the centre of research. Another is the increasing emphasis on processes rather than static structures, bringing in the concepts of causality and time. While the microsimulation approach supports both of these new focuses of attention, it constitutes the main tool for a third trend in research: the move from analysis to synthesis (Willekens 1999). Microsimulation links multiple elementary processes in order to generate complex dynamics and to quantify what a given process contributes to the complex pattern of change.
These trends in the social sciences are mirrored in the emergence of the life course paradigm, which connects social change, social structure, and individual action (Giele and Elder 1998). Its multidimensional and dynamic view is reflected in longitudinal research and the collection of longitudinal data. Individual lives are described as a multitude of parallel and interacting careers such as education, work, partnership, and parenthood. The states of each career are changed by events whose timing is recorded in surveys and, correspondingly, simulated in microsimulation models. Various strengths of the microsimulation approach correspond directly to key concepts of the life course perspective, making it the logical approach for the study and projection of social phenomena.
Microsimulation is well suited to simulating the interaction of careers: it allows for the modeling of processes that have a memory (i.e. individuals remember past events across career domains) and for the modeling of parallel careers in which the event probabilities or hazards of one career respond to state changes in another.
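To make this concrete, the following is a minimal sketch in Python rather than an actual Modgen model. All states, rates, and transition probabilities are illustrative assumptions; the point is only that one career's event probability can respond to the state of another career, and that the simulated person carries a memory of past events.

```python
import random

# Minimal sketch of two interacting careers (all rates are illustrative assumptions).
# The annual birth probability is derived from a hazard that depends on the current
# employment state and on the number of previous births, so the process has a memory.

def birth_hazard(employed, n_children):
    # Hypothetical annual hazard: lower when employed, declining with parity.
    base = 0.06 if employed else 0.10
    return base / (1 + n_children)

def simulate_person(years=20, seed=None):
    rng = random.Random(seed)
    employed, n_children, history = False, 0, []
    for year in range(years):
        # Employment career: simple assumed transition probabilities.
        p_change = 0.30 if not employed else 0.10
        if rng.random() < p_change:
            employed = not employed
            history.append((year, "job entry" if employed else "job exit"))
        # Fertility career: the event probability responds to the employment state.
        if rng.random() < birth_hazard(employed, n_children):
            n_children += 1
            history.append((year, f"birth #{n_children}"))
    return history

print(simulate_person(seed=1))
```

In an actual model the hazards would be estimated from longitudinal data, but the structure stays the same: events are drawn at the individual level, conditional on the person's simulated history so far.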
Besides the recognition of interactions between careers, the life course perspective emphasizes the interaction between individuals: the concept of linked lives. Microsimulation is a powerful tool for studying and projecting these interactions, which include changes in kinship networks (Wachter 1995), intergenerational transfers and the transmission of characteristics like education (Spielauer 2004), and the transmission of diseases such as AIDS.
According to the life course perspective, the current situation and decisions of a person can be seen as the consequence of past experiences and future expectations, and as an integration of individual motives and external constraints. In this way, human agency and individual goal orientation are part of the explanatory framework. One of the main mechanisms by which individuals confront the challenges of life is the timing of life course events in parallel, and often difficult to reconcile, careers such as work and parenthood. Microsimulation supports the modeling of individual agency, as all decisions and events are modeled at the level where they take place, and models can account for the individual context. Beyond these intrinsic strengths, microsimulation does not impose any restrictions on how decisions are modeled; it allows for any kind of behavioural model that can be expressed in computer code.
Strengths of microsimulation from a practical perspective
The ability to create models for the projection of policy effects lies at the core of Orcutt’s vision. The attractiveness of dynamic microsimulation in policymaking is closely linked to the intrinsic strengths of the approach: it allows the modeling of policies at any level of detail, and it is well placed to address distributional issues as well as issues of long-term sustainability. Part of this power already unfolds in static tax benefit microsimulation models, which have become a standard tool for policy analysis in most developed countries. These models resulted from the increased interest of policy makers in distributional studies, but they are by nature limited to cross-sectional analysis. While limited tax benefit projections into the future are possible with static models, by re-weighting the individuals of an initial population to represent a future population (and by upgrading income and other variables), this approach lacks the longitudinal dimension, i.e. the individual life courses (and contribution histories) simulated in dynamic models. The importance of dynamics in policy applications was most prominently recognized in the design and modeling of pension systems, which are highly affected by population aging. Pension models are also good examples of applications in which both individual contribution histories and the concept of linked lives (survivor pensions) matter. Another example is the planning of care institutions, whose demand is driven by population aging as well as by changing kinship networks and labour market participation, the main factors affecting the availability of informal care.
Given the rapid rate of social and demographic change, the need for a longitudinal perspective has quickly been recognized in most other policy areas that benefit from detailed projections and from the “virtual world” or test environment provided by dynamic microsimulation models. The longitudinal aspect of dynamic microsimulation is not only important for sustainability issues; it also extends the scope of how the distributional impact of policies can be analyzed. Microsimulation can be used to analyze distributions on a lifetime basis and to address questions of intergenerational fairness. One example is the possibility of studying and comparing the distribution of rates of return on individual contribution and benefit histories over the whole lifespan.
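As an illustration of this kind of lifetime analysis, the sketch below computes an internal rate of return from stylized contribution and benefit histories. The cash flows and the bisection routine are illustrative assumptions, not the output of any actual pension model; in a dynamic microsimulation the histories would come from the simulated life courses themselves.

```python
# Sketch: comparing lifetime rates of return across simulated individuals.
# Cash flows are illustrative: negative values are pension contributions,
# positive values are benefits received later in life.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def internal_rate_of_return(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    # Simple bisection on the net present value; assumes a single sign change.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Two stylized life courses: steady contributions versus an interrupted career,
# each followed by 20 years of benefits.
person_a = [-1000] * 40 + [2500] * 20
person_b = [-1000] * 25 + [0] * 15 + [2500] * 20
print(internal_rate_of_return(person_a), internal_rate_of_return(person_b))
```

Applied to a whole simulated population, the distribution of such rates of return can then be compared across cohorts or policy scenarios.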
Strengths of microsimulation from a technical perspective
From a technical point of view, the main strength of microsimulation is that it is not subject to the restrictions typical of other modeling approaches. Unlike cell-based models, microsimulation can handle any number of variables of any type. Compared to macro models, there is no need to aggregate behavioural relations, which in macro models is possible only under restrictive assumptions. With microsimulation, there are no restrictions on how individual behaviours are modeled, since it is the behavioural outcomes that are aggregated. In other words, there are no restrictions on process types; most importantly, microsimulation allows for non-Markov processes, i.e. processes that possess a memory. Based on micro data, microsimulation also allows flexible aggregation, as the information may be cross-tabulated in any form, whereas in aggregate approaches the aggregation scheme is determined a priori. Simulation results can be displayed and accounted for simultaneously in various ways: as aggregate time series, cross-sectional joint distributions, and individual and family life paths.
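The sketch below illustrates this flexibility with a handful of assumed, purely illustrative micro records: the same simulated records are tabulated once as an aggregate time series and once as a cross-sectional joint distribution, without any aggregation scheme having been fixed in advance.

```python
from collections import Counter

# Sketch: once individual histories are simulated, the same micro records can be
# aggregated in any form after the fact. Each record is (year, age, sex, employed),
# and the values below are purely illustrative.
records = [
    (2030, 34, "F", True), (2030, 67, "M", False), (2030, 45, "F", True),
    (2031, 35, "F", True), (2031, 68, "M", False), (2031, 46, "F", False),
]

# (a) An aggregate time series: number of employed persons per simulated year.
employed_by_year = Counter(year for year, _, _, employed in records if employed)

# (b) A cross-sectional joint distribution: sex by broad age group in 2031.
joint_2031 = Counter(
    (sex, "65+" if age >= 65 else "under 65")
    for year, age, sex, _ in records if year == 2031
)

print(dict(employed_by_year))
print(dict(joint_2031))
```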
What is the price? Drawbacks and limitations
Microsimulation has three types of drawbacks (and preconceptions) which are of a very different nature: aesthetics, the fundamental limitations inherent to all forecasting, and costs.
If beauty is to be found in simplicity and mathematical elegance (a view not uncommon in mainstream economics), microsimulation models violate all rules of aesthetics. Larger-scale microsimulation models require countless parameters estimated from various data sources which are frequently not easy to reconcile. Policy simulation requires tiresome accounting, and microsimulation models, due to their complexity, are always in danger of becoming black boxes that are hard to operate and understand. While there is clearly room for improvement in the documentation and user interfaces of microsimulation models, the sacrifice of elegance for usefulness will always apply to this modeling approach.
The second drawback is more fundamental. The central limitation of microsimulation lies in the fact that greater model detail does not go hand in hand with greater overall predictive power. The reason is randomness, partly caused by the stochastic nature of microsimulation models and partly by accumulated errors and biases in variable values. The trade-off between detail and possible bias is already present in the choice of data sources, since survey sample sizes do not keep pace with a model's level of detail. There is a trade-off between the additional randomness introduced by additional variables and the misspecification errors caused by models that are too simple. This means that the feature that makes microsimulation especially attractive, namely the large number of variables a model can include, comes at the price of randomness: predictive power weakens as the number of variables increases. The result is a trade-off between good aggregate predictions and good long-run predictions of distributional issues, one that modellers have to be aware of. This trade-off is not specific to microsimulation, but since microsimulation is typically employed for detailed projections, the scope for randomness becomes correspondingly large. Not surprisingly, in many large-scale models some processes are aligned or calibrated towards aggregate projections obtained by external means.
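One simple family of such techniques is alignment by sorting, sketched below under purely illustrative probabilities and an assumed external target. Individual heterogeneity in event propensities is preserved, while the realized number of events is constrained to the externally projected total.

```python
import random

# Sketch of one simple alignment-by-sorting variant: event propensities are
# simulated at the micro level, but the number of events realized in a given
# year is constrained to an externally projected aggregate total.
# The probabilities and the external target below are illustrative assumptions.

rng = random.Random(42)
population = [{"id": i, "p_event": rng.uniform(0.01, 0.20)} for i in range(1000)]
external_target = 80  # e.g. a total projected by an external macro model

# Each person gets one random draw; ranking by (draw - probability) keeps the
# more event-prone individuals more likely to be selected, while the realized
# count matches the external total exactly.
ranked = sorted(population, key=lambda person: rng.random() - person["p_event"])
selected_ids = {person["id"] for person in ranked[:external_target]}

print(len(selected_ids), "events realized, aligned to an external total of", external_target)
```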
Besides the fundamental nature of this type of randomness, its extent also depends on data reliability or quality. In this respect we can observe and expect various improvements as more and more detailed data becomes available for research, not only in the form of survey data but also administrative data. The latter has boosted microsimulation, especially in the Nordic European countries.
Since microsimulation produces not expected values but random variables distributed around the expected values, it is subject to another type of randomness: Monte Carlo variability. Every simulation experiment will produce different aggregate results. While this was cumbersome in times of limited computing power, repeated experiments and/or the simulation of large populations can now reduce this sort of randomness to negligible levels and, in addition to point estimates, deliver valuable information on the distribution of results.
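The sketch below illustrates this point with an assumed event probability of 0.1: repeated runs of the same experiment scatter around the expected value, and the scatter shrinks as the simulated population grows.

```python
import random
import statistics

# Sketch: Monte Carlo variability of an aggregate outcome (the simulated rate of an
# event with an assumed probability of 0.1 per person) across repeated runs.

def one_run(pop_size, p=0.1, rng=None):
    rng = rng or random.Random()
    return sum(rng.random() < p for _ in range(pop_size))

def replicate(pop_size, runs=200, seed=0):
    rng = random.Random(seed)
    rates = [one_run(pop_size, rng=rng) / pop_size for _ in range(runs)]
    return statistics.mean(rates), statistics.stdev(rates)

# The spread around the expected value of 0.1 shrinks as the population grows.
for n in (1_000, 10_000, 100_000):
    mean, sd = replicate(n)
    print(f"population {n:>7}: mean {mean:.4f}, standard deviation {sd:.5f}")
```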
The third type of drawback is related to development costs. Microsimulation models require high-quality, longitudinal, and sometimes highly specific data, and there are costs involved in acquiring and compiling such data. Note that these are not costs of the microsimulation itself; they are the price to be paid for longitudinal research in general and for informed policy making in particular.
Microsimulation models also usually require large investments in both manpower and hardware. However, these costs can be expected to decrease further over time as hardware prices fall and more powerful and efficient computer languages become available. Still, many researchers perceive the entry barriers to be high: while they recognize the potential of microsimulation, they remain sceptical about the feasibility of its technical implementation within the framework of smaller research projects. We hope that the availability of the Modgen language lowers this perceived barrier and makes microsimulation more accessible to the research community. In the last few years, various smaller-scale microsimulation models have been developed alongside PhD projects or as parts of single studies. Modgen can both speed up the programming of smaller applications and provide a tested and maintained modeling platform for large-scale models, such as Statistics Canada’s LifePaths and Pohem models.