What are the pressing reform challenges for OECD countries?
The SGI project as a whole is driven by the premise that the 31 developed nations of the Organization for Economic Cooperation and Development (OECD) face several major problems at the outset of the 21st century. Future challenges such as the effects of globalization, shifting demographics, climate change and new security risks demand thoughts and actions that go beyond national boundaries. At the same time, countries face their own individual problems, which they alone must resolve. Despite the pressures of internationalization and economic or political integration, industrialized nations also face social and economic problems that are to some extent “home-made,” and therefore a matter of national policy. They include structural and financial problems associated with states’ social security systems, issues of distributive or social justice, shortcomings in education systems, integration problems and problems associated with environmental protection. These are closely accompanied by grave secondary problems such as a growing dissatisfaction with democracy among electorates, or a loss of faith in the performance of democratic institutions and the structures of advanced market economies.
Which time periods do the SGI cover?
The Sustainable Governance Indicators 2011 assess a two-year period from 1 May 2008 to 30 April 2010. The period under review for the SGI 2009 ranges from January 2005 to March 2007. In order to ensure comparability across all reports, experts were asked to focus on developments during this period. Therefore, the most recent developments are not incorporated. The SGI are updated on a regular basis every two or three years. The next edition will be published in 2013.
Who runs the project?
The Sustainable Governance Indicators are run by the Bertelsmann Stiftung, an operational think tank that encourages social change. Working together with partners from all areas of society, the Bertelsmann Stiftung fosters sustainability by identifying nascent challenges early on and by developing viable strategies to face these issues.
How do the SGI stand out in international comparison?
On two points, the approach of the SGI goes beyond existing international rankings:
1.) The capacity for reform of OECD states, one of the two pillars of the SGI, has been largely neglected in previous international rankings.
2.) In the SGI, the need for reform is examined not only in economic terms but also with regard to education, environmental and social policies as well as security issues. Thus, for the first time, the SGI allow a sound assessment of the future viability of 31 OECD member states. The rankings are backed up by more than 1,300 pages of collective expertise from more than 80 respected researchers working at various locations all over the world. The SGI are quality-assured by the SGI Board, a committee of recognized scientists and practice-proven experts who audit and approve all findings.
Why are both reform status and governance capacities measured?
A comprehensive assessment of future viability must not be limited to the measurement of political outcomes and the quality of democratic frameworks; to the contrary, it must also look very closely at the capacity of the responsible political actors for successful governance. The Management Index assesses the actual capacity of an OECD state to take action and implement reform in terms of developing, agreeing on and realizing policy. It seeks to answer the critical question of whether a state is able to identify pressing problems, develop proposals for strategic solutions and thus foster sustainable policy outcomes through governance.
Why are qualitative and quantitative assessments combined?
To operationalize and measure the concepts used in constructing the SGI, we decided to rely on a combination of statistical data drawn from official sources as well as the qualitative assessments of country experts. Statistical data conform to cross-national standards. However, such data often do not adequately cover the full meaning of a concept. We therefore believe that complex concepts can be measured best through the use of expert assessments that take the country-specific context into account and provide “thick” descriptions capturing the nuances of phenomena.
In this way, the combination of expert assessments and statistical indicators assumes that both types of observations have specific strengths and weaknesses, that they cannot fully substitute for each other, and that neither of them is epistemologically superior to the other. Pairing “objective” quantitative data with highly context-sensitive, qualitative expert assessments delivers a high-resolution profile of political outcomes, the quality of democracy and political steering performance.
How are the two indices calculated?
The Status Index and the Management Index scores are derived by calculating the arithmetic means of the scores for their respective two dimensions, i.e. "quality of democracy" and "policy performance" in the Status Index, and "executive capacity" and "executive accountability" in the Management Index. The individual dimension scores in the Status Index are derived by calculating the arithmetic means of the criteria scores, which are also derived by calculating the arithmetic means of their respective components.
The dimension scores in the Management Index likewise represent the arithmetic means of their equally weighted component scores, but the Management Index contains two additional levels of disaggregation so as to reflect the greater diversity of governing practices and mechanisms addressed by the individual questions. The “executive capacity” dimension is disaggregated into stages of the policy process (e.g., preparation, implementation, etc.) that constitute three categories, each of which consists of between one and five criteria that are used to group activities such as regulatory impact assessment or effective implementation. In addition, a distinction is drawn between single items (e.g., M6.1, which asks how the government achieves its own policy objectives) and sets of closely related items identified by lettered annotations (e.g., a, b, c, etc.). For example, intra-executive monitoring mechanisms are viewed as forming such a set, consisting of: organizational incentives limiting ministerial self-interest, the monitoring of line ministries, executive agencies and internal auditing arrangements. These items are weighted equally.
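The nested arithmetic-mean aggregation described above can be sketched in a few lines. The tree structure, item names and scores below are illustrative placeholders, not the official SGI item codes or weights:

```python
# Minimal sketch of nested arithmetic-mean aggregation: leaves are item
# scores, inner nodes average their equally weighted children.
from statistics import mean

def aggregate(node):
    """Recursively average a tree of scores."""
    if isinstance(node, dict):
        return mean(aggregate(child) for child in node.values())
    return float(node)

# Hypothetical Status Index tree: dimensions -> criteria -> item scores.
status_tree = {
    "quality_of_democracy": {
        "electoral_process": {"item_a": 8, "item_b": 6},
        "rule_of_law": {"item_a": 7, "item_b": 9},
    },
    "policy_performance": {
        "economy": {"item_a": 5, "item_b": 7},
        "social_affairs": {"item_a": 6, "item_b": 8},
    },
}

status_index = aggregate(status_tree)  # criteria means -> dimension means -> index
```

Because every level is an unweighted mean of the level below, adding or removing an item changes only its own criterion before the effect propagates upward.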
The two composite indicators—that is, the Status Index and the Management Index— provide scores and ranks for each of the 31 states. The ranking is based on the score that is precise to the second decimal place. If two or more states have the same score at this level of precision, they are ranked equally.
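The two-decimal tie rule can be illustrated with a short sketch; the country labels and scores are invented for illustration:

```python
# Sketch: rank states by their score rounded to two decimal places;
# states that are equal at that precision share a rank.
def rank_countries(scores):
    """Return {country: rank}, ties at two decimals sharing a rank."""
    ordered = sorted(scores.items(), key=lambda kv: round(kv[1], 2), reverse=True)
    ranks, prev_key, prev_rank = {}, None, 0
    for position, (country, score) in enumerate(ordered, start=1):
        key = round(score, 2)
        if key == prev_key:
            ranks[country] = prev_rank          # same two-decimal score: tie
        else:
            ranks[country], prev_key, prev_rank = position, key, position
    return ranks

ranks = rank_countries({"A": 8.074, "B": 8.069, "C": 7.50})
# A and B both round to 8.07, so they share rank 1; C takes rank 3.
```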
How did the assessment process work? How many experts were involved?
The SGI’s expert survey questionnaire is designed to improve the validity and reliability of expert assessments through the use of six tools and procedural steps. First, many assessment questions are formulated so as to elicit detailed factual evidence rather than broad—and, consequently, more subjective—assessments. In fact, many questions ask for responses that may be cross-checked with responses to other questions, statistical data or data from opinion surveys.
Second, the questionnaire provides detailed explanations of and four tailored response options for each question. This information is intended to illustrate the purpose of a question, to structure the way the expert words his or her assessment, and to provide a standardized framework for the production of the country reports. The experts are instructed to adapt the standardized response options to the individual context of the particular country they are evaluating and to substantiate their ratings (numerical assessment) with evidence in their country report (in the following: “expert report”). The rating scale for each question ranges from one to 10, with one being the worst and 10 being the best. The scale is differentiated by four response options provided for each question. Although the written assessments do not allow for a direct reconstruction of the numerical ratings, they do provide an explanatory background for them.
Third, each OECD member state surveyed is examined by two leading scholars with established expertise in the respective countries. To identify subjective bias and reduce any distortion it might cause, the experts were selected so as to represent both domestic and external views as well as the viewpoints of political scientists and economists. One expert writes a draft country report, assessing all questionnaire items. The other expert reviews this report, making comments and providing alternative or complementary content. Both experts are instructed to assess the situation in their countries as of April 2010 and to take into account the period between May 2008 and April 2010 when explaining their evaluation. Although many experts know each other personally, we ensure that their expertise is given independently.
In completing the questionnaire, each expert provides numerical ratings for 65 questions, which means that the evaluations for all 31 countries entail a total of 2,015 ratings (or scores). Whereas the reviewer has access to the written assessments of the first country expert, he or she cannot see the first expert’s numerical ratings, which ensures that scores are given independently.
Fourth, the countries examined by the SGI are subdivided among seven “regional coordinators.” These regional coordinators, who are political scientists with both comparativist and area expertise, are each responsible for four or five of the 31 OECD countries, grouped according to their geographical proximity. The regional coordinators monitor the development of the written assessments according to criteria of validity and objectivity, ensuring a fair and balanced country report. In addition, the regional coordinators give numerical ratings based on those provided by the country experts.
Fifth, the regional coordinators review their ratings collectively so as to make it possible to draw comparisons across the entire OECD world. As part of the discussions forming the review process, each regional coordinator is required to explain, defend, and if necessary, recalibrate his or her ratings and assessments. To make any changes agreed to during the review process more transparent, the coordinators also agree to keep the score within the range of the two country expert ratings. During the review process, six percent of these scores exceeded the range defined by the expert ratings, and each of these deviations was justified in the body of the country reports.
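The range check described above (coordinator scores staying within the span of the two expert ratings) could be audited with a sketch like the following; the score triples are invented:

```python
# Sketch: find the share of coordinator scores that fall outside the
# range spanned by the two country-expert ratings.
def share_outside_expert_range(records):
    """records: list of (expert1, expert2, coordinator) score triples."""
    flagged = [r for r in records
               if not min(r[0], r[1]) <= r[2] <= max(r[0], r[1])]
    return len(flagged) / len(records)

sample = [(6, 8, 7), (5, 5, 5), (4, 7, 8), (9, 8, 8)]
deviation_share = share_outside_expert_range(sample)  # one of four deviates
```

Each such deviation would then need a written justification in the country report, mirroring the procedure described for the six percent of scores that exceeded the range.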
Sixth, as part of a second round of reviews, an advisory body composed of renowned scholars and practitioners, which is tasked with making strategic decisions, discusses and approves the ratings.
To what extent can the SGI 2011 results be compared to the SGI 2009?
The set of indicators in the SGI 2011 has been carefully revised, following a comprehensive external and internal evaluation of the SGI 2009 index design. In this process, some indicators have been replaced by others. Moreover, the composition of the indices has been readjusted. These modifications inevitably create distortions whenever the new SGI 2011 results are directly compared with the original SGI 2009 results.
However, in order to allow for justifiable comparisons between these two SGI editions, the SGI 2009 results have been interpolated on the basis of the new SGI 2011 index design, i.e. every quantitative indicator newly incorporated into the SGI 2011 survey has also been included ex post into the SGI 2009 survey with a value corresponding to the respective review period.
Qualitative expert assessments in the SGI 2011 can be directly compared to the corresponding assessments of the SGI 2009. However, in order to ensure comparability of qualitative scores, the SGI 2009 expert assessments are no longer subject to linear transformation. Thus, the qualitative indicators of both SGI editions correspond to the original score value on the 1-10 scale as assessed by the respective country experts.
Three qualitative indicators newly added in 2011 (“S1.4 Party Financing”, “S4.3 Appointment of Justices”, “M2.7 Informal Coordination Procedures”) have no corresponding value in the SGI 2009 survey. In these cases, in order to avoid distortions, the score for the SGI 2011 indicators in question has been imputed to the according SGI 2009 indicator as well. The SGI 2009 value for the newly added indicator “S5.1 Economic Policy” represents the arithmetic mean of all quantitative scores within the Economy criterion as assessed in the SGI 2009. The qualitative indicators “S3.1 Civil Rights” and “S3.2 Political Liberties” were originally combined in one single qualitative indicator in the SGI 2009 survey, and thus the interpolated values for both S3.1 and S3.2 match with the single combined value as assessed in the SGI 2009. The same method has been used for the indicators “M13.2 Association Competence (businesses)” and “M13.3 Association Competence (others)”, which were also surveyed as one single indicator in the original SGI 2009.
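The ex-post rules above can be sketched as simple assignments. The indicator codes come from the text, but the scores and the simplified data layout are invented for illustration:

```python
# Sketch of the ex-post imputation for the SGI 2009 dataset:
#  - an indicator new in 2011 (e.g., S1.4) takes its 2011 score,
#  - an indicator surveyed as one item in 2009 but split in 2011
#    (e.g., S3.1/S3.2) supplies its combined value to both successors.
sgi_2009 = {"S3_combined": 7.0}          # single civil rights/liberties item
sgi_2011 = {"S1.4": 6.0, "S3.1": 8.0, "S3.2": 7.0}

imputed_2009 = {
    "S1.4": sgi_2011["S1.4"],            # new in 2011: copy the 2011 score back
    "S3.1": sgi_2009["S3_combined"],     # split item: both take the
    "S3.2": sgi_2009["S3_combined"],     # original combined 2009 value
}
```

Copying a 2011 score back into 2009 deliberately neutralizes that indicator in cross-edition comparisons, which is exactly the distortion-avoidance the text describes.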
Thanks to the interpolation of the SGI 2009 on the basis of the SGI 2011 index design, two composite datasets are available which are based on an identical set of indicators and an identical aggregation method. Thus, plausible direct comparisons between the two SGI editions are possible to a certain degree but must still be treated with caution. For methodological reasons, comparisons across the two SGI editions should only focus on the ranking of countries relative to each other and not on aggregated score differences. The interpolated data are first and foremost meant to reveal basic trends which may encourage further and deeper analysis. Rank differences between the SGI 2009 and 2011 have to be interpreted carefully for two reasons: first, because the high level of aggregation only partly allows for inferences at the category, criterion and indicator level; and second, because rank differences cannot entirely be attributed to developments in one particular country but are also related to increased or decreased performance of other countries. Trends of improvement and decline can therefore only partially capture the complexity of the changes described in depth in the country reports.
I want to use SGI scores for my work. Where do I get the complete data?
The vast set of data generated by the SGI can be accessed free of charge in the ‘Download’ section.
Why is the quality of a state’s democracy of importance? All OECD states are democracies.
The SGI Status Index looks at the quality of democracy and rule of law by drawing on an array of indicators. The quality of democracy and political participation in a political system are crucial to its long-term stability and capacity to perform. Indeed, this viability depends to a large extent on the levels of trust between citizens and politics. Guaranteed opportunities for democratic participation and observation, freely accessible information, rule of law and protection of civil rights are thus essential prerequisites for the legitimacy of a political system. Moreover, democratic participation and observation are essential for concrete learning and adaptation processes as well as the capacity to change. The SGI thus regard structures that ensure a high quality of democracy and rule of law as necessary in achieving sustainability in terms of long-term system stability.
The SGI’s concept of democracy includes not only the rights of political participation and electoral competition but also the rule of law. Since all OECD member countries are democracies, the SGI’s questions in this category focus on the quality rather than the presence of democracy. Thus, there are a series of questions designed to address whether citizens face discrimination in the electoral process, how citizens can access public information, the degree to which the media are independent and diversified, how well states protect civil rights and whether the government and administration act predictably and in accordance with the law.
Which states do the SGI compare?
The SGI compare 31 member states of the Organization for Economic Cooperation and Development (OECD), using OECD membership as of May 2010 as a formal criterion to select states from a wider group of developed industrialized nations committed to human rights, democratic pluralism and an open market economy.
What if values are missing for some countries?
Missing values in public statistics were supplemented by values from previous years or from other sources used as proxies. If this was not possible, the missing value was imputed by the median of the available values.
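The fallback chain described above (official value, else a proxy from a previous year or another source, else the median of the available values) can be sketched as follows; all country labels and figures are invented:

```python
# Sketch of the missing-value treatment: prefer the official figure,
# fall back to a proxy value, and finally impute the median of the
# countries for which a value is available.
from statistics import median

values = {"A": 4.2, "B": None, "C": 5.1, "D": None, "E": 4.8}
proxies = {"B": 4.5}                      # e.g., an earlier year's figure

available = [v for v in values.values() if v is not None]
filled = {country: (v if v is not None
                    else proxies.get(country, median(available)))
          for country, v in values.items()}
# B receives its proxy; D, with no proxy, receives the median of A, C, E.
```

Median imputation keeps an indicator's score distribution centered and prevents a single missing statistic from pulling a country's aggregate toward the extremes.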