Friday 1 June 2012

DEVELOPMENT INDICES AND INDICATORS



Introduction

The concept that complexity can be succinctly summarized in a single statement, picture or measure is indeed an old one. The world is a complex place and human beings have always sought ways of interpreting what they sense around them so as to help deal with that complexity.
Given the pressing need to address human suffering and poverty wherever they are found, the appeal of presenting this complexity in forms that can guide intervention is highly understandable. Development indicators and indices (index: an aggregation of indicators into a single representation) seek to do just that – to simplify so as to manage.
Indicators have largely been quantitative rather than qualitative, virtually by definition. That is not to say that qualitative (non-numerical) indicators are unimportant. Indeed, by far the majority of indicators we use on a day-to-day basis are qualitative – a sense of a street being ‘dirty’, of traffic being ‘heavy’ or of feeling unsafe walking in a particular neighborhood. These ‘feelings’ are based on what we hear and see, interpreted through what we have learned from our own experience or from that of others. Quantitative indicators can have a feel of being mechanical, technical and complicated, and this may be off-putting for some. Yet ironically they capture the same kind of judgement that we each make every day of our lives.
This article explores indicators and indices in development by dissecting three well-known examples. The examples have not been chosen because they are necessarily the ‘best’ (whatever that may mean) but because they are widely reported and follow a simple hypothesised causal chain:
Figure 1. Hypothesised causal chain with three development indices.
Admittedly there is over-simplification in these cause-effect assumptions. Corruption is not the only limiting factor within good governance and neither is good governance the only limitation to achieving human development. Similarly, an increase in environmental degradation isn’t the only potential ‘cost’ to achieving good human development. But it can at least be tentatively assumed a priori that the three indices could have a relationship. This and their popularity make them good examples of their genre.
All three of the indices range in value from 0 (bad) to 1 (excellent), but rather than give the numerical values of the indices as tables the results are presented more qualitatively as color maps. In these maps, values towards zero are represented as ‘red’ (‘bad’), values of 0.5 (midpoint) are ‘yellow’ and values of ‘1’ (‘good’) are dark blue.
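For readers who want to reproduce this style of presentation, the short Python sketch below maps some invented 0–1 index values onto a red-to-yellow-to-blue scale; the use of matplotlib’s built-in ‘RdYlBu’ colormap is an assumption that approximates the scheme described above, not the mapping used for the published maps.

```python
import matplotlib
import matplotlib.colors as mcolors

# 'RdYlBu' runs from red (0) through yellow (0.5) to blue (1), approximating the
# colour scheme described above. The index values here are invented.
cmap = matplotlib.colormaps["RdYlBu"]

example_indices = {"Country A": 0.31, "Country B": 0.52, "Country C": 0.93}

for country, value in example_indices.items():
    rgba = cmap(value)                 # map the 0-1 index value to a colour
    print(country, value, mcolors.to_hex(rgba))
```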

Human Development Index

The Human Development Index (HDI) is a creation of the United Nations Development Programme (UNDP) and represents the practical embodiment of their vision of human development as an alternative to what they perceive as the dominance of economic indicators in development. Economic development had the gross domestic product (GDP), so human development had to have the HDI. In essence the HDI is a measure of the ‘quality of life’.
Since its first appearance in 1990 the HDI has comprised three components:
  1. life expectancy (a proxy indicator for health care and living conditions).
  2. adult literacy combined with years of schooling or enrollment in primary, secondary and tertiary education.
  3. real GDP/capita ($ PPP; a proxy indicator for disposable income).
There is typically a two-year time lag in the data – the HDI for 2006 uses data from 2004 for example – and gaps are filled in various ways, typically by making assumptions based upon data available for assumed ‘peers’.
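To make the aggregation concrete, here is a minimal Python sketch of a simplified HDI calculation, assuming the pre-2010 formulation in which each component is rescaled to 0–1 between fixed ‘goalposts’ (with income taken in logarithms) and the three dimension indices are averaged; the goalposts and country figures used below are illustrative assumptions rather than the UNDP’s published data.

```python
# A minimal sketch of a pre-2010-style HDI calculation: each component is
# rescaled to 0-1 between fixed 'goalposts' and the three dimension indices are
# averaged. Goalposts and country figures are assumptions for demonstration.
import math

def dimension_index(value, lower, upper):
    """Rescale a raw value to 0-1 between fixed goalposts."""
    return (value - lower) / (upper - lower)

def hdi(life_expectancy, adult_literacy, gross_enrolment, gdp_per_capita_ppp):
    # Life expectancy goalposts of 25 and 85 years (assumed).
    life_index = dimension_index(life_expectancy, 25, 85)
    # Education: two-thirds adult literacy, one-third combined enrolment (both %).
    education_index = (2 / 3) * (adult_literacy / 100) + (1 / 3) * (gross_enrolment / 100)
    # Income: logarithm of GDP/capita between assumed goalposts of $100 and $40,000 (PPP).
    income_index = dimension_index(math.log(gdp_per_capita_ppp), math.log(100), math.log(40000))
    # The HDI is then the simple average of the three dimension indices.
    return (life_index + education_index + income_index) / 3

# Illustrative (made-up) figures for a hypothetical country.
print(round(hdi(68.0, 82.0, 70.0, 4500.0), 3))
```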
Figure 2. The Human Development Index of 2006 and its three components.
The choice of these three components for the HDI is not surprising, and they can be found in many lists of development indicators. It can certainly be argued that the selection of only three components for human development is problematic. Income inequality, for example, is not included alongside GDP/capita and neither are there any elements of ‘consumption’. The UNDP have argued that these three can act as proxy indicators for many others. For example, provision of a clean water supply and/or adequate nutrition would be reflected in life expectancy. Indeed, given that the UNDP wanted an index that was relatively transparent and simple to understand it is also not surprising that they decided to include only three components.
As a key part of this strategy the UNDP decided to present the HDI in a country ‘league-table’ format, with labels of ‘high’, ‘medium’ or ‘low’ human development applied by the UNDP depending upon each country’s value for the HDI. Both these devices – league-table presentation and ‘labeling’ – promote a sense of ‘name and shame’ and comparison of performance across peers. Rather than duplicate any of the HDI league tables here, a global map of the values of the HDI published in 2006, ranging from 0 to 1, is presented as Figure 2.
Sadly, and perhaps unsurprisingly, large swathes of Africa have low values for the HDI (orange and yellow), implying that the level of human development for the continent is poor. The preponderance of dark green and blue across the globe (higher values of the HDI) paints a more positive picture, but it is still Africa which stands out. That only gives the overall picture, however; how does this break down in terms of the three components of the HDI? The story is not all that different whichever component is looked at. The three ‘bits’ of the HDI are also presented in Figure 2. The GDP/capita (income) component almost exactly mirrors the coloring for the HDI. The two other components – life expectancy and education – do show some nuanced differences. Life expectancy is particularly poor in the southern African countries, a reflection of the prevalence of HIV/AIDS, while education is especially poor in West Africa. However, looking at the maps for the three components it is easy to see how they merge into the map of the HDI. Neither is it hard to appreciate how the three components may be related – higher income per capita could mean greater expenditure on education and health care, for example. In that sense, even though the three components are quite different (a heterogeneous index), the HDI does have an internal consistency.

Corruption Perceptions Index (CPI)

It is often reiterated that one of the necessary drivers to bring about human development is good governance, and controlling corruption is an important element of this. The assumptions are straightforward. Corruption can result in resources being diverted from the public good to private consumption with the result that impacts intended to be of wider benefit are lost. Corruption may also drive up the costs of doing business with the result that investment is deterred and economic growth will suffer. But the very nature of corruption makes it difficult to gauge. After all, those benefiting from corruption are unlikely to say so and openly declare how much they receive. Payers may be less reticent to talk about the extent of corruption as they are one of the losers, but there may be a danger of them exaggerating their problems and evidence may become somewhat anecdotal.
The Corruption Perceptions Index (CPI), created by the Berlin-based Transparency International (TI; a non-governmental organization) and first released in 1995, is designed to provide a more systematic snapshot of corruption in the same way that the HDI provides a snapshot of human development. Like the HDI it combines a number of different ‘indicators’ into one, but unlike the HDI the indicators which are combined all measure corruption. Whereas the HDI has three quite different components (a heterogeneous index), the CPI is a homogeneous index in the sense that all the components upon which it is based seek to measure the same thing.
Like the HDI, the CPI is based on data collected over a number of years prior to release of the index. While the HDI 2006 is based on data from 2004, the CPI for 2006 uses 12 surveys and expert assessments from 2005 and 2006, with at least three of them being required for a country to be included in the CPI. The surveys evaluate the extent of corruption as perceived by country experts, non-residents and residents (not necessarily nationals) of the countries included, and are:
  • Country Policy and Institutional Assessment by the IDA and IBRD (World Bank), 2005.
  • Economist Intelligence Unit, 2006.
  • Freedom House Nations in Transit, 2006.
  • International Institute for Management Development, Lausanne, 2005 and 2006.
  • Grey Area Dynamics Ratings by the Merchant International Group, 2006.
  • Political and Economic Risk Consultancy, Hong Kong, 2005 and 2006.
  • United Nations Economic Commission for Africa, African Governance Report, 2005.
  • World Economic Forum, 2005 and 2006.
  • World Markets Research Centre, 2006.
Figure 3. The Corruption Perceptions Index of 2006. High values (blue) indicate ‘good’ while low values (red) indicate ‘bad’.
Thus the CPI is based, at least in part, upon judgments made by non-residents and non-nationals of the countries. Each of these sources has a league-table ranking of countries akin to that of the HDI, and it is the ranks which are combined through various complex steps including standardization via ‘matching percentiles’ and ‘beta-transformation’ to generate the CPI.
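As an illustration of the general idea (though not Transparency International’s actual matching-percentiles and beta-transformation procedure), the sketch below converts each source’s country ranking into percentile scores and averages them per country; the sources and rankings are invented.

```python
# A much-simplified stand-in for the CPI aggregation step: each source's
# best-to-worst ranking is converted to a percentile score (best rank -> highest
# score) and the scores are averaged per country. TI's real procedure is more
# involved; the rankings below are invented.
from statistics import mean

source_rankings = {
    "source_a": ["Country X", "Country Y", "Country Z"],
    "source_b": ["Country Y", "Country X", "Country Z"],
    "source_c": ["Country X", "Country Z", "Country Y"],
}

def percentile_scores(ranking):
    """Map a best-to-worst ranking onto evenly spaced 0-1 percentile scores."""
    n = len(ranking)
    return {country: 1 - rank / (n - 1) for rank, country in enumerate(ranking)}

# Average each country's percentile score across all sources that include it.
scores = {}
for ranking in source_rankings.values():
    for country, score in percentile_scores(ranking).items():
        scores.setdefault(country, []).append(score)

cpi_like = {country: round(mean(vals), 2) for country, vals in scores.items()}
print(cpi_like)
```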
The global map of the CPI for 2006 is presented as Figure 3. The picture bears a resemblance to that of the HDI and indeed it is tempting to relate this in simplistic cause-effect terms to Figure 2. In both sets of maps the areas with the greatest levels of corruption (Africa, Asia and Latin America) also tend to do less well in terms of the HDI and its components. Particular ‘hot spots’ of corruption can be found in West Africa, Kazakhstan and Myanmar (Burma). Maybe this supports the initial assumption that countries do badly in human development because they also do badly in terms of corruption? This is, of course, a simplistic argument based on tools designed to simplify, and there will be exceptions to the rule. But the temptation to make simplistic arguments based upon these tools is understandable; a point which will be returned to later.

Environmental Sustainability Index (ESI)

It is possible that a country can achieve a high HDI at the cost of environmental degradation. It has indeed been shown that ‘development’, be it measured in terms of economic wealth or more broadly in terms of quality of life, comes at a cost to environmental quality. The third index discussed here is the Environmental Sustainability Index (ESI). The ESI is backed by the powerful World Economic Forum (WEF) in collaboration with Yale and Columbia Universities in the USA. This partnership refers to itself as the ‘Global Leaders of Tomorrow’, or simply ‘Global Leaders’. Values of the ESI have been published for 1999 (pilot study), 2000, 2001, 2002 and 2005. Unlike the creators of the HDI and CPI, the creators of the ESI state that they see no need to publish the results on a yearly basis as there may be little change to report.
The ESI methodology to arrive at a value for a country is somewhat complex (much more so than that of the HDI) and as with the CPI the precise details do not need to be repeated here. The 2005 version of the ESI covers 146 countries. The process begins with the assimilation of raw data sets for 76 ‘environmental’ variables which are aggregated into 22 ‘indicators’ and finally into the ESI. The terminology is admittedly somewhat confusing as the ‘variables’ of the ESI are often reported as ‘indicators’ elsewhere in the literature. Thus strictly speaking aggregation results in 22 indices (or sub-indices or partial-indices if preferred) which are then combined into the ESI.
The data sets cover a diverse range of variables such as ambient pollution and emissions of pollutants through to impacts on human health and being a signatory to international agreements. Included amongst them are a measure of corruption (although not the CPI) and measures which relate to human life span (e.g. child mortality rate) and education (enrollment and completion rates) so there is some overlap with the HDI. Many of the variables date from the year 2000 on, but some are based on earlier data. The ESI variables are loosely grouped into the pressure-state-response (PSR) framework which has proved to be popular for sustainability indicators. The variables are checked for their distribution across all the nations included in the sample, but there is some tolerance of gaps. Gaps are filled by a process of regression which predicts what the missing values would be based upon associations with other variables.
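The gap-filling idea can be illustrated with a small regression sketch: a missing value is predicted from a variable with which it is associated, using ordinary least squares. This is only the general principle; the ESI’s actual imputation draws on several predictors, and the data below are invented.

```python
# A minimal sketch of regression-based gap filling: the missing value of one
# variable is predicted from countries where both variables are observed, via
# ordinary least squares. The ESI's real imputation is more elaborate; the data
# here are invented.
import numpy as np

# Observed values for countries with complete data (e.g. a scaled income measure
# as predictor and an emissions measure as target).
predictor = np.array([0.2, 0.4, 0.5, 0.7, 0.9])
target = np.array([0.30, 0.45, 0.52, 0.66, 0.80])

# Fit target = coef * predictor + intercept by least squares.
A = np.vstack([predictor, np.ones_like(predictor)]).T
coef, intercept = np.linalg.lstsq(A, target, rcond=None)[0]

# Predict the missing target value for a country where only the predictor is known.
missing_predictor_value = 0.6
imputed = coef * missing_predictor_value + intercept
print(round(float(imputed), 3))
```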
If the data have a highly skewed distribution then the skewness is lessened by taking logarithms. Also extreme values (high and low) are capped by using percentiles. As the variables all have different units of measurement they are standardized by subtracting the mean or subtracting from the mean (depending upon whether high values of the variable are regarded as ‘good’ or ‘bad’ for environmental sustainability) and dividing by the standard deviation. If higher values (e.g. biodiversity) are deemed to be good for sustainability:
$$z\text{-value} = \frac{\text{country value} - \text{mean}}{\text{standard deviation}}$$
If high values are deemed to be bad for sustainability (e.g. emissions of pollutants):
$$z\text{-value} = \frac{\text{mean} - \text{country value}}{\text{standard deviation}}$$
Figure 4. The Environmental Sustainability Index of 2005 and its division into pressure, state and response components.
The average z-value for an indicator (a group of related variables) is then calculated for each country, and these averages are converted to a more intuitively meaningful statistic ranging from 0 to 100 by calculating the ‘standardized normal percentile’ (SNP). SNPs are averaged over all the indicators to provide the ESI for each country, and the results are then presented in a league-table format akin to both the HDI and CPI. The higher a country appears in the league table, the more environmentally sustainable it is deemed to be.
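Below is a minimal sketch of the standardization and SNP steps as read above: z-scores with the sign flipped for ‘bad’ variables, averaged within an indicator and converted to a 0–100 percentile via the normal distribution. The variables and values are invented, and the real ESI involves further steps such as log-transformation of skewed data and capping of extremes.

```python
# A minimal sketch of z-scoring (sign flipped where high values are 'bad'),
# averaging within an 'indicator', and converting to a 0-100 'standardized
# normal percentile' (SNP). All variables and country values are invented.
from statistics import NormalDist, mean, pstdev

def z_value(country_value, values, high_is_good):
    mu, sigma = mean(values), pstdev(values)
    z = (country_value - mu) / sigma
    return z if high_is_good else -z

# Two made-up variables forming one 'indicator': urban particulate concentration
# (high is bad) and share of protected land (high is good).
particulates = {"A": 60.0, "B": 35.0, "C": 20.0}
protected_land = {"A": 5.0, "B": 12.0, "C": 20.0}

snp = {}
for country in particulates:
    zs = [
        z_value(particulates[country], list(particulates.values()), high_is_good=False),
        z_value(protected_land[country], list(protected_land.values()), high_is_good=True),
    ]
    # Average z across the indicator's variables, then convert to a 0-100 SNP.
    snp[country] = round(100 * NormalDist().cdf(mean(zs)), 1)

print(snp)  # higher SNP = deemed more environmentally sustainable on this indicator
```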
The values of the ESI for 2005 are presented in Figure 4. Unlike the HDI map, and perhaps to a lesser extent the CPI map, this one does seem to go against a priori expectation. Figure 4 appears to suggest that the most environmentally sustainable nations are also the most developed and industrialized – Europe (including Russia), North and South America and Australia. The least environmentally sustainable region spans the Horn of Africa and the oil-rich countries of the Middle East through to Iran, Afghanistan/Pakistan and China. Surely the developed world also has its problems with environmental sustainability? As with the HDI it is possible to ‘unpack’ the ESI, but here the process is not so straightforward. The obvious framework for unpacking the ESI is to consider the PSR components, and Figure 4 also shows the results of this process. The methodology employed is complex – basically a principal component analysis of the ESI z-values once each variable has been classified as pressure, state or response. Once the ESI has been unpacked, the state and pressure components do seem to match expectation. The worst state of the environment and the greatest pressures on the environment are found in the industrialized countries of Europe and North America. The reason why this position is swung around to greater environmental sustainability is the strong performance of the developed world in the ‘response’ category. The response variables dominate within the ESI, a point which has been noted and criticized by others, and hence a country that does well in ‘response’ can mitigate a poor performance under ‘state’ and ‘pressure’.
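For readers curious about the ‘unpacking’ step, the sketch below takes the first principal component of a small, invented matrix of z-scored variables belonging to one PSR class, using a singular value decomposition; this is a generic illustration of principal component analysis rather than the ESI authors’ exact procedure.

```python
# A generic first-principal-component summary of z-scored variables within one
# pressure/state/response class, via numpy's SVD. The matrix (countries x
# variables) is invented; this is not the ESI authors' exact procedure.
import numpy as np

# Rows are countries, columns are z-scored 'response' variables (made up).
z = np.array([
    [ 1.2,  0.9,  1.1],
    [-0.3,  0.1, -0.2],
    [-0.9, -1.0, -0.9],
])

# Centre the columns and take the first right singular vector as the loadings.
centred = z - z.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
first_component = centred @ vt[0]   # each country's score on the first component

print(np.round(first_component, 2))
```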

Pros and cons

Table 1. Comparison of the HDI 2006, CPI 2006 and ESI 2005.

| | HDI | CPI | ESI |
| --- | --- | --- | --- |
| Index assesses | Human development | Corruption (perception of) | Environmental sustainability |
| Creators | UNDP | Transparency International | Global Leaders of Tomorrow: World Economic Forum (WEF) sponsored but created by Yale and Columbia Universities in the USA |
| Type of organization | Governmental: multi-lateral | Non-governmental | WEF is an “independent international organization” |
| Number of components | 3 (education, life expectancy, income) | 12 corruption surveys | 76 variables in 2005, grouped into 22 ‘indicators’ |
| Heterogeneous or homogeneous index | Heterogeneous | Homogeneous | Heterogeneous |
| Methodology | Relatively simple | Complex | Complex |
| Years of data | 2004 for the 2006 HDI | 2005 and 2006 for the 2006 CPI | Quite complex: some are between 2001 and 2004, but many are averages spanning a number of years, even back to the 1960s |
| League-table presentation | Yes | Yes | Yes |
| Publication | Every year since 1990 | Every year since 1995 | 1999 (pilot study), 2000, 2001, 2002 and 2005 |
| Clear and detailed presentation of methodology | Yes (Human Development Reports) | Yes | Yes (ESI Reports) |


A summary of some of the main features of the three indices discussed above is provided as Table 1.
The three indices presented here are by no means the only such examples within development, and no doubt others will have their own particular ‘favorite’ for which they can make a case for wide acceptance. The HDI has spawned a group of other ‘human development’ indices that look, for example, at differences between males and females. However, the HDI, CPI and ESI do receive a wide degree of attention amongst the popular press of many countries and are ‘consumed’ and used by politicians. They can act as good examples of indicators/indices in general.
The advantages of development indicators and indices rest in the reason why they are created in the first place – to simplify complexity. The obvious consumers of such tools are managers, policy makers and politicians, but the tools are also avidly devoured by the popular press, which in turn can influence public opinion and, of course, politicians. As I have done here, it is indeed tempting to look at the maps and identify similarities – causal factors – and differences. Perhaps the CPI is measuring one limitation to the HDI, and the ESI is assessing one dimension of its cost. Perhaps we have a set of pictures that don’t just tell us what the world is but also help explain why it is the way it is. But indicators and indices also have their problems, and no article on this topic is complete unless these are also aired.
First it has to be stressed that any indicator/index is only as good as the data upon which it is built. Data sets can be of poor quality as well as good quality, and there may well be gaps. In the ESI these gaps are covered by creating data based upon a statistical technique (regression), and gaps are also covered in the HDI by making assumptions as to where a country ‘ought’ to be in terms of the three variables. There is also the hiding of intra-country variation to consider. These may be differences between urban and rural populations, for example, or between different regions. Some variables may also change dramatically during the year – air pollution, for example. The end result is a single value of an indicator/index covering all the spatial and temporal variation within that country over a year (or more).
Related to this issue of data availability is the ‘it’s all out of date’ excuse that can be used by those whom the indicators/indices are meant to influence. There is inevitably a delay between data availability and the generation of the indicators/indices. The data have to be checked, manipulated and presented as part of a report, and this can result in a delay of one year (CPI), two years (HDI) or more. This allows the classic get-out clause of claiming that the index may indeed present a bad picture for a country but ‘it’s by now out of date and necessary changes have already been made which have brought about improvements’. Whether such changes have, or have not, been made is a matter of conjecture and will no doubt be reflected in future presentations of the indicator/index. After all, this excuse may have credence for a short while, but the HDI and CPI are published every year and the ESI every couple of years.
Besides these more obvious issues are others which are perhaps not so obvious. An indicator/index is a product of the person(s) who created it. This is obvious when stated but the ramification is that there is potential for human bias.
Thus the ESI has been criticized for its emphasis on response indicators as distinct from state and pressure. The richer countries tend to do well in the response category but not so well in state and pressure. A similar criticism can, of course, be leveled at the HDI with its dependence on just three components, and at the CPI for its reliance on perspectives from residents of the richer north.
The potential for bias interacts with another problem of indicators/indices, one that can also be seen as a strength – much depends on who is doing the looking. They are quantitative tools, and even the simplest of the three presented here, the HDI, has a technical and somewhat mechanical feel. The methodology for all three of them is complex, to varying degrees, with even the HDI requiring steps of ‘standardization’ and ‘transformation’. The ESI of 2005 has 76 components, not just three, and the ways in which these are manipulated and combined are even more complex. The CPI is arguably the most complex of the three given that it uses ‘ranks’ from 12 sources. Will ‘users’ or ‘consumers’ of the tools make the effort to understand how they are created? It is likely that most will not, and instead the tools will be treated as ‘black boxes’ – I don’t need to know the technical detail, just the league-table ranking. But will ‘consumers’ be aware of the assumptions that have been made? There is a dangerous element of technical dependency here; an assumption that those selecting/creating the indicators/indices have been ‘fair’ and ‘true’ in the decisions they have made. While, in fairness, the creators of the indices do go to great lengths to present their raw data and methodologies clearly, these can be rather involved and inaccessible to a non-specialist.
Finally, the process of simplification, while appealing, can be dangerous. Bringing human development down to just three components can generate a sense that it is only these three that matter. The ESI encompasses many variables, but even more are left out. Does their non-inclusion imply that they are unimportant? Should a government aim to push its country up the HDI league table by addressing the three variables, or should it try to do what is ‘best’ for its population irrespective of its league-table ranking? Often, of course, these are identical interests, but they may not necessarily be so.

Summary

Development indicators and indices are in vogue; they are popular amongst people with busy lives as a way of condensing complexity into ‘snapshots’ that can be digested and appreciated. This popularity is unlikely to diminish; indeed the opposite will likely be the case. But care does need to be taken, as the creators and promoters of these tools carry a great responsibility.
