Information House

Monday 11 June 2012

COMMON SKILLS OF AN ENTREPRENEUR

ENTREPRENEURIAL SKILLS AND CHARACTERISTICS

You've heard it said that certain careers require certain attitudes in order to succeed. Policemen must possess good physical fitness and courage in the face of danger. Graphic designers must be creative, original thinkers with an artistic mind. Entrepreneurs, too, share several common characteristics that those who hope to achieve success in the field should strive to emulate. Today we explore seven crucial characteristics of the best entrepreneurs that anyone thinking about starting a business ought to consider.

Perseverance

In the business world, problems and complications are a part of everyday life. Especially in the early days of a company, entrepreneurs can be said to be climbing a wall of opposition as they struggle to manage the many goals, tasks and constantly evolving problems of the new organization. Furthermore, many entrepreneurs fail repeatedly when trying to start a small business, and the ones who succeed are the ones who can persevere through failure and try again. Steve Jobs, co-founder of Apple, was publicly thrown out of his own company at age thirty. Fighting thoughts of suicide, Jobs regained his composure and went on to build two other very successful companies, NeXT and Pixar. Later, Jobs was recruited to come back to Apple and rescue the company from the brink of extinction. His efforts demonstrate perseverance in entrepreneurship better than perhaps anyone else's.

Task management

The ability to see an end goal, break it down into many tasks, and see those tasks through to completion is a crucial skill of successful entrepreneurs. It is a skill many of us learned in high school and college. When you had a big research paper due, you knew you had to research the topic, find time to write the paper, compile a bibliography, proofread it, and turn it in on time, all the while also working a job and managing work from other classes.
Similarly, you may come into your office one day to find that not only must you prepare a presentation for a big name contract, but must also move everything out of the main office in preparation for a new hardwood floor installation, while also making sure that the design guys finish their website template on time. Rarely will these things work out without any issue, and that is when you must manage the tasks and figure out how to make them all happen.

Courage

It is typical for many people to discourage you in the early days of your new project. This can be even more painful when your first few projects either fail or achieve very little success. Disparaging remarks can be hurtful to the morale of a new entrepreneur trying to get his or her new project off the ground. It is then that you must possess the courage to tune out those who do not share your vision, ignoring the nay-sayers and pressing forward on nothing more than the strength of your own determination.
It takes true courage to reject social pressure and to try to build something truly outstanding, but this is a trait that all successful entrepreneurs must have in order to survive in the business world. When Larry Page and Sergey Brin were beginning work on Google, they made the decision to max out their credit cards and drain their bank accounts in order to fund the nascent project. Such a decision surely must have attracted some negativity from peers and family, but the two showed true courage in seeing the project through to the billion-dollar empire it is today.

Social Skills

Rare is the entrepreneur who achieves huge success completely on his own. Even Andrey Ternovskiy, the 17-year-old founder of the wildly popular yet incredibly simple ChatRoulette website, had to take investment money from his family to get the project rolling. Whether you need help with design, programming, finance, or investment, if you plan on going into business, you must possess the ability to approach other people and convince them of the worth of your idea. More than just a friendly attitude, you must somehow find a way to bend their vision to match your own so that they come to see the brilliance of your work in the same way you do. This is by no means an easy skill to master, but it is one that will help you greatly as you struggle to put your new company together.

Negotiation Skills

There is a saying among business savvy people that “everything is negotiable.” While this might not be exactly true, it does illustrate that most people in business will try to negotiate with you as you strive to put deals together. Unless you possess solid negotiation skills, you might end up getting the short end of the stick. For example, if you hope to put together a deal where a skilled programmer works on your project without pay in exchange for equity, you might only want to give up 5%, but he may be thinking more along the lines of 15%. Unless you possess a strong will and an ability to negotiate in a time of need, you’ll end up giving away more of the company than you had hoped.

Internal Motivation

A key difference between running a business and working a job is that no one will be pressuring you to complete your tasks when you run your own business. Instead, you must be completely self-motivated to do everything it takes to get the work done. In the beginning, you may be working a day job or attending college classes at the same time. This means that after taking care of the work you owe to those obligations, you must still find enough energy and determination to sit down and put in long hours for your business.
If you are the sort of person who needs a boss hovering over your shoulder in order to remain on task, or who feels you work best inside the known structure of a job, entrepreneurship will not be an easy life path for you. When Facebook founder Mark Zuckerberg began work on the project, he was attending classes at the prestigious Harvard University. By finding the drive not only to juggle demanding course work but also to develop a groundbreaking social platform, Zuckerberg demonstrated rock-solid internal motivation.

Opportunism

Sometimes the best ideas are ones that are already showing huge success. As an example, when Jack Dorsey launched Twitter in 2006, Facebook’s status feature was already in use and gaining popularity. No doubt seeing the opportunity in the feature, Dorsey and his team made it the central feature of Twitter. This little social web application has skyrocketed in popularity, being regularly used by web surfers, celebrities, and even companies for marketing purposes. Twitter was recently valued at $1 billion after raising close to $100 million in venture funding. The lesson to be learned is clear: if you give the public what it wants, you stand a great shot at success.

Friday 8 June 2012

WHAT THE COMPUTER CANNOT DO

Hubert Dreyfus has been a critic of artificial intelligence research since the 1960s. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do (1972; 1979; 1992) and Mind over Machine (1986), he presented an assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2003), the standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.[1]
Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills could never be captured in formal rules. His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research which used high level formal symbols to represent reality and tried to reduce intelligence to symbol manipulation.
When Dreyfus' ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility.[2][3] By the 1980s, however, many of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches now called "sub-symbolic" because they eschew early AI research's emphasis on high level symbols. Historian and AI researcher Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments."[4] Dreyfus said in 2007 "I figure I won and it's over—they've given up."[5]

Dreyfus' critique

The grandiose promises of artificial intelligence

In Alchemy and AI (1965) and What Computers Can't Do (1972), Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver (1957), predicted that by 1967:[6]
  1. A computer would be world champion in chess.
  2. A computer would discover and prove an important new mathematical theorem.
  3. Most theories in psychology would take the form of computer programs.
The press reported these predictions in glowing reports of the imminent arrival of machine intelligence.
Dreyfus felt that this optimism was totally unwarranted. He believed that these predictions were based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus' position:
[A] great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.[7]
These predictions were based on the success of an "information processing" model of the mind, articulated by Newell and Simon in their physical symbol systems hypothesis, and later expanded into a philosophical position known as computationalism by philosophers such as Jerry Fodor and Hilary Putnam.[8] Believing that they had successfully simulated the essential process of human thought with simple programs, researchers assumed it was only a short step to producing fully intelligent machines. However, Dreyfus argued that philosophy, especially 20th century philosophy, had discovered serious problems with this information processing viewpoint. The mind, according to modern philosophy, is nothing like a computer.[7]

Dreyfus' four assumptions of artificial intelligence research

In Alchemy and AI and What Computers Can't Do, Dreyfus identified four philosophical assumptions that supported the faith of early AI researchers that human intelligence depended on the manipulation of symbols.[9] "In each case," Dreyfus writes, "the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work."[10]
The biological assumption
The brain processes information in discrete operations by way of some biological equivalent of on/off switches.
In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to the way Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron.[11] When digital computers became widely used in the early 1950s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components.[12] To be fair, however, Daniel Crevier observes that "few still held that belief in the early 1970s, and nobody argued against Dreyfus" about the biological assumption.[13]
The psychological assumption
The mind can be viewed as a device operating on bits of information according to formal rules.
He refuted this assumption by showing that much of what we "know" about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious background of commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus' view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings.
The epistemological assumption
All knowledge can be formalized.
This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder John McCarthy has) that it was possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represented knowledge the same way. Dreyfus argued that there was no justification for this assumption, since so much of human knowledge was not symbolic.
The ontological assumption
The world consists of independent facts that can be represented by independent symbols.
Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The question of what exists is called ontology, and so Dreyfus calls this "the ontological assumption". If this is false, then it raises doubts about what we can ultimately know and about what intelligent machines will ultimately be able to help us do.

The primacy of background coping skills

In Mind Over Machine (1986), written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can't Do, where he had made a similar argument criticizing the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s.
Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between "knowing-that" and "knowing-how", based on Heidegger's distinction of present-at-hand and ready-to-hand.[14]
Knowing-that is our conscious, step-by-step problem solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls "knowing-that."
Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply "size up the situation" and react.
Our sense of the situation is based, Dreyfus argues, on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge about the world. This “context” or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively in some way. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our "fringe consciousness" (borrowing a phrase from William James): the millions of things we're aware of, but we're not really thinking about right now.
Dreyfus claimed that he could see no way that AI programs, as they were implemented in the 70s and 80s, could capture this background or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in "tree climbing with one's eyes on the moon."[15]

History

Dreyfus began to formulate his critique in the early 1960s while he was a professor at MIT, then a hotbed of artificial intelligence research. His first publication on the subject is a half-page objection to a talk given by Herbert A. Simon in the spring of 1961.[16] Dreyfus was especially bothered, as a philosopher, that AI researchers seemed to believe they were on the verge of solving many long standing philosophical problems within a few years, using computers.

Alchemy and AI

In 1965, Dreyfus was hired (with his brother Stuart Dreyfus' help) by Paul Armer to spend the summer at RAND Corporation's Santa Monica facility, where he would write Alchemy and AI, the first salvo of his attack. Armer had thought he was hiring an impartial critic and was surprised when Dreyfus produced a scathing paper intended to demolish the foundations of the field. (Armer stated he was unaware of Dreyfus' previous publication.) Armer delayed publishing it, but ultimately realized that "just because it came to a conclusion you didn't like was no reason not to publish it."[17] It finally came out as a RAND memo and soon became a best seller.[18]
The paper flatly ridiculed AI research, comparing it to alchemy: a misguided attempt to change metals to gold based on a theoretical foundation that was no more than mythology and wishful thinking.[19] It ridiculed the grandiose predictions of leading AI researchers, predicting that there were limits beyond which AI would not progress and intimating that those limits would be reached soon.[20]

Reaction

The paper "caused an uproar", according to Pamela McCorduck.[21] The AI community's response was derisive and personal. Seymour Papert dismissed one third of the paper as "gossip" and claimed that every quotation was deliberately taken out of context.[22] Herbert A. Simon accused Dreyfus of playing "politics" so that he could attach the prestigious RAND name to his ideas. Simon says "what I resent about this was the RAND name attached to that garbage".[23]
Dreyfus, who taught at MIT, remembers that his colleagues working in AI "dared not be seen having lunch with me."[24] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he recalls "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being."[25]
The paper was the subject of a short piece in The New Yorker magazine on June 11, 1966. The piece mentioned Dreyfus' contention that, while computers may be able to play checkers, no computer could yet play a decent game of chess. It reported with wry humor (as Dreyfus had) on the victory of a ten-year-old over the leading chess program, "with even more than its usual smugness."[20]
In hopes of regaining AI's reputation, Seymour Papert arranged a chess match between Dreyfus and Richard Greenblatt's Mac Hack program. Dreyfus lost, much to Papert's satisfaction.[26] An Association for Computing Machinery bulletin[27] used the headline:
"A Ten Year Old Can Beat the Machine— Dreyfus: But the Machine Can Beat Dreyfus"[28]
Dreyfus complained in print that he hadn't said a computer will never play chess, to which Herbert A. Simon replied: "You should recognize that some of those who are bitten by your sharp-toothed prose are likely, in their human weakness, to bite back ... may I be so bold as to suggest that you could well begin the cooling---a recovery of your sense of humor being a good first step."[29]

Vindicated

By the early 1990s several of Dreyfus' radical opinions had become mainstream.
Failed predictions. As Dreyfus had foreseen, the grandiose predictions of early AI researchers failed to come true. Fully intelligent machines (now known as "strong AI") did not appear in the mid-1970s as predicted. HAL 9000 (whose capabilities for natural language, perception and problem solving were based on the advice and opinions of Marvin Minsky) did not appear in the year 2001. "AI researchers", writes Nicholas Fearn, "clearly have some explaining to do."[30] Today researchers are far more reluctant to make the kind of predictions that were made in the early days. (Although some futurists, such as Ray Kurzweil, are still given to the same kind of optimism.)
The biological assumption, although common in the forties and early fifties, was no longer assumed by most AI researchers by the time Dreyfus published What Computers Can't Do.[13] Although many still argue that it is essential to reverse-engineer the brain by simulating the action of neurons (such as Ray Kurzweil[31] or Jeff Hawkins[32]), they don't assume that neurons are essentially digital, but rather that the action of analog neurons can be simulated by digital machines to a reasonable level of accuracy.[31] (Alan Turing had made this same observation as early as 1950.)[33]
The psychological assumption and unconscious skills. Many AI researchers have come to agree that human reasoning does not consist primarily of high-level symbol manipulation. In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high level symbol manipulation or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Daniel Crevier writes that by 1993, unlike 1965, AI researchers no longer made the psychological assumption,[13] and had continued forward without it. These new "sub-symbolic" approaches include:
  • Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on, are mostly directed at simulating unconscious reasoning. Dreyfus himself agrees that these sub-symbolic methods can capture the kind of "tendencies" and "attitudes" that he considers essential for intelligence and expertise.[34]
  • Research into commonsense knowledge has focussed on reproducing the "background" or context of knowledge.
  • Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec's paradox.) Brooks would spearhead a movement in the late 80s that took direct aim at the use of high-level symbols, called Nouvelle AI. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[35]

Ignored

Although clearly AI research has come to agree with Dreyfus, McCorduck writes that "my impression is that this progress has taken place piecemeal and in response to tough given problems, and owes nothing to Dreyfus."[36]
The AI community, with a few exceptions, chose not to respond to Dreyfus directly. "He's too silly to take seriously" a researcher told Pamela McCorduck.[29] Marvin Minsky said of Dreyfus (and the other critiques coming from philosophy) that "they misunderstand, and should be ignored."[37] When Dreyfus expanded Alchemy and AI to book length and published it as What Computers Can't Do in 1972, no one from the AI community chose to respond (with the exception of a few critical reviews). McCorduck asks "If Dreyfus is so wrong-headed, why haven't the artificial intelligence people made more effort to contradict him?"[29]
Part of the problem was the kind of philosophy that Dreyfus used in his critique. Dreyfus was an expert in modern European philosophers (like Heidegger and Merleau-Ponty), as Pamela McCorduck points out.[38] AI researchers of the 1960s, by contrast, based their understanding of the human mind on engineering principles and efficient problem solving techniques related to management science. On a fundamental level, they spoke a different language. Edward Feigenbaum complained "What does he offer us? Phenomenology! That ball of fluff. That cotton candy!"[39] In 1965, there was simply too huge a gap between European philosophy and artificial intelligence, a gap that has since been filled by cognitive science, connectionism and robotics research. It would take many years before artificial intelligence researchers were able to address the issues that were important to continental philosophy, such as situatedness, embodiment, perception and gestalt.
Another problem was that he claimed (or seemed to claim) that AI would never be able to capture the human ability to understand context, situation or purpose in the form of rules. But (as Peter Norvig and Stuart Russell would later explain), an argument of this form cannot be won: just because one cannot imagine the rules, this does not mean that no such rules exist. They quote Alan Turing's answer to all arguments similar to Dreyfus':
"we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[40][41]
Dreyfus did not anticipate that AI researchers would realize their mistake and begin to work towards new solutions, moving away from the symbolic methods that Dreyfus criticized. In 1965, he did not imagine that such programs would one day be created, so he claimed AI was impossible. In 1965, AI researchers did not imagine that such programs were necessary, so they claimed AI was almost complete. Both were wrong.
A more serious issue was the impression that Dreyfus' critique was incorrigibly hostile. McCorduck writes "His derisiveness has been so provoking that he has estranged anyone he might have enlightened. And that's a pity."[36] Daniel Crevier writes that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[4]

Friday 1 June 2012

DEVELOPMENT INDICES AND INDICATORS


DEVELOPMENT INDICES AND INDICATORS

Introduction

The concept that complexity can be succinctly summarized in a single statement, picture or measure is indeed an old one. The world is a complex place and human beings have always sought ways of interpreting what they sense around them so as to help deal with that complexity.
Given the pressing need to address human suffering and poverty wherever they are found, the appeal of ways to present this complexity that help guide an intervention is highly understandable. Development indicators and indices (an index being an aggregation of indicators into a single representation) seek to do just that – to simplify so as to manage.
Indicators have largely been quantitative rather than qualitative, virtually by definition. That is not to say that qualitative (non-numerical) indicators are unimportant. Indeed, by far the majority of indicators we use on a day-to-day basis are qualitative – a sense of a street being ‘dirty’, of traffic being ‘heavy’ or of feeling unsafe walking in a particular neighborhood. These ‘feelings’ are based on what we hear and see, interpreted through what we have learned from our own experience or from that of others. Quantitative indicators can have a feel of being mechanical, technical and complicated, and this may be off-putting for some. Yet ironically they capture the same kind of sense-making that each of us performs every day of our lives.
This article will explore indicators and indices in development by dissecting three widely publicized examples. The examples have not been chosen because they are necessarily the ‘best’ (whatever that may mean) but because they are widely reported and follow a simple causal chain:
Figure 1. Hypothesised causal chain with three development indices.
Admittedly there is over-simplification in these cause-effect assumptions. Corruption is not the only limiting factor within good governance and neither is good governance the only limitation to achieving human development. Similarly, an increase in environmental degradation isn’t the only potential ‘cost’ to achieving good human development. But it can at least be tentatively assumed a priori that the three indices could have a relationship. This and their popularity make them good examples of their genre.
All three of the indices range in value from 0 (bad) to 1 (excellent), but rather than give the numerical values of the indices as tables, the results are presented more qualitatively as color maps. In these maps, values towards zero are represented as ‘red’ (‘bad’), values of 0.5 (midpoint) are ‘yellow’ and values of ‘1’ (‘good’) are dark blue.
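To make the color scale concrete, here is a minimal Python sketch that maps an index value in the 0 to 1 range onto an illustrative red, yellow and dark blue scale. The RGB endpoints are assumptions for illustration only; the published maps use their own palettes.

    def index_to_rgb(value):
        """Map an index value in [0, 1] to an illustrative red-yellow-blue color."""
        # 0.0 -> red ('bad'), 0.5 -> yellow (midpoint), 1.0 -> dark blue ('good').
        red, yellow, dark_blue = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 0.5)
        value = max(0.0, min(1.0, value))        # clamp to the index range
        if value <= 0.5:                         # interpolate red -> yellow
            t, lo, hi = value / 0.5, red, yellow
        else:                                    # interpolate yellow -> dark blue
            t, lo, hi = (value - 0.5) / 0.5, yellow, dark_blue
        return tuple(a + t * (b - a) for a, b in zip(lo, hi))

    print(index_to_rgb(0.25))   # a shade between red and yellow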

Human Development Index

The Human Development Index (HDI) is a creation of the United Nations Development Programme and represents the practical embodiment of their vision of human development as an alternative vision to what they perceive as the dominance of economic indicators in development. Economic development had the gross domestic product (GDP) so human development had to have the HDI. In essence the HDI represents a measure of the ‘quality of life’.
Since its first appearance in 1990 the HDI has comprised three components:
  1. life expectancy (a proxy indicator for health care and living conditions).
  2. adult literacy combined with years of schooling or enrollment in primary, secondary and tertiary education.
  3. real GDP/capita ($ PPP; a proxy indicator for disposable income).
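To give a feel for how these three components are turned into a single number, here is a minimal Python sketch in the spirit of the HDI methodology of that era: each component is rescaled between fixed goalposts and the three dimension indices are averaged. The goalposts and weights shown (25–85 years for life expectancy, $100–$40,000 PPP for income, a 2/3–1/3 split between literacy and enrolment) follow the pre-2010 method and should be checked against the relevant Human Development Report rather than treated as definitive.

    import math

    # Illustrative sketch of the pre-2010 style HDI aggregation: rescale each
    # component between fixed goalposts, then average the three dimension
    # indices. Goalposts and weights are assumptions based on that era's
    # methodology.

    def dimension_index(value, minimum, maximum):
        """Rescale a raw value to a 0-1 dimension index."""
        return (value - minimum) / (maximum - minimum)

    def hdi(life_expectancy, adult_literacy, gross_enrolment, gdp_per_capita_ppp):
        life_index = dimension_index(life_expectancy, 25.0, 85.0)
        education_index = (2.0 / 3.0) * adult_literacy + (1.0 / 3.0) * gross_enrolment
        gdp_index = dimension_index(math.log(gdp_per_capita_ppp),
                                    math.log(100.0), math.log(40000.0))
        return (life_index + education_index + gdp_index) / 3.0

    # Hypothetical country: 68 years life expectancy, 85% adult literacy,
    # 70% combined enrolment, $5,000 GDP per capita (PPP).
    print(round(hdi(68.0, 0.85, 0.70, 5000.0), 3))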
There is typically a two-year time lag in the data – the HDI for 2006 uses data from 2004 for example – and gaps are filled in various ways, typically by making assumptions based upon data available for assumed ‘peers’.
Figure 2. The Human Development Index of 2006 and its three components.
The choice of these three components for the HDI is not surprising, and they can be found in many lists of development indicators. It can certainly be argued that the selection of only three components for human development is problematic. Income inequality, for example, is not included alongside GDP/capita and neither are there any elements of ‘consumption’. The UNDP have argued that these three can act as proxy indicators for many others. For example, provision of a clean water supply and/or adequate nutrition would be reflected in life expectancy. Indeed, given that the UNDP wanted an index that was relatively transparent and simple to understand it is also not surprising that they decided to include only three components.
As a key part of this strategy the UNDP decided to present the HDI in a country ‘league-table’ format, with labels of ‘high’, ‘medium’ or ‘low’ human development applied depending upon each country’s value for the HDI. Both these devices – league-table presentation and ‘labeling’ – promote a sense of ‘name and shame’ and comparison of performance across peers. Rather than duplicate any of the HDI league tables here, a global map of the values of the HDI published in 2006, ranging from 0 to 1, is presented as Figure 2.
Sadly, and perhaps unsurprisingly, large swathes of Africa have low values for the HDI (orange and yellow), implying that the level of human development for much of the continent is poor. The preponderance of dark green and blue across the globe (higher values of the HDI) paints a more positive picture, but it is still Africa which stands out. That gives only the overall picture, though; how does it break down in terms of the three components of the HDI? The story is not all that different whichever component is looked at. The three ‘bits’ of the HDI are also presented in Figure 2. The GDP/capita (income) component almost exactly mirrors the coloring of the HDI. The two other components – life expectancy and education – do show some nuanced differences. Life expectancy is particularly poor in the southern African countries, a reflection of the preponderance of HIV/AIDS, while education is especially poor in West Africa. However, looking at the maps for the three components it is easy to see how they merge into the map of the HDI. Neither is it hard to appreciate how the three components may be related – higher income per capita could mean greater expenditure on education and health care, for example. In that sense, even though the three components are quite different (a heterogeneous index), the HDI does have an internal consistency.

Corruption Perceptions Index (CPI)

It is often reiterated that one of the necessary drivers to bring about human development is good governance, and controlling corruption is an important element of this. The assumptions are straightforward. Corruption can result in resources being diverted from the public good to private consumption with the result that impacts intended to be of wider benefit are lost. Corruption may also drive up the costs of doing business with the result that investment is deterred and economic growth will suffer. But the very nature of corruption makes it difficult to gauge. After all, those benefiting from corruption are unlikely to say so and openly declare how much they receive. Payers may be less reticent to talk about the extent of corruption as they are one of the losers, but there may be a danger of them exaggerating their problems and evidence may become somewhat anecdotal.
The Corruption Perceptions Index (CPI), created by the Berlin-based Transparency International (TI; a non-governmental organization) and first released in 1995, has been designed to provide a more systematic snapshot of corruption in the same way that the HDI provides a snapshot of human development. Like the HDI it combines a number of different ‘indicators’ into one, but unlike the HDI the indicators which are combined all measure corruption. Whereas the HDI has three quite different components (a heterogeneous index), the CPI is a homogeneous index in the sense that all the components upon which it is based seek to measure the same thing.
Like the HDI, the CPI is based on data collected over a number of years prior to release of the index. While the HDI 2006 is based on data from 2004 the CPI for 2006 uses 12 surveys and expert assessments from 2005 and 2006 with at least three of them being required for a country to be included in the CPI. The surveys evaluate the extent of corruption as perceived by country experts, non-residents and residents (not necessarily nationals) of the countries included, and are:
  • Country Policy and Institutional Assessment by the IDA and IBRD (World Bank), 2005
  • Economist Intelligence Unit, 2006.
  • Freedom House Nations in Transit, 2006.
  • International Institute for Management Development, Lausanne, 2005 and 2006.
  • Grey Area Dynamics Ratings by the Merchant International Group, 2006.
  • Political and Economic Risk Consultancy, Hong Kong (2005 and 2006).
  • United Nations Economic Commission for Africa, African Governance Report, 2005.
  • World Economic Forum, 2005 and 2006.
  • World Markets Research Centre, 2006.
Figure 3. The Corruption Perceptions Index of 2006. High values (blue) indicate ‘good’ while low values (red) indicate ‘bad’.
Thus the CPI is based, at least in part, upon judgments made by non-residents and non-nationals of the countries. Each of these sources has a league-table ranking of countries akin to that of the HDI, and it is the ranks which are combined through various complex steps including standardization via ‘matching percentiles’ and ‘beta-transformation’ to generate the CPI.
The global map of the CPI for 2006 is presented as Figure 3. The picture bears a resemblance to that of the HDI, and indeed it is tempting to relate this in simplistic cause-effect terms to Figure 2. In both sets of maps the areas with the greatest levels of corruption (Africa, Asia and Latin America) also tend to do less well in terms of the HDI and its components. Particular ‘hot spots’ of corruption can be found in West Africa, Kazakhstan and Myanmar (Burma). Maybe this supports the initial assumption that countries do badly in human development because they also do badly in terms of corruption? This is, of course, a simplistic argument based on tools designed to simplify, and there will be exceptions to the rule. But the temptation to make simplistic arguments based upon these tools is understandable; a point which will be returned to later.

Environmental Sustainability Index (ESI)

It is possible that a country can achieve a high HDI at the cost of environmental degradation. It has indeed been shown that ‘development’, be it measured in terms of economic wealth or more broadly in terms of quality of life, comes at a cost to environmental quality. The third index discussed here is the Environmental Sustainability Index (ESI). The ESI is backed by the powerful World Economic Forum (WEF) in collaboration with Yale and Columbia Universities in the USA. This partnership refers to itself as the ‘Global Leaders of Tomorrow’, or simply ‘Global Leaders’. Values of the ESI have been published for 1999 (pilot study), 2000, 2001, 2002 and 2005. Unlike the creators of the HDI and CPI, the creators of the ESI state that they see no need to publish the results on a yearly basis as there may be little change to report.
The ESI methodology to arrive at a value for a country is somewhat complex (much more so than that of the HDI) and as with the CPI the precise details do not need to be repeated here. The 2005 version of the ESI covers 146 countries. The process begins with the assimilation of raw data sets for 76 ‘environmental’ variables which are aggregated into 22 ‘indicators’ and finally into the ESI. The terminology is admittedly somewhat confusing as the ‘variables’ of the ESI are often reported as ‘indicators’ elsewhere in the literature. Thus strictly speaking aggregation results in 22 indices (or sub-indices or partial-indices if preferred) which are then combined into the ESI.
The data sets cover a diverse range of variables such as ambient pollution and emissions of pollutants through to impacts on human health and being a signatory to international agreements. Included amongst them are a measure of corruption (although not the CPI) and measures which relate to human life span (e.g. child mortality rate) and education (enrollment and completion rates) so there is some overlap with the HDI. Many of the variables date from the year 2000 on, but some are based on earlier data. The ESI variables are loosely grouped into the pressure-state-response (PSR) framework which has proved to be popular for sustainability indicators. The variables are checked for their distribution across all the nations included in the sample, but there is some tolerance of gaps. Gaps are filled by a process of regression which predicts what the missing values would be based upon associations with other variables.
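As an illustration of this kind of gap-filling by regression (not the ESI's actual imputation procedure), here is a small Python sketch on hypothetical data: a country's missing value for one variable is predicted from an ordinary least squares fit estimated on the countries where that variable is observed.

    import numpy as np

    # Sketch of filling a data gap by regression, using hypothetical data.
    # The missing value of one variable is predicted from two related
    # variables via ordinary least squares fitted on the complete cases.

    rng = np.random.default_rng(0)
    predictors = rng.normal(size=(20, 2))                       # two observed variables
    target = (1.5 * predictors[:, 0] - 0.8 * predictors[:, 1]
              + rng.normal(scale=0.1, size=20))                 # variable with a gap
    target[3] = np.nan                                          # one country is missing

    observed = ~np.isnan(target)
    X = np.column_stack([np.ones(observed.sum()), predictors[observed]])
    coef, *_ = np.linalg.lstsq(X, target[observed], rcond=None)

    # Predict the missing value from that country's other variables.
    target[3] = np.concatenate([[1.0], predictors[3]]) @ coef
    print(round(float(target[3]), 3))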
If the data have a highly skewed distribution, the skewness is lessened by taking logarithms. Extreme values (high and low) are also capped by using percentiles. As the variables all have different units of measurement, they are standardized by subtracting the mean, or subtracting from the mean (depending upon whether high values of the variable are regarded as ‘good’ or ‘bad’ for environmental sustainability), and dividing by the standard deviation. If higher values (e.g. biodiversity) are deemed to be good for sustainability:
z\text{-value} = \frac{\text{country value} - \text{mean}}{\text{standard deviation}}
If high values are deemed to be bad for sustainability (e.g. emissions of pollutants):
z\text{-value} = \frac{\text{mean} - \text{country value}}{\text{standard deviation}}
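Putting these preprocessing steps together, the following Python sketch applies a log transform, percentile capping and direction-aware standardization to hypothetical data; the capping percentiles and the data are assumptions for illustration, not the ESI's exact choices.

    import numpy as np

    # Sketch of the standardization steps described above, on hypothetical
    # data: optional log transform for skewed variables, capping of extreme
    # values at chosen percentiles, and z-scores whose sign depends on
    # whether high values are 'good' or 'bad' for sustainability.

    def standardize(values, higher_is_good=True, log_transform=False,
                    cap_percentiles=(2.5, 97.5)):
        x = np.asarray(values, dtype=float)
        if log_transform:                            # lessen skewness
            x = np.log(x)
        low, high = np.percentile(x, cap_percentiles)
        x = np.clip(x, low, high)                    # cap extreme values
        z = (x - x.mean()) / x.std()
        return z if higher_is_good else -z           # flip sign when high is 'bad'

    # Hypothetical, strongly skewed emissions data (high values are 'bad').
    emissions = [0.5, 1.2, 2.0, 3.5, 8.0, 40.0]
    print(np.round(standardize(emissions, higher_is_good=False, log_transform=True), 2))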
Figure 4. The Environmental Sustainability Index of 2005 and its division into pressure, state and response components.
The average z-value for an indicator (a group of related variables) is then calculated for each country, and these averages are converted to a more intuitively meaningful statistic ranging from 0 to 100 by calculating the ‘standardized normal percentile’ (SNP). SNPs are averaged over all the indicators to provide the ESI for each country, and these are then presented in a league table format akin to both the HDI and CPI. The higher a country appears in the league table, the more environmentally sustainable it is deemed to be.
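One plausible reading of the SNP step is that it is the standard normal cumulative probability of the average z-value, expressed on a 0 to 100 scale; the sketch below makes that assumption and should not be taken as the ESI's exact recipe.

    import math

    # Sketch of converting indicator-level average z-values to 0-100
    # 'standardized normal percentiles' (SNPs) and averaging them into an
    # overall score. Using the standard normal cumulative probability here
    # is an assumption for illustration.

    def snp(z):
        """Standard normal cumulative probability of z, on a 0-100 scale."""
        return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def overall_score(indicator_z_values):
        """Average the indicator-level SNPs to give a country's score."""
        snps = [snp(z) for z in indicator_z_values]
        return sum(snps) / len(snps)

    # Hypothetical country with average z-values for five indicators.
    print(round(overall_score([-0.4, 0.1, 0.9, -1.2, 0.3]), 1))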
The values of the ESI for 2005 are presented in Figure 4. Unlike the HDI map, and perhaps to a lesser extent the CPI map, this one does seem to go against an a priori expectation. Figure 4 appears to suggest that the most environmentally sustainable nations are also the most developed and industrialized – Europe (including Russia), North and South America and Australia. The least environmentally sustainable region spans the Horn of Africa and the oil-rich countries of the Middle East through to Iran, Afghanistan/Pakistan and China. Surely the developed world also has its problems with environmental sustainability? As with the HDI it is possible to ‘unpack’ the ESI, but here the process is not so straightforward. The obvious framework for unpacking the ESI is to consider the PSR components, and Figure 4 also shows the results of this process. The methodology employed is complex – basically a principal component analysis of the ESI z-values once each variable has been classified as pressure, state or response. Once the ESI has been unpacked, the state and pressure components do seem to match expectation. The worst state of the environment and the greatest pressures on the environment are found in the industrialized countries of Europe and North America. The reason why the overall position swings around to greater environmental sustainability is the strong performance of the developed world in the ‘response’ category. The response variables dominate within the ESI, a point which has been noted and criticized by others, and hence a country that does well in ‘response’ can mitigate a poor performance under ‘state’ and ‘pressure’.

Pros and cons

Table 1. Comparison of the HDI 2006, CPI 2006 and ESI 2005.

HDI
  • Index assesses: human development
  • Creator: UNDP
  • Type of organization: governmental (multi-lateral)
  • Number of components: 3 (education, life expectancy, income)
  • Heterogeneous or homogeneous index: heterogeneous
  • Methodology: relatively simple
  • Years of data: 2004 for the 2006 HDI
  • League table presentation: yes
  • Publication: every year since 1990
  • Clear and detailed presentation of methodology: yes (Human Development Reports)

CPI
  • Index assesses: corruption (perception of)
  • Creator: Transparency International
  • Type of organization: non-governmental
  • Number of components: 12 corruption surveys
  • Heterogeneous or homogeneous index: homogeneous
  • Methodology: complex
  • Years of data: 2005 and 2006 for the 2006 CPI
  • League table presentation: yes
  • Publication: every year since 1995
  • Clear and detailed presentation of methodology: yes

ESI
  • Index assesses: environmental sustainability
  • Creators: the ‘Global Leaders of Tomorrow’ – sponsored by the World Economic Forum (WEF) but created by Yale and Columbia Universities in the USA
  • Type of organization: the WEF is an “independent international organization”
  • Number of components: 76 variables in 2005, grouped into 22 ‘indicators’
  • Heterogeneous or homogeneous index: heterogeneous
  • Methodology: complex
  • Years of data: quite complex; some are between 2001 and 2004 but many are averages spanning a number of years, even back to the 1960s
  • League table presentation: yes
  • Publication: 1999 (pilot study), 2000, 2001, 2002 and 2005
  • Clear and detailed presentation of methodology: yes (ESI Reports)


A summary of some of the main features of the three indices discussed above is provided as Table 1.
The three indices presented here are by no means the only such examples within development, and no doubt others will have their own particular ‘favorite’ for which they can make a case for wide acceptance. The HDI has spawned a group of other ‘human development’ indices that look, for example, at differences between males and females. However, the HDI, CPI and ESI do receive a wide degree of attention amongst the popular press of many countries and are ‘consumed’ and used by politicians. They can act as good examples of indicators/indices in general.
The advantages of development indicators and indices rest in the reason why they are created in the first place – to simplify complexity. The consumers of such tools are obviously managers, policy makers and politicians but they are also avidly devoured by the popular press which in turn can influence public opinion and, of course, politicians. As I have done here it is indeed tempting to look at the maps and identify similarities – causal factors – and differences. Perhaps the CPI is measuring one limitation to the HDI and the ESI is assessing one dimension of the cost. Perhaps we have a set of pictures that don’t just tell us what the world is but also help with an explanation as to why it is the way it is. But indicators and indices also have their problems and no article on this topic is complete unless these are also aired.
First it has to be stressed that any indicator/index is only as good as the data upon which it is built. Data sets can be of poor quality as well as good quality, and there may well be gaps. In the ESI these gaps are covered by creating data based upon a complex statistical technique, and indeed gaps are also covered in the HDI by making assumptions as to where a country ‘ought’ to be in terms of the three variables. There is also the hiding of intra-country variation to consider. These may be differences between urban and rural populations, for example, or between different regions. Some variables may also change dramatically during the year – air pollution for example. The end result is a single value of an indicator/index covering all the spatial and temporal variation within that country over a year (or more).
Related to this issue of data availability is the ‘it’s all out of date’ excuse that can be used by those whom the indicators/indices are meant to influence. There is inevitably a delay between data availability and the generation of the indicators/indices. The data have to be checked, manipulated and presented as part of a report, and this can result in a delay of one year (CPI), two years (HDI) or more. This allows the classic get-out clause of claiming that the index may indeed present a bad picture for a country but ‘it’s by now out of date and necessary changes have already been made which have brought about improvements’. Whether or not such changes have been made is a matter of conjecture and will no doubt be reflected in future presentations of the indicator/index. After all, this excuse may have credence for a short while, but the HDI and CPI are published every year and the ESI every couple of years.
Besides these more obvious issues are others which are perhaps not so obvious. An indicator/index is a product of the person(s) who created it. This is obvious when stated but the ramification is that there is potential for human bias.
Thus the ESI has been criticized for its emphasis on response indicators as distinct from state and pressure. The richer countries tend to do well in the response category but not so well in state and pressure. A similar criticism can, of course, be leveled at the HDI with its dependence on just three components, and at the CPI for its reliance on perspectives from residents of the richer north.
The potential for bias interacts with another problem of indicators/indices that can also be seen as a strength – much depends on who is doing the looking. They are quantitative tools, and even the simplest of the three presented here, the HDI, has a technical and somewhat mechanical feel. The methodology for all three of them is complex, to varying degrees, with even the HDI requiring steps of ‘standardization’ and ‘transformation’. The ESI of 2005 has 76 components, not just three, and the ways in which these are manipulated and combined are even more complex. The CPI is arguably the most complex of the three given that it uses ‘ranks’ from 12 sources. Will ‘users’ or ‘consumers’ of the tools make the effort to understand how they are created? It is likely that most will not, and instead the tools will be treated as ‘black boxes’ – I don’t need to know the technical detail, just the league table ranking. But will ‘consumers’ be aware of the assumptions that have been made? There is a dangerous element of technical dependency here: an assumption that those selecting/creating the indicators/indices have been ‘fair’ and ‘true’ in the decisions they have made. While, in fairness, the creators of the indices do go to great lengths to present their raw data and methodologies clearly, these can be rather involved and inaccessible to a non-specialist.
Finally, the process of simplification, while appealing, can be dangerous. Bringing human development down to just three components can generate a sense that it is only these three that matter. The ESI encompasses many variables, but even more are left out. Does their non-inclusion imply that they are unimportant? Should a government aim to push its country up the HDI league table by addressing the three variables, or should it try to do what is ‘best’ for its population irrespective of its league table ranking? Often, of course, these are identical aims, but they may not necessarily be so.

Summary

Development indicators and indices are in vogue; they are popular amongst people with busy lives as a way of condensing complexity into ‘snapshots’ that can be digested and appreciated. This popularity is unlikely to diminish; indeed, the opposite is likely to be the case. But care does need to be taken, as the creators and promoters of these tools have a great responsibility.
