Measuring Educational Capital

A Proposal to Promote a Skills Agenda

Skills have lost and schooling has won. Nearly the entire scope of Education for All has come to be interpreted as increasing enrolments, with an eye on equity and quality. For the TVET and NFE community to find success in the post-2015 framework, it will need an easily computed metric that can move its agenda forward. One way of doing this is to set targets in the same way that propelled Universal Primary Education (UPE) forward; indeed, this is the path the post-2015 High Level Panel is proposing. A more conceptually and technically compelling model, however, is to find metrics that raise "red flags" where formal schooling inefficiencies and inequities exist, providing traction for the nonformal, informal, and vocational education that can help alleviate those issues. Using only existing metrics, the Educational Capital Index (EdCI) is presented as just such an indicator. The EdCI is modeled after the Human Development Index and is presented as an open-source model, freely available for anyone to adjust or change.

Trey Menefee
The University of Hong Kong

UKFIET 2013 Conference

Education and Development Post 2015

Reflecting, Reviewing, Re-visioning

Whatever Dakar's Education for All (EFA) Goal Three was meant to be, it was eventually swallowed up by a specific kind of metric that has dominated educational development discourse for decades: enrolment rates. Enrolment rates dominated Goals One, Two, and Five (as proportions); the goals that did not have enrolment rates – Three, Four, and Six – were largely ignored. Even Goal Four, which did have its own metric (adult literacy rates), eventually became the "forgotten goal." Goal Three, which covered "life skills" and the "learning needs of young people and adults," is now interpreted in the UNESCO GMR as lower secondary enrolment rates rather than the informal and nonformal education (NFE) and technical and vocational education and training (TVET) with which it has conventionally been identified.

There are signs of change, though. The Brookings Institution partnered with the UNESCO Institute for Statistics (UIS) in convening the Learning Metrics Task Force (LMTF) to propose measurements for whatever would replace Goal Six, which concerned educational quality. Measuring learning outcomes seems only a short step from measuring skills. At the same time, the youth unemployment crisis has brought considerable attention to the skills agenda, which was even the theme of the latest UNESCO Global Monitoring Report (GMR). A skills target, as well as a "decent jobs" target, has also been included in the High Level Panel's (HLP) proposed post-2015 metrics. Specifically, the Panel proposes to "increase the number of young and adult women and men with the skills, including technical and vocational, needed for work by x%."

To date, though, there has been little progress on actually measuring skills in an internationally comparable way. But in the LMTF spirit of belatedly finding metrics for expiring Dakar EFA goals in time for whatever rolls on into the post-2015 goals, this paper introduces a novel technique for advancing the Goal Three/skills/NFE agenda. It proposes a metric called the Educational Capital Index (EdCI), designed to overcome many of the methodological and conceptual challenges of measuring these issues. Importantly, the EdCI is submitted here as an "alpha"[1] open-source model, with the intention that other people and groups will pick up the idea and refine it.

Politics and the Post-2015 Agenda

There is a certain "in a perfect world" discourse in the post-2015 development debates. People and groups quickly fall into making lists of things they think would exist in their ideal world, decorating the post-2015 "Christmas tree" with their issue (Green, 2013). The post-2015 agenda, as Green describes it, has been "a highly mobile circus going from country to country discussing the Christmas tree." There is a temptation to lobby post-2015 consultations, explain the importance of one's issue, and come up with a target for the world to do what we think is important. Yet there is almost no end to worthy issues that can be lobbied for.

Green (2013) provides a heuristic to overcome this problem. First, he argues we should think of EFA, the Millennium Development Goals, and whatever comes after 2015 as "aid development goals": instruments of foreign aid that can drive policy. The primary means by which frameworks like the MDGs drive policy are by creating "big global norms," setting global and regional goals and targets, creating global and regional league tables that foster competition and can shame governments into action, and promoting data transparency through data collection.

In this sense, EFA Goal Two/MDG Goal Two was a ‘good’ goal because it was relatively easy to measure, had a relatively straightforward means of improvement, set a “big global norm,” and thus fostered competition among states to improve enrolment rates. EFA Goals Three and Six were “bad” goals because they were essentially aspirational and unmeasurable – mere ornaments on the Dakar Christmas tree. It was twelve years after Dakar before the Learning Metrics Task Force convened to propose metrics that could be used in the GMR to evaluate Goal Six.

This recalls James Scott's arguments on the nexus between legibility and power (see Scott, 1998). Counting and simplifying complex social processes often allows the state to move in and exercise power, frequently in order to move the numbers one way or another. There was not enough for state power and international powers to "grasp" with Goal Three as there was with Goal Two. "Grasping" here means more than the activity being made legible; it means the capacity to move it. This might well have been the point of failure for Goal Four's aim to halve adult illiteracy.

This act of reduction is often the site of political conflict, as evidenced by the difficulties and criticisms the LMTF encountered. It was simultaneously criticized for being too reductionist and too expansive. One of its products was a holistic vision of learning that spanned multiple domains and subdomains, yet that vision was critiqued as too complicated to ever put into practice. The task force was also critiqued as a Trojan horse of sorts, ushering in reductionist curricular and testing policies similar to the Bush Administration's No Child Left Behind Act in the United States (see Klees, 2013). We can glean from this that middle grounds and consensus on both details and scope are exceptionally difficult to find.

The LMTF was also financed by the Brookings Institution at a level that nobody involved in Goal Three-style work should expect to find for their own cause. It was able to hold meetings across the globe, publish in multiple languages, and show up at almost all of the important educational development conferences. Whatever metric the Goal Three "community" comes up with should be cheap to pitch and compute. Those interested in advancing the cause should therefore find ways of compiling existing metrics rather than creating new ones.

Politics and Flexible Education

The Goal Three agenda has been politically sidelined by the dominance of the "enrolment cure." The enrolment cure manifests itself at the individual level and at the national and international levels. At the individual level, TVET and NFE are often seen as second- or third-tier education of lower quality than formal schooling (see King, 2006). At the national and international levels, Goal Three-style education programs have taken a far back seat to the quest for universal primary education (UPE) and expanding secondary and tertiary enrolments (see Menefee, 2013). We are thus left with a form of education delivery that is crucial to many but embraced by few.

This leaves the Goal Three agenda two points of entry. One is to enter the metrics race demanding, in effect, "more TVET" and "more NFE," competing directly with formal schooling priorities. The other is to hold up a mirror to the formal education system and use its shortcomings in tangible deliverables as a rationale for financing and promoting alternatives.

Since many see Goal Three-style work as competing, zero-sum, for the same scarce resources, it would make sense for the TVET and NFE community to make its case as a corrective for the shortcomings of formal schooling. It could – and should – be framed as a flexible and affordable alternative to the "more and better" approach of formal schooling expansion that is often out of reach for governments and families. This is not to say that skills win when schooling and enrolment fail, but that they are a necessary complement in most situations.

In this context, a Goal Three-focused metric might work best as a mirror reflecting the problems of the formal education system and the economy. It is helpful to consider a non-education field to make the case more clearly: the United States healthcare system is, on one level, arguably the best in the world. Some of the best doctors, hospitals, and medical schools in the world are found there. On another level, however, it is one of the worst of all advanced economies: public health statistics don't compare well with economically similar countries, it is extremely inequitable, and it is expensive both in aggregate and individually. Few would look at the United States healthcare system and say that the primary problem is a lack of resources, doctors, or hospitals. Instead, it needs creative ideas to distribute existing healthcare resources more equitably and efficiently.

The healthcare analogy is apt, as it recalls Illich's (1997) question of how many people in developing countries must be denied clean drinking water to pay for modern medical equipment, or for surgeons rather than healthcare workers. One rarely finds this sort of framing of educational systems. The GMR's interpretation of EFA is that equitable formal schooling enrolments and a vague idea of quality must be constantly throttled up: more funding for formal schooling, more teachers, more enrolment, more learning, more schools. But how many illiterates and out-of-school youth does a seat in a formal lower secondary school create? Many countries need more, and more flexible, education delivery rather than more schools teaching a fairly homogeneous curriculum.

The latter, I believe, more accurately reflects the spirit of NFE and TVET. Were we to label sides, those in favor of UPE before all else have a universal cure, while the Goal Three agenda merely presents flexible options that can be deployed where and when they are needed. Nobody is promoting universal NFE or TVET. Where enrolments are low, there should be education programs for out-of-school youth. Where illiteracy persists, there are cheap and reasonably effective NFE programs for adults and youth. Where education isn't producing the economic performance expected of it, the TVET community is overflowing with ideas of how to provide relevant, marketable skills to those who need them.

A Theory of Skills

Generally speaking, there is no broad theory of "skills" from which to draw easy measurements – especially not of the sort that the HLP has recommended. There was some disappointment in the TVET community when the LMTF decided that skills would not be explicitly included in its final product, which raises the conceptual difficulty even further, because it would appear to put a dividing line between "skills" and "things people learn." The key difference, it would seem, is in the economic instrumentality of that which has been learned.

One could start, though, with the observation that human capital does not form linearly or as planned. Individual, averaged learning outcomes tell us very little about how skills are used in practice. That someone can design or maintain jet engines tells us little about the market demand for this knowledge. Nor have labor planners been adept at predicting the future market demand that students will encounter in their careers.

Research on complex systems, which sees market dynamics as inherently emergent phenomena, offers one path forward. Complexity means that economies' performance, no matter how one measures it, is both unplanned and largely unplannable. It also highlights the inherent difficulties of trying to measure (and average) skills at the individual level and extrapolate up to higher units of analysis (the city, the region, the country). Which skills are ultimately utilized in the marketplace is outside the scope of both planning and prediction.

The Atlas of Economic Complexity (Hausmann and Hidalgo, 2011), which showcases their Economic Complexity Index (ECI), offers a way of measuring these skills in aggregate at the national level. The ECI measures the amount of knowledge brought to productive economic use inside economic systems by comparing the level of human capital interconnectivity and interdependence necessary to create products for competitive export. Different types of knowledge are of differing values, and their value is created through their interconnections and interdependencies with other forms of knowledge. Whether any specific type of knowledge may be considered an emergent property of a complex system depends on whether it is of productive economic value.

One way to understand what the ECI is measuring is to think in terms of "people-bytes": it only takes a few people to catch, freeze, and transport fish for international export. The fisherman is only capable of diversifying so much, as are the people involved in building and maintaining freezers. We can also consider frozen fish "simple" in the sense that almost every coastal country can, and does, trade in fish. But were we to look at a product like an automobile, we would see that few countries competitively export them, and that the human capital necessary to produce automatic transmissions is likely used to produce other products at a similar level of complexity. The transmission is only one of thousands of components that go into making the vehicle, which also need to be nested in an environment where logistics and management work competently as well. In short, "skills" should be seen more as a matter of how well different pieces work together than as the individual knowledge of each person employed in the labor market.

Confusing Inputs and Outputs

There are four types of metrics that could be computed and measured: individual inputs and outputs, and collective inputs and outputs. Individual inputs are classroom hours and the experienced curriculum; individual outputs are test results and other ways of demonstrating knowledge of something. Collective inputs are the enrolment rates that increase the overall education of a country or region; they can even be the averaged test scores of all students in a cohort. Collective outputs are tangible measures like GDP per capita, Gini coefficients, and the ECI, or intangible outputs like national or global citizenship.

The futility of creating meaningful metrics at the individual input or output level stems from the fact that emergence – how all the pieces come together – is irreducible. Take another emergent phenomenon, human consciousness: the best science to date shows that it is the interactions of neurons and chemicals that collectively produce consciousness. While we know that removing the chemicals and neurons would end consciousness, we also know that lumping neurons together in a hypothetical petri dish is unlikely to induce consciousness. It is the same for knowledge and its deployment as social and human capital: counting test scores or enrolment rates as an economic input makes almost as much sense as counting neurons to measure self-awareness.

What counts as an input and what counts as an output has always been subject to confusion in development theory and practice. Bray (2013, forthcoming) points out that inputs and results are so confused in educational development thinking that primary education is the only human right that is compulsory for children to attend and for governments to finance (because it must be free). This is because we "know" education has expected results, or perhaps because we "know" that the absence of education has clear results. Education is thus thought of as something like a vaccination against poverty and ignorance, something no parent or government can deny a child.

There are three basic problems with measuring education which have not been overcome. The first is comparability, the second is the epistemology of prediction, and the third is Campbell's Law. The comparability problem arises because one year of schooling in classroom X is often incomparable with one year of schooling in classroom Y. So when we speak of universal primary education, we are really speaking of providing five years of schooling to every child. Yet we know, empirically, that this simply isn't the case. [Brookings quote]. Five years in a rural Borno school is simply incomparable to one year in many schools in Lagos.

Many have attempted to correct for this by focusing on learning targets, à la the LMTF and the HLP's desire to measure skills. The goal is to find metrics quantifying exactly how dissimilar (or similar) rural Borno education is to urban Lagos education. The thinking is that once we can quantify the degree of learning outcome inequality, we can divert resources to address it. Two problems arise from this. The first is the issue of emergence mentioned earlier. The second is that it is unclear how much one is measuring school quality rather than things like family educational background. Teaching basic literacy and numeracy to students whose parents are literate and numerate is far easier than teaching students whose parents are illiterate and/or innumerate (see Nordtveit, 2008).

Finally, whether we are measuring enrolment rates or learning outcomes, Campbell's Law kicks in with full force. Campbell's Law states that "the more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Enrolment rates are an easy metric to game, as they are simply names in books. Whether a child attends after her name is recorded is irrelevant to the metric, as is whether or not the child learns anything (see Lewin, 2011).

For instance, as we wrote in our report on EFA progress in the Commonwealth, "Uganda has a lower net enrolment rate than Tanzania's (90.9 versus 98.0), but Uganda's net attendance rate is 85.6 versus Tanzania's 80.6" (Menefee and Bray, 2012). We have seen similar 'gaming' issues in the United States when learning outcomes have been prioritized and privileged: the curriculum narrows, corruption rises, and there is little evidence that these metrics actually lead to specific socio-economic outcomes.

A Proposed Model

The model proposed here is called the Educational Capital Index (EdCI). It uses broadly collected and internationally comparable metrics. It offers little in the way of prediction, as it is descriptive rather than implicitly prescriptive. It should be seen as a "red flag" offering warnings rather than an implicit policy-posing-as-metric like primary enrolment rates. It is immune to Campbell's Law because of the number of metrics that go into it. It is built with the intention of laying foundations for better minds to improve upon the idea.

The term "capital" is employed here because it is understood and measured on both ends – as both an input and an expected output. Investopedia cleverly defines capital as "an extremely vague term and its specific definition depends on the context in which it is used. In general, it refers to financial resources available for use." A machine in a factory becomes capital when it is put to instrumental ends. Money becomes capital when it is used for loans and investment; it loses that function when it is stuffed into a mattress. A highly educated young woman living in a country unable to harness her talents is a picture of squandered educational capital. So too is a woman who is highly educated in a country where most people are not being provided even the basic skills, like literacy and numeracy, necessary for employment in modern economies.

The HLP has made two almost certainly unmeasurable recommendations for metrics, between which the EdCI can be seen as a bridge. The HLP asks that we set as a goal to "increase the number of young and adult women and men with the skills, including technical and vocational, needed for work by x%." Later, it asks that we "increase the number of good and decent jobs and livelihoods by x." The EdCI can be seen as an attempt to answer the question: are schooling and educational policies increasing the number of good and decent jobs and livelihoods?

Computing the Educational Capital Index

The methodology for computing the EdCI borrows heavily from the Human Development Index (HDI). The HDI is a composite index combining the level of education in a country, its people's health, and their material well-being; it creates an index for each sector and averages them together. The EdCI computes separate indexes for Schooling Efficiency, Skills, and Opportunity. Final scores are then scaled upwards by dividing each index by two and adding 0.5, so that each component ranges from 0.5 to 1. Countries missing a given metric are penalized and assigned the lowest score of 0.5 for that composite index. The assumption is that "if you treasure it, you measure it": if a country is not measuring literacy rates, literacy is likely low on its educational policy agenda.
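
As a rough illustration of that scaling and penalty logic, the sketch below shows one way the combination step could be implemented, assuming each sub-index has already been normalized to a 0–1 scale. The function names and the equal weighting of the three component indexes are assumptions for illustration, not a fixed specification.

```python
# Illustrative sketch of the EdCI combination step (assumption: sub-indexes
# are already normalized to a 0-1 scale; missing data is represented by None).

def rescale(index_value):
    """Map a 0-1 sub-index onto the 0.5-1 range; missing data gets the floor of 0.5."""
    if index_value is None:
        return 0.5  # penalty for countries that do not report the underlying metrics
    return index_value / 2 + 0.5

def edci(schooling_efficiency, skills, opportunity):
    """Average the three rescaled component indexes (equal weights assumed)."""
    components = [rescale(schooling_efficiency), rescale(skills), rescale(opportunity)]
    return sum(components) / len(components)

# Example: a country with a strong skills index but no reported opportunity data.
print(round(edci(0.62, 0.80, None), 3))  # -> 0.737
```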

The Schooling Efficiency Index

Schooling Efficiency is itself broken into two indexes, which are equally weighted and averaged into the Schooling Efficiency Index: the Overeducation Index and the School Equity Index. The EdCI makes a proxy measurement for over-education. In this specific context, over-education refers to a more educated workforce than the economy can usefully absorb. Theoretically, this follows the arguments found in Dore (1997) and Oxenham (1984). Over-education is an issue because it can be an inefficient use of scarce educational resources, but also because it creates inequitable employment difficulties for those sufficiently skilled for a job who are pushed into lower-paying jobs or out of employment because they lack unnecessary educational qualifications.

Two metrics are used for measuring over-education, and two more can be suggested. The two metrics used are the ratios of Gross Domestic Product (GDP) per capita and of the Economic Complexity Index (ECI) to UNESCO's Educational Development Index (EDI). The EDI is UNESCO's attempt to compile all of the EFA metrics into one, so it reflects how closely a country follows all of the "best practices" for educational inputs.

Two further metrics could use the momentum of change in both GDP per capita and the ECI relative to changes in average years of schooling, either in addition to or in place of the static metrics used here. The rationale behind this measurement is that comparing countries with similar degrees of wealth and economic complexity gives an indication of how many years of schooling are actually required to obtain a given ratio.

Countries that obtain high ratios are likely using educational resources more efficiently, while countries with low ratios are likely less efficient. As an example, take two countries at a level of economic complexity where their most advanced export is circuit boards for household electronics, one with an average of eight years of schooling and the other with twelve. The country with more years of education is likely in danger of an over-education problem: either it has more skills and knowledge than can be productively employed, or its education system is delivering fewer units of productive knowledge per year of schooling.
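
A minimal numeric sketch of that comparison is given below, with made-up figures. In the index as described above the denominator is the EDI; average years of schooling is used here only to mirror the circuit-board example, and the rescaling of the ECI to a 0–1 range is an assumption.

```python
# Hypothetical illustration of the over-education comparison from the text.
# The figures are made up; in the EdCI proper the denominator is UNESCO's EDI.

countries = {
    # name: (ECI rescaled to a 0-1 range, average years of schooling)
    "Country A": (0.48, 8.0),
    "Country B": (0.48, 12.0),
}

for name, (eci, years) in countries.items():
    ratio = eci / years  # export complexity delivered per year of schooling
    print(f"{name}: {ratio:.3f}")

# The same export complexity from fewer years of schooling gives Country A the
# higher ratio, flagging Country B as a potential over-education case.
```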

The School Equity Index, the second component of the Schooling Efficiency Index, is an aggregate of three measurements: the ratio of average years of schooling in the adult population to the adult literacy rate, the ratio of the youth literacy rate to the primary Net Enrolment Ratio, and the ratio of average years of schooling to the Gini coefficient. The rationale behind comparing educational metrics with literacy metrics is that it gives us an idea of the distribution of quality education. If years of schooling or enrolment rates are comparatively high but the distribution of learning outputs (literacy) is low, resources are being used inequitably. Similarly, comparatively high rates of schooling alongside a high Gini coefficient tell a similar story.
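
Putting the two halves together, a sketch of the whole Schooling Efficiency computation might look like the following. The min-max normalization used to put each raw ratio onto a 0–1 scale, the clipping, the bounds, and the orientation of each ratio (inverted where a high value signals a problem) are assumptions in the spirit of the HDI's method, since the text does not fix them, and all names are illustrative.

```python
# Sketch of the Schooling Efficiency Index: the average of an Overeducation Index
# and a School Equity Index, each built from the ratios named in the text.
# Bounds, clipping, and ratio orientation are assumptions, not specified above.

def normalize(value, lo, hi, higher_is_better=True):
    """Min-max normalize onto 0-1 (clipped); invert so 1 always means 'better'."""
    scaled = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return scaled if higher_is_better else 1.0 - scaled

def overeducation_index(gdp_per_capita, eci, edi, bounds):
    # High output (GDP per capita, ECI) per unit of educational input (EDI)
    # reads as efficient use of educational resources.
    ratios = [gdp_per_capita / edi, eci / edi]
    return sum(normalize(r, *b) for r, b in zip(ratios, bounds)) / len(ratios)

def school_equity_index(years, adult_lit, youth_lit, primary_ner, gini, bounds):
    # (ratio, higher_is_better) -- the orientation is an interpretation of the
    # text's reasoning: lots of schooling but little literacy signals inequity.
    ratios = [
        (years / adult_lit, False),
        (youth_lit / primary_ner, True),
        (years / gini, True),
    ]
    return sum(normalize(r, *b, flag) for (r, flag), b in zip(ratios, bounds)) / len(ratios)

def schooling_efficiency(overeducation, equity):
    return (overeducation + equity) / 2  # equally weighted, as the text describes
```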

The Skills Index

Though not a theoretically rich framework, UNESCO offers a simple (and common) way to think about skills, and it is incorporated into the EdCI. UNESCO's 2012 GMR on youth and skills breaks skills into three broad components: cognitive skills, non-cognitive skills, and vocational skills. Each of these becomes an index in the EdCI. Cognitive skills are the essential skills necessary to function in a modern work environment, like literacy and numeracy. These are the foundational skills necessary to advance to learning vocational skills; in short, someone likely needs to be able to read in order to learn more about the product they are selling, making, or creating. Where these basic cognitive skills end and where more specific vocational skills begin is open to debate and interpretation, and likely to be unpredictable in nature.

The EdCI Cognitive Skills Index measures adult and youth literacy rates and secondary completion rates as outputs of basic skills. There is a certain amount of imputation in using these metrics. While literacy is important, and sometimes correlates more strongly with economic performance than average years of schooling does (see Menefee, 2013), we can reasonably infer that countries with low levels of literacy likely also have low levels of other basic cognitive abstraction skills. Literacy, in many ways, is the foundational skill for all further abstraction (see Lindsey, 2013); it is unlikely that a student, or adult, who is illiterate has mastered basic numeracy. Secondary completion is used because it is where the GMR believes these skills are best acquired (UNESCO, 2012).

The Non-Cognitive Skills Index is constructed from youth unemployment rates and the level of entrepreneurship in a country. The rationale for including youth employment is that work is the best place to acquire the skills needed to survive and thrive in a modern work environment (which is precisely what the GMR argues as well). While high youth employment might not directly translate into these sorts of skills, the reverse is almost certainly the case: a large unemployed youth population is one in which few of these skills are being learned and shared. Entrepreneurship is measured with the World Bank Entrepreneurship Database, specifically its "new density" metric, which measures "the number of newly registered limited liability companies per 1,000 working-age people." One limitation of this metric is that it does not capture the informal economic activities that are prolific in most developing economies (see Palmer, 2008; King, 2001).
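
A sketch of how the Cognitive and Non-Cognitive sub-indexes might be combined into a Skills Index follows. The min-max bounds, the inversion of youth unemployment (so that lower unemployment scores higher), the equal weighting, and the omission of a Vocational Skills component (for which the text specifies no metrics) are all illustrative assumptions.

```python
# Illustrative sketch of the Skills Index from the metrics named in the text.
# Bounds, weights, and the inversion of youth unemployment are assumptions.

def normalize(value, lo, hi):
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def cognitive_skills_index(adult_literacy, youth_literacy, secondary_completion):
    # All three inputs are percentages; higher is better.
    parts = [normalize(v, 0, 100)
             for v in (adult_literacy, youth_literacy, secondary_completion)]
    return sum(parts) / len(parts)

def non_cognitive_skills_index(youth_unemployment, new_business_density,
                               max_unemployment=50.0, max_density=10.0):
    # Youth unemployment is inverted (less is better); "new density" is the
    # World Bank's newly registered LLCs per 1,000 working-age people.
    unemployment_score = 1.0 - normalize(youth_unemployment, 0, max_unemployment)
    density_score = normalize(new_business_density, 0, max_density)
    return (unemployment_score + density_score) / 2

def skills_index(cognitive, non_cognitive):
    return (cognitive + non_cognitive) / 2  # vocational component omitted here

# Example with made-up figures: 74% adult literacy, 88% youth literacy,
# 41% secondary completion, 18% youth unemployment, 1.2 new LLCs per 1,000.
cog = cognitive_skills_index(74, 88, 41)
non_cog = non_cognitive_skills_index(18, 1.2)
print(round(skills_index(cog, non_cog), 3))
```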

The Opportunity Index

The Opportunity Index is meant to measure educational inequality, access, and the flexibility of education systems, in an attempt to answer whether or not students are getting "stuck." Conceptually, it takes an instrumental approach to inequality in the same way that Wilkinson and Pickett (2009) do in The Spirit Level: educational inequality doesn't just hurt people, it hurts economies. With economic complexity in mind, it is difficult to imagine a functional economy in a country where 30% of the youth are enrolled in tertiary education while 30% of the adult population is illiterate, as the highly educated will find it difficult to translate their skills into productive knowledge.

One of the lessons of recent Chinese economic history is that it was the totality of the education system that delivered the educational inputs to allow for the entire supply chain for products like the iPad. All of the educational pieces were in place to find the managers, engineers, IT technicians, logisticians, and factory workers with basic skills. Empirically, we have seen that it is difficult for countries to grow if these pieces are missing. The competent factory workers need the competent managers as much as the managers need the competent factory workers to harness their skills. This is not to say that having the inputs in place leads to the outputs, but that the emergent phenomena would likely not have happened without those inputs.

The Opportunity Index measures this type of inequality with four metrics that comprise a single Exclusion Index: the growth or decline in the absolute number of out-of-school children since Dakar EFA was launched; youth unemployment; the ratio of primary students to secondary students, to measure both matriculation and provision inequality (lower is better); and the ratio of the tertiary enrolment rate to primary school dropouts (higher is better). The latter two metrics are meant as a substitute for an enrolment Gini index, which is more difficult to compute. This is averaged with an Enrolment Growth Index, which measures the relative momentum of net enrolment trajectories in primary and secondary schooling. This is a relatively novel metric in that it would penalize a country that has slipped from 99% to 97% as severely as a country that has slipped from 55% to 53%, the implicit logic being that declining enrolment rates imply declining prioritization and funding elsewhere in the education system.
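
The sketch below shows one way the Exclusion Index and the Enrolment Growth Index could be combined, again with assumed min-max bounds, assumed orientation of each metric (inverted where a higher value signals exclusion), and equal weighting; none of these choices is fixed by the text.

```python
# Illustrative sketch of the Opportunity Index. Bounds, orientations, and equal
# weights are assumptions; the four exclusion metrics follow the text.

def normalize(value, lo, hi, higher_is_better=True):
    scaled = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return scaled if higher_is_better else 1.0 - scaled

def exclusion_index(oos_change_pct, youth_unemployment,
                    primary_to_secondary_ratio, tertiary_to_dropout_ratio):
    parts = [
        # Growth in out-of-school children since Dakar (% change): lower is better.
        normalize(oos_change_pct, -100, 100, higher_is_better=False),
        # Youth unemployment (%): lower is better.
        normalize(youth_unemployment, 0, 50, higher_is_better=False),
        # Primary-to-secondary enrolment ratio: lower is better, per the text.
        normalize(primary_to_secondary_ratio, 1, 5, higher_is_better=False),
        # Tertiary enrolment relative to primary dropouts: higher is better, per the text.
        normalize(tertiary_to_dropout_ratio, 0, 5),
    ]
    return sum(parts) / len(parts)

def enrolment_growth_index(primary_change_pp, secondary_change_pp):
    # Change in net enrolment in percentage points; any decline is penalized
    # regardless of the starting level, as the text describes.
    parts = [normalize(c, -10, 10) for c in (primary_change_pp, secondary_change_pp)]
    return sum(parts) / len(parts)

def opportunity_index(exclusion, enrolment_growth):
    return (exclusion + enrolment_growth) / 2
```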

Conclusions

If the Goal Three agenda is to have any hope of competing with enrolments in the post-2015 aid landscape, it needs to have its own metric. If it does not, and were the HLP's nod to skills to make it into the actual post-2015 framework, then the TVET and NFE "ornament" on the post-2015 "Christmas tree" would fade into obscurity as surely and quickly as Dakar EFA's Goal Three did.

Unfortunately, the well-intentioned metrics proposed by the HLP are constructed from questionable epistemic material. The history of such endeavors and the insights of complex systems science tell us that planners are in no position to predict which skills are important for fostering the economic performance that leads to "good jobs." Even if they were able to make such plans, the LMTF has taught us that there would be no end to the debate over how we would then measure those skills. The focus, then, must be on known collective outputs and how they correlate with known inputs. "How well are we doing it?" is a substantially easier question to answer than "how should it be done?"

The maxim that our greatest strengths are often the flip side of our greatest weaknesses can be as true of ideas as it is of people. The weakness of the Goal Three agenda is that it is resistant to measurement and to targets for expansion in its own right. Yet that is also its strength: it is ultimately most useful as a response to the problems of the rest of the educational development agenda. It is an important bandage that resists definition and reduction, best used when needed, not because it is the "right" way forward. Thus the EdCI is an indicator meant to tell planners, funders, and policy makers where NFE and TVET might be of use – and where they are not.

In that sense, the EdCI is pieced together with humility. It is an approach to educational development that prioritizes no specific solution. Where a metric like the Net Enrolment Ratio rewards more enrolment for its own sake, the EdCI privileges the outcomes that quality education is thought to produce. Something like the EdCI would therefore be an epistemologically stronger product than attempting to measure what the post-2015 HLP has suggested: what constitutes a useful "skill" or a "good job."

The EdCI has been developed with the intention of being modified, improved, simplified, and debated. What is presented here is, in fact, something akin to an "alpha" version of a piece of software: a point of conversation and a functional mock-up to demonstrate its potential. Whether the Goal Three-focused community uses this or something else, we should steer clear of setting targets and returning to the role of labor planners.


References

Dore, R. (1997). Reflections on the Diploma Disease Twenty Years Later. Assessment in Education, 4(1), 189-206.

Hausmann, R., Hidalgo, C., et al. (2011). The Atlas of Economic Complexity. Boston, USA.

Illich, I. (1997). Development as planned poverty. The post-development reader, 94-112.

King, K. (2001). Africa’s Informal Economies: Thirty Years In. SAIS review, 21(1), 97-108.

King, K. (2007). The Policies and Politics of Skills: China, Africa, and India. Comparative Education Research Centre, Hong Kong. Retrieved from http://www.fe.hku.hk/cerc/KK-article/7.Policies and Politics of Skill.KK2007b.pdf

Klees, S. (2013). Whither Post-2015? A Critique of the Post-2015 High Level Panel's Education and Economic Goals. NORRAG NEWSBite. Retrieved from http://norrag.wordpress.com

Lewin, K. M. (2011). Taking targets to task revisited: How indicators of progress on access to education can mislead. Consortium for Research on Educational Access, Transitions and Equity.

Palmer, R. (2008) Skills and productivity in the informal economy. No. 413142. International Labour Organization.

Menefee, T., & Bray, M. (2012). Education in the Commonwealth: Towards and beyond the Internationally Agreed Goals. Commonwealth Secretariat.

Nordtveit, B. H. (2008). Poverty alleviation and integrated service delivery: Literacy, early child development and health. International Journal of Educational Development, 28(4), 405-418.

Oxenham, J. (1984). Education versus qualifications?: a study of relationships between education, selection for employment and the productivity of labour.

Scott, J. C. (1998). Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.

Wilkinson, R. G., & Pickett, K. (2011). The spirit level. Bloomsbury Press.


[1] "Alpha" here refers to software development nomenclature. An alpha release of new software is meant for in-house testing and refinement; it then proceeds to a public "beta" roll-out for wider-scale testing, which is followed by a final release.
