Success and Failure in eGovernment Projects

Defining eGovernment Success and Failure

E-government initiatives can be divided into three main categories: total failure (the initiative was never implemented, or was implemented but immediately abandoned), partial failure (major goals were not attained, or there were significant undesirable outcomes), and success (stakeholder groups attained their major goals and did not experience significant undesirable outcomes).

Major goals are the main objectives a group wanted to achieve with the initiative; undesirable outcomes are unexpected outcomes that a group did not want to happen but which nonetheless occurred.


Total Failure:

India's Indira Gandhi Conservation Monitoring Centre was intended to be a national information provider based on a set of core environmental information systems. Despite more than a year of planning, analysis and design work, these information systems never became operational, and the whole initiative collapsed shortly afterwards. [1]

Partial Failure Type 1 - Goal Failure (main stated goals not attained):

The Tax Computerisation Project in Thailand's Revenue Department set out seven areas of taxation that were to be computerised. At the end of the project, only two areas had been partly computerised, and five others were not operational. [2]

Partial Failure Type 2 - Sustainability Failure (succeeds initially but then fails after a year or so):

A set of touch-screen kiosks was created for remote rural communities in South Africa's North-West Province. These were initially well received. However, the kiosks' lack of updated or local content and lack of interactivity led to disuse, and the kiosks were removed less than one year later. [3]

Partial Failure Type 3 - Zero-Sum Failure (succeeds for one stakeholder group but fails for another):

There was a zero-sum failure during the Accounts and Personnel Computerisation Project of Ghana's Volta River Authority. Most managerial staff in the finance department were pleased with the changes brought by the new system. However, the implementation "bred a feeling of resentment, bitterness and alienation" among some lower-level staff, and led to resistance and non-use, particularly among older workers. The feelings and the resistance/non-use were undesirable outcomes. [4]


Success:

The work of South Africa's Independent Electoral Commission was supported by widespread use of ICTs. In the 1999 elections, this enabled 400 new constituency boundaries to be drawn up, 18 million voters to be registered, voting to take place at 15,000 polling stations, and the results to be transmitted to and collated at a central point. [5]

Assessing eGovernment Success/Failure Case Studies

eGovernment success/failure cases can be judged against four questions:

1. Who wrote the case study?

Is the case written by an independent evaluator, or by an 'interested party': someone with a direct or indirect interest in the project? An interested party could be one of the instigators, managers, consultants, designers, operators, or vendors. Their interest does not make the case untrue, but - for all writers - it can help to ask 'why did the writer write this case: what did they hope to achieve?'. Credible cases are written by independent evaluators.

2. When in the project's life was the case written?

Some cases are written in the planning phase or during initial implementation of the project (words like 'will', 'to be used', 'could', 'intended' etc. give a clue to this). These cannot be used to judge success and failure. Cases written only a few months after implementation are weakened because there has not been enough time for all impacts to emerge. Credible cases are written at least one year after implementation.

3. Whose views are taken into account?

If an e-government system is never built, or is built but then abandoned, you can judge that objectively. Other aspects of e-government success and failure are subjective: they depend on your perspective. In the Ghana case above, some people thought the project was a success; others thought it a failure. Credible cases identify all the key stakeholder groups involved, and take all their views into account.

4. Where is the evidence?

Assertions about success and failure need evidence to support them. For example, to say an e-government Web site is a success, evidence that the site exists is not enough. You need evidence - about the number and type of users; about what they use the site for; about their views on the site. It can help to ask 'how did the writer get this evidence?'. Credible cases present in-depth, transparent evidence.

eGov4Dev Classification

On this Web site we use the following six-way classification of e-government case studies, building on the definitions given above:

[1] Puri, S.K., Chauhan, K.P.S. & Admedullah, M. (2000) 'Prospects of biological diversity information management'; in Information Flows, Local Improvisations and Work Practices, Proceedings of the IFIP WG9.4 Conference 2000, Cape Town.

[2] Kitiyadisai, K. (2000) 'The implementation of IT in reengineering the Thai Revenue Department'; in Information Flows, Local Improvisations and Work Practices, Proceedings of the IFIP WG9.4 Conference 2000, Cape Town.

[3] Benjamin, P. (2001) 'Community development and democratisation through information technology: building the new South Africa'; in Reinventing Government in the Information Age, R.B. Heeks (ed.), Routledge, London, 194-210.

[4] Tettey, W.J. (2000) 'Computerization, institutional maturation and qualitative change'; Information Technology for Development, 9(2), 59-76.

[5] Microsoft (2000) IEC of South Africa wins Computerworld Smithsonian Award, Government News, 28 June, Microsoft Europe, Reading.


Page Author: Richard Heeks. Last updated on 19 October, 2008.
Please contact the author with comments and suggestions.