Practical monitoring and evaluation
a guide for voluntary organisations
helping voluntary organisations to be more effective
Since 1990, Charities Evaluation Services (CES) has worked with a wide variety of voluntary organisations and their funders. Our aim is to promote accessible monitoring and evaluation practice, which organisations can carry out within the resources available to them. CES provides training, advice and technical help, and also carries out independent evaluations as part of its commitment to strengthening and improving the effectiveness of the voluntary sector. CES produces a range of publications including PQASSO, the practical quality assurance system for small organisations.
Charities Evaluation Services
4 Coldbath Square
London EC1R 5HL
t +44 (0) 20 7713 5722
f +44 (0) 20 7713 5692
e [email protected]
w www.ces-vol.org.uk
Company limited by guarantee
Registered office 4 Coldbath Square London EC1R 5HL
Registered in England and Wales number 2510318
Registered charity number 803602
Copyright Unless otherwise indicated no part of this publication may be stored in a retrievable system or reproduced in any form whatsoever without prior written permission from Charities Evaluation Services. Charities Evaluation Services will give sympathetic consideration to requests from small organisations for permission to reproduce this publication in whole or in part but terms upon which such reproduction may be permitted will remain at Charities Evaluation Services’ discretion. © Charities Evaluation Services, 2002 Third edition, 2008 ISBN 978-0-9558849-00 Published by Charities Evaluation Services Designed by Alexander Boxill New edition artwork by Artloud Ltd Edited by Wordworks, London W4 4DB Printed by Lithosphere Print Production
acknowledgements
Developed and written by Dr Jean Ellis for Charities Evaluation Services.
CES trustees, staff and associates
For reading drafts and providing material for the first edition text: Rowan Astbury, Libby Cooper, Sally Cupitt
For their contributions: Jean Barclay, Mark Bitel, Sara Burns, Vijay Kumari, John Morris, Jayne Weaver
Readers
For reading and commenting on the first edition text: Jenny Field, Bridge House Estates Trust Fund; Ciarán McKinney, Streetwise Youth; Tom Owen, Help the Aged; Georgie Parry-Crooke, University of North London; Professor Helen Simons, University of Southampton; Chris Spragg, NCH Action for Children; James Wragg, Esmée Fairbairn Foundation
Funders We are very grateful to the Calouste Gulbenkian Foundation and the Wates Foundation for the grants they provided for the first edition of this publication.
foreword
A central part of the mission of NCVO is to help voluntary organisations to achieve the highest standards of practice and effectiveness. We know from our work with organisations across the country that the vast majority are firmly committed to improving their work. However, while the motivation is undoubtedly there, many organisations lack the experience and practical know-how to assess the effectiveness of their work without help or guidance.

Charities Evaluation Services has an impressive track record in developing clear and accessible material for voluntary organisations. This stems from their long experience of working closely with organisations of all shapes and sizes. CES’ Practical Monitoring and Evaluation was first published in 2002, providing much-needed guidance on evaluation for voluntary organisations both large and small. The guide is now in its third edition, with a revised further reading list, and text additions which acknowledge the recent shifts in the policy and funding context and developments in monitoring and evaluation software and other resources.

This guide will be invaluable to any voluntary organisation that is serious about measuring the effectiveness of its work. It has been carefully designed to be of relevance to organisations relatively new to this sort of approach, as well as providing more demanding material for those who need something more sophisticated. I would encourage any voluntary organisation that is committed to improving its work to use this guide.

The sector is under increasing scrutiny from the public, the government and funders – particularly in our role in delivering public services. There has never been a more important time for the sector to demonstrate its effectiveness. I am sure that this guide will continue to play an important part in helping us do this.

Stuart Etherington
Chief Executive, National Council for Voluntary Organisations
Now, more than ever, organisations which stand still run the risk of stagnating. One of the Commission’s hallmarks of an effective charity is that it considers how to identify, measure and learn from its achievements and outcomes – not least the positive (and negative) effects it has on its beneficiaries. Dynamic, effective organisations put learning and developing at the heart of their development strategy, and advice and guidance on monitoring is invaluable in helping them do so. So I’m delighted to welcome the third edition of CES’ Practical monitoring and evaluation.

Charities are increasingly aware of the importance of monitoring and evaluating the effectiveness of their work – being able to demonstrate how they do this sends a strong and positive message of accountability to beneficiaries, funders and supporters alike. The fundamental principles in this guide are equally relevant to those setting up a charity as to those already up and running. It provides back-to-basics advice for organisations with no, or limited, experience of monitoring and evaluation as well as more advanced guidance for those organisations which are keen to develop a more nuanced understanding.

Monitoring and evaluation are an integral part of a well-run and evolving charity and should be seen as vital tools in helping an organisation adapt and develop its services to meet need. I hope the third edition of Practical monitoring and evaluation will cement the importance of good governance. It is more fundamental to the sector today than ever before.
Dame Suzi Leather Chair, Charity Commission
this guide
Practical monitoring and evaluation has an introductory section and four main sections: Planning, Monitoring, Evaluation and Utilisation. The final section is a Practical toolkit.
The four main sections offer practical guidance on the four key areas of the monitoring and evaluation cycle:

[Diagram: the monitoring and evaluation cycle – Planning, Monitoring, Evaluation, Utilisation]
Within each section, guidance is offered at a basic and a further level. The basic level provides a guide to self-evaluation. This is intended to help voluntary organisations develop a system both for accountability purposes and to provide essential information for good management and service development. Read together with the Practical toolkit, this level is designed as an introduction for organisations with a limited amount or no previous experience of evaluation, particularly small organisations.1 Even for organisations familiar with evaluation, it may be helpful to check practice against the different stages described.

The further level within each section follows and builds on the basic level. It aims to meet the needs of organisations wanting more feedback for internal learning and planning, or whose funders ask for one-off evaluations, or who have reporting requirements that demand a more complex understanding of monitoring and evaluation. This level is particularly relevant for larger projects or organisations which have greater resources for evaluation.

The Practical toolkit has practical guidance on data collection methods, gives examples of some data collection tools, and discusses some essential points on data analysis.

Practical monitoring and evaluation also provides a comprehensive glossary of terms and suggestions for further reading.
1 For a short introduction to monitoring and evaluation, see also Charities Evaluation Services (2002), First steps in monitoring and evaluation, London.

A resource for PQASSO users
Although it is intended to stand alone, this guide has also been designed as a resource for users of PQASSO (Practical Quality Assurance System for Small Organisations). CES designed and introduced PQASSO in 1997 specifically for the voluntary sector. Since then it has been used by thousands of organisations as a tool to increase organisational efficiency and effectiveness. PQASSO provides a flexible, three-level approach for stage-by-stage improvements across 12 quality areas. In a number of quality areas, PQASSO calls for monitoring and evaluation activities at three levels of complexity. Many small organisations and projects work to PQASSO Level 1. The basic level in Practical monitoring and evaluation is designed to meet the needs of PQASSO users at this stage, as well as other small organisations with limited resources. The further level of guidance provides a resource for those moving on to PQASSO Levels 2 and 3, who need more complex approaches to monitoring and evaluation.
Terminology
There is a comprehensive glossary in the Practical toolkit. But here are some basic definitions of terms used in this guide.
Project For simplicity, the guide uses the term ‘project’ to imply a fairly limited set of activities or services with a common management and overall aim, whether carried out independently or as part of a larger organisation. However, this guide is relevant to, and can be used by, the full range of voluntary organisations, including large organisations which may provide a complex range of services at a number of sites. The principles and practice also apply to the evaluation of programmes, in which a number of projects are funded within the framework of a common overall aim.
Participant This means any person, group or organisation taking part in project activities, where ‘user’ is not an appropriate term as no services are offered. The term ‘participant’ is also used for those who take part in evaluation activities.
Respondent/informant A respondent is someone who provides information directly, usually by answering questions asked by an evaluator. Informants might provide information both directly and indirectly, for example through being observed.
Information/data Information is collected during monitoring and evaluation. This becomes data when it is gathered for a specific purpose and is linked to specific evaluation questions.
Staff/volunteers Many voluntary organisations have no paid staff, and depend on volunteers to deliver their services. This is recognised in the guide, which often refers to both staff and volunteers. However, for brevity, the term ‘staff’ is also used as an umbrella term to include everyone working for a project, whether fully employed, sessional staff, consultants or volunteers.
User This term is used to mean any person, group or organisation that may use an organisation’s services, either directly or indirectly. This can include clients, casual callers, referral agencies and researchers.

Symbols
The following symbols are used in the guide:
There is more about this in the Practical toolkit.
Further reading relevant to this section.
inside this guide
Introduction 7
Evaluation and the voluntary sector 7
The partnership between monitoring and evaluation 8
Approaches to monitoring and evaluation 8
Challenges for voluntary sector evaluation 11

Section 1 Planning 13
Basic planning 15
Further planning 27

Section 2 Monitoring 33
Basic monitoring 35
Further monitoring 45

Section 3 Evaluation 49
Basic evaluation 51
Further evaluation 65

Section 4 Utilisation 79
Basic utilisation 81
Further utilisation 84

Section 5 Practical toolkit 89
Collecting and analysing data 91
Data collection tools 111
Further reading 130
Glossary 132
introduction

Evaluation and the voluntary sector
Practical monitoring and evaluation is a comprehensive introduction to monitoring and evaluation for voluntary organisations and their funders. It is intended to support organisations which carry out their own evaluation activities, as well as those that use external consultants to help develop their systems or to carry out an external evaluation.

The need for this guide has become increasingly pressing. The voluntary sector is complex and diverse, and is affected by changing demands from funders, from those in whose interests it works, and from the general public.

A significant amount of voluntary and community grant income comes from government sources, both national and local. Some major voluntary organisations receive large sums from central and local government. Local authority funding is now increasingly available through contracts for service delivery and more and more local community sector organisations are funded from local budgets. The sector is also highly dependent on grant-aid from charitable foundations, particularly for new initiatives, as well as on public donations, and is sensitive to any reduction in available funding and greater competition for dwindling funds.

This funding makes organisations more formally accountable. Voluntary
organisations are more likely to have to prove that their performance is of a high quality and to adopt business-like management. The general public, funders, commissioners and grant givers, and service users themselves, expect to see evidence that projects are making a difference – and that they provide value for money.

Major policy changes have affected the sector. Increasingly, voluntary organisations have been asked to demonstrate proof of their performance, often against targets. But organisations are now asked not just to show that they work to high standards. They have to show how their standards can be guaranteed, and how well they perform in comparison to others. There is an increasing emphasis on understanding not just what works, but why and how it works. This level of accountability is here to stay, so the sector needs practical guidance on how to demonstrate its performance effectively.

The nature of the voluntary sector today means that organisations have to be clear about what they are trying to achieve, and need to develop very specific aims and objectives along with strategic plans and targets. Practical monitoring and evaluation describes a model of evaluation that recognises this emphasis on aims and objectives, while describing the diverse and complex range of evaluation approaches and activities from which organisations can benefit.
The partnership between monitoring and evaluation Although monitoring and evaluation work hand-in-hand, you need to distinguish between them.
Monitoring All organisations keep records and notes, and discuss what they are doing. This simple checking becomes monitoring when information is collected routinely and systematically against a plan. The information might be about activities or services, your users, or about outside factors affecting your organisation or project. Monitoring information is collected at specific times: daily, monthly or quarterly. At some point you need to bring this information together so that it can answer questions such as: How well are we doing? Are we doing the right things? What difference are we making?
At this point you are starting to evaluate. While monitoring is routine and ongoing, evaluation is an in-depth study, taking place at specific points in the life of the project.
Evaluation Evaluation aims to answer agreed questions and to make a judgement against specific criteria. Like other research, for a good evaluation, data must be collected and analysed systematically, and its interpretation considered carefully. Assessing ‘value’ – or the worth of something – and then taking action makes evaluation distinctive. The results of an evaluation are intended to be used. Although monitoring and evaluation are different, certain types of evaluation may involve a lot of monitoring activity. It can be difficult to carry out an evaluation unless monitoring data has already been collected. The more resources you put
into monitoring, the fewer you should need for evaluation. You can develop this close connection further by involving the people who evaluate your project in the design of your monitoring systems wherever possible.
Approaches to monitoring and evaluation The following questions are key to making basic decisions about monitoring and evaluation: Why are you doing it? Who is it for? What are the key issues or questions you wish to address? When will you do it? How will you do it? Who will carry it out? How will the information be managed and analysed? When and how will the information be communicated?
There are many different perspectives and approaches to evaluation. Answering the questions above will help you decide whether you wish to self-evaluate or to have an external evaluation. The questions will help you to think about what you want to focus on. For example, this could be:
your organisational structure and how it works
how you carry out your services or activities
how users experience the project
what changes or benefits the project brings about.
The basic guidance in Practical monitoring and evaluation focuses on helping projects to self-evaluate, although the principles are also useful for external evaluations. It is based on the ‘aims-objectives’ model widely used by funders
and voluntary organisations. In this model, evidence is gathered in relation to planned and clearly defined aims and objectives. However, evaluation can be more complex and varied. Case study evaluation, for example, is often used to develop good practice. This, and other approaches, are discussed in Further monitoring and evaluation, which also looks in more detail at the management of both self-evaluation and external evaluation.
Why evaluate?
The approach we take in this guide is that monitoring and evaluation not only measure how well you are doing, but also help you to be more effective. Funders and others who sponsor an evaluation will want to know whether a project has spent its money appropriately, or whether it provides ‘value for money’. There is considerable pressure from funders to ‘prove’ success. Many projects have to respond to this demand in order to survive. But there is a danger that an evaluation that is unable to prove success, or defines success unacceptably, will be rejected, and important learning from the evaluation will be lost. Evaluation can help you to manage and develop your work – and this is a valid purpose in its own right.
Evaluation has two main purposes:
Evaluation for accountability – to demonstrate achievements.
Evaluation for learning and development – using evaluation to learn more about an organisation’s activities, and then using what has been learnt.

The purpose of evaluation will change the type of questions asked. For accountability, the questions might be:
Has the project worked?
How has money been spent?
Should the project continue?

For learning, you might ask:
What are the project’s strengths and weaknesses?
What are the implementation problems?
Why have things worked, or not?
What are the good practice issues?

There may be some tension between these two approaches. But it is important for many voluntary organisations to find ways in which both the need for accountability and the need for learning can be met. Many funders are also becoming more interested not only in whether a project has worked, but why.

Evaluation should not only answer questions. It should also prompt fresh thinking within your organisation and in your contacts with external agencies. If you have asked the right questions, an evaluation will tell you not only what you have achieved, but also how you did it and what was most effective. It will help you find the areas where improvement or change is needed, and help you to provide the best service to users.

Who is evaluation for?
Many different people and groups may be interested in your evaluation. These ‘stakeholders’ may include:
the project management
staff and volunteers
funders and commissioners
users
members
policy-makers and decision-makers
other agencies and project partners.
How you involve stakeholders in the evaluation will affect how the evaluation findings are accepted and, in turn, how they are used. People are more likely to respond – and make changes – if they have
been personally involved in decision-making. Evaluation needs to answer their questions and use methods that they think are credible and that will provide adequate and convincing evidence. It is therefore useful to think about what part they will play in carrying out the evaluation, and how they want the evaluation findings reported back.
Quality and evaluation
The term ‘quality’ is often used in a vague, blurred way. If someone talks about ‘working on quality’, they may simply mean activities designed to improve the organisation and its services. When the term ‘quality assurance system’ is used, then it means something definite: a formal system you can use to raise standards of work and help strengthen your organisation. Quality is concerned with expectations about performance, setting standards and making improvements. Each quality assurance system will suggest particular areas for which you need to make judgements about performance against quality standards.

Many voluntary organisations have developed quality standards and systems. They monitor different aspects of their organisation and projects as part of a quality check or audit. This has led to some confusion about whether their quality system is in itself a form of evaluation. Quality systems sometimes use a simple form of self-assessment to find out how well a project is performing against quality standards. This might, for example, use people’s subjective judgements to reach agreement on improvements to be made. However, this process may not involve the level of proof or evidence that an evaluation might require. On the other hand, a more rigorous quality assessment or audit will need more substantial evidence about what the organisation is doing, how it is doing it, and its results. Here, you must have a monitoring and evaluation system to give you the information you need.
Monitoring and evaluation can therefore serve as important tools for improving quality. Work on quality, and on monitoring and evaluation, should be co-ordinated and feed into each other. However, it is helpful to recognise them as two different things.
The quality and evaluation link
[Diagram: the quality assurance system assesses the project against its own expectations and standards; the monitoring and evaluation system assesses the project against a varied range of agreed evaluation questions, as well as other information needs. The quality assurance system asks the monitoring and evaluation system for data, and the monitoring and evaluation system provides data and judgements in return.]

Example: A quality assurance system may state that ‘the organisation should seek regular user feedback on its services’. The monitoring and evaluation system
can give you data to help you show that you meet that standard. It can also help you understand what that feedback was, and the reasons behind it.
Self-evaluation
When an organisation uses its own expertise to carry out its evaluations this is known as self-evaluation. Monitoring and evaluation is built into the everyday life of the project and is an essential part of project management and development. It should not be confused with individual staff self-assessment or appraisal. Funders increasingly rely on self-evaluation to provide important information on what you are doing and how effective you are.

Self-evaluation may be carried out by one member of staff, by an internal team responsible for evaluation, or responsibility may be spread throughout an organisation. The emphasis is on user-friendly techniques and methods, and on collaboration and participation.

The use of self-evaluation techniques allows the people involved – managers, staff or users – to take a fresh look at the project. Evaluation findings should provide information for service reviews and planning cycles, and can lead to rethinking the very aims and objectives of the project. Were objectives too limited or overambitious? Did they reflect the needs of members of the community?

Self-evaluation processes and findings need to be carefully managed. Here are some issues to consider:
Self-evaluation should be integrated into work activities.
Self-evaluation will need resources, both in terms of people’s time and money – without these, the quality of the evaluation will suffer.
Findings may stir up discomfort or conflict within the organisation.

Self-evaluation cycle
[Diagram: the self-evaluation cycle – needs assessment; plan: set aims and objectives (what do you want to achieve and how?); set performance indicators to measure your progress; deliver work programme; monitor; evaluate; utilise: review project and implement recommendations.]
Challenges for voluntary sector evaluation The government drive for ‘evidence-based’ policy has brought increasing pressures to establish a direct relationship between a programme or project and some measured outcomes – demonstrating ‘what works’. There are a number of problems for voluntary organisations in demonstrating outcomes and impacts: Who will decide how much change is needed for a project to be successful? People may have different perspectives. And it may be different for different projects. How does a project which demonstrates less change for more people compare to one which shows dramatic benefits for a few? Community change initiatives are putting more value on joint working, and yet there is a desire to show that each individual project has ‘caused’ a result.
Changes in complex initiatives may also take place over a longer time period, although funders are looking for outcomes earlier in the process. Outcomes sometimes appear to worsen over time. Clients may report their views more accurately as their trust or awareness grows, and this can lead to client outcomes appearing to be more negative. Some benefits to the target group are very hard to define and measure. It can be particularly hard for organisations involved in campaigning or in education and prevention work to monitor outcomes. The work is often long term and a small voluntary organisation cannot measure it economically. It is often difficult before a project starts to collect good data against which to measure change.
There are, then, problems in proving that changes are attributable to the programme or project itself and not to some other cause. This is especially true when a project is part of complex and collaborative community initiatives, in which it is difficult to isolate or show the effect any one group is having. There is a particular problem for projects which try to measure outcomes for campaigning and advocacy work. The organisation can measure outputs, such as the level of participation in campaigning, and there is often information publicly available about policy change. However, unless policy-makers are willing to admit that campaigning or advocacy has influenced the change, it may be difficult or impossible to prove the link between the two. This does not mean that such projects should not try to find out how effective they are. There is value in describing the process of the work, and how well particular strategies are working. For campaigning
organisations, building a persuasive evidence base and forming alliances with other advocates for change and those in a position to influence policy may lead to developments in the longer term. Recording and reporting on those processes and focusing on important intermediate outcomes may be more useful than trying to measure more long-term ones. It is also important to research and acknowledge other factors that may have affected the changes that have happened.

For the voluntary sector to have confidence in evaluation, there needs to be clarity about how it can be used to build and develop organisational capacity. It is also important to understand where there are limits on what can be reliably or usefully measured within the time and resources available. Clarifying your success criteria, maintaining good monitoring systems, and achieving good quality data collection and reporting are essential for credibility.

This guide leads you through several important processes: how to clarify your aims and objectives, how to identify indicators of success, and how to develop and implement simple and effective monitoring and evaluation systems. It also explores the scope and potential of evaluation activities that the voluntary sector can use and the wide range of approaches and techniques that can be applied.
section 1 planning

Basic planning 15
Understanding and clarifying your project 15
Performance indicators 21
Setting targets 23
Developing a year plan 24
Planning for monitoring and evaluation 26

Further planning 27
Linking evaluation to project planning 27
Designing an evaluation 30
basic planning Basic planning describes the foundation blocks on which monitoring and evaluation are built. This section tells you how to make clear statements about what your organisation is doing, with whom, and why. It shows you how to word these statements so that your activities, and the resulting changes, can be measured and reported on. It looks at a simple year plan and sets out the points you must remember when planning a monitoring and evaluation system.
This section describes a sequence of planning steps:
Clarify project aims, objectives and values
Clarify the target group
Identify outputs and outcomes
Set performance indicators
Set targets

Understanding and clarifying your project
Stakeholders expect clear statements about the reason for the project’s existence and what it is doing. This clarity is also needed to plan the project’s work and to monitor and evaluate it. These statements are usually expressed in the following way:
mission or overall aim – your organisational or project purpose
values – the organisational principles you work to
specific aims – the changes or benefits you intend to achieve
objectives – the activities you carry out.

Think about which of your stakeholders will be involved in thinking these through:
the trustees
staff and volunteers
users
other community members
specialists or experts in the field.

You also need to be clear about the target group or target groups for your activities or services. If these statements were not clearly developed when the project was first set up, it will be helpful to do this as a first step in monitoring and evaluating the project.

Developing a mission statement
A mission statement expresses the overall purpose of an organisation. It is sometimes called a strategic or overall aim. It defines and summarises the long-term change or difference you want to make. Because it expresses this in broad terms, it will need to be supported by more specific aims. If no mission statement exists, you may want to develop one. It is often helpful for staff and trustees to develop the statement together. It should be short and clear, and cover the essential aspects of your work. Think about the people you work with, the nature of the work and the broader or long-term changes that you want to result from your work. It is useful to check whether your mission statement accurately and actually describes the change you want to make, and then to review it every three to five years, particularly if there are obvious external changes that might affect it.
Example mission statement: Thursbury Community Theatre Project The Community Theatre Project works with young people in schools, community centres and other community venues. This is its mission statement: To enhance the lives of young people in Thursbury through the integration of drama into education and community activities.
Expressing values Mission statements usually include some core values that describe an organisation’s approach when carrying out its activities, for example, the way you work with service users. Values may also be expressed in a separate statement. Organisations often use words such as these to express the way they work: flexible, respectful, caring, empowering, providing equal opportunity. Various expressions are used to describe the basis of relationships in the organisation: involving users in service planning, working in partnership, collaborative. People inside an organisation often make assumptions about organisational values, sometimes wrongly. Be sure to discuss values openly, so that everyone can agree them. You can then incorporate the statement into your promotional material and any other descriptions of your project, for example materials prepared for funders and other agencies. Values that have been agreed and clearly stated can be integrated into everyday work, and monitored to show how successfully you are working within them. Funders sometimes have a statement about their own values, and they may expect you to demonstrate that you are working towards them. This may be a problem if no-one decides how these values might affect the way the project works.
Defining your target group
Your target group is the main group or groups in society that you work with. It is important to be clear about who they are. They could be identified by geographical area or a place, such as a housing estate or school, by age, gender, sexual orientation, ethnicity, or by health status or disability. You may also want to identify socio-economic groups, such as single parents, children excluded from school or homeless people.

It is helpful to identify and distinguish between your first-line users and your end users. Your first-line users may not be your actual target group, but may be their parents or carers, or other agencies providing services for them. The end users will be the children or people being cared for. You may also have ‘indirect users’, such as other agencies which refer people to you. It is important to be clear about who these different groups of users are.

Needs assessment
The starting point for any project is to understand the needs of the group it plans to work with, whether they are potential users, partner agencies or those it wants to influence. A ‘needs assessment’ is a form of evaluation activity. It will identify the existing problems in the community, how bad these are, the services available and the needs that are not being met. Look for local data and any relevant national statistics and reports, and talk to potential users and other stakeholders to get information on:
the physical and economic characteristics of the project’s environment
the nature of the target population and where they are
existing community groups and projects, current service provision and gaps in provision, and whether partnerships are possible
problems and possible solutions.
You can then plan what is feasible and clarify what information you need. This avoids collecting large amounts of data that cannot be analysed. Target your questions to different populations or groups. ‘Hard-to-reach’ groups may be the very groups you want to work with. When contacting ethnic minorities, talk to community workers and members as well as community leaders. Think about using translators in your needs analysis, and talking widely within the community to reach ‘hidden’ groups, including women, young people, smaller religious or ethnic groups and disabled groups. Different groups or individuals may well have different ideas about what is needed. So ask about problems and the available resources and services, rather than for lists of needs or solutions which may give you contradictory answers or raise unrealistic expectations.
Aims and objectives Your mission or overall aim is likely to be too broad a statement to allow you to plan your work in detail, or to provide guidance on what you might monitor and evaluate. Specific aims – statements about different aspects of your main purpose – will allow you to think clearly about what you want to achieve. Aims should show an understanding of the needs of the target group. These more precise statements are also easier to evaluate. Specific aims are likely to remain relevant for several years, but should be reviewed regularly. Aims and objectives are linked in the following way: Aims are the benefits or changes you are trying to achieve. Objectives are the methods or the activities by which you achieve your aims.
It is important to make this distinction in order to be clear about what you are doing, why you are doing it, and to be able to assess your achievements. Projects often collect information fairly randomly because they are not clear about what their key activities are. You will usually set your aims and objectives as part of the project planning process. If you did not, it is never too late to clarify what you are doing. It is useful anyway to review them when you start your self-evaluation activities, to make sure they are clear, that they continue to be relevant, and that they are agreed and understood by everyone. Try not to have too many aims and objectives and don’t make them vague or too ambitious.

Aims will usually describe:
the people the service or activity is intended for
the intended benefits, or changes you expect to make, for your target group.
Aims may also identify the area in which the project will work. Although they should be clear, aims do not specify exact details of what will be done to achieve them, nor do they spell out a timescale. Example aim: Thursbury Community Theatre Project The Community Theatre Project has three specific aims. Its first is about the performance work it does with young people, that is: To change young people’s attitudes about social issues through drama. Objectives identify what the organisation will do and the services it will provide. Each aim will have one or more objective, and some objectives may relate to more than one aim. Example objective: Thursbury Community Theatre Project The following objective of the Community Theatre Project relates directly to the aim of changing young people’s attitudes, describing what it will do to achieve it: To perform plays in schools and community venues in co-operation with other voluntary organisations.
The language you use will help you keep your aims and objectives distinct.
Aims tend to start with words that indicate a change: to increase, to reduce, to expand, to enable, to develop, to improve.

Objectives tend to start with words that indicate activity: to organise, to conduct, to provide, to distribute, to produce, to set up.

External aims and objectives relate very specifically to the organisation’s mission and the tasks it is trying to achieve. The triangle diagram below shows how to organise aims and objectives. It shows how all the aims and objectives of the Community Theatre Project feed directly into one main aim or mission statement. The triangle should describe all the project’s external work, making it clear what a project does and why. Keep the number of aims and objectives limited and make them as focused as you can.
To fully understand the diagram, it is helpful to understand a little more about the theatre project. It works throughout Thursbury, but focuses its efforts on four wards where there are a number of failing schools and high levels of teenage crime and drug problems. The project has a coordinator, an administrator and two project workers who research social issues and liaise with schools, social services and voluntary organisations working with young people.
The Community Theatre Project
Overall aim: To enhance the lives of young people in Thursbury through the integration of drama into education and community activities

Specific aims:
To change young people’s attitudes about social issues through drama
To enable young people to express themselves through dance and drama
To enable teachers and community workers to use drama in teaching and play activities

Objectives:
To perform plays in schools and community venues in co-operation with other voluntary organisations
To run dance and theatre workshops for young people
To hold theatre skills courses for teachers and community workers
To work with schools on integrating drama into the school curriculum
You can do the same thing with internal or organisational aims and objectives. These are mainly concerned with helping the project achieve its external objectives. They relate, for example, to management, finance and personnel. Once you have described your project in clear statements, with an overall aim and more specific aims and objectives, you will be able to describe your project activities and their effects in more detail, and in
ways that can be evaluated. The terms output, outcome and impact are used. These relate directly to aims and objectives – what you plan to do – as shown in the diagram below. Inputs are the resources you put into the project, such as staff and volunteer time, funding and technical resources. Inputs will directly affect the quality and level of your outputs. These in turn will affect your project outcomes.
[Diagram: why we do it – the overall aim relates to impact, and specific aims relate to outcomes; what we do – objectives relate to outputs, which are generated from inputs.]
Outputs
The outputs of the Community Theatre Project are the work generated by the project, and relate to the project’s objectives. The following are examples:

Objective: To perform plays in schools and community venues in co-operation with other voluntary organisations
Outputs: Meetings with other agencies; rehearsals; publicity leaflets; performances

Objective: To run dance and theatre workshops for young people
Outputs: Weekend workshops; one- and two-day workshops in schools; after-school courses; holiday schemes; productions

Objective: To hold theatre skills courses for teachers and community workers
Outputs: Introductory talks; day workshops; week-long courses

Objective: To work with schools on integrating drama into the school curriculum
Outputs: Meetings with teaching and support staff; curriculum discussions; classroom assistance
Outcomes
The outcomes of the Community Theatre Project are the changes or benefits that take place as a result of project activities. These relate to the specific aims. It is important to assess only factors which the project can reasonably control or make happen.

Aim: To change young people’s attitudes about social issues through drama
Outcomes: Greater understanding of social issues raised; greater awareness of own attitudes; greater tolerance of other lifestyles and viewpoints; changed views on social issues; greater use of drama with young people by other voluntary sector services

Aim: To enable young people to express themselves through dance and drama
Outcomes: Increased confidence of young people; increased skills in dance and drama; increased sense of commitment and responsibility; development of team-working skills; young people continue to develop theatre skills through further opportunities

Aim: To enable teachers and community workers to use drama in teaching and play activities
Outcomes: Ability to link drama to curriculum content; acquisition of basic theatre skills and their application to teaching; greater use of drama in the school curriculum
Impacts
Impacts are the broader and longer-term changes relating to your overall aim, or mission. The Community Theatre Project would have to demonstrate that it is meeting the overall aim of changing the lives of children through the use of drama.

[Diagram: Inputs lead to Outputs, which lead to Outcomes, which lead to Impacts]
Performance indicators
There is one more key step in laying the foundation for monitoring and evaluating your project. To check how your organisation is performing, you will need to set performance indicators. They are the criteria or ‘clues’ on which to base judgements about the progress and success of the project. Performance indicators:
let stakeholders know what they can expect from the organisation
provide a focus for managers
help you to focus on what you need to monitor and evaluate
help a project to judge its achievement and effectiveness
help comparison between projects.
Who defines the performance indicators for your project is important, as different stakeholders will have different views about what counts as success. If you involve them you will have a wider and richer perspective. Brainstorming can be a useful way to find out how different people understand success or failure. It may be helpful to look at indicators that similar projects are using. Funders and commissioners may want to have performance indicators that are easily measurable or that link to their own strategies. This could mean measuring things that are not at the heart of the project. So discuss your key activities and intended benefits with funders early on. Evaluation priorities should not take over project activities and divert you from your planned priorities. You may want to involve users in developing indicators. Users have a unique and valuable perspective on success. But be clear yourselves, and with users in advance, what weight will be given to their views when final decisions are made about indicators.
It should take about a day to work with a small user group. Here are some tips to help you:
make sure everyone has a clear list of project aims and objectives
develop some user definitions of what change or success will look like
discuss any indicators staff have already developed.
Users and others may not be familiar with monitoring and evaluation, so try to choose your language with care. Some projects, for example environmental groups, may not provide a direct service to users or may have other important aspects to their work. In this case, you still need to identify your stakeholders and to think carefully about what will indicate success.
Types of indicator
There are a number of different types of indicator. The most common are output and outcome indicators. These are often confused with impact indicators, but there is a difference.
Output indicators – these demonstrate the work the organisation does and show progress towards meeting objectives.
Outcome indicators – these demonstrate changes which take place as a result of the organisation’s work, and show progress towards meeting specific aims.
Impact indicators – these demonstrate broader, longer-term change, often relating to the overall aim or mission of the organisation.
Output indicators
Output indicators can be set for:
quantity (how many services or products)
take-up (used by how many people)
accessibility (used by what type of people)
quality (including user satisfaction)
cost.
There will be a great number of possible output indicators, so you will need to prioritise and limit them.

Example for the Community Theatre Project
Output: Meetings with agencies
Indicators: Number and type of agency contacted and met; nature and length of relationship

Output: Publicity leaflets
Indicators: Number of leaflets printed and distributed; locations distributed; quality of leaflets; number and type of contact generated from publicity

Output: Performances
Indicators: Number of performances; subject matter; profile of schools and community venues; cost; source and level of sponsorship; satisfaction levels

Output: Weekend workshops for young people
Indicators: Attendance levels; profile of young people; level of participation; drop-out rate; proportion of cost covered by users and sponsorship

Output: Curriculum discussions
Indicators: Number of preliminary and follow-up discussions; numbers and types of teaching and support staff involved
Outcome indicators
Outcomes and their indicators are often confused, but are quite distinct from one another. Agree your expected outcomes by thinking about what success would look like for each aim, that is, what change would happen. Develop indicators by agreeing what would demonstrate that this change had taken place.

Example for the Community Theatre Project
Outcome: Young people have greater understanding of the social issues raised
Indicators: Level and extent of discussion generated after the performance; level of increased understanding expressed by young people; extent of changed attitudes and behaviours reported by teachers and play workers

Outcome: Increased sense of commitment and responsibility
Indicators: Individual attendance levels at workshops and rehearsals; time keeping; level of support and encouragement given to each other; numbers of young people following classes through to performance
Limit the number of performance indicators for each specific aim. Think about the resources you have available and collect a focused range of information against your evaluation questions. Attendance (for example, at a training course, a clinic session or on a programme) is an output indicator, indicating access to a service. It is rarely an outcome indicator because no change has happened. The following example, for a health education project, illustrates an exception to this: Aim: ‘to encourage users to take more responsibility for their health’ Outcome: ‘the increased use of services’ Indicator: ‘attendance levels at clinics’.
However, if the aim was, for instance, ‘to improve the health of Bangladeshi communities’, outcomes should relate to those health improvements, not just attendance at health centres.

Many aims may be abstract or intangible. For example, ‘to change young people’s attitudes about social issues’ is an aim that could be difficult to measure, as it does not lend itself to being counted. In areas that cannot be measured directly, you may need to use indicators that assess the change approximately. These are called ‘proxy’ indicators. For example, time keeping and attendance levels, taken together with other evidence, may be an indication of ‘an increased sense of commitment and responsibility’, the expected outcome.

When indicators are difficult to assess, it could be helpful to get advice from practitioners in the field. There are also developments in the creation and validation of different outcome assessment tools and measures, for example, of well-being. You may also be able to find some examples of indicators contained within indicator banks relevant to your work area. Other agencies may also be able to offer examples of indicators, or of self-report questionnaires
or other assessment tools. You can change indicators if you find them too difficult to assess or if they are not meaningful. However, there should be some stability if you want to be able to demonstrate patterns, trends or changes over time.
Setting targets Often it will be useful to compare monitoring data against targets that you have set in advance. Targets are expected levels of achievement in relation to inputs, outputs, outcomes and impacts against which you can measure performance. Indicators on their own have no specific value. For example, if an output indicator for the Community Theatre Project is ‘number of performances’, you do not know whether the project would be successful if it gave five performances or 50. To help define success, you may need to qualify the indicators by setting targets. Targets specify the level and quality of activities and achievements that you expect or hope for. They may be needed for funding applications, but there are a number of reasons for setting targets. They can help with operational and individual work planning, help the project to improve performance, and help to demonstrate and improve value for money. They will provide a ‘benchmark’ or standard to monitor and evaluate performance or progress against, and to help document progress towards achieving aims and objectives. At the same time, projects need to be wary of allowing targets to drive activity in a way that might limit the time given to other areas of work or less tangible achievements. Targets are often set for outputs. The following examples demonstrate targets for a number of different projects.
Project: Community development agency
Output indicator: Number of new projects set up
Output target: Three new projects during the year

Project: Drop-in service
Output indicator: Number and profile of users
Output target: Increase in number of women attending by 20% in the next year

Project: Training project
Output indicator: Level of satisfaction with courses
Output target: 90% of users record overall satisfaction as high or very high

Targets may also be set for outcomes. For example:

Project: Employment training programme
Outcome indicator: Percentage of trainees getting employment
Outcome target: 65% of trainees placed in full-time jobs within six months of finishing the programme

Project: Alcohol advisory project
Outcome indicator: Amount of weekly units drunk
Outcome target: 75% of clients reducing the number of weekly units drunk during the first six months of counselling

Project: Young people’s counselling project
Outcome indicator: Numbers of young people remaining in the family home
Outcome target: 60% of young people remaining in the family home
Objectives need to be Specific, Measurable, Achievable, Realistic and Time-based. Developing targets for specific objectives will make them SMART.
Developing a year plan Once you have stated clearly what your project is, a year plan, setting out what you will achieve over the year, is an important tool for monitoring and managing your project. Some funders require you to submit one, particularly in your first year.
Setting milestones and targets
Milestones are particular planned achievements or key events marking a clear stage in completing a main phase of the project. You may agree these with your funder and set them to a timetable. The following are examples for a new project:
trustees appointed
workers in post
promotional activity underway
partnership agreements in place
conference held
first trainees enrolled or completing training.
Many organisations will have an agreement with their funder to give a certain level of service, expressed as targets (see above). High targets can help to provide vision and ambition, and to obtain funding. Low targets might be
more realistic and achievable. Finding the right level is important for maintaining credibility and for staff and volunteer morale. They should not add up to more than your capacity, so take into account:
work of individual staff members fits into overall work plans. The simple plan below shows the key events and timetable for the first six months of a counselling project. A more complex year plan will link proposed activities with a budget and set out a clear plan of action against each objective, showing the following:
your resources what you have achi eved before needs assessments
what activities will take place
what other organisations have achieved.
how they will be carried out
When setting targets, make sure that they cover your key activities. Allow for start-up time, staff holidays and other things that limit your activity. It is a good idea to set your milestones and targets with the staff responsible for achieving them. There will be greater commitment to reaching the targets set, and a better understanding of how the
who will be responsible for what – and when what resources are needed.
By collecting quarterly figures against your targets, you will have some warning if your targets are not likely to be met. If you give good reasons for this, preferably early on in the financial year, many funders will be happy to renegotiate targets.
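If you keep your quarterly figures electronically, even a very simple calculation can flag activities that are falling behind a pro-rata share of the annual target, giving early warning of the kind described above. The sketch below is illustrative only; the activity names, targets and figures are invented for the example.

```python
# Illustrative only: compare quarterly figures against a pro-rata share of an
# annual target, so shortfalls are flagged early enough to renegotiate.
annual_targets = {"advice sessions": 400, "new users": 120}
actuals_by_quarter = {
    "advice sessions": [80, 85],   # quarters reported so far
    "new users": [35, 20],
}

for activity, target in annual_targets.items():
    quarters_reported = len(actuals_by_quarter[activity])
    actual_so_far = sum(actuals_by_quarter[activity])
    expected_so_far = target * quarters_reported / 4
    status = "on track" if actual_so_far >= expected_so_far else "behind - review"
    print(f"{activity}: {actual_so_far} of {target} "
          f"(expected about {expected_so_far:.0f} by now) - {status}")
```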
Counselling project – year plan (first six months)
The plan sets out the following planned activities month by month from April to September:
Project coordinator in place
Project administrator in place
Database installed
Counsellors appointed
Strategic planning day
Away day to discuss monitoring and evaluation systems
Promotional activity to schools
First users seen
Review of project start-up
Planning for monitoring and evaluation
Once you have a year plan for your work, you can set out a monitoring and evaluation plan. In this, you may well be prompted by what a funder wants you to monitor. Think also about other information that you need to help you manage the project efficiently and effectively, and about who else might be interested in learning about what you are doing.

In planning for your monitoring and evaluation activities, consider questions that will help you prioritise. For example:
What is most important to evaluate at this stage of the project?
What evaluation questions do you want answered so you can plan next year’s work?
What are the most significant outputs and outcomes?
Do your IT systems and resources help you manage your information sufficiently?
How can you best build self-evaluation activities into your routine work?

Think about how you will carry out monitoring and evaluation, who will do it, and what methods you will use. You also need to consider management, support and supervision issues. For example:
How will the information be collated and analysed?
What skills are needed?
How can you make sure that staff will collect the information – and collect it consistently?
When is the information needed?

Even though funders may not recognise the need for funding purely for evaluation, try to build costs for evaluation into your budget right from the start. This will avoid having to divert scarce resources from other activities. If funding is uncertain, it may be difficult to plan your evaluation realistically.

The timing of the various stages of an evaluation is important. Build in time to process the data, that is, to organise it, and possibly enter it onto the computer or in paper files. Time the presentation of the evaluation report carefully so that it is available when decisions have to be made about the project, otherwise opportunities for action may be delayed or lost. Plan how to spread your evaluation findings to a wider audience.

Your evaluation is more likely to succeed if:
both evaluation and the project itself have good management support
staff and volunteers understand and value monitoring and evaluation tasks
enough time is set aside for monitoring and evaluation activities
monitoring and evaluation plans are realistic
evaluation is built into planning cycles, job descriptions and budgets.

Further reading:
Cupitt, S and Ellis, J (2007) Your Project and its Outcomes, Charities Evaluation Services, London
Davey, S et al (2008) Using IT to Improve your Monitoring and Evaluation, Performance Hub, Charities Evaluation Services, London
Basic planning looked first at how to set out clearly your project aims and objectives. It then moved on to describe tools that will help you to assess how well you are doing, including the use of performance indicators and targets. Finally, but most importantly, we raised the need to ask questions about the information you and your stakeholders need and the importance of working out how that information will be collected and communicated.
further planning Basic planning demonstrated the importance of defining project aims and objectives, clarifying the project inputs, outputs and outcomes, and understanding the links between them. It also highlighted the use of performance indicators and of monitoring and evaluation planning procedures. Further planning looks at more techniques both for project planning and for planning monitoring and evaluation.
Linking evaluation to project planning Your first planning stage is a good needs assessment, which will make clear recommendations based on the needs, problems and solutions it has identified. A needs assessment can also provide baseline data, against which you can measure changes during the lifetime of your project. Once you have identified needs, project planning is carried out through two different planning levels: strategic or business planning, and operational planning. Strategic planning and business planning set out the main directions of the project and its financial health for a three-to-five-year
period. Operational plans flow from strategic and business planning and set out a programme of work for a one- or two-year period. It is helpful to link your planning for monitoring and evaluation into your project planning.
Logical framework analysis One useful planning tool which makes the link between project design and evaluation is the logical framework analysis. This is used for planning and managing projects in international development work. It is a tool that describes the project and its component parts in a logical and linked way. The logical framework, or logframe, is a matrix showing:
the project’s aims and objectives
the means required to achieve them
how their achievement can be verified.
The logical framework matrix

Project structure | Indicators | Means of verification | Assumptions and critical factors
Broad aim | | |
Project purpose | | |
Outputs | | |
Activities/inputs | | |

Adapted from: Gosling, L and Edwards, M (2003) Toolkits: A Practical Guide to Monitoring, Evaluation and Impact Assessment, Save the Children, London
Cracknell identifies the vertical logic that traces cause and effect from project inputs through outputs to achieving project aims, and a horizontal logic which builds in how achievement will be verified through indicators and evidence. One danger is that it becomes too ritualistic or mechanistic. Another danger is that there may be too much emphasis on easily quantifiable indicators, leaving little room for flexibility and ways of measuring unintended effects.
The logframe may use some terms (output, purpose and aim) differently from the way we use them in this guide, but this does not need to be confusing. The main advantage of the logframe is that it links monitoring
and evaluation directly with project management. Another useful element in the logframe is that important assumptions, without which progress or success would be compromised, are identified and built into project design. These might include government policy or resources, or the actions of other agencies or, within the project, the recruitment of qualified staff. Outside factors often have an important influence on the success of a project and should be identified early on. Periodic checks are made to see whether assumptions have been realised. There are three levels of planning for evaluation:
Level 1 Evaluation strategy – Evaluation priorities over a period of time
Level 2 Evaluation planning – Making evaluation operational
Level 3 Evaluation design – Approaches and methods
Developing an evaluation strategy Once your strategic planning is in place, it is useful to develop a planned approach to evaluation activities. This is your evaluation strategy. It presents agreement on what will be evaluated within the organisation over a period of time, taking into account time, money and effort. You will need to evaluate at strategic points in the life of a project. You will also have different information needs at different times, for example:
to examine new or innovative projects
to explore something that is not working well
to further investigate user or other stakeholder feedback
to provide good evidence of a project’s worth when its current funding is coming to an end and you need to ask for continued funding.
An evaluation strategy needs to consider the following questions: What is the purpose of monitoring and evaluation? Who are the stakeholders and audiences? What are the key questions over a period of time? What types of evaluation will be needed at different times? What resources are available?
It is helpful to think ahead and map out the evaluation activities you might need over the period of your strategic plan, or longer. But there will be some you will not be able to plan in advance. Remember that if you identify your evaluation needs in advance, you will be able to plan monitoring systems to systematically collect the data you need, and in a form that will address the evaluation questions. This will help to avoid collecting unnecessary data.
Example: Mapping evaluation activities over time

When | Type of evaluation | Who
Year 1 | Needs assessment | Student researcher
Year 1 | Baseline data | Student researcher
Year 1 | End of year report | Internal
Year 2 | Implementation (process) report | Internal
Year 3 | End of year progress report | Internal
Year 4 | End of year progress report | Internal
Year 4 | Good practice research | Internal plus external evaluator
Year 5 | Outcome evaluation | Internal plus external evaluator
Developing an evaluation strategy has a number of benefits. It:
links evaluation activities to other management activities, such as year plans and strategic plans, so that the project has the right information at the right time
helps to plan for the resources needed to cover the costs of the evaluation activities
encourages thinking about evaluation as a longer-term process, with a different focus for evaluation activities in different years.
Developing a monitoring and evaluation plan Your monitoring and evaluation strategy will provide an overall framework to guide monitoring and evaluation activities within a broad timeframe, in line with your strategic planning or even the lifetime of the project. Basic planning discussed how, when you draw up your project’s year plan, you also need to think about how you will carry out monitoring and evaluation over the year. For this next level of planning you need to ask yourself: When will different data be collected? Who will carry out and manage the activities? How will data be collected, stored, analysed and presented? How can routine monitoring activity support evaluation? How will evaluation findings be used in short-term planning and decision-making, quality reviews and other key processes?
Focusing your evaluation activities is vital. This focus, in turn, shapes the questions to which you want answers, and these should now be clearer. The following questions are quite different from each other:
How are resources being used? How appropriate is the management structure? How well are we meeting identified needs? How do we fit within a network of services? How well have we met our expected outcomes?
Designing an evaluation Your evaluation strategy will describe the evaluation activities needed at different points in your project management cycle and over the lifetime of the project. You will then need to begin to implement your monitoring and evaluation strategy by planning activities over a shorter time. Each separate evaluation will also need designing . This will focus on the approaches and methods to be used, decide who the respondents will be and set out a detailed timetable for activities.
Steps in evaluation design When you design your evaluation, think about the technical skill level of the evaluator or evaluation team and how much data you will be able to analyse and use. List the evaluation questions and then look at possible data collection methods, both quantitative and qualitative. Your choice of method should take account of the likely quality of the results and the practical issues involved in collecting the data.
Design matrix It is helpful to summarise the design of your evaluation activities in a ‘design matrix’. This usually includes the following elements:
key evaluation questions
indicators – the things you will assess to help you to make judgements
data collection methods
data sources
data collection timetable.
The example matrix below shows part of the design options for the evaluation of St Bernard’s Homes, which provide hostels and shared accommodation for older people. Once the options have been considered, it may be necessary to make choices about what to include in the final design, so that the evaluation is manageable.
The mission, or overall aim, of St Bernard’s Homes is as follows: To improve the health, social and housing status of older people living in St Bernard’s hostels and shared accommodation. The matrix is completed here for only one of the aims, and the same approach should be used for each additional aim.

Project aims and objectives: Aim: To enable older people living in St Bernard’s accommodation to make informed choices about their housing, social, health and care needs
Key question: To what extent did older people acquire benefits not previously received and use health and care services not previously received?
Indicators: Number of older people with Housing Benefit; Total amount of Housing Benefit received; Numbers receiving health care; Types of services received; Numbers receiving care packages; Types of care packages
Data sources: Users; Staff; Other agency personnel; Case records
Data collection methods and timetable: Analysis of case records (April/May); Financial analysis of records (April/May); Interviews (June); Questionnaire (May/June, analysis early July)

Project aims and objectives: Objective: To provide information and advice to older people in St Bernard’s accommodation on available housing benefits, personal care services, health services and life skills training
Key questions: How well was the service used? How appropriate was the service to users’ needs? How did statutory and independent agencies participate, and to what extent?
Indicators: Number of older people who have had needs assessments; Level of user satisfaction; Number of people contacting statutory and independent agencies; Extent and type of working relationships with other agencies
Data sources: Users; Staff; Other agency personnel; Needs assessments; Service participation monitoring; Case reviews; Meeting minutes
Data collection methods and timetable: Desk research (April/May/June); User feedback forms (continuous); Interviews (May/June)
It is better to complete an evaluation successfully with a very simple design than to only get part way through a complex one. If you put down on paper all your possible evaluation activities, this makes it easier to assess whether you have enough resources and whether your design is realistic. Think about the timetable of monitoring and evaluation activities and whether they can be brought together. Your management information system may well contain a lot of information, for example client progress reports, which may be useful for your evaluation. You may be able to set up a single, common database, which can be used for both monitoring and evaluation.
Too much emphasis on achieving aims and objectives may mean that you overlook unintended effects of the project or the effect of external events, such as local or national government policy or closures in other local services. Your evaluation design should be flexible enough for these issues to be considered in the analysis and interpretation of the findings. This section has covered the complex planning activities for evaluating your project. It has shown how the different levels of planning for evaluation start from, and can be built into, strategic and operational planning. Long-term planning for evaluation activities, how you turn your evaluation plans into action, and the careful design of approaches and methods, are all critical to success.
The levels of planning link together: needs assessment; project strategic plans; evaluation strategy; project operational plans; evaluation plans.
Further reading
Cracknell, BE (2000) Evaluating Development Aid: Issues, Problems and Solutions, SAGE, London
Robson, C (2000) Small-scale Evaluation: Principles and Practice, SAGE, London
section 2 monitoring
Basic monitoring
Setting up your record keeping
Feedback from users
Outcome monitoring
Analysing monitoring data
Reporting monitoring data
Monitoring your monitoring
Further monitoring
Developing more complex monitoring systems
Process monitoring
Monitoring for quality
Impact monitoring
Monitoring context issues
Data management
basic monitoring Basic monitoring discusses how to set up your record keeping. It looks at the different elements of your project that you might monitor. These include inputs (the resources you put into your project), and your outputs, including the scale and pattern of use, and financial monitoring. It discusses using an outcomes approach and developing outcome monitoring systems. It looks at good practice in monitoring user profile information, introduces feedback from users, and suggests a framework for analysing and reporting your monitoring information.
Setting up your record keeping Monitoring is the routine and systematic collection of information so that you can check regularly on your progress. Beware of collecting information randomly without thinking through what information you really need. Planning is vital in helping you to monitor in a focused and systematic way. It is not realistic to collect information for all the possible performance indicators you identified. This is why it is important to discuss with your stakeholders, particularly trustees, users and funders, the key indicators for which you are going to collect information. Monitoring should give you enough information to tell you what is working, identify problems, tell you who is using your services and how, and help you plan to meet changing needs. It also means you can provide relevant information to other agencies, develop useful publicity, and tell funders and other stakeholders about progress. Write monitoring tasks into job descriptions and planning cycles to make sure that responsibilities are spelt out and that everyone is clear about the importance of monitoring. Setting up record keeping, whether this is paper based, computerised, or both, is a key part of project start-up activities and should
be done early on. It is helpful to think through at an early stage how you will collect the data and how it will be organised. Records might include data on project activities and users, staff and management meetings, project funding and finances, and supportive agencies and individuals. Develop some interesting ways of collecting data. For example, photographs can capture activities and events, or show changes in physical environments.
Whenever possible, input information such as membership data, publications sales and advice records directly onto a computer database. Make sure someone is responsible for dating, filing, storing and collating monitoring data collected on paper. You also need to make someone responsible for inputting and analysing the data held electronically. Get good advice on how your database can best serve your information needs. Think about the links you want to create between different types of information. For example, you may want to be able to link user profile data with attendance at different activities. It may be useful to talk to another project with a good management information system and to a database expert, if possible. There are a number of IT systems available that are designed to facilitate monitoring and evaluation, and an increasing number that will help you to manage outcomes information. The following questions are important to consider when you start monitoring: What depth and type of information do you want? How can you check the reliability of your information? How much time can you afford to spend? How much will it cost? How will you analyse the information?
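The kind of link described above can be illustrated with a small sketch. This is not a recommendation of any particular database package; it simply shows how a shared identity code lets you break attendance down by profile category. All the codes, names and figures are invented for the example.

```python
# Illustrative sketch of linking user profile data with attendance records
# through a shared user ID, as a database would.
from collections import Counter

profiles = {                      # user ID -> profile information
    "U01": {"age_group": "18-25", "gender": "female"},
    "U02": {"age_group": "65+",   "gender": "male"},
    "U03": {"age_group": "18-25", "gender": "female"},
}

attendance = [                    # one record per attendance at an activity
    {"user": "U01", "activity": "drop-in"},
    {"user": "U02", "activity": "lunch club"},
    {"user": "U01", "activity": "drop-in"},
    {"user": "U03", "activity": "drop-in"},
]

# Count attendances at each activity, broken down by age group.
by_age = Counter(
    (record["activity"], profiles[record["user"]]["age_group"])
    for record in attendance
)
for (activity, age_group), count in sorted(by_age.items()):
    print(f"{activity}: {age_group}: {count}")
```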
Further reading: Davey, S et al (2008) Using IT to Improve your Monitoring and Evaluation, Performance Hub, Charities Evaluation Services, London
It is important to be familiar with the Data Protection Act. Make sure data is used for its intended purpose. If personal information is kept about individual service users, make sure that they know exactly what the evaluation is for, what data exists, that they can have access to it to check its accuracy, and that the project will preserve their confidentiality.

Here are some basic points for successful monitoring:
build simple, user-friendly monitoring systems into everyday activities, collecting data at the most natural point
get commitment from those collecting the information, by explaining why they are doing it
make sure that everyone responsible for monitoring has clear and consistent guidelines
make sure that monitoring records are completed fully and accurately – people may not regard it as a high-priority activity
give people collecting the information feedback on the results of their monitoring, and how it is being used to make the organisation more effective
check that the project is not collecting the same piece of information more than once.
Very often, the information you need already exists. So design project forms, such as planning diaries, referral forms, assessment sheets and case records, so that they can also be used for monitoring. This will avoid duplicating effort.
You can get information on the Data Protection Act at www.informationcommissioner.gov.uk and on an information line: 01625 545745. A publication, The Data Protection Act 1998: Legal Guidance, is also available from this information line.
On membership and access forms, referral forms and case records, think about using a range of possible answers with tick boxes, rather than providing space for narrative or comment. This information can be entered more easily on a database and analysed. You can give extra space for more information if necessary. Transfer information from case records onto monitoring summary sheets to give easy access to information. Information will then be accessible when you need it, and you can check from time to time that the same information is being collected in the same way by everyone, within specified categories.
Once you know that your system is working well, write down your monitoring system and build it into your organisational procedures. Your first question before collecting information was: ‘What information do we need?’ From the early stages of your project you are likely to need different types of information – on your activities, your users and your finances. We now look at these in more detail.
Monitoring inputs Think about the resources you put into your services or activities. These include staff and volunteers, management and training, and financial resources. Don’t forget the contribution made by partner agencies. You may want to monitor against a number of input indicators. For example:
the amount of management or administration time spent on certain activities
the level and use of staff and volunteer resources
staff and volunteer profile and skills
the amount and quality of staff training
the level of trustee attendance at meetings
the amount of time spent on fundraising and promotional work
catering, travel or other costs for specific activities.
Think about the information you need so that you can understand the full costs of the project, or the details needed if someone else wanted to carry out the project from scratch. What processes are an essential part of what makes your project work?
Monitoring outputs When you set output indicators, focus these on the information that will be useful in planning activities and services. A pressure group might want to monitor its level of media coverage, whereas a service provider might want to monitor the scale and pattern of use. This could be:
the level of use at each session, for example, at a luncheon club
the total level of service over a given period of time, for example, the number of advice sessions each month
the total number of users, number of new and repeat users, and number of users who have left the service, over a given period of time
frequency or extent of use by individual people
use of premises, transport or catering.
Remember that output information will be useful for explaining to others what you do, what is involved in providing your service, and what resources you need. It will also be important in helping to understand how and why you have achieved your outcomes.
Monitoring the profile of your users When you set up the project you identified a target group or groups. So it is important to check if the users are the ones you intended to reach. If you are examining the profile of
users to assess patterns of service use, it may be more meaningful if you compare it with local population statistics, or against the user profile in other relevant services. Your immediate users may not be the target group whose profile you are most interested in. For example, if you are training other agency staff, you may need not only to monitor which agencies they work for, but to get information on the target group of those agencies, so that you can identify the end user. You can collect data on organisations that use your services when you first have contact with them, by systematically asking for the information you need for your database.

Collect individual profile information either through a simple questionnaire that users fill in themselves, or one that you fill in by asking them questions. You may wish to collect information such as where users live, their age, gender, sexual orientation, ethnicity or disability. Remember that questions should be easy to answer and to analyse. Develop specific ‘closed’ questions with a range of categories to tick. Open-ended questions may be interpreted differently by different users, need more time to answer, and will get responses that are more difficult to analyse.

There are a number of good practice points to think about when you monitor the profile of your users. Information should be given voluntarily. Wherever possible, allow a relationship of trust to develop first and avoid guessing profile information. Using pre-set categories for ages, or for ethnicity, such as those used by the Commission for Racial Equality, will make it easier to analyse responses for a large group of people. By using these categories, you will be able to make comparisons with census data. People often wish to describe their characteristics themselves, so for a small group of users, or where you are working with a range of smaller minority groups, this may be preferable.
The following categories are suggested by the Equality and Human Rights Commission. In Scotland it will be helpful to use questions similar to the Scottish census questions. The Equality Commission for Northern Ireland recommends particular ethnic categories in its code of practice.

White: British; Irish; any other white background (please write in)
Mixed: white and black Caribbean; white and black African; white and Asian; any other mixed background (please write in)
Asian or Asian British: Indian; Pakistani; Bangladeshi; any other Asian background (please write in)
Black or black British: Caribbean; African; any other black background (please write in)
Chinese or other ethnic group: Chinese; any other (please write in)
If you ask profile questions in a consistent way across different activities and services, you will be able to make comparisons and look at the profile of service users for the project as a whole. If staff and volunteers are reluctant to ask profile questions, it might help to find out how another similar project has collected sensitive information – and how this information has benefited the project. Explain to users why you are collecting the information and how it will improve services.

You may want to reassure your users that profile information is anonymous and kept separate from other information, such as case records. On the other hand, if you want to relate the information about benefits or changes for users to the type of user, you will need to keep this information linked. You can do this by giving users an identity number and linking this to profile information that is kept anonymously.

Some information about users is relatively easy to get, such as addresses. You can use postal codes to monitor against targeted geographical areas. It may be more difficult to do profile monitoring for certain types of activities, for example, projects working informally, such as detached youth work, or those with large numbers of users, such as telephone advice lines. Think about ways of getting ‘snapshot’ information over a short period of time. This can be explained to users and staff as a one-off or occasional exercise, and may be accepted as less demanding and intrusive.

Funders and commissioners often want forms filled in at the end of each quarter or year. Information is sometimes needed for a formal service level agreement. If the funder has different categories from your project for describing users’ ages or other circumstances, you may be able to reach a compromise if you discuss it together.
Financial monitoring Financial monitoring has three main purposes:
accountability
control
evaluation.
These relate to the different needs of a wide range of stakeholders to know what is going on in your organisation or project. Financial monitoring systems should be built in at the beginning of a project. This allows appropriate information to be collated as an integral part of the project, and allows reports to be produced at regular intervals.

Integrating financial information needs Use a coding system to show the different types of income and expenditure. This allows summary information to be broken down into categories, for example, to highlight income and expenditure for a particular project or fundraising activity. At the same time it will allow income and expenditure to be categorised for the statutory accounts.

External accountability By law, your project will have to keep accurate accounting records of income, expenditure and assets. As well as the legal requirements, a range of external stakeholders – such as funders, regulatory bodies and users – will have an interest in particular aspects of the project’s finances. For example:
a funder giving money for a particular purpose will expect to see financial reports showing that the money has been spent effectively according to their wishes
a regulatory body, such as the Inland Revenue, will expect to see a financial monitoring system that allows them to check that you have kept to the tax laws
users may be interested in various aspects of how effectively you use your resources.
Control Trustees, staff and volunteers need financial information to make sure that legal responsibilities are fulfilled, and to understand what is going on in the project. Regular reporting of progress against an agreed annual plan and budget is a key means of control. However, a budget is only an estimate, and what actually happens may not be what was planned.
You will normally report monthly for staff on actual income and expenditure compared with budgeted income and expenditure. Trustees might see reports less often, say quarterly, to fit in with the trustee meeting schedule. Major differences between actual and budgeted expenditure should be highlighted and explained to make sure that the project does not spend more than it can afford. It is also helpful for staff to produce a forecast of what the year-end position is likely to be. This can be done quarterly for trustees and lets them take action, if appropriate, to make sure that an acceptable year-end position is reached. Monthly cash flow projections are also useful.

Evaluation Reviewing income and expenditure for particular activities, and monitoring trends over time, can help to evaluate the financial implications of the different activities of the project. This is important for planning service delivery and fundraising. For example:
the cash flow will show whether there are enough funds at all times both to implement and run the project
evaluating the cost of providing different levels of service, compared with income received, will help with decision-making when planning services
the relative cost-effectiveness of different fundraising methods can be compared, and this can be used to plan how resources should be allocated to future fundraising (see the sketch below).
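As a very rough illustration of the last point, cost per pound raised can be worked out with simple arithmetic. The sketch below is illustrative only; the fundraising methods and figures are invented for the example.

```python
# Illustrative only: comparing the relative cost-effectiveness of different
# fundraising methods as cost per pound raised.
fundraising = {
    "mailshot":        {"cost": 1200, "raised": 4800},
    "community event": {"cost": 2500, "raised": 5000},
    "online appeal":   {"cost": 300,  "raised": 2100},
}

for method, figures in fundraising.items():
    cost_per_pound = figures["cost"] / figures["raised"]
    print(f"{method}: {cost_per_pound:.2f} spent for each pound raised")
```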
Feedback from users Even with a small organisation or project, it is important to monitor whether people are satisfied with what you are doing. You can get a range of different information on views and experiences of the service, including the changes or new services people would like. People can also tell you about how convenient and accessible they find the service and whether project publicity is helpful. This level of feedback will not tell you what has changed as a result of your activities. For that, you will need information that relates to outcome indicators.

Feedback sheets after a training course, for example, are a basic tool to gauge user satisfaction. They are best viewed as a monitoring tool, and are most useful in providing information that will help you to adjust your services. They will not assess how well participants were able to use the training you provided. You will need to ask further questions at an appropriate time to find this out.

Try to collect user feedback as an integral part of your service delivery or other activities. For example:

In a day centre you may ask children informally what they like most about the play session, but record the information systematically.

Put a simple questionnaire in your waiting area and ask people to fill it in before leaving. You might ask the following questions:
– How did you hear about us?
– What were your first impressions of the project?
– How long did you have to wait?
– Is there any way we could improve what we are doing?
– Is there anything else we should be doing?
– How convenient is the location, times available or access to the service?

Ask workshop or training participants to fill in a questionnaire showing how satisfied they were. Ask them to grade aspects of the event using a scale. This gives a range of possible replies to questions, to help analyse information and to collate it over a number of events. Here are some questions you could ask:
– How well did the workshop meet your expectations?
– How did you rate each session?
– How appropriate were the training methods used?
– How good was the handout material?
– How satisfactory were the venue and the food?

During your planned contacts with users, take the opportunity to ask consistent questions about the quality of your activities.

Have a complaints and suggestions box and keep a record of complaints and suggestions as part of your feedback data. Discuss them regularly and take action on them.
Think about what methods you can use that will be consistent with your values and that will help your organisation to develop further. You can get feedback from training, workshops and other sessions by involving people more. For example:
put up wallpaper or flip chart paper as a ‘graffiti wall’ for individual comments throughout an event
ask people to write comments on post-it notes and invite others to tick the comments if they agree or to add their own further comments
draw an evaluation wheel on which participants tick different sections representing different activities to show what was most or least useful or enjoyed.
There is more information about scales and about participatory methods in the Practical toolkit.
Outcome monitoring Focusing on outcomes Outcomes are the changes, benefits, learning or other effects that happen as a result of your activities. Desired outcomes often describe positive change. They can also be:
about maintenance of the current situation. This involves ensuring things stay the same and don’t get worse.
about reduction in something, for example, criminal behaviour or vandalism.
Outcomes can be:
welcome or unwelcome
expected or unexpected.
You may be able to anticipate most of your outcomes. However, some things may happen that you did not plan for. Individual client outcomes usually describe change in one of seven outcome areas:
circumstances
physical or psychological health
behaviour
attitude
self-perception
knowledge or skills
relationships.
Some organisations, for example second tier organisations, may have target groups other than individual clients. Some do not have immediate users, for example, organisations lobbying for environmental change or other campaigning organisations. Organisations may also intend to achieve change at a number of levels, such as in other agencies, or at the wider community or policy level. Community outcomes might include the development of residents’ groups, increased levels of recycling or community arts activity. Outcome monitoring has a number of benefits.
It helps accountability, and allows you to describe achievements clearly. By monitoring outcomes, you can gather and report information regularly on the proportion of your users or target group who changed or benefited in some way. It could be, for example, the proportion who were housed, accessed education, stopped misusing drugs or gained in confidence. Funders are increasingly asking voluntary and community organisations to provide this as well as information on their outputs. Other benefits are that:
seeing changes over time increases staff motivation and team work
identifying changes in themselves provides encouragement for clients
it helps to identify strengths and areas for improvement
it can help to build the organisation’s reputation.
First steps in monitoring outcomes There are potentially a large number of outcomes you could assess. Keep the number of outcomes you will monitor to a minimum by prioritising key outcomes, and combine outcome monitoring with existing systems wherever possible. You have identified outcome indicators as the ‘clues’ that will tell you if outcomes are achieved. Look at your list of outcome indicators and identify those for which you:
are already collecting information. You may already be collecting outcomes information in an informal or unsystematic way.
could easily collect the information within an existing monitoring system, for example, by rewording or adding questions
need to find a new way of collecting information.
You may be able to combine questions relating to outcome indicators in a questionnaire asking users about how satisfied they are. Think about formalising feedback from volunteers or from partner agencies working with your users (for example, a placement agency) so that you address specific outcome questions. Think about
using the information you collect when you first work with your target group as a formal baseline and a template for information you will collect again at a later time. Large organisations may wish to set up an outcome monitoring system that covers all their services. A useful approach is to identify areas, or categories of outcomes, common to the whole organisation. Single outcomes that relate to those outcomes can then be identified by particular services.
Collecting outcome information To assess change over time, you need information on your outcome indicators at two or more points in time. For client outcomes, for example, this should ideally be:
as early as possible, usually when clients first present at your service. This will provide you with a starting point against which you can assess change.
as late as possible, usually when clients leave your service.
How frequently you monitor will be governed by the nature of your work and your target group. You can collect information on the same outcome in more than one way, for example through a combination of client self-assessment and worker observation. (There is more information on these methods in the section Practical toolkit, pages 116-123.) However, questions at initial monitoring must be repeated in the same way at later monitoring stages, so that results can be compared. It will very often be useful to ensure that questionnaires and other monitoring sheets providing individual outcome information have an identity code, so that changes can be tracked for individual users over time.

You may find that your organisation is already collecting information relevant to your outcome indicators. For example, regular attendance at an employment project may be one indicator of increased motivation to find work. You may have various other sources of outcome information, such as:
centrally compiled and filed anecdotal information on client outcomes after leaving the service
client diaries kept as part of their engagement with the project
records kept by other agencies working with the clients.
Case records of individual clients’ development can be a useful tool for outcome monitoring, because they provide both descriptive and evaluative information. They offer an opportunity to assess individual progress, for example through the frequency of specific behaviours or their progress in a care plan. Be careful to respect confidentiality and maintain anonymity if you use case records. Also, unless you have thought about the use of case records for monitoring and evaluation, they may have limitations as an evaluation tool. Different teams or individuals may keep records differently or they may be incomplete or otherwise insufficient. It may be helpful to have a standardised case monitoring form. For example, when you are recording data from progress reviews and case conferences, it may be possible to do this on a separate summary sheet against specific outcome indicators so that the data can be easily retrieved.
Processing outcome information Small amounts of information may be stored on paper and analysed manually. However, many voluntary organisations now use computers to manage outcome information. You can type or scan information from completed outcome monitoring questionnaires or other forms into a computer and store it in a database. If each form has a client code or name, two or more forms completed by each client can be placed in the same section and compared by the analysis system used. With information presented numerically, that is, quantitative information, you can ask the database to calculate the change from one form to the next. It is then possible to produce a total number of clients showing change in each of the different outcome areas. Qualitative information can also be stored in the computer. Either type in clients’ answers in full, or summarise them first. This can be done using specialist computer packages, but many people use Excel or Word.
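The calculation described above can be done in a spreadsheet or database; the short sketch below shows the same idea in outline. The client codes, outcome areas and scores are invented for the example, and a real system would read them from your monitoring forms rather than have them typed into the code.

```python
# Illustrative sketch: calculate change for each client between an initial and
# a later monitoring form, then count how many clients improved in each
# outcome area.
outcome_areas = ["confidence", "independent living skills"]

initial = {    # client code -> scores at first contact (1 = low, 5 = high)
    "C01": {"confidence": 2, "independent living skills": 3},
    "C02": {"confidence": 1, "independent living skills": 2},
}
follow_up = {  # scores for the same clients at a later monitoring point
    "C01": {"confidence": 4, "independent living skills": 3},
    "C02": {"confidence": 2, "independent living skills": 4},
}

improved = {area: 0 for area in outcome_areas}
for client, first_scores in initial.items():
    later_scores = follow_up.get(client)
    if later_scores is None:
        continue  # no later form completed for this client yet
    for area in outcome_areas:
        if later_scores[area] > first_scores[area]:
            improved[area] += 1

for area, count in improved.items():
    print(f"{count} of {len(initial)} clients improved in {area}")
```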
Analysing monitoring data Having decided what data to collect, and set up monitoring systems to collect it, you now need to make sense of the data, that is, to analyse it. The questions you wanted answering dictated the type and amount of data you collected. These questions provide the framework for your analysis. Before starting to analyse your data, you need to refer to the questions again, so that you relate data to the questions. At this point you may need to break them down into sub-questions, as in the examples below.
Framework for analysing monitoring information

Who is using our services?
Has the user profile changed? Are there any under-represented or over-represented groups? How does the profile of our users compare to those using other relevant services?

What is working well?
Which services or activities are used most? What do users value most? What changes or benefits have been achieved? Were there any unexpected outcomes? What factors have affected positive changes? How many clients have benefited and who are they?

What problems have we encountered?
Which services or activities have been delayed or not run as planned? Which services are over used or under used? What are our waiting times? What complaints and suggestions do we have?

It is useful to look for patterns. For example, where has any increase in user numbers come from? Is it from young people, or older people? Does the increase include people from a particular community or group? Which organisations are you reaching, or which parts of the city or country? If you have numerical data, and want to make comparisons between two groups of different sizes, it may be helpful to show the information as percentages. However, be careful of using percentages when you are dealing with low numbers, as this may give you a distorted picture of your data.
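As a rough illustration of that caution, the sketch below turns counts into percentages and flags any group whose base is too small for the percentage to mean much. The figures, group names and the threshold of 20 are all invented for the example.

```python
# Illustrative only: show two groups of different sizes as percentages, and
# flag where a small base makes the percentage potentially misleading.
groups = {
    "young people": (46, 200),   # (count, total users in the group)
    "older people": (3, 5),
}

for group, (count, total) in groups.items():
    percentage = 100 * count / total
    note = " - small base, treat with caution" if total < 20 else ""
    print(f"{group}: {count} of {total} = {percentage:.0f}%{note}")
```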
Analysing outcome information To help you analyse your outcome information, try to think in terms of asking questions of the information you have collected. What do you want to find out about your work? For example: What percentage of clients over the past year stayed more than six months in their tenancy? What proportion showed an improvement in independent living skills?

It is also possible to look at smaller groups of clients by asking questions such as: What percentage of those with substance misuse issues at assessment then showed some improvement by the review? Of those with diagnosed mental health problems, how many improved their ability to manage their illness?
Reporting monitoring data Monitoring data is usually reported to your trustees or to your funder at regular points. This is usually quarterly, but for funders may be yearly. It will:
keep trustees and others fully informed about how well the organisation is doing against targets
prompt useful questions about how far activities are achieving expected benefits
prompt corrective action before it is too late.
Keep your life simple by collating information regularly, for example, weekly or monthly, to avoid too much work at one time. Design simple reporting forms so that you can easily demonstrate progress against targets. Your basic point of reference will be the year plan. If you have set out milestones, or key events or achievements for the year, it will be useful to report progress against these. For example, you may have set out a timetable for sending in a major funding application, moving premises or holding a conference. From the performance indicators you identified for your outputs, select key indicators to report on, for example, number of sessions, meetings, publications, profile of users or other target group. Set these out in a way that makes it easy to compare against the targets you set for your key indicators. Some of these may be performance levels or standards agreed with your funder. You may report on satisfaction feedback data during your quarterly reporting. Some funders also ask for some basic outcome monitoring data every quarter, such as numbers of people housed.
Further reading
Parkinson, D and Wadia, A (2008) Keeping on Track: A Guide to Setting and Using Indicators, Performance Hub, Charities Evaluation Services, London
There are a number of points to remember when presenting your data. For example, make sure you distinguish between new and repeat users when you collect and report on take-up of services. Break down the data by age, gender, ethnic group or other relevant category to ensure that different levels and types of participation in the project and differences in benefits can be seen clearly. Remember also to look at monitoring data against your original plans and timetable.
Keep monitoring data over a period of time, so that you can report on any trends that emerge. For example, you may have made changes to the times your service opens and want to see whether this had the result you expected. Outside factors, such as a changed policy or a new service in the area, may also affect trends. Using a consistent reporting format over time will help you to make comparisons. However, remember that the user profile may have changed over time, and this may affect trends or your ability to compare.
Monitoring your monitoring Monitoring arrangements themselves need to be monitored. Make sure they give you the information you need and are not too time consuming:
Keep a note of any problems as they arise and the main ways monitoring has been helpful.
From time to time do a more systematic review. To start with, this should be every six months. Later, it could be once a year.
Make sure that you are not collecting information that you do not use.
This section has concentrated on how to set up basic monitoring systems on different aspects of your project’s activities, including monitoring against targets. It has discussed the monitoring implications of focusing on project and organisational outcomes. It has also looked at the need for a framework to analyse and report monitoring information. You now need to think about how monitoring links to evaluation activities.
further monitoring Basic monitoring examined how to set up basic systems to monitor project activities and outcomes and gather data about users. It also looked at how to report monitoring information within an analytical framework. Further monitoring looks at more complex monitoring. This includes monitoring project impacts, and how to monitor against a quality system. It also looks at collecting routine data about factors within your project, or in the outside world, that may affect how well the project is doing.
Developing more complex monitoring systems You may need to extend your basic monitoring system to collect qualitative information to help you manage your services routinely, or to collect information needed later on for evaluation purposes. Setting up a more complex system will depend on what other monitoring questions you might have: Is service delivery consistent with the project design and plans? What are the project’s cumulative or longer-term effects? Do services meet user needs? What resources have been used and how much does service delivery cost? Is service delivery consistent with quality standards?
As well as staff from your own project, staff from other agencies will be a useful source of monitoring information. It may be difficult to get commitment to this, so discuss with your project partners from the beginning:
what information each of you will collect
why you need it and how it will be used
the format for collecting the information.
If you receive face-to-face or informal feedback, for example from busy teachers or social workers, make sure that you record their comments during the discussion or interview.
Process monitoring To understand why your project is achieving the changes it hopes to bring about, you need to identify the factors that really matter in the way you do things – that is, the project’s processes. This may include looking at the way you deliver certain outputs, such as client assessments. But there will be other things you want information about, such as the way staff and volunteers work together, recruitment processes, public relations and relationships with other agencies. Process monitoring is also important to assess the quality of what you do. It can be helpful to set specific indicators for processes. Examples might be:
the extent of involvement of group members
the amount of team working
the extent and type of meetings with other agencies
the level and quality of contact with the media.
Also allow for more wide-ranging and unexpected data to be collected, for example, through records, minutes and diaries. Appreciative enquiry is an approach to get information from participants about their personal experience of projects by asking specifically about:
what they like or value most
what pleased them most about this aspect of the project and their part in it
what else could be done to develop these positive features.
The key is to find out as much as possible about the positive aspects of the project, and to build on these.
Monitoring for quality It is not always easy to make judgements about service quality. Projects sometimes use simple quality indicators, for example:
waiting times for appointments
number and type of complaints
response time to information queries.
There are a number of other ways to monitor quality. For example:
using accepted performance indicators, such as client-practitioner ratios
observing interaction with users, such as training sessions or group work
checking case recording methods, for example, against agreed practice standards
using someone who is not known to staff to experience the service first hand.
If you work within the framework of a quality system, such as PQASSO (Practical Quality Assurance System for Small Organisations), you will have created standards. That is, you will have agreed levels or expectations, across a range of organisational activities, including management and service delivery. Most systems will suggest what evidence you will need to show how well you are doing. Your monitoring system should provide information that allows you to check whether you are meeting those standards. In the extract below, the quality system suggests the monitoring information that will allow you to make an assessment against the quality criterion. A quality system will have its own cycle of review. Monitoring and evaluation activities should be planned so that they produce the information you need for your quality reviews.
Quality area: User-centred service
Criterion: Users are more involved in the organisation. User feedback leads to changes in service delivery

A range of methods is used to seek user views, and the views are used to influence decisions on services
Monitoring information required: User satisfaction surveys, minutes of user forums, focus group reports

Active steps are taken to understand the issues that affect users and potential users, particularly under-represented groups
Monitoring information required: Research reports, government strategy and relevant policy documents

Complaints and suggestions are discussed openly and dealt with quickly
Monitoring information required: Records of complaints, suggestions and actions taken

Adapted from: Charities Evaluation Services (2000) PQASSO (Practical Quality Assurance System for Small Organisations), Second edition, London
The basic monitoring and evaluation cycle can be adapted to show how monitoring feeds back into quality improvement improvements. s. g n g M o n i tor i in
s
t
n e
Q
m
u
a
e
l
v
i t
o
y
r
r
e v
p
m
i e
w
i
y t i
l
a
example, a sexual health education project may result in more people having a sexually transmitted infection diagnosed. This could demonstrate success, that is, increased awareness of the importance of sexual health check-ups, rather than indicate increased rates of infection. It is worth remembering that a number of factors – ranging from socio-economic trends, government policy and other local initiatives – may all influence impact measurements. It will be difficult to gauge the part played by a small project within the larger context.
u
Q S t
a n
t e n
d a r m rd d e v e l o p
Impact monitoring Impact monitoring examines the effect of your project over a longer timeframe and in a wider context. It asks about the broader, cumulative consequences of your work. You will need to take measurements over several years and use outside information to assess the effect of your project from a longer-term longer-t erm perspective. Examples of statistical information relevant to a local project might be: the number of school exclusions the number of arrests in a particular neighbourhood noise pollution levels.
Monitoring context issues
Decide what the important issues are that will affect your project's performance. These may be at different levels, for example:
- central government policy, such as regulations affecting asylum seekers
- local government policy, such as housing or environmental policies
- services provided by other agencies, including other voluntary organisations
- factors within the project, including staffing and financial resource levels.

Be realistic about what you can monitor. Collect information that you need to run your project more effectively anyway. You can do this through:
- newspaper cuttings and other articles
- seminars and conferences
- meetings with policy-makers
- networking and inter-agency meetings and discussions.
The key is to record this information systematically at the time, and in a way that will be accessible and useful to people reporting on the project and conducting evaluations.
Data management
It is important to resolve practical data management difficulties, as these can hold back how effectively you can use monitoring information to improve your work and the quality of your reports to trustees and funders. It may be helpful to involve an external consultant in identifying internal information needs as well as external requirements. Research has shown that many organisations struggle with IT resources that are inadequate to meet reporting requirements; the gap may be in IT infrastructure and systems and in staff training and capacity. IT software designed specifically for third sector monitoring and evaluation is increasingly available, and investment in improved IT systems can have a number of advantages, including:
- considerable time savings
- increased storage capacity
- easier data search and access
- easier checking for errors and missing data
- more consistent monitoring information
- delivery of well-structured reports
- ability to meet a range of reporting needs.

Further reading
Davey, S et al (2008) Using IT to Improve Your Monitoring and Evaluation, Performance Hub, Charities Evaluation Services, London
The Charities Evaluation Services website has information on monitoring and evaluation software: www.ces-vol.org.uk
Take time early on to decide what you need the system to do. Be clear about whether you want a stand-alone monitoring and reporting system or whether you want it to support your work with clients or to support other functions, such as fundraising. It may be wise to start with a simple system and to add more complex features as you go along. The introduction of a system may have
unexpected costs and implications. To minimise these, it may be helpful to bear a number of points in mind:
- Understand your current difficulties with storing, managing and analysing information and assess how any given system will address those problems.
- Put the development of a monitoring system within the framework of an overall IT strategy.
- Plan to involve time and effort from all sections of the organisation.
- Get good management support and, ideally, a team of people with the knowledge, skills and motivation to lead on the work.
- Involve people in the process who may be reluctant to move to new ways of managing data or who are resistant to IT.
- Provide a regular flow of information about the progress of the project and keep everyone on board.
Remember that you may be able to get funding support, so write a proposal that describes how your IT system will make you more effective and efficient and how it will help more beneficiaries. And once you have an effective IT system in place, make it a feature in your applications for funding, identifying how it will help you report on your achievements.

You can gather a range of monitoring data during the lifetime of your project, including data on the quality of your management or services, and on the changes resulting from your activities. However, you cannot monitor everything. Think carefully first about what you and your stakeholders most need to know and carry out a systematic review of the data you collect. An important part of this is thinking through your evaluation questions, because your monitoring data will feed into your evaluation. This is discussed in the next section on Evaluation.
section 3 evaluation

Basic evaluation 51
Making judgements 51
The politics of evaluation 52
Understanding stakeholders' needs 53
Outcome evaluation 53
Impact evaluation 56
Finding the evidence 56
Involving users in evaluation 59
Evaluation standards and code of ethics 59
Writing the evaluation report 60

Further evaluation 65
Focusing your evaluation 65
Evaluation approaches 67
Designing an outcome evaluation 70
Extending your evaluation activities 71
Managing evaluation 74
basic evaluation
Basic evaluation considers what makes evaluation different from monitoring, and how the two link together. It discusses the need to take into account different expectations and agendas and to work within ethical standards. The section introduces the process of data collection and looks at the important step of interpreting evaluation findings and at writing an evaluation report.
Making judgements
During the year you produce monthly or quarterly monitoring reports for colleagues and trustees, and use the data for funding applications and publicity. Your quarterly reports may lead you to think about how you are delivering services, for example, how you can best respond to advice needs, or how you can improve the accessibility of your services given your user profile information. However, an evaluation will allow you to examine specific issues in greater depth. It gives you the opportunity to bring together, analyse and make judgements and recommendations from the monitoring data collected during the year. Your monitoring information is likely to contain:
- profile information on your users
- basic project record keeping, such as the minutes of meetings and case records
- statistical information on take-up of services
- feedback sheets from training courses and workshops
- diaries and other records of events
- complaints and compliments from users.
As part of your evaluation, you may decide you need to know more than this. Your monitoring information will probably suggest further questions that need an answer, such
as why certain activities were well attended and others not, or why an inter-agency network collapsed. You need to think clearly about where the focus of the evaluation will be and who and where you want to obtain information from. Your focus may be on how well you are meeting your objectives (that is, your intended activities), and whether your objectives are in fact helping you to achieve your aims (the changes you intend to bring about). These are often the key issues that funders want projects to address in an end-of-year report.

By putting together your monitoring data you should see an overall picture and patterns for particular time periods or activities. This allows you to make comparisons with plans and targets and with previous years. The data you collected will also give you information about how users and partners value your project.

Remember the importance of planning for your evaluation. Think carefully about what you want to find out from your respondents. You should also decide whether you are able to, or wish to, collect information from all your members or users, or only a selected number, that is, a sample. How you draw your sample is important, because gender, age, ethnicity, economic status and many other factors can have an effect on outcomes for individuals. Be aware that participants who volunteer to be involved in the evaluation may feel more positive about the project than those who do not, and this may affect your findings. Think also about how you can get information from people who no longer use the project, as they may have different and valuable perspectives. Make sure you set enough time aside for this additional information gathering. Questionnaires take time to develop, and should be tested with a small sample from your target group to see if they will capture the information you want. Interviews take time to organise and even longer to write up and analyse.
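One simple way to draw a sample that reflects the make-up of your user list is to pick users at random within each group. The sketch below is purely illustrative: the names, group sizes and sampling fraction are invented, it considers gender only, and in practice you would also think about age, ethnicity and other factors.

import random
from collections import defaultdict

# Invented user records; a real list might come from your monitoring records
users = [
    {"name": "A", "gender": "female"}, {"name": "B", "gender": "female"},
    {"name": "C", "gender": "female"}, {"name": "D", "gender": "male"},
    {"name": "E", "gender": "male"},   {"name": "F", "gender": "female"},
    {"name": "G", "gender": "male"},   {"name": "H", "gender": "female"},
]

sample_fraction = 0.5  # interview half the users in each group

# Group users by gender, then sample the same proportion from each group
by_gender = defaultdict(list)
for user in users:
    by_gender[user["gender"]].append(user)

sample = []
for gender, group in by_gender.items():
    size = max(1, round(len(group) * sample_fraction))
    sample.extend(random.sample(group, size))

print([u["name"] for u in sample])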
Evaluation is a set of interlinked activities. Each of these is an important part of the overall process and needs adequate time built in to protect the quality of the evaluation.

Conducting an evaluation: planning, design, data collection, data analysis, interpretation, reporting.
The politics of evaluation
Different stakeholders come into an evaluation with different purposes, perspectives and values. For example, decisions to evaluate may be linked to funding decisions or a policy agenda. This may apply to self-evaluation as well as to an external evaluation. Evaluation may encounter a number of difficulties, particularly if the reasons for it are not clear to everyone. Here are some examples:
- Managers may want to carry out an evaluation simply to make the project look good.
- A project monitoring and evaluation officer might find their position undermined by a project director if an evaluation brings up negative findings.
- Changes in senior management may lead to different priorities, which might challenge the agenda set for either self-evaluation or for an external evaluation already underway.
- The evaluation may lose its relevance if a funding or policy decision is made before findings are reported.
The project may be experienced differently by different sections of its user groups. This may be along the lines of gender or ethnicity, and external agencies who are users often have their own considerations. It is the evaluator's job to understand and bring out these differences from the beginning: the evaluator will want to make the differences clear in order to take account of them properly.

The role of the evaluator or the evaluation team is vital. You should:
- make sure that stakeholders feel a sense of ownership by consulting with them and, where appropriate, sharing the process of the evaluation as well as its results
- use the evaluation to encourage a process of reflection throughout the project
- recognise the pressures of daily project activities while monitoring and evaluating
- be receptive to other people's ideas and responses while remaining independent
- keep in mind the evaluation aim of learning.
Managers should talk through with staff any fears they have about evaluation. It is particularly important for evaluators to work closely with project management, so managers do not become defensive about any negative findings, and reject or shelve the report. You can help to reduce any concerns among project staff about evaluation by including staff on a steering committee or evaluation task group. The steering committee can help set the terms of the evaluation and keep staff in close touch with the process and the emerging findings throughout. This may also increase the likelihood that recommendations will be accepted and implemented.
Understanding stakeholders' needs
Just as you consulted your stakeholders in setting aims, objectives and indicators of success, make sure you plan enough time for consultation with managers, users, funders and staff about their monitoring and evaluation needs. You do not always need to involve everyone, but try to consult with representatives from different stakeholder groups as they will have different, sometimes contradictory, interests and views. Although the performance indicators you set will guide some of your evaluation activities, there may be other areas of your work that need exploring, and other questions to be answered. Find out:
- What are the main questions they want monitoring and evaluation to answer?
- What information do they lack about the project?
- How do they want to be involved?
- How do they want the findings presented?
Involving funders and partners at this stage can also help to build relationships and can lead to a greater likelihood of the evaluation report being understood and accepted. Consulting users can also encourage user participation in the evaluation itself and in developing data collection tools, such as questionnaires.

Deciding what questions need to be answered is the first important step. For example, they might be:
- Are we reaching the right target group?
- Are we doing what we set out to do?
- Are we meeting users' needs?
- Are our services effective?
- What has worked well, and what has worked less well?
- Do we have the right sort of project, or staffing, to do the work?
Collect only the information you need to answer your evaluation questions, information that you can use, and that is not already collected elsewhere. If your monitoring has focused on what you are doing – the project processes and outputs – evaluation may be the point at which you need to collect information on the differences this has made – that is, project outcomes. If you have been monitoring outcome information as well, evaluation will allow you to review it in context. You may also find it useful to carry out some additional one-off data collection, such as interviewing users or partner agencies.
Outcome evaluation
We have defined outcomes as the changes, benefits, learning and other effects that you can attribute to your organisational or project activities. They are different from the longer-term change or impact of the project or programme. Impacts relate to the overall aim or mission. They are the broader or cumulative effects that may be seen in local or national statistics about the area of intervention, such as health, homelessness, school attendance and youth unemployment. If you set indicators for outcomes, you might have been able to monitor some of these outcomes regularly. During the evaluation stage you should be able to draw this information together. Useful outcome information can also be obtained during the evaluation itself through one-off data collection. The task then is to put outcome information in the context of:
- your services and working methods
- what you know about your users
- external factors.
You can then make judgements about what has
been achieved. This will involve comparison against:
- your own previous experience, expectations, standards, or defined targets
- results from similar activity or previous local results
- expected outcomes and outcome levels defined by other stakeholders. You may have defined these standards with your funders and users, for example.
This is also a good opportunity to pick up on the unintended effects of activities. Projects often produce outcomes that were not expected, as well as planned changes, so it is important that you collect information in a way that will tell you about these. Outcomes that are achieved in the short term, but link to or lead to a final longer-term outcome, are called intermediate outcomes. Be clear, particularly with funders, about what can be measured at any given stage. At an early point, or a pilot phase, you may be able to measure only intermediate outcomes, for example increased knowledge or skills, changed levels of confidence, rather than longer-term outcomes, such as take-up of jobs, which might fulfil the project aims more completely. Also be clear about the change your project is able to achieve and what might lie with other agencies. Once you are clear about what change is realistic, you will be in a better position to negotiate on this with stakeholders.

Longer-term outcomes may often be less easy to evaluate than intermediate ones. For example, in a family centre working with young men, it will be difficult to assess their ability to sustain responsible relationships in the short term. However, it may be possible to evaluate whether the young men's self-esteem increased, or their awareness of responsibilities or ability to manage anger.

Some projects provide a service which is a link in a chain of services, or part of a multi-agency intervention. For these projects, intermediate outcomes may be the final outcomes assessed
as these are the only ones the project is involved with. Recognising intermediate outcomes can help identify and acknowledge the different contributions of separate agencies. For example, a homelessness organisation may need to consider the context in which they are working: a contact and assessment team may have engaged the client; a day centre may have built up their skills and confidence; a resettlement team may find them a home; and a tenancy sustainment team may help them to keep the tenancy.1
Understanding outcomes
You can look for outcomes at a number of levels. When you set indicators remember to set them at different levels, if appropriate. For example, these may be:
- individual
- family
- community
- environment
- organisational or institutional
- systems and policy.
You may be focusing your monitoring at one level, but it could provide a more complete picture of your project if you collect information on outcomes at other levels during your evaluation. For example, a project developing a green space might monitor the changes in the environment on a regular basis, and during an evaluation collect information on the resulting outcomes for the community as well. A project working in schools might collect regular data on school attendance and test results, and as part of an evaluation also collect data from families and teachers on the difference the project has made to them. Examples of these outcomes might include fewer conflict episodes or increased contact between school and family.

1 For further discussion on outcomes for homelessness organisations, see Burns, S and Cupitt, S (2003) Managing outcomes: a guide for homelessness organisations, Charities Evaluation Services, London.
Outcomes at an organisational level are also important. Some of the questions to ask are:
- How does the presence of the project create changes in the agency which houses it, or with which it is working?
- Is collaboration among institutions strengthened?
- Have staff become more skilled?

It is helpful to be clear how individual outcomes are related to and lead on from outcomes at a policy or systems level. For example, a programme aimed at improving the welfare of older people in residential care may need to focus on changing procedures and bringing about more participation by older people.
Individual projects may have outcomes at a policy level, although this is more usual with larger programmes. An external evaluation was carried out on a programme whose overall aim was to prevent homelessness in London. The projects funded within the programme were intended to have effects on at least four different levels: with the individuals at risk, with their families, on the local community and on government policy.
The programme was able to define the expected outcomes for each level. Clarifying expected outcomes at the early stages of projects can reveal unrealistic, vague or conflicting aims and is vital to effective planning.

Levels of intervention and expected outcomes
- Individuals: increased self-esteem, confidence and change in behaviour
- Families: change in attitude; more young people reunited with their families
- Local community: increased acceptance and co-operation
- Government policy: policies to prevent homelessness
Be clear about the type of change you are intending, and for whom, as shown in the table below.

Change: develop, expand, maintain, modify, improve
In what: attitude, knowledge, condition, perceptions, policy
For whom: individual, family, neighbourhood, other agencies, local government

Adapted from: The Evaluation Forum (1999) Outcomes for Success, Organizational Research Services Inc and Clegg and Associates Inc, Seattle, WA
Impact evaluation
The term 'impact' is used in a number of ways, but is usually taken to mean the effect of a project or programme at a higher or broader level, cumulative effects, or changes that affect a wider group than the original target. Impact often relates to the overall aim or mission of your project. Impact is frequently longer term and therefore the timescale needed for impact evaluation may well be longer than that needed for outcome evaluation. One area of overlap between outcomes and impact arises when an organisation is directly involved in policy work, education or awareness-raising at community level, or other broader work. Then:
- Policy change or community awareness would be defined as intended outcomes of the organisation or of the work undertaken.
- Impact could be the effect on a wider target group of any policy change or increased awareness.
Example of outcomes and impact: homelessness agency
A homelessness agency works with homeless people in a number of ways. It also campaigns or lobbies to change the local authority's approach to who it defines as 'homeless' and their responsibility to house. The agency would have outcomes for:
- the individuals they work with directly
- possibly those individuals' families
- the local authority's policy on homelessness.
They could also have an impact by increasing the number of people housed by the local authority (the effect of the policy change).
Impact is generally achieved through the efforts or effects of a number of interventions. So you need to think carefully about the role of your project in relation to these long-term changes and place your
project’s activities within a broader picture. Even if you are able to follow up individual users later, to assess longer-term changes, it may be difficult to analyse clearly the role of the project in their successfully finding and keeping jobs or housing, reducing their alcohol dependency or maintaining family relationships. Many other factors within their personal circumstances and the wider environment influence their behaviour and potential. Similarly, wider statistical information about an overall aim – for example on crime, homelessness, teenage pregnancy, recycled material – could reflect the impact of a programme or group of projects. But it will also reflect other social and economic factors and interventions, either negative or positive. It may be relevant to get information about the wider policy change that relates to the area in which your project is working and your target group, or the area in which you are campaigning. This will be useful data in which to place the more immediate outcomes from your project.
Finding the evidence
Data collection methods
How you collect evaluation data, and what you choose to collect, is crucial to your findings. There are many methods that you can use. These are some of them.

Using documentary sources, such as:
- existing project information sources
- public documents
- personal files or case notes
- existing databases.

Collecting data directly from individuals, such as:
- diaries and anecdotal accounts
- questionnaires
- interviews
- written responses to requests for information
- tests
- samples of work
- participatory data collection methods, with people expressing themselves creatively, often visually.

Using an independent observer
Data may be collected through:
- written accounts
- observation forms
- mystery shopping, where the evaluator experiences the service as a user would.

Audio-visual data collection methods:
- audiotape
- videotape
- photographs
- digital media.

Adapted from: Worthen, BR and Sanders, JR (1987) Education Evaluation: Alternative Approaches and Practical Guidelines, Longman.
It is useful to make a distinction between primary and secondary data collection. Primary data is data collected, for example, through questionnaires or interviews designed specifically for the evaluation study. Secondary data is data collected by other organisations or researchers for their own purposes that can provide a useful source of information for the evaluation. This may be, for example, local population statistics, or a report on user experiences in other, similar projects.

Quantitative data deals with numbers, asking questions such as: 'how much?', 'how many?' and 'how often?' Things are either measured or counted. It allows you to compare sets of numbers and look at their distribution. It is useful when you want accurate, precise data. You may be able to test whether there is a statistical relationship between an activity and some measured change.

Qualitative data tells you about personal reactions, feelings, impressions and processes. It may be based on verbal or written records, and extracts from letters, questionnaires and minutes of meetings.

Distinctions between quantitative and qualitative approaches are not absolute and most evaluations collect both quantitative and qualitative data. Be careful that a focus on quantity does not obscure an understanding of how people experience project activities and benefits. Different approaches, methods and types of data are not in opposition but often complement each other.

In some cases data from open-ended questions can be transformed into quantitative data. This can be done by grouping statements or themes into larger broad categories and giving them a numerical value. For example: '26% of users made some negative comment about the waiting time, while 15% were unhappy about the adequacy of the waiting area, referring to the space and seating available, and inadequate provision of children's toys.' Be aware that such quantification of qualitative data may lose the essence of the personal views and experiences recorded.
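If coded responses are held electronically, a count of this kind can be produced very simply. The sketch below is purely illustrative: it assumes a hypothetical list of comments that have already been read and assigned categories by a person, and the comment texts, categories and figures are invented.

# Hypothetical coded comments: each response has been read and assigned zero or more categories
coded_comments = [
    {"user": 1, "categories": ["waiting time"]},
    {"user": 2, "categories": ["waiting area", "waiting time"]},
    {"user": 3, "categories": []},                # no negative comment
    {"user": 4, "categories": ["waiting area"]},
]

total_users = len(coded_comments)

# Count how many users mentioned each category at least once
counts = {}
for comment in coded_comments:
    for category in set(comment["categories"]):
        counts[category] = counts.get(category, 0) + 1

# Express each category as a percentage of all respondents
for category, count in sorted(counts.items()):
    percentage = 100 * count / total_users
    print(f"{percentage:.0f}% of users commented on {category}")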
Planning data collection
Make sure you plan enough time for preparation before you collect your data. You will need to:
- decide which methods to use
- prepare the data-collecting instruments
- pilot them
- train data collectors
- discuss evaluation methods with staff to reassure them that the evaluation will not unduly disrupt their work or project users.
Piloting
Before starting any data collection, it is a good investment of time to test the data-collecting instruments. For example, ask some project participants to fill in a questionnaire or carry out interviews with a small number of respondents. Then look at the results to see if the questions have been understood consistently by respondents and if you have captured the information you wanted. Piloting is particularly important, for example, with a survey questionnaire or other 'self-completion' instruments, where a researcher is not on hand to clarify questions.
Training
It is important that people carrying out evaluation research are properly briefed and trained. Data collection should be supervised to ensure quality, for example, by comparing data entry records for different researchers and reviewing work regularly.
Diversity and access
The evaluation design should take diversity into account. When designing the evaluation, be clear about differences between groups and recognise the potentially different viewpoints of minority groups, and how these differences might affect the way they experience the project. Think about how gender, age, disability, language and ethnicity may affect the ability of different groups to take part in the evaluation:
- Which tools are most suitable for which groups?
- When would it be most convenient for people to meet?
- What venue will be most suitable?
- Would people prefer to meet with you alone or with someone else?
- Is an interpreter necessary?
- What type of evaluator or interviewer will put your respondent group most at ease? In some situations, for example, you may wish to consider a same-sex evaluator.

Deal with any issues of distrust. Many individuals and groups may be reluctant to criticise a service, particularly if there are no alternatives open to them. This may mean you do not get a true representation of user views. You may wish to involve community leaders and other community members when planning evaluation. Be sensitive to cultural norms when choosing data collection methods. Consider having a focus group of a specific ethnic minority community to allow particular views to emerge and be heard. Remember to consider access for people with physical disabilities and be prepared to visit people in their homes. You may need someone to sign, or to prepare an audio cassette of your questionnaire. For people with learning disabilities, it can help to use pictures instead of, or as well as, written questions.1

There is more guidance on focus groups in the Practical toolkit.

Incentives
Evaluators sometimes offer a small token to participants if they are asked to give up a substantial amount of time. Ask people working directly with your informants what they would consider appropriate: a voucher can be given as a token payment. You also need to budget for travel costs, refreshments for a focus or discussion group, and possibly for child care costs.

1 People First's Access2Pictures (at 020 7820 6655) and the CHANGE picture bank (www.changepeople.co.uk) are designed to make information easier to understand. Both are available on CD-Rom.
Involving users in evaluation
It is increasingly expected that users will be involved in evaluation, and this is sometimes requested by funders. Evaluation findings that users have contributed to could have a greater impact on policy-makers. Indeed, there are a number of good reasons to involve users in evaluation:
- Users may have been involved in the design of services.
- Involvement will make use of the expertise that users may have in relation to their own needs.
- Users could gain a greater understanding of the project as a whole.
- It could help to meet the project's aims of building capacity and confidence in the users and lead to greater user involvement in the project.
- It will create a sense of ownership in the evaluation.
On the other hand, it is important not to assume that users want to be involved. There are potential difficulties or limitations. For example, there are issues of power. Users may find it difficult to talk with confidence to professionals or staff members who they see as having more authority or influence. In practice, only a small group of users may become involved, usually the most vocal and least vulnerable. Finally, there are practical factors to consider, such as confidentiality and how to select the users to be involved. You should also weigh up the advantage of user involvement with organisational needs and the focus of the evaluation, and the need to convince other stakeholders about its credibility. Be clear with users about what commitment is involved. It is also important to be aware of the diversity within groups: there is no single 'user view', no representative user.

Remember, if you are working with users you need to:
- plan in enough time – it may take much more time than any other approach
- provide proper training and support
- encourage teamwork.
Evaluation standards and code of ethics
In a self-evaluation, as for an external evaluation, get agreement within the project about its purpose, remit and value. You should also:
- make sure that everyone involved in the evaluation is in agreement with the process from the start
- choose methods that are suitable for the project's resources and working practices
- agree whether and how findings will be made public, and when data should be kept confidential to the project.
At the outset, think about the ethical issues raised by the evaluation. Give evaluation respondents a written introduction to the evaluation setting out its purpose, the time it will take to be involved, and how it might affect them. They should understand that they do not have to take part if they do not want to, and know what risks, if any, are involved in taking part in the study. You should explain how information will be used, give assurances about confidentiality, confirm that no personal consequences will result from the evaluation, and that individuals and their replies will not be associated in the report without consent. It may be helpful for staff or volunteers to ask users if they would be happy for their contact details to be passed to an evaluator for a telephone or one-to-one interview. This will be an opportunity to reassure them and to answer questions about the evaluation process and how findings will be used. Where appropriate, check your interpretation of data with informants. Offer to give them feedback on what is learnt. If personal feedback or a workshop is too time consuming, you could let them have a copy of
the report or executive summary instead. Think about:
- face-to-face explanations for participants with limited literacy skills
- explanations in their first language for participants with limited English skills
- working together with advocates, supporters or carers to make sure that people with learning or other difficulties have enough information about the evaluation.
Decide how long you will keep data and make sure it is held securely. You should be fully aware of, and take into account, the project's confidentiality policy. When collecting information from a minor:1
- remember to get consent from the parent or guardian
- find out the consent policies of agencies involved, such as schools and social services departments.
There is more guidance on working with children in the Practical toolkit.

1 You will need permission from the local authority before interviewing children under 18 in care and from a parent or guardian for children under 16 not in care.

Writing the evaluation report
You should have the basis of a useful evaluation if you planned and focused your evaluation carefully, if you have the right skills, if you chose an appropriate mix of methods to collect the data, and if you analysed it carefully. However, there are two further important stages:
- interpreting your findings
- reporting.
If you do not pay enough attention to these key stages, you risk wasting the time you have already invested. A good report is vital.

Interpreting findings
So far you have gathered your monitoring data over the year and collected some additional data. You can find information on analysing data in the Practical toolkit. The next stage is possibly at the heart of your evaluation activities. You must think about what lies behind your data, that is, interpret it. Interpretation means looking beyond the data itself and asking what the results mean in relation to your evaluation questions. Be wary of assuming that there are links of cause and effect between your project activities and results. Involve other people in this level of interpretation and, where appropriate, acknowledge in your report the possibility of other interpretations. Remember to put outcome information into context. To do this, it will be useful to ask a number of questions:
- Did the level of resources, for example, the level of money or staffing, affect the outcome?
- Was the outcome affected by the way you delivered your service?
- Did external factors, such as local policy, prevent you from achieving your outcome?

Example of alternative interpretations
Children at a play centre are becoming more sociable and creative in their play. Is this down to the skills of the play worker and the activities? Or is it partly because many of the children are beginning to play with each other in their family homes as their parents get to know each other?

Consider whether enough time has been allowed in the project for various things to happen. Were your assumptions, for example about co-operation with other agencies, borne out? Is there a difference in results that relates to the type of user, or to different ways of working with users?
When you ask different people the same question, do you get different answers? If so, why is that? It is not always easy to get a sense of the significance of different findings when you first look at the data – you may lose the most important point in studying and presenting the detail, or overemphasise a minor finding. So try to stand back from your findings and look at the broader picture before finalising your report. Think about how your own biases and interests may affect the data you collect and how you analyse it. For example, have competitive or other reasons led to failure to report appropriately on services provided by another agency?
Drawing conclusions
You are now ready to tell your stakeholders what you have found, and must draw conclusions from your findings. Conclusions do not just repeat the data, but should link clearly to the evidence presented. Avoid generalising from a particular finding and make clear in your report the difference between facts, respondents' views and the evaluator's interpretation. If your report is short, you might summarise your conclusions at the end of the findings section. In a longer report, summarise your conclusions on one section of your findings before reporting on the next topic.

One approach to finalising the report is to organise discussion groups on the draft report. Think of this as your final stage of data gathering, as you may be given new insights and different interpretations. You may also use these discussions to suggest recommendations arising from the findings. This process will improve the report and allow others to feel involved in the evaluation findings. It is more likely that recommendations developed in this way will be acted on. Make the distinction between
data and interpretation clear. If differences of opinion arise, make sure that the evidence is reported clearly in the report, and offer alternative explanations.
Writing your recommendations
Recommendations are based on the conclusions. An evaluation need not always lead to recommendations, and be wary of making recommendations if you do not have enough evidence. It is helpful to be clear about where you are able to propose a course of action and where further information or consideration may be necessary. Be careful not to mix findings or conclusions with recommendations, although recommendations should follow clearly from the findings. Make recommendations in a way that will increase the chances of action being followed through. Be clear about the most important recommendations and those that are less significant. If possible, give a timescale and say how the recommendations should be implemented. Group recommendations together in a logical way and make it clear who should act on the recommendation, for example, your trustees or a partner agency. Recommendations may be about policy, staffing, the target group, service quality, activities, and monitoring and evaluation; you can use sub-headings to separate them. It may also be sensible to group together recommendations that are for immediate action and those that are longer term. Recommendations should be specific, realistic and achievable.
Presenting findings
How you present the report will have an important bearing on your credibility, so consider what your evaluation report will look like before you start putting together your findings. Think first about the purpose of the evaluation and its audiences. The style of reporting and level of detail will vary according to your audience. Will your audience prefer evidence in the form of tables, charts and graphs, or case examples?
Most reports will cover the following:
- an introduction, which may include the project's aims and objectives
- the aims and objectives of the evaluation, including the main evaluation questions
- how the evaluation was carried out
- findings
- conclusions and recommendations.
It is also useful to produce a brief summary, placed at the front of the document, for people who will not read the whole report.
You can find a more detailed example of a report outline in the Practical toolkit. First, prepare an outline of the report and the points you want to cover in the findings section. Discuss with others the outline for the findings section and the data you will draw on. You will find it easier to report your findings if you write a structure for this section. Then, gather all your data together, including previous evaluation reports, and do any analysis that is needed. Remember that you may need to bring data from several sources together. If it is contradictory, you can make this clear in the report.

[Diagram: data – monitored routinely and collected additionally – is assembled, analysed and interpreted, and then reported as findings.]

Put together a framework based on your evaluation questions and other themes as they emerge, rather than presenting findings drawn from different data collection methods. You may want to include findings that are interesting but do not answer the original evaluation questions. But do not feel you have to report on everything. If the information adds nothing to your findings, leave it out. If the findings were inconclusive, say so.

Keep the information needs of your audience in mind. You may need to present the information in a different, possibly more concise, format for your trustees or your funders. If so, a stand-alone summary section is helpful. Your funders may want to see the full report, but may also want some of the information presented in their own standard report format. The way a final report is presented is important. It should look easy to read, with clear headings and sub-headings. Consider using lists and bullet points, boxes and shaded areas to highlight or separate content, and footnotes for extra information. Make sure your numbering system is as simple as possible. Have a clear and comprehensive contents page. Think about sharing the writing of the report. To make sure it reads consistently, edit and discuss each other's drafts.
When writing your report:
- remember your audience and the key questions covered by the evaluation
- keep it short, including only what the reader really needs to know
- use careful wording for negative points and avoid reflecting on individuals
- use plain English, and avoid jargon
- make sure the content follows a logical sequence
- use a standard, readable sans serif font, of at least 12 points in size.
There are guidelines on good practice in presenting information.1
Tables, diagrams and charts
Tables, diagrams and charts get information across effectively and concisely. Look at other reports to see how charts and tables are used and set out. Bar charts are good for comparing two sets of information about the same things, so they can be used to show changes over time. Pie charts give a visual representation of proportions. If you want to show how proportions have changed over time, you need a separate chart for each year. The bar chart below shows how a sexual health project has distributed information leaflets using a number of different channels over a period of time.
Chart 1 Distribution of leaflets, years 1, 2 and 3
[Bar chart: percentage of total distribution through drop-in centres, outreach, schools, community centres, advice centres, social services and other channels in years 1, 2 and 3.]

Chart 2 Distribution of leaflets in year 3
[Pie chart: proportion of leaflets distributed through drop-in centres, outreach work, schools, community centres, advice centres, social services and others in year 3.]

1 For example: Royal National Institute for the Blind (1998) See it Right: Clear Print Guidelines and The Word Centre (1999) Plain English Writing Guide.
The pie chart above shows the proportion of leaflets distributed through different channels in a single year. You can use simple tables to break up and illustrate the text, and further clarify the data. Remember to give charts and tables a chart number and a full descriptive title so the reader does not have to read the text to understand them.
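If you hold the underlying figures in a spreadsheet or a script, charts of this kind can be produced with standard tools. The sketch below uses Python's matplotlib library; the channel percentages are invented purely to illustrate the two chart types and are not taken from the example project.

import matplotlib.pyplot as plt

# Invented figures: percentage of total leaflet distribution by channel in each year
channels = ["drop-in centre", "outreach", "schools", "community centres",
            "advice centres", "social services", "other"]
year1 = [30, 20, 15, 12, 10, 8, 5]
year2 = [25, 22, 18, 13, 10, 7, 5]
year3 = [20, 25, 20, 14, 9, 7, 5]

x = range(len(channels))
width = 0.25

fig, (bar_ax, pie_ax) = plt.subplots(1, 2, figsize=(12, 5))

# Chart 1: grouped bar chart comparing the three years for each channel
bar_ax.bar([i - width for i in x], year1, width, label="year 1")
bar_ax.bar(list(x), year2, width, label="year 2")
bar_ax.bar([i + width for i in x], year3, width, label="year 3")
bar_ax.set_xticks(list(x))
bar_ax.set_xticklabels(channels, rotation=45, ha="right")
bar_ax.set_ylabel("percentage of total distribution")
bar_ax.set_title("Chart 1 Distribution of leaflets, years 1, 2 and 3")
bar_ax.legend()

# Chart 2: pie chart showing proportions for a single year
pie_ax.pie(year3, labels=channels, autopct="%1.0f%%")
pie_ax.set_title("Chart 2 Distribution of leaflets in year 3")

fig.tight_layout()
plt.show()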
Case examples
Case examples describe a particular event or set of events, activity or personal history. They are intended to demonstrate and explain evaluation findings. They can be very useful in demonstrating good and bad practice, illustrating complex services and bringing an evaluation to life. But it is helpful to explain the point you are illustrating in the main narrative.
Quote what people have said in their interviews, or on questionnaires or feedback sheets to illustrate your findings. Make sure you include negative as well as positive comments if they are given. Do not use too many quotations and be careful that you do not use quotations instead of interpreting your data.

Remember that individual respondents and their views must not be identified within a case example, or elsewhere in the report, unless they have given written permission. Be careful to remove or change identifying material from your findings, and be particularly careful when you have a small number of respondents from a particular group, where it may be easy to identify them. Where it is difficult to ensure anonymity, you must get clearance from the people involved for the information you are presenting.

Further reading
Kumar, R (1999) Research Methodology: A Step-by-Step Guide for Beginners, SAGE, London
Robson, C (2000) Small-scale Evaluation: Principles and Practice, SAGE, London
Editing and proof reading
You need to edit your report, to check it for content, meaning and flow. Book time with a colleague so you can review what you have written and get feedback before the report is finalised. After editing, you will need to proof read your report to make sure that there are no typing errors or spelling mistakes, and that paragraph, page and other numbering are correct. These final stages take longer than you think, so leave plenty of time in your timetable.

This section has looked at the key components of self-evaluation. If you have collected monitoring data you will be able to bring this together across a whole year. Once you have thought clearly about your evaluation questions, you can decide what other data you need and how you will collect it. Your report will suggest answers to your evaluation questions, and recommend ways forward.
further evaluation
Basic evaluation concentrated on bringing together monitoring data and supplementing it within a structured self-evaluation exercise. Further evaluation examines evaluation approaches within a basic theoretical context and considers different types of evaluation activity. These relate to the focus of the evaluation enquiry. This section also looks at the relationship between evaluation, quality audits and other audits. Finally, it considers good practice points for managing and reporting your evaluation, whether this is carried out internally or by a consultant.
Focusing your evaluation
The Further planning section raised the importance of focusing the issues or questions for evaluation. These questions may be influenced by changes which are inside or outside the project, or by the stage reached in the project's life cycle.
Formative and summative evaluation
Evaluation can be carried out with different intentions. These are usually described as:
- Formative evaluation – intended to improve and develop the project.
- Summative evaluation – intended to make judgements when a project is completed.
Formative evaluation may begin at the start of the project. It continues through the life of the project and includes evaluation of both project implementation and progress. It will rely heavily on good internal monitoring. Summative evaluation assesses a project's success. Sometimes, this is called an 'end-of-term' evaluation. It takes place once the project is stabilised and has had a chance to bring about changes. It collects information about processes and outcomes, and answers questions about the successes, strengths and weaknesses of the project, such as:
- whether the project met its overall aims
- the extent to which users benefited
- the most effective activities
- whether the project could work in another setting.
Focusing the level
Project level evaluation should ideally be an integral part of the project. Monitoring should start soon after the project is funded, and provide information for periodic evaluation to improve the project as it develops and progresses.

Programme level evaluation examines the overall activities and the resulting changes from a number of funded projects. These projects will have a common, wider aim, such as improving sexual health among young people, or developing the involvement of older people in care services. Information is gathered and put together from project level evaluation, usually once a year. It can:
- identify common themes across a group of projects
- evaluate particular issues across projects, such as partnership working
- help make decisions about project funding and support
- show how well a collection of projects fulfils the broad aims of a programme, and how it brings about policy or institutional changes.
Focusing the subject matter
Evaluations also need to identify particular aspects of the project to study. This may be the way the project is working, what it is doing, or what it is achieving. Basic evaluation looked at focusing on outcomes and impact. This section introduces context and process evaluation. Focusing in this way does not necessarily mean that you research and report on these things in separate evaluations, although you might do so.
Context evaluation
Evaluation can look at how the environment, or context, a project operates in affects it, and this will help you understand how and why the project works. A needs analysis will have established some of the basic contextual factors. Later on, you may need further context information when you modify or develop your project. What other services are available from other agencies? What is the political climate and how has it affected the work? Contextual information is also essential when attempting to reproduce projects and services elsewhere. How does an urban or rural environment, for instance, affect implementation? It is also important to understand how organisational contextual factors might hinder or support project success. Questions might include:
- How do the organisational structure or staffing decisions influence project development or success?
- How effective are decision-making processes?
Process evaluation
A process evaluation will help you to assess the planning, setting up and implementation of a project, and decide how to improve or modify current activities. It will focus on processes – how the project works – and also provide valuable information on progress. Process evaluation is particularly important for pilot projects so you can learn what needs to be improved. Specific purposes might include:
- finding out what is working well
- finding out where there are problems and why
- assessing how users and other stakeholders experience the project, and their use of project services.
You will evaluate implementation at an early stage in the project. Later on, a progress evaluation will show how far you are meeting a project's aims and objectives. It will also point out unexpected developments, and help you to fine tune the project. Evaluating the implementation and progress of your activities will provide an important back up and explanation for the findings of any outcome evaluation. You will be able to put those findings in the context of what was implemented – and why. Without that information it will be very difficult to show the relationship between project activities and outcomes.
Implementation questions could include:
- Is the project staffing in place, and is it appropriate?
- Have activities followed the proposed timetable and plan?
- How is the project promoted and publicised?
- How well are partnership relationships working?
Progress questions could include:
- How are services carried out to allow access for particular user groups?
- How do different project activities relate to stated objectives?
- How satisfied are users and what do they most value?
- What are the effects of project activities on users?
Evaluation approaches
Once you have clarified your evaluation focus, it will be helpful to consider your evaluation approach. While there are many variations, monitoring and evaluation approaches that focus on accountability needs have largely followed the tradition of scientific investigation. These approaches assume the power and value of measurement and objectivity. However, there has been a shift towards approaches that can be used more easily in decision-making, and an acknowledgement that quantitative data is itself an approximation or interpretation. This links to the increasing emphasis on participation by stakeholders throughout evaluation – evaluation that can be used, and evaluation for learning.

Such approaches are labelled 'naturalistic'. Naturalistic approaches assume that there will not always be a definite answer. They focus on qualitative data and description, and they value subjective understandings and different perspectives. Naturalistic approaches, such as case study evaluation and participatory evaluation, acknowledge that all enquiry is value-driven, and are more likely to be sensitive to the different values of programme participants. This fits particularly well with a voluntary sector ethos. In practice, the difference between the two approaches to evaluation is not as clear cut as in theory. Qualitative and quantitative methods may be used in both approaches.
Case study evaluation
Case study evaluation allows you to examine a particular individual, event, activity, time period and so on in greater detail. It emphasises the complexity of situations rather than trying to generalise. This approach may be particularly appropriate when evaluating certain types of projects or
for certain audiences. In programme evaluation you may choose to report on certain projects as case studies. Case studies are used to illustrate findings and offer particular learning. You can use case studies to illustrate specific successes or difficulties, and to try to identify the circumstances in which your services are effective and where they are not. It will also allow you to describe in greater detail the processes of the project. In describing processes, case studies can tell you something about the constraints under which you operate. For example, if your case study is about an individual, you will be able to research what was happening to them in their contact with the project, how they experienced your services or activities, and how the project affected their life. It will allow you to take into account the circumstances that were unique to them, while at the same time identifying the value they experienced from the project. This approach recognises that outcomes are often specific to individuals and will not necessarily be common across a group of people. So each case study does not attempt to generalise the findings, but to look at individual stories. Each case study should therefore stand alone, so make sure that the data you have for case studies is as complete as possible. At the same time, if you have a number of case studies or stories to tell, you will then be able to compare and contrast them and pull out any common themes. This may make your findings more compelling. You may want to let participants tell their story in their own words. If so, you will need to tape an interview or to get permission to quote a diary record or other form of self-report.
Participatory evaluation
We have discussed how stakeholders can take part in evaluation in a number of ways. For example, they can work with the project to:
- formulate the aims and objectives of the evaluation and the key questions
- design and test different data collection tools, such as questionnaires
- gather and analyse information.
Participatory evaluation actively involves all stakeholders in the evaluation process, gives value to different perspectives and provides information of practical value to the project. It aims to help stakeholders identify and solve problems themselves: in other words, to build people’s organisational capacities. It can also strengthen partnerships and encourage an inter-agency approach through joint reflection. This gives project partners a chance to assess their own performance and draw lessons for their own future planning.
In practice, there may be considerable variation in the level and degree of participation. It will be more, or less, appropriate in different circumstances. The level and degree of participation may also depend on whether the evaluation is internally, externally or jointly led, and whose perspectives are dominant – those of the funder, the users or the staff team. Participatory evaluation can be externally led, by people with no direct involvement with the project. An external evaluator may give technical help, or act as a facilitator or supporter, helping stakeholders decide on and carry out their own monitoring and evaluation activities. Some people assume that participatory evaluation is less scientifically rigorous. But participatory approaches have their own systematic rigour, and will often include quantitative measurement. Users can enhance a project by their involvement in setting performance indicators, collecting monitoring data, completing self-assessment reports and other tasks without necessarily putting aside standards of validity.
A model of participation in evaluation
Participation can be seen as a spectrum, running from no participation to full participation:
One-way information – stakeholders know the results
Consultation – views asked about predetermined issues
Dialogue – involvement in the design and input into decision-making on some of the issues
Basic evaluation tasks – stakeholders carry out parts of the data collection
Complex evaluation tasks – stakeholders involved in processing the findings and making recommendations
Collaboration – joint responsibility for the evaluation.
User-led evaluation

A participatory evaluation aims to involve stakeholders to varying degrees in the evaluation process. In a user-led evaluation the control passes to the users themselves or to user-representatives. A good start is to hold a consultation meeting with users to decide the areas of interest and the evaluation questions. Train the user-evaluators in data gathering techniques, including interview skills and data analysis. Users can design questionnaires and other data collection tools themselves, and decide how to collect the information. User-evaluation needs adequate support. You could use a consultant to:
help users think through the technical aspects of the exercise
discuss research methods
observe the interviewers
discuss analysis of the data
comment on the draft report.

Theory-based evaluation

Theory-based evaluation is useful for larger and complex social programmes. It aims to identify the theory that underlies a social programme – the assumptions about what will make it work. This theory is called a programme logic model or a theory of change. It can then be used to decide what must be done to achieve aims, and the key intermediate outcomes that will lead to ultimate long-term outcomes. Tracking these intermediate outcomes can be a good option in the first year or two. Stakeholders are learning how and why a programme works, and what activities need to come before others. This allows the programme to be adjusted, or the theory to be modified, as learning occurs. A programme logic model helps focus the evaluation on measuring each set of events in the model to see what happens, what works, what does not work, and for whom. This may in turn lead to a better understanding of the effect of the programme, and help to find ways of increasing its effectiveness. The attempt to link activities and outcomes throughout a programme initiative means theory-based evaluation can also guide those who wish to apply lessons to similar programmes in other settings. It fits well with the current concern of government and policy-makers with what works and why.

Programme logic: inputs lead on to outputs, which lead on to short-term outcomes, which lead on to outcomes, which in turn lead on to impact.
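If you keep your programme logic in a simple written or electronic form, it is easier to check that every activity is linked to an intended outcome. The sketch below is an illustration only, written in Python for a hypothetical employment project with invented entries; a table on paper or in a spreadsheet would do the same job.

```python
# A minimal sketch of a programme logic model for a hypothetical employment
# project. The stage names follow the chain above; the entries are invented.
logic_model = {
    "inputs": ["2 part-time advisers", "£40,000 funding per year"],
    "outputs": ["120 advice sessions delivered", "60 clients supported"],
    "short_term_outcomes": ["clients produce CVs", "clients report greater confidence"],
    "outcomes": ["clients apply for jobs", "clients enter training or work"],
    "impact": ["reduced long-term unemployment among clients"],
}

# Print the chain so stakeholders can check each link in the theory of change.
for stage in ["inputs", "outputs", "short_term_outcomes", "outcomes", "impact"]:
    print(stage.replace("_", " ").title())
    for item in logic_model[stage]:
        print("  -", item)
```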
Designing an outcome evaluation

When you design an outcome evaluation, the following models are useful.

After project assessment
This option collects information against outcome indicators only after the project is completed. You will not be able to measure the difference from before the project started. You are therefore very reliant on participants’ own memories, and feedback on how they feel that things have changed. One way of making this information more valid is to also use evidence from a number of sources, such as observation, or interviews with staff or others who had contact with project participants.

Before and after project assessment
With this option, you take the same measurements before and after the project. This means you may be able to get a more convincing demonstration of changes for participants over the life of the project. If you think during the planning stage about how you will evaluate your project, you may be able to collect baseline data about your users, about services, policies or the environment before your project starts. You can then make later measurements against these. By assessing the statistical significance of the change you can test whether the result is unlikely to have happened purely by chance. However, remember that the presence of a relationship does not prove cause and effect.

Comparative designs
If you have enough resources, you may be able to look at a comparison group. This may be:
a group of people which has not received project benefits – a control group
a similar scheme run elsewhere.
Comparative studies are sometimes thought to offer more conclusive evidence that outcomes are attributable to the project. However, many projects are unique and cannot be safely compared. For most voluntary organisations, running a comparison group will be a difficult and more costly option, and it is often hard to recruit and keep group members. It is important to consider carefully how participants for the project group – the ‘experimental group’ – and the ‘control group’ are recruited. The groups should be as closely matched as possible – in age, sex and socio-economic factors – other than the fact that one group receives the project’s help. Working with two groups, one of which does not receive a benefit, may raise ethical considerations. The distance that needs to be measured is that between B and D as well as that between A and B.
[Diagram: before and after measurements for the project group (point A before, point B after) and the comparison group (point C before, point D after), showing positive and negative change over the life of the project.]
The group not benefiting from the project may have experienced positive or negative change, and the reasons for this need to be understood. It will be difficult to make sure that there were no differences between the groups before the project, when measurements are only taken after the project. With before and after assessments it is difficult, if not impossible, to isolate all the factors that may have caused change in the comparison group, even if the groups were similar at first. Most studies show relationships and not cause and effect because of the difficulties of eliminating other possible factors.
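If your before and after measurements are numerical scores, a simple paired test can indicate whether the change is unlikely to be due to chance. The sketch below is an illustration only: the wellbeing scores are invented and it assumes the freely available Python library scipy; a statistically significant result still does not prove that the project caused the change.

```python
# A minimal sketch: testing whether before/after change is unlikely to be
# due to chance, using a paired t-test on invented scores.
from scipy import stats

# Wellbeing scores (1-10) for the same ten participants, before and after.
before = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
after = [5, 6, 4, 6, 5, 5, 6, 4, 7, 5]

t_statistic, p_value = stats.ttest_rel(before, after)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")

# A p-value below 0.05 is conventionally treated as statistically significant,
# but it does not by itself show cause and effect.
if p_value < 0.05:
    print("The change is unlikely to have happened purely by chance.")
```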
Emergent designs
There may be situations when all your evaluation questions, and even the methods for collecting evidence, are not clear at the beginning of the evaluation. In this case, you have flexibility to allow important issues and questions to emerge as you do the work, and you design your data collection in response.
Extending your evaluation activities

We have discussed some of the main areas of concern for evaluation – focusing on the extent to which projects are achieving the project aims and objectives and the benefits to users. Yet there are other important issues on which you need to make judgements. If you want to know whether your project offers value for money, or whether you are working according to values of equal opportunity and user empowerment, for example, you will need different sorts of evaluation. However, this does not necessarily mean that you need separate evaluation studies.
Economic evaluation
Funders often want to know whether a project provides ‘value for money’ or is ‘cost effective’. Economic evaluation recognises the choices involved in spending limited resources and focuses on the non-monetary and monetary costs as well as benefits. Costs are the various inputs, both direct and indirect, needed to set up and run a project. The most important cost is often the time spent by staff and volunteers on different activities, and this needs to be multiplied by the value of each person’s time. Other resources to be costed include physical facilities, equipment and supplies.

In a cost-benefit analysis you measure costs and outcomes in monetary terms. The cash flow can be used to work out the percentage annual rate of return. Or you can compare total benefits with total costs: dividing the value of the benefits by the costs gives the benefit-cost ratio. This approach needs considerable economic expertise and extensive work; it is expensive, and its measurements are inevitably imprecise. It is rarely suitable for voluntary sector activity.

Cost-effectiveness analysis can be less demanding than a full cost-benefit analysis. It measures the outcomes and impacts of projects and programmes in non-monetary terms against the costs, for example, cost per job created or client served. Cost-effectiveness is about using the resources you have in a way that produces the maximum outcomes. The questions asked by cost-effectiveness evaluations are useful ones for voluntary sector projects. They ask whether the project has achieved economy, efficiency and effectiveness or, in more simple terms, whether project costs, design and achievements are reasonable. Questions include:
Economy – are the resources (people and supplies) used in the project the most appropriate ones, and are they obtained at a reasonable price? Could the costs be lower without damaging the project?
Efficiency – is the design and operation of the project the best way of producing the outputs? Is there another approach that would be better?
Effectiveness – what outputs and outcomes is the project producing? Are they sufficient to achieve project aims? Are the benefits reasonable in comparison to the cost?
Benefits are usually compared with planned benefits, and costs are usually compared with similar initiatives and against planned costs. A cost-effective project should yield benefits to the target group greater than the overall costs of running the project and should compare well with the costs of similar projects achieving comparable results. While the focus on costs and efficiency issues is useful, a full-scale value for money exercise, involving comparison with other schemes or with the cost of not undertaking the project, requires economic expertise and robust evidence. Also:
There are considerable problems in calculating potential costs and benefits for longer-term outcomes, where costs and savings may lie too far in the future.
It is difficult to put a value on social benefits that do not have a monetary value, and where benefits are for communities rather than individuals.
Finding out the costs of alternative ways of delivery could be time-consuming, and it may be difficult to make comparisons between them.
As with other outcome and impact evaluation, it is difficult to isolate the effects of a single intervention from other influences.
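At its simplest, a cost-effectiveness calculation divides your total costs by a count of outputs or outcomes. The sketch below is an illustration only, with invented figures; the same calculation can be done on paper or in a spreadsheet.

```python
# A minimal sketch of a cost-effectiveness calculation with invented figures.
staff_time = 28000       # value of staff and volunteer time, in pounds
other_costs = 7000       # premises, equipment, supplies, travel
total_cost = staff_time + other_costs

clients_served = 140
clients_entering_work = 35

print(f"Cost per client served: £{total_cost / clients_served:,.0f}")
print(f"Cost per client entering work: £{total_cost / clients_entering_work:,.0f}")
# These unit costs can then be compared with planned costs or with
# similar projects achieving comparable results.
```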
Social Return on Investment
The social return on investment (SROI) methodology was originally developed for social enterprises, and grew out of traditional cost-benefit analysis. It puts a monetary value on the social and environmental benefits of an organisation relative to a given amount of investment. The process involves an analysis of inputs, outputs, outcomes and impacts, leading to the calculation of a monetary value for those impacts, and finally to an SROI ratio or rating. For example, an organisation might have a ratio of £4 of social value created for every £1 spent on its activities. SROI studies rely on judgements at various points in calculating the ratio, but these should be well documented so that the calculation is transparent. SROI studies have been applied to organisations producing social returns, such as helping ex-offenders into employment, where benefits can be seen as clients cease to claim welfare benefits and start to pay taxes, and as savings accrue to the criminal justice system. The methodology may not be suitable or appropriate for every organisation, but it could help to make a good case for providing certain types of services, and it is especially useful if an organisation’s funders require outcomes information in financial terms. Like other outcomes approaches, SROI can help you manage and improve services and make your organisation more sustainable. It can also improve your relationships with stakeholders; stakeholder engagement is an important part of the SROI process and is one of its strengths.
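The final SROI ratio is simply the total monetised social value divided by the investment; the skill, and the judgements, lie in how each impact is valued and documented. The sketch below is an illustration only, with invented figures and proxy values.

```python
# A minimal sketch of an SROI ratio calculation with invented, documented values.
investment = 50000  # total spent on the project's activities, in pounds

# Monetised impacts, each based on a documented judgement or proxy value.
monetised_impacts = {
    "welfare benefits no longer claimed": 90000,
    "additional tax paid by clients in work": 60000,
    "savings to the criminal justice system": 50000,
}

total_social_value = sum(monetised_impacts.values())
sroi_ratio = total_social_value / investment
print(f"£{sroi_ratio:.2f} of social value for every £1 invested")
```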
Evaluating sustainability
A key evaluation question is how sustainable the project is. Will the project be able to flourish after a particular source of funding has ended? You need to ask:
What ongoing funding is secured?
How sound is the structure of the project or organisation?
How has the project linked with other agencies and the wider environment?
What are the social, political, cultural and economic conditions which support or hinder a project’s growth and sustainability?
How sustainable are the benefits – both the outputs and the outcomes? Can the benefits be continued for later groups? If funding and staff are withdrawn, do project users have the skills and resources to continue?
You will be able to answer some of these questions during the life of a project. You may also want to revisit these questions after several years, to check how the project developed and how it achieved sustainability.
Evaluation against values
As well as asking whether your project achieved the changes and benefits it intended, a key question is whether it worked in a way that is consistent with its values. For example, your quality system or other policy documents may state that the project should be user-centred, empowering, environmentally friendly or respect equal opportunities. This is where evaluating your processes is important. You may need data on your consultation and decision-making processes, or to observe how staff and volunteers interact with users. Another important source of data might be feedback from users, partners and other agencies, so make sure you ask them questions that will give you relevant information. Outcomes will also reflect your values, so set outcome indicators that relate, for example, to empowerment. Break down your data, as appropriate, according to gender, ethnicity, age and sexual orientation so you can assess whether project benefits were affected by user profile.
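Breaking outcome data down by user profile can be done in a spreadsheet or with a short script. The sketch below is an illustration only: the records and column names are invented, and it assumes the freely available Python library pandas.

```python
# A minimal sketch: breaking outcome data down by user profile,
# using the pandas library and invented data.
import pandas as pd

records = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "female", "male"],
    "age_group": ["16-24", "16-24", "25-49", "25-49", "50+", "50+"],
    "outcome_achieved": [1, 0, 1, 1, 0, 1],  # 1 = yes, 0 = no
})

# Proportion of users achieving the outcome, by gender and by age group.
print(records.groupby("gender")["outcome_achieved"].mean())
print(records.groupby("age_group")["outcome_achieved"].mean())
```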
Evaluating joint initiatives and partnerships
Working with other agencies may be an important aspect of your project, but may not be reflected in the project aims and objectives. If so, you may wish to cover this aspect of your work in your evaluation activities. What important questions do you have about the partnership? For example:
How useful and effective were meetings and other joint exercises?
What strengths did each agency bring with them?
What were the difficulties and how were they overcome?
How was information and understanding shared between partners?
What changes has joint working brought to each agency?
What resources, including skills and expertise, have been shared or increased?
Should joint or partnership working be continued? How can it be improved?
It is important to be clear about the precise nature of the partnership. Is it a contractual partnership, or simply a collaborative relationship? If you include the evaluation of joint initiatives and partnerships in your evaluation plan it will prompt you to keep good monitoring records of this part of your work. Interviews with key individuals in partner agencies during your evaluation may shed light on their commitment to joint working and help to resolve difficulties. Sharing the results with your partners may help you to find better ways to work together.
Management audits, social audits and other assessments
Evaluation is not the same as audit. Auditing has traditionally been concerned with financial honesty and good practice, and with checking against agreed procedures, controls or standards. Evaluation, on the other hand, aims to make or aid judgements against a set of questions or criteria. However, as audits
have become more participative, and seek to discover how an organisation can become more effective as well as more efficient, the distinction has become increasingly blurred. Audits ask whether a programme is operating as originally intended. Evaluation may do this too, but may also try to discover whether or not a programme has worked, what has made it work and what effects it has on whom.

Management auditing systematically examines the organisation and its structure, its management practices and methods, and is carried out externally and independently.

Value for money audit or performance audit is mainly linked to reviews of publicly-funded programmes. It usually relates to the three Es – economy, efficiency and effectiveness – important in economic evaluation. A fourth E can be added – equity, or doing things fairly – which links performance auditing to social auditing.

Social auditing has grown out of the business world. It links a systematic reporting of organisational activities to the issue of social impact and the ethical behaviour of an organisation.
A social audit should be done regularly, as with a financial audit. It should try to get the views of all stakeholders, possibly using questionnaires and focus groups. While there are strong similarities to evaluation, there is a greater emphasis in this type of audit on public accountability. A social audit report should be published and made available to all stakeholders and the general public, covering all aspects of an organisation’s activities. Organisational self-assessment identifies and analyses an organisation’s strengths, weaknesses and potential, and can be carried out against a set of capacity elements or quality standards and indicators.
Managing evaluation

Whether you are carrying out self-evaluation, or using an external evaluator, or combining both approaches, evaluation requires careful management to avoid or overcome some potential problems. For example:
one or two people often carry much of the responsibility for the evaluation and it may be difficult to involve others
the time and resources involved may prove too much of a burden, resulting in an incomplete or poor quality process
evaluation staff may become demoralised
there may be a hostile response to negative findings or to the process itself.
Self-evaluation may not always be appropriate for your specific evaluation needs. External evaluation can bring a fresh perspective, specific skills and new insights. However, make sure you build in enough consultation time so that the evaluation, while independent, works to relevant criteria and meets your immediate priorities, timetable and resources.
Using an external evaluator
If you decide to work with an external evaluator, help the evaluation to run smoothly by doing the following:
name someone to act as a key contact
be clear about budgets
make sure that you and the evaluator have agreed on a realistic timetable and deadlines
make sure that all the relevant people and documents are readily available
make it clear what length and style of report you want, and whether you want a summary.
It is important that the evaluation continues to ‘belong’ to the project. If you use an external evaluator, make it clear in the contract that you will expect to see and comment on the draft report. As long as your suggested amendments do not distort the findings, it is reasonable to ask for changes, to allow for factual adjustments or differences in interpretation. However, this will be subject to discussion and negotiation with the evaluator. If you cannot agree, both possible interpretations might be included in the report, or the evaluator might refer to the different view in a footnote.

In self-evaluation, an external consultant may help with the technical aspects of the evaluation, help gather specialised information and provide an external viewpoint. You may want to use a social sciences student to help with data input or analysis. You may also want some help to carry out interviews. This is skilled work, so you need someone with experience.

Setting up a steering committee is one way of encouraging participation, learning and sharing. The steering committee should, if possible, include some people with evaluation or research skills and experience. Think carefully about the relationship between the evaluator and the steering committee. Where does control of the process lie? In a self-evaluation, this should remain with the project. With an external evaluation commissioned by a funder, this question of control may need to be clarified.

Other good practice points, whether the project is managing an internal or external evaluation, include the following:
have realistic expectations of what the evaluation can deliver
communicate openly and have respect for the evaluators, telling them about any changes or information that might affect the evaluation
collect data and work to established procedures and with appropriate permission
be sensitive to the needs of the respondents
recognise the need for evaluation to maintain confidentiality
be aware of possible disruption to, or implications for, ongoing work
preserve the integrity of the evaluation by not using findings out of context.
Contractual arrangements

Terms of reference
The terms of reference, or brief, for an external evaluation should set out all the principal issues that will guide evaluation activity. These should be dealt with by the proposal from the evaluator. The contract should refer to these documents and should spell out the responsibilities of each party. The contract you make with any external consultant is important and should be signed and in force before the evaluation begins. It will include or have attached the terms of reference and the evaluator’s proposal, and state clearly the amount and method of payment. It should also clarify issues to do with ownership and copyright.
It will be useful for the contract to detail administrative requirements, and where these responsibilities lie. For example, you may need to:
make databases available, or collate and analyse existing monitoring data
make available any previous reports or relevant literature
contact stakeholders to brief them about the evaluation and introduce the consultant
send out questionnaires, in order to keep costs down.

Ownership and copyright
Be clear from the start about rights and responsibilities. The following questions should be answered:
Who owns the report?
Does an external evaluator have the right to publish the report in his or her own name, without first consulting the project or evaluation sponsors?
Who needs to approve the final document?
Can the project or evaluation sponsor edit certain sections of the report before publishing it?
Can the project reject an independent external evaluation or prevent it from being published?
Unless you have agreed who has authorship and copyright, neither party may be free to use the report widely; this includes website publishing. Joint copyright means that both the author, that is, the evaluator, and the project have rights to the final manuscript. Neither party should publish – as opposed to circulating to an immediate audience – without the agreement of the other. These issues of authorship, copyright and publication should be settled early on and precise procedures put in place.
Practical details
Think through in advance any practical details. For example, are there any constraints on data collection that the researcher should be aware of? Are there any IT compatibility issues or difficulties about access to paper files? Are there any issues concerning confidentiality for staff and users?
Using interpreters and advocates
Sometimes it may be essential to work with interpreters in order to access people whose first language is not English. However, working with interpreters is not easy. Allow enough time to explain the aims and objectives of the evaluation, and how the part they are involved with fits into the whole. Make sure they understand the need for questions to be asked precisely to allow comparison and prevent bias, and that word-for-word responses are essential. Check that the length of responses from the interpreter matches what the informant has said, and make sure that responses are not edited, distorted or embellished. It may be appropriate to use advocates if you work with people with learning difficulties. Remember that advocates are there to help only when necessary. Find out about the respondent’s preferred means of communication and ask questions in an accessible way.

Quality control
Pilot testing will show if questions are likely to be understood by respondents and if they will capture the information that the evaluator wants. Do not skip this stage, because it will save time in the long run. Improve the quality of data collection and analysis by training and supervising data collectors carefully, so that they ask the same questions in the same way, and use the same prompts. A second person should check from time to time that agreed procedures are being followed. This may include procedures for contacting respondents, obtaining consent, and collecting and recording data. Researchers can get bored or tired by a large number of repetitive tasks, and this can sometimes lead to uneven or missing data or transcription errors. Examine the data file for accuracy, completeness and consistency, and check for coding and keying errors.

Data analysis, including coding, is discussed further in the Practical toolkit.
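Some of these checks on the data file can be scripted. The sketch below is an illustration only: the file name, column names and coding frame are invented, and it assumes the freely available Python library pandas.

```python
# A minimal sketch of simple data quality checks on a survey data file,
# using the pandas library; the file name and codes are invented.
import pandas as pd

data = pd.read_csv("survey_responses.csv")

# Completeness: count missing values in each column.
print(data.isna().sum())

# Consistency: flag satisfaction scores outside the agreed 1-5 coding frame.
invalid = data[~data["satisfaction"].isin([1, 2, 3, 4, 5])]
print(f"{len(invalid)} responses have an invalid satisfaction code")

# Accuracy: check for duplicate respondent identifiers (possible keying errors).
print(f"{data['respondent_id'].duplicated().sum()} duplicate respondent IDs")
```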
Evaluation budget
Evaluation needs resources, including time and money. Self-evaluation will minimise costs, although you should still set a budget. Include evaluation plans and costs as an essential part of your initial project proposals. This will show funders that you are keen to learn from the project and that you are able to demonstrate its effectiveness. It is difficult to set a standard for appropriate evaluation cost ratios. However, the cost should reflect some proportional relationship. Some argue that it should be as high as 5 to 10% of the overall project or programme budget. This may not be realistic for many small projects, or indeed acceptable to funders, but you should take into account monitoring activities, management time and other routine evaluation commitments.

You can minimise costs by designing systems that collect information from the start. Costs can also be reduced by conducting telephone or group interviews rather than one-to-one interviews, or by using a questionnaire rather than interviewing. An evaluation budget should consider the following:
staff time
consultants’ costs
travel and other expenses for respondents
data analysis and other research assistance
travel and communications, for example, postage and telephone costs
printing and duplicating
other services, such as interpreting or signing services.
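Totalling the budget lines and checking them against the overall project budget is simple arithmetic. The sketch below is an illustration only, with invented figures.

```python
# A minimal sketch of an evaluation budget, with invented figures in pounds.
budget = {
    "staff time": 3000,
    "consultants": 2500,
    "respondent travel and expenses": 400,
    "data analysis and research assistance": 800,
    "travel, postage and telephone": 300,
    "printing and duplicating": 250,
    "interpreting or signing services": 350,
}

total = sum(budget.values())
project_budget = 90000
print(f"Evaluation budget: £{total:,} ({total / project_budget:.1%} of the project budget)")
```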
Make sure you allow enough time for your evaluation. Qualitative evaluation studies can take longer than you expect if you plan to interview and observe individuals and groups of people. Your timetable should allow enough time to carry out an online survey or postal questionnaire and analyse the data. Allow twice as much time as you think you will need to write your report.

This section has examined different types of evaluation and evaluation approaches. Understanding these will help you to be clear in your discussions with stakeholders about the possible focus for your evaluation. It has considered some practical aspects of managing evaluation. All your efforts have been focused on producing a clear, illuminating and persuasive report. This is the beginning of the next phase of the evaluation cycle – utilising the findings.
Further reading
Clarke, A and Dawson, R (1999) Evaluation Research: An Introduction to Principles, Methods and Practice, SAGE, London
Gosling, L and Edwards, M (2003) Toolkits: A Practical Guide to Monitoring, Evaluation and Impact Assessment, Save the Children, London
Van Der Eyken, W (1999) Managing Evaluation, Second edition, Charities Evaluation Services, London
section 4 utilisation
Basic utilisation
Disseminating evaluation findings
Using evaluation findings
Further utilisation
Valuing the findings
Using evaluation for management
Using evaluation for policy change
Using evaluation for strategic planning
basic utilisation

Once you have presented your report, it may be tempting to file the evaluation away. This will be a waste of all the time, energy and resources spent on the evaluation and risks losing the goodwill of everyone involved. Importantly, it will also be a lost opportunity to improve what you do. This section suggests ways that you can disseminate, or pass on, your findings, and how you can use the findings to demonstrate your project’s progress, to make adjustments and to plan the future direction of the project.
Disseminating evaluation findings
You can use your findings in many ways for different audiences. When you are planning your evaluation, think about who would be interested in it, or who might be influenced by the results. Other projects can learn from your experience as well. How the evaluation is to be disseminated and used should be part of the evaluation design. If you have involved stakeholders from the beginning of your evaluation, you will have planned together how the lessons learnt will be processed. It is also more likely that there will be agreement about the findings. Oral presentation of the findings as well as the written report is very useful and should be built in. In a larger organisation formal feedback systems will most likely be needed. You can summarise your monitoring and evaluation results in your annual report, and thereby publicise information to a wide audience. However, this medium is not appropriate for too much detail or critical analysis. There are a number of ways you can disseminate your findings:

Within your project
at management committee functions
staff meetings
annual general meeting
in the newsletter.

With other stakeholders
distribute copies of summary sheets of your findings to your partners
write brief articles in other organisations’ newsletters
write a press release
include key findings in your publicity and promotion
present key findings at forums you attend.

Think also about using the following:
training events and conferences
special items in newsletters
local radio interviews
the internet
displays
videos.
It is encouraging to give feedback to users and others involved in the evaluation, to show the results of their efforts. Sharing the information with other agencies makes sure that they can learn from what you have done. Your evaluation can also provide good material for publicity and public relations. People will see that your organisation is actively providing interesting or useful information for the partnerships and networks in which your project is involved.
Using evaluation findings

Providing information to funders and other stakeholders
Monitoring and evaluation will provide useful information for funders about the level of activities and benefits for your users. You will be able to give examples of what has worked well, and what your users most value. You will also be in a better position to make a good case to funders for continuing or developing activities. Make it clear to them that you have effective monitoring and evaluation systems in place, and show how you have learnt about and adjusted services in the light of your findings. Use information from the evaluation to build up credibility with other agencies and increase the likelihood of referrals. You can also use your evaluation findings for publicity, for lobbying or campaigning, for advocacy and to highlight gaps in services. Think about how you can use the media, such as talking on local radio or inviting a journalist to meet project workers and users.
Learning from evaluation
Sharing the information within your project will help you to become a learning organisation. Management can improve its decision-making, and staff and volunteers will appreciate the value of the work that they do and understand how they can make further improvements. Information-sharing can help you assess whether the project is still meeting needs, and whether staff have the right skills and are working to quality standards. Once the immediate reporting back has taken place, make sure that dates are set for action so that impetus and enthusiasm are not lost. It may be possible to make small-scale adjustments to your project after a report-back meeting. For example, you might:
adjust workloads
change your monitoring systems
change promotional material
introduce new training
increase your quality control.
Using evaluation for organisational planning
Managers may commission an evaluation because they need more information to decide between different courses of action. The evaluation is therefore designed to provide decision-makers with knowledge and information to make informed choices. Your evaluation should show which parts of the project are working, for what people and in what circumstances, and provide a warning if something is going wrong. These are key findings and you need to decide what action to take. Is extra funding needed? Are new activities required? Do staff need extra training or skills? You may need to discuss some recommendations at other organisational planning meetings and on development days, with the aim of improving project delivery and services. Draw up action plans, and make sure the project supports managers and staff in carrying out recommendations. You may, for example, need to change job descriptions and working methods. The evaluation will also provide information for your plan for the next year. It will help you to review your objectives. Are your services or activities the right ones to achieve the intended change or benefits? If the project has brought about some unexpected results, how will you take those into account in future planning? You may need to gather more information about the outside world, for example local strategies and other service provision, before making decisions about changing services.
The evaluation may give you clearer information about who is using your services, about your members, or who you are reaching with your information or publicity. This will help you to think more carefully about who you are not reaching. If the findings point out areas where need is greatest or least served, you may need to consider redefining your target group. You may need to carry out more publicity or establish new contacts and networks. It may be that you need to follow up your evaluation with a more in-depth needs analysis. Your evaluation will also allow you to review your targets for outputs and outcomes. If you have not met certain targets, or if you have exceeded them, then you should be able to set this against what you now know about the capacity of the project and the performance of other agencies. Your evidence should be strong enough to show if there were good reasons for a lower than expected performance, whether targets were set realistically and whether you should adjust them.
Give managers, staff and partners an opportunity to comment on the way the evaluation was carried out, so that lessons can be learnt for future evaluations. File the evaluation data and report for future reference. Use the experience of self-evaluation to examine how you might improve the quality and usefulness of the information you are gathering.
This section has covered the final stage of the self-evaluation cycle. The starting point was planning for the project itself, and defining the evaluation questions. By systematically collecting data, and presenting your evaluation findings clearly, you will be able to demonstrate your progress to your stakeholders. You will also be able to make any adjustments needed to keep your project on track to achieve your aims and objectives.
Further reading
Lyons-Morris, L and Taylor Fitz-Gibbon, C (1987) How to Communicate Evaluation Findings, SAGE, London
further utilisation

Basic utilisation discussed the need to disseminate findings and to use them to demonstrate progress and to feed back into, and guide, the management of the project. This section examines further how you might use evaluation findings internally and externally for three main purposes:
management
- reviewing targets and target groups
- incorporating quality improvements into planning
- reviewing key resources
policy change
strategic planning and development.
Valuing the findings
To make it more likely that evaluation findings will actually be followed up, decision-makers must be receptive to evaluation and believe that it can make a difference. In organisations which have an established culture of monitoring and evaluation, evaluation findings are more likely to influence the decision-making process. Negative findings may well be ignored by project staff if people receive them as a personal criticism or see them as threatening. Whatever the findings of the evaluation, they may be welcomed by some groups and rejected by others. There are a number of factors that influence whether findings will be used. It is important:
to provide information that is really useful
to deliver the information when it is needed
that stakeholders consider the evaluation to be of a high standard
to communicate evaluation findings appropriately – evaluation stands little chance of making a difference if decision-makers do not read the final evaluation reports because they are badly written or presented.
Using evaluation for management
Evaluation findings are most likely to be used when people recognise their immediate relevance to the project. Staff and volunteers are under pressure in their daily work routines and will need motivation to use evaluation findings and make changes. Work towards changing the culture of the organisation, so that people are receptive to new ideas and challenging feedback. Evaluation can be helpful to provide information on pilot projects, or to test out new service innovations. With routine evaluation also, use the lessons learnt about what you could do better, or differently, in your operational planning. Evaluation can be useful to identify areas of unmet need and areas for service development. Do you need to:
change the way the project is managed?
reallocate resources?
expand or change direction?
Using evaluation to set service delivery standards
Evaluation should give you some important information about how you deliver your services to users, how this affects user satisfaction and how service delivery affects the outcomes for users. For example, user satisfaction and trainer observations should allow you to set standards for your training. These could be about such things as the maximum number of participants, the quality of training materials or accessibility for disabled people. Other examples of service standards that evaluation might help you to set include:
the ratio of staff and volunteers to users
staff and volunteer training and supervision
reception care and information
food quality and the observance of religious and cultural norms.
Reviewing key resources
Your evaluation should not just look at the results of your activities, but should relate these to the project’s inputs. How have activities been affected by the project’s management structure, its staffing or funding levels? Are the project’s physical facilities unsuitable or is the project transport inadequate? Do you need improved information technology to support the project? If the evaluation has not drawn clear conclusions or made recommendations about these issues, make sure that the evaluation review looks at any implications the findings may have for key resources. Financial or human resources may need to be reallocated, or you may need to raise more income in order to develop new services or activities. You may have to do further feasibility work to make a good case to funders.
Using evaluation for policy change
Evaluation can play a key role in highlighting the effect that wider social structures and policies have on your own work and on the lives of the people you work with. For example, an evaluation of services for disabled people might examine local government policies and provision, and explore how they influence or constrain service provision. An evaluation of services for people with learning difficulties might look not only at service users’ views but also at how wider community care policies affect the experiences of service users. Policy is influenced by a combination of factors, including the assumptions, personal ideology and interests of the policy-makers. Although the processes involved in policy-making are complex, evaluation can be designed not only to improve practice, but also to improve and change policies at many levels.
When you design your evaluation, think about policy issues and your evaluation aims relating to them. These policies could range from operating policies in your own organisation to the policies of larger systems, for example of your local authority. What levels and types of policies are you trying to change? Who are you trying to influence? When you report on your evaluation, think about who your audience is. If you intend to publish the evaluation, this clarity about your audience is essential. Communicate the lessons you have learnt directly and simply. Be direct about what types of policies might be effective, based on findings. Communicate what you know to community members and project users. This can gain you allies to help you influence policy-makers. Organisations can use evaluation to develop their advocacy and campaigning role, the process of evaluating being an essential part of working for social change. You may need to work with others, using evaluation to strengthen networking and collaboration.
Using evaluation for strategic planning
There is an important role for evaluation in strategic as well as operational planning. The strategic planning process starts when you have analysed monitoring and evaluation data.

Steps in strategic planning
Routine monitoring, evaluation and other data collection feed into:
Step 1 Analyse data
Step 2 Assess strategic options
Step 3 Agree strategy
Two useful tools in the strategic planning process are:
strategic environment: SWOT analysis (Strengths, Weaknesses, Opportunities and Threats)
situational or PEST analysis (Political, Economic, Sociological and Technical issues affecting the project – the need for the project).
Framework for a SWOT analysis
Internal factors (for example, internal resources, skills and organisational structure): positive factors are Strengths, negative factors are Weaknesses.
External factors (for example, political, social and economic environment; external stakeholders): positive factors are Opportunities, negative factors are Threats.
Monitoring and evaluation data can provide valuable information for developing both SWOT and PEST analyses if you choose to use these. You need time to draw out conclusions before summarising the strategic position of the organisation and identifying essential questions.
You have now reached the final point in the planning, monitoring, evaluation and utilisation cycle as your review process feeds back into project planning.
[Diagram: the planning, monitoring, evaluation and utilisation cycle – aims and objectives feed into strategic planning, then operational planning, then implementation, then evaluation of both process and outcome, which feeds back into aims and objectives.]
Voluntary organisations vary greatly in their purpose and nature, size, management structures, and in the resources available to them. They will also differ in the range of monitoring and evaluation activity they choose, or are able to carry out. Whatever the profile of your project, or the complexity of the systems you adopt, you are likely to increase effectiveness if you clarify and plan project activities, obtain information routinely and systematically about your services and activities, and make periodic judgements about how well you are doing.
Finally, using your monitoring and evaluation findings is what makes the time and energy you have invested worthwhile. You involved your stakeholders from the start, you defined the important questions together, and at the end of the process you produced valid and reliable findings and presented these clearly and persuasively. This will all be wasted if you file your report away in a drawer. Instead, you will have a powerful tool to demonstrate the value of what you are doing, plus practical lessons to develop the project. Whether your project is small, medium or large, you will want to do everything in your power to be – and be seen to be – as effective as possible. This is vital for the health and sustainability of your project and, more importantly, for your stakeholders and the people you exist for. Effective monitoring and evaluation will help your organisation to provide services of the highest possible quality, and embody the highest standards of integrity, credibility and accountability. It will make sure that you are working with the greatest possible effectiveness and efficiency, that you provide value for money and, above all, that the work you do will make a real difference.
Further reading
Patton, MQ (1997) Utilization-focused Evaluation, Third edition, SAGE, London
section 5 practical toolkit
Collecting and analysing data
Introduction
Establishing credibility
Documentary data sources
Collecting data directly from individuals
Data collected by an independent observer
Data collection matrix
Sampling
Data analysis
Data collection tools
Introduction
Tool 1 Quarterly monitoring report
Tool 2 Outline evaluation report
Tool 3 Interview topic guide
Tool 4 User satisfaction form
Tool 5 Intermediate outcome monitoring form
Tool 6 Individual assessment tool
Tool 7 Outcome assessment: participant self-report
Tool 8 Telephone interview guide
Tool 9 User interview schedule
Tool 10 Participatory learning and action (PLA) tools
Further reading
Glossary
collecting and analysing data

Introduction
An important part of the early phases of evaluation is to think through the evaluation focus and the approach appropriate to your evaluation questions. You also need to be clear about data collection: where from, how you will collect it and how you will analyse it. The Practical toolkit gives you an overview of data sources and data collection methods, and provides more detailed guidance on specific methods, such as interviews, focus groups, questionnaires and observation. It introduces key aspects of data analysis and has some checklists and examples of tools that you can adapt.

Establishing credibility
Whether you do a self-evaluation or an external evaluation, it is very important that the evaluation is seen to be credible. This will rely very largely on what stakeholders think of the perspectives and skills of the evaluator, the techniques and methods used, and the soundness and accuracy of the findings. It will be helpful in establishing credibility to be clear about stakeholders’ expectations, and to discuss in advance the methods and approaches that are most appropriate for the specific evaluation and project. When choosing data collection methods, first think about the skills and resources you have available. Think also of what will be most appropriate to the sensitivity and complexity of the issues, and what will be most appropriate to the project values and your stakeholders, including your users. Finally, you need to consider:
Reliability – how consistent is the measure? What is the likelihood of getting the same results if procedures are repeated?
Validity – are you measuring what you intend to measure?
Bias
Bias occurs when findings are unduly influenced by the way data is collected, analysed or interpreted.
When data is collected. For example, the researchers themselves, or the way questions are asked, may influence how people respond. If questions are not clear enough, they may be interpreted differently by different people.
When data is analysed. For example, if data is not broken down to show differences between groups, it may not reflect the true differences within a sample.
The effects of bias can be limited through careful training, cross-checking and supervision. Analyse your own biases as well and see how these might affect data collection and analysis. Being aware of and monitoring any possible bias may enhance the validity of the results. Some measures that can be taken include:
eliminating bias as far as possible by ensuring questions are asked with the same wording, and in the same manner
making sure that groups chosen for comparative purposes share as many characteristics as possible, except for the variable you are studying
obtaining evidence to answer your evaluation questions from different types of sources. This is called triangulation.

The term triangulation means getting evidence or taking measurements from different points. You can use triangulation by looking for different sources of data on the same area of enquiry or applying different methods to the same data sources. For example, you may wish to ask a group about the benefits of working together, obtain feedback from the group facilitator, and observe the group yourself. If you do this, you will have greater confidence in your evaluation findings. It may, of course, result in contradictory findings, and it will be your task to consider explanations for such differences.

The following section gives a detailed review of data collection methods. There are three major sources of data:
documentary
obtained directly from individuals
obtained from independent observers.

Documentary data sources
Documentary sources may raise questions that it will be useful to tackle in other parts of the evaluation, so it is best to do this work at the beginning of the study.

Literature search
A review of other published or unpublished material can be important to set the project in a wider context. This secondary data may be government reports or university and voluntary sector research. The internet gives access to wide-ranging information that may be useful for your evaluation.
Desk research
This is a review of records, databases and statistical monitoring data. You may need to look at these over a period of years. For example:
records about members and users
planning documents
policy and discussion papers and reports
publication sales and distribution figures
minutes of meetings
correspondence.
Desk research is valuable because it helps you understand how a project has developed, and how it is managed and resourced. It is essential for implementation evaluation. Client-based data, such as case records, may not only tell you who you are seeing and when, but may also provide some outcome data.
Collecting data directly from individuals

Questionnaires and surveys
A questionnaire, with a list of questions that everyone is asked, is a popular method to get information from users. Surveys collect information in a standardised way from a large number of people, very often using a questionnaire. They are useful for collecting a wide variety of data, particularly on opinions and attitudes. A questionnaire can be quite user-friendly, and put fewer demands on the users and project than other methods. One advantage is that the same set of questions can be used as often as needed, so that you can make comparisons over time, or between groups of people. Responses can be made anonymously and questionnaires are a relatively cheap and efficient way of reaching large numbers of people or covering a wide area. Disadvantages are that response rates could be low and they call for a level of literacy among respondents. It is difficult to get complex information, and to control and know how people interpret questions. Questionnaires are used in two ways:
A schedule of questions for face-to-face interviews, with the form filled in by the interviewer.
A self-completion questionnaire, filled in by the respondent. These are often mailed out, but short and simple questionnaires can also be handed out. You can ask people to fill them in on the spot, which should increase the response rate. Very complex questionnaires are better completed face-to-face or over the phone, to make sure that questions are understood.
The design of the questionnaire is important. To get the information you want, it is important to ask questions in a way that people will understand – and to minimise the possibility that people will interpret their meaning differently. Try to:
learn from questionnaires that other agencies have sent out
involve your users wherever possible to make sure questions are phrased in language they use
pilot test the questionnaire, that is, ask at least two or three people to fill in the questionnaire as a ‘dummy run’ to see if it works as planned.
Online surveys are increasingly used as an efficient way of reaching a target audience; they have the potential to reach large numbers of respondents, provide instant collation of quantitative data, and permit statistical analysis when the data is downloaded into a spreadsheet file. The survey can also be printed in hard copy for those respondents unable to access a computer, and the additional data can be inputted into the online survey. As with other questionnaires, it is important to limit the number of questions and to design the questionnaire thoughtfully. If certain information is essential to your survey, you may need to make some questions compulsory. Think also about the order of the questions, placing the most important ones early on. Many elements of good questionnaire design are common to both postal questionnaires and online questionnaires. The following sections cover some of the important points to bear in mind.
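Once online survey responses are downloaded into a spreadsheet file, collation and simple statistics can be scripted as well as done by hand. The sketch below is an illustration only: it assumes a CSV export with invented column names and the freely available Python library pandas.

```python
# A minimal sketch: collating downloaded online survey responses with pandas.
# The file name and column names are invented for illustration.
import pandas as pd

responses = pd.read_csv("online_survey_export.csv")

print(f"{len(responses)} responses received")

# Count answers to a pre-coded question.
print(responses["how_did_you_hear_about_us"].value_counts())

# Average rating on a 1-5 satisfaction scale, overall and by user group.
print(responses["satisfaction"].mean())
print(responses.groupby("user_group")["satisfaction"].mean())
```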
Designing the questionnaire
Once you have decided to use a questionnaire, spend time on its design, as this will play a vital part in the quality of the data you get back. If you can, get the help of someone experienced in designing and analysing questionnaires. They can help you make sure that data can be easily processed, and think through how to link different bits of information together, such as profile data and attendance, or length of membership and satisfaction levels.
There may be benefits in having questionnaires and surveys that can be completed electronically. You will need to consider this in the design. Think about your target group and the sort of format that might be needed to make sure that all users can fill in the questionnaire. Do you need a large type size for people who are partially sighted? How will you include people in your survey who are not literate, or who cannot speak English, or who are physically unable to write?
When you design your questionnaire:
Have a cover sheet or introductory paragraph which:
introduces the questionnaire and the evaluation
promises anonymity and confidentiality where possible
asks respondents for consent to any follow-up contact or to clarify or explore answers.
Give simple, exact instructions on how to answer each type of question, for example, whether to tick a box or circle a number.
Keep the questionnaire as short as possible. Include only questions you need.
Use tick boxes where appropriate.
Provide enough space for answers.
Double-check question numbering.
Design a clear and attractive layout with a large, clear font.
Include instructions for return of a postal questionnaire, with a time deadline and a stamped return or freepost envelope.
How to write good questions
Questions ask the respondent for factual information, opinions or preferences. These can be designed to get two types of answer:
open-ended, with free response
closed, or pre-coded.
Pre-coded questions force choices from a limited range of options but have the advantage of being easier to translate into statistical analysis. Many questionnaires combine closed with open questions. Make sure that if you ask respondents to tick boxes you allow for negative as well as positive responses or outcomes to be shown. Allow also for some open-ended questions on areas for improvement or for wider-ranging comment.
When writing questions you should:
Keep questions short: no more than 20 words where possible.
Use clear and simple language that suits people filling in the questionnaire. Avoid jargon, slang or unfamiliar terms.
Have questions that follow a logical sequence.
Make sure that all questions are relevant to the respondent and can be answered by them without significant time or effort.
Make sure that questions do not have more than one possible meaning and cannot be interpreted differently by different respondents.
Relate questions to a specific time period, as in: 'Over the past week, how many times have you …?'
When asking respondents about the frequency of an event or activity, specify a suitable timeframe, such as 'less than once a week', 'once a week', 'more than once a week'.
Avoid asking two different things in the same question.
If you want a detailed answer, make sure the question will give you one. For example: 'What three things, if any, would you change about the course?'
Provide a category for answers such as: 'Don't know', or 'Did not attend'.
Ensure responses are quantifiable by providing boxes to tick, or a choice of numbers on a scale. You can leave space for additional qualitative responses as well.

Make sure you get a good response
You are more likely to get a good response if you:
pilot the questionnaire first to iron out any potential problems
try to mail out at least three times more questionnaires than you need responses
send a reminder
use a freepost service or a stamped addressed envelope for responses
offer it by email if people prefer.
Face-to-face interviews
An interview is more than a one-to-one conversation, because the interviewer retains control, and gets responses to specific questions. A record is kept of responses. Advantages of interviews are that they do not require literacy, and you can check that your questions have been interpreted correctly and that you have understood responses properly. It is also a more personal approach for sensitive subjects. Disadvantages include the lack of anonymity, that it is a time-consuming method and that it is difficult to achieve consistency among interviewers.
You can hold interviews with an individual or with a group. Use group interviews when you have limited resources or when an individual response is less important. Remember that in group interviews people may be reluctant to disclose personal views or sensitive information. Identify key people with whom a one-to-one interview is most important. Interviewers need to be able to establish rapport with the respondents. Interviews should usually be held in a convenient environment where the respondent feels most comfortable. This might mean that you have to travel to their home or office.
It is good practice to send people an outline of the questions or topics you will cover in interviews. This may be reassuring and allows them to think about your questions in advance.

Recording interviews
You will be able to take notes during most interviews. This is important, because if you wait until afterwards to write things down, you will forget much of what you heard, and your own biases are more likely to intervene. Type up your notes as soon as possible after the interview. You may find it useful to tape a lengthy interview. This will allow a detailed analysis and you will be able to use accurate quotations in your report as evidence. Always get permission before taping. Transcribing a taped interview can be time-consuming and costly. One option is to use notes of the interview as the basis of your analysis. You can then use your tapes selectively to supplement the notes.

Types of interview
Structured interview
In a structured interview the interviewer only asks questions that have already been prepared in an interview schedule, using a set order and exact wording. Interviewers are given detailed instructions on how to ask the questions.
Sometimes all the questions are closed, and the respondent chooses answers from fixed responses. However, some open-ended questions can be included; the instructions to the interviewer might then include some probes used to tease out further information. It is important that the wording of questions is clear and does not vary, so that different interviewees do not interpret questions differently. The advantage of the structured interview is that it is focused, and the answers will be more consistent and easier to compare and analyse. The disadvantages are that the method is inflexible and the questions may not be the most relevant for the individual respondent. So you may get a distorted picture of their views, impressions and reactions.

Semi-structured interview
A semi-structured interview follows a less rigid format, with open-ended questions designed to draw out more qualitative information. The schedule is used as a prompt to make sure that all the required topics are covered. Questions do not have to follow a predetermined sequence and the interviewer can explore the answers in greater depth. The interviewer is free to vary the exact wording of the questions and to probe and follow up answers. A semi-structured approach can be particularly useful for group discussions.

Unstructured interviews
These are sometimes called depth interviews, as they encourage the interviewee to reveal in depth any aspects they want to raise. Unstructured interviews are flexible, can be tailored to the individual and can lead to a greater understanding of the respondent. Their weakness is that information is likely to be different for each interviewee, is less likely to be systematic and comprehensive, and data analysis can be difficult.

Combining interview styles
You might combine interview styles. There may be a small set of specific questions that you want to ask everyone, and a number of topic areas which can be explored in a more flexible way.

Telephone interviews
Telephone interviews are particularly useful for getting feedback from busy people. They can be used to obtain quite sensitive or complex information, or where you need to contact a large number of people in their home or office. In some situations you may be able to conduct lengthy telephone interviews, but try not to make them longer than 20 minutes.
It is important to have a well-thought-through schedule of questions to ask. You should put them in order of importance, as you may only be able to ask the most important ones if you are to keep the interview brief. Having a tight structure for the responses makes conducting a telephone survey easier, as many of the responses can be recorded in tick boxes. However, as with face-to-face interviews, you can get in-depth answers to questions by using more open-ended questions. It is hard to listen to what the respondent is saying and write down at the same time. It is a good idea to record the interview as well; this means that you can get accurate quotes and go back to the tape where you have gaps in your notes.
These good practice points may help you get more out of telephone interviews:
Get consent to the interview in advance.
Make an appointment for the call just as you would for a face-to-face interview.
Suggest in advance any information that respondents might find it helpful to refer to.
Allow enough time for making successful contact with respondents.
Think about how to keep your call confidential, if appropriate.
Careful sampling is especially helpful for keeping the task manageable. Use a larger sample than you will need, to allow for difficulties in making contact.
Group interviews
Advantages of group interviews are that they can lead to the group identifying solutions and agreeing action. The opportunity can also be used for group members to raise issues that may not have been on your agenda. Disadvantages include the possible domination of a group by one or more individuals, and that group views tend towards agreement, rather than allowing for different views. Consider having a separate note taker and remember to record and use the whole discussion, not just the points agreed by the group as a whole.
A group interview or discussion needs careful planning and preparation. It may be possible to use an existing group that meets regularly for another purpose. At other times a group may need to be put together from scratch. Group interviews are not good for revealing sensitive information, and the facilitator needs skills to:
encourage each person to speak
keep the discussion on track
recognise important issues and probe if necessary
summarise and check back.
To help you work out your questions for interviews or discussions, it can be helpful to hold some unstructured interviews with one or two of your respondent group first. Ask open questions about what they feel are the key achievements and issues. Although their views will be individual, this will help you develop your final list of questions.

Focus groups
The essence of a focus group is that it is 'focused' on a limited number of questions based on a core topic. As such, it is different from a more open, flexible group interview, which may be varied in size and scope. Focus groups usually have eight to 12 people. They can be used to clarify specific topics that can then be discussed by a larger group, or to help set the questions for the evaluation.
There are a number of other key features:
a facilitator guides the discussion
sessions last for one and a half to two hours
recording is usually word for word.

Choosing focus group participants
Whenever you can, try to make sure that participants are as similar as possible in terms of age, gender, social class and other characteristics, to encourage a good communication flow. Some people argue that ideally participants should not have met before. However, to be practical, it may be easier to interview a ready-made group, if one exists.
If your participants are unfamiliar with the term 'focus group', it might be better to call the group an informal discussion.

Running group interviews and focus groups
Group interviews and focus groups can be quite lively and energising. Group members support each other to communicate and people 'spark' ideas off one another. However, they need considerable skill to run and can be time-consuming and difficult to analyse.
The facilitator guides and leads the participants through a focused discussion on the topic to be investigated. A topic list should be prepared in advance to make sure that all relevant issues are covered during the meeting. In particular it is the facilitator’s responsibility to make sure that all participants have an opportunity to speak and not to let one or two individuals dominate the discussion. Make sure that the venue is fully accessible and find out if anyone in the group has specific communication needs. It is important that the note taker can write quickly and accurately. If you decide to tape the discussion, always ask participants’ permission first and remember that transcribing tapes takes a long time.
Participatory learning and action
Participatory learning and action (PLA) methods combine a range of interviewing and group work techniques. Facilitators work with participants interactively, and encourage them to express themselves freely, visually and creatively, and then to discuss what they have expressed. PLA has been developed from work in countries in the developing world, where it has been called participatory rural appraisal (PRA) or participatory appraisal (PA). The term participatory learning and action was adopted to emphasise the need for the approach not just to generate data, but also to involve participants in change. So the methods are best used within an overall approach to involving users.
The approach was designed to reach and hear the experiences of the most marginalised and isolated sections of communities. It is intended to allow communities themselves to analyse their situation and find solutions, at the same time recognising that people have diverse views. There are a number of PLA tools. Examples of these are:
drawing, used to illustrate ideas
timelines, which show the sequence of different events over time
ranking exercises, where things are placed in order
participatory evaluation exercises, such as evaluation wheels
matrices, which display information such as preferences easily and visually
mapping exercises, used, for example, to show the social and economic features of an area, or changes over time.
At first, try to work with people who are familiar with PLA to increase your confidence in using these methods. Remember that it is important to record the discussion that takes place about the visual descriptions. You can increase the validity of the data by getting information from a number of sources, that is, triangulate the data.
Diaries and work schedules
Diaries can be used to collect information from staff or users. They can provide an individual record of staff members' or users' day-to-day perceptions of a project. Diaries or work schedules, which set out actual daily work routines and the amount of time spent on different tasks and activities, can be useful to develop a profile of a project, or of a particular task within a project. But they need to be designed so that people can keep them up to date easily. A diary can take many forms, ranging from a set of quite detailed questionnaires bound together, to a series of key themes on which to respond, to a blank page for each day. A relatively flexible format, but with some focusing prompts, is often used.
Using tests and scales
Any form of test or scale should be relevant and sensitive to your target group, and should use appropriate language. Most achievement tests measure against either:
a norm, that is, performance compared to a previously tested group, or
criteria, for example, if a student has gained specified skills.
A standardised test is one that has been rigorously developed and tested for its reliability and validity. Those measuring attitudes, beliefs or other psychological aspects are referred to as psychometric tests. Standardised tests, mainly used in the health care field, may not be transferable to a voluntary sector project because of their specialised and clinical nature. Although you may construct your own test instrument, be aware that there may be limits to its reliability and validity.
There are several types of scales, which allow measurement against a predetermined rating. They can be used in a number of ways, often in questionnaires and in observation, and are often used to assess attitudes and satisfaction levels. The most common scales require a response to a number of statements. There may be a simple choice of agreeing or disagreeing with the statement. Or, as in the Likert-type scale, one of the most popular methods of measuring attitudes, there is a range of possible responses as shown below. Statements may be worded in a positive or negative manner. A scoring scheme is usually associated with the responses. For example:
strongly agree = 1
agree = 2
neither agree nor disagree = 3
disagree = 4
strongly disagree = 5.
A written scale may be replaced by a combination of written and numerical scores. For example: Strongly agree 1 … 2 … 3 … 4 … 5 … Strongly disagree. Always make clear the association between the scores and possible responses. If you want to avoid a neutral or undecided middle category, use an even number of choices, for example, 4 or 6. The two examples that follow ask for levels of satisfaction from training course participants and from recipients of a newsletter.
Please assess the following elements of the course by ticking the boxes 1 to 5, as follows:
1 Very poor, 2 Poor, 3 Average, 4 Good, 5 Very good
a Course content
b Tutorial groups
c Information materials
d Course administration
e Social activities
Please show your satisfaction with the newsletter by ticking the boxes 1 to 5, where 1 shows that you are very dissatisfied and 5 shows that you are very satisfied.
a Topics covered
b Design and layout
c Clarity of language
d Length
e General interest and relevance

To find out why respondents have given a particular score, it is helpful to design your questionnaire or other data collection instrument to allow for explanations or comments to be recorded. You can also investigate attitudes by using ranking. Here, respondents are invited to show their preference by putting various factors in order of importance. This could be done by using a numerical ranking system or, for example, by arranging statements or pictures visually.
Working with children
Don't be afraid to bring children into your evaluation as they often have clear ideas about their problems, priorities and expectations. However, be aware that if adults are present, such as teachers or social workers, this may put them off talking or they may not say what they really feel. There may be child safety issues if you interview them alone, so make sure that you are visible when interviewing children, or work with another interviewer. You must get permission from the local authority to interview children under 18 who are in care. You will need to get permission from a parent or guardian for children under 16 who are not in care.
Here are some good practice points:
choose a small, friendly place for the interview and allow time to gain the child's confidence
if possible visit them beforehand to explain what you are doing, and tell them if you plan to use a video camera or tape recorder
be careful with your language and check that you are understood
explain that there is no right or wrong answer
give children a chance to ask you questions at the end of the interview.
You can ask children their views directly, or informally, or you can use observation methods. Relaxed, open-ended interviews and group discussions work well. Around four to six children is a good number – enough to get them talking but not so many that they won't get a chance to speak. Interview in the same age groups and in same-sex groups where appropriate.
Think about the different techniques that will be suited to different age groups. Children's drawings can be useful. Ask them to talk about them. When you are asking children questions, it can be helpful to use drawings and symbols, such as sad and happy faces.1
1 Source: McCrum, S and Hughes, L (1998) Interviewing Children: A Guide for Journalists and Others, Second edition, Save the Children, London

Data collected by an independent observer
Observation is useful for collecting information on group behaviour and in situations where people find it hard to talk. It is also useful for assessing the quality and standards of delivery, for example of training, in a residential setting, or of the activities of a day centre. The use of observation can involve project participants in the evaluation, and encourage their involvement in other data collection exercises. On the other hand, the technique runs the risk of bias and subjectivity. You will need to think clearly beforehand about what you are analysing. Reliability of observation depends heavily on the training and skills of those doing the observing. You can reduce this risk if you devise a well-thought-out observation checklist. Where you have more than one observer:
agree with observers in advance how ratings should be applied
check that observers apply the ratings correctly
pilot and run some observations jointly to make sure the approach is standardised.
Setting up the observation instruments can therefore be time consuming. There are other potential difficulties. Participants may feel observation is intrusive and it may cause anxiety. The observation itself may affect the behaviour of those being observed.
You can use a structured or unstructured approach to observation. Unstructured observation is an open, exploratory approach which provides descriptive data about activities, the contexts in which they take place and the people taking part. Even unstructured observation will be led by basic questions about who is involved and how, what is being done, in what way, when and where, and how things change and why. It is also important to take note of things that do not happen, as these may be significant.
In structured observation the researcher conducts the observation within the framework of a number of more precisely predefined categories. These can include the quality of group interaction, effectiveness of certain techniques, or individual responses or behaviours. Techniques include the following:
using a checklist of items to note or tick off
devising a way to count and categorise specific aspects of behaviour and attitude
taking notes against a series of questions, either open or fixed choice, such as: 'Which topic areas cause the most/least discussion?'
Participant observation
This involves observing while participating in activities, such as training courses and workshops, club activities and activity weekends. The advantage of participating is that you will gain experience of the project as a user and put users at their ease. The disadvantage is that it is harder to stay neutral. Take into account that:
gathering, recording and analysing observation data can be costly
there may be concern about the reliability and validity of observational data
if there is more than one observer, you need to check that you are using the same criteria and applying them consistently.
Always let people know when they are going to be watched, and ask their permission. However, be aware that saying this is likely to affect behaviour.
Mystery shopping is an assessment of a service carried out by people posing as users. It is useful as a check on service delivery and to check performance against standards. Depending on the service provided, you may need a specialist to do this. You can do it with the knowledge and consent of those providing the service or without it. When deciding on this, bear in mind the type of service and the likelihood of ill-feeling being caused if people have not been told beforehand and had the value of the method explained to them.
Data collection matrix
A data collection matrix is a helpful tool to plan the methods you will use for each area of enquiry. You can assess which method will give you the richest source of data, how you can triangulate data by collecting it from a number of sources, and how to balance what you ask using each method so as to keep the work manageable. The matrix below shows the methods suggested to collect data for an evaluation of St Bernard's Homes, which provide hostels and shared accommodation for older people.
Areas of enquiry (the rows of the matrix):
Number of older people who have had needs assessments
Level of user satisfaction
Number of people contacting statutory and independent agencies
Extent and type of working relationships with other agencies
Number of older people with Housing Benefit
Total amount of Housing Benefit
Numbers receiving health care
Types of services received
Numbers receiving care packages
Types of care packages
Data collection methods (the columns of the matrix):
Interviews with users
Staff interviews
Telephone interviews with other agencies
Monitoring data
Document review
Case records
Sampling
Whatever method you choose, you may wish to get your data from a sample. By sampling you can collect information on a smaller number of people or a smaller number of project activities rather than from the entire group, or 'population'. Sampling may be necessary either for qualitative or quantitative studies. In drawing up the sample, you need to decide who and what to gather information from, for example:
categories or groups of people
specific individuals
project locations
activities or events.

Random sample
For quantitative studies, some form of random sampling will be a reliable and valid method of sampling. First you need a complete list of the whole population so that every individual or organisation has the same chance of being included in the sample. This allows you to make statistical generalisations about the wider population from what is found in the sample. Try to avoid taking a sample that you know is not typical. The size has to be large enough to cover the range of experiences and characteristics that you want to find out about. For example:
every fourth name on your records
every tenth organisation on your register.
Find a random way of deciding where to start the sample. In the following example, a sample of six is chosen randomly from a population of 18. Every fourth person is selected. When you get to the end of the grid you continue counting from the beginning.
In the example grid, the 18 people in the population are labelled A to R. A random starting point is chosen and every fourth person is then selected, continuing from the beginning of the grid when the end is reached, until a sample of six has been chosen.
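If your records are held electronically, the same selection can be done for you. The short sketch below, in Python, is one way of picking a random starting point and then every fourth person, wrapping round to the start of the list; the names and the interval are illustrative only, not a prescribed method.

```python
import random

# A small sketch of the systematic sample described above: a random starting
# point, then every fourth person, wrapping round to the start of the list.
population = [chr(c) for c in range(ord("A"), ord("R") + 1)]  # 18 people, A to R
interval = 4
sample_size = 6

start = random.randrange(len(population))        # random starting point
sample = [population[(start + i * interval) % len(population)]
          for i in range(sample_size)]
print(sample)
```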
Stratified sample
Sometimes a simple random sample is not the best way of making sure that all the types of people or organisations that you work with are represented. For example, if you only have a few clients who are disabled, random selection may miss them entirely, just by chance. To avoid this, in a sample of 80 you may decide that you want:
25 men aged 19 to 54
25 women aged 19 to 54
as well as:
10 disabled people
10 people under 19
10 people over 54.
This would enable you to get the views of all the important constituent groups, in large enough numbers for the feedback to tell you something about the way that individual groups perceive the service, as well as how the overall user group sees it.
Random stratified sample
In a random stratified sample, people or organisations would be randomly selected within sub-groups. For example, if 70 of your users are women and only 30 are men, you might want to get feedback from women and men in proportion. If you are interviewing 30 people, you would interview 21 women and 9 men. To choose them, divide up your participant list into men and women, and choose, for example, every fourth person on each list until you have the number you want.
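Where the participant list is held electronically, the proportions can be drawn automatically. The sketch below, in Python, is only an illustration of the 70 women and 30 men example above; the user list is invented.

```python
import random

# A small sketch of a random stratified sample drawn in proportion to
# sub-group size, following the 70 women / 30 men example above.
users = ([{"name": f"user{i}", "sex": "female"} for i in range(70)]
         + [{"name": f"user{i}", "sex": "male"} for i in range(70, 100)])

total_interviews = 30
women = [u for u in users if u["sex"] == "female"]
men = [u for u in users if u["sex"] == "male"]

# 70% of 30 interviews is 21 women; 30% is 9 men (the proportions divide exactly here)
sample = (random.sample(women, round(total_interviews * len(women) / len(users)))
          + random.sample(men, round(total_interviews * len(men) / len(users))))
print(len(sample))  # 30
```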
Purposeful sample
With a purposeful sample, you make a deliberate choice of who or what to include, according to specific characteristics. This type of sample enables you to capture a wide range of views, experiences or situations. For example, you might choose to interview five users who have had 'successful' outcomes and five who have not been successful. This allows you to make the most of the information you get from a smaller sample, but you cannot use the data to make generalisations about the population or to describe typical experiences.

A time sample
A time sample will include every case that happens over a set period. For example, you may monitor the number of people coming to a busy service over a week or a fortnight.

Choosing the right sample size
Statisticians have worked out the recommended sample sizes for various populations, for those who wish to make generalisations from the sample and apply them to the whole population. However, for qualitative studies there are no hard and fast rules about your sample size. It does not necessarily have to increase according to the number of users you have. For practical purposes, sample size should be as large as you can afford in terms of time and money. Be guided by your available resources, by how you plan to analyse the data, and the need for credibility. Select the number of individuals or organisations that will give you a wide enough range of experience, and a reasonable number within each range to give you the depth of useful information you need. If you plan to analyse the data by sub-sets, for example according to age or ethnic group, your sample size should be large enough to provide significant numbers in those sub-sets.

Longitudinal studies
A longitudinal study collects data on a number of occasions from the same population over a period of time. If you are unable to collect data from the same group, you may interview another group from the same population, as long as the samples are selected randomly both times to represent the entire population. Use identical data collection instruments, with the same wording, or you will invalidate the findings.
Samples have associated problems. These are shown below.
Problem: Sample bias. This is mainly due to non-response or incomplete answers. For example, completed responses to a survey of older people may not reflect the true age range of project users. This may be because very old or more vulnerable users were less likely to complete the questionnaire and their views are therefore less likely to be represented.
Action: Make repeated attempts to reach non-respondents. Describe any differences between those participants who replied and those who did not.
Problem: Sampling error. Here you run the risk that if another sample was drawn, different results might be obtained.
Action: Select as large a sample size as is practicable within your resources.
Data analysis
The quality of your evaluation will depend to a large extent on the quality of your data analysis. Data analysis is about organising your evidence in a logical, well-argued way in order to make sense of it. You may need advice when you design your evaluation to make sure that the type and amount of data you propose to collect is capable of being analysed. Data analysis should be done throughout your evaluation process, not just at the end.
Quantitative data analysis
Data collected using quantitative methods can be analysed statistically for patterns, for example:
percentages
averages
ratios
range between lowest and highest levels
trends
rates.
Where large amounts of quantitative data are collected, the calculations need to be done by people trained in statistical analysis. Calculations aim to show you whether or not your results are statistically significant rather than just due to chance. A statistical analysis can tell you whether the differences shown by a sample could be repeated without having to repeat the study or choose a different sample. Correlation, t-test, chi-square and variance analysis are among the most frequently used statistical tests.
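If someone in your organisation can use a statistical package or a little programming, a test such as chi-square can be run in a few lines. The sketch below is an illustration only: it assumes the freely available Python library scipy is installed, and the figures are invented rather than drawn from a real evaluation.

```python
# A small sketch of a chi-square test on a 2 x 2 cross tabulation,
# assuming the scipy library is installed. The figures are invented.
from scipy.stats import chi2_contingency

# Rows: attended the course / did not attend; columns: satisfied / not satisfied
observed = [[45, 15],
            [30, 30]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A p value below 0.05 is conventionally read as statistically significant.
```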
Before you collect large amounts of quantitative data, think how it is going to be stored and analysed. You will need a computer database if you have a large number of questionnaires asking for a lot of information. Remember that monitoring forms should have a client code or name if you want to be able to compare two or more forms relating to the same client. If you cannot handle the data in your project, you may be able to get the coding and analysis done by a student or a research firm. First check your data for responses that may be out of line or unlikely, such as percentages that are numerically incorrect. You may need to exclude these from the data to be analysed. The process of data analysis can be a useful way to check the accuracy of the data collected and can identify unusual or unexpected results.
Computer analysis
Most common spreadsheet software has a range of useful statistical functions for analysing data. There are also specialised statistical programs. Understand the computer program you use, because this will influence the design of your questionnaire and coding. When using a computer spreadsheet or statistics package, you should regularly check the accuracy of the data entry because of potential user error. Qualitative data can also be stored in the computer. It is often helpful to analyse qualitative data numerically. A simple analysis of questionnaire or interview data should suggest some main categories or types of response. You can then add up the number of responses you have in these different categories.
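As an illustration of this kind of simple counting, the short sketch below reads questionnaire returns from a spreadsheet saved as a CSV file and counts the answers to one closed question. It is written in Python; the file name and column name are invented for the example.

```python
import csv
from collections import Counter

# A small sketch: read questionnaire returns exported from a spreadsheet as
# CSV and count the answers to one closed question. The file name and the
# column name "overall_satisfaction" are illustrative assumptions.
with open("questionnaires.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print(f"{len(rows)} completed questionnaires")

satisfaction = Counter(row["overall_satisfaction"] for row in rows)
for answer, count in satisfaction.most_common():
    print(f"{answer}: {count} ({count / len(rows):.0%})")
```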
Coding
Coding means using a number, or a few key words, to summarise a lot of textual material. Categories are used to classify the data so they can be processed by the computer. There are two main approaches to coding responses:
Pre-planned – respondents reply to fixed choice questions with pre-set categories. Additional coding is not necessary.
Post-analysed – this will be necessary for open-ended answers and comments. It allows the researchers to set codes from the full range of responses.
Codes should include 'don't know', 'no response' and 'not applicable' responses. Piloting evaluation tools can help to show whether questionnaires cover all possible responses to each question. Many evaluation questions can be answered through the use of descriptive statistics. For example:
How many cases fall into a given category – their frequency.
What is the average, the mean or the median – the central tendency.

Frequency
Once coded, the frequency of each response can be worked out. Frequency can be shown in number totals or percentages. Set out the findings clearly in a table or chart. It is good practice in tables to show the actual numbers (the frequencies) as well as the percentages.
Example of grouping data: if, for Thursbury Community Theatre Project, one of the evaluation questions was: 'What was the age range of young people participating in after-school and holiday schemes?' you could ask the computer to group data into a table, for example:

Age range of young people attending after-school and holiday schemes over the year
Age range      Number of young people    Percentage of total %
Under 10       21                        15.3
10-12          16                        11.7
13-16          56                        40.9
Over 16        44                        32.1
Total number   137                       100
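If ages have been recorded individually, the grouping and percentages in a table like this can be produced automatically. The sketch below, in Python with invented ages, illustrates the same kind of grouping; it is not drawn from the Thursbury project's data.

```python
from collections import Counter

# A small sketch: group recorded ages into the bands used in the table above
# and report frequencies and percentages. The ages are invented.
ages = [8, 9, 11, 12, 14, 15, 15, 16, 17, 17, 18, 13, 10, 16, 14]

def band(age):
    if age < 10:
        return "Under 10"
    if age <= 12:
        return "10-12"
    if age <= 16:
        return "13-16"
    return "Over 16"

counts = Counter(band(a) for a in ages)
total = sum(counts.values())
for label in ["Under 10", "10-12", "13-16", "Over 16"]:
    n = counts.get(label, 0)
    print(f"{label}: {n} ({n / total:.1%})")
```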
Central tendency
The mean, or arithmetical average, involves adding up the data and dividing by the number of participants. The median is the point at which half the cases fall below and half above the sample. The mode is the category with the largest number of cases.
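As a small illustration, the three measures can be calculated with Python's built-in statistics module; the course ratings below are invented.

```python
import statistics

# A small sketch of the three measures of central tendency described above,
# using invented course ratings (1 = very poor ... 5 = very good).
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

print("mean:", statistics.mean(ratings))      # arithmetical average
print("median:", statistics.median(ratings))  # middle value
print("mode:", statistics.mode(ratings))      # most common value
```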
The mean may not provide a good illustration of responses where these have been widely distributed. You may need to be more specific. For example, '65% of participants found the course good or very good, 15% found it average, and 20% said that it was poor or very poor' will be more helpful than presenting the average response. The mode is useful for describing the most commonly occurring non-numerical data.
Remember to make the response rate clear in your analysis. Whenever you can, record the basic characteristics of those declining to participate in, for example, a questionnaire, say by age or sex. Make it clear how the profile of those responding relates to the wider population you are researching.

Cross tabulation and sub-group analysis
Cross tabulation examines findings in greater detail. If the Community Theatre Project wanted to examine the length of participation in a holiday scheme by age, the computer could show this. The table below demonstrates this.
Length of participation in the summer holiday scheme by age
Age range    2 days or under    3-5 days    6-10 days    11-15 days    Total
Under 10     -                  1           -            2             3
10-12        1                  2           1            2             6
13-16        3                  1           6            8             18
Over 16      -                  -           1            7             8
Total        4                  4           8            19            35
In this case, while the cross tabulation allows a more detailed description of the summer holiday scheme, it is not possible to draw any significant conclusions about the length of participation according to age. You need at least 20 cases in each sub-group for differences to be judged significant, that is, to judge that any difference shown is unlikely to have occurred by chance. Even then, you must recognise that there may be several interpretations or reasons for the apparent relationship.
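Where records are held electronically, a cross tabulation like the one above can be produced directly. The sketch below is an illustration only: it assumes the freely available pandas library is installed and uses a handful of invented records.

```python
import pandas as pd

# A small sketch of a cross tabulation, assuming the pandas library is
# installed. The records are invented, not real scheme data.
records = pd.DataFrame({
    "age_range": ["13-16", "Over 16", "13-16", "10-12", "Under 10", "13-16"],
    "days_attended": ["11-15", "11-15", "6-10", "3-5", "3-5", "2 or under"],
})

table = pd.crosstab(records["age_range"], records["days_attended"], margins=True)
print(table)
```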
Qualitative data analysis
Analysing by themes
If you have carried out interviews or used survey questionnaires with open-ended questions, or you have observation data, you will need to establish what the data is telling you. The most common way of doing this is to analyse it by themes and patterns in your respondents' perceptions, attitudes and experiences. You may:
already have established themes against which you will analyse the data, that is, you have a theory or theories which you wish to check to see if there is illustrative data
allow themes to emerge from your data and then categorise your notes into recurring themes that seem relevant to your evaluation questions.
Use a highlighter pen, a simple coding, or 'cut and paste' to bring together data that relates to particular themes. Then draw out similarities or differences. What you are doing is categorising your notes into recurring topics that seem relevant to your evaluation questions. Track recurring words or comments. Go through responses to one specific question in a questionnaire and list all the phrases that say the same thing.
For a questionnaire:
for each question, read through at least 15% of the questionnaires, writing down each different type of response
when no new responses are found, finalise the code list against each separate category of response
repeat this for each open-ended question.
Coding qualitative data also allows you to quantify your qualitative results because, once your questions are coded, you can count how many respondents said the same things. When you write your report, you can illustrate the differences among and within each category of response by providing more detail of responses and some quotations to illustrate them. There are a number of software packages that can help you carry out this coding and grouping of qualitative data, and help you save time in your analysis.
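As a small illustration of counting coded responses, the sketch below, in Python with invented codes, tallies how many respondents were given each theme code.

```python
from collections import Counter

# A small sketch: once open-ended answers have been coded against a theme
# list, the codes can be counted. The codes and answers below are invented.
coded_answers = [
    ["friendly staff", "convenient times"],
    ["friendly staff"],
    ["better information needed"],
    ["convenient times", "friendly staff"],
]

theme_counts = Counter(code for answer in coded_answers for code in answer)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} respondents")
```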
data collection tools
Introduction
This section gives some examples of data collection tools and reporting formats. However, the tools you use to collect your data will be highly individual to your particular project, to the information that you need and your specific informants. Here are the examples we are using.
Tool 1 Quarterly monitoring report
Tool 2 Outline evaluation report
Tool 3 Interview topic guide
Tool 4 User satisfaction form
Tool 5 Intermediate outcome monitoring form
Tool 6 Individual assessment tool
Tool 7 Outcome assessment: participant self-report
Tool 8 Telephone interview guide
Tool 9 User interview schedule
Tool 10 Participatory learning and action (PLA) tools
Most of these tools have already been used by voluntary organisations. It is always useful to learn and build from formats that other projects have used. Most people will be happy to share what they have used successfully. It will be particularly helpful to contact organisations that work with specific groups, such as people with physical disabilities or learning difficulties. They may be able to advise you on points to consider, or incorporate, in order to encourage participation and feedback to the evaluation.
tool 1 quarterly monitoring report
The following is an example of a summary monitoring report for trustees and staff, providing information against targets.
Services delivered by Thursbury Community Theatre Project: 1 January to 30 September

                                        This quarter   Cumulative   Annual target   % of target met
Number of performances
  Schools                               1              9            15              60%
  Community centres                     7              16           20              80%
  Other venues                          1              2            5               40%
Total number of organisations           4              15           25              60%
Total attendance                        102            533          1200            44.4%
Number of classroom workshops
  Half-day                              2              12           20              60%
  One-day                               1              5            10              50%
  Two-day                               -              2            5               40%
Total number of schools                 3              11           15              73.3%
Total attendance                        43             321          700             45.9%
Number of holiday schemes               1              2            3               66.7%
Total attendance                        28             42           75              56%
Number of theatre skills courses        1              1            2               50%
Total attendance (teachers)             12             12           10              120%
Total attendance (community workers)    -              -            10              0%

Key: each figure is flagged as positive, on target or negative.
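The final column is simply the cumulative figure divided by the annual target. As an illustration only, the short sketch below, in Python, reproduces the calculation for three of the rows above.

```python
# A small sketch of the 'per cent of target met' column in the report above:
# the cumulative figure divided by the annual target.
rows = [
    ("Total number of organisations", 15, 25),
    ("Total attendance (performances)", 533, 1200),
    ("Number of holiday schemes", 2, 3),
]

for label, cumulative, annual_target in rows:
    print(f"{label}: {cumulative / annual_target:.1%} of target met")
```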
Other performance targets and indicators To date, 67% of all performance costs have been met through sponsorship and charges (target 60%). 28% of the attendance at holiday schemes has been drawn from black and ethnic minority groups (target 30%). Three performances were with special needs groups (target two). Eight of the performances in schools and community venues have drawn attendance from the Eastleigh and Freemantle Estates, identified as estates with a high level of social problems and lying within our targeted wards. The holiday schemes served young people mainly from those two estates. Twenty five of the performances focused on the three areas targeted: bullying, sexual health and drugs. We held another two performances on the theme of family relationships.
tool 2 outline evaluation report
The report would not necessarily include all sections.

Preliminaries
Front cover: Short title, project name, date, author. Contact details on inside front cover.
List of contents: Also list appendices and tables.
Abbreviations and glossary: Present in full any abbreviations or acronyms used in the text. Glossary of terms used.
Executive summary: Provides an overview of the evaluation, its findings and implications.

The main document
Introduction to the evaluation: Brief description of the evaluation, the evaluation questions and limits to the study. Methods, data collection tools, respondent groups (for example, staff, trustees), sampling techniques, analysis.
Report overview: What is covered in each section.
Project description: Service aims, objectives, values, outputs and expected outcomes, structure, resources, location, timescale. Target users, user profile, numbers and patterns of use.
Results and findings: Usually organised in terms of the evaluation questions. Do not feel obliged to report on everything – just what is necessary. It is helpful to have a summary of the findings at the end of this section.
Conclusions: An assessment of the value of the project. Way in which the results shed light on the evaluation questions. Be explicit about any assumptions you have made. Consider alternative explanations to the conclusions drawn.
Recommendations: Statement of key recommendations for future action that would improve the project and the outcomes for users. Base any recommendations on findings. Ensure recommendations are realistic and within the control of evaluation users.

Ending sections
Notes: Supplementary information relating to the main text.
References: All texts and publications referred to in the report. Make consistent use of a recognised system.
Appendices: Anything which is relevant to the report, but not essential to the main text, such as the evaluator's brief, questionnaires, graphs, charts and detailed statistics.
tool 3 interview topic guide
The following example is for an interview with a project officer in the course of an evaluation of the implementation of a mentoring project. A topic guide outlines areas for interview, allowing a flexible questioning approach.
1 Introduction:
personal introductions and explanation of the evaluation process
length of interview and outline of topics to be covered
assurances about confidentiality and anonymity.
2 Outline of roles and responsibilities in the project.
3 Start date in post, and overview of tasks and workload since started.
4 Management support and working relationships with other project staff.
5 First mentor recruitment: methods, successes and difficulties.
6 Recruitment of young people and liaison with schools: process of building relationships; types of relationships with different schools.
7 First mentor/mentee group: most successful aspects; least successful aspects.
8 Drop out – mentors: profile and reasons for drop out.
9 Drop out – young people: profile and reasons for drop out.
10 Lessons learnt from first mentor/mentee group; things that will be done differently.
11 Most important issues: project strengths and weaknesses emerging during the start-up of the project.
12 End of interview:
contact details for any follow-up information
thanks.
tool 4 user satisfaction form
Training course evaluation form
We would be grateful if you would fill in this form so that we can monitor, evaluate and improve our training.
Course
Date
Trainer
Venue
1 Please show whether you agree or disagree with these statements about how the training was delivered (tick one box only for each statement).
Response options: Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree
The course aims were well met
The course met my own expectations
The course was pitched at the right level for me
The presentations were clear and easy to understand
The group work was useful
The trainer(s) responded well to questions
The handouts are a good record of course content
Overall, the course is good value for money
Please write any comments explaining your responses to the above statements
Please continue on the next page
2 Please show whether you agree or disagree with these statements about how the training may help you (tick one box only for each statement).
Response options: Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree
As a result of this course, I feel that I:
have further developed my knowledge and understanding of monitoring and evaluation
am more confident about implementing monitoring and evaluation within my organisation
will be able to help improve the quality of my organisation's service delivery
will be able to help my organisation be more effective in meeting users' needs
Please write any comments explaining your responses to the above statements
3 Please make any other comments or suggestions about the course
Thank you very much for your time and co-operation
tool 5 intermediate outcome monitoring form
Peer education training course: trainers' participant monitoring form
Trainers should complete three forms for each participant: one at the end of the first day, one at the end of the first semester, and one at the end of the course (before the placement).
1 Skills
Participant ID number
Rate the participant on the following, ticking one box only for each statement, and giving evidence to support your rating.
Rating options: Very good / Good / Satisfactory / Not very good / They really need to work on this
Please explain or give evidence for your answer.
The participant's ability to:
listen to others
communicate with others
use appropriate body language
summarise what people say
discuss difficult/sensitive topics
work with different types of people
plan presentations
deliver presentations
facilitate a group
2 Confidence
Rate the participant on the following, ticking one box only for each statement, and giving evidence to support your rating.
Rating options: Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree
Please explain or give evidence for your answer.
The participant:
is confident of their skills in working with people
is able to describe all of their skills
is aware of a range of career opportunities available to them
is self-confident
feels good about themselves/has high self-esteem
3 Support
Dates of any one-to-one support with participant
4 Comments
Any other comments. Where relevant, please note any other changes you have observed in the participant's skills or self-image.
tool 6 individual assessment tool
The following horticultural therapy assessment tools are used, together with an individual person plan, to help clients identify their needs and how best they can be met. If a client's aim is to get a job, the type of job and skills required should be identified. The initial assessment provides a baseline from which progress can be assessed periodically. The assessment tool therefore can also serve as an individual outcome monitoring tool. It is important to note, however, that the effectiveness of the project does not necessarily relate to an increase in individual skills or work habits. Maintenance of mobility alone may be an indicator of success for some clients. Individual outcome information therefore needs careful interpretation.

Table 1 Continuous assessment: work habits
Date
a Punctuality
b Appearance
c Personal hygiene
d Manners
e Attitude
f Interest
g Effort
h Aptitude
i Communication
j Co-operation
k Integration
A five-point scale can be used to complete this assessment as follows:
1 Unacceptable level
2 Shows some competence
3 Below average standard for open employment
4 Approaches average standards for open employment
5 Acceptable standard for open employment
Table 2 Continuous assessment: planting
Date
a Selects correct plant material
b Selects correct number of plants
c Selects correct tools
d Transports a), b) and c) to planting area
e Digs or dibs correct size of hole
f Removes plant from container
g Handles plant appropriately
h Places plant in hole
i Holds plant at correct level
j Fills in soil
k Firms in soil
l Judges/measures planting distance
m Waters plant

The following scoring can be used to complete this assessment:
1 Has attempted the task
2 Can complete the task with physical assistance
3 Can complete the task with verbal assistance
4 Can complete the task unaided
Extracted and adapted from an assessment briefing paper in Growth Point , Winter 1994, Number 208. Thrive, The Geoffrey Udall Centre, Beech Hill, Reading RG7 2AT.
tool 7 outcome assessment: participant self-report
The 'who-are-you?' quiz
We are all better at some things than others – tick the boxes and see how you score.
Score: 1 Very good, 2 Good, 3 OK, 4 Could be better, 5 Need to work on this
The form has a row for each of the 14 questions (listed in the cross-reference below), with columns for the score, a smiley face and a comment.
Cross reference personal and social skills to questions
This self-assessment tool can be used at different points in time. It is based on the personal and social skills categories used by Fairbridge, a service working with disaffected young people, in its staff training and course development. Two further areas – 'Taking responsibility' and 'Facing up to consequences' – were introduced during an evaluation.
Questions and the Fairbridge/CES personal and social skills area they relate to:

1 How good am I at letting other people know what I mean?
2 How good am I at understanding what other people are saying to me?
Communication

3 How good am I at getting on with people?
4 How good am I at making and keeping friends?
Establishing interpersonal relationships

5 How good am I at keeping my feelings under control?
Managing feelings

6 How good am I at understanding why I like some people and not others?
Understanding and identifying with others

7 How good am I at understanding that different people have different ways of thinking?
Understanding social values

8 How good am I at sorting problems out?
Problem solving

9 How good am I at understanding other people's point of view?
10 How good am I at give and take, or compromise?
Negotiating

11 How good am I at thinking and planning ahead?
Planning

12 How good am I at learning from my successes and mistakes?
Reviewing

13 How good am I at accepting my share of the blame when things go wrong?
Taking responsibility

14 How good am I at thinking through what will happen to me and other people before I do something?
Facing up to consequences
tool 8 telephone interview guide
Note: If the named person is not available, find a time to call back. Do not explain the reason for the call to a third party, or ask people to call back.
Once the named person is on the phone, introduce the evaluation: 'My name is Grace Kamwa. I am working on behalf of the National Advice and Information Service to evaluate its service. I am calling because you kindly agreed to allow us to contact you by telephone for feedback on the service you received when you contacted the advice line last month.'
Check that the time is convenient or offer to call back at a more convenient time. Reassure them that all information is anonymous and confidential.

1 Firstly, was the advice line the first place you contacted with this query?
Yes / No / Don't know/can't remember

2 If you did try other places first, which were they?

3 Did you contact the advice line on your own behalf or for somebody else?
Own query / For a friend/relative / In a professional capacity / Other (what?)

4 What prompted you to contact the advice line? (if not already given)

5 How do you feel the advice worker dealt with your enquiry? How polite were they?
Very polite / Quite polite / Not very polite / Rude / Don't know/not sure

6 How well did they understand your situation?
Very well / Quite well / Not very well / Not well at all / Unsure

7 How clear was the advice or information the advice worker gave you?
Very clear / Quite clear / Not very clear / Very unclear / Unsure

8 How useful was the advice or information given to you by the advice line?
Very useful / Quite useful / Not useful / Unsure
If not useful, go to Q9. If quite useful, go to Q10. If very useful or not sure, go to Q11.

9 So you feel that the information given to you was not useful. Why was this? Tick the following from responses given:
Your questions were not answered / You did not understand the advice or information / It was not relevant / The advice line was not the appropriate organisation to help you / It was not accurate (if 'yes' to this, how do you know this?)

10 So you feel that the information given to you was quite useful. Is there any way in which it could have been more useful?

11 Overall, have you been able to act on any of the information given? If so, what and how? Has anything changed with regard to [query] since you contacted the advice line? Record details of response.
Continue to probe for outcomes and examples and whether these are attributable to adviser input or written information, and complete the following from responses:
Solved completely / Know where to go / Reassured / Other

12 Would you recommend the advice line to a friend?
Yes / No / Unsure
If 'no' or 'unsure', why not?

Thank you very much for all your comments. They will be combined with other responses and used by the advice line to continue to improve its advice and information services.
If the respondent specifically asks for a summary of the findings, this can be made available. Keep a record of those requesting one.
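Where responses are later entered into a spreadsheet or database, the routing at question 8 needs to be preserved so that questions 9 and 10 are only completed for the relevant respondents. The short fragment below is a purely hypothetical illustration of that routing rule, written in Python; the function name and response codes are assumptions and are not part of the interview guide itself.

# Hypothetical sketch of the question 8 routing rule; the response wording
# follows the guide, but the function and codes are illustrative only.
def next_question(q8_response):
    """Return the number of the question to ask after question 8."""
    if q8_response == "Not useful":
        return 9   # probe why the information was not useful
    if q8_response == "Quite useful":
        return 10  # ask how it could have been more useful
    return 11      # 'Very useful' or 'Unsure': go straight to outcomes

print(next_question("Quite useful"))  # prints 10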
tool 9 user interview schedule
This is an extract from an interview schedule used by People First, an organisation run by and for people with learning difficulties, as part of their user-led evaluation. Cards with sad and happy faces were used; these were helpful, but did not always work. A book was also used with large prints of the pictures.
7 Getting on with staff/support workers

12 Did you meet the staff before they came to work here?
Yes / No / Don't know

13 Did you take part in interviewing the staff?
Yes / No / Don't know

14 How often would you like the staff to be here?

15 Do you get on with the staff?
Yes / No / Don't know

16 How do staff help you? (eg: learning to cook; personal hygiene; talking about problems)

17 If you think about the staff when you lived in hospital…
Is it the same here?
Is it better?
Is it worse?
tool 10 participatory learning and action (PLA) tools
Card sorting
Three headings are used: 'Things we liked', 'Things that were OK' and 'Things we didn't like'.
Evaluation wheel
An example wheel, 'Evaluation of adventure weekend': participants are asked to tick the things they liked. Its segments are camp leaders, cooking, survival training, canoeing, making friends and camping.
Graffiti wall
Large sheets of paper are hung on the wall and participants are invited to write comments on them during the activity or event.
Timeline
Example: 'Kate's Story' – a timeline charting positive change over time.
further reading
Annabel Jackson Associates (2004) Evaluation Toolkit for Voluntary and Community Arts in Northern Ireland, Annabel Jackson Associates, Bath.
Burns, S and Cupitt, S (2003) Managing Outcomes: a Guide for Homelessness Organisations, Charities Evaluation Services, London.
Burns, S and MacKeith, J (2006) Explaining the Difference Your Project Makes: A BIG Guide to Using an Outcomes Approach, Big Lottery Fund, London.
Charities Evaluation Services: Evaluation Discussion Papers.
1 The Purpose of Evaluation (1998)
2 Different Ways of Seeing Evaluation (1998)
3 Self-evaluation (1998)
4 Involving Users in Evaluation (1998)
5 Performance Indicators: Use and Misuse (1998)
6 Using Evaluation to Explore Policy (1998)
7 Outcome Monitoring (2000)
8 Benchmarking in the Voluntary Sector (2003)
9 Assessing Impact (2005)
Charities Evaluation Services (2008) PQASSO (Practical Quality Assurance System for Small Organisations), Third edition, London.
Clarke, A and Dawson, R (1999) Evaluation Research: An Introduction to Principles, Methods and Practice, SAGE, London.
Coe, J and Mayne, R (2008) Is Your Campaign Making a Difference?, NCVO, London.
Connell, JP and Kubisch, AC (1998) 'Applying a Theory of Change Approach' in Fulbright Anderson, K, Kubisch, AP and Connell, JP (eds), New Approaches to Evaluating Community Initiatives, Vol 2: Theory, Measurement and Analysis, The Aspen Institute, Washington DC.
Cracknell, BE (2000) Evaluating Development Aid: Issues, Problems and Solutions, SAGE, London.
Cupitt, S and Ellis, J (2007) Your Project and its Outcomes, Charities Evaluation Services, London.
Davey, S, Parkinson, D and Wadia, A (2008) Using IT to Improve your Monitoring and Evaluation, Performance Hub, Charities Evaluation Services, London.
Dewson, S, Eccles, J and Tackey, ND (2000) Guide to Measuring Soft Outcomes and Distance Travelled, Institute of Employment Studies, Brighton.
Dixon, L and Aylward, N (2006) Youth Arts in Practice: A Guide to Evaluation, National Institute of Adult Continuing Education, Leicester.
Ellis, J (2008) Accountability and Learning: Developing Monitoring and Evaluation in the Third Sector, Charities Evaluation Services, London. Available to download at www.ces-vol.org.uk
Ellis, J, Gregory, T and Wadia, A (2009) Monitoring and Evaluation Resource Guide, Charities Evaluation Services, London. Available to download at www.ces-vol.org.uk
Feuerstein, M-T (1986) Partners in Evaluation: Evaluating Development and Community Programmes with Participants, Macmillan, London.
Gosling, L and Edwards, M (2003) Toolkits: A Practical Guide to Monitoring, Evaluation and Impact Assessment, Save the Children, London.
Guba, EC and Lincoln, YS (1981) Effective Evaluation, Jossey Bass, San Francisco.
Hatry, HP, Cowan, J and Hendricks, M (2004) Analysing Outcomes Information: Getting the Most from Data, The Urban Institute, Washington DC.
Hatry, HP and Lampkin, L (eds) (2001) Outcome Management in Nonprofit Organisations, The Urban Institute, Washington DC.
Kreuger, RA (1994) Focus Groups: A Practical Guide for Applied Research, Second edition, SAGE, London.
Kumar, R (1999) Research Methodology: A Step-by-Step Guide for Beginners, SAGE Publications, London.
Latif, S (2008) Becoming More Effective: An Introduction to Monitoring and Evaluation for Refugee Organisations, Charities Evaluation Services and the Refugee Council, London.
Marsden, D and Oakley, P (1990) Evaluating Social Development Projects, Development Guidelines No 5, Oxfam UK, Oxford.
McCrum, S and Hughes, L (1998) Interviewing Children: A Guide for Journalists and Others, Second edition, Save the Children, London.
McKie, L, Barlow, J and Gaunt-Richardson, P (2002) The Evaluation Journey – an Evaluation Resource for Community Groups, Action on Smoking and Health Scotland, Edinburgh.
Mebrahtu, E, Pratt, B and Lönnquist, L (2007) Rethinking Monitoring and Evaluation: Challenges and Prospects in the Changing Global Aid Environment, INTRAC, Oxford.
new economics foundation (2000) Prove It! Measuring the Effect of Neighbourhood Renewal on Local People, new economics foundation, London.
Oppenheim, AN (1992) Questionnaire Design and Attitude Measurement, Pinter, London.
Palmer, D (1998) Monitoring and Evaluation: A Practical Guide for Grant-making Trusts, Association of Charitable Foundations, London.
Parkinson, D and Wadia, A (2008) Keeping on Track: A Guide to Setting and Using Indicators, Performance Hub, Charities Evaluation Services, London.
Patton, MQ (2008) Utilization-focused Evaluation, Fourth edition, SAGE, California.
Paul Hamlyn Foundation (2007) Evaluation Resource Pack, London. Available to download at www.phf.org.uk
Phillips, C, Palfrey, C and Thomas, P (1994) Evaluating Health and Social Care, Macmillan, Basingstoke.
Robson, C (2000) Small-scale Evaluation: Principles and Practice, SAGE, London.
Roche, C (1999) Impact Assessment for Development Agencies, Oxfam Publishing, Oxford.
Rossi, PH, Lipsey, MW and Freeman, HE (2004) Evaluation: A Systematic Approach, Seventh edition, SAGE, California.
Sanfilippo, L (2005) Proving and Improving: A Quality and Impact Toolkit for Social Enterprise, new economics foundation, London.
Scriven, M (1991) Evaluation Thesaurus, Fourth edition, SAGE, California.
Sudman, S and Bradburn, N (1982) Asking Questions: A Practical Guide to Questionnaire Design, Jossey-Bass, San Francisco.
Sue, VM and Ritter, LA (2007) Conducting Online Surveys, SAGE Publications, California.
Taylor, D and Balloch, S (eds) (2005) The Politics of Evaluation: Participation and Policy Implementation, The Policy Press, University of Bristol.
Taylor, M, Purdue, D, Wilson, M and Wilde, P (2005) Evaluating Community Projects: A Practical Guide, Joseph Rowntree Foundation, York.
Valley of the Sun United Way (2006) The Logic Model Handbook, Valley of the Sun United Way, USA.
Van Der Eyken, W (1999) Managing Evaluation, Second edition, Charities Evaluation Services, London.
Wilkinson, D (ed) (2000) The Researcher's Toolkit: The Complete Guide to Practitioner Research, RoutledgeFalmer, New York.
York, P (2005) A Funder's Guide to Evaluation: Leveraging Evaluation to Improve Non-profit Effectiveness, Fieldstone Alliance and Grantmakers for Effective Organisations, USA.

PLA contacts
Hull and East Yorkshire Participatory Learning and Action Network, c/o Community Development Company, The Community Enterprise Centre, Cottingham Road, Hull HU5 2DH, 01482 441002
Institute of Development Studies, University of Sussex, Brighton BN1 9RE, 01723 606 261
glossary
There are some technical terms that are difficult to avoid because of their wide use in voluntary sector management today and, more particularly, within the context of monitoring and evaluation. These are explained below.
A
Accountability: how much individuals or groups are held directly responsible for something, such as spending or activities.
Accuracy: the extent to which data and an evaluation are truthful or valid.
Achievement: performance by a project or programme demonstrated by some type of assessment or testing.
Activities: this usually means the main things your organisation does, often the services it provides.
Aim: an aim tells everyone why the organisation exists and the difference it wants to make.
Anonymity: action to make sure that the subjects of the study or report cannot be identified.
Assessment: judgements about the organisation's performance.
Attitude: settled way of thinking or behaviour.
Audience(s): individuals, groups or organisations that will or should read or hear of the evaluation.
Auditing: checking that certain standards are met and controls are in place; this may be done internally or by an outside agency.

B
Baseline data: facts about the characteristics of a target group, population and its context, before the start of a project or programme.
Benchmark: comparison of activities, processes or results with those already achieved by your organisation or by another organisation.
Bias: undue influence causing a particular leaning towards one view.
Budget: an estimate of future income and spending.
Business plan: a detailed plan showing how resources will be managed to achieve the strategic plan.

C
Case study: an intensive, detailed description and analysis of a single unit, such as a person, project or programme within a particular context.
Client: person or group of people receiving a particular service or collection of services.
Coding: translation of a given set of data or items into categories.
Collate: gather together in an ordered and useful way.
Comparison group: a group not receiving project benefits which is studied to establish more conclusively whether any change observed in the project participants is due to the project.
Conclusions (of an evaluation): final judgements based on analysis and interpretation of data.
Control group: a group not receiving the project benefits, but matching in all other respects the group being 'treated', used for comparison purposes (see Comparison group).
Correlation: a statistical measure of the degree of relationship between variables.
Cost-benefit analysis: estimate of the overall cost and benefit of each alternative product or programme.
Cost-effectiveness analysis: determines the results of projects and programmes in non-monetary terms against the cost.
Criterion, criteria: standard(s) against which judgement is made.

D
Data: information gathered for a specific purpose and linked to evaluation questions.
Design (of evaluation): sets out approaches, methods, data sources and timetable.
Dissemination: communication of information to specific audiences.
E
Effective: having the results or effect you want; producing the intended benefits.
Efficient: producing the intended results with the minimum necessary resources.
Evaluation: involves using monitoring and other information to make judgements on how an organisation, project or programme is doing. Evaluation can be done externally or internally.
Executive summary: a non-technical summary report which provides a short overview of the full-length evaluation report.

F
Facilitator: someone who brings together and focuses a discussion, encouraging participation by group members.
Feedback: presenting findings to people involved in the subject in a way that encourages further discussion and use.
Focus group: interview with a small group focusing on a specific topic.
Formative evaluation: evaluation designed and used to improve a project, especially when it is still being developed.

I
Impact: the effect of a service on wider society, beyond its direct users. This can include affecting policy decisions at government level.
Impact evaluation: evaluation of the longer-term effects of the project, relating to overall purpose.
Implementation evaluation: assessment of programme or project delivery.
Indicators: see Performance indicators.
Informant: person providing information directly or indirectly.
Informed consent: agreement given, before an evaluation, to take part or to the use of names and/or confidential information, in the light of known possible consequences.
Inputs: resources and activities which are used in the organisation to create the services offered.
Instrument: tool used for assessment or measurement.
Integrated: built into and part of a process or system.
Intermediate outcomes: outcomes achieved in the short term, but linking to longer-term outcomes.
Internal evaluation: evaluation carried out by the staff of the project being evaluated.
Interpretation: understanding what the data means in relation to evaluation questions.
Intervention: service or activity intended to change the circumstances of an individual, group, or physical environment or structure.
Interview: directed face-to-face discussion between two or more people, which aims to find out the answers to questions.
Interview schedule: list of questions or topics to be covered during an interview.
L
Longitudinal study: a study over a substantial period of time to discover changes.
M
Management: the people responsible for the organisation; the techniques they use to run the organisation.
Matrix: an arrangement of rows and columns used to display information.
Mean: obtained by adding all scores and dividing by the total number.
Measurement: finding out the extent or quantity of something.
Median: the point in a distribution which divides the group into two, as nearly as possible.
Methodology: details of how the evaluation is carried out.
Mission statement: a short statement of the overall aim or purpose of the organisation, usually concentrating on the difference it wants to make and defining the values that it will work by.
Mode: the value that occurs more often than any other.
Monitoring: routine and systematic collection and recording of information.
N
Needs assessment: identification of the extent and types of existing problems, services available and unmet needs.
O
Objectives: the practical steps the organisation will take to accomplish its aims.
Observation: direct examination and noting of processes, events, relationships and behaviours.
Operational plan: the same as a year plan (see Year plan).
Outcome evaluation: evaluation of the intended and unintended effects of a project or programme.
Outcomes: the changes or benefits resulting from services and activities.
Outputs: what the organisation does; the services it delivers.
P
Participatory evaluation: actively involving stakeholders in the evaluation process.
Partnership: an arrangement between organisations for joint action.
Performance indicators: well-defined, easily measurable information, which shows how well the organisation is performing.
PEST analysis: analysis of the political, economic, sociological and technical issues affecting the project.
Pilot test: a brief and simplified preliminary study to try out evaluation methods.
Plan: a written description of the steps the organisation will take to achieve certain things.
Policy: a clear statement of intent about how an organisation will behave over certain issues.
Population: all people in a particular group.
Primary data: data collected specifically for the evaluation study.
Procedure: a written, up-to-date statement of how things are done, easily available to those who need to know.
Process: the method, or step-by-step description, of how a task or activity is to be done.
Process evaluation: evaluation of how the project works, that is, its processes and its activities or outputs.
Profile: the characteristics of a group of people or an organisation.
Programme: where a number of projects are funded or operate within the framework of a common overall aim.
Progress evaluation: evaluation of the implementation of a project or programme in relation to aims and objectives.
Project: a major task involving several activities and resources, which may need special attention and control (note: some organisations are called 'projects').
Prompt: reminder used by an interviewer to obtain complete answers.
Proxy indicators: things you measure or assess that will indicate changes that cannot be measured directly.
Purpose: the reason for the existence of an organisation or an activity; the changes it hopes to achieve.

Q
Qualitative evaluation: evaluation or part of an evaluation that is primarily descriptive and interpretative.
Qualitative information: see Qualitative evaluation.
Quantitative evaluation: an evaluation approach involving the use and analysis of numerical data and measurement.
Quantitative information: see Quantitative evaluation.
Questionnaire: a series of questions listed in a specific order.
R
Random sampling: selection of a smaller number of items from a larger group so that each item has the same chance of being included.
Ranking: placing things in order; used to identify preferences or priorities.
Recommendations: suggestions for specific appropriate actions based on evaluation conclusions.
Reliability: likelihood of getting the same results if procedures are repeated; therefore genuinely reflecting what you are studying.
Respondent: individual providing information directly.
Review: assessing an activity against a target or standard to see how far you have achieved what you intended.

S
Sample: selection for study of a smaller number of items from a larger group.
Sample bias: error largely due to non-response or incomplete responses from selected sample subjects.
Sampling error: the likelihood that a different sample would have produced different results.
Scale: presents respondents with a range of possible responses to a number of statements.
Secondary data: data collected for a different original purpose.
Self-evaluation: when an organisation uses its internal expertise to carry out its own evaluation; evaluation is integrated into project management.
Service: all the goods and information you supply, and things you do for your users (and indirectly for purchasers).
Stakeholders: the people who have an interest in the activities of an organisation. This includes staff, volunteers, users and their carers, trustees, funders, purchasers, donors, supporters and members.
Standard: an agreed level on which to base an assessment.
Statistics: numerical facts systematically collected.
Strategic plan: covers the vision for the future of the organisation and outlines the steps necessary to achieve this over a three- to five-year period.
Strategy: a planned way of achieving long-term aims.
Summary: a short restatement of the main points of a report.
Summative evaluation: evaluation designed to present conclusions about the merit or worth of a project, taking into account processes and changes effected by the project.
SWOT analysis: an analysis of the strengths and weaknesses of the organisation, and of the opportunities and threats which surround it.
System: the way things are done to achieve a result; usually written down and made available to those who need it.

T
Target: something to aim for; it is a countable or measurable result.
Terms of reference: detailed plan for an evaluation.
Treatment: particular project or programme activities or services.
Trends: show changes over time; can be used to plan future services.
Triangulation: looking for evidence or taking measurements from different points to increase reliability and validity.
Trustees: the people responsible for controlling the management and administration of an organisation; another name for management committee members.

U
Unanticipated outcomes: a result of an activity, project or programme that was unexpected.
Users: people who use the organisation's services.
Utilisation (of evaluations): making use of evaluation findings and recommendations.

V
Validity: extent to which an instrument measures what it intends to measure.
Values: principles and basic beliefs about what really matters, that guide how things should be done.
Variable: any characteristic that can vary.

Y
Year plan: a one-year budgeted plan, outlining objectives and targets for the organisation.
other publications from Charities Evaluation Services
First Steps in Monitoring and Evaluation (2002)
First Steps in Quality (2002)
Monitoring Ourselves, 2nd edition (1999)
Managing Evaluation, 2nd edition (1999)
Developing Aims and Objectives (1993)
Does your Money Make a Difference? (2001)
Your Project and its Outcomes (2007)
Evaluation discussion papers
PQASSO (Practical Quality Assurance System for Small Organisations), 3rd edition
PQASSO CD Rom
PQASSO in Practice (2006)
Using ICT to Improve Your Monitoring and Evaluation (2008)
For further details, please contact CES at: 4 Coldbath Square London EC1R 5HL
t +44 (0) 20 7713 5722
f +44 (0) 20 7713 5692
e [email protected]
w www.ces-vol.org.uk
helping you do better what you do best
4 Coldbath Square London EC1R 5HL
+44 (0) 20 7713 5722
+44 (0) 20 7713 5692
[email protected]
www.ces-vol.org.uk