What is evaluation?

Evaluation supports evidence-based decisions about service design and resource allocation. The main types of evaluation, which inform different types of decisions, are:

  • Process evaluations describe how programs are being implemented, delivered and experienced by clients. They are useful for identifying good practice and implementation challenges, and for understanding the mechanisms by which a program can have an impact and why it might work better for some clients in some contexts than others. However, they do not provide strong evidence of effectiveness.
  • Outcome evaluations focus on measuring the effectiveness of the program in achieving its aims. They seek to identify a causal link between the program and the aim, while taking account of, or controlling for, the impact of non-program factors on outcomes. Some outcome evaluation designs, such as randomised controlled trials (RCTs), provide strong evidence of effectiveness.
  • Economic evaluation is a type of outcome evaluation and is necessary for decisions about resource allocation. Cost-benefit analysis and cost-effectiveness analysis are examples. Cost-benefit analysis quantifies both the costs and benefits of programs in monetary terms to determine the extent to which one outweighs the other. Cost-effectiveness analysis is more appropriate when outcomes are difficult to monetise: it compares the relative costs of programs to identify the lowest-cost way of achieving the same outcome.
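The arithmetic behind the two economic-evaluation measures can be sketched with a toy comparison. All figures and program names below are invented for illustration; they do not come from the source.

```python
# Toy illustration of cost-benefit vs cost-effectiveness analysis.
# All dollar figures and client counts are hypothetical.

def benefit_cost_ratio(total_benefits, total_costs):
    """Cost-benefit analysis: monetised benefits divided by costs.
    A ratio above 1 means benefits outweigh costs."""
    return total_benefits / total_costs

def cost_per_outcome(total_costs, outcomes_achieved):
    """Cost-effectiveness analysis: cost of achieving one unit of
    outcome (e.g. one client assisted). Lower is better when the
    outcome being compared is the same."""
    return total_costs / outcomes_achieved

# Hypothetical Program A: $500,000 cost, $750,000 in monetised benefits,
# 1,000 clients assisted.
print(benefit_cost_ratio(750_000, 500_000))  # ratio of 1.5: benefits exceed costs

# Hypothetical Program B: $300,000 cost, benefits hard to monetise,
# 500 clients assisted - so cost-effectiveness is the appropriate comparison.
print(cost_per_outcome(500_000, 1_000))  # cost per client assisted, Program A
print(cost_per_outcome(300_000, 500))    # cost per client assisted, Program B
```

On these invented figures, Program A achieves the shared outcome at a lower cost per client, which is the kind of comparison a cost-effectiveness analysis supports when benefits cannot be monetised.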


Why do evaluation?

Good evaluation evidence informs efficient and effective use of financial and other resources, and for this reason may be mandated by funders for large programs. In New South Wales, government agencies are expected to evaluate their programs to inform key program, policy and funding decisions [see Further Reading below].

Evaluations can provide information about how programs have been implemented, the barriers encountered, and any unexpected or unintended outcomes. A well-designed evaluation can show what works for whom in what circumstances, evidence that can inform whether a program is likely to be sustainable in other contexts.


How to do evaluation

The evaluation of legal assistance services can be challenging because of the complex settings in which services are delivered. To maximise the usefulness and trustworthiness of evaluations it is vital to ensure that they are both realistic and rigorous.

Evaluation can be resource intensive, so evaluation resources should be directed at larger programs (or service models) and/or those at greater risk of unintended outcomes.

Evaluations require an understanding of how inputs link to outputs and outcomes. A program logic describes, in diagram form, the relationship between a program's activities and its intended aims. Developing a program logic is best done in partnership with all program stakeholders, as it can then provide a shared articulation of the agreed aims of the program and of how the inputs are intended to achieve them.
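A program logic is usually drawn as a diagram, but the chain from inputs to aims can also be sketched as a simple data structure. The program and every entry below are hypothetical, invented only to show the shape of the chain.

```python
# Hypothetical program logic for an imagined community legal advice service.
# Each stage should flow plausibly into the next, from inputs through to aims.
program_logic = {
    "inputs": ["funding", "solicitors", "interpreter services"],
    "activities": ["free legal advice sessions", "community legal education"],
    "outputs": ["advice sessions delivered", "clients reached"],
    "outcomes": ["clients understand and act on their legal options"],
    "aims": ["improved access to justice for disadvantaged clients"],
}

# Reading the stages in order shows how the inputs are intended
# to lead, step by step, to the agreed aims of the program.
for stage, entries in program_logic.items():
    print(f"{stage}: {', '.join(entries)}")
```

Writing the chain out like this, in whatever medium, forces stakeholders to state each link explicitly, which is where gaps in the program's reasoning tend to surface.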

Given the many potential pitfalls of conducting evaluations, it is recommended that professional researchers be consulted early in program development.

Watch Maria Karras, a Senior Research Fellow at the Foundation, talk about evaluation in the legal assistance sector and how we can make it both more rigorous and realistic.


Further Reading

Links are provided as a resource but are not an endorsement.

This website is produced by the Law and Justice Foundation of New South Wales © 2023. All Rights Reserved.