ASSESSING IMPACT

How do we know that legal assistance and justice system resources are being used effectively and efficiently, and are having their intended impact?

Most justice system providers operate within a limited-resource setting and are increasingly required to demonstrate ‘effectiveness’, ‘impact’ and ‘successful outcomes’.  Relevant evidence can be gathered on a continuous or routine basis through monitoring or through ad hoc evaluation of specific programs or activities.

Impact can be assessed DIRECTLY by measuring outcomes and having robust evidence that the inputs and outputs are causally related to the outcomes. Outcome and economic evaluation methodologies can provide this quality of evidence. Impact can be measured INDIRECTLY, through ongoing monitoring of inputs and outputs, once there is good evidence that these achieve the intended outcomes. Below, Principal Researcher and evaluation specialist Dr Kerryn Butler presents an overview of the findings from the Foundation’s report.

Service model case studies provide a mechanism to share good practice about how programs achieve their impact. The format of any monitoring or evaluation, including what data or other information should be collected, can be informed by constructing a program logic.

Why use a program logic?

A program logic describes how inputs and outputs are linked to the intended outcomes and is therefore a useful first step in articulating how these elements are related and identifying relevant measures for each.

Developing a program logic is best done in partnership with all program stakeholders so it can provide a shared articulation of the agreed aim of the program and how the inputs are intended to achieve that aim.

Measuring impact will generally require the collection of information on inputs, such as the number of staff and the resources expended; outputs, such as counts of the different types of services delivered and the clients who received them; and outcomes, such as legal and non-legal resolutions, client satisfaction, law reform and staff wellbeing.
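
To make this concrete, the sketch below sets out a purely hypothetical program logic for an imagined legal advice clinic, pairing each input, output and outcome with a candidate measure. The elements, measures and data sources are illustrative only and are not drawn from any actual service.

```python
# Hypothetical program logic for an illustrative legal advice clinic.
# Each element is paired with a candidate measure, which can then
# inform what monitoring or evaluation data to collect.
program_logic = {
    "inputs": {
        "solicitor and paralegal staffing": "full-time-equivalent staff per month",
        "clinic operating budget": "expenditure by cost category",
    },
    "outputs": {
        "legal advice sessions delivered": "count of sessions, by matter type and client group",
        "referrals made to other services": "count and destination of referrals",
    },
    "outcomes": {
        "legal problems resolved": "proportion of matters resolved, by resolution type",
        "client satisfaction": "survey ratings collected after service",
    },
}

for stage, elements in program_logic.items():
    print(stage.upper())
    for element, measure in elements.items():
        print(f"  {element} -> measured by: {measure}")
```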

Evaluation

Good evaluation evidence informs efficient and effective use of financial and other resources, and for this reason may be mandated by funders for large programs. Government agencies are often encouraged to evaluate their programs to inform key program, policy and funding decisions.

Evaluations can provide information about how programs have been implemented, the barriers encountered, and any unexpected or unintended outcomes. A well-designed evaluation can show what works, for whom and in what circumstances, and this evidence can inform whether the program is likely to be sustainable in alternative contexts. It is good practice to share the findings of evaluations, to support good practice and the efficient use of resources and to avoid reinventing the wheel.

What types of evaluation are there?

Outcome and economic evaluations are intended to measure impact and provide evidence of the causal relationships between inputs, outputs and outcomes, while process evaluations focus on how a program is being implemented and experienced by participants, and can help explain why a program achieves an impact. The main types of evaluation, which inform different types of decisions, are:

  • Process evaluations describe how programs are being implemented, delivered and experienced by clients. They are useful for identifying good practice, challenges to implementation and understanding the mechanisms of how a program can have an impact and why it might work better for some clients in some contexts than others. However, they don’t provide causal evidence of effectiveness.
  • Outcome evaluations focus on measuring the effectiveness of the program in achieving its aims. They seek to identify a causal link between the program and the aim, while taking account of, or controlling for, the impact of non-program related factors on outcomes. Some outcome evaluation designs, such as randomised controlled trials (RCTs), provide strong evidence of effectiveness.
  • Economic evaluation is a type of outcome evaluation and is necessary for decisions about resource allocation. Cost benefit analysis and cost effectiveness analysis are examples. Cost benefit analysis quantifies in monetary terms both the costs and benefits of a program to determine the extent to which one outweighs the other. Cost effectiveness analysis is more appropriate when it is difficult to monetise the outcomes, as it compares the relative cost of programs to identify the lowest cost method of achieving the same outcome. A simple worked example of both approaches follows below.
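
As a purely illustrative sketch, the calculation below works through both approaches with invented figures; the program names, costs, benefit estimates and outcome counts are hypothetical and are included only to show the arithmetic.

```python
# Illustrative only: all figures are hypothetical.

# Cost benefit analysis: compare monetised benefits with costs.
program_cost = 500_000          # total cost of delivering the program ($)
monetised_benefits = 750_000    # estimated value of benefits ($), e.g. avoided court costs

net_benefit = monetised_benefits - program_cost          # $250,000
benefit_cost_ratio = monetised_benefits / program_cost   # 1.5: every $1 spent returns $1.50

# Cost effectiveness analysis: when benefits are hard to monetise,
# compare the cost of achieving the same (non-monetary) outcome.
programs = {
    "duty lawyer service": {"cost": 500_000, "matters_resolved": 1_000},
    "outreach clinic":     {"cost": 300_000, "matters_resolved": 500},
}
for name, p in programs.items():
    cost_per_outcome = p["cost"] / p["matters_resolved"]
    print(f"{name}: ${cost_per_outcome:,.0f} per matter resolved")
# duty lawyer service: $500 per matter resolved (lower cost per outcome)
# outreach clinic: $600 per matter resolved
```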

How to do evaluation

The evaluation of legal assistance services can be challenging because of the complex settings in which services are delivered. To maximise the usefulness and trustworthiness of evaluations it is vital to ensure that they are both realistic and rigorous. Evaluation can be resource intensive, so evaluation resources should be directed at larger programs (or service models) and/or those at greater risk of unintended outcomes. Evaluations require an understanding of how inputs link to outputs and outcomes.

A program logic describes in diagram form the relationship between activities and the intended aim of the program. Without a quality evaluation design, it will not be possible to reliably conclude that the outcomes achieved can be attributed to the activities of the program. Given the many potential pitfalls of conducting evaluations, it is recommended that professional researchers be consulted early in program development.

Listen to Maria Karras, a Senior Research Fellow at the Foundation, talk about evaluation in the legal assistance sector and how we can make it both more rigorous and realistic.


Monitoring

In the context of delivering legal assistance and justice system services, monitoring refers to the collection of information about activity related to service delivery, and the results of that activity.

Monitoring can provide regular information to assess progress in delivering against an operational strategy. This might include progress against a defined set of targets and performance indicators that are intended to represent strategic priorities.

In some circumstances monitoring can also provide information about changes in demand for services. However, where resources are limited and services are targeted at specific areas of law or client groups, the overall profile of services delivered will largely reflect the nature and extent of what is actually offered and available to potential clients. Trends in services delivered should therefore be interpreted in the context of any changes to service provision.

Monitoring can also provide information about equity of service provision, by comparing services delivered with estimates of legal need. The Foundation’s former Data Dashboard Online and Victoria Legal Aid’s current Data Discovery Tool are examples of data presented in this way.

How to do monitoring

Information on inputs and outputs can be collected in management information systems, with staff entering data on each client, each service provided and, where available, the outcomes achieved. This type of data can be aggregated to provide reports by, for instance:

  • Business area, office, or service location
  • Client type, including NLAP-defined priority client groups
  • Type of legal matter, ranging from broad area of law (civil, family, criminal) to specific matter or problem type(s)
  • Type and intensity of service
  • Outcomes achieved

More advanced reporting systems can support reporting by more than one of these factors at a time, to monitor, for instance, outcomes achieved by priority client type within a specific area of law. The quality and usefulness of this information will depend on good quality data entry.
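
As an illustration of this kind of aggregation, the sketch below groups a handful of invented service records by area of law and client group using pandas; the field names, values and priority-group definition are hypothetical and would need to be mapped to an organisation’s own data model and reporting requirements.

```python
import pandas as pd

# Hypothetical service records, as might be exported from a
# management information system (field names are illustrative only).
services = pd.DataFrame([
    {"office": "Parramatta", "client_group": "priority", "area_of_law": "civil",
     "service_type": "legal advice", "outcome": "resolved"},
    {"office": "Parramatta", "client_group": "other", "area_of_law": "family",
     "service_type": "duty lawyer", "outcome": "ongoing"},
    {"office": "Dubbo", "client_group": "priority", "area_of_law": "civil",
     "service_type": "legal advice", "outcome": "resolved"},
])

# Count services by client group within each area of law ...
report = services.groupby(["area_of_law", "client_group"]).size().rename("services")

# ... and, where outcome data exist, report outcomes for priority clients only.
priority_outcomes = (services[services["client_group"] == "priority"]
                     .groupby(["area_of_law", "outcome"]).size())

print(report)
print(priority_outcomes)
```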

Some aspects of service delivery require alternative methods of data collection. For example, client experience is best monitored through ad hoc surveys and/or through mechanisms for ongoing feedback and the routine capture of complaints. Staff wellbeing can be monitored through surveys, appraisals, absenteeism, and staff turnover.

Evaluation and other methods of research can be used to inform what inputs, outputs and outcomes are measured through monitoring, and also how these can be measured appropriately and accurately.


Service model case studies

Service model case studies provide examples of current service models and feedback on how services are being delivered. They can therefore be a useful method to monitor how an organisation’s resources are being deployed. Service model case studies differ from individual client story case studies. Access our top tips on writing service model case studies.

Service model case studies are a useful method for sharing good practice as they bring together information on:

  • the PURPOSE of a particular service program or model
  • how it is being delivered OPERATIONALLY
  • the IMPACT it is having
  • potential unintended CONSEQUENCES
  • LESSONS for improved performance
  • cost and other RESOURCES required to deliver the program or model.