The contribution of citizen-led learning assessments (CLLAs), in which community organisations conduct simple reading and/or maths evaluations, has rightly been celebrated. A new report from Results for Development (R4D) provides insight into their strengths and limitations and, most importantly, offers practical pointers on how they can be improved.
The report finds much to praise. CLLAs have mobilised civil society. They provide large-scale reporting on skill proficiencies among children and adolescents, both in and out of school. They have set a high bar in communicating their findings, presenting results in ways that are strikingly easy to understand, and as a consequence they have had impact.
But herein lies the weakness: how often can you report dire learning results before they lose their power to shock? The R4D report finds that, to date, none of the CLLAs has succeeded in raising learning levels. This is largely because assumptions about how CLLAs should work haven't held. Principal among these is the assumption that reporting on woeful learning levels will automatically galvanise action.
Does this mean that investments in CLLAs are misplaced? Absolutely not! The R4D study makes important recommendations on how to improve CLLAs so that they have greater impact. It suggests looking to more diagnostic approaches that help inform solutions rather than simply quantifying the extent of the learning problem. R4D suggests this could be achieved by moving from annual benchmarking to benchmarking every two or three years. The savings realised could then be redeployed to develop diagnostic assessments and to engage parents, communities and teachers in supporting children through complementary learning initiatives.
There is a clear logic behind this. Firstly, reporting the scale of learning deficits doesn't make them go away; rather, well-informed practical actions in the classroom, home and community drive learning improvement. Secondly, learning levels are unlikely to shift dramatically from one year to the next, and after an initial period of reporting poor results the shock factor diminishes. Less frequent benchmarking may therefore have greater impact.
The study also offers guidance on how CLLAs can sustain a focus on early-years learning, and it encourages CLLAs to broaden the range of skills they test. In reading, it suggests expanding from assessing letter and word recognition to evaluating reading comprehension, thereby signalling the importance of ‘reading for meaning’. R4D also recommends raising the profile of foundational mathematics skills, something the OECD has also recently stressed. Most importantly, the report strongly recommends that CLLAs make rigorous efforts to ensure their findings are robust and comparable across both time and geography, including by opening up to third-party scrutiny. Without this, there can be no valid comparison of progress in learning.
One area on which the report is surprisingly silent is the relationship between CLLAs and government assessment systems. Many recent innovations in learning assessment, such as CLLAs and early grade reading assessments, have been advanced by NGOs with substantial support from external funders. However, if governments are to be held accountable for learning, it is important that government assessment frameworks are also strengthened and given credence. There are encouraging examples, such as India, where great strides have been made in developing diagnostic learning assessments that are robustly comparable.
As we move forward with reinvigorated attention to learning improvement in the post-2015 education agenda, it is crucial that we build mutually reinforcing systems for learning measurement, diagnosis and remediation that work in synergy with regional and international initiatives.