Functional Literacy: A Better Way of Assessing Reading Ability?

When I lived in Nigeria, my driver, a young man in his 20s, told me he had gone to school for six years. Yet he struggled to read and write. Once, when taking me to the airport, he almost missed the turn for ‘Departures’. I realised he couldn’t read the sign. Other times he sent me text messages containing scrambled letters and words that I deciphered with a smile and a bit of sadness. I later learnt that he was going to school again to improve his literacy. The thing is, he was also a boxer who competed internationally. He said it was difficult for him to travel without being able to read. That ‘Departures’ sign was indeed important in his own life too.

Literacy is clearly key to getting on in life, whether you are well off and taking it for granted, or disadvantaged and struggling to read. Without the ability to read and write, you might miss out on opportunities to learn, adopt new practices, or simply get by in everyday life. For organisations and governments working to improve the situation of poorer people, in Africa and Asia in particular, it is essential to know the level of literacy and where the gaps are. As my driver illustrates, years of schooling are often not a good measure. Literacy needs to be measured directly.

There are several ways in which this can be done. Literacy measures at population level normally involve a quantitative household survey[1]. The measures vary, however, in how useful and how resource-intensive they are. Data are usually collected face-to-face, though the simpler measures can be applied in other modes as well. Here I will briefly discuss the pros and cons of the main approaches, and also highlight the method of ‘functional literacy’, which has been developed and implemented by IBOPE Inteligência (associated with Kantar Public in Brazil), Instituto Paulo Montenegro (the social arm of IBOPE), and Ação Educativa, a non-governmental organisation focused on education in Brazil.

In this blog post I will focus on ways of measuring reading ability, but similar approaches can be applied for writing ability and basic numeracy. Moving on, then, to the main approaches:

  1. Asking about reading ability directly. For example “How well can you read?” or “How well can you read a newspaper?” Response options may be “Very well”, “Somewhat well”, and “Not at all”.

Clearly this approach relies entirely on respondents’ subjective opinion of how well they can read, and may also be subject to social desirability bias. It may be influenced by reading ability among people around them, and their own rose-tinted self-perception. Perhaps a respondent can easily read her brother’s text message – better than anyone else in the household – but she might struggle to read more complicated texts. She would like to say she can read very well. What will she respond?

Having said that, there are times when self-perceived ability is what matters, for example where one wishes people to put themselves forward for adult education. Another advantage of this otherwise quite limited approach is that it is a very short question that can fit even into SMS questionnaires. Moreover, the version of the question that simply asks how well respondents can read avoids the issue of defining the language. While this may be a drawback where more in-depth information is required, the question can serve to give a general sense of literacy level.

Asking specifically about newspaper reading introduces a reference point of sorts. However, it also raises the issue of language. What if most newspapers are published in, say, English rather than local languages? Which language should the question refer to?

Finally, it is worth mentioning that the literacy questions above are sometimes asked about other people in the household rather than the respondent. This avoids potential social desirability bias, but it means links with other factors cannot be analysed so straightforwardly.

  2. Asking the respondent to read a sentence out loud, eg ‘Parents love their children’ (from the Demographic and Health Survey, as referenced in the 2006 UNESCO paper).

This approach moves closer to assessing actual ability in an objective manner, rather than relying on self-reported answers. Responses are normally coded along the lines of ability to read the ‘full sentence’, a ‘partial sentence’ or ‘not at all’. While this is generally an improvement on self-reported measures, the sentence is usually a very simple one and provides a rather crude tool for assessment. Responses may also not reflect actual comprehension. And few respondents succeed in reading only ‘part of the sentence’ – usually they can read either all of it or nothing – so it is not a very nuanced measure even for what it is trying to assess.

  3. Giving the respondent a brief text to read and then assessing their comprehension.

Giving respondents a brief text to read and then asking questions to assess their comprehension provides a better assessment of literacy than just asking them to read a sentence out loud. The example below is taken from an Education Impact Evaluation survey in Ghana (2003), again as referenced in the UNESCO paper.

“John is a small boy. He lives in a village with his brothers and sisters. He goes to school every week. In his school there are five teachers. John is learning to read at school. He likes to read very much. His father is a teacher, and his parents want him to become a school teacher too.”

The respondent is then asked questions such as ‘Who is John?’, ‘Where does John live?’, ‘What does John do every week?’ etc. Often the responses are provided in multiple choice format.

Responses are grouped into categories based on the number of correct answers. This approach provides more reliable and nuanced results than the measures above, but it arguably doesn’t capture an adequate range of literacy levels reflecting how well people can function in the real world.
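As a rough illustration of that grouping step, a sketch along these lines could be used (the labels and cut-offs here are my own hypothetical choices, not those of the Ghana survey):

```python
def comprehension_category(num_correct: int, num_questions: int) -> str:
    """Group a respondent by the share of comprehension questions
    answered correctly. The thresholds are illustrative placeholders."""
    share = num_correct / num_questions
    if share == 0:
        return "no comprehension"
    elif share < 0.5:
        return "partial comprehension"
    return "full comprehension"

print(comprehension_category(4, 5))  # "full comprehension"
```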

  4. Functional literacy: Giving the respondent a test to assess literacy based on a series of everyday activities.

This approach takes the literacy assessment a step further by incorporating a number of different tasks reflecting everyday life in the context of a given society, and thus provides a much richer measure: it specifically measures ‘functional literacy’. The test was developed in Brazil and covers things like reading a magazine, instruction manuals, and health-related information. It contains about 20 questions. For example, respondents are asked to look at a magazine and indicate where on the cover the title is located, or to link the headings on the cover with the relevant articles. Other test questions relate to instructions on how to clean a water tank, information on who is eligible for vaccinations, and information on how to pay for a TV in instalments. The level of difficulty increases as the test progresses. The responses are then coded using Item Response Theory, meaning the increasing level of difficulty is taken into account in the weighting of responses. Respondents are categorised into one of four groups reflecting their level of functional literacy: 1) Illiterate, 2) Rudimentary, 3) Basic, and 4) Fully literate.
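To make the coding idea concrete, here is a minimal sketch of the simplest IRT formulation, the one-parameter (Rasch) model, in which the probability of answering an item correctly depends on the gap between the respondent’s ability and the item’s difficulty. The difficulty values below are purely illustrative assumptions; the actual item parameters and calibration of the Brazilian instrument are not described in this post.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch (1PL) model: probability of a correct answer to one item."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Illustrative difficulties only, increasing through the test
# (locating a magazine title is easier than following payment
# instructions for buying a TV in instalments).
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]

def expected_score(ability: float) -> float:
    """Expected number of correct answers at a given ability level."""
    return sum(p_correct(ability, d) for d in difficulties)

print(round(expected_score(-1.5), 1))  # ~1.3: mostly the easy items
print(round(expected_score(1.5), 1))   # ~3.7: easy and harder items alike
```

Because harder items carry more information about high-ability respondents, this kind of weighting separates, say, someone who can only locate a magazine title from someone who can also follow instalment instructions, even if their raw scores are close.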

As mentioned above, this approach has been developed by our Kantar Public team in Brazil in partnership with Instituto Paulo Montenegro and Ação Educativa. It now provides official literacy statistics over time for the country. In principle, the assessment can be incorporated into any questionnaire and could be adapted for other countries. The downside, however, is that it can take a bit of time. While a person who can read well needs only about 15 minutes to complete the test, it often takes much longer for someone with a lower level of literacy, not least because respondents often do not wish to give up. The other thing is that, as far as I am aware, it has so far only been developed for the Brazilian context. It would be extremely useful to adapt it to other languages and societies too, which indeed I hope we will get a chance to do.

On that note, I will end this blog post. Hopefully the continued development and measurement of global literacy indicators will help direct resources to improving literacy among those who need it most. The adoption of functional literacy measures in other countries would be a step in the right direction.

Hopefully better measures and improved literacy will contribute to a future where no one is held back because they struggle to locate the ‘Departures’ sign, and people like my Nigerian driver can take off in their boxing careers, or in any other ambition or aspiration they may have.

[1] For a comprehensive discussion of the first three approaches described in this blog post, see the UNESCO paper ‘Measuring literacy in developing country household surveys: issues and evidence’ (2006), available at: http://unesdoc.unesco.org/images/0014/001462/146285e.pdf.

From snoring camels to product diversification: A gendered analysis of internet participation in Ghana, Kenya, Nigeria and South Africa

It is hard to find anything that offers as much hope and potential as increased internet access across Africa. The internet offers a whole new world of information, ideas, tools, and ways of connecting people, as well as providing sources of entertainment and distraction, with silly kittens and camels galore. Importantly, it offers revolutionary ways of accessing and delivering services, including vital ones such as finance. Recent discussions with jua kali, or informal-sector producers, in Kenya showed enormous potential to diversify their product lines, provided they had access to and knowledge of the internet. Enabling people at the bottom of the pyramid, who currently have little or no internet access, to make use of all of this will be life changing.

Or so we like to think. In reality, the picture is more complex. While internet access itself may be binary, just like the data it holds, the users are intricate, inconsistent and often contradictory human beings. Indeed, internet participation cannot be reduced to zeros and ones. A paper by Kantar Public, presented at the African ITS Conference in Accra in March 2016, sheds light on the complexity of internet engagement and the factors that underpin it. The paper, authored by Nicola Marsh, is based on analysis of a global annual study of internet use conducted by Kantar TNS in a wide range of countries[1]. This particular piece of analysis focuses on Ghana, Kenya, Nigeria and South Africa.

Gender is a key part of this picture. Fewer women than men use the internet in most African countries, and these four countries are no exception. By way of example, 19% of men in Ghana have access to the internet, whereas the figure for women is a measly 9%. In South Africa, which has the highest level of internet access among the four countries, 41% of men use the internet whereas only 29% of women do so[2]. Consequently the door to the digital world remains shut for many women.

Figure 1. Internet access by country and sex, 2012

[Figure omitted]

Source: Research ICT Africa, 2012

The Kantar Public paper analysed different levels of internet engagement and the factors that underpin different types of usage. First, an overall “internet participation” composite score was created, based on a set of common online activities and how frequently respondents do them. The findings show that greater access for women, or indeed for disadvantaged men, does not in itself imply online engagement. In fact, the countries with higher levels of access tend to have lower levels of participation. Within countries, men consistently have higher levels of engagement than women.

Figure 2. Mean score of internet participation by country and sex, 2015

[Figure omitted]

Source: Kantar TNS Connected Life Survey, 2015
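The paper’s exact construction is not reproduced here, but a composite of this kind might be computed roughly along these lines (a hypothetical sketch: the activity list, the 0–4 frequency scale and the equal weighting are my own placeholder assumptions, not the survey’s actual variables):

```python
# Placeholder activity list; the survey's actual items differ.
ACTIVITIES = ["instant_messaging", "social_networking", "uploading_photos",
              "playing_games", "reading_news", "mobile_payments",
              "streaming_video", "internet_banking", "blogging", "email"]

def participation_score(frequencies: dict[str, int]) -> float:
    """Mean self-reported frequency (0 = never ... 4 = daily) across
    activities, rescaled to 0-100. Missing activities count as never."""
    total = sum(frequencies.get(a, 0) for a in ACTIVITIES)
    return 100 * total / (4 * len(ACTIVITIES))

# A light user who only messages and social-networks daily:
print(participation_score({"instant_messaging": 4,
                           "social_networking": 4}))  # 20.0
```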

Second, this overall score was then broken down into three main factors or categories of usage, capturing some of the nuance of internet engagement. The categories are:

  • Popular activities. This includes instant messaging, social networking, uploading photos, playing games, reading news/sports/weather.
  • Sophisticated activities. This includes mobile payments[3], streaming/downloading shows/movies, streaming music/radio, watching videos, and internet banking.
  • Text heavy activities. This includes blogging, visiting blogs/forums, and emails.

The gender gap is further highlighted when looking at these different categories of internet usage, with sophisticated activities having the greatest gap.
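As a hypothetical illustration of how such category scores and the gender gap might be computed (the groupings mirror the list above, but the 0–4 scale, the equal weighting and the gap definition are my own assumptions, not the paper’s method):

```python
# Activity groupings mirroring the three categories described above.
CATEGORIES = {
    "popular": ["instant_messaging", "social_networking", "uploading_photos",
                "playing_games", "reading_news"],
    "sophisticated": ["mobile_payments", "streaming_shows", "streaming_music",
                      "watching_videos", "internet_banking"],
    "text_heavy": ["blogging", "visiting_forums", "email"],
}

def category_scores(frequencies: dict[str, int]) -> dict[str, float]:
    """Mean 0-4 frequency per category for one respondent."""
    return {cat: sum(frequencies.get(a, 0) for a in acts) / len(acts)
            for cat, acts in CATEGORIES.items()}

def gender_gaps(men: list[dict[str, int]],
                women: list[dict[str, int]]) -> dict[str, float]:
    """Difference in mean category score, men minus women."""
    def mean(group: list[dict[str, int]], cat: str) -> float:
        return sum(category_scores(r)[cat] for r in group) / len(group)
    return {cat: mean(men, cat) - mean(women, cat) for cat in CATEGORIES}
```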

Other factors, in addition to gender, that lead to greater internet participation overall are younger age, better education, and higher socio-economic status. However, life stage (defined by student status, marital status and having kids) has no consistent impact on online participation across the four countries.

Lower education and lower social class have less of an impact on the popular online activities. If we want women and less well-educated people to participate more, the starting point should therefore arguably be data-light services.

These findings show that as online participation increases and people lower down the pyramid gain access, proportionately more people engage with the internet in lighter ways. Women are often among those who are late to join the online party. Indeed, across the four countries the gender gap in internet participation is inversely related to the level of internet access. For example, in South Africa a more similar proportion of men and women access the internet, but among those who are online, women have a lower level of participation than men. In contrast, in Ghana, where the gender gap in access is large, the men and women who do have access have more similar levels of engagement.

In sum, this analysis makes it clear that for the internet to be a truly useful tool for disadvantaged groups, much more ought to be done to help women in particular develop technical skills and online literacy, as well as to address affordability and access issues. If not, many of the most vulnerable people will remain excluded from digital possibilities, including access to services, information, networks and ideas. While a few tentative steps online might mean people tumble into Facebook and other social networks, it is essential they don’t just get sucked into the whirlpool of singing dogs, snoring camels and other people’s dinner, from which they may or may not emerge. Rather, people need to engage with more sophisticated online activities if they are to click their way onwards and upwards. A snoring camel ain’t gonna help with that.

The full version of the paper is available on request.

[1] The analysis was based on the data from the annual, multi-country survey conducted by Kantar TNS, called “Connected Life”. The survey covers technology and internet behaviours amongst internet users. All those interviewed use the internet at least once a week, and the sample for each country is weighted to be nationally representative of weekly internet users aged 16+. The data was collected between June and August 2015.

[2] Source: Research ICT Africa, Gillwald et al. (2012), http://www.researchictafrica.net

[3] Note that in Kenya mobile payments are commonly made using M-Pesa; the level of penetration of mobile money is much lower in the other countries.