Beyond the Achievement-Proficiency Divide: a New Perspective on Language Assessment

The field of language teaching (represented by ELT as we know it) has one rather unusual feature which sets it apart from other formally taught school subjects. In the specific domain of assessing the progress of learners or pupils receiving instruction, a clear and sharp distinction is made between testing which is achievement oriented and testing which is proficiency oriented. It is widely accepted that the central aim of teaching the English language is the development of the ability to use the language for communication, i.e. proficiency, though the value of different means (techniques) is a matter of vigorous debate. School board and university syllabi and examinations are known to be heavily weighed down by the ideational content of the ‘passages’ in the readers and by lists of grammatical elements demonstrating form delinked from meaning. It is a long-standing complaint that these achievement tests, favouring memorization and reproduction, do not measure language proficiency. Mathew (2006), in the pages of Fortell, highlighted this issue, asking whether the move from achievement to proficiency is being realized. It is taken for granted that this is an appropriate direction of change or reform. I do not dispute this basic argument in the specific setting of the unhappy history of ELT in India. However, the proposal to address proficiency directly and ‘bypass’ achievement needs more analysis. I propose in this essay to revisit this achievement-proficiency tension in language education from the wider perspective of educational measurement. I then go on to address the basic issue of articulating a framework for assessing the progressively developing language ability in English over several years.

In the register of psychological measurement we have three categories of tests: aptitude, achievement and diagnostic. Certain characteristics are associated with each. The achievement test (a test of course objectives) is the ubiquitous examination. The diagnostic test is also familiar (since it is often found as a ritual prefix to the remedial courses prominent in ELT practice). The term proficiency is virtually synonymous with ability. Proficiency is sometimes defined in contrast with aptitude. Proficiency is demonstrable ability that has actually developed through relevant learning experiences, sometimes including deliberate instruction. Aptitude is the potential that facilitates such development of an ability, along with motivation, effort and learning support. It is worth stressing in passing here that aptitude is not a genetic/inborn/hardwired and unchanging characteristic of individuals: both nature and nurture are involved.

The achievement or attainment test (the bad guy) is our focus. Its definition indicates its primary function in formal public education. This is to gauge the degree to which the pre-set learning objectives of a course (or segment thereof) of planned instruction have been achieved by each learner, for certification and other administrative purposes. Such tests are syllabus linked, or rather, syllabus bound. The requirement of content validity (match with syllabus specifications) is strict: question papers are subject to moderation. The achievement test is also necessarily located after the unit of instruction: chapter or block, or (calendar wise) fortnight, month, term and year. The last one seems to be highly favoured in Delhi. These two features, control of scope by a static syllabus statement and location at the terminal point, are major factors contributing to the varied problems of ‘examinations’ that call for reform. As a student of measurement I shall record a partisan observation. Many unheeded suggestions for reforming evaluation have come from our field. Final examinations remain toweringly dominant despite years of endorsement of internal and continuous-comprehensive assessment. Syllabus committees issue model question papers largely oriented to memory, though we know how to test understanding. It is the wider context of values, beliefs, policies, habits and pressures arising in the cultural politics of public education that determines the nature of evaluation in practice. The major paradigm shift needed now is not in measurement theory.

Getting back to the proficiency-achievement divide in language testing: as noted earlier, this tension is not found in the setting of other fields or school subjects. The distinction, if made at all, is between a specific syllabus-oriented achievement test and a broad achievement/proficiency test; these are comparable in scope and difficulty and differ only in certain specific aspects. A ready example is found in the Std. XII mathematics paper of a state board and its counterpart at the national level, the IIT-JEE.

What is it then that makes language instruction different? A clarification is needed here. The term ‘language’ in the present discussion refers to a specific strand within the far wider enterprise of language education. This is the functional skills orientation of General English: the core concern of “ELT”. This stands in contrast with the liberal education/humanistic orientation of English literature. The memorable phrase “language through literature” (which is in a way constitutive of the history and identity of ELT) captures the point here. The formal study of the linguistics of a target language, even when housed in a university language department, has no skills focus. It is a curious fact that many issues central to ELT are not echoed in the discourse around the school curriculum for regional languages. With this clarification, we can consider the implications of a widely accepted argument (restated here).

A considerable amount of language learning takes place outside the requirements and provisions of the formal ‘teaching syllabus’. Young learners especially are constantly interacting with others and responding with curiosity to the ‘texts’ encountered in the world around them. Such exposure to English is spreading into so-called remote areas too. Fortunately, this additional learning is largely unconnected to, and certainly goes beyond, what is required and suggested in course-related ‘homework’. Since language learning is our primary concern, it is reasonable to recognize and value this enrichment of the limited and limiting formal syllabus and achievement test in the direction of proficiency. Testing at least could be made more skill oriented (and hence more valid). It is here that we need to be careful, as there is the danger of overlooking certain critical aspects of validity when assessing students’ progress in learning.

The spirit of content validity is that what is tested should match what is actually taught. There are two degenerate (and convenient) interpretations of ‘taught’. One lies in the policy-level perspective, which treats ‘taught’ as that which the syllabus specifies. Another, more realistic, one accepts the portions actually covered. In a pedagogically honest perspective, what is taught would mean the curriculum transacted (and experienced) in the manner intended by the syllabus developers. While the test specification on paper materializes unfailingly into appropriate activity (test production, administration, scoring, declaration of results), the syllabus plan has no such ‘power’. In reality children are absent, teacher shortages and teacher absenteeism exist, working days are lost; even lessons nominally ‘taught’ are often inappropriate because of poor teacher capacity, crowding, noise and indiscipline. The only truth we know is that (most of the) portions are somehow covered. Thus the basis for pedagogic validity (appropriate transaction) lies in the realm of faith and hope. The point emerging is the disconcerting fact that even the low-level tasks of the traditional achievement test make inappropriate or unfair demands on large numbers of learners (as pupils). The well-meant further demand of proficiency has to be seen in this perspective. The bottom line is that even in the hands of the kindest teacher, a message of inadequacy goes to some; and for many of this some it is virtually daily bread. It is the cumulative effect of always being behind others that we see in demoralized and apathetic students in higher classes.

I do not mean to suggest that we just give up presenting learners with challenges. I wish to draw attention to the challenge of establishing a sounder basis for selecting tasks, and more importantly for the interpretation of individuals’ performance. The public face of assessment begins with the result of scoring: the mark or grade which represents a position on a low-to-high scale related to some standard of adequacy. Different levels of learning are linked to higher or lower scores in a model that underlies any assessment exercise. A learner (her/his performance) gets one of these known or preset scores. Where do these level definitions come from? The obvious answer is that there is a broadly agreed upon continuum of ability/proficiency in English (for non-native learners) spanning the ‘beginner’ and ‘advanced’ levels. This axis of development is visualized (for convenience) as a sequence of stages through which learners pass. In the schemes of the CBSE and state education boards, the ‘long range syllabus’ for teaching English spans about a dozen such stages, each rendered operational through a syllabus, a course book and, of course, an assessment scheme. This represents the mechanism for the implementation of policy relating to English language teaching. A sizeable proportion of the ELT expertise we have accumulated over half a century is located in the processes of design-development-revision of the syllabus-materials-methods package for these successive levels. Thus when a student is to be assessed, she/he is placed at a known zone in the already charted sequence of progress. The lack of pedagogic validity, which weakens the model, has been noted. However, there is a more fundamental issue to be addressed: the delivery problem, after all, is a practical one that can be solved.

Consider the hypothetical case of a student in class 5 in a well-provisioned and well-managed KV. In a mid-September assessment, certain tasks linked to the syllabus are given and a certain quality of performance is expected. How can we say, “You should have reached this level”? Remember here that ‘you’ covers any and every student at this class level across the system. Is the grand theory underlying the entire English teaching syllabus (and ideal curriculum transaction) all that well founded and fine-tuned? It seems much more likely that its appropriateness lies not in theory but in our shared practical experience. After all, a number of children do manage… fairly well too sometimes. But does their success reflect the sound logic of the syllabus delivered thus far? Or could it be unaccounted ‘exposure and engagement’ outside the syllabus? This factor somehow seems uncomfortably associated with social privilege. When in our impatience with the ‘memory based’ achievement test we raise the demand to proficiency, the claim of intrinsic appropriateness of the sequential-cumulative syllabus becomes even stronger. Uncertainty about appropriate standards for assessing performance looms large. My submission is that we need as a profession to find the time and energy to look critically at the model of developing* proficiency (*an adjective here) that informs the sequential structure of the 12-year syllabus for English. If overlaps are found across these arbitrary stages (and I believe there are), we need to acknowledge them upfront and allow much more latitude or slack in applying ‘standards’. We should stop pretending there is principled gradation in the 12-year language syllabus, and give up the larger fiction that a gentle but steady gradient runs through each of the sets of attractive course books flooding the market.

My intention is not, as it might appear, to stop with criticizing Mathew for overloading students with achievement plus proficiency. It is, rather, to endorse the way forward she suggests, having found that “[w]hile the Can-do statements that the tasks exemplified were not within [their reach], the tasks could in fact capture some of the on-the-way or enabling abilities” these children did possess. Thus the Can-do statements of the tasks can be seen to be “made up of several sub-Can-do statements”, and a study of longer duration could “throw light on the development sequence of such abilities”. Yes, we need many such studies, and urgently.

I end with two expressions of hope relating to the deeper engagement with this old challenge that I have called for; both have a bearing on the examination reform agenda. The indeterminacy of language learning is the main complicating factor: we can’t trust our children not to learn more. This pushes us to see the steps in the preset syllabus (which have to be indicated for sheer practical reasons) not as determining the end of assessment, but as a resource for the beginning of assessment, as discovery. Our responsibility as teachers is to discover and understand where our learners are initially and help them move forward. Only hardened autocrats will fail to see the possibilities of collaboration (learner participation) here. In this frame, teaching and assessment become mutually dependent and supportive. On-the-way assessment yielding formatively usable information comes to be seen as a necessary aspect of pedagogy. I believe we are better placed than our subject-area colleagues because of the clear indeterminacy of learning trajectories in our field. They (poor souls) have less encouragement to believe that what is learnt is more than, or different from, what is in the syllabus, and thence to employ assessment as discovery rather than audit/inspection.

The second hope relates to a likely consequence of this process. If we come to value assessment as support for teaching, we would naturally want to engage in it more, and also more purposefully and comprehensively. This would not depend on the official weight for internal assessment being raised (by orders from above). More assessment in this mode will shift the emphasis away from conventional summative evaluation. Formal examinations will then be relegated to their rightful but very specific and very limited role in education. This seems one sure way of making some progress towards examination reform.

The thought that these hopes place teachers of English language in the role of pioneers is a happy one for me.

Works Cited

Mathew, Rama. “Achievement Testing to Proficiency Testing: Myth or Reality.” Fortell 9 (2006): 6-11.


Jacob Tharu has retired as professor of Evaluation from EFLU. He set up the Evaluation Department of EFLU (CIEFL) Hyderabad in 1973 and worked there for about 30 years. He specializes in language testing, research methods, examination reform, primary level curriculum, teacher preparation/support and the relevance of transition/bridge (not remedial) programmes for under-prepared college entrants. jimtharu@gmail.com

* Article first published in FORTELL, September 2011
