History, Archaeology
& Art History

Computer-assisted assessment

Using computers to design and deliver objective tests

 

Objective testing

An objective test is one in which each question is designed to have an unequivocally correct answer or set of answers. In contrast, a subjective test, for which there is no uniquely correct answer, examines the ability of the student to respond clearly and develop an argument. The Multiple Choice Question (MCQ) is the most commonly used type of objective question. While objective tests are used largely to test knowledge and comprehension, with careful design they can also assess higher learning outcomes such as the application of knowledge and analytical skills.

Because they can be marked quickly and easily and can cover a wide range of course content, objective tests are particularly appropriate for informal self-assessment to give students ongoing feedback about their progress. They can be used in this way throughout an undergraduate course. For formal assessment, however, their contribution to final marks is likely to decline as students progress through their degree programmes and the range of abilities being examined becomes more comprehensive and demanding.

The use of computers for objective testing

Approaches to computer-assisted assessment (CAA) fall into two types of system:

automated marking of paper forms, using an optical mark reader (OMR) with hard copy question paper and an OMR-readable answer form;

computerised marking in which questions are presented and responses assessed entirely by computer software, with no paper involved.

CAA software can provide immediate supportive feedback for each question, tailored to the answer given, which makes it particularly suitable for informal self-assessment. Richer graphical and visualisation exercises can also be included through multimedia files such as images, sound and video clips. Most software offers facilities for reporting results and analysing student responses, and an increasing number of systems are web based.
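As an illustration of how tailored feedback and simple reporting might work, here is a minimal sketch in Python. The question format and function names are invented for the example; they are not the API of any particular CAA package.

    # Minimal sketch of computerised marking with per-answer feedback.
    # The question format below is illustrative only.

    QUESTION = {
        "stem": "In which year did the Battle of Hastings take place?",
        "options": {
            "a": ("1066", True,  "Correct: Duke William defeated Harold II in 1066."),
            "b": ("1086", False, "Not quite: 1086 is the date of the Domesday survey."),
            "c": ("1016", False, "Not quite: 1016 is Cnut's accession, fifty years too early."),
        },
    }

    def mark(question, chosen):
        """Return (score, feedback) for the option the student chose."""
        text, correct, feedback = question["options"][chosen]
        return (1 if correct else 0), feedback

    # Immediate, tailored feedback for a single response:
    score, feedback = mark(QUESTION, "b")
    print(score, feedback)   # 0 Not quite: 1086 is the date of the Domesday survey.

    # A simple report across a class's responses:
    responses = ["a", "b", "a", "c", "a"]
    print("facility:", sum(mark(QUESTION, r)[0] for r in responses) / len(responses))
    print("option counts:", {k: responses.count(k) for k in QUESTION["options"]})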

Benefits

Marking is objective (i.e. highly reliable in the assignment of marks)

Tests can be marked and returned very speedily

Tests can incorporate a variety of media (images, video, audio)

Relevant feedback can be given automatically (during or after the test)

Randomised selection can be made from large question banks

Flexible access gives students opportunities for self-assessment

Built-in test management (collation, analysis, tabulation, report generation) eases the administrative burden of assessment

 

Concerns

Questions tend to address factual content and may therefore encourage learning of surface detail rather than an appreciation of underlying concepts

The nature of the process makes it difficult to catch and correct mistakes in compiling questions before the test is taken, placing a heavy responsibility on the question setter

Formal assessment by computer is subject to concerns over security

Students may obtain the correct answer for the wrong reason, or may reinforce misconceptions about the subject by answering questions incorrectly

Hardware and software resources and support must be available

Producing well designed, appropriate questions involves a considerable investment of staff time

 

Objective question types

There are various forms of objective question:

multiple choice: choose the correct answer from a list of alternatives

multiple response: select a number of correct answers from the list

true/false

selection/association: match items from two related lists

assertion/reason: choose the correct reason for an assertion (a special case of the MCQ)

While these question types can be delivered on paper and then processed by OMR, the following two types specifically require a computer for input:

text match/gap filling: enter a word, short phrase or number

visual identification/hotspot: move a marker to identify a particular ‘hotspot’ on an image
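The two computer-only types reduce to simple checks. The following sketch is illustrative: the normalisation rules and the rectangular hotspot format are assumptions, not any standard.

    # Sketch of how the two computer-only question types might be checked.

    def check_text_match(response, accepted):
        """Text match / gap filling: compare a typed answer against
        accepted variants, ignoring case and surrounding whitespace."""
        return response.strip().lower() in {a.lower() for a in accepted}

    def check_hotspot(x, y, region):
        """Visual identification: is the marker inside a rectangular
        hotspot given as (left, top, right, bottom) in image pixels?"""
        left, top, right, bottom = region
        return left <= x <= right and top <= y <= bottom

    print(check_text_match("  Domesday Book ", ["Domesday Book", "Domesday"]))  # True
    print(check_hotspot(120, 85, (100, 60, 180, 140)))                          # True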

 

Designing questions

An MCQ typically comprises three parts:

stem: the question component;

key: the correct answer;

distractors: incorrect answers provided as alternatives to the key;

and optionally:

feedback: a mark and/or comment reflecting the student’s performance. This may be presented immediately after the question, at the end of the test or not at all. Feedback may also be provided on the basis of overall test performance.
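These parts can be made concrete in a short sketch. The class below is hypothetical (the field and method names are invented for illustration); it stores a stem, key, distractors and optional per-option feedback, and randomises the key’s position on presentation, as recommended below.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class MCQ:
        """One multiple choice question: stem, key, distractors and
        optional per-option feedback (field names are illustrative)."""
        stem: str
        key: str
        distractors: list[str]
        feedback: dict[str, str] = field(default_factory=dict)

        def presented_options(self):
            """Return all options with the key's position randomised."""
            options = [self.key] + self.distractors
            random.shuffle(options)
            return options

    q = MCQ(
        stem="Which source records landholding in England in 1086?",
        key="Domesday Book",
        distractors=["Anglo-Saxon Chronicle", "Bayeux Tapestry"],
        feedback={"Bayeux Tapestry": "The Tapestry narrates the Conquest, not landholding."},
    )
    print(q.presented_options())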

For MCQs, students might be discouraged from guessing by imposing a penalty for incorrect answers, although it is generally considered that negative marking simply alters the baseline of results. In any case, the rewards of ‘blind’ guessing diminish rapidly as the number of questions increases. Other viewpoints hold that ‘intelligent’ guessing may be no bad thing, and that well designed MCQs will lead a genuinely blind guesser astray.
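The baseline argument is easy to verify. On an n-option MCQ a blind guesser is correct with probability 1/n, so a penalty of -1/(n-1) per wrong answer brings the expected mark per question to exactly zero:

    # Expected mark per question for a blind guesser on an n-option MCQ.
    n = 4
    p_correct = 1 / n

    # No penalty: expected mark is 1/n per question (0.25 here), so
    # guessing simply raises the baseline of a long test by that amount.
    print(p_correct * 1)                                     # 0.25

    # Penalty of -1/(n-1) per wrong answer: expected mark is zero, i.e.
    # the baseline is shifted back to zero rather than guessing being
    # made genuinely unprofitable.
    print(p_correct * 1 + (1 - p_correct) * (-1 / (n - 1)))  # 0.0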

It is self-evident that the questions should reflect the aims and objectives of the course and be appropriate to the abilities of the students for whom they are intended. A pre-test during which questions can be tried out, edited and reformulated is considered essential. Beyond that:

stem: ensure that it is concise and unambiguous, avoiding negatives (‘Which of the following is NOT…?’) and grammatical clues to the key

key: this should be the same length as the distractors and its position in any list should be randomised using the software (unless the answers are numeric, in which case they should be listed in ascending order while still avoiding any systematic placement of the key)

distractors: these should be plausible alternatives that, as far as practicable, cover the full range of options and help to identify common misconceptions

feedback: this should be appropriate, helpful, encouraging, varied and unpatronising
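Some of these rules can even be checked automatically before a pre-test. The sketch below encodes a few of them as rough heuristics; the function name and thresholds are invented for illustration, not a standard.

    # Sketch of an automated check for some of the design rules above.

    def lint_mcq(stem, key, distractors):
        warnings = []
        if " not " in f" {stem.lower()} ":
            warnings.append("stem contains a negative ('not')")
        lengths = [len(d) for d in distractors]
        if lengths and abs(len(key) - sum(lengths) / len(lengths)) > 15:
            warnings.append("key length differs markedly from distractors")
        if len(set(distractors)) != len(distractors):
            warnings.append("duplicate distractors")
        return warnings

    print(lint_mcq(
        stem="Which of the following is not a primary source?",
        key="A modern textbook",
        distractors=["A charter", "A coin hoard", "A parish register"],
    ))
    # ["stem contains a negative ('not')"]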

 

Implementing tests

Students find that objective tests require considerable concentration, so it is probably wise to limit a test to no more than an hour, although imposing a time limit for self-assessment purposes is usually counterproductive. The use of open tests to assess coursework is open to abuse through collaboration unless supervised, but the same is true of conventional activities carried out of class, and encouraging this sort of student collaboration may even be considered desirable. The provision of feedback under such conditions is also a significant issue: withholding information can compromise other educational objectives. Using CAA under examination conditions requires careful preparation, including back-up plans in case of hardware or software problems. Step-by-step tutorials and check lists are available for summative assessment planning (see Resources).

Question banks

Writing good objective questions can be a lengthy and laborious exercise; it is usually faster to edit existing questions. Some question banks are available commercially, generally associated with textbooks, so it is worth finding out whether the textbooks you use for your courses have associated questions. It is also possible to adapt questions found in textbooks for objective testing. Otherwise, contact colleagues at other institutions to see whether they are willing to work with you to produce questions for similar courses; this may best be done via a relevant learned society. Your CTI Centre can probably point you in the right direction.
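Once a bank exists, most delivery software can draw a randomised selection for each sitting (one of the benefits noted earlier). A minimal sketch of the idea, assuming the bank is a simple Python list:

    import random

    # Sketch of drawing a randomised test from a question bank; the
    # bank here is just a list of (stem, key) pairs, purely illustrative.
    bank = [(f"Question {i}", f"Answer {i}") for i in range(1, 101)]

    def draw_test(bank, k, seed=None):
        """Select k questions without replacement, so each student
        (or each sitting) can receive a different selection."""
        rng = random.Random(seed)
        return rng.sample(bank, k)

    paper = draw_test(bank, k=20)
    print(len(paper), paper[0])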

Resources

Computer-assisted assessment centre: http://caacentre.ac.uk/

Computer-assisted assessment mailing list: http://www.mailbase.ac.uk/lists/computer-assisted-assessment/

For information on OMRs see Data & Research Services: http://www.drs.co.uk/

For information on computerised systems see Question Mark Designer and Perception: http://www.qmark.com/ or TRIADS: http://www.derby.ac.uk/assess/talk/quicdemo/html/

For Web-assisted assessment see CASTLE: http://www.le.ac.uk/castle and the Web-assisted assessment mailbase list: http://www.mailbase.ac.uk/.

A tutorial on setting effective objective tests is available from the CTI Centre for Land Use and Environmental Sciences: http://www.clues.abdn.ac.uk:8080/caa/caatut.html

A protocol for the implementation of summative computer-assisted assessment examinations is at: http://www.clues.abdn.ac.uk:8080/caa/protocol.html

  

Original Authors

Eamonn Twomey

Jacqui Nicol

Christina Smart

Version: 4.1.10.1

 



07 May 1999