focused on the relative effectiveness of feedback in general across various computer-based instruction environments. Four previous meta-analyses in the general area of feedback were identified (1991, 1988, 1983, and 1982); only one of these (1983) examined the effects of feedback on learners in computerized and programmed instruction, finding a medium effect size of 0.47. Because that study included paper-based as well as computer-based instruction, Azevedo and Bernard give good reason for studying the pure effects of feedback in computer-based instruction with a new meta-analysis. Their meta-analysis indicates an overall weighted effect size of 0.80, suggesting that achievement outcomes were greater for the feedback group than for the no-feedback group. Concurring with Morey (1996) and sharing in the general consensus that feedback is one of the most critical components of Computer Based Instruction (CBI), Azevedo and Bernard attribute the higher learner achievement to feedback, as reflected in the large effect size for the feedback group. However, they identify potential flaws in their analysis owing to the number of rejected studies, which “bespeaks the somewhat methodologically weak state of research in the area” (Azevedo & Bernard, 1995).
In general, the value of feedback cannot be overlooked in the design of computer-based instructional materials. Feedback can guide the learner through a tutorial by prompting correction and review and, in some cases, by encouraging the motivation to continue successfully. As the third leg of the S-R-R method, feedback offers the discriminative reinforcement necessary to shape learner behavior toward the objectives of a particular lesson.
“Learner control,” a concept that is readily described in terms of autonomy and independence, is generally defined as an instructional