Bloom's Taxonomy in Developing Assessment Items - Discussion, Teaching Implications, and Conclusion

Author(s): Draga Vidakovic, Jean Bevis, and Margo Alexander

A good understanding of the concepts covered in a precalculus course is crucial for building students’ mathematical understanding, confidence, and success in all subsequent undergraduate mathematics courses. We have observed that the majority of students enrolled in our course have weak mathematical backgrounds and low motivation. Consequently, the course has been identified as one with a high "drop" rate. We are using WebCT to develop quizzes, tests, and online materials that emphasize conceptual understanding of functions, as well as numeric and algebraic manipulation and algorithms.

Typically, students beginning a precalculus course try to solve problems using simple rules without seeking any understanding of the related concepts. Some popular rules include (-)(-) = (+), "and means plus", and "the slope is the coefficient of x". Note that such rules may be incomplete or even incorrect, but they are still adopted by students who avoid conceptual understanding. By using tasks at higher levels of Bloom’s taxonomy, we force students to move beyond the uninformed use of such rules. We hope that this will help students retain knowledge as well as improve their understanding and attitude.

We found Bloom’s taxonomy to be a useful framework for developing multiple-choice, short-answer, matching, and essay questions that can involve students in complex cognitive tasks. We emphasize that we classify a task or item at a given level of Bloom’s taxonomy based on the highest level of cognitive task posed to the student. That is, if the most the student is expected to do is remember facts, definitions, terminology, symbols, etc., the task is at the knowledge level. If the student is expected to translate, illustrate, extrapolate, estimate, predict, identify/distinguish, or interpret -- without necessarily relating the material to other material or seeing its fullest implications -- the task is classified at the comprehension level. A task that requires students to take an abstraction and apply it in particular and concrete situations is classified at the application level. When the student is expected to break down information into its constituent parts, considering their relationships and organizational principles, the task represents the analysis level. When the task is the opposite of analysis -- the student is expected to put together elements and parts to form a whole -- it is said to be at the synthesis level. Finally, when the student is required to use criteria and judgment to justify something based on internal or external evidence, the task is at the evaluation level.
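As a reading aid, the classification rule -- an item sits at the highest cognitive level among the tasks it poses -- can be sketched in a few lines of code. The sketch below is purely illustrative and is not part of our WebCT materials; the `classify_item` helper and the sample item are hypothetical.

```python
from enum import IntEnum

class BloomLevel(IntEnum):
    """The six levels of Bloom's taxonomy, ordered from lowest to highest."""
    KNOWLEDGE = 1
    COMPREHENSION = 2
    APPLICATION = 3
    ANALYSIS = 4
    SYNTHESIS = 5
    EVALUATION = 6

def classify_item(expectations: list[BloomLevel]) -> BloomLevel:
    """Place an item at the highest cognitive level among the tasks it poses."""
    return max(expectations)

# Example: an item that asks students to recall a definition (knowledge),
# interpret a graph (comprehension), and apply a formula in a concrete
# situation (application) is classified at the application level.
item = [BloomLevel.KNOWLEDGE, BloomLevel.COMPREHENSION, BloomLevel.APPLICATION]
print(classify_item(item).name)  # APPLICATION
```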

We use online assessment in our teaching in several ways:

  1. to observe individual student learning and to identify the concepts and issues with which students have difficulty;
  2. to modify old assessment items or to develop new ones and to adapt our teaching strategies to include more discussions on those concepts with which students have difficulty;
  3. as a vehicle for student-teacher interactions.

As an example of the third use, we encourage students to ask questions about quizzes at the beginning of each class period. Very often a class starts with a student statement such as "I have a question about an item on the last quiz". The instructor encourages those students to present the problems or items together with their solutions. Usually more than one student has had difficulty with the same item, which motivates them to listen and to participate. On the other hand, students who answered those items successfully engage by sharing their solutions.

We believe that class discussions initiated by students have value in that the students become active, motivated, and reflective participants in their own learning. In the assessment literature, this kind of assessment is referred to as formative assessment (see, for example, Black and Wiliam, 1998). Assessment is called formative when the information it produces is used as feedback by students/learners about their understanding (and sometimes their skills) and by teachers to inform their practice, e.g., to evaluate and modify teaching strategies or to assess students’ understandings and misunderstandings throughout the teaching process.

Recommendations from professional organizations encourage teachers to use formative assessment in their classrooms. At the K-12 level, teachers are advised to use already developed and tested assessment items -- items developed or approved by testing agencies. But there is a shortage of databases with appropriate testing items at the undergraduate level. Teaching mathematics at the undergraduate level may be quite variable due to differences in instructor style. However, there is anecdotal evidence that many instructors rely solely on problems drawn from their textbooks.

In developing our assessment items, we were guided by the objectives we set for our students in each lesson. Additionally, we have observed students’ performance on every item and modified items as necessary. For instance, items that all students answered correctly were either revised to incorporate more complex questions or given a different weight. On the other hand, items that only a few students answered correctly were revised after class discussions during which we identified the causes of the difficulty. This process contributes to the validity (the degree to which an item measures what it is supposed to measure) and reliability (the consistency of item results) of the items in our database.
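To make the revision rule above concrete, the following sketch computes a simple difficulty index (the proportion of students answering each item correctly) and flags items at the two extremes. It is only an illustration of the idea, not our actual workflow or a WebCT feature; the response matrix, the thresholds, and the `flag_items` helper are hypothetical.

```python
def item_difficulty(responses):
    """Proportion of students answering each item correctly.

    `responses` is a list of student rows; each row holds 1 (correct)
    or 0 (incorrect) for every item on the quiz.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    return [sum(row[j] for row in responses) / n_students for j in range(n_items)]

def flag_items(difficulties, too_easy=0.95, too_hard=0.20):
    """Flag items for revision, mirroring the rule described in the text:
    items nearly everyone answers correctly are made more complex or
    reweighted; items almost no one answers correctly are revisited
    after class discussion."""
    flags = {}
    for j, p in enumerate(difficulties):
        if p >= too_easy:
            flags[j] = "too easy: add complexity or adjust weight"
        elif p <= too_hard:
            flags[j] = "too hard: discuss in class and revise"
    return flags

# Hypothetical quiz results: 5 students x 4 items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
]
difficulties = item_difficulty(responses)
print(difficulties)              # [1.0, 0.6, 0.2, 1.0]
print(flag_items(difficulties))  # items 0 and 3 flagged as too easy, item 2 as too hard
```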

It is worth mentioning that the experience of one of the authors on the University System of Georgia Regents Mathematics Test Development Committee -- which developed a rising junior examination in mathematics suitable for all University System colleges and universities -- has been useful in developing our items. Additionally, all of the authors are members of the Quality in Undergraduate Education committee, which is developing standards and assessment for undergraduate mathematics courses.

We began developing, implementing, and revising the online assessment a few semesters ago. The results of this implementation, including students’ reactions, have been positive so far. There are indications, based on attitude pre- and post-tests, that students’ attitudes tend to change over the semester in at least three areas. Specifically, at the end of the semester:

  1. a smaller percentage of students believe that "memorizing is the most important thing in learning mathematics";
  2. more students believe that they "can learn mathematics"; and
  3. a larger number of students feel comfortable working with computers.

Although these results represent our individual observations in our own classrooms -- they are not the results of a well-developed study -- they are nevertheless of significant value to us. They motivated us to devote numerous hours to developing and revising assessment items. The assessment outcomes have informed us about individual students’ progress and have helped us reshape our instruction. For example, each of us has been in a situation in which we revised a planned daily class activity to accommodate class discussion of issues or concepts that appeared on a quiz.

In conclusion, our process of developing and using online course material and assessment items in the WebCT environment has been useful for us, both as authors and instructors, and for our students in the following ways:

  1. We are motivated to set and state clearly the objectives for the course and for each individual lesson.
  2. We can follow closely the progress of each student.
  3. We have immediate feedback on the concepts that give our students difficulty, and we can revise our daily teaching plan to intervene right away.
  4. The system gives our students immediate feedback and an opportunity for self-reflection.
  5. The system gives students and instructors additional motivation to interact.

Draga Vidakovic, Jean Bevis, and Margo Alexander, "Bloom's Taxonomy in Developing Assessment Items - Discussion, Teaching Implications, and Conclusion," Convergence (December 2004)