Mairead Greene, Paula Shorter, Conceptual Understanding Weighting System: a targeted assessment tool, Teaching Mathematics and its Applications: An International Journal of the IMA, Volume 36, Issue 1, March 2017, Pages 1–17, https://doi.org/10.1093/teamat/hrw003
Abstract
The goal of this article is to introduce the revised Conceptual Understanding Weighting System (CUWS; Greene & Shorter, 2012, Trans. Dial.), a tool for categorizing and weighting a mathematical task in order to identify the level of conceptual understanding being assessed by the task. In addition, there is discussion of how this weighting system relates to other frameworks that are currently used to classify mathematical tasks and student reasoning. Examples are provided to demonstrate how the revised CUWS can be used to calculate a conceptual understanding score for a particular exam, to create mathematical tasks that assess understanding at different levels and to analyse student understanding within those different levels. This in turn allows instructors to identify concepts with which students are struggling and quickly adapt classroom activities to improve students’ understanding. The ability to use the CUWS to create mathematical tasks (not just categorize them) is one of the important goals of the CUWS and one that is not regularly emphasized when working with other frameworks to categorize mathematical tasks.
1. Background
One can approach the problem of classifying mathematical tasks from a number of different perspectives, and there are many such approaches documented in the literature. Schoenfeld (1992) provides a framework for balanced assessment, which highlights seven key dimensions that an instructor should consider when assessing student understanding. These seven dimensions are as follows: content, thinking processes, products, mathematical point of view, diversity, circumstances of performance and pedagogics–aesthetics. Within each of these seven dimensions, there are varying numbers of categories which could be considered. Any given task can be classified by choosing the appropriate category that it falls into under each dimension. The goal of overall assessment of a given concept would be to have a set of tasks that represents variety within each dimension and ensures that all dimensions are being considered. This allows students the most flexibility in how they demonstrate their understanding and therefore provides the instructor with the best assessment of that understanding.
Following Schoenfeld’s (1992) work, there have been a number of classification frameworks that focus more precisely on the mathematical thought processes required by particular tasks. Pointon and Sangwin (2003) developed a classification for mathematical tasks, which broke the tasks down into eight task groups as shown in Table 1.
Table 1. Classification of mathematical tasks (Pointon & Sangwin, 2003)

| Task group |
|---|
| 1. Factual recall |
| 2. Carry out a routine calculation or algorithm |
| 3. Classify some mathematical object |
| 4. Interpret situation or answer |
| 5. Proof, show, justify (general argument) |
| 6. Extend a concept |
| 7. Construct example/instance |
| 8. Criticize a fallacy |
The levels of reasoning seen in Pointon and Sangwin’s (2003) task groups range from low-level recall to higher level thinking. Many of these groups show overlap with the categories that Schoenfeld (1992) included in his ‘Thinking Processes’ dimension. There is also some similarity between the higher order thinking task groups (5–8) of Pointon and Sangwin (2003) and the five task types that Swan (2008) proposes promote conceptual understanding. Swan’s five task types are as follows: classifying mathematical objects, interpreting multiple representations, evaluating mathematical statements, creating problems, and analysing reasoning and solutions. Tasks that fall into task group 2 of Pointon and Sangwin (2003) can be analysed further by considering the framework of Stein and Smith (1998) which provides a way to categorize these procedural tasks as part of their ‘Levels of Cognitive Demand’ framework.
All of the frameworks discussed above focus on classifying the mathematical task itself, as it is written or chosen by the instructor. A different approach is provided by Lithner (2005), who more directly classifies the different types of student reasoning required by mathematical tasks. His framework divides student reasoning into three categories: memorized reasoning, algorithmic reasoning and creative reasoning. His work was used by Bergqvist (2007) in her classification of exam tasks in first-year university mathematics courses, where she focused on plausible reasoning, which she viewed as a subcategory of creative reasoning. After further research, Lithner (2008) added detail to the framework, emphasizing that creative reasoning is defined through novelty, flexibility, plausibility and mathematical foundation.
One key element that Lithner (2005) introduces within his framework which had not been emphasized previously is the importance of considering the work that the student has engaged in prior to a particular task before categorizing the student’s reasoning on that task. It is impossible to determine whether a student is engaged in algorithmic or creative reasoning, for example, if you do not know the algorithms that the student can be reasonably expected to have developed in classwork prior to that task. This consideration of prior student work in the classification of the student’s reasoning is also an important element of the Conceptual Understanding Weighting System (CUWS) described below.
2. Revised CUWS
The CUWS (Greene & Shorter, 2012) was initially developed to reflect the way in which the authors created tasks that assessed a student’s understanding of a particular concept after that concept had been developed and strengthened through course activities. In this development, the authors established their use of the term conceptual understanding as follows: ‘A student who achieves conceptual understanding in our course is understanding the meaning of a concept well enough (or deeply enough) to be able to adapt, modify and expand that concept in order to apply it in novel situations or novel ways (within an application or within a purely mathematical context). Perry (1999) reported that Cornfield and Knefelkamp (1979) characterized conceptual understanding as the ability to adapt, modify, and expand on a concept. They argued that this level of understanding enables a student “to investigate and compare things and to make judgements about adequacy or inadequacy, appropriateness or inappropriateness.” (p.1) This view of conceptual understanding is synonymous with deep learning, which has acknowledged links to positive college student outcomes (Laird, Shoup, Kuh, & Schwarz, 2008; Ramsden, 1992).’ Thus a student is considered to have achieved conceptual understanding when they are able to reason independently from that understanding to answer novel questions and solve novel problems. This is consistent with the use of the term conceptual understanding in the calculus reform movement (Hughes-Hallett, 2000). Examples of developing and assessing conceptual understanding are provided in Section 3 of this article.
The original goal of the CUWS was to help us identify meaningful assessments of our students’ conceptual understanding that would guide improvements in the developmental tasks that we wrote. However, as we focused on creating such assessment tasks, we found ourselves considering to what extent each task required conceptual understanding. We realized that we wanted to include tasks that assessed skills and methods that had been developed in course activities, but recognized that these tasks would not be assessing a deep level of conceptual understanding. This led us to consider the following question: ‘To what level do skills and methods require conceptual understanding?’
[The revised CUWS checks and the corresponding Levels 0–3 are presented here in the original article.]
This revision is informed by the five task types that Swan (2008) identifies as promoting conceptual understanding, mentioned above, as well as the eight task types described by Pointon and Sangwin (2003). However, not all of the task types mentioned in Swan (2008) and Pointon and Sangwin (2003) translate immediately to a bullet in Check 3, as some tasks fall under the Level 0, Level 1 or Level 2 categorization. For instance, Pointon and Sangwin’s Type 1 corresponds to Level 0 and Type 2 corresponds to Level 1. How Pointon and Sangwin’s (2003) Types 3–8 compare to the CUWS Levels depends on what the students have encountered in classwork before they approach that task. For instance, a task that asks students to prove could be Level 0 (a proof that they have already done in class and have memorized), Level 1 (a type of proof, e.g. induction, which they have practiced many times and are being asked to complete in a slightly altered setting) or, with the new revisions, Level 3 (a novel proof). One of the task types described by Swan (2008), interpreting different representations, was already considered in Check 3, but we added bullets to represent two more: evaluating mathematical statements and analysing sample work. A bullet was not added to Check 3 for classifying mathematical objects. Such a task will therefore be classified as Level 0 or 1 if it is a familiar situation and object for the students, or as Level 2 if it is not something they can complete from memory or by applying an existing method. It is possible that such a task could become a Level 3 task if it also requires students to engage in one of the checks in Level 3, for instance, making connections between different representations. Differentiating two levels of conceptual understanding (Level 2 and Level 3) is consistent with the idea of knowledge quality discussed in Star and Stylianides (2013), which highlights that students’ understanding of concepts can range from superficial to deep. The authors believe that these are appropriate weights for such a task and that this relative ranking is also reflected in Pointon and Sangwin (2003), which places ‘Classify some mathematical object’ as a Type 3 task just above factual recall and routine calculations and algorithms. The other task type from Swan (2008) not included in Check 3 is ‘Creating Problems’, as this seemed better suited as a developmental task for conceptual understanding. We therefore worked with a subset of the five task types (interpreting multiple representations, evaluating mathematical statements, and analysing reasoning and solutions) that we felt would appropriately require students to demonstrate an ability to reason from conceptual understanding after that understanding had been developed.
More generally, Wiggins and McTighe (2001) ‘have developed a multifaceted view of what makes up a mature understanding, a six-sided view of the concept’. The six facets that they mention are as follows: can explain, can interpret, can apply, have perspective, can empathize and have self-knowledge. These facets are seen to varying degrees in each of the characteristics for reasoning from conceptual understanding highlighted in Check 3. For instance, ‘interpreting the meaning of a mathematical characteristic in a novel setting’ would involve can explain, can interpret and can apply and ‘analysing sample work on a task and identifying flaws with accompanying reasoning’ could conceivably involve all six facets.
Based on the literature on demonstrating conceptual understanding and ranking understanding discussed above, the authors determined that the five points in Check 3 represent the main attributes of a task that requires deep conceptual understanding. This coincides with the experience of the authors in their classroom testing. However, future work is planned to expand the content to which the CUWS is applied and to investigate any additional task types that would demonstrate deep understanding.
Finally, it is worth discussing further the fact that the CUWS requires consideration of the tasks students have previously encountered. As noted earlier, this is also a requirement of Lithner’s classification system (Lithner, 2005). This aspect of the CUWS was first developed by the authors independently of Lithner’s work (Lithner, 2005, 2008) when we realized that it was impossible to classify something as a method without knowing the work the students had done in class prior to encountering that task. It is possible, and even likely, that a Level 1 task in one classroom could be a Level 3 task in another classroom, depending on whether students had seen that type of task before. Boesen et al. (2010) have used a minimum number of occurrences in the course text to reasonably classify tasks as likely to be answered using memorized or algorithmic reasoning. The authors have used the CUWS in an active-learning setting and know that the students have worked through and discussed all of the tasks on the assigned course activities and online homework. In our application of the CUWS checks, we have therefore assumed that if the task, method or setting has occurred once in one of these activities or in the homework, then the students can reasonably be expected to recall and apply that prior experience. Although a textbook is used in our courses, few examples are completed in the book and most students do not reference it regularly, so at this time the book has not been included in that analysis.
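To make the decision procedure concrete, the sketch below distills the level assignment described above into code. It is a simplification under stated assumptions: the `TaskContext` fields and the `cuws_level` function are our own illustrative constructions, not part of the published CUWS, and the real checks are judgements about a task relative to students’ prior classwork rather than ready-made booleans.

```python
from dataclasses import dataclass

# Hypothetical summary of a task *relative to this classroom's history*.
# The field names are illustrative, not terminology from the article.
@dataclass
class TaskContext:
    seen_verbatim_in_classwork: bool    # answer can be recalled outright
    solvable_by_practiced_method: bool  # an existing method applies as-is
    requires_method_adaptation: bool    # an existing method must be adjusted
    meets_a_check3_criterion: bool      # e.g. interprets a characteristic in a
                                        # novel setting, connects representations,
                                        # evaluates a statement, analyses sample work

def cuws_level(task: TaskContext) -> int:
    """Classify a task relative to what these students have already done."""
    if task.seen_verbatim_in_classwork:
        return 0  # recall only
    if task.solvable_by_practiced_method:
        return 1  # apply an existing method
    if task.meets_a_check3_criterion:
        return 3  # reason from deep conceptual understanding
    return 2      # adapt understanding to a new situation

# Example: a familiar task type with altered data that breaks the method.
print(cuws_level(TaskContext(False, False, True, False)))  # 2
```

The ordering mirrors the discussion above: recall and routine method use are ruled out first, and only then is the task tested against the Check 3 criteria that separate Level 3 from Level 2.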
3. Using the CUWS
3.1 Creating tasks
One use of the CUWS is to help an instructor create tasks for a homework, quiz or exam in order to truly assess whether the students have successfully developed a deep understanding of a particular concept. Boesen et al. (2010) point out that this is important not only for accurate and meaningful assessment but also for appropriately influencing student thinking. They propose that the types of questions used on assessments emphasize to students the thinking that is considered important and therefore influence the type of thinking that students will spend time developing.
[Fig. 3: the course-activity task. Fig. 4: a closely related assessment task giving incomes for years since 1980.]
Although this is a different applied setting from the one originally encountered in class, with slightly different wording and data that are increasing instead of decreasing, it is possible for a student to complete this task by simply repeating the same sequence of steps (the method) that they developed and performed on the Fig. 3 task on the course activity, without needing to regenerate or even revisit the reasoning from understanding that led them to this sequence of steps the first time. This means that the student is no longer using (and thus, demonstrating) their ability to reason from an understanding of the concepts involved. Most likely, they are instead recognizing that this task is of a very similar format to the task on the course activity and then using their method from that task to answer this one. As a result, a task that might have assessed conceptual understanding very well in another precalculus classroom becomes more of a method task in this particular classroom due to these students’ previous experience with that concept in the context of the course.
As written, this Fig. 4 task would be assigned a ranking of CUWS Level 1 in the authors’ classroom for the application of a previously developed method. However, one small change in the data provided in this task—for example, giving incomes for years since 1980 that are not in consistent increments (e.g. 3, 8, 10, 14 instead of 3, 5, 7, 9, 11, etc.)—renders the method developed by the student in the course activity no longer applicable to this task. The student would need to return to their understanding of the mathematical concept and adjust their previously developed method in order to complete this task. As a result, this adapted task would be assigned a ranking of CUWS Level 2. Notice also that the applied setting does not play a significant role in either the original task or our adapted task, and students are not asked to interpret any mathematical characteristic in the context of that new setting. As none of the additional criteria in Check 3 are satisfied, this task remains a CUWS Level 2.
[Task 4, Versions A–C: a sequence of average rate of change tasks presented graphically, appears here in the original article.]
Recall that in this instructor’s course there has been no previous connection made between average rate of change and slope of a line. At this point, students have only been introduced to the definition of average rate of change and practiced working with this definition on points and tables of data. As a result, in this classroom Task 4 Version C is a Level 3 task as students are unable to use a method from class and must make a connection between numerical (the values of average rates of change) and graphical (slopes of secant lines) representations of the concept of average rates of change.
Finally, the instructor could move away from the graphical situation altogether and provide a narrative description such as the following:
Between Jan 1, 2008 and Jan 1, 2014 the money in Jenna’s savings account increased by $6,723. What is the average rate of change of the money in Jenna’s account (in dollars) as a function of time (in years)? Interpret the meaning of this average rate of change in terms of the money in Jenna’s account.
This is a Level 3 task as students are unable to simply enact their previously established method. Instead, they must work from understanding of the definition of average rate of change here. Additionally, they are required to interpret the meaning of average rate of change in a novel (and relevant) setting.
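For concreteness, here is one possible correct response (our worked solution, not part of the original task): Jan 1, 2008 to Jan 1, 2014 spans six years, so

$$\text{average rate of change} = \frac{\Delta\,\text{money}}{\Delta\,\text{time}} = \frac{\$6723}{6\ \text{years}} = \$1120.50\ \text{per year},$$

meaning that, on average, the money in Jenna’s account increased by $1120.50 each year over this period.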
The above examples are intended to illustrate the ways in which the CUWS can assist an instructor in choosing, adapting or writing tasks for assessing skills, methods and concepts in their classroom. In our choice of examples above, we have focused on tasks that assess concepts, as these tend to be the tasks that are most difficult to write.
3.2 Studying student success within the different CUWS levels
Above we have discussed how the CUWS can be used to help develop tasks that assess (or require) a certain level of conceptual understanding. Of course, the CUWS can also be used to categorize already existing tasks on exams, quizzes or other assessments, providing both a measure of the extent to which a given exam or quiz assesses different levels of conceptual understanding and a way to measure student success at specific levels of conceptual understanding. For example, in Table 2 below are the results of applying the CUWS to the tasks on a midterm exam for a Calculus I course.
Table 2. CUWS analysis of midterm exam from Calculus I

| CUWS level | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| Per cent of exam with tasks at this level | 10 | 49 | 18 | 23 |
| Average score (%) on all tasks at this level | 87.93 | 83.18 | 77.78 | 65.07 |
| Standard deviation of scores (%) on all tasks at this level | 11.65 | 15.92 | 16.36 | 8.17 |
| Highest average individual task score (%) at this level | 98.28 | 94.83 | 89.66 | 79.31 |
| Lowest average individual task score (%) at this level | 72.41 | 41.38 | 48.28 | 55.17 |
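An analysis like Table 2 is straightforward to automate once each task has been assigned a level. The Python sketch below is a minimal illustration under assumptions: the task records and their numbers are invented for the example, not the actual exam data behind Table 2.

```python
from statistics import mean, pstdev

# Hypothetical task records: CUWS level, point value and the class's
# average score (%) on that task. All values here are illustrative.
tasks = [
    {"level": 0, "points": 2, "avg_score_pct": 72.41},
    {"level": 1, "points": 4, "avg_score_pct": 94.83},
    {"level": 1, "points": 2, "avg_score_pct": 41.38},
    {"level": 2, "points": 3, "avg_score_pct": 89.66},
    {"level": 3, "points": 5, "avg_score_pct": 79.31},
]

total_points = sum(t["points"] for t in tasks)
for level in range(4):
    group = [t for t in tasks if t["level"] == level]
    if not group:
        continue
    scores = [t["avg_score_pct"] for t in group]
    pct_of_exam = 100 * sum(t["points"] for t in group) / total_points
    print(f"Level {level}: {pct_of_exam:.0f}% of exam, "
          f"mean {mean(scores):.2f}%, sd {pstdev(scores):.2f}%, "
          f"max {max(scores):.2f}%, min {min(scores):.2f}%")
```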
[Exam task: given the graph of f′, students circle (a) the interval(s) on which f is decreasing and (b) the interval(s) on which f is increasing at an increasing rate.]
The grading scheme that was used awarded students 1 point if they correctly circled (a) and 1 point if they correctly circled (b) for a total of 2 points. Therefore, the average per cent on the question tells us that 44.83% of the class circled (a) correctly and 41.38% of the class circled option (b) correctly. This is extremely low for a question that we anticipated they would have developed a method for, so we revisited the work that they did in class to prepare them for this question.
We will examine (a) more closely here as an example of how categorizing tasks in this way can help an instructor in their overall course development. Related to (a), students encountered a number of relevant tasks previously in class. For example, they had to evaluate the validity of the following statement on an in-class activity: if f′ is positive at x = a, then f is increasing at x = a. This was followed on the same activity by a task asking them, ‘In your own words, how can we use f′ information to help when we sketch the graph of f?’ They encountered a similar statement on a true–false quiz in class, which was followed up with a class discussion of the statements. On this true–false quiz, students had to determine the truth of this statement: if f′(x) > 0 for a < x < b, then f is increasing for a < x < b.
When categorizing this task as a CUWS Level 1 task, we had assumed that students would have extrapolated from these experiences to develop a method for checking whether f is increasing or decreasing using f′. Clearly, students did not accomplish this extrapolation in their studying. We would like this to be a method that students develop, so we made a change: we now follow up our initial course activity with some opportunities to implement that general statement in specific examples, pushing students more explicitly to develop the desired method during their course work. After making these changes, we will again monitor how students do on this task.
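As an aside, the method we want students to extrapolate (check the sign of f′ to decide whether f is increasing or decreasing) can be expressed as a small numerical check. This sketch is ours, with an arbitrary example function, and it is a sampling heuristic rather than a proof.

```python
# Toy version of the desired student method: read the sign of f' on an
# interval to decide whether f is increasing there. fprime is an
# arbitrary illustrative example, not a function from the article.
def fprime(x: float) -> float:
    return 2 * x - 4  # derivative of f(x) = x**2 - 4x; f increases for x > 2

def is_increasing(fprime, a: float, b: float, n: int = 100) -> bool:
    """True if f' > 0 at n - 1 sample points inside (a, b)."""
    return all(fprime(a + (b - a) * k / n) > 0 for k in range(1, n))

print(is_increasing(fprime, 3, 5))  # True: f' > 0 throughout (3, 5)
print(is_increasing(fprime, 0, 1))  # False: f' < 0 throughout (0, 1)
```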
When examining student responses to (b) above, the responses fall into three distinct categories, shown in Table 3 below.
Table 3. Student responses to task (b)

| Number of students | Response to task | Success on (a) |
|---|---|---|
| 16 |  | 14/16 correct |
| 6 |  | 5/6 correct |
| 4 | 35–110 | 0/4 correct |
| 1 | Never | 1/1 correct |
| 1 |  | 1/1 correct |
| 1 |  | 1/1 correct |
This is a Level 3 task, reflecting that this is not a type of task that students have seen in any classwork before. With that in mind, this distribution of answers over tasks (a) and (b) above is not necessarily a cause for concern. However, one issue that does appear in this analysis of the data is that all of the students who answered 35–110 for (b) answered (a) incorrectly, and they all answered (a) with the same two intervals. This suggests that they were answering both tasks as if they were looking at the graph of f instead of the graph of the derivative of f. Another issue for consideration is that eight students answered (a) correctly but answered (b) (in various ways) incorrectly. These students were able to correctly interpret the graph to determine when f was decreasing but could not do the same for increasing at an increasing rate. This analysis highlights that our students are struggling with the concept of increasing at an increasing rate, and with the distinction between the graph of f and the graph of f′, but probably not with the concept of increasing itself. We therefore return to our course activities to identify ways to improve their understanding without providing a question like this one, so as to retain this task as a Level 3 task and not reduce it to a Level 1 method task.
The above examples illustrate some of the ways in which the CUWS allows you to focus on particular levels of understanding, whether skills, methods or conceptual understanding, and to carry out an analysis of tasks within a particular CUWS level. This is especially helpful when considering what changes to make to your curriculum as a result of that analysis. For instance, weakness on a Level 1 task highlights a problem with students failing to develop, or to recognize the relevance of, a method that you have worked on in your curriculum. That points to a very particular part of your curriculum to return to and examine whether it is preparing students in the way you anticipated. You may choose to provide students with more practice on this type of problem in course activities so that they recognize what they have developed as a method. Additionally, the authors have had some success with more explicitly prompting students to identify and articulate the methods they have developed. Weakness on a Level 2 or 3 task highlights an issue with student understanding, which is likely developed in a number of places throughout your curriculum. You can then find these points in your curriculum and either attempt to provide more opportunities for your students to draw this understanding together or provide new ways for your students to develop this understanding. However, it is unlikely that simply giving them more practice with the types of tasks that you have already used will have much effect on their understanding.
3.3 Overall conceptual understanding score
Notice that this weighting removes the effect of Level 0 tasks entirely from the score, as we determined that these tasks do not contribute any information about the students’ conceptual understanding. It also minimizes the impact of Level 1 tasks and maximizes the impact of Level 3 tasks, due to the weightings assigned. This is appropriate since Level 3 tasks require, and therefore demonstrate, stronger conceptual understanding than Level 1 or Level 2 tasks.
For example, the Calculus I midterm exam that we have discussed above contained 38 separate tasks, with each task assigned a CUWS Level. In Table 2 above, we saw the breakdown of these 38 tasks: 10% of the points on this exam were in Level 0 tasks, 49% of the points on this exam were in Level 1 tasks, 18% of the points on this exam were in Level 2 tasks and 23% of the points on this exam were in Level 3 tasks.
The Overall CUWS Score helps us to identify students who are struggling with conceptual understanding, something that can be missed in a typical raw exam score when compensated for by proficiency on Level 0 and Level 1 tasks. Of course, if a student has gotten 100% on the Level 2 and 3 tasks on an exam but has struggled on the Level 0 and 1 tasks (e.g. receiving 50% on Level 0 and 1 tasks), then this will be reflected in their raw score on the exam but not in their Overall CUWS Score. This is appropriate since the Overall CUWS Score provides a measure of student learning specifically targeted at conceptual understanding. It is a different approach to analysing the results of an exam and provides complementary information to traditional raw exam scores.
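The exact weighting formula is not reproduced in this extract, so the following sketch should be read as an assumption-laden illustration: it excludes Level 0 and weights Levels 1, 2 and 3 by the hypothetical values 1, 2 and 3, which reproduces the qualitative behaviour described above (Level 1 minimized, Level 3 maximized) but not necessarily the authors’ actual numbers.

```python
# Minimal sketch of an Overall CUWS Score as a weighted average of a
# student's per-level scores. The weights and function name are our
# illustrative assumptions, not the exact formula from the article.
WEIGHTS = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

def overall_cuws_score(level_scores: dict[int, float],
                       level_points: dict[int, float]) -> float:
    """level_scores: per-level score in per cent; level_points: per-level
    share of exam points (e.g. from Table 2: {0: 10, 1: 49, 2: 18, 3: 23})."""
    num = sum(WEIGHTS[k] * level_points[k] * level_scores[k] for k in WEIGHTS)
    den = sum(WEIGHTS[k] * level_points[k] for k in WEIGHTS)
    return num / den

# Example: a student strong on Levels 2-3 but weak on Levels 0-1.
points = {0: 10, 1: 49, 2: 18, 3: 23}
scores = {0: 50.0, 1: 50.0, 2: 100.0, 3: 100.0}
print(f"Overall CUWS Score: {overall_cuws_score(scores, points):.1f}")
```

Run on this hypothetical student (50% on Levels 0–1, 100% on Levels 2–3, with the point distribution from Table 2), the sketch yields an Overall CUWS Score of about 84.1, even though the raw exam score would be noticeably lower, illustrating the complementary information described above.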
An Overall CUWS Score for students on that particular midterm exam is calculated to reveal the class distribution shown in Table 4.
Table 4. CUWS distribution for Calculus I midterm

| Overall CUWS Score | Percentage of students (%) |
|---|---|
| 90–100 | 13.79 |
| 80–90 | 20.69 |
| 70–80 | 24.14 |
| 60–70 | 17.24 |
| 50–60 | 24.14 |
This shows us that 41.38% of students (12/29) scored below 70% using the Overall CUWS Score for this exam. These are students we would want to talk with more carefully about how they are approaching learning in the course, as they seem to be struggling with developing conceptual understanding. As part of general education, programme or departmental assessment, it is also worth identifying the Overall CUWS Score on a particular exam that represents an acceptable performance standard for conceptual understanding in that course. For instance, in our calculus classroom our goal is for students to be 90% successful on Level 1 questions, 80% successful on Level 2 questions and 70% successful on Level 3 questions, which corresponds to a particular Overall CUWS Score threshold on the exam discussed above. Currently, only 10/29 (34.48%) of students achieved this performance standard on that exam.
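Under the same hypothetical weights as the sketch in Section 3.3, the 90/80/70 performance standard would translate into a threshold as follows (the article’s actual threshold depends on the true weights, which are not shown in this extract):

```python
# Threshold for the 90/80/70 standard under the illustrative weights
# above; reuses overall_cuws_score and points from the earlier sketch.
# The Level 0 score is irrelevant here because its weight is zero.
standard = {0: 0.0, 1: 90.0, 2: 80.0, 3: 70.0}
print(f"Standard threshold: {overall_cuws_score(standard, points):.1f}")
```

With those assumed weights the threshold comes out to roughly 78.7; again, this illustrates the mechanics, not the published value.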
Using the CUWS as a targeted assessment tool, instructors (and departments) can work towards improving conceptual understanding in two main ways. First, by analysing class performance within the different CUWS levels and making appropriate modifications and additions to course materials over time and with different groups of students, they can help students become more successful at developing and applying conceptual understanding. Secondly, by looking carefully at students’ Overall CUWS Scores and working with individual students to address their specific areas of need, instructors can assist students in achieving conceptual understanding course goals by the end of the semester.
4. Final thoughts
The CUWS was developed as a tool for an instructor in the classroom to use in real time. It focuses on evaluating a task in combination with how students have experienced learning in a particular classroom to determine whether that task will require a student to use conceptual understanding. It is accessible so that an instructor can use the CUWS in their classroom simply by having a detailed knowledge of the work their students have engaged in and a desire to reflect on tasks in this way. By using the CUWS, instructors or departments can calculate a conceptual understanding score for a particular exam, create mathematical tasks that assess understanding at different levels and analyse student understanding within those different levels. This in turn allows instructors to identify concepts with which students are struggling and quickly adapt classroom activities to improve students’ understanding. The ability to use the CUWS to create mathematical tasks (not just categorize them) is one of the important goals of the CUWS and one that is not regularly emphasized when working with other frameworks to categorize mathematical tasks. Using the CUWS in this way allows instructors to create or modify tasks in a very intentional way in order to assess specific levels of conceptual understanding. In addition, the ability to assign a conceptual understanding score to a particular assessment for review side by side with the raw score allows for a real-time analysis of student understanding in a unique way. We hope that this emphasis on usability will make the revised CUWS a useful tool for any instructor interested in assessing conceptual understanding in their classroom. The examples provided in this article are taken from precalculus and calculus courses, but the authors believe that the CUWS tool is applicable in any mathematics classroom and this will be the focus of future work.
Mairead Greene is an Associate Professor of Mathematics at Rockhurst University in Kansas City, Missouri. She uses an inquiry-focused, active-learning approach in all of her teaching. Her goal is to provide students with a learning environment which encourages them to develop as independent thinkers and problem solvers. She is particularly interested in how to effectively assess the mathematical understanding developed in her classroom. Mairead has a Ph.D. in Algebraic Number Theory. She is actively engaged in research in the areas of math education and the scholarship of teaching and learning.
Paula Shorter is a Professor of Mathematics and is currently serving as Associate Vice President for Academic Affairs at Rockhurst University in Kansas City, Missouri. Before moving to administration two years ago, she served as a full-time faculty member in the mathematics department at Rockhurst for 20 years. Throughout those twenty years and in all of her classrooms, from lower division to upper division courses, she has used an active-learning, inquiry-based approach, with a focus on the development and assessment of conceptual understanding. She has authored and co-authored active-learning, inquiry-based curricular materials in most of the courses that she has taught—precalculus, calculus, differential equations, probability and statistics, and others. Paula received her Ph.D. in Mathematics from the University of Virginia. Her primary areas of research have been in applied probability and in the scholarship of teaching and learning.