Student perceptions of the impact of Quality Matters essential standards in an animal physiology course

Abstract As online learning becomes increasingly popular in higher education, the quality of courses that utilize this modality is becoming a focus of inquiry. Quality Matters (QM) is a leading quality assurance organization that reviews online and hybrid (partially online, partially in-person) courses against standards of pedagogy and instructional design and certifies courses that sufficiently meet these standards. In this study, we examine student perceptions of course quality in a hybrid three-credit-hour animal science course that has been certified by QM. The class met twice a week for 1.25 h, with one class period online and one in person. It consisted of 11 modules, each of which included learning content, learning activities, and assessments. Upon completion of the course, 46 of 114 students completed a survey in which they rated the course on each of the 21 QM essential standards (Fifth Edition). Descriptive analysis revealed that for 19 of the 21 specific review standards, 75% to 91% of students agreed or strongly agreed that the course reflected the best practice described in the standard. For the other two standards, over half of students (72% and 63%, respectively) agreed or strongly agreed that best practices were reflected in the course design. Another way to examine the data is to collapse the specific review standards into the eight general review categories specified by QM; the collapsed data revealed that 75% to 88% of students agreed or strongly agreed that the course design reflected the eight general course design standards. The percentage of students disagreeing that the course reflected each best practice was 11% or lower. A Cronbach's alpha analysis of the internal consistency of the QM questionnaire (α = 0.96) indicated instrument reliability and stability. A principal component analysis, conducted to further examine features and patterns of student responses, revealed four primary factors that students rated highly (learning objectives, learner interaction and engagement, accessibility and usability, and clarity) and that together explained 78% of the data variance. This study demonstrates that the high quality of course design and delivery in a QM-certified course is clear to students and provides justification for the investment in high-quality online and hybrid course design. In the future, we plan to compare student perceptions of course quality in a course that has not been QM-certified with perceptions in one that has, as well as examine the impact of QM-guided revisions on student outcomes.


INTRODUCTION
Online learning is becoming increasingly prevalent at colleges and universities (He, Xu, & Kruck, 2014; Guppy, Verpoorten, Boud, Lin, Tai, & Bartolic, 2022). Students in higher education are open to learning online (Bali & Liu, 2018) and recognize that instructional design is a critical component of successful online courses (Nassoura, 2020). To ensure the instructional design quality of their online courses, higher education institutions across the nation have begun to implement quality assurance efforts such as the Quality Matters (QM) Quality Assurance System. QM is a not-for-profit organization that has gained national recognition. Its system consists of a process for the design and review of online or hybrid (i.e., partially online) courses based on a rubric that comprises 42 specific review standards subsumed under eight overarching general review standards (in the Fifth Edition for higher education). General Standard (GS) 1 focuses on providing clarity in the course's overall design, establishing expectations for students (e.g., prerequisite knowledge, technology requirements, computer skills), and providing guidance to help them be successful in the course. GS 2 establishes standards for clear, specific, measurable, level-appropriate learning objectives (i.e., what students should be able to do upon completion of the course). GS 3 addresses the need for course assessments to evaluate students' progress toward and achievement of the stated learning objectives. GS 4 supports the inclusion of instructional materials that enable students to achieve the learning objectives. GS 5 addresses the need to facilitate and support student engagement and interaction with the instructor, materials, and classmates. GS 6 covers the need for course technologies to support student achievement of the learning objectives. GS 7 ensures that the course facilitates student access to available support (academic, technical, accessibility, and student services). GS 8 addresses ways to reflect a commitment to making the course and its components accessible and usable for all students. Taken together, the eight QM general standards provide a framework for applying best practices in online course design with the goal of facilitating student success.
Courses can be submitted for official peer review by the QM organization. Course review can result in certification by QM if standards are sufficiently met (Shattuck et al., 2014). The soundness of the QM rubric has been examined by comparing it to a set of standards put forth by the Council for Higher Education Accreditation and the eight regional accrediting agencies; the QM rubric was determined to be congruent with the published accreditation standards for online education (Sadaf et al., 2019).
With the legitimacy of the QM rubric established, the next step is to begin to document the impact of the approach on students. Impact can be measured in multiple ways; one important consideration is student perception of course quality, which directly affects engagement with online learning (Baloran et al., 2021). Student engagement with online learning is the "extent to which students actively engage by thinking, talking, and interacting with the content of a course, the other students in the course, and the instructor" (Dixson, 2015). Student perceptions could also be used to revise current QM standards in future editions of the rubric (Shattuck, 2015). Although the QM rubric was created to reflect best practices in the instructional design of online courses, there is little information about whether students recognize these best practices. The purpose of this study was to gauge student perceptions of course quality in a QM-certified course in order to better understand the learners' perspective and determine potential avenues for continuous course improvement.

Course Description and QM Review
Physiology of Domestic Animals is a three-credit-hour course on mammalian physiology taught in the fall and spring semesters at North Carolina State University (NC State). It is a content-heavy required core course for animal science students, with a high enrollment of 110 to 140 participants each semester. It utilizes Moodle, a learning management system (LMS), to present course materials, activities, and assessments and to track student progress. The course employs a hybrid design, with components taught online and in person. It consists of 11 learning modules, each with a practice multiple-choice question assignment and a diagram-labeling assignment. Students have three attempts on these assignments before the deadline, and the highest grade is accepted. There are also five quizzes and five congruent short-answer assignments throughout the semester. The quizzes are timed, allow one attempt, and consist of short-answer, multiple-choice, true/false, fill-in-the-blank, and diagram-labeling questions. This course was selected by the Digital Education and Learning Technology Applications (DELTA) unit at NC State for course design modifications and review of course quality to meet QM standards. The instructor attended face-to-face group meetings and individual consultations, participated in asynchronous online training and discussions, designed and applied improvements to the online course, submitted the course for internal review, and completed course modifications based on the peer reviews. The course was then submitted for the official QM certification process, which involves an independent review by three outside trained peer reviewers.
For QM certification, the peer reviewers assess whether the course meets each of the 42 specific review standards in the QM rubric (QM Higher Education Rubric, Fifth Edition, 2018). In order to be certified, a course must meet 23 standards that are deemed essential for course quality and earn at least 85 out of 100 total possible points in the review. The Physiology of Domestic Animals course met these requirements and received QM certification in October 2021, indicating professional recognition that it met standards for high-quality course design.

Participants and Procedures
Participants in this study were students enrolled in the Physiology of Domestic Animals course in Spring 2022. The 114 students enrolled in the course were invited to complete a 25-item online Qualtrics survey (Supplementary Material) at the end of the course. Students were informed that participation was voluntary, the survey was not a graded assignment, and all responses would remain anonymous. A prize raffle incentive offered eight NC State Bookstore t-shirts, four Tervis tumbler mugs, and two backpacks. The NC State IRB approved this research study (IRB 24779).
Of the 114 enrolled students, 46 completed the survey (40% response rate) and 39 provided demographic information. Among these students, 10.26% identified as male and 89.74% identified as female. With respect to academic status, 10.26% were first-year students, 30.77% were sophomores, 51.28% were juniors, and 7.69% were seniors. Animal Science majors from the traditional 4-year track accounted for 79.49% of respondents, and 20.51% were transfer students.

Instrumentation
Qualtrics software was used as the online survey platform. The link to the survey was distributed via email, with a reminder email sent 3 days before the survey closed. After completing an informed consent to participate, students were able to access the survey, which consisted of 21 QM questions and 4 demographic questions. The QM questions represented the 21 essential specific review standards in the QM rubric, Fifth Edition. These specific review standards can be grouped into overarching "general review standards," each of which covers a broader concept in instructional design. The eight general review standards are Course Overview and Introduction (two of the specific review standards in our survey fall under this general standard), Learning Objectives (five items in our survey), Assessment and Measurement (three items), Instructional Materials (two items), Learner Interaction and Engagement (three items), Course Technology (two items), Learner Support (two items), and Accessibility and Usability (two items). Each survey item used a Likert-type scale with responses ranging from 0 to 4 (0 = Strongly Disagree, 1 = Disagree, 2 = Neutral, 3 = Agree, and 4 = Strongly Agree).

Data Analysis
The survey data were analyzed to determine student perceptions of course quality based on the instructional design standards of QM. Descriptive analysis (percentage, mean, and standard deviation) was conducted for each item. The grand percentage was calculated for each of the eight general standards. A Cronbach's alpha analysis was conducted to examine the internal consistency of the QM questionnaire.
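The internal-consistency check can be illustrated with a short sketch. The study's analysis was run in R; the following is a minimal Python/NumPy version of Cronbach's alpha, using a small hypothetical matrix of Likert ratings (0 to 4) in place of the actual survey data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (0-4) from five respondents on four items
ratings = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))  # → 0.9
```

Values near 1 indicate that respondents answer the items consistently, which is the basis for the reliability claim made for the 21-item questionnaire.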
To further examine the important features and patterns in student responses on the QM standards, a multivariate analysis using principal component analysis (PCA) was conducted to summarize students' recognition of QM standards in their online learning experience. PCA transforms the 21 survey questions into a smaller set of variables, called principal components, that retain the most informative features of the data. All analyses were performed in R (R Core Team, 2019) and RStudio (version 1.0.143). A Kaiser-Meyer-Olkin (KMO) analysis was conducted as a measure of sampling adequacy (Kaiser, 1960). The KMO value for the overall data was greater than 0.8 (KMO = 0.85), and the values for the individual variables were all larger than 0.75, which indicates that our sampling was adequate.
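The PCA pipeline described above can be sketched as follows. The study's analysis was performed in R; this is a minimal Python/NumPy illustration of PCA via singular value decomposition, with simulated responses standing in for the real 46 × 21 survey matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated stand-in for the real 46 x 21 matrix of Likert responses (0-4)
responses = rng.integers(0, 5, size=(46, 21)).astype(float)

# Center each survey item, then factor with singular value decomposition
centered = responses - responses.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Eigenvalues of the covariance matrix and the share of variance each PC explains
eigenvalues = s**2 / (centered.shape[0] - 1)
explained = eigenvalues / eigenvalues.sum()

# Kaiser criterion: retain components with eigenvalue > 1 (cf. the scree plot)
retained = int((eigenvalues > 1).sum())
print(f"PC1 explains {explained[0]:.1%} of the variance; retain {retained} PCs")

# Loadings: column j of Vt.T shows how strongly each of the 21 survey items
# contributes to PC j (larger absolute value = more important item)
loadings = Vt.T
```

With random data the retained count is arbitrary; on the actual survey responses, the analogous computation yielded the eigenvalues, explained-variance percentages, and loadings reported in Tables 2 and 3.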

Descriptive Results
The extent to which students agreed that the course reflected best practices in instructional design is shown in Table 1. Percentage, mean, and standard deviation are shown for each specific review standard (i.e., individual survey item), and the grand percentage is noted for each general standard (i.e., overarching concept). General Standard 1, "Course Overview and Introduction," had two questions, with a grand percentage of 54.3% strongly agreeing and 0% strongly disagreeing. Standard 2, "Learning Objectives," had five questions, with 48.7% strongly agreeing and 0.9% strongly disagreeing. Standard 3, "Assessment and Measurement," had three questions, with 35.5% strongly agreeing and 1.4% strongly disagreeing. Standard 4, "Instructional Materials," had two questions, with 42.4% strongly agreeing and 1.1% strongly disagreeing. Standard 5, "Learner Interaction and Engagement," had three questions, with 33.3% strongly agreeing and 0% strongly disagreeing. Standard 6, "Course Technology," had two questions, with 35.9% strongly agreeing and 0% strongly disagreeing. Standard 7, "Learner Support," had two questions, with 35.9% strongly agreeing and 0% strongly disagreeing. Standard 8, "Accessibility and Usability," had two questions, with 44.6% strongly agreeing and 0% strongly disagreeing. Across all standards, the overall percentages were 42.2% strongly agree, 38.7% agree, 15.4% neutral, 3.1% disagree, and 0.5% strongly disagree, as shown in Figure 1.
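The grand percentages above can be computed by pooling responses across all items under a general standard and taking the share of each response option. A minimal sketch, using hypothetical ratings for the two items under General Standard 1 (the actual figures are in Table 1):

```python
import numpy as np

# Hypothetical Likert responses (0-4) from five respondents for the two
# survey items under General Standard 1; 4 = Strongly Agree
gs1_items = np.array([
    [4, 3],
    [4, 4],
    [3, 4],
    [4, 3],
    [2, 4],
])

# Grand percentage for a response option: pool all ratings across the
# standard's items, then take the share that chose that option
pooled = gs1_items.ravel()
pct_strongly_agree = (pooled == 4).mean() * 100
print(f"{pct_strongly_agree:.1f}% strongly agree")  # → 60.0% strongly agree
```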
Student ratings for the specific review standards were high: for 19 of the 21 standards assessed, 75% to 91% of students agreed or strongly agreed that the course reflected the best practice. The two standards that did not reach 75% agreement were "clearly explained the use of learning materials" (72%) and "provided grading criteria tied to grading policy" (63%). The percentage of students disagreeing or strongly disagreeing that standards were met in this course was low, with all but one standard below 10% disagreement (the exception being "learning objectives were suited to the level of the course," with 11% of students disagreeing).
Students' positive ratings were also reflected in the grand percentages calculated for general review standards in Table 1, each of which received 75% or higher agree/strongly agree ratings and disagree/strongly disagree percentages under 7%.The two highest rated general standards were Course Overview and Introduction (88% strongly agree or agree), and Learning Objectives (85% strongly agree or agree).The lowest-rated general standard was Assessment and Measurement (75% strongly agree or agree).
As a preliminary analysis, Cronbach's alpha was used to examine the internal consistency of student responses across the survey questions. The alpha value (0.96) was high, indicating stability in students' survey responses and, therefore, instrument reliability. We conclude that the instrument used to collect students' perceptions of this course is reliable.
The results of the principal component analysis, expressed as percentages of variation, are presented in Table 2. The singular value decomposition method was used to conduct the principal component analysis. The first principal component (PC) explained 56.42% of the variance in the student data, accounting for the most variation. The first four PCs together explained about 77.53% of the data variance. The scree plot shown in Figure 2 orders the eigenvalues, numbers that indicate how the data are spread out along each eigenvector (i.e., principal component), from largest to smallest. In the results, the first four principal components have eigenvalues greater than 1.
In order to explore the magnitude and direction of each PC relative to the original QM questions, we examined the absolute values of the loading scores. The larger the absolute value of the loading, the more important the corresponding question is in calculating the PC. Table 3 presents the first four PCs and the top 10 absolute loading scores. The first principal component has large values associated with QM standards 2, 4, and 6, particularly the learners' perspectives and objectives from standard 2, so this component primarily measures Learning Objectives. The second component has large values associated with standards 5, 3, and 8, particularly the learning activities and classroom responses from standard 5, so this component primarily measures Learner Interaction and Engagement. The third component has large values associated with standards 3, 6, 7, and 8, particularly the information about accessibility and clear technical support from standards 7 and 8, so this component primarily measures Accessibility and Usability. The fourth component has large values associated with standards 2, 3, and 5, particularly the clearly stated learning objectives and engagement of standard 2, so this component primarily measures Clarity.
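Selecting the top absolute loading scores for a component, as done for Table 3, can be sketched as follows; the loading matrix here is simulated, not the study's actual loadings.

```python
import numpy as np

# Simulated loading matrix: rows = 21 survey items, columns = first 4 PCs
rng = np.random.default_rng(1)
loadings = rng.normal(size=(21, 4))

# Rank items for PC1 by absolute loading; the largest absolute values
# mark the survey items that contribute most to that component
abs_pc1 = np.abs(loadings[:, 0])
top10_items = np.argsort(abs_pc1)[::-1][:10]  # indices of the top 10 items
```

Repeating this for each retained PC and inspecting which general standards the top items belong to is how each component was given an interpretive label (e.g., Learning Objectives).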

Discussion
Student interest in online and hybrid learning in higher education is growing, and that trend is expected to continue in the coming years (Garrett, Simunich, Legon, & Fredericksen, 2022). With the movement toward universal adoption of this instructional modality comes increased scrutiny of its acceptability and impact. Most higher education institutions have adopted quality assurance standards for their online and hybrid courses (Garrett et al., 2022), investing significant time and resources into learning and implementing best practices in the design of online courses. To date, however, research on student perceptions of courses that meet these standards is limited. The purpose of the current study was to determine whether students perceive a QM-certified course to be of high quality and, if so, to provide support for quality assurance efforts in online education.
The hybrid (taught partially in person and partially online) course examined in this study was certified by QM. At the end of the course, students provided ratings of how well the course reflected each of the essential review standards. For almost all of the essential specific standards, three-quarters or more of the students agreed or strongly agreed that the course reflected the best practice described in the standard. We also examined student responses by the eight general standards in the rubric, which are overarching concepts that each encompass several specific standards. When the data were examined in this way, students' highest ratings were for the general standards covering the course overview and introduction and the learning objectives. The course overview and introduction are vital components, as they set the tone for student engagement throughout the course. The more engaged students are, the more satisfied they are with the course and their subsequent learning (Baloran et al., 2021). Learning objectives that are specific, measurable, and clear are also fundamental, because they lead to higher learner performance (Darabi et al., 2013).
We also conducted a principal component analysis to identify meaningful clusters of standards based on our data. This method creates applicable categories for related items regardless of which specific standard they fall under, as recommended by Shattuck (2015). The resulting four components were learning objectives, learner interaction and engagement, accessibility and usability, and clarity. These four components can be used to guide efforts to improve student perceptions, especially in areas where they are lacking.
The results of this study demonstrate that the high quality of a QM-certified course is clear to students.The findings provide justification for the time and resource investment required to design courses that meet these standards.Student perception is an important factor in the assessment of the impact of quality assurance efforts.In our future research, we plan to build on these findings by investigating how ratings of course quality change before and after a course is modified to meet quality assurance standards, the impact of such a change on student outcomes such as retention and grades, and the influence of specific course design factors on those outcomes.

SUPPLEMENTARY DATA
Supplementary data are available at Translational Animal Science online.

Figure 1 .
Figure 1. Percentage of student responses. The percentage of student responses for each question in the survey is shown on a Likert-type scale ranging from strongly disagree to strongly agree.

Figure 2 .
Figure 2. Scree plot for the first 10 PCs. The scree plot orders the eigenvalues from largest to smallest to show how the data are spread out over the principal components. The first four principal components have eigenvalues greater than 1.

Table 1 .
Percentage, mean, and standard deviation of response options for the QM standards' impact on student learning

Table 2 .
Eigenvalues, variance explained, and cumulative variance for survey response

Table 3 .
PCs and absolute loadings