The nature and prevalence of diagnostic testing in mathematics at tertiary-level in Ireland

In this study, we conducted a survey of all tertiary-level institutions in Ireland to find out how many of them use diagnostic tests, and what kind of mathematical content areas and topics appear on these tests. The information gathered provides an insight into what instructors expect students to know on entry to university and what they expect students to have difficulty with. In total, 12 diagnostic tests from 11 Irish institutions were collected during the study. An item-by-item analysis was carried out on each diagnostic test to determine the content areas and topics that are assessed on the diagnostic tests and to identify any additional knowledge required for successful completion. We compare the tests administered in the university sector and the Institute of Technology (IT) sector of the Irish tertiary-level education system. In addition, current diagnostic tests are compared with two tests from the 1980s to see if the level of mathematical preparedness expected of first-year students has changed over time. Our analysis is situated in a discussion of previous research on diagnostic testing both nationally and internationally.


Introduction
In this paper, we consider diagnostic tests which aim to measure the mathematical preparedness of students entering tertiary-level education in Ireland. These types of tests have been used internationally for more than 30 years (Tall & Razali, 1993; Lawson, 2003). This, together with the prevalence of developmental mathematics courses or bridging courses (McGillivray, 2009), shows that instructors at tertiary-level feel that incoming students have serious difficulties with basic mathematics. In our study, we conducted a survey of all tertiary-level institutions in Ireland to find out how many of them use diagnostic tests and what kind of mathematical content areas and topics appear on these tests. This information gives us an insight into what mathematics departments expect students to know on entry to university and what they expect students to have difficulty with. To date, research in this area in Ireland has focused on students' performance on individual diagnostic tests and subsequent interventions. With our data, we hope to shed light on the nature and prevalence of diagnostic testing in Ireland. We will compare the tests administered in the university sector and the Institute of Technology (IT) sector of the Irish tertiary-level education system. In addition, we will compare current diagnostic tests with two tests from the 1980s to see if the level of mathematical preparedness expected of first-year students has changed over time. We begin by summarizing what is known about diagnostic testing in Ireland. We then outline our methodology before presenting our results and conclusions.
Diagnostic testing is an international phenomenon that has been heavily researched, especially in the UK (Gillard et al., 2010; LTSN, 2003). In 2003, an extensive report on diagnostic testing in the UK, conducted by the Learning and Teaching Support Network, identified two primary aims of diagnostic testing: 'to inform staff of the overall level of competence in basic mathematical skills of the cohort they are to teach; to inform individual students of any gaps in the level of mathematical knowledge they will be assumed to have, so that they can take action to remedy the situation' (LTSN, 2003, p. 1). The report 'Measuring the Mathematics Problem' (Hawkes & Savage, 2000) extended these aims to include the removal of 'unrealistic' staff expectations, and an email survey carried out by Gillard et al. (2010) further added the comparison of cohorts across successive years to the list of aims. These aims have also been expressed in the Irish context (e.g. Pfeiffer, 2019; Ní Fhloinn, 2009; Mac an Bhaird et al., 2009). To begin, we will give some details on the Irish education system.

The Irish education system
Second-level education in Ireland culminates in the Leaving Certificate, a set of high-stakes terminal examinations which are a prerequisite to completing secondary school and act as a gateway to further education. Students sit an examination in at least six subjects, and their grades are converted to points which determine entry to tertiary-level education. Almost all students take mathematics as one of their subjects. Mathematics can be taken at three levels in the Leaving Certificate (higher, ordinary and foundation), though higher or ordinary is typically required for many tertiary-level courses. In 2019, 55,094 students sat mathematics for the Leaving Certificate: 33% were examined at higher level, 57% at ordinary level and 10% at foundation level (State Examinations Commission, 2020). These figures are noticeably different from those of 2010, particularly the percentage of students opting for higher level, which has risen steadily from 16% (State Examinations Commission, 2010). The increase in students sitting the higher-level exam has coincided with a fall in the percentage of students sitting ordinary level (72% in 2010), while the percentage sitting foundation level has remained relatively constant (11% in 2010) (State Examinations Commission, 2010). An initiative designed to increase the number of students who sit the higher-level exam is the introduction of 'bonus points' for students who achieve a passing grade (at least 40%) (Shiel & Kelleher, 2017). The introduction of bonus points coincided with the introduction of a revised mathematics syllabus, Project Maths, in September 2010. Project Maths (NCCA, 2008) has a particular focus on problem solving and included changes in content (the amount of statistics covered was increased, there were reductions in the calculus syllabus and vectors were omitted) and in assessment (Shiel & Kelleher, 2017).
Tertiary-level education in Ireland comprises universities, ITs and other higher education institutions. Historically, the focuses of universities and ITs differed: ITs concentrated on vocational and STEM subjects, while universities offered a more varied range of educational pursuits. This distinction has become less pronounced over time, however. In 2018, the Technological Universities Act enabled ITs to achieve technological university status (Citizen Information Board, 2019). Technological University Dublin (TU Dublin) was the first of these institutions to form (emerging from Dublin, Tallaght and Blanchardstown ITs) and is the only tertiary institution in Ireland that offers qualifications from Level 6 to Level 10 of the qualifications framework (Citizen Information Board, 2019). There are currently nine universities, 11 ITs and four other higher education institutions in Ireland (Higher Education Authority, 2020).

Diagnostic testing in Ireland
Diagnostic testing has been taking place in some Irish universities and tertiary-level institutions since the 1980s. In the 1990s, more institutions began using these tests in an effort to describe 'the mathematics problem' in their institution. At that time, the mathematical ability of incoming students became a growing cause for concern among lecturers in Ireland, mirroring the situation in the UK.
Despite the effectiveness of diagnostic testing in identifying the areas in which students excel or might require improvement, there is agreement that, on its own, it is not sufficient to solve the mathematics problem. Savage et al. (2000) offer the following approach to identifying and addressing student difficulties: 'Diagnostic testing should be seen as part of a two-stage process. Prompt and effective follow-up is essential to deal with both individual weaknesses and those of the whole cohort' (Savage et al., 2000, p. iii).
This two-stage process, though varying slightly between institutions, is by far the most popular approach to addressing the mathematics problem in Ireland. For example, TU Dublin (Carr et al., 2013; Marjoram et al., 2008), IT Tralee (Cleary, 2007) and the National University of Ireland, Galway (Pfeiffer, 2019) test their students repeatedly across several weeks, in conjunction with online resources and targeted revision sessions, as part of their follow-up practices. Dublin City University (Ní Fhloinn, 2009) uses its diagnostic test results to identify 'At Risk' students (students who score <50% on the diagnostic test) and offers them 'refresher sessions' which take place during the first 2 weeks of semester. Similarly, at Maynooth University (Mac an Bhaird et al., 2009), the diagnostic test is used to identify students who are deemed 'At Risk' (scoring at or below 33%) and to offer them targeted supports. The University of Limerick (UL) (Faulkner et al., 2009) also uses its diagnostic test to identify students who may be at risk of failing first-year modules. In addition, each of the aforementioned institutions makes students aware of its Maths Support Centre.
In Ireland, diagnostic tests are typically written either by people involved in mathematics support or people who are involved in first-year lecturing (often these are the same people). The tests are administered to first-year service students, meaning they are enrolled in at least one mathematics module but are not pursuing a degree in mathematics. The programmes they are enrolled on include the sciences, engineering, business and education.
Existing research on diagnostic testing in Ireland tends to investigate the diagnostic testing of a given institution across consecutive years (for example, Carr et al., 2013; Cleary, 2007; Fitzmaurice et al., 2009; Gill & O'Donoghue, 2007; Ní Fhloinn, 2009; Pfeiffer, 2019). Within this, research has focused on the performance of a subset of service mathematics students such as engineers (Carr et al., 2013; Cleary, 2007) and preservice mathematics teachers (Fitzmaurice et al., 2019) or on performance on questions within the diagnostic tests (Gill & O'Donoghue, 2007; Ní Fhloinn, 2009; Pfeiffer, 2019). One domestic study investigated diagnostic testing from the students' perspective, finding that the majority of students think diagnostic testing is a good idea, appreciate the purpose of diagnostic testing and believe that early in the first semester is the most suitable time for the testing (Ní Fhloinn et al., 2014). Research conducted internationally (for example, the Mathematics Diagnostic Test Project; Betts et al., 2017) has focused on how diagnostic testing can improve learning through providing feedback to teachers and learners, albeit with younger students. Other international studies have reported on performance (Lawson, 2003; Steyn et al., 2008; Pemberton & Belward, 1996; Carr et al., 2015; Jennings, 2009). We are not aware of any studies that have examined more than one diagnostic test or compared the topics covered in questions on multiple tests.
Knowing what types of questions are contained on diagnostic tests could give us insight into mathematics departments' expectations of incoming students. This has not been studied in an Irish context to date, though Deeken et al. (2019) investigated what German lecturers expect of incoming STEM students. They identified 179 prerequisites (across mathematical content, mathematical processes, views about the nature of mathematics and personal characteristics), of which 140 garnered a consensus among participating university lecturers. Though this is a large number of prerequisites to expect of students, the authors conclude that 'The expected mathematical prerequisites are mainly on a basic level and do not require formalistic or abstract mathematical knowledge or abilities' (Deeken et al., 2019, p. 23). Though differences will exist across contexts (for example, vectors and calculus prerequisites may differ between countries because of second-level syllabi), the majority of the topics and skills identified are likely to be common internationally. Deeken et al. (2019) identified six basic content prerequisites: fractions, percentages and proportionality, and linear and quadratic equations and functions. In addition, they outlined the skills that students need: 'Students should be able to perform basic mathematical processes in unfamiliar situations (Level 2) (e.g., fluently and precisely performing mathematical standard procedures without electronic tools; proficiently handling and transferring between mathematical representations; understanding written mathematical formulations including technical terms ...' (Deeken et al., 2019, p. 34).

In this study, we gathered diagnostic tests from tertiary-level institutions with the goal of learning about the prevalence and nature of mathematics diagnostic testing in Ireland. The research questions (RQs) are as follows:

1. How prevalent is mathematics diagnostic testing at tertiary-level in Ireland?
   a. What form do the diagnostic tests take?
2. What mathematical content areas and topics are assessed on diagnostic tests?
   a. What content areas and topics are assessed most frequently?
   b. Do the content areas and topics that are assessed vary between universities and ITs?
   c. Do the content areas and topics that are assessed vary across time?
Answering these RQs will give us a clear picture of the prevalence and nature of diagnostic testing in Ireland. In the following sections, we outline the methods of data collection and analysis, present the results of the study and discuss the findings in the context of the present literature.

Methodology
In this section, we detail the collection, analysis and presentation of the data in this study.

Data collection
In order to inform the RQs, we needed access to every diagnostic test in use at tertiary-level in Ireland. The diagnostic tests were obtained in two ways: through email correspondence with academic staff in each institution and through a survey of the literature. In most cases, we contacted staff with whom we had a previous professional relationship; we asked them if their institution used a diagnostic test and, if so, whether they could provide us with a copy of it. UL's diagnostic test has been published in the literature (Fitzmaurice et al., 2019); we confirmed that this test is currently in use and included it in our analysis. The other diagnostic tests in our study were sent to us by email in December 2019.
Within our RQs and throughout the results and discussion, we use the terms 'mathematical content areas' and 'topics' repeatedly. Though these terms appear frequently in mathematics education (often axiomatically), we offer the following description of each in the context of this research. We consider mathematical content areas to be broad areas of study within mathematics, such as Number, Algebra, Calculus, Geometry and Trigonometry. This is in line with the language used in the Irish second-level mathematics curriculum, where the term 'strand' is used. We use 'topics' to describe the more specific pieces of knowledge that exist within each mathematical content area. For example, within the content area of Number exist the topics of Exponents, Rules of logs and Ratios, among others. See Table 3 for an extensive list of the topics that we identified during this work and Appendix A for explanations of each topic.

Data analysis
A combination of qualitative and quantitative data is required to answer the RQs in this study. In order to answer RQ 1, we recorded the responses to our enquiry and calculated the proportion of Irish tertiary-level institutions where diagnostic testing is used.
Analysing the qualitative data required a different approach, however. RQ 2 looks at the nature of the questions on the diagnostic tests, so the data gathered to answer this question are qualitative in nature. We developed a categorization system in order to describe the types of tasks (specifically the mathematical content areas and topics involved) on the diagnostic tests. We did this in a manner similar to Heid et al. (2002), who developed a mathematical task coding instrument with a focus on the end goal of the question. Procedures to assess the trustworthiness of the category system (independent coding, coding consistency checks and stakeholder checks), which Thomas (2006) describes, were also practised during our data analysis. This approach has been used previously by a member of the research team when analysing students' responses to multi-step mathematics problems at tertiary-level (Hyland, 2018).
We began by describing the primary mathematical focus of each question on the diagnostic tests. This process resulted in an exhaustive list of mathematical content areas and topics. We refined this list by removing duplicates and grouping very similar descriptions together. Each description was reduced to a word or short phrase (a category label) encapsulating its meaning. When this was complete, each researcher's list was reconciled to form the initial categorization. The questions on the diagnostic tests were then categorized by each researcher independently before a meeting took place to discuss outliers and other discrepancies. This iterative process occurred multiple times, each time resulting in a refined categorization and increased agreement. In the initial phase, there was 90% agreement (Cohen's kappa of 0.89) between the researchers' categorizations. The categories were refined until the researchers were in complete agreement. The final list contained 50 categories.
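To make the agreement statistics concrete, the following is a minimal sketch of how percent agreement and Cohen's kappa can be computed from two coders' category labels. The labels shown are hypothetical illustrations, not data from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Return (percent agreement, Cohen's kappa) for two coders'
    category labels over the same items."""
    n = len(coder_a)
    assert n == len(coder_b) and n > 0
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement expected from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical labels for five questions:
a = ["Exponents", "Expressions", "PEMDAS", "Exponents", "Ratios"]
b = ["Exponents", "Expressions", "PEMDAS", "Expressions", "Ratios"]
agreement, kappa = cohens_kappa(a, b)  # agreement = 0.8 here
```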
To illustrate how a category evolved through successive iterations of the coding process, we present an example from the IT Tralee diagnostic test. Q23 from that test is an example of a question that could be classified in two separate categories on the preliminary list. The question was initially coded as both Expressions (simplify) and Expressions (expand) during the independent coding phase. The ensuing discussion resulted in the merging of Expressions (simplify), Expressions (expand) and Expressions (factorize) into one category: Expressions. Another example of reducing the number of categories through refinement is Exponents, which is the combination of four distinct categories from the preliminary list (one each for positive, negative and fractional exponents and for the rules of exponents).

Data presentation
The data in this study are presented in the text, and the descriptions are supplemented by explanatory tables. The tables are used to highlight the salient points of the analysis, in addition to other ancillary findings. Two of the tables (Tables 3 and 5) contain large numbers of values resulting from various tallies. In these cases, colour is used to enhance readability, with shade distinguishing between values. An example is shown below (Table 1).

Results
In this section, the results of the study are presented. Fourteen diagnostic tests were included in the analysis: 12 are currently in use in Ireland, and the other two date from more than 30 years ago. For the sake of transparency, we will make distinctions at each stage between the diagnostic tests that are currently in use and those that are not. Our aims are to discover how prevalent diagnostic testing is at undergraduate level in Ireland and to gain insight into the mathematical content areas and topics that are assessed on those tests.

The prevalence of diagnostic testing in Ireland
Information about diagnostic testing at 22 of Ireland's tertiary-level institutions (all the universities and ITs; see Table 2) is required to inform RQ 1. A multi-faceted approach was used to gather this information. We contacted members of academic staff in each institution and asked whether they used a diagnostic test; if so, we requested that they send us a copy of the test. In a small number of cases, the tests were in the public domain, having been published in a research article. These methods yielded the following information with respect to the prevalence of mathematics diagnostic testing at tertiary-level in Ireland. Eleven institutions (AIT, DCU, DIT, DKIT, ITC, ITS, ITTa, ITTr, MU, NUIG, UL) administer a diagnostic test to a significant number of their large first-year classes, while three (CIT, LyIT, UCD) use them as part of special programmes. One university (UCC) is currently developing a diagnostic test, and no information was obtained on the remaining institutions (GMIT, ITB, LIT, TCD, WIT). This means that 14 of 20 institutions (70%) are currently using a diagnostic test of some form, 1 (5%) is developing a diagnostic test and the remaining 5 institutions (25%) did not respond. In summary, over 70% of universities are using diagnostic tests (75% when TU Dublin is included), with 64% of the ITs employing them.

The form of the diagnostic tests
The number of questions on the diagnostic tests that we collected ranged from 11 in IT Sligo to 40 in UL, with a mean of 23 questions and a standard deviation of 7.8. The tests were of two types: those that contained multiple choice questions (MCQs) only and those that used open response questions. Out of the 14 tests that we analysed, seven consisted of MCQs (ITCb with three possible answers for each question; MU, ITTa, DIT and NUIM with four answer options; DCU and DkIT with five answer options); the remaining seven used open response questions. The UL test, for example, was designed with hand marking in mind: 'The test was designed for marking by hand so that one could investigate the specific errors that students make and identify where the gaps in student knowledge lie' (Gill & O'Donoghue, 2007, p. 229). There is also slight variation in how incorrect responses are scored. For example, the diagnostic test in DCU, which initially gave zero marks for an incorrect response, introduced negative marking to discourage guessing and increase students' awareness of their own shortcomings (Ní Fhloinn, 2009).
Though not applicable to RQ 1, the information-gathering process revealed two extra diagnostic tests that are included in Table 2 and in the section on RQ 2. These tests (CRTC and NUIM) are of Irish origin and date from the 1980s but are no longer in use. They are included because they offer readers a point of comparison across time; for clarity, they appear in the rightmost columns of Table 3.

The students taking diagnostic tests
Though diagnostic testing is used extensively throughout Ireland and the UK, the students taking the tests are often pursuing different degrees within each institution. It is reasonable, for example, for a student studying business and a student studying engineering to prioritize the content areas most relevant to their respective studies. Consequently, which cohorts of students take the diagnostic tests within each institution should be taken into account when analysing the results in this section.
Service mathematics is taught to students pursuing degrees in engineering and computing, business, science and mathematics teaching. Though the number of students studying each subject varies by institution, some statements can be made that inform our analysis. First, the four universities for which we have diagnostic tests (DCU, MU, NUIG and UL) have students from each discipline, all of whom complete the diagnostic test upon entry to tertiary-level. One exception is UL, where the preservice mathematics teachers stopped taking the test in 2014 (Fitzmaurice et al., 2019). Second, all seven of the ITs test their incoming engineering students: the tests received from AIT, DIT, ITTa, ITTr, DkIT and ITS are aimed at engineers, while ITC provided two separate tests, one (ITCa) administered to its science students and the other (ITCb) aimed at engineers.

Content areas and topics assessed on the diagnostic tests
In order to answer RQ 2, an item-by-item analysis of all the diagnostic tests was carried out, generating the results described below. We analysed each question in two ways: first, we categorized the primary mathematical content area required to answer each question; second, we identified any other topics required to complete it. These two analyses are presented separately below. We begin with the analysis of the primary categorizations of the questions.
This analysis resulted in the use of 50 distinct categories of mathematical topics across six content areas: Number, Algebra, Trigonometry, Geometry, Function and Calculus. Table 3 shows the categorization of the questions from all the diagnostic tests. The ticks are a tally of the number of questions on a given diagnostic test whose primary categorization is shown to the left. For example, the Maynooth University diagnostic test contained one question with the primary categorization Multiply/divide (real), three questions with the primary categorization Exponents and no questions with the primary categorization Ratios. To improve readability, categories appearing on two tests or fewer do not appear in Table 3. These 21 categories are listed in Appendix A, where a description of each category is also given.
The table contains all categories with three or more occurrences, of which there are 28. No category occurs on every test, though Expressions is on 13 of the 14 diagnostic tests (AIT being the exception). The next most frequent categories are Exponents with 11 occurrences, PEMDAS with 10, Add/subtract (fractions) and Transposing equations with nine each, and Multiply/divide (fractions) and SLEs with eight each.
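The tallies in Table 3 follow mechanically from the primary categorizations. As a minimal sketch, assuming one primary category per question, the counts can be assembled as follows; the records are hypothetical apart from the MU figures quoted above.

```python
from collections import Counter, defaultdict

# Hypothetical (test, primary category) records, one per question:
records = [
    ("MU", "Multiply/divide (real)"),
    ("MU", "Exponents"), ("MU", "Exponents"), ("MU", "Exponents"),
    ("UL", "PEMDAS"),
]

# One Counter of primary categories per test, as in Table 3:
tally = defaultdict(Counter)
for test, category in records:
    tally[test][category] += 1

print(tally["MU"]["Exponents"])  # 3, matching the MU example above
print(tally["MU"]["Ratios"])     # 0: absent categories tally as zero
```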
Tallies of individual categories give an interesting insight into the types of questions that test designers value, reflected in how frequently they are posed to students. In Table 4, a similar approach is taken at a macroscopic level: this time, the number of questions in each strand is tallied. Table 4 shows the dominance of the Number and Algebra strands on diagnostic tests, which account for 78% of items collectively. This figure is consistent across institution types (T: 78%; U: 79%; IT: 80%; H: 72%), showing that the proportions do not differ greatly across institutions or even across time. This indicates that what academics are interested in testing with these instruments is students' ability to calculate, to perform algebraic manipulations and to work with equations and inequalities. The largest difference between universities and ITs is seen in Geometry, which may be associated with the larger proportion of students in ITs pursuing engineering degrees.

Secondary analysis
The purpose of Table 5 is to highlight knowledge required for the successful completion of questions on the diagnostic tests that is different from the primary category. That is, Table 5 tallies all the necessary and implicit knowledge that is required to successfully complete a question. In order to be counted in this part of the classification, we required the knowledge to be different from the primary categorization (distinct and implicit) and to be a vital step in solving the problem (necessary). Many questions are not represented in Table 5 because they are single-step problems or because they require repeated use of a single piece of knowledge (which will be their primary categorization). We coded questions as requiring a category only once, even if the same knowledge is required multiple times. In other words, a question that requires, for example, addition and/or subtraction multiple times en route to a correct solution will only increase the tally of Add/subtract by 1 in Table 5. An illustrative example is DCU Item 1, whose primary category is Add/subtract (fractions). In order to complete the task, the fractions must be expressed as a single fraction using the LCM method. This involves Multiply/divide (real) and subsequently Add/subtract (real). Each of these categories registers a tally in Table 5 and, even though both categories are applied multiple times, each tallies only once.
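The once-per-question counting rule can be made precise by attaching a set of secondary codes to each question, so that repeated use of the same knowledge within a question still tallies only once. The following is a minimal sketch; the encoding of DCU Item 1 follows the description above, while the second record is hypothetical.

```python
from collections import Counter

questions = [
    # DCU Item 1, as described above:
    {"primary": "Add/subtract (fractions)",
     "secondary": {"Multiply/divide (real)", "Add/subtract (real)"}},
    # A hypothetical single-step question contributes no secondary codes:
    {"primary": "Ratios", "secondary": set()},
]

table5 = Counter()
for q in questions:
    # Only codes distinct from the primary categorization are counted,
    # and each counts at most once per question (sets enforce this).
    table5.update(q["secondary"] - {q["primary"]})

print(table5["Multiply/divide (real)"])  # 1
```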
When questions are analysed in this manner, Table 5 acts to highlight the number of necessary and implicit topics, beyond the primary category, that are required to successfully complete a question. A final point highlighting the worth of Table 5 comes from comparing two questions from the MU diagnostic test below.

Question 1: Simplify $\sqrt[a]{b^4}$. Question 2: $a^{10/3}$ equals ...

Both questions receive the primary category of Exponents, making them indistinguishable from the perspective of Table 3. We assert that Question 1 contains an extra step for students to navigate in order to successfully complete the task. The extra step in Question 1 comes from having to convert from surd form ($\sqrt[a]{b^4}$) to index form ($(b^4)^{1/a}$). Once this is done, students can display their knowledge of Exponents (the primary category) to multiply real numbers (the Table 5 category for each question). The result is that an additional code of Convert between representations is applied to Question 1, which is reflected in Table 5. Extending this reasoning tells us that a diagnostic test with a large number of tallies in Table 5 contains questions that require a large amount of necessary and implicit knowledge distinct from the primary category. Table 5 is shown below, followed by explanatory text.

Table 5. The necessary and implicit knowledge that is required to successfully complete a question

As with Table 3, categories that occurred fewer than three times are omitted from Table 5 and are listed in Appendix A (11 categories).
Recall that the primary categories that appear most frequently in Table 3 are Expressions (13), Exponents (11), PEMDAS (10), Add/subtract (fractions) (9) and Transposing equations (9). Table 5, on the other hand, is dominated by Add/subtract (number) (14), Add/subtract (algebra) (14), Multiply/divide (number) (14) and Multiply/divide (algebra) (13), which are the principal operations involved in questions with a procedural focus. The other category in the top five of each table is Exponents. We believe that this category has a dual role: questions can focus on the rules of exponents in their own right, but exponents are also often evoked by questions on PEMDAS, Expressions, Fractions and Transposing equations, where brackets, exponents and division play a central role. From this point of view, it is not surprising that Exponents is common to both tables. Table 5 was constructed to highlight and tally all the necessary and implicit knowledge that is required to successfully complete a question. It offers a different perspective on the mathematical content areas and topics contained in diagnostic tests because of its more granular approach to analysis, highlighting the significance of basic manipulations (+, −, ×, ÷) of Algebra and Number that are critical for students entering tertiary-level.
A final piece of analysis of the secondary categorizations examined the number of secondary categorizations registered for each diagnostic test. This was measured as the quotient of the total tallies in Table 5 for a diagnostic test divided by the number of questions on that diagnostic test, as outlined below.
additional knowledge coefficient = (total tallies in Table 5 per diagnostic test) / (number of questions per diagnostic test)

Table 6. The total tally of additional topics, number of items, modal tally and additional knowledge coefficient for each diagnostic test

So, as an example, ITTr gets an additional knowledge coefficient of 1.93 (56/29), meaning that, in addition to the primary categorization, its tasks had on average 1.93 additional topic requirements. Table 6 shows the additional knowledge coefficient for each diagnostic test, ranging from 1.10 to 2.10. Values at each end of the range could be generated by taking opposing approaches to designing diagnostic tests. A lower value indicates that the diagnostic test contains questions with fewer additional topics on average per question than tests with higher values. Diagnostic tests with low values are almost certainly better for isolating and diagnosing distinct difficulties because there are fewer topics involved. The diagnostic test of UL is the best example of this: an additional knowledge coefficient of 1.10 is the lowest in our sample, meaning that its questions contain a primary category and just over one additional topic on average. Diagnostic tests with a coefficient at or above 2, on the other hand, are more intricate but have the potential to assess multiple pieces of knowledge at once (if carefully constructed and analysed). We caution that the additional knowledge coefficient should not be used in isolation to categorize tests as either attempting to isolate or to assess multiple pieces of knowledge simultaneously. We also do not believe that this coefficient is a measure of the difficulty of a test, since it is often impossible to isolate a specific piece of knowledge in order to test it; for example, it is difficult to design a question that tests whether students can solve a quadratic equation without involving other topics from the Number and Algebra content areas. However, this coefficient could be useful to test designers as an indication of how complex their questions are. We include a modal tally per question for each test to provide further insight for the reader and to supplement the additional knowledge coefficient. The modal tally is the most frequently occurring additional tally per question for each test and is followed in Table 6 by the number of questions with that additional tally in brackets.
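Under these definitions, the coefficient and the modal tally reported in Table 6 can be computed directly from the per-question counts of additional topics. A minimal sketch, with hypothetical counts for a single test:

```python
from statistics import multimode

# Hypothetical per-question counts of additional (secondary) topics:
additional = [2, 1, 3, 2, 2, 0, 2]

# Additional knowledge coefficient: total Table 5 tallies for the test
# divided by its number of questions.
coefficient = sum(additional) / len(additional)  # 12 / 7, about 1.71

# Modal tally: the most frequent per-question count, reported with the
# number of questions attaining it (the bracketed figure in Table 6).
mode = multimode(additional)[0]      # 2
mode_count = additional.count(mode)  # 4 questions
```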

Comparison to historical tests
The analysis of the two additional tests found similarities and differences among the diagnostic tests that reflect their era. This is based on a comparison of the primary categories: the historical tests had a similar number of questions on Number, Geometry and Function, but considerably fewer Algebra questions than all the other diagnostic tests, with the difference spread across Trigonometry and Calculus (Table 4). Given the small number of diagnostic tests, we offer nothing more than context as the most likely explanation: the curriculum in Ireland has undergone two revisions since the 1980s, and the differences highlighted in Table 4 are indicative of the differences between the two curricula.
Project Maths, the current curriculum in Ireland, represents a different outlook and set of priorities than its 1980s equivalent. There is evidence that instructors have noticed a change in the relative strengths and weaknesses of students since its introduction (Prendergast et al., 2017). The change in mathematical content at second level brought about by Project Maths could also explain the revisions that several diagnostic tests (e.g. MU) have undergone recently, where test designers chose to change their tests to reflect the changes in the education of incoming students. The latest change in the syllabus has reduced the amount of calculus studied by students, and this may explain the reduction in the number of questions on this topic in the diagnostic tests.

Conclusion
In this study, we collated and analysed mathematics diagnostic tests that are currently administered at tertiary-level institutions in Ireland. We sought to describe the prevalence of diagnostic testing at tertiary-level in Ireland, and our analysis has confirmed the widespread use of diagnostic tests. Currently, 11 institutions use diagnostic testing in a large-scale manner, with another four using diagnostic tests in a limited manner or having a diagnostic test under development. IADT Dun Laoghaire and the Royal College of Surgeons are the only institutions that we know do not use a diagnostic test with students, most likely because of their programmes of study. At the time of writing, we had not received information from the remaining five institutions. The diagnostic tests that we collected contained a mean of 23 questions with a standard deviation of 7.8. Out of the 14 tests that we analysed, seven consisted of MCQs, with between three and five answer options.
Our second RQ led us to analyse 12 diagnostic tests that are currently in use at 11 Irish institutions. In addition, two other tests from Irish institutions that are no longer in use were included. The analysis resulted in findings that included categorizing questions by mathematical content area and topic (Tables 3 and 4), the singular operations within each question (Table 5) and the average additional knowledge per question (Table 6). There is evidence that the aim of many diagnostic tests is to gauge the mathematical preparedness of students (LTSN, 2003); thus, our analysis gives us an insight into the expectations of test designers and mathematics departments concerning the prior knowledge of their incoming first-year undergraduate students. We see that the majority of questions on the diagnostic tests relate to Number and Algebra (Table 4) and that the knowledge needed to solve these questions involves the use of basic arithmetic or algebraic manipulation (Table 5). Very few questions on the tests require more advanced mathematical understanding. These findings seem to be stable across time, in that the analysis of the older tests did not yield significantly different results. We also note that the tests from the university and IT sectors are not significantly different. This shows that test designers across the Irish tertiary system value basic mathematical skills but also that they are worried that their new students may not be proficient in them. This echoes the concerns of the Chief Examiner's report from 2015, which advised second-level teachers to 'provide frequent opportunities for students to gain comfort and accuracy in the basic skills of computation, algebraic manipulation, and calculus' (State Examinations Commission, 2015, p. 29) and noted that 'Students should be particularly careful with signs, powers, and the order of operations.'

Perhaps unsurprisingly, there is significant overlap between the categories that occurred most frequently in Table 3 and the questions answered least successfully on diagnostic tests and at Leaving Certificate level. In fact, each of Expressions (DIT), Exponents (DCU, DIT, UL), PEMDAS (UL), Transposing equations (UL) and manipulating fractions (DIT, ITTr) was considered to be poorly answered on diagnostic tests by the respective authors (Ní Fhloinn, 2006; Ní Fhloinn, 2009; Cleary, 2007; Gill & O'Donoghue, 2007). Two of the diagnostic test studies (ITTr and UL) cite the Chief Examiner's report (2000) to bolster their explanations of poor performance. That report, which found that 'The lower grade profile overall was a result of a higher proportion of answers that displayed weak basic mathematical skills and a poor grasp of important concepts' (2000, p. 18), has been echoed by the most recent Chief Examiner's report (2015), which stated that 'the majority of candidates struggled to deal with the fractions involved, and a lot of the work produced here was of very poor quality' (2015, p. 22). The worries that prompted institutions to introduce diagnostic testing at the end of the 20th century seem to be equally relevant today.
There are subtle differences across institution type and time, however. For example, many of the less common categories, which could be considered 'more advanced' (Mathematization, Differentiation (power rule), Inequalities and Rules of logs), each appear on only one IT diagnostic test (DIT in each case). These topics occur frequently on the historical tests. In addition, the more modern tests have a greater emphasis on algebra than their historical counterparts. This may be an indication that the topics covered by diagnostic tests have become less advanced and less difficult over time, although we do not have enough historical data to substantiate this claim. The changes may be a result of the change of syllabus at second level; however, the inclusion of more algebra questions in the current tests suggests that test designers are worried about students' algebraic manipulation skills.
We also found many similarities between the tests and the content and skills prerequisites detailed by Deeken et al. (2019) in their study of the expectations of STEM instructors in Germany. From Table 3, we can see that some categories (for example, Exponents and Expressions) are included in almost all diagnostic tests, which shows their perceived importance to test designers and lecturers at tertiary-level in both the university and IT sectors. This is in line with the findings of Deeken et al. (2019) on expected prerequisites (the six basic content prerequisites: fractions, percentages and proportionality, and linear and quadratic equations and functions), which gained a consensus among lecturers. However, they also report other prerequisites (such as comprehension and explanation of definitions, and fluently and proficiently using general heuristic principles) that do not feature on the diagnostic tests. This may be because diagnostic tests usually focus on basic skills. In any case, it seems that worries about the lack of basic mathematical proficiency among school leavers are prevalent internationally (e.g. UK: Hunt & Lawson, 1996; US: Schmidt, 2012; Portugal: Carr et al., 2015).
From our review of the literature, it appears as though diagnostic testing and the subsequent support mechanisms in Ireland are in a strong position. Excellent data are being collected that are leading to longitudinal analyses (for example, Gill & O'Donoghue, 2007; Ní Fhloinn, 2009; Cleary, 2007) and investigations of specific cohorts of students such as preservice mathematics teachers (Fitzmaurice et al., 2019) and engineers (Carr et al., 2013). The support mechanisms, which are undoubtedly the more important half of the solution to the mathematics problem, vary in form; however, all institutions that have a Maths Support Centre (or similar) advertise the service to students to varying degrees (Mac an Bhaird et al., 2020). These supports are of great value to students, though Tall & Razali (1993) suggest that courses designed specifically for students who do not perform well may be the optimal approach: '... it would be of little use to remediate the problems whilst the students are attempting to start a new course', and they suggest 'an intermediate foundation course in which the less able are given explicit teaching of the qualitative thinking skills which the more able develop implicitly' (Tall & Razali, 1993, p. 210). This sentiment is echoed by Edwards (1997), who stressed that 'The right Extra Maths support is important' and that 'It is vital to analyse results from time to time to determine support effectiveness' (Edwards, 1997, p. 121).

Our final point is in relation to the initial design and subsequent revision of diagnostic tests to account for various changes, for example, in the ability and prior learning of students. In the time since many of these diagnostic tests were designed, the syllabi on which they were based have changed. DCU (Ní Fhloinn, 2009) and Maynooth University have revised their tests since the introduction of Project Maths, and Fitzmaurice et al. (2019) offer an explanation as to why the UL diagnostic test was not revised, but not all tests have been updated accordingly. Ní Fhloinn (2009) addresses the impact that changing a diagnostic test can have: 'Introducing any modifications to a diagnostic test means that a full comparative study of results with previous years is no longer possible, and as such, the benefits of any such changes must be carefully weighed up against this loss of comparative data. However, bearing in mind the principal aim of the test, which is to identify those most at risk, occasionally changes are necessary in order to improve the accuracy of the information being obtained' (Ní Fhloinn, 2009, p. 369). Ní Fhloinn goes on to outline a version of this dilemma encountered at DCU: '... there is a real danger that the diagnostic test currently being used is not suitable for some of the more demanding modules. However, the advantage of having all service mathematics students take the same diagnostic test is that it allows a direct comparison between students in different modules' (Ní Fhloinn, 2009, p. 373). Crucially, a solution to this issue is also offered: design a harder version of the test (of which the easier one is a subset) and use both with the various groups of students.
The extent to which a test has become outdated varies by institution, as do the potential resolutions, which must be examined by practitioners. We encourage practitioners to communicate their approaches and expertise across institutions. Collaboration like this will undoubtedly improve the quality of diagnostic testing and, by extension, the measures aimed at helping students. We remind ourselves of the conclusion of Prendergast et al. (2017, p. 1) that '... more needs to be done to ensure there is coherent and uniform approaches to the teaching, learning, and assessment of mathematics in the transition from second to third level education'. While much of this work must happen before students enter third level, there are many improvements to be made on both sides of the transition to improve the experience for students.
An ambitious attempt at reconciling the words of Prendergast et al. (2017) with Ní Fhloinn's (2009) explanation about the value of comparative data might be for academics from across the tertiary mathematics departments to design a shared diagnostic test (or bank of tests). The relative costs in data loss would quickly be outweighed by the large number of students who complete the test annually. In addition, the types of analyses currently being done on specific cohorts (such as prospective teachers or engineers) would be enhanced by the ability to pool together data from each institution.
Our intention is to expand on this research by collecting and analysing diagnostic tests from other countries and regions. Several tests from the UK, US and Australia were discovered during the background research and data collection phases of this study, and these could form the basis of such an analysis.