Abstract

Driven by innovations in the digital space, surveys around the world have started to move towards online data collection. However, evidence is needed to demonstrate that an online data collection strategy will produce reliable data that can be confidently used to inform policy decisions. This issue is even more pertinent in cross-national surveys, where the comparability of data is of the utmost importance. Due to differences in internet coverage and willingness to participate in online surveys across Europe, there is a risk that any strategy to move existing surveys online will introduce differential coverage and nonresponse bias. This paper explores representativeness across waves in the first cross-national online probability-based panel (CRONOS) by employing R-indicators that summarize the representativeness of the data across a range of variables. The analysis allows comparison of the results over time and across three countries (Estonia, Great Britain and Slovenia). The results suggest that there are differences in representativeness over time in each country and across countries. Those with lower levels of education and those in the oldest age category contribute more to the lack of representativeness in the three countries. However, the representativeness of the CRONOS panel does not become worse when compared to the regular face-to-face interviewing conducted in the European Social Survey (ESS).

1 Introduction

Social surveys, which have historically been conducted by face-to-face interviewing, are currently undergoing a paradigm shift. Data collection is moving online due to two major trends. First, response rates in social surveys have been declining for years (e.g. in the European Labour Force Survey (LFS); de Leeuw et al., 2018), driving up survey costs as organizations attempt to keep response rates high. Second, the internet now appears to offer a viable mode of survey administration. In the European Union (EU) as a whole and in the United Kingdom (UK), internet penetration rates are now at about 89% and increasing (Eurostat, 2019). Online surveys are potentially cheaper once set up and, for many topics, problems with interviewer effects and social desirability are smaller when a survey is conducted online (Kreuter et al., 2008).

However, there are still disparities between countries, and between socio-demographic groups within countries, with regard to internet access and frequency of internet use (Callegaro et al., 2015; de Leeuw, 2018). For example, internet penetration in 2019 was estimated to be 98% in the Netherlands but only 72% in Bulgaria. The gap is even larger when internet use is considered: while 90% of 16–74 year olds in Denmark use the internet daily, only 55% do so in Romania (Eurostat, 2019). The differences observed within countries are sometimes even larger than those between countries. The degree of urbanization is a strong predictor of household internet access in Central and Eastern Europe, whereas in Western and Northern Europe internet access is similar in urban and rural areas. Similarly, level of education and age predict who uses the internet in all EU countries and the UK, but the age and education gaps appear to be larger in Eastern and Southern Europe than in Western Europe (Eurostat, 2018).

When one is interested in comparing countries over time, as is the goal of many cross-national surveys, it is important that the survey is conducted in the same way in every country. So far, most cross-national surveys (e.g. the European Social Survey (ESS), the Survey of Health, Ageing and Retirement in Europe (SHARE), the Generations and Gender Survey (GGS) and the European Union Statistics on Income and Living Conditions (EU-SILC)) have relied primarily on face-to-face interviewing, with some minor exceptions. If these surveys are to use online interviewing more widely, the main challenge will be how to recruit respondents for an online survey in a way that ensures the comparability of survey results.

Differences in the ability and willingness of participants to complete online surveys, between and within countries, present a major challenge. Another major problem is that in many countries it is difficult to obtain a random sample of individuals and invite them by mail to participate in an online survey. In some countries, population registers allow for such a 'push-to-web' approach (Dillman, 2017; Lynn, 2020), but in many countries no such register is available (e.g. the UK). In face-to-face surveys, the lack of a sampling frame can be overcome by the use of random-route sampling or enumeration by the interviewers. This approach, however, re-introduces the interviewer and takes away many of the benefits of doing surveys online, most notably the lower cost per interview (Scherpenzeel et al., 2017).

One possible approach to overcoming the problems discussed above is to combine the benefits of face-to-face recruitment of respondents with subsequent online interviewing. Such a re-interview approach leads to the establishment of a probability-based online panel.

In recent years, several probability-based online panels have been created on the back of probability-based face-to-face surveys. For example, the NatCen probability-based online panel was created on the back of the British Social Attitudes (BSA) survey in the UK. Other examples include the Longitudinal Internet Studies for the Social Sciences (LISS) panel in the Netherlands, the probability-based mixed-mode access panel for the social sciences (GESIS panel) and the German Internet Panel (GIP) in Germany, and the ELIPSS longitudinal internet panel in France (Blom et al., 2016). These panels are set up in a similar way but often differ in how they deal with respondents who indicate in the face-to-face interview that they are not able or willing to participate in the online panel because they do not have internet access or do not know how to use the internet. The LISS panel has provided such respondents with a simplified PC and an internet connection (Eckman, 2016), ELIPSS has provided a tablet (Blom et al., 2016), and the GESIS panel has offered a paper-based survey (Bosnjak et al., 2018). The GIP also initially provided a PC (Blom et al., 2015), but experiments conducted since 2019 indicate that the potential coverage bias is now considered to be small (Krieger et al., 2019).

This paper reports on a study that set out to establish a cross-national comparative probability-based online survey. The CROss-National Online Survey (CRONOS) panel was set up on the back of ESS Round 8, collected in 2016–2017. To test whether it was possible to set up the panel, three countries across Europe were selected: Slovenia, Estonia and Great Britain (Fitzgerald & Bottoni, 2019; Villar et al., 2018). Six waves of data were collected in these three countries.

In cross-national surveys, it is important to address potential issues of differential or comparability error across countries. Comparability error was introduced by Smith (2019) within the total survey error (TSE) framework; various survey design components help to maximize the comparability and functional equivalence of surveys by reducing this error. The main components driving differential error across countries are coverage error, nonresponse and measurement equivalence.

Cernat and Revilla (2020) investigated differences in measurement quality between ESS Round 8 and the CRONOS panel and reported no large differences in measurement effects. Bottoni and Fitzgerald (2021) also evaluated the CRONOS panel, focusing on recruitment into the panel. Beyond these two studies, little is known about data quality or sample quality for the CRONOS panel over time and across countries, or for cross-national online panels more generally. It is important to investigate representativeness across waves and between countries in CRONOS in order to establish whether setting up a probability-based online panel on the back of a face-to-face survey works successfully, and to understand the representativeness issues associated with this approach to data collection. This paper does not investigate the aspects of comparability error discussed above that relate to measurement error; it addresses only the representation dimension and specifically nonresponse error. The main research question is: how does representativeness in the CRONOS online probability-based panel compare between waves and across countries? The paper addresses the representativeness of probability-based samples that rely solely on online data collection (comparing intended with realized samples) and that target the general population with and without internet access, as tablets and internet connections were provided to those without access.

Although the response rate is often used as an indicator for the quality of a survey, in practice, survey methodologists advocate looking at differential rates of coverage and nonresponse bias among key demographic groups as the more interesting quality metric (Groves, 2006; Groves & Peytcheva, 2008; Olson, 2006). One way to do this is to compare survey estimates to benchmark estimates that are known for the target population as a whole, ideally using census or administrative data sources. In this paper we specifically focus on differences between the intended and realized samples as we use a probability sample for the starting point of the analysis. The effect of nonresponse is estimated in what is termed the ‘representativeness’ of the survey. The focus of this paper is on unit nonresponse rather than on item nonresponse.

To assess the effectiveness of the panel recruitment in terms of representativeness and to investigate this research question, we employ R-indicators, which were proposed and developed by Schouten et al. (2009). 'R' stands for representativeness. R-indicators are designed to measure the degree to which the respondents to a survey resemble the complete sample or population (de Heij et al., 2015; Schouten et al., 2009, 2011; Shlomo et al., 2012). In this analysis we examine representativeness by comparing R-indicators over the course of the CRONOS panel, in the specific sense of the consequences of unit nonresponse. We do not intend to examine representativeness in the full sense of a comparison between the target population and the realized sample, as we start from a probability-based sample.

We expect nonresponse to occur in the recruitment of the face-to-face sample in a similar way as in earlier rounds of the ESS (Koch et al., 2014). Estonia, Great Britain and Slovenia all exhibit comparable levels of nonresponse bias when both internal (previous rounds of the ESS) and external (labour force surveys (LFSs)) benchmarks are used. However, we expect differences between the countries in the online phase of the study. According to Eurostat (2019), in 2018 household internet access was 95% in the UK, 90% in Estonia and 87% in Slovenia. In Slovenia, 65% of individuals report using the internet daily, compared with 77% in Estonia and 88% in Great Britain, implying that it might be harder to convince respondents in Slovenia than in the other countries to take part in the survey (Eurostat, 2018). Comparable statistics on internet use disparities within countries are in short supply. However, we expect age, urbanicity and level of education to explain nonresponse in the online phase of the CRONOS panel to a different degree in these countries. Regional differences may be larger in Slovenia and Estonia when compared to Great Britain.

The next two sections of the paper present the data and the methods used for the analysis. Our results are then followed by a discussion of the results in a wider context, the implications for survey practice, the limitations of the current analysis and suggestions for future work.

2 Data

2.1 Sample

We analyse representativeness in all six waves of the CRONOS probability-based online panel in three countries (Estonia, Great Britain and Slovenia). CRONOS respondents were recruited from respondents to the face-to-face ESS Round 8 survey. ESS Round 8 fieldwork took place between September 2016 and March 2017. The sampling frames and sample designs used in ESS Round 8 varied between the CRONOS countries. Estonia and Slovenia used population registers as sampling frames and selected units according to different strata, whereas Great Britain used an address-based sampling frame and a multi-stage sampling design with systematic sampling at the first two stages and Kish grids at the last stage for random selection of respondents within households (Villar et al., 2018). Otherwise, the recruitment and interview procedures were kept as similar as possible between the three countries.

After completing the ESS Round 8 face-to-face interview, respondents in these three countries aged 18 or above were invited to participate in the CRONOS panel. The CRONOS online panel consisted of a 10-min welcome survey (Wave 0) followed by six 20-min online surveys (Waves 1–6), one every 2 months over a 12-month period between December 2016 and February 2018. The start of CRONOS overlapped with the final stages of the ESS Round 8 fieldwork, which meant that some respondents were recruited into CRONOS later than December 2016 and therefore 'missed' the welcome survey (Wave 0). This occurred for 10.5% of respondents in Estonia, 8.9% in Great Britain and 0.5% in Slovenia.

The interview flow for respondents who were part of both the ESS and CRONOS target populations was as follows. At the end of the ESS Round 8 interview, after details about the study were provided, respondents received the question: 'Would you be interested in participating in this study?' (Villar et al., 2018). Only participants who responded 'yes' or 'unsure' to this panel invitation were subsequently invited to a short recruitment interview that took place straight after the main ESS Round 8 interview.

During the recruitment interview, respondents were asked to provide an email address for receiving invitations and reminders for the online surveys. Respondents who did not provide an email address but did not refuse to join the panel were still invited, by post, for the first few waves to increase participation rates (Villar et al., 2018).

As mentioned earlier, respondents who were willing to join CRONOS but did not have access to the internet were offered a tablet and an internet connection for the duration of the project to ensure they were able to participate. A tablet was received by 4.6% of ESS Round 8 participants in Great Britain, 2.4% in Estonia and 4.2% in Slovenia. For those who received a tablet and an internet connection, email addresses were also set up for communication purposes. It is important to mention that, among those who received a tablet, between 37.8% and 64.6% were nonrespondents across Waves 0–6 of CRONOS in Great Britain, between 23.9% and 37.0% in Estonia, and between 25.0% and 44.2% in Slovenia. This suggests that even the provision of a tablet and internet access was not very effective at recruiting and retaining those without internet access in CRONOS.

All eligible respondents were offered unconditional incentives (vouchers) with a value of £5/€5 per completed wave (Villar et al., 2018). Additional monetary incentives were offered in the different countries. Great Britain also ran an experiment comparing incentives offered at each wave with a one-off incentive covering all waves. Each panel member received an email invitation to each wave with an individual survey link and three email reminders. In Waves 5 and 6 contact modes were varied for experimental purposes.

Each wave of the survey included approximately 100 questions on various topics, such as societal wellbeing, values, beliefs and attitudes towards science and technology, marriage, migrants, trust and social capital, election participation, work, education, family structure, internet use and attitudes towards social media. CRONOS also collected paradata, including the device used by respondents to complete the survey (Villar et al., 2018); however, these paradata are only available for Waves 2–6, not for Waves 0 and 1. Respondents were free to choose the device (PC/laptop, tablet or smartphone) they wished to use for survey completion. For more information about the CRONOS panel, see Villar et al. (2018).

As the CRONOS panel was set up on the back of the ESS, it is important to take into consideration all the stages at which respondents could drop out of the study. Table 1 lists these stages for all three countries. Stages 1–5 were conducted face-to-face, while stages 6–12 occurred online. Respondents could drop out at any of these stages, implying that the representativeness of the resulting data may change accordingly.

TABLE 1
All stages of CRONOS data collection in the three countries

Stage | Stage description | Details

Face-to-face stages
1 | Gross samples for ESS and CRONOS |
2 | Responded to ESS Round 8 |
3 | Those who were expected to participate in CRONOS | 15–17 year olds excluded; Northern Ireland excluded in the GB sample
4 | Initial panel invitation | Differences between stages 3 and 4 are due to individual decisions by national coordinators or interviewers not to invite some respondents
5 | Recruitment interview (ESS interview) | Only those who said 'yes' or 'unsure' at the initial panel invitation

Online stages
6 | CRONOS Wave 0 (welcome survey) | Not all panellists were invited, as Wave 0 launched in December 2016 while ESS fieldwork was still in progress (10.5% of ESS Round 8 interviews took place in 2017 in EE, 8.9% in GB and 0.5% in SI)
7 | CRONOS Wave 1 |
8 | CRONOS Wave 2 |
9 | CRONOS Wave 3 |
10 | CRONOS Wave 4 |
11 | CRONOS Wave 5 |
12 | CRONOS Wave 6 |

Apart from respondent drop-out at each stage, other factors affect how representative the CRONOS panel is. In the ESS, 15–17 year olds are part of the target population. However, they were not invited to take part in CRONOS due to the potential risks associated with providing minors with financial incentives and devices that could be used to access the internet (Villar et al., 2018). Also, in the Great Britain sample, participants residing in Northern Ireland were excluded from CRONOS. These exclusions complicate a consistent assessment of representativeness, as the target population changes slightly between stages 2 and 3 of data collection. Due to this change in the target population, we created two gross samples: (1) the gross sample used in ESS Round 8 (nonresponse weights were used to recreate this ESS gross sample for stages 1 and 2); and (2) the CRONOS gross sample, which excluded 15–17 year olds and respondents from Northern Ireland from the recreated ESS gross sample.

Consequently, we used the CRONOS gross sample to examine all stages of the data collection process. It is also important to mention that the individual decisions made by national coordinators or interviewers not to invite some respondents at stage 4 are not well documented, so we do not always know the specific reasons why a participant was not invited to the panel. It is likely that in at least some cases respondents had consented to be interviewed for ESS Round 8 only on the condition that it would be a one-time survey, and that interviewers therefore did not ask the panel invitation question. Such decisions occurred for 30 respondents in Great Britain (1.7% of those expected to participate in CRONOS among ESS Round 8 respondents), 18 respondents in Estonia (0.9%) and 17 respondents in Slovenia (1.4%). This issue affects only a very small proportion of respondents and does not present a major problem for the panel or the analysis.

3 Methodology

Schouten and colleagues proposed the R-indicator to summarize information about the representativeness of a survey across multiple benchmarks in one estimate (de Heij et al., 2015; Schouten et al., 2009, 2011). The contrast between respondents and the gross sample or population is defined with respect to specific auxiliary variables. It is important to note that R-indicators are not directly related to nonresponse bias in specific estimates (Moore et al., 2018; Schouten et al., 2011).

To obtain R-indicators, a logistic regression model is fitted using as covariates the auxiliary variables for which respondents' data are available. The starting point for our analysis is the gross sample, with the covariate distribution re-weighted to correspond to the population, based on the ESS interview (stage 2 of data collection). This gives the analysis the advantage that all variables collected in the ESS interview can be used as auxiliary variables. Based on the regression estimates, response propensities can be estimated.

To compute R-indicators, we use a binary response indicator $y_i$, $i = 1, \ldots, n$, for each individual. The response indicator of each individual at each wave and in each country is defined as

$$
y_i = \begin{cases} 1 & \text{if individual } i \text{ responded at that wave,} \\ 0 & \text{otherwise.} \end{cases}
$$

For each individual $i$, the response propensity is denoted $p_i = \Pr(y_i = 1)$, with $1 - p_i = \Pr(y_i = 0)$.

The logistic regression model is then defined as

$$
\log\!\left(\frac{p_i}{1 - p_i}\right) = \mathbf{X}_i^{\top} B,
$$

where $B = (\beta_0, \beta_1, \ldots, \beta_J)$ is a vector of regression parameters and $\mathbf{X}_i$ is a vector of auxiliary covariates at the individual level within each wave and each country.

R-indicators are then calculated using the response propensities obtained from the model specified above. The overall R-indicator is a transformation of the standard deviation (SD) of the response propensities:

$$
R = 1 - 2 \sqrt{\frac{1}{n - 1} \sum_{i=1}^{n} \left(\hat{p}_i - \bar{p}\right)^2},
$$

where $n$ is the sample size, $\hat{p}_i$ is the estimated response propensity of sample member $i$ and $\bar{p}$ is the mean response propensity (Moore et al., 2018). The R code produced by de Heij et al. (2015), developed as part of the Representativity Indicators for Survey Quality (RISQ) project, was used to obtain the various representativeness indicators in this analysis. In practice, R-indicators vary between 0 and 1; values close to 1 imply a representative response and values close to 0 imply maximum deviation from a representative response (de Heij et al., 2015).
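As an illustration, a minimal R sketch of this computation could look as follows. This is not the RISQ code itself; the data and variable names are invented for illustration only.

    # Illustrative gross sample: response indicator y (1 = responded) and
    # categorical auxiliary variables known for all sampled units.
    set.seed(1)
    n <- 2000
    dat <- data.frame(
      age_group = factor(sample(1:5, n, replace = TRUE)),
      gender    = factor(sample(1:2, n, replace = TRUE)),
      educ      = factor(sample(1:4, n, replace = TRUE))
    )
    dat$y <- rbinom(n, 1, plogis(-0.5 + 0.3 * as.numeric(dat$educ)))

    # Estimate response propensities with a logistic regression.
    fit <- glm(y ~ age_group + gender + educ, family = binomial, data = dat)
    p   <- fitted(fit)

    # Overall R-indicator: R = 1 - 2 * SD of the estimated propensities
    # (sd() uses the n - 1 denominator, matching the formula above).
    R <- 1 - 2 * sd(p)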

A limitation of the overall R-indicator is that it does not allow one to evaluate which particular subgroups contribute most to the unrepresentativeness of the data. In order to target specific subgroups, partial R-indicators were designed to evaluate the contribution of a single specified auxiliary variable, or of a category within that variable (Schouten et al., 2011). There are two types of partial R-indicators: at the variable level and within categories. Partial R-indicators can also be calculated as unconditional (univariate) indicators and as conditional (multivariate) indicators, where the results are conditional on the other variables included in the calculation. Four different partial R-indicators can therefore be calculated: unconditional partial indicators at the variable level, unconditional partial indicators within categories, conditional partial indicators at the variable level and conditional partial indicators within categories (de Heij et al., 2015). We use three of these four partial R-indicators in this paper and do not present results for unconditional partial indicators at the variable level. The formulas for the conditional indicators require the variables to be categorical.

Conditional partial R-indicators at the variable level can be calculated only for variables used in the response propensity model. They measure the relative importance of a variable for representativeness and nonresponse or, in other words, the impact of a specific variable conditional on the other variables in the response model (de Heij et al., 2015). They isolate the part of the deviation from representativeness that can be attributed to that variable alone.

This indicator, $P_C(X_k)$ for a variable $X_k$, is obtained by cross-classifying all variables in the model except the variable itself. This cross-classification results in $L$ cells $U_1, U_2, \ldots, U_L$. Let $n_l$ denote the sample size in cell $l$, for $l = 1, 2, \ldots, L$, with $n_1 + n_2 + \cdots + n_L = N$, and let $\bar{p}_l$ denote the mean of the response propensities in cell $l$. The conditional partial indicator for the (categorical) variable $X_k$ is then defined as

$$
P_C(X_k) = \sqrt{\frac{1}{N} \sum_{l=1}^{L} \sum_{i \in U_l} \left(p_i - \bar{p}_l\right)^2},
$$

that is, the within-cell variation of the response propensities that remains when the variable $X_k$ is removed from the cross-classification (de Heij et al., 2015). Partial indicators can be interpreted in the following way: the larger the value of $P_C$, the larger the role of this variable in the lack of representativeness.
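Continuing the illustrative sketch above, $P_C$ for the invented variable educ can be computed by cross-classifying the remaining model variables:

    # Cells U_1, ..., U_L from cross-classifying all model variables
    # except educ; P_C(educ) is the remaining within-cell variation of p.
    cells      <- interaction(dat$age_group, dat$gender, drop = TRUE)
    p_bar_cell <- ave(p, cells)               # cell means of the propensities
    Pc_educ    <- sqrt(mean((p - p_bar_cell)^2))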

Conditional partial indicators within categories can provide even more insight, as they are computed for each category of a variable separately: the within-cell variation of the response propensities that remains after removing a variable $X_k$ from the cross-classification is computed for each category of $X_k$ separately. Let $X_k$ have $H$ categories, labelled $h = 1, 2, \ldots, H$, and let $\Delta_{h,i}$ be the 0–1 indicator for sample unit $i$ being a member of category $h$, so that $n_h = \sum_{i=1}^{n} \Delta_{h,i}$. Each category $h$ contributes an amount $\frac{1}{N} \sum_{l=1}^{L} \sum_{i \in U_l} \Delta_{h,i} (p_i - \bar{p}_l)^2$ to the squared indicator $P_C(X_k)^2$. The conditional partial indicator within category $h$ is then obtained as

$$
P_C(X_k, h) = \sqrt{\frac{1}{N} \sum_{l=1}^{L} \sum_{i \in U_l} \Delta_{h,i} \left(p_i - \bar{p}_l\right)^2}.
$$

The values of these conditional partial R-indicators always lie between 0 and 1. Conditional partial R-indicators within categories need to be interpreted in combination with the unconditional partial R-indicators discussed below.
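In the same illustrative sketch, the category-level version simply restricts the within-cell deviations to the units in category h:

    # Conditional partial indicator within each category h of educ.
    Pc_educ_h <- sapply(levels(dat$educ), function(h) {
      in_h <- dat$educ == h
      sqrt(sum((p[in_h] - p_bar_cell[in_h])^2) / length(p))
    })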

According to de Heij et al. (2015), unconditional partial R-indicators within categories, $P_U(X_k, h)$, measure the amount of variation of the response propensities between the categories of a variable. Large variation implies a stronger impact of the variable on the representativeness of the dataset. Each category $h$ contributes an amount $\frac{n_h}{n} (\bar{p}_h - \bar{p})^2$ to the squared unconditional partial indicator at the variable level, $P_U(X_k)^2$ (not discussed here; see de Heij et al. (2015) for details). The unconditional partial indicator within category $h$ is obtained as

$$
P_U(X_k, h) = \sqrt{\frac{n_h}{n}} \left(\bar{p}_h - \bar{p}\right),
$$

where $\bar{p}$ is the mean response propensity in the sample and $\bar{p}_h$ is the mean of the response propensities in category $h$ of $X_k$.

These indicators $P_U(X_k, h)$ can take positive or negative values. A positive value suggests that the specific category is overrepresented, whereas a negative value implies that it is underrepresented (de Heij et al., 2015).
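Continuing the sketch, the unconditional within-category indicators require only the overall and category means of the propensities:

    # Unconditional partial indicator within each category h of educ:
    # P_U(educ, h) = sqrt(n_h / n) * (mean of p in h - overall mean of p).
    p_bar     <- mean(p)
    Pu_educ_h <- sapply(levels(dat$educ), function(h) {
      in_h <- dat$educ == h
      sqrt(sum(in_h) / length(p)) * (mean(p[in_h]) - p_bar)
    })
    # Positive values: category overrepresented; negative: underrepresented.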

A further limitation of R-indicators is that at very low or very high response rates the variation in response propensities is limited and R-indicators may therefore be overestimated (Moore et al., 2018). Moore et al. (2018) suggested that the coefficient of variation (CV) of the response propensities is potentially a better indicator in such cases. The CV also ranges between 0 and 1 and is calculated as the SD divided by the mean propensity, with low values (close to 0) implying very good representativeness and high values (close to 1) implying a lack of representativeness.
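In the illustrative sketch above, the CV follows directly from the estimated propensities:

    # Coefficient of variation of the response propensities:
    # values close to 0 indicate good representativeness.
    cv <- sd(p) / mean(p)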

In our analyses we first reconstructed the ESS gross sample. We used the ESS Round 8 nonresponse weights to sample fictitious nonresponding cases from the dataset of respondents. This results in a gross sample that exactly reflects the population distribution of the variables age, gender, level of education and region and has exactly the size of the ESS gross sample. As we focus our analyses on these four auxiliary variables, which are known at the population level, we can use the reconstructed gross sample as our base and compare unit nonresponse at every subsequent stage of the CRONOS study to this gross, or intended, sample.
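A minimal sketch of this reconstruction step, under the simplifying assumption that a respondent with nonresponse weight w stands in for roughly w - 1 similar nonrespondents (the exact implementation used for the ESS weights may differ), treating the illustrative dat above as a respondent-only dataset:

    # Rebuild a gross sample of the original size by adding fictitious
    # nonrespondents, drawn from respondents with probability proportional
    # to their nonresponse weight minus one (placeholder weights here).
    n_gross <- 3000
    w       <- runif(nrow(dat), 1, 3)        # placeholder nonresponse weights
    extra   <- sample(seq_len(nrow(dat)), n_gross - nrow(dat),
                      replace = TRUE, prob = pmax(w - 1, 0))
    gross   <- rbind(transform(dat, y = 1),           # observed respondents
                     transform(dat[extra, ], y = 0))  # fictitious nonrespondents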

In a second step, we restricted our reconstructed gross sample to the CRONOS target population. This implies that we excluded from the ESS gross sample dataset the 15–17 year olds and those living in Northern Ireland. Because the CRONOS population was slightly different from the ESS population, we decided not to use the design weights available with ESS Round 8 for estimating response propensities or for calculating the R-indicators (Lynn & Anghelescu, 2018). However, we did conduct a sensitivity analysis to assess whether their inclusion in the calculation of the R-indicators had an impact on the final results, and found that our results did not change. These sensitivity results allow us to extrapolate our findings from how the response propensities vary in the sample (when the design weights are not used) to how they would vary in the population (when design weights are used to calculate the R-indicators). However, as our main focus is on the differences between intended and realized samples, and the design weights are potentially incorrect for the CRONOS population, the main results are interpreted in the context of the intended sample.

After restricting the target population, we modelled response propensities at each stage of the CRONOS study separately (see the sketch below). Response propensities were estimated using binary logistic regressions, with a value of 1 assigned to respondents at the specific stage of the survey and 0 otherwise, consistent with the response indicator defined above. The variables used for the response propensity models were chosen to be the same as those used for the calculation of weights in the ESS, that is, age, gender, education and region. The first three variables were standardized across the three countries, but the region variable was specific to each country. R-indicators and partial R-indicators were then calculated and compared over time and across the three countries. CVs were also calculated; they are presented in Table A.2 of the Online Supplementary Materials but not discussed in detail.
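Assuming one 0–1 response indicator per stage on the gross sample (simulated below as monotone attrition, since the real stage indicators come from the CRONOS fieldwork), the stage-by-stage comparison simply repeats the propensity model of the earlier sketch:

    # Simulated stage indicators: participation can only be lost over stages.
    gross$y_stage1 <- gross$y
    for (s in 2:12) {
      gross[[paste0("y_stage", s)]] <-
        gross[[paste0("y_stage", s - 1)]] * rbinom(nrow(gross), 1, 0.9)
    }

    # One R-indicator per stage from stage-specific propensity models.
    R_by_stage <- sapply(paste0("y_stage", 1:12), function(v) {
      f <- reformulate(c("age_group", "gender", "educ"), response = v)
      1 - 2 * sd(fitted(glm(f, family = binomial, data = gross)))
    })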

4 Results

In this section the findings of the analysis are discussed. We start with response rates across the different stages of the CRONOS online panel in the three countries, then turn to the R-indicator results across stages and countries. To disaggregate the overall R-indicators, we then present partial R-indicators.

4.1 Response rates

Figure 1 shows response rates, specific to the CRONOS target population, across the different stages of the survey process. Table A.1 in the Online Supplementary Materials presents the proportions of the gross CRONOS sample used in the analysis, and also the proportions of the ESS gross sample, for each stage of the data collection process. A small number of respondents have missing values on age and education (Great Britain (GB): n = 74 (3.8%); Estonia (EE): n = 1 (0.1%); Slovenia (SI): n = 4 (0.3%)). We excluded these cases as they cannot be used in the models and, consequently, in the calculation of R-indicators. Tables A.1 and A.2 in the Online Supplementary Materials show that the response rate trajectories reported by Villar et al. (2018) and the proportions of the gross CRONOS sample used in the analysis closely follow each other within countries at the different stages; the minor differences are due to the small number of respondents with missing values on covariates who were removed from the analysis.

FIGURE 1  Response rates of the CROss-National Online Survey (CRONOS) panel across different stages of the survey process, conditional on being in the CRONOS gross sample

Figure 1 demonstrates that most of the nonresponse in the study as a whole is introduced at the face-to-face recruitment (stage 2) and the CRONOS welcome survey (stage 6). We also find differences between countries, with Great Britain having the lowest response rate over time compared to the other two countries. However, once respondents have joined the panel, no substantial change in response rates is observed in Great Britain. An increase in the response rate at CRONOS Waves 3 and 5 is observed in Slovenia (stages 9 and 11), caused by panel members returning to the panel and a few new members joining it. All three countries show similar trajectories of declining response rates between stages 1 and 12.

It is important to remember that response rates do not tell the whole story about data quality or representativeness of the sample. Therefore, it is crucial to conduct further investigations and to examine the representativeness of CRONOS in different stages of the data collection in the three countries.

4.2 Representativeness: R-indicators

Figure 2 presents R-indicators across different stages of the survey process conditional on being in the CRONOS gross sample. Table A.2 in the Online Supplementary Materials shows that the R-indicator ranges between 0.719 and 0.942 across the three countries when region, gender, age and level of education are used as covariates. CVs for each stage of data collection are presented in Table A.2 but not discussed further.

FIGURE 2  R-indicators across different stages of the survey process, conditional on being in the CRONOS gross sample

Figure 2 shows that representativeness is similar across countries and over time. The trajectories in the three countries are also similar, with some small deviations. Overall, the R-indicators are lowest at stage 5 (willingness to become part of CRONOS) and recover after that, with some minor fluctuations across stages. This implies that, despite the big drop in response rates in the transition from the face-to-face interview (stage 3) to the first online interview (stage 6), the R-indicators generally improve. In other words, the demographic groups that were harder to recruit for ESS Round 8 were relatively easy to convert to the panel. R-indicators are generally high across all stages, so we can already conclude that recruiting an online panel on the back of ESS Round 8 does not lead to large variability in response propensities or representativeness errors.

When we inspect our results more closely at the country level, we find some small differences. Representativeness is slightly lower after stage 5 (willingness to become part of CRONOS) in Slovenia than in the other countries. Representativeness is highest in Great Britain at stages 4–12 but highest in Slovenia at stage 3 (those who were expected to be in CRONOS and responded to ESS Round 8). In Great Britain there is a large drop in representativeness between stages 4 and 5, followed by a large increase at stage 6, after which the R-indicators stabilize. In Estonia a similar pattern is observed, while in Slovenia we observe a large decline in representativeness at stage 5 followed by an increase at stages 6, 7 and 8 and fluctuations between stages 8 and 12.

It is important to note that, despite Great Britain having the lowest response rates across all stages (see Figure 1), it has higher representativeness at nearly every stage of data collection (with the exception of stage 3, where representativeness is very slightly higher in Slovenia). This finding provides additional evidence of the importance of investigating the risk of nonresponse bias in detail rather than drawing conclusions about data quality from response rates alone, as already suggested in previous literature (Groves, 2006; Groves & Peytcheva, 2008; Olson, 2006). The results reported above do not yet establish which specific variables, or categories within variables, contribute most to the lack of representativeness; we therefore turn to partial R-indicators.

4.3 Representativeness: Partial R-indicators

First, partial conditional R-indicators at the variable level were obtained. We then investigated partial unconditional R-indicators along with partial conditional R-indicators within categories. Figure 3 shows partial conditional R-indicators at the variable level in the three countries.

FIGURE 3  (a)–(c) Partial conditional R-indicators at the variable level

Figure 3a–c suggests that, compared to the other variables included in the model, education has the highest impact on the lack of representativeness in all three countries from stage 7 onwards in Great Britain and Slovenia and from stage 8 in Estonia. Age contributes the most to the risk of nonresponse bias at stage 5 in Great Britain, at stages 5–7 in Estonia and at stages 5–11 in Slovenia. Consistently across all three cultural contexts, gender contributes the least to the risk of nonresponse bias, with the only exception of Estonia at stages 3 and 4. The region variable contributes little to the risk of nonresponse bias in Estonia and Slovenia, except for Estonia at stages 3 and 4. The contribution of the region variable is generally higher in Great Britain, especially at stages 3 and 4.

Further analysis of specific categories within variables and their impact on the lack of representativeness allows for the comparison of specific aspects of the samples over time and across countries. To conduct this assessment, we first investigate partial unconditional R-indicators at the category level, as they indicate which categories are underrepresented and which are overrepresented when the results of the partial conditional R-indicators are discussed. All unconditional R-indicators within categories are presented in Tables A.3–A.8 of the Online Supplementary Materials.

Figure 4a–c shows partial conditional R-indicators within categories of education in three countries.

FIGURE 4  (a)–(c) Partial conditional R-indicators within education

When both unconditional (see Table A.3) and conditional (see Figure 4a–c) partial R-indicators are considered, we observe a similar impact of the levels of education on representativeness in the different countries. As discussed earlier, level of education is one of the most important variables explaining nonresponse in all three countries (Figure 3a–c). In Great Britain the category 'no education or primary education' is the most underrepresented and contributes the most to the lack of representativeness at stages 5–12, while at stages 3 and 4 it is those with a master's degree or PhD who are most underrepresented. In Estonia and Slovenia two categories are most underrepresented, 'no education or primary education' and 'secondary or secondary higher education', with the 'secondary or secondary higher education' group contributing the most to the risk of nonresponse bias. The unconditional and conditional partial R-indicators within categories also suggest that in the Estonian context those with a master's degree or PhD are the most overrepresented group at stages 5–12 and those with secondary or secondary higher education the most underrepresented group, with the relationship reversed at stages 3 and 4. In Slovenia those with a bachelor's degree or equivalent are the most overrepresented, whereas those with secondary education are the most underrepresented. All these results are consistent with earlier findings (Groves & Couper, 1998; Roose et al., 2003) showing that less educated groups are usually harder to reach and recruit in surveys.

Figure 5a–c shows partial conditional R-indicators within categories of age in the three countries. When age is examined, the unconditional results (see Table A.4) suggest that in Great Britain those aged 80–94 (stages 5–12) and those aged 18–29 at the later stages of data collection (stages 9–12) are the most underrepresented groups in the online stages of CRONOS, together with those aged 60–69 at the face-to-face stages 3 and 4. Figure 5a–c also suggests that in Estonia those aged 80–94 contribute most to underrepresentation at stages 5–12, with those aged 60–69 contributing most at stages 3 and 4. In Slovenia we observe a pattern similar to Estonia. Interestingly, in Estonia and Slovenia those aged 18–29 are one of the most overrepresented categories across all stages, whereas in Great Britain this was one of the most underrepresented groups.

FIGURE 5  (a)–(c) Partial conditional R-indicators within age

Figure 6 shows partial R-indicators within gender across the three countries. When gender is analysed, males are the more underrepresented group across all stages in Slovenia and from stage 6 (Wave 0 of CRONOS) onwards in Estonia, whereas in Great Britain fluctuations are observed across the stages.

FIGURE 6  Partial conditional R-indicators within gender in the three countries

Finally, when we explore the region variable in more detail (see Tables A.6–A.8 and Figures 7–9), we find that the following regions contribute the most to the lack of representativeness by being underrepresented in Great Britain: the South East at stages 3 and 4, Scotland at stages 5–7 and 9, and London at stages 7–12.

FIGURE 7  Partial conditional R-indicators within the region variable in Great Britain

FIGURE 8  Partial conditional R-indicators within the region variable in Estonia

FIGURE 9  Partial conditional R-indicators within the region variable in Slovenia

Scotland is located in the northern part of Great Britain, whereas London and the South East are in the south-east. These two prosperous, largely urban regions (London and the South East) consist of many cities and towns and contribute most to underrepresentation in Great Britain. Scotland also contains a large number of cities and towns.

In Estonia, the Põhja-Eesti and Lõuna-Eesti regions contribute the most to underrepresentation. Põhja-Eesti is located on the southern shore of the Gulf of Finland, whereas Lõuna-Eesti is situated in southern Estonia, further away from the capital than the other Estonian regions. It is not surprising that Põhja-Eesti contributes to underrepresentation, as this northern region contains the largest towns in the country, including the capital Tallinn. In both Great Britain and Estonia, the capital cities contribute to underrepresentation. Lõuna-Eesti is the least educated region and living standards there are lower than in other parts of the country (European Commission, 2000a).

In Slovenia the most underrepresented regions in the unconditional context are Goriška and Pomurska. Goriška is located in the western part of the country along the Italian border, whereas Pomurska lies in the extreme north-east of Slovenia. These findings are not surprising, as Goriška has the highest percentage of older people in the country and Pomurska is a mostly rural region with high unemployment and a high level of out-migration (European Commission, 2000b).

As expected, our findings suggest that older age, urbanicity and lower levels of education mainly explain the risk of nonresponse in the online stages of the CRONOS panel, to different degrees in the three countries. Overall, we conclude that the representativeness of the CRONOS panel does not become worse when compared to the regular face-to-face interviewing conducted in the ESS.

5 Discussion and Conclusions

This paper analyses the quality of the data from CRONOS, the first cross-national probability-based online panel. In recent years several probability-based online panels have been created on the back of probability-based face-to-face surveys, but little is known about the representativeness of the sample over time in these panels or across countries in the cross-national context. This paper fills this gap in knowledge and proposes an innovative way of applying R-indicators in this new context. The paper uses various representativeness indicators (R-indicators, partial R-indicators and CVs, the last not discussed in detail) to assess representativeness in CRONOS.

The results suggest that R-indicators and partial R-indicators are useful when the quality of samples, and specifically representativeness, is of interest. Our results suggest that although many respondents drop out in the transition from a face-to-face interview to an online panel study, there are few adverse effects on the representativeness of the data. Rather, we find that the R-indicators improve, implying that those who were harder to recruit into the face-to-face study are then easier to convert to an online interview. The R-indicators obtained are generally high, implying that, in terms of the risk of nonresponse, it is possible to successfully set up a probability-based online panel in Slovenia, Great Britain and Estonia using face-to-face recruitment.

Moreover, we find that differences across countries and over time within countries are relatively small, implying that it is also possible to design an online panel that can be used for cross-national or comparative research across several European countries. Here the one caveat is that the biases at the variable level sometimes go in different directions across countries, for example, for the youngest age group. This group is overrepresented in Estonia and Slovenia but underrepresented in Great Britain at the later stages of CRONOS. This suggests that specific cultural contexts need to be kept in mind when recruitment strategies are designed for different countries in cross-national surveys.

Apart from age, we find that education is the variable contributing most to the lack of representativeness. Specifically, respondents with lower levels of education, that is, no education or primary education and secondary or secondary higher education, become progressively more underrepresented in the panel. We also find that age is an important predictor of the risk of nonresponse across the panel: at stage 5 in Great Britain, across all stages of data collection in Slovenia, and from stage 5 onwards in Estonia. The older age groups (70–79 and 80–94) are difficult to recruit and to retain in the web part of the panel.

For future probability-based panels, it is important to pay special attention to older and less educated groups over the entire process of running the online panel. As mentioned earlier, in Great Britain those aged 18–29 become one of the most underrepresented groups by the end of the panel data collection, suggesting that it is difficult to retain young people in an online panel in the Great Britain context. The same pattern is not observed in Slovenia or Estonia; on the contrary, this group is one of the most overrepresented in those two contexts. It is therefore important to increase efforts to ensure that, in future online panels in Great Britain, young people are retained in the panel until the end.

In sum, and as expected, older age, urbanicity and lower levels of education mainly explain the risk of nonresponse in the online stages of the CRONOS panel, to different degrees in the three countries, and the representativeness of the CRONOS panel does not become worse than that of the regular face-to-face interviewing conducted in the ESS.

All these results are important for the assessment of representativeness in the CRONOS online panel over time and across countries. These findings can also be used in future for targeting and prioritizing specific subgroups at different stages of data collection. However, it is hard to predict in advance which groups will be difficult to recruit and keep in a future cross-national probability-based online panel. Using R-indicators during data collection can be useful for monitoring how the risk of nonresponse bias in estimates correlated with the variables included in the R-indicators evolves over the course of fieldwork, and for deciding during fieldwork whether extra measures are necessary to target specific groups. For example, the youngest age groups may respond to extra incentives, while the oldest old may benefit from extra help in navigating the online survey. However, as we do not find large biases, such targeting is perhaps not crucial.

Weights can be computed to correct for nonresponse bias throughout the CRONOS study. However, such weights would only correct for nonresponse bias related to the few variables for which we investigated the risk of nonresponse bias. Many of the target variables in the ESS are attitudinal, and although variables like age, level of education and region sometimes explain such target variables well, most often they do not. There is a risk that nonresponse bias beyond the variables we tested for is present in CRONOS, especially where nonresponse is closely related to some of the target variables (e.g. people with low trust not being willing to do more than one interview).

The main limitation of the study is the unavailability of device paradata in Waves 0 and 1 of CRONOS. It would be useful to assess the representativeness of the sample by the device respondents chose for survey completion, which would allow assessment of any differential effects by device over time or across countries. Such an analysis would further contribute to the debate regarding the use of smartphones by respondents when completing surveys online (Maslovskaya, 2020; Maslovskaya et al., 2020). It would also be useful to study patterns of switching between devices across waves and the association of such switching with the representativeness of samples. We therefore recommend that data collection organizations collect device paradata for each wave of online panels.

Another limitation was that we could only partially conduct the analyses using design weights. Due to the different inclusion criteria for CRONOS across countries, the design weights that are part of the ESS data releases do not reflect the true sampling mechanism. We conducted analyses both with and without the available design weights and found that the results for the R-indicators and CVs differ at the third decimal at most, so we are confident about our results. In future, users analysing CRONOS data will need to account for the sampling design correctly. An easy fix to this problem would be not to limit the population under study further in CRONOS-2; in practice, this would mean also including 15–17 year olds and, in Great Britain, people from Northern Ireland. If this option is not available, it would be very important to have access to correct design weights for CRONOS-2 calculated by the ESS team.

Supporting information

Additional supporting information may be found in the online version of the article at the publisher’s website.

Acknowledgements

This work was supported by the ESRC Secondary Data Analysis Initiative grant ‘Understanding survey response behaviour in a digital age: Mixed-device online surveys and mobile device use’, grant number ES/P010172/1.

References

Blom, A.G., Bosnjak, M., Cornilleau, A., Cousteaux, A.S., Das, M., Douhou, S. et al. (2016) A comparison of four probability-based online and mixed-mode panels in Europe. Social Science Computer Review, 34(1), 8–25.

Blom, A.G., Gathmann, C. & Krieger, U. (2015) Setting up an online panel representative of the general population: the German Internet Panel. Field Methods, 27(4), 391–408.

Bosnjak, M., Dannwolf, T., Enderle, T., Schaurer, I., Struminskaya, B., Tanner, A. et al. (2018) Establishing an open probability-based mixed-mode panel of the general population in Germany: the GESIS panel. Social Science Computer Review, 36(1), 103–115.

Bottoni, G. & Fitzgerald, R. (2021) Establishing a baseline: bringing innovation to the evaluation of cross-national probability-based online panels. Survey Research Methods, 15(2), 115–133.

Callegaro, M., Manfreda, K.L. & Vehovar, V. (2015) Web survey methodology. London: Sage.

Cernat, A. & Revilla, M. (2020) Moving from face-to-face to a web panel: impacts on measurement quality. Journal of Survey Statistics and Methodology, 9(4), 745–763. Available from: https://doi.org/10.1093/jssam/smaa007

de Heij, V., Schouten, B. & Shlomo, N. (2015) RISQ manual 2.1: tools in SAS and R for the computation of R-indicators, partial R-indicators and partial coefficients of variation. Representativity Indicators for Survey Quality (RISQ) project. Available from: http://hummedia.manchester.ac.uk/institutes/cmist/risq/RISQ-manual-v21.pdf

de Leeuw, E.D. (2018) Mixed-mode: past, present, and future. Survey Research Methods, 12(2), 75–89.

de Leeuw, E., Hox, J. & Luiten, A. (2018) International nonresponse trends across countries and years: an analysis of 36 years of Labour Force Survey data. Survey Methods: Insights from the Field, 1–11.

Dillman, D.A. (2017) The promise and challenge of pushing respondents to the Web in mixed-mode surveys. Survey Methodology, 43(1), 3–30.

Eckman, S. (2016) Does the inclusion of non-internet households in a web panel reduce coverage bias? Social Science Computer Review, 34(1), 41–58.

European Commission (2000a) Portrait of the regions: Volume 8 – Estonia, Latvia, Lithuania. Statistical Office of the European Communities. Available from: https://ec.europa.eu/eurostat/documents/3217494/5629084/KS-29-00-795-EN.PDF.pdf/a154d5a1-49c2-4d57-aea6-d0a4e6843e39?t=1414770330000

European Commission (2000b) Portrait of the regions: Volume 9 – Slovenia. Statistical Office of the European Communities. Available from: https://ec.europa.eu/eurostat/documents/3217494/5628980/KS-29-00-779-EN.PDF/3f76edf8-3793-4b3a-a8d4-2b52dea222b3?version=1.0

Eurostat (2019) Digital economy and society statistics – households and individuals. Available from: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Digital_economy_and_society_statistics_-_households_and_individuals

Fitzgerald, R. & Bottoni, G. (2019) Online data collection for comparative research: perspectives from the European Social Survey. Paper presented at the conference 'The future of online data collection in social surveys', 20–21 June 2019, Southampton, UK.

Groves, R.M. (2006) Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675.

Groves, R.M. & Couper, M.P. (1998) Nonresponse in household interview surveys. New York: John Wiley and Sons.

Groves, R.M. & Peytcheva, E. (2008) The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opinion Quarterly, 72(2), 167–189.

Koch, A., Halbherr, V., Stoop, I.A.L. & Kappelhof, J.W.S. (2014) Assessing ESS sample quality by using external and internal criteria. Mannheim: European Social Survey, GESIS.

Kreuter, F., Presser, S. & Tourangeau, R. (2008) Social desirability bias in CATI, IVR, and web surveys: the effects of mode and question sensitivity. Public Opinion Quarterly, 72(5), 847–865.

Krieger, U., Blom, A., Cornesse, C., Felderer, B. & Fikel, M. (2019) Push-to-web recruitment of a probability-based online panel: experimental evidence. Paper presented at the 2019 General Online Research conference, Cologne, 3–5 March.

Lynn, P. (2020) Evaluating push-to-web methodology for mixed-mode surveys using address-based samples. Survey Research Methods, 14(1), 19–30.

Lynn, P. & Anghelescu, G. (2018) European Social Survey Round 8 weighting strategy. Colchester: Institute for Social & Economic Research. Available from: https://www.europeansocialsurvey.org/docs/round8/methods/ESS8_weighting_strategy.pdf

Maslovskaya, O. (2020) Data quality in mixed-device online surveys in the UK. City, University of London/European Social Survey HQ/NatCen Social Research Survey Methodology Seminar Series, webinar, 30 April 2020. Available from: https://www.youtube.com/watch?v=pT2wygY_OeI&feature=youtu.be

Maslovskaya, O., Smith, P.W.F. & Durrant, G. (2020) Do respondents using smartphones produce lower quality data? Evidence from the UK Understanding Society mixed-device survey. NCRM Working Paper, University of Southampton. Available from: http://eprints.ncrm.ac.uk/4322/

Moore, J.C., Durrant, G.B. & Smith, P.W.F. (2018) Data set representativeness during data collection in three UK social surveys: generalizability and the effects of auxiliary covariate choice. Journal of the Royal Statistical Society: Series A (Statistics in Society), 181(1), 229–248.

Olson, K. (2006) Survey participation, nonresponse bias, measurement error bias, and total bias. Public Opinion Quarterly, 70(5), 737–758.

Roose, H., Waege, H. & Agneessens, F. (2003) Respondent related correlates of response behaviour in audience research. Quality and Quantity, 37(4), 411–434.

Scherpenzeel, A., Maineri, A., Bristle, J., Pflüger, S.M., Mindorova, I., Butt, S. et al. (2017) Report on the use of sampling frames in European studies. SERISS Deliverable 2.1. Available from: https://seriss.eu/wp-content/uploads/2017/01/SERISS-Deliverable-2.1-Report-on-the-use-of-sampling-frames-in-European-studies.pdf

Schouten, B., Cobben, F. & Bethlehem, J. (2009) Indicators for the representativeness of survey response. Survey Methodology, 35, 101–113.

Schouten, B., Shlomo, N. & Skinner, C. (2011) Indicators for monitoring and improving representativeness of response. Journal of Official Statistics, 27(2), 231–253.

Shlomo, N., Skinner, C. & Schouten, B. (2012) Estimation of an indicator of the representativeness of survey response. Journal of Statistical Planning and Inference, 142(1), 201–211.

Smith, T.W. (2019) Improving cross-national/cultural comparability using the total survey error paradigm. In: Johnson, T.P., Pennell, B.-E., Stoop, I.A.L. & Dorer, B. (Eds.) Advances in comparative survey methods: multicultural, multinational and multiregional contexts (3MC). Hoboken: John Wiley & Sons Inc., pp. 13–43.

Villar, A., Sommer, E., Finnøy, D., Gaia, A., Berzelak, N. & Bottoni, G. (2018) CROss-National Online Survey (CRONOS) panel: data and documentation user guide. London: ESS ERIC. Available from: https://www.europeansocialsurvey.org/docs/cronos/CRONOS_user_guide_e01_1.pdf

Author notes

Funding information ESRC, ES/P010172/1

This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.