Abstract

This article examines the labor market payoffs to different types of postsecondary education, including field and institution of study. Instrumental variables (IV) estimation of the payoff to choosing one type of education compared to another is made particularly challenging by individuals choosing between several types of education. Not only does identification require one instrument per alternative, but it is also necessary to deal with the issue that individuals who choose the same education may have different next-best alternatives. We address these difficulties using rich administrative data for Norway’s postsecondary education system. A centralized admission process creates credible instruments from discontinuities that effectively randomize applicants near unpredictable admission cutoffs into different institutions and fields of study. The admission process also provides information on preferred and next-best alternatives from strategy-proof measures of individuals’ ranking of institutions and fields. The results from our IV approach may be summarized with three broad conclusions. First, different fields of study have substantially different labor market payoffs, even after accounting for institution and peer quality. Second, the effect on earnings from attending a more selective institution tends to be relatively small compared to payoffs to field of study. Third, the estimated payoffs to field of study are consistent with individuals choosing fields in which they have a comparative advantage. Comparing our estimates to those obtained from other approaches highlights the importance of using instruments to correct for selection bias and information on individuals’ ranking of institutions and fields to measure their preferred and next-best alternatives. JEL Codes: C31, J24, J31.

I. Introduction

According to data from the Organisation for Economic Co-operation and Development ( OECD 2014 ), the majority of young adults in developed countries enroll in postsecondary education. One of the decisions that virtually all these students have to make is choosing a field of study or college major. 1 Earnings differences across fields rival college earnings premiums. This indicates that the choice of field of study is potentially as important as the decision to enroll in college, yet the evidence base is scarce. Altonji, Blom, and Meghir (2012) review the literature and conclude that “there is a long way to go on the road to credible measures of the payoffs to fields of study.”

In this article, we investigate the (ex post) payoffs to different types (field and institution) of postsecondary education. 2 We begin by showing that identifying the payoffs is difficult, not only because of the standard problems of correlated unobservables but also because individuals are choosing between several unordered alternatives. We show that even with a valid instrument for each type of education, instrumental variables (IV) estimation of the payoffs from choosing one type of education compared to another requires information about individuals’ ranking of education types or additional assumptions over and above those needed in settings with binary choices (e.g., college versus no college) or ordered choices (e.g., years of schooling). Otherwise, IV does not identify the payoff to any individual or group of the population from choosing one education type instead of another.

We build on these identification results in our empirical analysis of the payoffs to different types of postsecondary education. In particular, we use instruments to correct for selection bias, and we exploit information on individuals’ ranking of institutions and fields to measure their preferred and next-best education type. Taken together, this allows us to identify the payoff to completing one type of education relative to a particular next-best alternative. For example, we are able to examine whether the gains in earnings to persons choosing science instead of teaching are larger or smaller than the gains in earnings to those choosing business instead of teaching. We also examine whether there are significant gains in earnings to graduating from a more selective institution.

The information on next-best alternatives also allows us to examine the pattern of sorting to fields. For example, a random person might achieve a negative payoff from a science degree, while someone with appropriate talents who chose science might obtain a positive payoff. By combining the estimated payoffs with information about individuals’ ranking of fields, we can assess whether individuals tend to choose fields in which they have comparative advantage. In contrast to much of the existing literature on education and self-selection, we do not make assumptions about selection criteria, information sets, or the distribution of unobservables.

The context for our analysis is the Norwegian postsecondary education system. Norway provides an attractive context for this study for several reasons. First, it offers a large and detailed data set that follows every student through the layers of the education system and into their working career. 3 Norway also has a centralized admission process that covers almost all universities and colleges; prospective students apply to a field and institution simultaneously (e.g., teaching, at the University of Oslo). In their application, prospective students can rank up to 15 choices. The applicants are scored by a central organization based on their high school grade point average (GPA). Applicants are then ranked by their application score, after which places are assigned in turn: the highest-ranked applicant gets her preferred choice; the next-highest ranked applicant gets the highest available choice for which she qualifies, and so on. This process creates credible instruments from discontinuities that effectively randomize applicants near unpredictable admission cutoffs into different institutions and fields. 4 At the same time, the process provides us with strategy-proof measures of individuals’ ranking of institutions and fields.

Our IV estimates may be summarized with three broad statements. First, different fields have widely different payoffs, even after accounting for institutional differences and quality of peer groups. For many fields the payoffs rival the college wage premium, suggesting that the choice of field is potentially as important as the decision to enroll in college. For example, by choosing science instead of the humanities, individuals almost triple their earnings early in their working career. Second, when disentangling the causal contribution of institution and field of study, we find that field of study drives the heterogeneity in the payoffs to postsecondary education. Indeed, we find little evidence of significant gains in earnings to graduating from a more selective institution once we hold field of study fixed. Third, the estimated payoffs are consistent with individuals choosing fields in which they have comparative advantage. This finding has implications for the type of economic models that can help explain the causes and consequences of individuals choosing different types of postsecondary education. For example, the presence of comparative advantage is at odds with influential economic models based on an efficiency units assumption, such as the Ben-Porath model, but it is consistent with the Roy model (Heckman and Sedlacek 1985; Willis and Rosen 1979).

In interpreting these findings, there are several things to keep in mind. Since we only observe earnings early in individuals’ working careers, the estimated payoffs should not be interpreted as internal rates of return to investments in human capital. Additionally, it is important to emphasize the local nature of our results. The payoffs we estimate are local average treatment effects of instrument-induced shifts in field of study. Since our instruments are admission cutoffs, they pick out individuals who are at the margin of entry to particular fields. This means that the estimated payoffs are informative about the effects of policies that lower admission cutoffs to different fields, a change that could be achieved by increasing the number of slots in these fields. However, we need to be cautious when extrapolating the payoffs we estimate to the population at large or to inframarginal applicants. Moreover, our findings might be specific to the Norwegian context. For example, the effects of institution may be larger in settings with more private financing of higher education.

Our article is primarily related to a small but growing body of literature on the payoffs to field of study or college major, reviewed in Altonji, Blom, and Meghir (2012) and Altonji, Arcidiacono, and Maurel (2015) . To date, most studies perform OLS estimation, and thus assume that all selection is on observables. A notable exception is Hastings, Neilson, and Zimmerman (2013) , who make important progress over much of the previously conducted research by using discontinuities from the centralized, score-based admissions system in Chile to address selection on unobservables. By doing so, these authors can consistently estimate the earnings effects of crossing the threshold for admission to a preferred institution-field (called degree). Such intention-to-treat estimates can be of interest for a variety of reasons, such as informing policy that marginally expands or contracts a particular degree program. The large sample sizes in Hastings, Neilson, and Zimmerman (2013) allow them to explore heterogeneity in threshold-crossing effects by selectivity, field of study, and student characteristics. These authors also have long time series on labor market outcomes that enable them to estimate longer-term earnings impacts. 5

A related body of work seeks to identify the effect of college quality on future labor market outcomes. Most of this literature does not account for differences across institutions in the student composition by fields. 6 A notable exception is Andrews, Li, and Lovenheim (2012) , who use data from Texas to estimate the effect of college quality on postgraduation earnings. These authors’ findings suggest that the estimated effect on earnings from attending a more selective college is largely driven by differences between graduates in college majors. However, Andrews, Li, and Lovenheim are hesitant to draw strong conclusions because their empirical strategy cannot address endogeneity of major selection.

We complement the literature on the payoffs to postsecondary education in several ways. First, we provide initial evidence on the payoff to completing one field of study relative to a particular next-best alternative while addressing concerns about selection on unobservables. Because we can track individuals through each step of the education system, we are able to estimate the impact of graduating with a degree in a field in addition to the intention-to-treat effect of crossing the admission cutoff to a field. In our context, this turns out to be important as completion rates are sometimes low and vary systematically across fields, which complicates interpretation of intention-to-treat estimates. 7 Second, the admission system we study creates exogenous variation in both field and institution choice, which under certain assumptions helps in interpreting the estimated payoffs and in quantifying the relative importance of institution and field of study. Third, by combining the estimated payoffs with information about individuals’ ranking of fields, we can assess whether individuals tend to prefer fields in which they have comparative advantage.

Our article also contributes to the literature on identification of treatment effects in unordered choice models. Heckman, Urzúa, and Vytlacil (2006) and Heckman and Urzúa (2010) discuss the challenges associated with the identification and interpretation of treatment effects in such models. Both sets of authors show that individuals induced into a state by an instrument may come from many alternative states, so there are many margins of choice. They conclude that structural models can identify the earnings gains arising from these separate margins, while this is a difficult task for IV without invoking strong assumptions. 8 We complement this work by making precise what IV can and cannot identify when there are multiple, unordered treatments. In particular, we show that consistent IV estimation of the payoffs to unordered treatments requires information about next-best alternatives or additional assumptions beyond those required in settings with binary or ordered treatments. In the empirical analysis, we test and reject two additional assumptions that would have ensured identification in the absence of observing next-best alternatives, thus highlighting the usefulness of this information.

While our empirical findings are specific to the context of postsecondary education, there could be lessons from our work for other settings with unordered choices. Examples can be found in observational studies that use IV to study workers’ selection of occupation, firms’ decisions on location, or families’ choice of where to live. Our study highlights key challenges and possible solutions to understanding what the causal effects of these choices are. Another example is the frequent use of the encouragement design in evaluation studies, where programs are made available but take-up is not universal (see Duflo, Glennerster, and Kremer 2008 , for an example). Researchers then use OLS and IV to estimate intention-to-treat and local average treatment effects (LATE) parameters, respectively. We show which assumptions and information are required to draw causal inference from encouragement designs in settings with multiple, unordered treatments. 9

The remainder of the article is organized as follows. Section II discusses instrumental variables in unordered choice models, laying the groundwork for our empirical analysis. In Section III , we describe the admission process to postsecondary education in Norway. Section IV describes our data and presents descriptive statistics. Section V provides a graphical depiction of our research design, while Section VI turns to the formal econometric model. Section VII describes our main findings on payoffs to field of study and reports results from specification checks. In Section VIII , we compare our estimates to those obtained using other approaches. Section IX examines the relative importance of institution and field of study. In Section X , we examine the pattern of selection to fields. The final section offers concluding remarks.

II. IV in Unordered Choice Models

This section lays the groundwork for what we do in the empirical analysis, and shows the assumptions under which IV estimates can be given a causal interpretation as local average treatment effects (LATE) in settings with multiple unordered treatments. For notational simplicity, we consider a case in which individuals choose between three alternatives, but our identification results can straightforwardly be extended to any finite number of unordered treatments.

II.A. Regression Model, Potential Outcomes, and Choices

Assume that individuals choose between three mutually exclusive and collectively exhaustive alternatives, d ∈ {0, 1, 2}. To fix ideas, consider a setting in which all individuals complete some postsecondary education, but they choose between three different fields of study. 10 For simplicity, we suppress the individual index, and also abstract from any control variables. Our interest centers on how to interpret IV (and OLS) estimates of the following equation:

(1)
y = β_0 + β_1 d_1 + β_2 d_2 + ε,
where y is observed earnings, and d_j ≡ 1[d = j] is an indicator variable that equals 1 if an individual completed field j, and 0 otherwise.

Both in this section and in the empirical analysis, we consider the case in which the number of instruments equals the number of regressors, so that equation (1) is exactly identified. Specifically, suppose that individuals are randomly assigned to one of three mutually exclusive and collectively exhaustive groups, Z ∈ {0, 1, 2}, and let z_j ≡ 1[Z = j] be an indicator variable that equals 1 if an individual is assigned to group j, and 0 otherwise. One can think of z_j as an instrument that shifts the costs or benefits of choosing field j. For each individual, this gives three potential field choices, d^z, and nine potential earnings levels, y_{d,z}.

Throughout the article, we make the standard IV assumptions:

Assumption 1. (Exclusion) y_{d,z} = y_d for all d, z.

Assumption 2. (Independence) (y_d, d^z) ⊥ Z for all d, z.

Assumption 3. (Rank) E[zd′] has full rank.

Note that we do not restrict the heterogeneity in the payoffs to field of study: for a given individual, the payoff may vary depending on the fields being compared (e.g., y_1 − y_0 differs from y_2 − y_0); and for a given pair of fields, the payoff may vary across individuals (e.g., y_1 − y_0 differs between individuals).

We link observed and potential earnings and field choices as follows:  

(2)
y = y_0 + (y_1 − y_0) d_1 + (y_2 − y_0) d_2,
 
(3)
d_1 = d_1^0 + (d_1^1 − d_1^0) z_1 + (d_1^2 − d_1^0) z_2,
and  
(4)
d_2 = d_2^0 + (d_2^1 − d_2^0) z_1 + (d_2^2 − d_2^0) z_2,
where d_j^z ≡ 1[d^z = j] is an indicator variable that tells us whether an individual would choose field j for a given value of Z. For example, d_j^0 gives the status of field of study choice j when Z = 0 (z_1 = 0 and z_2 = 0), d_j^1 is the status when Z = 1 (z_1 = 1 and z_2 = 0), and d_j^2 is the status when Z = 2 (z_1 = 0 and z_2 = 1).

As in the usual LATE framework with a binary treatment (see Imbens and Angrist 1994 ), we assume that switching on z_j does not make it less likely that an individual chooses field j:

Assumption 4. (Monotonicity) d_1^1 ≥ d_1^0 and d_2^2 ≥ d_2^0.

Note that Assumption 4 puts no restrictions on the possibility that z_j affects the costs or benefits of field k relative to field l for l, k ≠ j. For example, it is silent about whether an individual’s choice between field 2 and 0 is affected by whether z_1 is switched on or off.

Because there are many fields, the data demands for IV are high: for each field it is necessary to find a variable that is conditionally random, shifts the probability of choosing that field relative to the other options, and does not directly affect y . As a result, most of the research to date uses OLS to estimate the payoffs to field of study. 11 We therefore begin with a brief discussion of how to interpret OLS estimates of equation (1) before turning to what IV can and cannot identify.

II.B. OLS Estimation of Payoffs to Field of Study

In equation (1), the OLS estimate of the payoff from choosing, say, field 2 instead of 0 is the sample analogue of E[y | d = 2] − E[y | d = 0]. As usual, we can write the expectation of the OLS estimate of β_2 in terms of potential outcomes as follows:

(5)
E[y | d = 2] − E[y | d = 0] = E[Δ_2 | d = 2] (payoff) + {E[y_0 | d = 2] − E[y_0 | d = 0]} (selection bias),
where Δ_2 ≡ y_2 − y_0 is the individual-level payoff to completing field 2 instead of field 0, and E[Δ_2 | d = 2] is the average payoff for those who completed field 2 instead of 0.

The first key challenge in estimating payoffs to field of study is to correct for selection bias, E[y_0 | d = 2] − E[y_0 | d = 0]. Early and ongoing research adds many observable characteristics to equation (1), in the hope that any remaining bias is small. Dale and Krueger (2002) , Black and Smith (2004) , Lindahl and Regner (2005) , Hamermesh and Donald (2008) , and Dale and Krueger (2011) illustrate the difficulty of drawing causal inferences about the payoffs to postsecondary education from observational data.

The second key challenge is that individuals who choose the same field may differ in their next-best alternatives, while researchers usually observe only the chosen field. Let d_{/j} denote an individual’s next-best alternative, namely, the field that would have been chosen if j were removed from the choice set. Expanding the first term on the right-hand side of equation (5), we get:

(6)
E[Δ_2 | d = 2] = E[Δ_2 | d = 2, d_{/2} = 0] · Pr(d_{/2} = 0 | d = 2) + E[Δ_2 | d = 2, d_{/2} = 1] · Pr(d_{/2} = 1 | d = 2).

Equation (6) illustrates that, even in the absence of selection bias, it is difficult to give the OLS estimate of β_2 a precise economic interpretation because it is a weighted average of payoffs to choosing field 2 instead of 0 across persons with different next-best alternatives. The average payoffs across individuals with different next-best alternatives will differ (i.e., E[Δ_2 | d = 2, d_{/2} = 0] ≠ E[Δ_2 | d = 2, d_{/2} = 1]) if Δ_2 varies across individuals and they base their ranking of fields, in part, on these idiosyncratic payoffs.

One limiting case that illustrates the difficulty with economically interpreting E[Δ_2 | d = 2] is when everybody who completed field 2 has field 1 as next-best alternative, so that Pr(d_{/2} = 1 | d = 2) = 1. In this case, E[Δ_2 | d = 2] is the average payoff of choosing field 2 instead of 0 for individuals for whom field 2 versus 1 is the relevant choice margin: E[Δ_2 | d = 2, d_{/2} = 1]. In more realistic cases, E[Δ_2 | d = 2] will be a weighted average of payoffs to choosing field 2 instead of 0 for individuals coming from separate margins: field 2 versus 1, and field 2 versus 0. The weights depend on the proportion of people at each margin, and are unobserved unless researchers have information on next-best alternatives.

Because individuals who choose different fields may differ in their next-best alternatives, it is also difficult to compare different payoffs. For example, it could be that the average payoff to field 2 over 0 is larger than the average payoff of field 1 over 0:  

E[Δ_2 | d = 2] > E[Δ_1 | d = 1],
even when the opposite is true for individuals at the relevant choice margins,  
E[Δ_2 | d = 2, d_{/2} = 0] < E[Δ_1 | d = 1, d_{/1} = 0].

This can happen because the weights on next-best alternatives may vary by chosen field.
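To see the reversal concretely, consider a hypothetical numerical example (the dollar figures are made up purely for illustration). Suppose 80% of those who complete field 2 have field 1 as their next-best alternative, with an average payoff relative to field 0 of $30,000, while the remaining 20% have field 0 as their next-best alternative, with an average payoff of $5,000; then E[Δ_2 | d = 2] = 0.8 × 30,000 + 0.2 × 5,000 = $25,000. If everyone who completes field 1 has field 0 as their next-best alternative, with an average payoff of $10,000, then E[Δ_2 | d = 2] = $25,000 > $10,000 = E[Δ_1 | d = 1], even though at the common margin E[Δ_2 | d = 2, d_{/2} = 0] = $5,000 < $10,000 = E[Δ_1 | d = 1, d_{/1} = 0].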

More generally, OLS estimates of the payoffs to field of study can vary because of selection bias, differences in potential earnings across fields, or differences in weights across the next-best alternatives.

II.C. IV Estimation of Payoffs to Field of Study

To address selection bias, it is sufficient to have instruments that satisfy Assumptions 1–4. However, it turns out that identifying economically interpretable parameters remains difficult because there is no natural ordering of the alternative fields of study and researchers rarely observe the individual’s next-best alternative. We now show that even with a valid instrument for each field, identifying the payoffs to choosing one field of study instead of another requires information about individuals’ ranking of fields or additional assumptions beyond those needed in settings with binary choices.

1. What IV Cannot Identify

IV uses the following moment conditions:  

(7)
E[ε z_1] = 0,
 
(8)
E[ε z_2] = 0,
and  
(9)
E[ε] = 0,
which can be expressed in terms of potential outcomes and treatments by rewriting the residual of equation (1) in terms of equations (2)–(4) as follows:  
(10)
ε = (y_0 − β_0) + (Δ_1 − β_1) d_1 + (Δ_2 − β_2) d_2
  = (y_0 − β_0) + (Δ_1 − β_1)(d_1^0 + (d_1^1 − d_1^0) z_1 + (d_1^2 − d_1^0) z_2) + (Δ_2 − β_2)(d_2^0 + (d_2^1 − d_2^0) z_1 + (d_2^2 − d_2^0) z_2).

After substituting this expression in equations (7)–(9) and using the independence assumption, we obtain the following moment conditions, now in terms of potential outcomes and treatments:  

(11)
E[(Δ_1 − β_1)(d_1^1 − d_1^0) + (Δ_2 − β_2)(d_2^1 − d_2^0)] = 0,
and  
(12)
E[(Δ_1 − β_1)(d_1^2 − d_1^0) + (Δ_2 − β_2)(d_2^2 − d_2^0)] = 0.

Solving these two equations for β_1 and β_2 leads to Proposition 1. 12

Proposition 1

Suppose Assumptions 1–4 hold. From solving equations (11)–(12) for β_1 and β_2, it follows that β_j for j = 1, 2 is a linear combination of the following three payoffs:

  • Δ_1: payoff of field 1 compared to 0;

  • Δ_2: payoff of field 2 compared to 0;

  • Δ_2 − Δ_1 ≡ y_2 − y_1: payoff of field 2 compared to 1.

Proof of Proposition 1 is given in Online Appendix A .

Proposition 1 shows that without further restrictions, IV estimation of equation (1) does not identify the payoff to any individual or group of the population from choosing one field of study compared to another. For example, IV estimation would not tell us whether the gains in earnings to persons choosing engineering instead of business are larger or smaller than the gains in earnings to those choosing law instead of business. It is possible that persons choosing engineering gain while those choosing law lose; IV under Assumptions 1–4 only identifies a weighted average of the payoffs to different fields, which could be large or small, positive or negative.
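To make Proposition 1 concrete, the following simulation is a minimal sketch (all data-generating values, variable names, and parameters are hypothetical and chosen only for illustration; this is not the article's data or code). Individuals choose the field with the highest latent utility, the instruments z_1 and z_2 lower the cost of fields 1 and 2, and payoffs are heterogeneous and correlated with tastes. The just-identified IV of equation (1) then returns a coefficient on d_2 that mixes the margin-specific payoffs, so it need not equal the payoff at any single margin.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Latent tastes and potential earnings with selection on gains (illustrative values).
v = rng.normal(size=(n, 3))                        # tastes for fields 0, 1, 2
y_pot = np.column_stack([10 + v[:, 0],             # y_0
                         12 + 2 * v[:, 1],         # y_1
                         14 + 3 * v[:, 2]])        # y_2

def choice(z):
    """Field chosen when assigned to instrument group z; z_j lowers the cost of field j."""
    u = v.copy()
    if z > 0:
        u[:, z] += 2.0                             # cost shifter for field z
    return u.argmax(axis=1)

Z = rng.integers(0, 3, size=n)                     # random assignment to groups 0, 1, 2
d = np.select([Z == 0, Z == 1, Z == 2], [choice(0), choice(1), choice(2)])
y = y_pot[np.arange(n), d]                         # observed earnings (exclusion holds)

# Just-identified IV of equation (1): instruments (1, z_1, z_2), regressors (1, d_1, d_2).
Zm = np.column_stack([np.ones(n), Z == 1, Z == 2]).astype(float)
Xm = np.column_stack([np.ones(n), d == 1, d == 2]).astype(float)
beta_iv = np.linalg.solve(Zm.T @ Xm, Zm.T @ y)

# Margin-specific payoffs among compliers shifted into field 2 by z_2.
d_base, d_z2 = choice(0), choice(2)
from0 = (d_base == 0) & (d_z2 == 2)                # next-best alternative is field 0
from1 = (d_base == 1) & (d_z2 == 2)                # next-best alternative is field 1
print("IV coefficient on d_2:          ", beta_iv[2])
print("E[y_2 - y_0 | complier from 0]: ", (y_pot[from0, 2] - y_pot[from0, 0]).mean())
print("E[y_2 - y_1 | complier from 1]: ", (y_pot[from1, 2] - y_pot[from1, 1]).mean())
```

In this design compliers into field 2 arrive from both field 0 and field 1, so the IV coefficient is a linear combination of the payoffs at the two margins rather than the average payoff at either one.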

2. What IV Can Identify

The basic problem with IV estimation of equation (1) is that individuals who are induced to choose, say, field 2 if z_2 is switched on may select either field 0 or field 1 if z_2 is switched off. The standard IV assumptions ensure that switching on z_2 shifts some individuals into field 2, but they say nothing about the fields these compliers are shifted away from. Auxiliary assumptions are therefore necessary to identify the payoff from choosing one field of study compared to another. Proposition 2 makes precise what IV identifies under three alternative assumptions: (i) constant effects; (ii) restrictive preferences; and (iii) irrelevance and information on next-best alternatives.

Proposition 2

Suppose Assumptions 1–4 hold. Solving equations (11)–(12) for β_1 and β_2, we observe the following results:

  • If Δ_1 and Δ_2 are common across all individuals (constant effects), then  

    β_1 = Δ_1
     
    β_2 = Δ_2.

  • If d_2^1 = d_2^0 and d_1^2 = d_1^0 (restrictive preferences), then  

    β_1 = E[Δ_1 | d_1^1 − d_1^0 = 1]
     
    β_2 = E[Δ_2 | d_2^2 − d_2^0 = 1].

  • If d_1^1 = d_1^0 = 0 ⇒ d_2^1 = d_2^0, d_2^2 = d_2^0 = 0 ⇒ d_1^2 = d_1^0, and we condition on d_1^0 = d_2^0 = 0 (irrelevance and next-best alternative), then  

    β_1 = E[Δ_1 | d_1^1 − d_1^0 = 1, d_2^0 = 0]
     
    β_2 = E[Δ_2 | d_2^2 − d_2^0 = 1, d_1^0 = 0].

The proofs for Proposition 2 are given in Online Appendix A .

In result i of Proposition 2, Δ_1 and Δ_2 are common across all individuals and IV estimation of equation (1) identifies the payoff to each field. While there is little direct evidence of heterogeneity in the effects of field of study, the constant effects assumption has been tested and rejected in the setting where individuals choose between education levels (e.g., college versus high school). For example, Carneiro, Heckman, and Vytlacil (2011) find that the effect of college is heterogeneous and that individuals decide whether or not to enroll in college based on their idiosyncratic returns.

Instead of assuming constant effects, identification can also be achieved by making restrictions on individuals’ preferences. One possibility is to impose the assumption in result ii, which implies that changing z from 0 to 1 (2) does not affect whether or not an individual chooses treatment 2 (1). Behaghel, Crépon, and Gurgand (2013) show that this assumption allows for a causal interpretation of IV estimates in situations with multiple unordered treatments, as in regression model (1). In many settings, however, it is difficult to justify this assumption as it imposes strong restrictions on preferences. For example, it implies that an individual who chooses field 2 if the cost of field 1 is low (z = 1) must also choose field 2 if the cost of field 0 is low (z = 0).

Another possibility is to combine information about individuals’ next-best alternatives with weak assumptions about individuals’ preferences. In result iii, we assume that if changing z from 0 to 1 (2) does not induce an individual to choose treatment 1 (2), then it does not make her choose treatment 2 (1) either. In our context, for example, this assumption means that if crossing the admission cutoff to field 1 does not make an individual choose field 1, it does not make her choose field 2 either. On its own, this irrelevance condition does not help in resolving the identification problem posed by heterogeneous effects under Assumptions 1–4. But together with information about individuals’ next-best alternatives, it is sufficient to identify LATEs for every field. The intuition is straightforward: by conditioning on the next-best alternative, individuals who are induced to complete a field by a change in the instrument come from a particular alternative field.

The identification result in part iii of Proposition 2 motivates and guides our empirical analysis of the payoffs to field of study below. The key to our research design is twofold: we use instruments to correct for selection bias, and measures of next-best alternatives to approximate individuals’ margin of choice. As discussed in greater detail later, our data provide us with strategy-proof measures of individuals’ ranking of fields. These measures are designed to elicit the applicants’ true ranking of fields at time of application. We use this information to condition on individuals’ next-best alternatives in the IV estimation of a model like equation (1) . As a result, we can estimate the payoffs to different fields while correcting for selection bias and keeping the next-best alternatives as measured at the time of application fixed. We also test (and reject) the alternative auxiliary assumptions of constant effects or restrictive preferences in results i and ii.
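In practice, result iii suggests a simple estimation recipe: stratify on the measured next-best alternative and instrument completion of the preferred field with crossing the relevant admission cutoff. The sketch below illustrates the idea with a Wald estimator in one stratum; the column names are hypothetical, and the controls for the running variable and other covariates included in our actual specification are omitted for brevity.

```python
import pandas as pd

def late_vs_next_best(df: pd.DataFrame, field: str, next_best: str) -> float:
    """Wald/2SLS estimate of the payoff to completing `field` relative to `next_best`,
    among applicants who prefer `field` and rank `next_best` as the next-best
    alternative. Columns ('preferred', 'next_best', 'completed', 'above_cutoff',
    'earnings') are hypothetical names for illustration."""
    s = df[(df["preferred"] == field) & (df["next_best"] == next_best)]
    z = s["above_cutoff"].astype(float)            # instrument: crossed the relevant cutoff
    d = (s["completed"] == field).astype(float)    # treatment: graduated in the preferred field
    y = s["earnings"].astype(float)
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    reduced_form = y[z == 1].mean() - y[z == 0].mean()
    return reduced_form / first_stage              # LATE for this particular choice margin

# Hypothetical use, e.g. the science-versus-teaching margin:
# payoff = late_vs_next_best(df, field="Science", next_best="Teaching")
```

Because the subsample fixes the next-best alternative, the compliers in this ratio are shifted away from one specific field, which is exactly what result iii requires.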

III. Institutional Details and Identification Strategy

In this section, we describe the admission process to postsecondary education in Norway, documenting in particular that the process generates instruments that can be used to correct for selection bias, as well as information about an individual’s next-best alternatives that allows us to approximate individuals’ choice margin.

III.A. Admission Process

During the period we study, the Norwegian postsecondary education sector consisted of four public universities and a number of public and private university colleges. The vast majority of students attend a public institution, and even the private institutions are funded and regulated by the Ministry of Education and Research. Obtaining a postsecondary degree normally requires three to five years. The universities (in Bergen, Oslo, Trondheim, and Tromsø) all offer a wide selection of fields. By comparison, the university colleges rarely offer fields like law, medicine, science, or technology, but tend to offer professional degrees in fields like engineering, health, business, and teaching. There are generally no tuition fees for attending postsecondary education in Norway, and most students are eligible for financial support (part loan/part grant) from the Norwegian State Educational Loan Fund.

The admission process to postsecondary education is centralized. Applications are submitted to a central organization, the Norwegian Universities and Colleges Admission Service, which handles the admission process to all universities and most university colleges. 13 The unit in the application process (course) is the combination of detailed field and institution (e.g., teaching, at the University of Oslo).

Every year in the late fall, the Ministry of Education and Research decides on funding to each field at every institution, which effectively determines the supply of slots. While some slots are reserved for special quotas (e.g., students from the northernmost part of Norway), the bulk of the slots are for the main pool of applicants. For many courses, demand exceeds supply. Courses for which there is excess demand are filled based on an application score derived from high school GPA. Individual course grades at high school range from 1 to 6 (only integer values), and GPA is calculated as 10 times the average grade (up to two decimal places). A few extra points on the application score are awarded for choosing specific subjects in high school. For some courses, the application score can also be adjusted based on ad hoc field-specific conditions unrelated to academic requirements (e.g., two extra points for women at some male-dominated fields). Additionally, applicants can gain some compensation in their application score depending on their age, previous education, and fulfillment of military service.

On applying, students rank up to fifteen courses based on their preferences. Information about what courses are offered by the various institutions is made available in a booklet that is distributed at high schools. The deadline for applying to courses is mid-April. This is the applicants’ first submission of course rankings; they can adjust their rankings until July. New courses cannot be added, but courses can be dropped from the ranking. Once the rankings are final in July, offers are made sequentially where the order is determined by the applicants’ application score: the highest-ranked applicant receives an offer for her preferred course; the second-highest applicant receives an offer for her highest-ranked course among the remaining courses, and so on. This is repeated until either slots run out, or applicants run out. This allocation mechanism corresponds to a so-called serial dictatorship, which is both Pareto efficient and strategy-proof ( Svensson 1999 ). By design, this mechanism should elicit the applicants’ true ranking of fields at time of application. 14
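The allocation rule can be summarized in a few lines of Python. The sketch below is a stylized illustration of a serial dictatorship, with invented applicants, courses, and capacities; it is not the admission service's actual implementation, which among other things handles quotas and tie-breaking rules.

```python
def serial_dictatorship(applicants, capacity):
    """Assign courses by serial dictatorship.

    applicants: list of (applicant_id, score, ranking), where ranking is the
                applicant's ordered list of course ids (most preferred first).
    capacity:   dict mapping course id -> number of slots.
    Returns a dict mapping applicant_id -> offered course id (or None).
    """
    slots = dict(capacity)
    offers = {}
    # The applicant with the highest application score chooses first; ties are
    # broken arbitrarily in this sketch.
    for applicant_id, score, ranking in sorted(applicants, key=lambda a: -a[1]):
        offers[applicant_id] = None
        for course in ranking:
            if slots.get(course, 0) > 0:
                slots[course] -= 1
                offers[applicant_id] = course
                break
    return offers

# Illustrative use: two courses with one slot each, three applicants.
apps = [("A", 57.0, ["law_oslo", "teach_oslo"]),
        ("B", 49.0, ["law_oslo", "teach_oslo"]),
        ("C", 47.0, ["teach_oslo", "law_oslo"])]
print(serial_dictatorship(apps, {"law_oslo": 1, "teach_oslo": 1}))
# A gets law_oslo, B gets teach_oslo, C receives no offer.
```

Because each applicant simply receives her highest-ranked course that still has a free slot, reporting anything other than her true ranking cannot improve her offer, which is the sense in which the mechanism is strategy-proof.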

This procedure generates a first set of offers that are sent out to the applicants in late July. Applicants then have a week to accept the offer, choose to remain on a waiting list, or withdraw from the application process. The slots that remain after the first round are then allocated in a second round of offers in early August among the remaining applicants on the waiting list. These new offers are generated following the same serial dictatorship mechanism as in the first round. Since applicants in this second round can only move up in the offer sequence, second-round offers either coincide with first-round offers or are offers for a higher-ranked course. In mid-August, the students begin their study in the accepted field and institution. If the students want to change field or institution, they usually need to participate in next year’s admission process on equal terms with other applicants.

III.B. Admission Cutoffs, Instruments, and Next-Best Alternatives

For courses with excess demand, this admission process generates a setup where applicants scoring above a certain threshold are much more likely to receive an offer for a course they prefer than applicants with the same course preferences but a marginally lower application score. This creates credible instruments from discontinuities that effectively randomize applicants near unpredictable admission cutoffs into different fields and institutions.

To see this, consider Panel A of Table I , which shows a stylized example of an application where the applicant is on the margin of getting different field offers from the same institution. Suppose the applicant has an application score of 49. In this case, she would receive an offer for her third-ranked course. This defines her preferred field in the local course ranking around her application score, namely, field 2. In this local ranking, her next-best alternative is field 3, the field she would prefer if field 2 were not feasible. We can now compare her to an applicant with the same ranking of fields, but who has a slightly lower application score of 47. This applicant has the same preferred field and next-best alternative in the local ranking around her application score, but receives an offer for field 3 instead of 2. The intuition behind our identification strategy is that by comparing the outcomes of these applicants we can estimate the effect of getting an offer of field 2 instead of 3, while ruling out that differences in their outcomes are driven by unobserved heterogeneity in preferences, ability, and other confounders.

Table I

Illustration of Identification of Payoffs

Course ranking          Inst.    Field    Cutoff
Panel A: Fields
    1st best                              57
    2nd best                              52
    3rd best                              48
    4th best                              45
  Application score = 49, local course ranking: preferred course offered (yes); next-best course offered (no)
  Application score = 47, local course ranking: preferred course offered (no); next-best course offered (yes)

Panel B: Institutions
    1st best                              52
    2nd best                              48
    3rd best                              46
    4th best                              43
  Application score = 49, local course ranking: preferred course offered (yes); next-best course offered (no)
  Application score = 47, local course ranking: preferred course offered (no); next-best course offered (yes)

Panel B of Table I provides another example where two applicants are on the margin of receiving an offer for the same field but from different institutions. One applicant has an application score of 49 and receives an offer from institution A, whereas the other receives an offer from institution B because she has a slightly lower application score of 47. By comparing the outcomes of these applicants we can estimate the effect of getting an offer from institution A instead of B, while ruling out that differences in their outcomes are driven by unobserved heterogeneity.

In the two examples of Table I , the applicants either receive offers for different fields from the same institution or from different institutions for the same field. This illustrates that we can have independent variation in field and institution choices. In principle, we could therefore estimate the payoff to field of study separately for a given institution. In our baseline 2SLS model, however, we abstract from differences in institutional quality, recognizing that changing field could involve changes in institution of study. Indeed, the baseline estimates of the payoffs to field of study will capture any effect linked to the change in field induced by crossing the admission cutoff between the preferred field and the next-best alternative. We therefore think of the baseline estimates as measures of earnings gains from completing one field of study compared to another, with the understanding that these gains may not necessarily arise only from field-specific human capital. To examine the role of institutional quality in explaining the estimated payoffs, we take several steps. In particular, we extend the baseline 2SLS model to include a full set of indicator variables for both field and institution of study, and use the admission cutoffs to instrument for these endogenous variables.

IV. Data and Descriptive Statistics

IV.A. Data Sources and Sample Selection

Our analysis combines several sources of Norwegian administrative data. We have records for all applications to postsecondary education for the years 1998 to 2004. We retain each individual’s first observed application, also requiring that they have no postsecondary degree at the time of application. Next, we link this application information for the 1998–2004 cohorts to the Norwegian population registry to obtain information on socioeconomic background. In particular, we have information about parental education (both for the mother and the father), the father’s income, and the immigrant status of the family. This information is predetermined in the context of our analysis and refers to the year when the applicant was 16 (the father’s earnings are averaged over the applicant’s ages 16 and 19).

For our treatment variables, we have information for all applicants on their completed field and the institution from which they graduate. This information comes from the national education register for the years 1998 to 2012. 15 Our measure of annual earnings comes from the Norwegian tax registers over the period 1998 to 2012. This means that every cohort is observed for at least eight years after application. The measure of earnings encompasses wage income, income from self-employment, and transfers that replace such income, such as short-term sickness pay and paid parental leave (but excludes unemployment benefits). Earnings are deflated using the CPI with 2011 as the base year, and are converted to U.S. dollars using exchange rates. 16

In most of the analysis, we estimate the payoff to field of study among individuals who complete postsecondary education in terms of their earnings eight years after application. Relating the moment at which we measure earnings to the year of application (rather than the year of completion) avoids endogeneity issues related to time to degree. Another advantage is that, eight years after application, most individuals will have made the transition to work. As a result, the estimated payoffs should be interpreted as earnings gains early in the working career, rather than internal rates of return on the investment in human capital. In a robustness analysis, we show that the estimated payoffs change little if we include individuals who do not complete postsecondary education or measure earnings one year earlier or later.

In our main analysis, we aggregate specific fields into 10 broad fields of study, listed in Table II . We retain all applicants who apply for at least two broad fields of study, where the most preferred field needs to have an admission cutoff, and the next-best alternative must have a lower cutoff (or no binding cutoff). If the applicant applied to several courses within her preferred broad field of study, then we use the lowest cutoff as the one that puts her on the margin between her preferred field and next-best field. This ensures that we have information on both the preferred and the next-best field, and a source of identification (potentially binding admission cutoffs) in our analysis. In a robustness analysis, we show the sensitivity of the results to using a more detailed classification of fields. Our estimation sample covers applicants to 48 institutions. For the analyses of institution effects we pool 15 small university colleges; this affects 1.3% of our sample of applicants.
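To fix ideas, the following sketch shows one way to derive the preferred field, next-best field, and the relevant cutoff from a single ranked application, following the rules just described. The field labels and cutoff values are hypothetical, and the additional sample restrictions (the preferred field must have a binding cutoff and the next-best alternative a lower or no cutoff) would be imposed separately.

```python
def field_margin(ranked_courses):
    """Derive (preferred field, next-best field, relevant cutoff) from an
    applicant's ranked list of courses.

    ranked_courses: list of (broad_field, cutoff) tuples in the applicant's
    preference order; cutoff is None when the course has no binding cutoff.
    The preferred broad field is the field of the top-ranked course, the
    next-best field is the highest-ranked different broad field, and the
    relevant cutoff is the lowest cutoff among courses in the preferred field.
    """
    preferred = ranked_courses[0][0]
    next_best = next((f for f, _ in ranked_courses if f != preferred), None)
    cutoffs = [c for f, c in ranked_courses if f == preferred and c is not None]
    return preferred, next_best, (min(cutoffs) if cutoffs else None)

# Hypothetical application: two science courses, then a teaching course.
ranking = [("Science", 52.3), ("Science", 48.1), ("Teaching", 45.0)]
print(field_margin(ranking))   # ('Science', 'Teaching', 48.1)
```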

Table II

Classification of Broad Fields with Examples of More Detailed Fields

Science: biology; chemistry; computer science; mathematics; physics 
Business: administration; accounting; business studies 
Social science: sociology; political science; anthropology; economics; psychology 
Teaching: kindergarten teacher; school teacher 
Humanities: history; philosophy; languages; media 
Health: nursing; social work; physical therapy 
Engineering (BSc): electrical; construction; mechanical; computer 
Technology: MSc engineering; biotechnology; information technology 
Law: law 
Medicine: medicine; dentistry; pharmacology 

IV.B. Descriptive Statistics

The first column of Table III shows descriptive statistics for our sample of first-time applicants who applied for at least two broad fields of study, and whose most preferred field had an application threshold. We standardize the application score in our sample to have zero mean and a standard deviation of 1. The majority of applicants, about 64%, are female. The applicants are, on average, between 21 and 22 years old when we observe them applying for the first time. 17 Fathers’ earnings (the average of earnings at the applicant’s ages 16 and 19) are, on average, about $66,000, and about 50% of the applicants have a highly educated mother or father.

On average, an applicant receives an offer for her second- or third-ranked course. Around 40% of applicants receive an offer for their first-ranked course, and close to 80% receive an offer for one of their three highest-ranked courses. Less than 1% receive no offer at all. The applicants rank, on average, three different fields and four different institutions. Figure I reports the two most common next-best fields for every preferred field. For example, this figure shows that almost half of the individuals whose preferred field is technology have engineering as their next-best alternative. By comparison, individuals with engineering as their preferred field typically have science as the next-best alternative. It is also clear that humanities, social science, and teaching tend to be close substitutes.

Figure I

Most Common Next-Best Fields by Preferred Fields

We use our sample of applicants to compute the probability of a next-best field given a preferred field. For each preferred field, we report the share of applicants for the two most common next-best fields.


The second column of Table III reports observable characteristics for the whole population of applicants. As can be seen from the table, our sample is younger, has somewhat higher application scores, and has a slightly more advantaged family background than the entire population. Compared to our sample, the average applicant in the population is more likely to receive an offer for her first-ranked course; at the same time, the fraction that does not receive any offer is higher in the population of applicants.

We can also compare average earnings across fields and institutions in our sample to those in the overall population of applicants. Panel A of Figure II shows that average earnings across fields are closely aligned along the 45-degree line, suggesting that our sample is very comparable to the other applicants in terms of levels of earnings by field. Panel B of Figure II shows that our sample is also fairly comparable in terms of average earnings by institution of study. Comparing Panels A and B, it is also clear that the variation in earnings across institutions is much smaller than the variation across fields. With the exception of the Norwegian School of Economics (the outlier with mean earnings of about $80,000), average earnings vary relatively little across institutions.

Figure II

Mean Earnings by Completed Field and Institution: Sample and All Applicants

This figure reports mean earnings by completed field/institution for our sample of applicants and for all applicants. Field of study is classified into ten broad fields, and institutions with less than 1,000 students in our sample are pooled into one category. Earnings are measured eight years after application. The measures of earnings are regression adjusted for year of application.


Table III

Descriptive Statistics of Estimation Sample of Applicants and of All Applicants

                                        Sample                  All applicants
                                        Mean      Std. dev.     Mean      Std. dev.
Age 21.6 (4.4) 23.0 (5.8) 
Female 0.64  0.62  
Application score −0.00 (1.00) −0.23 (1.05) 
Earnings 8 yrs after appl. ($1,000) 55.5 (31.2) 52.8 (30.8) 
Parents are immigrants 0.04  0.04  
Mother has higher educ. 0.37  0.30  
Father has higher educ. 0.40  0.33  
Father's earnings ($1,000) 65.6 (56.4) 58.0 (51.9) 
Fields ranked 3.01 (1.11) 2.16 (1.24) 
Inst. ranked 3.70 (2.36) 3.18 (2.45) 
Rank of best (final) offer 2.50 (2.00) 1.82 (1.62) 
Offered rank=1 0.40  0.58  
Offered rank=2 0.25  0.15  
Offered rank=3 0.13  0.07  
No offer 0.01  0.11  
Observations 50,083  218,824  

Note: Columns display descriptive statistics of our estimation sample of applicants and of all applicants, respectively. Earnings of the applicants are measured eight years after application. All other characteristics are measured before or at the time of application. Offered rank = X is a dummy variable for whether an individual is offered her Xth-ranked choice.

V. Graphical Illustration of Research Design

As explained in Section III.B, our research design uses the admission cutoffs between preferred and next-best alternatives (in the local course ranking) as instruments for completed field of study. A virtue of this design is that it provides a transparent way of illustrating how the payoffs to field of study are identified. To this end, we begin with a graphical depiction before turning to the formal econometric model and the regression results.

V.A. Admission Cutoffs and Field of Study

Figure III pools all the fields and admission cutoffs and illustrates how crossing the cutoffs affects (i) the chance of receiving an offer to enroll in the preferred field, and (ii) the probability of obtaining a degree in the preferred field. The data are normalized so that zero on the x -axis represents the admission cutoff to the preferred field, and observations to the left (right) of this cutoff therefore have an application score that is lower (higher) than the cutoff. We plot the unrestricted means in bins and include estimated local linear regression lines on each side of the admission cutoff.
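The construction of these figures can be sketched as follows. The routine below is a simplified stand-in: it normalizes the running variable, computes bin means, and fits a separate linear regression on each side within a window (a uniform-kernel approximation to the local linear fits in the figures); the bandwidth and bin choices are illustrative, not the ones used for the published figures.

```python
import numpy as np

def rd_plot_inputs(score, outcome, cutoff, n_bins=20, bandwidth=5.0):
    """Binned means and side-specific linear fits around an admission cutoff.

    Returns bin centers, bin means, and the (intercept, slope) of a linear fit
    on each side of the cutoff within the bandwidth.
    """
    x = np.asarray(score, dtype=float) - cutoff          # normalize: 0 = cutoff
    y = np.asarray(outcome, dtype=float)
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], y[keep]

    edges = np.linspace(-bandwidth, bandwidth, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    means = np.array([y[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(n_bins)])

    def linear_fit(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        return np.linalg.lstsq(X, y[mask], rcond=None)[0]

    fit_below, fit_above = linear_fit(x < 0), linear_fit(x >= 0)
    # The estimated jump at the cutoff is fit_above[0] - fit_below[0].
    return centers, means, fit_below, fit_above
```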

Figure III

Admission Thresholds and Preferred Field Offer and Completion

This figure shows the sample fraction that is offered or that completes the preferred field by application score. We pool all admission cutoffs and normalize the data so that zero on the x -axis represents the admission cutoff to the preferred field. We plot unrestricted means in bins and include estimated local linear regression lines on each side of the cutoff.


The probability of being offered the preferred field increases by about 50 percentage points at the admission cutoff. 18 There is also a sharp jump in the probability of graduating with a degree in the preferred field at the cutoff, with graduation rates increasing from roughly 0.46 to 0.62. There are two reasons why the jump in the offer rate is larger than the jump in the graduation rate: some individuals are offered their preferred field but never complete their program; others do not initially get an offer but they reapply in the following years and end up graduating with a degree in the preferred field. Since our treatment variables are defined as graduating with a degree in a given field, the former group of individuals are never takers (i.e., they do not complete their preferred field even when the instrument is switched on) while the latter group are always takers (i.e., they complete their preferred field even when the instrument is switched off). Our IV estimates are not informative about the payoffs to field of study for never or always takers.

V.B. Admission Cutoffs and Sorting

A potential threat to our research design is that people might try to sort themselves above the cutoff in order to receive an offer for their preferred field of study. If such sorting occurs, we would expect to see discontinuities around the cutoffs in the density of applicants and in the observed characteristics.

Figure IV shows the estimated density when we pool all the fields and admission cutoffs. What matters for our research design is that there is no discontinuous jump in probability mass at zero, since that would point to sorting. As can be seen in Figure IV , there is no indication that applicants are able to strategically position themselves around the application boundary, and the density test proposed by McCrary (2008) does not reject the null hypothesis of no sorting.

Figure IV

Bunching Check around the Admission Cutoffs

This figure shows the log density of applicants in our sample by application score. We pool all admission cutoffs and normalize the data so that zero on the x -axis represents the admission cutoff to the preferred field. We plot unrestricted means in bins and include estimated local linear regression lines on each side of the cutoff.


A complementary approach for assessing the validity of our instruments is to investigate covariate balance around the cutoffs. We consider several individual characteristics that correlate with earnings: gender, age, application score, parental education, and parental income. We construct a composite index of these predetermined characteristics, namely predicted earnings, using the coefficients from an OLS regression of earnings on these variables. Figure V shows average predicted earnings in small intervals on both sides of the pooled application cutoffs and a local linear regression fit. There is no indication that applicants on different sides of the application boundaries are different on observables. Indeed, a formal test is highly insignificant and we do not reject continuity in predicted earnings at the cutoff.
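The composite index can be built in a few lines. The sketch below (with hypothetical column names) regresses earnings on the predetermined covariates by OLS and uses the fitted values as predicted earnings, which can then be examined around the pooled cutoffs in the same way as the outcomes above.

```python
import numpy as np
import pandas as pd

def predicted_earnings(df: pd.DataFrame) -> pd.Series:
    """Composite index of predetermined characteristics: fitted values from an
    OLS regression of earnings on the covariates listed in the text.
    The column names are hypothetical."""
    covariates = ["female", "age", "application_score",
                  "mother_higher_educ", "father_higher_educ", "father_earnings"]
    X = np.column_stack([np.ones(len(df))] + [df[c].astype(float) for c in covariates])
    coef, *_ = np.linalg.lstsq(X, df["earnings"].astype(float), rcond=None)
    return pd.Series(X @ coef, index=df.index, name="predicted_earnings")

# The resulting index can be passed to the binned-means routine sketched above
# to check for a discontinuity in predicted earnings at the cutoff.
```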

Figure V

Balancing Check around the Admission Cutoffs

This figure shows the predicted earnings by application score. Earnings are predicted from an OLS regression of earnings on gender, age, application score, parental education, and parental income. We pool all admission cutoffs and normalize the data so that zero on the x -axis represents the admission cutoff to the preferred field. We plot unrestricted means in bins and include estimated local linear regression lines on each side of the cutoff.


Taken together, Figures IV and V suggest that students do not sort themselves around the admission cutoffs. The absence of sorting around the cutoffs is consistent with key features of the admission process. First, the exact admission cutoffs are unknown both when individuals do their high school exams and when they submit their application. Second, the admission cutoffs vary considerably over time, in part because of changes in demand, but also because changes in funding cause variation in the supply of slots. Third, there is limited scope for sorting around the cutoff during the last semester of high school when students do their final exams and apply for postsecondary education. In our setting, the application scores depend on the academic results over all three years in high school, unlike countries in which admission is based only on how well the students do in their final-year exams or on college entrance tests.

V.C. Admission Cutoffs and Postgraduation Earnings

The figures above show that individuals on both sides of the cutoff are similar in their predetermined characteristics, but differ in the field in which they receive an offer and ultimately graduate. Figure VI examines whether the abrupt changes in field of study at the cutoffs are associated with discontinuous changes in postgraduation earnings. In particular, Figure VI illustrates how the changes in earnings depend both on individuals’ preferred field and their next-best alternatives.

Figure VI

Average Earnings around Admission Cutoffs

To construct the graphs, we estimate equation (13) separately for every next-best field. We plot the predicted jumps \hat{\alpha}_{jk}. The lines show cell averages of predicted earnings when all covariates are set equal to their global means. We display the average residual for different quartiles of the application score on each side of the cutoff. The first graph aggregates the estimates across all fields and cutoffs. The second graph aggregates only across next-best fields. The third and fourth graphs show estimates for specific combinations.

To construct the figure, we estimate the following regression separately for every next-best field k :  

(13)
y = \sum_{j \neq k} \alpha_{jk} z_j + x'\xi_k + \nu_{jk} + \varepsilon_k,
where y is earnings eight years after application, ν_{jk} is a fixed effect for preferring field j and having k as the next-best field (in the local course ranking), z_j is the predicted offer for field j, which is equal to one if j is the individual’s preferred field and her application score exceeds the admission cutoff for field j, and x is a vector of controls that includes the running variable (application score), gender, cohort, and age at application.

Figure VI plots the predicted jumps (the \hat{\alpha}_{jk}'s) from this regression model. The lines show cell averages of predicted earnings when all covariates are set equal to their (global) means. To show the data, the figure also adds the average residual for different quartiles of the application score on each side of the threshold. The first graph of Figure VI aggregates the estimates across all fields and cutoffs, and shows the average change in earnings when individuals cross the cutoff for admission to their preferred field. While there is a sharp jump in the probability of completing the preferred field at the cutoff, there is only a small increase in average earnings. For those above the cutoff of their preferred field, the second graph aggregates only across next-best fields. Thus, the graph displays how earnings change when individuals cross the cutoff for admission to specific preferred fields, namely business and teaching. Figure VI shows that the earnings of those offered business as a preferred field are much higher than the average of those offered their next-best field, while the earnings of those offered teaching are much lower.
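
As an illustration of how the jumps in equation (13) could be estimated for one next-best field k, the sketch below constructs one predicted-offer dummy per preferred field and regresses earnings on these dummies, preferred-field fixed effects, the running variable, and basic controls. Field and variable names are hypothetical and the data are simulated; the real estimation uses the register data described above.

```python
# Sketch of equation (13) for a single next-best field k: earnings regressed on the
# predicted-offer dummies z_j (one per preferred field), preferred-field fixed effects,
# the running variable, and controls. The coefficients on z_j are the jumps alpha_jk.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
fields = ["science", "business", "teaching"]       # hypothetical preferred fields j
n = 3000
df = pd.DataFrame({
    "preferred": rng.choice(fields, n),
    "score": rng.normal(0, 1, n),                  # running variable, centered at the cutoff
    "female": rng.integers(0, 2, n),
    "age": rng.integers(19, 25, n),
})
df["above"] = (df["score"] >= 0).astype(int)
for f in fields:
    # z_j = 1 if j is the preferred field and the score exceeds its cutoff
    df[f"z_{f}"] = ((df["preferred"] == f) & (df["above"] == 1)).astype(int)
df["earnings"] = 40 + 10 * df["z_science"] + rng.normal(0, 15, n)

fit = smf.ols(
    "earnings ~ z_science + z_business + z_teaching + C(preferred) + score + female + age",
    df,
).fit()
print(fit.params[["z_science", "z_business", "z_teaching"]])   # estimated alpha_jk's
```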

As shown formally in Section II , the basic problem with interpreting the changes in earnings from crossing the admission cutoffs to the preferred fields is that individuals may come from different alternative fields, so the margins of choice will vary. The third and fourth graph of Figure VI illustrate how the change in earnings from crossing the admission cutoff to teaching can be positive or negative depending on the next-best alternative. In particular, crossing the admission cutoff to teaching is associated with a sharp drop in earnings if the next-best alternative is engineering, while we see a small jump in earnings if the next-best alternative is humanities.

Taken together, these graphs highlight how information on next-best alternatives is key to identifying the payoffs from choosing one field of study compared to another. In the next section, we detail how the admission cutoffs and the measures of next-best alternatives are used in IV estimation of the payoffs to different fields.

VI. Baseline 2SLS Model

Section II laid the groundwork for our empirical analysis by showing the assumptions under which IV estimates can be given a causal interpretation as LATEs in settings with multiple unordered treatments. In particular, Proposition 2 showed that we can identify the LATE from choosing one field of study over another if our instruments satisfy an irrelevance condition, and if we can condition on individuals’ next-best fields. 19 This identification result motivates and guides our specification of the empirical 2SLS model. Specifically, we estimate the following system of equations separately for individuals with next-best field k (in the local course ranking):  

(14)
y = \sum_{j \neq k} \beta_{jk} d_j + x'\gamma_k + \lambda_{jk} + \varepsilon_k,
and  
(15)
d_j = \sum_{l \neq k} \pi_{lj} z_l + x'\psi_{jk} + \eta_{jk} + u_{jk}, \qquad j \neq k,
where (14) is the second-stage equation, and (15) are the first-stage equations, one for each field j ≠ k. Our outcome variable y is earnings eight years after applying, and d_j ≡ 1[d = j] equals 1 if an individual completed field j and 0 otherwise. Since field k is the left-out reference category, β_{jk} is then the payoff to completing field j instead of k for individuals who have specified field k as the next-best field.

The instruments z_j in equation (15) are the predicted offers for field j: z_j is equal to 1 if j is the individual’s preferred field and her application score exceeds the admission cutoff for field j, and 0 otherwise. As in Section II, we therefore have as many binary instruments as treatments (one z_j for each completed field dummy d_j), and for a given individual at most one of the instruments z_j can equal 1 (namely, that of her preferred field in the local course ranking).

Our estimation approach exploits the fuzzy regression discontinuity design implicit in the admission process described above, where individuals with application scores above the cutoff are more likely to receive an offer. Although the identification in this setup is ultimately local, we use 2SLS because our sample sizes do not allow for local nonparametric estimation. While the model laid out in Section II abstracted from any control variables, we now need to include certain covariates to ensure the exogeneity of our instruments.

First, all equations include controls for the running variable. While our baseline specification controls for the application score linearly, we report results from several specification checks, all of which support our main findings. Second, we control for individuals’ preferences by adding fixed effects for preferring field j and having k as the next-best field (in the local course ranking): λ_{jk} and η_{jk}. To gain precision, we estimate the system of equations (14)–(15) jointly for all next-best fields, allowing for separate intercepts for preferred field and next-best field (i.e., λ_{jk} = μ_k + θ_j and η_{jk} = τ_k + σ_j). In a robustness check, we show that our estimates are robust to allowing for separate intercepts for every interaction between preferred and next-best field. Finally, to reduce residual variance we also add controls for gender, cohort, and age at application, which are predetermined.
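
The sketch below shows, under simplifying assumptions, how the 2SLS system (14)–(15) could be estimated for individuals sharing a given next-best field k: completed-field dummies are instrumented by the predicted-offer dummies, with preferred-field dummies and the running variable as exogenous controls. It uses a manual two-step projection on simulated data with hypothetical names; in practice one would use an IV routine that also reports the corrected 2SLS standard errors.

```python
# Minimal sketch of the 2SLS system (14)-(15) for one next-best field k:
# endogenous completed-field dummies d_j instrumented by predicted offers z_j.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
fields = ["science", "business"]              # treated fields j; k itself is the omitted category
n = 4000
df = pd.DataFrame({
    "preferred": rng.choice(fields + ["k"], n),
    "score": rng.normal(0, 1, n),
})
df["above"] = (df["score"] >= 0).astype(int)
for f in fields:
    df[f"z_{f}"] = ((df["preferred"] == f) & (df["above"] == 1)).astype(int)
    # completion responds to the offer with probability 0.7 (fuzzy design)
    df[f"d_{f}"] = ((df[f"z_{f}"] == 1) & (rng.random(n) < 0.7)).astype(int)
df["earnings"] = 40 + 15 * df["d_science"] + 5 * df["d_business"] + rng.normal(0, 10, n)

# Exogenous controls: intercept, running variable, preferred-field dummies.
W = pd.get_dummies(df["preferred"], prefix="pref", drop_first=True).astype(float)
X_exog = np.column_stack([np.ones(n), df["score"], W])
D = df[[f"d_{f}" for f in fields]].to_numpy(float)    # endogenous treatments
Z = df[[f"z_{f}" for f in fields]].to_numpy(float)    # instruments

# First stage: project each d_j on (instruments, exogenous controls).
ZX = np.column_stack([Z, X_exog])
D_hat = ZX @ np.linalg.lstsq(ZX, D, rcond=None)[0]

# Second stage: OLS of earnings on fitted treatments and exogenous controls.
X2 = np.column_stack([D_hat, X_exog])
beta = np.linalg.lstsq(X2, df["earnings"].to_numpy(float), rcond=None)[0]
print(dict(zip([f"beta_{f}" for f in fields], beta[:len(fields)])))
```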

From the resulting 2SLS estimation of equations (14)–(15) across all next-best fields, we obtain a matrix of the payoffs to field j compared to k for those who prefer j and have k as next-best field. Following Imbens and Rubin (1997) and Abadie (2002) , we also decompose these payoff estimates into the complier average potential earnings with field j and the complier average potential earnings with field k . This decomposition helps in interpreting the magnitude of the estimated payoffs.
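
For intuition on the Imbens-Rubin/Abadie decomposition, the sketch below works through the simplest case of a single binary instrument and binary treatment (a simplification of the multi-treatment setting in the text): it recovers the complier mean potential earnings with and without the treatment from moments of the data. Values are simulated for illustration.

```python
# Sketch of the complier potential-outcome decomposition for one binary instrument z
# (crossing the relevant cutoff) and one binary treatment d (completing the field).
import numpy as np

rng = np.random.default_rng(4)
n = 10000
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.6
d = np.where(complier, z, rng.integers(0, 2, n))     # compliers follow the instrument
y = 40 + 15 * d + rng.normal(0, 10, n)               # earnings

p_c = d[z == 1].mean() - d[z == 0].mean()            # first stage = complier share
ey1_c = ((y * d)[z == 1].mean() - (y * d)[z == 0].mean()) / p_c
ey0_c = ((y * (1 - d))[z == 0].mean() - (y * (1 - d))[z == 1].mean()) / p_c
print(f"complier E[Y(1)]={ey1_c:.1f}, E[Y(0)]={ey0_c:.1f}, LATE={ey1_c - ey0_c:.1f}")
```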

VII. Payoffs to Field of Study

VII.A. First-Stage Estimates

We begin by estimating the first-stage regressions. Since we estimate the first-stage equations separately by next-best fields, individuals who are induced to complete a field by a change in an instrument come from a particular alternative field. As a result, the first-stage estimates are informative about the substitution pattern across fields from small changes in admission cutoffs.

The first-stage estimates and the corresponding F-statistics can be found in Online Appendix Table B.1. For brevity, this table reports the own instrument of each completed field: for the dependent variable d_j, we report the estimated coefficient of z_j and its F-statistic. The first-stage estimates confirm that crossing the admission cutoffs between next-best alternatives and preferred fields leads to jumps in the probability of completing the preferred fields. The F-statistics are generally large, suggesting that weak instruments are not a key concern. 20

From these first-stage estimates we can compute the proportion of compliers for whom the relevant choice margin is preferred field j versus next-best alternative k . Figure VII displays the two most common next-best fields for every preferred field. For example, this figure reveals that science is the typical next-best field for compliers who prefer technology or engineering. It is also clear that humanities, social science, and teaching tend to be close substitutes. By comparing Figure VII to Figure I , we see that the compliers to our instruments are fairly similar to noncompliers in terms of their preferred and next-best fields.
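
The sketch below illustrates one simplified way such complier weights could be computed: weight each (preferred, next-best) pair by its first-stage coefficient (the jump in completion at the cutoff) times the number of applicants on that margin, then normalize within each preferred field. The numbers are purely illustrative, not estimates from the paper.

```python
# Sketch of complier weights by (preferred, next-best) pair: first-stage jump times the
# share of applicants on that margin, normalized within the preferred field.
import pandas as pd

cells = pd.DataFrame({
    "preferred":   ["technology", "technology", "engineering", "engineering"],
    "next_best":   ["science",    "business",   "science",     "teaching"],
    "first_stage": [0.45, 0.40, 0.50, 0.35],     # illustrative pi_jk from equation (15)
    "n_applicants": [800, 300, 600, 200],
})
cells["compliers"] = cells["first_stage"] * cells["n_applicants"]
cells["weight"] = cells["compliers"] / cells.groupby("preferred")["compliers"].transform("sum")
print(cells[["preferred", "next_best", "weight"]])
```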

Figure VII

Complier Weights of Alternative Fields by Completed Fields

We use our first-stage estimates to compute the probability among our compliers of a next-best field given a completed field. For each preferred field, we report the share of compliers for the two most common next-best fields.

By computing the proportion of compliers by preferred and next-best field, we also learn that certain combinations of fields are rare. In particular, few compliers have law as their next-best field, and virtually none have medicine as the next-best field. This means that we do not have support in our data to identify the effect of choosing field j instead of medicine, and that we have too few compliers to obtain precise estimates of the payoff to choosing field j instead of law.

VII.B. Baseline 2SLS Estimates

Table IV reports the 2SLS estimates from the model given by equations (14) and (15). By estimating the model separately by next-best field, we are able to identify the payoffs to different fields while correcting for selection bias and keeping the next-best alternatives as measured at the time of application fixed. For example, the first column reports estimates of the payoffs to different fields as compared to humanities. This column shows significant gains in earnings to all fields as compared to humanities. The payoffs are largest for medicine, followed by engineering, science, business, law, and technology. By way of comparison, choosing health, social science, or teaching instead of humanities has substantially lower but still significant payoffs. To better understand the magnitude of the estimated payoffs, the row labeled Average y_k in Table IV reports the weighted average of the levels of potential earnings for compliers in their next-best field. For each next-best field, the weights sum to one and reflect the proportion of compliers by preferred field. By comparing the payoff to science instead of humanities to the counterfactual earnings in the next-best field (humanities), we see that these individuals almost triple their earnings early in their working career.

Table IV also reports estimates of the coefficients on gender and application score. As expected, women tend to earn less than men, but the gender pay gap is relatively small compared to many of the earnings differences by field of study. Interestingly, the association between application scores (standardized to have zero mean and standard deviation 1) and earnings tends to be relatively weak. This suggests that grades in high school are weak predictors of earnings, over and above their impact on admission to different types of field of study.

Table IV

2SLS estimates of the payoffs to field of study (USD 1,000)

| Completed field (j) \ Next-best field (k) | Humanities | Soc Science | Teaching | Health | Science | Engineering | Technology | Business | Law |
|---|---|---|---|---|---|---|---|---|---|
| Humanities | | 21.4* (11.0) | −4.7 (9.8) | −22.9* (12.1) | 5.0 (11.9) | −38.5** (14.7) | 6.9 (48.3) | −42.2** (10.6) | −156.3 (437.3) |
| Social Science | 18.7** (6.7) | | 9.8 (11.6) | −10.8 (13.0) | 55.5** (21.5) | −55.4** (20.6) | −110.4 (103.0) | −28.4** (10.7) | −76.1 (86.4) |
| Teaching | 22.3** (5.0) | 31.4** (7.9) | | 1.8 (6.6) | 23.5** (9.5) | −33.9** (12.5) | −35.3 (37.1) | −21.1** (7.1) | 22.8 (127.9) |
| Health | 18.8** (6.3) | 30.7** (7.6) | 7.7** (2.8) | | 28.9** (7.6) | −27.9** (10.4) | −43.4** (20.8) | −17.4** (4.0) | −55.2 (97.7) |
| Science | 53.7** (18.4) | 69.6** (22.4) | 38.6** (14.2) | 29.6** (11.5) | | −2.2 (14.6) | 16.8 (18.1) | −4.9 (10.5) | 148.3 (276.2) |
| Engineering | 59.8 (50.6) | −5.5 (58.2) | 75.2** (37.5) | 0.2 (16.4) | 52.4** (21.0) | | −46.0 (43.9) | −13.0 (23.7) | −57.7 (166.6) |
| Technology | 41.9** (10.8) | 58.7** (10.1) | 22.1* (12.4) | 32.5** (10.1) | 68.1** (9.6) | −5.6 (12.0) | | 7.0 (9.5) | −53.1 (147.5) |
| Business | 48.1** (11.3) | 61.9** (12.0) | 31.0** (8.8) | 30.2** (10.9) | 58.0** (10.5) | −3.4 (12.6) | 28.5* (15.6) | | 3.5 (83.0) |
| Law | 46.3** (7.2) | 55.6** (8.3) | 36.6** (11.6) | 21.5* (11.5) | 40.1** (9.7) | −27.5 (18.3) | −15.6 (18.0) | −1.4 (8.7) | |
| Medicine | 83.3** (9.8) | 79.4** (10.7) | 62.6** (9.0) | 45.6** (7.0) | 81.3** (9.7) | 21.1 (20.7) | 40.1** (11.7) | 23.3** (8.8) | 14.8 (83.6) |
| Female | −7.0** (1.1) | −6.3** (1.6) | −10.3* (1.3) | −5.6** (0.9) | −5.3** (1.3) | −5.1** (1.0) | −4.1** (1.6) | −7.0** (3.5) | −10.6 (6.9) |
| Application score | −0.6 (0.8) | 4.3** (1.6) | 4.0** (0.9) | 1.6** (0.6) | −0.7 (0.7) | 1.1* (0.6) | −0.1 (1.3) | 0.1 (2.8) | 13.8 (14.6) |
| Average y_k | 30.0 | 23.4 | 46.2 | 51.8 | 27.3 | 87.9 | 78.4 | 75.6 | 105.8 |
| Observations | 8,391 | 11,030 | 10,987 | 3,269 | 6,422 | 3,085 | 1,245 | 4,403 | 1,251 |

Notes. From 2SLS estimation of equations (14)–(15), we obtain a matrix of the payoffs to field j as compared to k for those who prefer j and have k as next-best field. Each cell is a 2SLS estimate (with standard errors in parentheses) of the payoff to a given pair of preferred field and next-best field. The rows represent completed fields and the columns represent next-best fields. The row labeled Average y_k reports the weighted average of the levels of potential earnings for compliers in the given next-best field. The final row reports the number of observations for every next-best field. Stars indicate statistical significance at the *10% level and **5% level.

The payoffs we estimate are local average treatment effects of instrument-induced shifts in field of study. Since our instruments are admission cutoffs, they pick out individuals who are at the margin of entry to particular fields. We therefore need to be cautious in extrapolating the payoffs we estimate to the population at large. However, despite the local nature of our estimates, the payoffs among the compliers to our instruments are informative about policy that (marginally) changes the supply of slots in different fields. In Kirkeboen, Leuven, and Mogstad (2014) , we show that the effect of a policy that changes the field people choose depends inherently on the next-best alternatives, both directly through the next-best specific payoffs, and indirectly through the fields in which slots are freed up.

Figure VIII summarizes the results in Table IV, and shows the distribution of payoffs among the compliers for every combination of preferred field and next-best alternative. This figure illustrates that most compliers earn more in their preferred field than they would have earned in the next-best alternative. For many fields the payoffs rival common estimates of college earnings premiums, suggesting that the choice of field is potentially as important as the decision to enroll in college. In our data, for example, individuals who did not complete any postsecondary education earned, on average, $43,200 at age 30, whereas the average earnings of individuals with a postsecondary degree were $54,700 at the same age.

Figure VIII

Distribution of Estimated Payoffs ($1,000) to Field of Study

This figure graphs the complier-weighted distribution of estimates in Table IV. Three outliers, each accounting for less than 0.001 of the compliers, are excluded.

Figure IX graphs the weighted averages of payoffs to different completed fields across next-best fields. For each completed field, the weights sum to one and reflect the proportion of compliers by next-best field. This figure illustrates that the compliers tend to prefer fields that give them higher earnings than their next-best alternatives. Indeed, this is true even in fields for which earnings are relatively low, like teaching and health. The only exception is humanities, for which there is a negative average payoff.

Figure IX

Average Estimated Payoffs ($1,000) by Completed Field

This figure graphs the weighted averages of payoffs to different completed fields by next-best field. The payoffs come from Table IV . For each field, the weights sum to one and reflect the proportion of compliers by next-best field.

VII.C. Robustness Analysis

1. Specification Checks

In Kirkeboen, Leuven, and Mogstad (2014) we presented results from several specification checks, all of which support our main findings. We summarize the results from these specification checks here. In the baseline model, we use a linear specification to control for the running variable (application score). However, our results barely move if we instead use a quadratic or a cubic specification of the running variable in the 2SLS estimation. The estimates are also very similar when we use separate quadratic and cubic trends on each side of the admission cutoff.

We also showed that the baseline estimates are robust to adding more control variables or being more flexible in the specification of the controls: we added interactions between preferred field and the predetermined characteristics of the applicant; we included interactions between preferred field and separate trends on each side of the cutoffs; we added a full set of indicator variables for the interactions between preferred and next-best field; and we controlled for predetermined measures of parental education and earnings.

Because of data availability, our baseline specification estimated the payoffs to field of study in terms of earnings eight years after application. As a specification check, we also examined the sensitivity of the estimates to measuring earnings one year later or one year earlier in the individual’s working career. The results showed that the payoffs to field of study do not change appreciably if we use earnings seven or nine years after application as the dependent variable in the 2SLS estimation.

2. Completing Postsecondary Education

Our baseline sample excludes individuals who do not complete any postsecondary education. This sample selection helps in reducing the residual variance, leading to improved power and precision. A possible concern is that our instruments might not only affect field of study but also whether an individual completes any postsecondary education. To address this concern, we perform three specification checks. In each case, we add individuals who did not complete any postsecondary education to the baseline sample.

Using the extended sample, we first examine how the probability of completing any postsecondary education changes when individuals cross the cutoff for admission to their preferred field. Panel a of Figure X shows that crossing the admission cutoffs to preferred fields matters little, if at all, for whether an individual completes any postsecondary education.

Figure X

Robustness Checks: College Noncompletion

Panel a extends the baseline estimation sample to also include applicants who have not completed any higher education eight years after application. The dots show bin means, while the line is a local linear regression. In Panel b we compare the baseline payoff estimates, presented in Table IV, with payoffs estimated when we also include no college as an additional field. In these estimations, applicants who prefer field j and who have no higher education as their second-best alternative are used to construct an instrument for the payoff to field j relative to no college. The correlation reported below Panel b is weighted by the inverse of the sum of the squared estimated standard errors of the payoffs, which is also indicated by the size of the markers in Panel b.

Since our instruments do not affect the probability of completing any postsecondary degree, our baseline estimates should be very similar to the estimated field payoffs from an extended sample that includes applicants without a completed postsecondary degree. This is indeed what we find: the correlation between these estimates is very high (0.97).

Finally, for the full sample that includes noncompleters, we expand the model given by equations (14) and (15) to account for noncompletion and introduce a new endogenous variable in the second stage, namely a dummy variable for completing postsecondary education. Since the original equation was exactly identified, an additional instrument is needed. To achieve identification, we extend our sample further by including individuals who have a preferred field with a binding admission cutoff, and whose next-best alternative is no postsecondary education. The additional instrument is an indicator variable for crossing the admission cutoff from not receiving any offer of postsecondary education to receiving an offer. Panel b of Figure X shows that the estimated payoffs barely move if we account for endogeneity in completing postsecondary education.

VII.D. Peer Quality and Labor Market Experience

As discussed in Section III , our baseline 2SLS estimates capture any effect that operates through the individual changing field of study because of crossing the admission cutoff between his preferred field and next-best alternative. We therefore think of the baseline estimates as measures of earnings gains from completing one field of study compared to another, with the understanding that these gains may not necessarily arise only from field-specific human capital.

One possibility is that payoffs to field of study reflect differences in the quality of peer groups. 21 Depending on whether the individual’s application score is higher (lower) than the admission cutoff, the individual is predicted to be exposed to the peers in the preferred field (next-best alternative). By including a control variable for average application score of the predicted peer group in the first- and second-stage equations, our 2SLS estimates are identified from the variation in predicted field of study that is orthogonal to the average application score of the predicted peers. 22 Graph (a) in Online Appendix Figure B.1 shows that the 2SLS estimates barely move if we control for predicted peer quality in the first and second stages. We find a correlation of 0.98 between the estimated payoffs with and without controls for predicted peer quality.

Another possibility is that differences in labor market experience at the time we measure earnings contribute to the estimated payoffs. In Norway, earning a postsecondary degree typically requires three to five years. To examine the role of labor market experience in explaining the estimated payoffs, we exploit that we observe the expected duration of each field of study. Depending on whether the individual’s application score is higher (lower) than the admission cutoff, the individual is predicted to have an experience level of eight years, minus the expected duration of study. By controlling for the predicted experience level, our 2SLS estimates are identified from the variation in predicted field of study that is orthogonal to predicted experience. Graph (b) of Online Appendix Figure B.1 shows that the 2SLS estimates do not change appreciably if we control for predicted experience in the first and second stages.

VIII. Comparison with Other Approaches

The Norwegian data provide us with (i) one instrument for every field, (ii) measures of individuals’ ranking of fields, and (iii) information about both offered and completed field. In other settings, researchers may not have all three pieces of information. It is therefore useful to compare our estimates to those obtained using other approaches.

VIII.A. Comparison with Alternative IV Estimates

So far, we have focused on the results that build on the identification result in part iii of Proposition 2, and estimated a 2SLS model where we exploited information on next-best alternatives to identify the payoffs to field of study. However, parts i and ii of Proposition 2 point out alternatives to using information on next-best fields: one could use IV to identify payoffs by assuming constant effects, or by imposing strong restrictions on preferences.

To investigate the constant effect assumption, we pool individuals with different next-best fields and reestimate the 2SLS model given by equations (14) and (15). If we reject equality between the baseline 2SLS estimates by next-best alternatives and the corresponding estimates based on pooled 2SLS, then we reject the null hypothesis of constant effects. When we do this we find significant differences in the estimated payoffs, and a joint test of equality is strongly rejected ( p = .014). 23 A plot of the estimated payoffs by next-best alternative against the differences between the pooled estimates and the estimates by next-best field shows that the constant effect assumption leads to severe biases in the estimated payoffs to field of study. 24
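
For intuition only, the sketch below computes an approximate Wald statistic comparing a vector of payoffs estimated by next-best field with pooled-2SLS counterparts. It treats the two sets of estimates as independent, which the full test in the paper does not, and the numbers are placeholders (the first vector echoes a few entries from the humanities column of Table IV, the pooled values are made up).

```python
# Rough Wald-type comparison of two coefficient vectors under an independence
# approximation; this is an illustration of the logic, not the paper's exact test.
import numpy as np
from scipy.stats import chi2

beta_by_nextbest = np.array([53.7, 48.1, 22.3])   # e.g., payoffs vs a given next-best field
se_by_nextbest = np.array([18.4, 11.3, 5.0])
beta_pooled = np.array([40.0, 35.0, 30.0])        # pooled-2SLS counterparts (placeholders)
se_pooled = np.array([6.0, 5.0, 4.0])

diff = beta_by_nextbest - beta_pooled
var_diff = se_by_nextbest**2 + se_pooled**2       # independence approximation
wald = float(diff @ np.diag(1 / var_diff) @ diff)
pval = chi2.sf(wald, df=len(diff))
print(f"Wald = {wald:.2f}, p-value = {pval:.3f}")
```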

The assumption of Behaghel, Crépon, and Gurgand (2013), which imposes restrictions on preferences, also has testable implications. In particular, this assumption implies that for the first stage with dependent variable d_j and omitted comparison field k, the coefficient on z_l should equal 0 for all l ≠ j, k. The corresponding formal test has H0: π_{lj} = 0 for all l ≠ j, k. Online Appendix Table B.III reports the test statistics for this null hypothesis separately for each next-best field k, as well as pooled across all next-best fields. In all cases we strongly reject the restriction on preferences.
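
A generic version of this kind of test is sketched below: run the first stage for d_j with all instruments included and jointly test that the coefficients on the other fields' instruments are zero. Variable names and data are hypothetical; the paper's implementation may differ in details.

```python
# Sketch of the preference-restriction test: in the first stage for d_j (omitted field k),
# the instruments z_l for l != j, k should have zero coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "z_j": rng.integers(0, 2, n),    # own instrument (cutoff crossing for preferred field j)
    "z_l1": rng.integers(0, 2, n),   # instruments for other fields l != j, k
    "z_l2": rng.integers(0, 2, n),
    "score": rng.normal(0, 1, n),
})
df["d_j"] = ((df["z_j"] == 1) & (rng.random(n) < 0.7)).astype(int)

first_stage = smf.ols("d_j ~ z_j + z_l1 + z_l2 + score", df).fit()
test = first_stage.f_test("z_l1 = 0, z_l2 = 0")   # H0: pi_lj = 0 for all l != j, k
print(test)
```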

VIII.B. Comparison with OLS and Intention-to-Treat Effects

To date, most studies of the payoffs to postsecondary education perform OLS estimation and do not use information on next-best alternatives (see, e.g., the review in Altonji, Blom, and Meghir 2012). To compare our results to these studies, we pool individuals with different next-best alternatives and perform OLS estimation of equation (14) with and without controls. For every field we then compute the difference between each baseline 2SLS estimate (reported in Table IV) and the corresponding estimate based on OLS. This exercise shows that the OLS estimates differ substantially from the baseline 2SLS estimates. Indeed, a joint test of equality is strongly rejected ( p < .001). 25

The OLS estimates may differ from our baseline 2SLS estimates for several reasons. One possibility is selection bias due to correlated unobservables. Another possible reason is that the OLS estimates do not use information on next-best alternatives. To explore the relative importance of these two explanations, we pool individuals with different next-best alternatives and perform 2SLS estimation of equations (14) and ( 15 ). We then compute the difference between the baseline 2SLS estimate and the corresponding pooled 2SLS estimate. While instrumenting for field of study makes the pooled estimates more similar to the baseline 2SLS results, the differences remain large. This suggests that information on next-best alternatives is essential for identifying payoffs to field of study. 26

Last, we contrast our baseline 2SLS estimates of the impact of completing a field of study to the intention-to-treat effect of crossing the admission cutoff to a field. We compare the full set of reduced form effects of crossing the admission cutoff to the preferred field from a particular next-best field. We find that the intention-to-treat effects differ substantially from the 2SLS estimates because completion rates are sometimes low and vary considerably across fields. 27

IX. Field Effects versus Institution Effects

Our baseline 2SLS model abstracts from differences in institutional quality. As a result, the baseline estimates of the payoffs to field of study may reflect differences in institutional quality. To examine the role of institutional quality in explaining the estimated field payoffs, we will now exploit that we observe the institution of an individual’s completed education, as well as the institutional identifiers of her preferred field and her next-best alternative. Depending on whether the individual’s application score is higher (lower) than the admission cutoff, the individual is predicted to attend the institution of the preferred (next-best) field. As explained in Section III , this provides independent variation in both predicted field of study and institution.

IX.A. Robustness of Field of Study Estimates

We begin by extending the baseline 2SLS model with completed institution dummies as endogenous variables and instrumenting completed institution with predicted institution. The resulting 2SLS estimates of the payoffs to field of study are now identified from the variation in predicted field of study that is orthogonal to predicted institution. When we do this, the estimated field payoffs change very little and the correlation between the estimated field effects with and without controlling for institution is 0.91. 28

This exercise relies on the separability of field and institution, which may be perceived as being overly restrictive. We can relax this separability assumption by estimating 2SLS regressions stratified by individuals’ predicted institution, which is equivalent to estimating a fully interacted model. This flexibility comes at a cost in that we lose support as sample sizes are reduced, and different institutions offer different (pairs of) fields. Such flexibility allows us, however, to estimate 214 institution-specific field payoffs. 29 When we compare these institution-specific field effects to the results from our separable model (estimated on the same sample), we find that the estimated payoffs to field of study are quite similar (correlation of 0.82). Furthermore, regressing the former on the latter, we find a relationship very close to a 45-degree line. This suggests little if any interaction effect between field of study and institution, and confirms the conclusion from the separable model that the estimated field payoffs are not driven by differences in institutional quality.

IX.B. Empirical Importance of Institution Effects

To further understand the payoff to a postsecondary degree, it is of interest to examine not only the payoffs to field of study but also how attending one institution rather than another affects earnings. There is a large, predominantly American literature on the effect of college quality on labor market outcomes (e.g., Dale and Krueger 2002; Hoxby 2009; Deming, Goldin, and Katz 2012). A first hint of such institution effects in the Norwegian setting was given by Figure II, which showed raw earnings differences between graduates from different institutions; these earnings differences were modest. Of course, graduates from different institutions may also differ in other ways that matter for earnings. When we control for graduates’ observed characteristics such as gender, age, and GPA, the institution effect estimates change somewhat, but the differences between institutions remain of the same order of magnitude as the raw earnings differences.

Although controlling for observables may help in addressing concerns over selection bias, we can improve on the OLS estimates and use our threshold crossing instruments to estimate institution effects with 2SLS. When we do this, we find large and statistically significant impacts from attending different institutions. Like most of the college quality literature, these estimates do not account for differences across institutions in the student composition by field. Controlling for completed field in the regressions is problematic because field of study is potentially endogenous. To the best of our knowledge, no study of institution effects has addressed the potential endogeneity of field of study choices. However, we can address this issue by estimating a 2SLS model with separable institution and field effects, instrumenting completed institution and field with predicted institution and field.

Comparing the estimates of institution effects when we do and do not hold field of study fixed, we find that the correlation is very low (−0.11). Indeed, we find little evidence of significant gains in earnings from graduating from a more selective institution once we hold field of study fixed. The estimated payoffs to the ten largest institutions are shown in Figure XI (where the University of Oslo (UiO), the largest institution in Norway, is the reference). We see that once we control for field of study there remains a large and statistically significant payoff to attending the Norwegian School of Economics, but a joint test of the remaining institution effects is now highly insignificant ( p = .600). Additional calculations show that the earnings differences between the universities (UiO, UiB, NTNU, and UiT) are all minor, while there is a fairly small ($12,000) but marginally significant (at the 10% level) payoff to attending a university college, keeping field of study fixed.

Figure XI

Estimated Payoffs to Largest Institutions

The graph shows the 2SLS payoff estimates (point estimates and confidence intervals) for the 10 largest institutions (those with at least 1,000 applicants completing in our sample). All payoffs are relative to the University of Oslo (UiO). Completed institutions are instrumented with predicted institution. Payoffs to fields are instrumented using predicted field. In addition to our baseline controls, the specification includes controls for preferred and next-best field and institution.

To quantify the relative importance of institution and field of study, we use the estimates of the institution-specific field effects from the nonseparable model. When we regress these institution-specific field payoffs on institution dummies (see Online Appendix Table B.V), we find that institution of attendance explains 23% of the variation in these payoffs. In contrast, a similar regression on field of study dummies explains 72% of the variation in institution-specific field payoffs, while further adding institution dummies raises the explanatory power of the regression by only 8 percentage points. Taken together, these results suggest that field of study rather than institution drives the heterogeneity in the payoffs to postsecondary education in Norway.
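
The variance decomposition described above amounts to comparing R-squared values across three regressions, as in the sketch below. The payoff values here are placeholders standing in for the nonseparable-model output; only the mechanics are illustrated.

```python
# Sketch of the R-squared comparison: regress institution-specific field payoffs on
# institution dummies, on field dummies, and on both. Data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
inst = rng.choice([f"inst_{i}" for i in range(15)], 214)
field_pair = rng.choice([f"pair_{i}" for i in range(48)], 214)
payoff = rng.normal(20, 10, 214)           # placeholder institution-specific field payoffs
df = pd.DataFrame({"payoff": payoff, "institution": inst, "field_pair": field_pair})

r2_inst = smf.ols("payoff ~ C(institution)", df).fit().rsquared
r2_field = smf.ols("payoff ~ C(field_pair)", df).fit().rsquared
r2_both = smf.ols("payoff ~ C(field_pair) + C(institution)", df).fit().rsquared
print(f"institution only: {r2_inst:.2f}, field only: {r2_field:.2f}, both: {r2_both:.2f}")
```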

To summarize, using either a separable model or estimating payoffs to fields separately for different institutions, we find little evidence of substantial institution effects in our data, and controlling for institution has little impact on our payoff estimates for field of study. Controlling for field composition is, however, essential for correctly estimating the payoffs to institutions.

X. Sorting Pattern to Fields

In Table IV, we presented a matrix of the payoffs to field j compared to k for those who prefer j and have k as their next-best field. The matrix is not symmetric, implying that (i) the payoffs to field of study are heterogeneous across individuals, and (ii) the selection into fields is nonrandom. Taken together, this motivates our analysis of the sorting pattern to fields, where we exploit the information on next-best fields to answer the following questions: Do individuals sort into fields in which they have comparative advantage? Is the sorting pattern consistent with earnings maximization, or are nonpecuniary factors necessary to rationalize individuals’ choices?

The answer to the first question tells us whether individuals differ not only in their productivities in a particular field, but also in their relative productivities in different fields. Describing how individuals with different abilities sort into different fields is important to understand the determinants of earnings inequality and the aggregate output for the economy as a whole (see, e.g., Sattinger 1993 ). The answer to the second question is informative about whether individuals behave as predicted by the Roy model of self-selection. We think of this as an important step toward understanding which economic models can help explain why individuals choose different fields of study.

X.A. Defining Comparative Advantage in the Context of Field of Study

The term “comparative advantage” is used in different ways by different authors. We follow the seminal work of Sattinger (1978, 1993) in our definition of comparative advantage. To be precise, let q_{il} denote individual i's productivity in field l and let π_l denote the price per unit of worker output in that field; her potential earnings in field l are then given by y_{il} = π_l q_{il}.

Consider individuals who are on the margin between two fields j and k. Individuals are potentially heterogeneous in their productivity in these fields, and each is characterized by a pair (q_{ij}, q_{ik}). If  

\frac{q_{1j}}{q_{2j}} > \frac{q_{1k}}{q_{2k}} \iff \log y_{1j} - \log y_{1k} > \log y_{2j} - \log y_{2k},
then individual 1 is said to have a comparative advantage in field j and individual 2 has a comparative advantage in field k .
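
A tiny numerical illustration of this definition, with made-up numbers: worker 1 has the comparative advantage in field j even though worker 2 is more productive in both fields in absolute terms, and the ranking of log-earnings gaps reflects this.

```python
# Two workers, two fields, with field prices pi; compare log-earnings gaps j vs k.
import numpy as np

pi_j, pi_k = 1.0, 1.2                      # prices per unit of output in fields j and k
q = {"worker1": {"j": 8.0, "k": 4.0},      # q_{ij}: productivity of worker i in field j
     "worker2": {"j": 10.0, "k": 9.0}}

for w, prod in q.items():
    y_j, y_k = pi_j * prod["j"], pi_k * prod["k"]
    print(w, "log-earnings gap j vs k:", round(np.log(y_j) - np.log(y_k), 2))
# worker1's gap exceeds worker2's, i.e. q_1j/q_2j > q_1k/q_2k: comparative advantage in j.
```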

Our goal is to examine whether individuals tend to prefer fields in which they have comparative advantage. If individuals prefer fields in which they have comparative advantage, then the relative payoff to field j as compared to k is larger for individuals who prefer j over k (j ≻ k) than for those who prefer k over j (k ≻ j):  

(16)
E[\log y_{ij} - \log y_{ik} \mid j \succ k] > E[\log y_{ij} - \log y_{ik} \mid k \succ j].

By way of comparison, the inequality would be reversed if individuals prefer fields in which they have comparative disadvantage (e.g., due to nonpecuniary factors), while random selection into fields would make equation (16) hold with equality.

X.B. Evidence on Comparative Advantage among Compliers

While a complete characterization of the pattern of selection would require a number of strong assumptions, we can use the estimated payoffs to learn about the comparative advantages of the compliers to our instruments.

To examine whether compliers tend to prefer fields in which they have comparative advantage, we reestimate the model given by equations (14) and ( 15 ), but now with log earnings as the dependent variable. 30 This gives us the LATE counterparts to the testable implication of equation (16) , namely,  

E[\log y_{j} - \log y_{k} \mid d_{j}^{j} - d_{j}^{k} = 1,\; d_{l \neq j,k}^{k} = 0] > E[\log y_{j} - \log y_{k} \mid d_{k}^{k} - d_{k}^{j} = 1,\; d_{l \neq j,k}^{j} = 0].

Figure XII provides evidence on comparative advantage among compliers. This figure shows the distribution of the differences in relative payoffs to field j versus k between individuals whose preferred field is j and next-best alternative is k and those with the reverse ranking. As is apparent from the figure, most of these differences are positive, which suggests that compliers tend to prefer fields in which they have comparative advantage. Indeed, the differences in payoff are sometimes considerable depending on whether j is the preferred field or the next-best alternative, suggesting that sorting by comparative advantage could be an empirically important phenomenon in the choice of field of study.

Figure XII

Testable Implication of Sorting Based on Comparative Advantage

This figure graphs the distribution of the differences in relative payoffs to field j versus k between individuals whose preferred field is j and next-best alternative is k , and those with the reverse ranking. To construct this graph, we use the complier-weighted distribution of estimates in Online Appendix Table B.VI .

X.C. Robustness of Comparative Advantage

There are at least three concerns about the conclusion of compliers preferring fields in which they have comparative advantage.

The first is that field of study may affect employment probabilities, which could bias the estimates with log earnings as dependent variable. However, we find fairly small impacts of field of study on employment. Furthermore, if marginal workers have lower potential earnings, any bias coming from employment effects should make it less likely to find evidence of comparative advantage.

Second, one might be worried that the conclusions drawn about selection patterns are driven by heterogeneity across subfields within our broader definition of fields (see Table II ). To address this concern, we have reestimated the model given by equations (14) and ( 15 ) with treatment variables defined according to subfields instead of broader fields. These estimates, reported in Online Appendix Figure B.V , suggest that aggregation to broader fields is not driving the conclusion that compliers tend to prefer fields in which they have comparative advantage. 31

A third concern is that we rely on variation in admission cutoffs across institutions and over time to assess self-selection to fields. To see this, consider a comparison of the payoff to preferred field j over next-best field k and the payoff to preferred field k over next-best field j. To identify both these payoffs, it is necessary that field j has a higher admission cutoff for some individuals, whereas field k has a higher admission cutoff for other individuals. This is possible if admission cutoffs for a pair of fields change over time or vary across institutions. For example, in one year (or institution) the application threshold for field j is higher than for field k, while in another year (or institution) this is reversed.

To understand which variation identifies the reversals of cutoffs, and how this matters for the conclusions about selection patterns, it is useful to reestimate the model given by equations (14) and (15) while controlling for direct effects of cohort, and instrumenting the payoffs to institutions using predicted institution (as discussed in Section IX). The first-stage estimates and the corresponding F-statistics change little when we do this, suggesting that variation within institutions over time is important in identifying the selection patterns. The distribution of the resulting second-stage estimates is shown in Online Appendix Figure B.VII. The conclusion that compliers tend to prefer fields in which they have comparative advantage does not change if we only use variation in admission cutoffs within institutions over time.

X.D. Sorting Pattern and Economic Models

The above results suggest that self-selection and comparative advantage are empirically important features of field of study choices. These findings have implications for the type of economic models that can help explain the causes and consequences of individuals choosing different types of postsecondary education.

Much economic analysis and empirical work relies on an efficiency unit framework where there is only one type of human capital that individuals possess in different amounts. 32 While the presence of comparative advantage is at odds with models based on the efficiency unit assumption, it is consistent with the basic Roy model. The Roy model has a simple selection rule: individual i chooses field j over k when y_{ij} > y_{ik}, which means her relative productivity advantage in field j (q_{ij}/q_{ik}) exceeds the relative prices (π_k/π_j). Although the majority of the estimated payoffs in Table IV are positive, and therefore consistent with the basic Roy model where individuals maximize earnings, for a subset of field pairs the estimated payoffs are negative. Negative payoffs are consistent with compensating differentials, and negative sorting can be rationalized by a generalized Roy model where idiosyncratic individual returns correlate negatively with the valuation of the nonpecuniary factors of fields (see, e.g., Heckman and Vytlacil 2007).

XI. Conclusion

Identifying the payoffs to different types of postsecondary education is difficult, in part due to selection bias but also because individuals are choosing between several unordered alternatives. We showed that even with a valid instrument for each type of education, instrumental variables estimation of the payoff to one field of study versus another requires information about individuals’ ranking of education types or additional assumptions over and above those needed in settings with binary or ordered choices.

We build on these results in our empirical analysis of the choice of and payoff to field of study. Our context was Norway’s postsecondary educational system, where a centralized admission process covers almost all universities and colleges. This process creates discontinuities that effectively randomize applicants near unpredictable admission cutoffs into different fields of study. At the same time, it provides us with strategy-proof measures of individuals’ ranking of fields. Taken together, this allowed us to estimate the payoffs to different fields while correcting for selection bias and keeping the next-best alternatives as measured at the time of application fixed.

Our results showed that different fields have widely different payoffs, even after accounting for institutional differences and quality of peer groups. For many fields the payoffs rival college wage premiums, suggesting the choice of field is potentially as important as the decision to enroll in college. Comparing our estimates to those obtained from other approaches highlighted the importance of having information on next-best alternatives.

When disentangling the causal contribution of institution and field of study, we found that field of study drives the heterogeneity in the payoffs to postsecondary education. Indeed, we found that controlling for field is essential for estimating institution effects, and once we do this there remains little evidence of significant gains in earnings from graduating from a more selective institution.

Finally, we also showed that the estimated payoffs are consistent with individuals choosing fields in which they have comparative advantage. This finding has implications for the type of economic models that can help explain the causes and consequences of individuals choosing different types of postsecondary education.

Supplementary Material

An Online Appendix for this article can be found at QJE online (academic.oup.com/qje).

1. In most OECD countries, students typically enroll in a specific field of study upon entry to a university. In the United States, however, students only specialize in a major during the last year(s) of college.
2. Throughout the article, we will be measuring ex post payoffs. In studies that aim to explain or forecast schooling choices, the distinction between ex ante and ex post returns to schooling is important (see, e.g., Cunha, Heckman, and Navarro 2005 ; Cunha and Heckman 2007) . For example, ex post returns govern schooling decisions only if cohorts anticipate future changes in skill prices. Arcidiacono et al. (2014) review the literature on ex ante returns, and present survey evidence on college students’ expectations about earnings in different fields and occupations.
3. We therefore avoid the problem of nonresponse bias in previous studies relying on survey data. Hamermesh and Donald (2008) show that nonresponse bias can lead to misleading conclusions about the payoffs to postsecondary education.
4. Discontinuities in admission thresholds have also been used in other contexts than field of study, such as the effect of admission to particular institutions (e.g., Saavedra 2008 ; Hoekstra 2009 ; Zimmerman 2014) , the impact of another year of college ( Öckert 2010) , the marriage market consequences of admission to a higher-ranked university ( Kaufmann, Messner, and Solis 2013 ), the effect of admission to higher quality primary and secondary schools (see, e.g., Jackson 2010 ; Duflo, Dupas, and Kremer 2011 ; Abdulkadiroglu, Angrist, and Pathak 2012 ; Pop-Eleches and Urquiola 2013) , and the consequences of affirmative action in engineering colleges in India ( Bertrand, Hanna, and Mullainathan 2010 ).
5. Other studies have taken a complementary structural approach. Arcidiacono (2004) estimates a dynamic model of college and major choice. His estimates suggest that large earnings premiums exist for certain majors. Reyes, Rodríguez, and Urzúa (2013) estimate a parametric model of postsecondary schooling choice and examine the distribution of payoffs to different degrees according to years of study and private versus public institution.
6. For a review, we refer to Hoxby (2009) . See also Deming, Goldin, and Katz (2012) , who analyze the for-profit postsecondary schooling sector.
7. See Altonji (1993) for a discussion of the ex ante return associated with starting a particular major, which includes the probability of dropping out entirely and switching majors, and the ex post return to the completed major.
8. For example, Kline and Walters (2015) estimate a semi-parametric selection model to learn about the effects of Head Start as compared to no preschool or competing preschool programs. See also Dahl (2002) , who develops a semi-parametric method to study migration across U.S. states.
9. Behaghel, Crépon, and Gurgand (2013) also discuss the challenges to causal inference from encouragement designs in situations with multiple unordered treatments and propose restricting preferences to achieve identification. As discussed later, we test and reject this restriction in our context of postsecondary education.
10. Alternatively, one can think of a setting where individuals choose between not taking any postsecondary education ( d = 0), completing field 1 ( d = 1), or completing field 2 ( d = 2).
11. See Altonji, Blom, and Meghir (2012) and the references therein.
12. After solving equations (11) and (12) for β1 and β2 , the intercept β0 is identified from equation (9) .
13. The only notable exception is the Norwegian Business School in Oslo, which is not part of the centralized admission process.
14. A possible caveat to the strategy-proofness is the truncation of the application list at 15 courses. This truncation may induce individuals to list a safe option as the 15th choice to make sure they receive any offer of postsecondary education. In practice, this seems unlikely to matter for our findings: during the period our application data cover, less than 0.07% of all applicants are offered a 15th choice.
15. The Norwegian Universities and Colleges Admission Service classifies specific fields into broad fields. This classification is different from the one used by the national education registry, http://www.ssb.no/a/publikasjoner/pdf/nos_c617/nos_c617.pdf . We recoded the information on individuals’ educational attainment to match that of the admission service.
16. We use a fixed exchange rate of 6.5 Norwegian kroner per U.S. dollar.
17. In Norway, students graduate from high school in the year they turn 19, after which many serve in the military, travel, or work for a year or two before enrolling in postsecondary education.
18. There are two main reasons that the probability of being offered the preferred field is not a deterministic function of the application score. Some slots are reserved for special quotas and there may be some ad hoc conditions unrelated to academic requirements (e.g., if an applicant has been ill during the last part of upper secondary school). Thus, we have some measurement error in the cutoffs. Also, some applicants choose not to remain on the waiting list after the first set of offers is sent out and thus do not receive an offer, even though they are above the application threshold. See the discussion of institutional details in Section III.
19. If the irrelevance condition does not hold or if individuals do not understand the allocation mechanism, then our estimation approach should be interpreted as relaxing the constant effects assumption in part 1 of Proposition 2 to allow for heterogeneity in payoffs according to next-best fields.
20. For example, the F -statistic is above 10 in 72 of 81 cases.
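As a rough illustration of how such a tally might be produced (assuming these are first-stage F-statistics on the cutoff-crossing instrument, computed case by case), the sketch below loops over cases, runs a first stage of completion on the instrument, and counts how many F-statistics exceed the conventional threshold of 10. The grouping column `case` and the other variable names are hypothetical placeholders.

```python
# Illustrative sketch: count first-stage F-statistics above 10 across cases.
# The grouping column `case` and the other variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def count_strong_first_stages(df: pd.DataFrame, threshold: float = 10.0):
    strong, total = 0, 0
    for _, g in df.groupby("case"):
        res = smf.ols("completed ~ above + dist + above:dist", data=g).fit(cov_type="HC1")
        # With a single excluded instrument, the first-stage F equals the squared t-statistic.
        f_stat = res.tvalues["above"] ** 2
        strong += int(f_stat > threshold)
        total += 1
    return strong, total

# strong, total = count_strong_first_stages(first_stage_df)
# print(f"F > 10 in {strong} of {total} cases")
```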
21. There is limited evidence on peer effects in college, and the results are mixed. Some studies point to little if any influence of peer academic ability, while others suggest it can play a role in shaping study habits and beliefs. See, for example, Stinebrickner and Stinebrickner (2006) .
22. The application score of the predicted peer group is an exogenous variable conditional on the controls for the running variable and individuals’ preferences over education types. The same is true for our measures of predicted experience (discussed below) and predicted institution (discussed in Section IX ).
23. Online Appendix Table B.II reports the differences between the pooled estimates and the estimates by next-best alternative.
25. Online Appendix Figure B.III plots the distribution of these differences.
26. Online Appendix Figure B.III plots the distribution of these differences.
29. These 214 institution-specific estimates cover 48 different completed versus next-best field pairs, with estimates from 2–21 institutions for each. Fifteen institutions are represented, each with up to 23 payoff estimates.
30. Online Appendix Table B.VI reports the estimates of the payoffs to field of study with log earnings as the dependent variable.
31. The results are also robust to including the set of applicants who apply to only one broad field but have a preferred subfield with a cutoff.
32. A prominent example is the Ben-Porath model, which assumes that labor is supplied in efficiency units, so that different labor skills are perfect substitutes. Heckman, Lochner, and Taber (1998) extend the standard Ben-Porath model by relaxing the efficiency-units assumption for labor services.

References

Abadie, Alberto, “Bootstrap Tests for Distributional Treatment Effects in Instrumental Variable Models,” Journal of the American Statistical Association, 97 (2002), 284–292.
Abdulkadiroglu, Atila, Joshua D. Angrist, and Parag A. Pathak, “The Elite Illusion: Achievement Effects at Boston and New York Exam Schools,” IZA Discussion Paper 6790, 2012.
Altonji, Joseph G., “The Demand for and Return to Education When Education Outcomes Are Uncertain,” Journal of Labor Economics, 11 (1993), 48–83.
Altonji, Joseph G., Peter Arcidiacono, and Arnaud Maurel, “The Analysis of Field Choice in College and Graduate School: Determinants and Wage Effects,” NBER Working Paper 21655, 2015.
Altonji, Joseph G., Erica Blom, and Costas Meghir, “Heterogeneity in Human Capital Investments: High School Curriculum, College Major, and Careers,” Annual Review of Economics, 4 (2012), 185–223.
Andrews, Rodney J., Jing Li, and Michael F. Lovenheim, “Quantile Treatment Effects of College Quality on Earnings: Evidence from Administrative Data in Texas,” NBER Working Paper 18068, 2012.
Arcidiacono, Peter, “Ability Sorting and the Returns to College Major,” Journal of Econometrics, 121 (2004), 343–375.
Arcidiacono, Peter, V. Joseph Hotz, Arnaud Maurel, and Teresa Romano, “Recovering Ex Ante Returns and Preferences for Occupations using Subjective Expectations Data,” NBER Working Paper 20626, 2014.
Behaghel, Luc, Bruno Crépon, and Marc Gurgand, “Robustness of the Encouragement Design in a Two-Treatment Randomized Control Trial,” IZA Discussion Paper 7447, 2013.
Bertrand, Marianne, Rema Hanna, and Sendhil Mullainathan, “Affirmative Action in Education: Evidence from Engineering College Admissions in India,” Journal of Public Economics, 94 (2010), 16–29.
Black, Dan A., and Jeffrey A. Smith, “How Robust Is the Evidence on the Effects of College Quality? Evidence from Matching,” Journal of Econometrics, 121 (2004), 99–124.
Carneiro, Pedro, James J. Heckman, and Edward J. Vytlacil, “Estimating Marginal Returns to Education,” American Economic Review, 101 (2011), 2754–2781.
Cunha, Flavio, and James J. Heckman, “Identifying and Estimating the Distributions of Ex Post and Ex Ante Returns to Schooling,” Labour Economics, 14 (2007), 870–893.
Cunha, Flavio, James J. Heckman, and Salvador Navarro, “Separating Uncertainty from Heterogeneity in Life Cycle Earnings,” Oxford Economic Papers, 57 (2005), 191–261.
Dahl, Gordon B., “Mobility and the Return to Education: Testing a Roy Model with Multiple Markets,” Econometrica, 70 (2002), 2367–2420.
Dale, Stacy, and Alan B. Krueger, “Estimating the Return to College Selectivity over the Career Using Administrative Earnings Data,” NBER Working Paper 17159, 2011.
Dale, Stacy, and Alan B. Krueger, “Estimating the Payoff to Attending a More Selective College: An Application of Selection on Observables and Unobservables,” Quarterly Journal of Economics, 117 (2002), 1491–1527.
Deming, David J., Claudia Goldin, and Lawrence F. Katz, “The For-Profit Postsecondary School Sector: Nimble Critters or Agile Predators?,” Journal of Economic Perspectives, 26 (2012), 139–164.
Duflo, Esther, Pascaline Dupas, and Michael Kremer, “Peer Effects, Teacher Incentives, and the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya,” American Economic Review, 101 (2011), 1739–1774.
Duflo, Esther, Rachel Glennerster, and Michael Kremer, “Using Randomization in Development Economics Research: A Toolkit,” in Handbook of Development Economics, Vol. 4, T. Paul Schultz and John A. Strauss, eds. (Amsterdam: Elsevier, 2008).
Hamermesh, Daniel S., and Stephen G. Donald, “The Effect of College Curriculum on Earnings: An Affinity Identifier for Non-Ignorable Non-Response Bias,” Journal of Econometrics, 144 (2008), 479–491.
Hastings, Justine S., Christopher A. Neilson, and Seth D. Zimmerman, “Are Some Degrees Worth More Than Others? Evidence from College Admission Cutoffs in Chile,” NBER Working Paper 19241, 2013.
Heckman, James J., Lance Lochner, and Christopher Taber, “Explaining Rising Wage Inequality: Explanations with a Dynamic General Equilibrium Model of Labor Earnings with Heterogeneous Agents,” Review of Economic Dynamics, 1 (1998), 1–58.
Heckman, James J., and Guilherme Sedlacek, “Heterogeneity, Aggregation, and Market Wage Functions: An Empirical Model of Self-Selection in the Labor Market,” Journal of Political Economy, 93 (1985), 1077–1125.
Heckman, James J., and Sergio S. Urzúa, “Comparing IV with Structural Models: What Simple IV Can and Cannot Identify,” Journal of Econometrics, 156 (2010), 27–37.
Heckman, James J., Sergio S. Urzúa, and Edward J. Vytlacil, “Understanding Instrumental Variables in Models with Essential Heterogeneity,” Review of Economics and Statistics, 88 (2006), 389–432.
Heckman, James J., and Edward J. Vytlacil, “Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast Their Effects in New Environments,” in Handbook of Econometrics, Vol. 6, James J. Heckman and Edward E. Leamer, eds. (Amsterdam: Elsevier, 2007).
Hoekstra, Mark, “The Effect of Attending the Flagship State University on Earnings: A Discontinuity-Based Approach,” Review of Economics and Statistics, 91 (2009), 717–724.
Hoxby, Caroline M., “The Changing Selectivity of American Colleges,” Journal of Economic Perspectives, 23 (2009), 95–118.
Imbens, Guido W., and Joshua D. Angrist, “Identification and Estimation of Local Average Treatment Effects,” Econometrica, 62 (1994), 467–475.
Imbens, Guido W., and Donald B. Rubin, “Estimating Outcome Distributions for Compliers in Instrumental Variables Models,” Review of Economic Studies, 64 (1997), 555–574.
Jackson, C. Kirabo, “Do Students Benefit from Attending Better Schools? Evidence from Rule-based Student Assignments in Trinidad and Tobago,” Economic Journal, 120 (2010), 1399–1429.
Kaufmann, Katja Maria, Matthias Messner, and Alex Solis, “Returns to Elite Higher Education in the Marriage Market: Evidence from Chile,” IGIER Working Paper 489, 2013.
Kirkeboen, Lars, Edwin Leuven, and Magne Mogstad, “Field of Study, Earnings, and Self-Selection,” NBER Working Paper 20816, 2014.
Kline, Patrick, and Christopher Walters, “Evaluating Public Programs with Close Substitutes: The Case of Head Start,” NBER Working Paper 21658, 2015.
Lindahl, Lena, and Hakan Regner, “College Choice and Subsequent Earnings: Results Using Swedish Sibling Data,” Scandinavian Journal of Economics, 107 (2005), 437–457.
McCrary, Justin, “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test,” Journal of Econometrics, 142 (2008), 698–714.
Öckert, Björn, “What’s the Value of an Acceptance Letter? Using Admissions Data to Estimate the Return to College,” Economics of Education Review, 29 (2010), 504–516.
OECD, Education at a Glance 2014: OECD Indicators (OECD Publishing, 2014).
Pop-Eleches, Cristian, and Miguel Urquiola, “Going to a Better School: Effects and Behavioral Responses,” American Economic Review, 103 (2013), 1289–1324.
Reyes, Loreto, Jorge Rodríguez, and Sergio S. Urzúa, “Heterogeneous Economic Returns to Postsecondary Degrees: Evidence from Chile,” NBER Working Paper 18817, 2013.
Saavedra, Juan E., “The Returns to College Quality: A Regression Discontinuity Analysis,” Mimeo, 2008.
Sattinger, Michael, “Comparative Advantage in Individuals,” Review of Economics and Statistics, 60 (1978), 259–267.
Sattinger, Michael, “Assignment Models of the Distribution of Earnings,” Journal of Economic Literature, 31 (1993), 831–880.
Stinebrickner, Ralph, and Todd R. Stinebrickner, “What Can Be Learned about Peer Effects Using College Roommates? Evidence from New Survey Data and Students from Disadvantaged Backgrounds,” Journal of Public Economics, 90 (2006), 1435–1454.
Svensson, Lars-Gunnar, “Strategy-Proof Allocation of Indivisible Goods,” Social Choice and Welfare, 16 (1999), 557–567.
Willis, Robert J., and Sherwin Rosen, “Education and Self-Selection,” Journal of Political Economy, 87 (1979), S7–S36.
Zimmerman, Seth D., “The Returns to College Admission for Academically Marginal Students,” Journal of Labor Economics, 32 (2014), 711–754.

Author notes

*We thank four anonymous referees, the editor, Peter Arcidiacono, and seminar participants at several universities and conferences for valuable feedback and suggestions. This study received financial support from the Norwegian Research Council’s Programme for Research and Innovation in the Education Sector (FINNUT, Project no. 237840).
