Abstract

I examine whether citizens who participate in agency rulemaking in the United States believe they have a meaningful voice. Using data drawn from 388 individuals, I employ an innovative survey research design approach—anchoring vignettes—to measure political efficacy during rulemaking, as well as a randomized survey experiment to study its causal drivers. Despite prominent arguments predicting low efficacy, participants report, on average, “a lot of say” during rulemaking. Experimental evidence, however, suggests that participants believe agencies are more responsive to business interests than to those of ordinary citizens. Taken together, these results imply notable efficacy among those who participate, although this normatively optimistic finding is tempered by the normatively pessimistic perception that business interests hold a clear advantage over the general public in influencing agency policy decisions.

Few theoretical constructs are more important to our understanding of democratic citizenship and public policymaking than political efficacy (Acock, Clarke, and Stewart 1985; Campbell, Gurin, and Miller 1954; Craig, Niemi, and Silver 1990; DeHoog, Lowery, and Lyons 1990). In short, political efficacy—defined as the belief that an individual can, and does, have a meaningful voice within a system of government—is a central tenet of political life and a founding principle of American political thought. As Niemi, Craig, and Mattei (1991) write, “political efficacy is one of the most theoretically important and frequently used” political constructs (emphasis original).

Yet we know surprisingly little about political efficacy within the context of bureaucratic decision making, and there are no studies of efficacy during the exercise of, arguably, the bureaucracy’s most important function: the creation of government regulations.1 This lack of attention is remarkable because the legitimacy of US bureaucratic policymaking rests, at least in part, upon the assumption that such regulatory policymaking (called rulemaking) provides a meaningful venue for public participation during agency decision making (Croley 1998; Rosenbloom 2003). Thus, despite a “voluminous” literature on efficacy in political science (e.g., Acock, Clarke, and Stewart 1985) and an equally large literature in public administration on the responsiveness and accountability of public agencies (e.g., Wood and Waterman 1994; West 1995), scholars have little information on whether citizens active during rulemaking believe their efforts can, and do, influence regulatory policy decisions.

Of course, if public agencies were only minor contributors to government policy decision making in the United States, then we would not need to lament this lack of attention. However, quite the opposite is true. Federal agencies promulgate hundreds of rules each year (Yackee and Yackee 2010), and state agencies produce even more (Schwartz 2010). Through rulemaking, such agencies develop policies that are “functionally indistinguishable” from legislative statutes (Pierce 1996). Ultimately, much of the United States’ “physical and economic security and well-being depend upon” these rules (Stewart 2003).

I ask: “Do citizens active in the rulemaking process believe they have a meaningful voice? And if so, then what drives political efficacy?” I theorize that meaningful efficacy is possible and that participant information capacity—which I argue is a person’s ability to bring technical- and information-based arguments to the agency decision-making process—conditions one’s sense of political efficacy. I assess the argument with data from 388 survey respondents who were active in a recent rulemaking. In doing so, I apply a novel survey design approach to improve the measurement of participant efficacy during rulemaking: anchoring vignettes. As King et al. (2004) write, some political concepts—such as political efficacy—are notoriously difficult for survey researchers to quantify. In particular, critical measurement problems occur when there are two respondents with the same “true” level of efficacy, but one ranks his or her efficacy as low, whereas the other ranks it as high. Anchoring directly addresses this thorny problem by equating political concepts to short, concrete stories about hypothetical individuals. These vignettes make political concepts “come alive” and allow for proper measurement. Although this approach advances the study and understanding of citizen participation during rulemaking, its application will also be of interest to the broader group of public administration and policy scholars who regularly use surveys within their research design strategies.

I find moderately high levels of political efficacy among rulemaking participants; respondents report, on average, “a lot of say” over getting agencies to address their concerns during rulemaking. I also find that several of the theorized drivers are significant predictors of efficacy: the ordered probit regression results largely demonstrate that respondents with greater information capacity perceive higher efficacy. In particular, I find that respondents who produce more technical comments, use data and information to make their arguments, and participate as representatives of business or industry show evidence of increased efficacy. Finally, I also employ a separate, randomized survey experiment, which confirms that participants believe agencies are more responsive to the stated demands of business interests than to those of ordinary citizens during rulemaking.

When taken together, these results yield implications for public administration and policy scholars and practitioners alike. Specifically, the frequently referenced concern that citizen voices are often left unheard by “unaccountable” public bureaucrats is not well supported here. Instead, participants typically believe their individual political action does have, or at the very least can have, a meaningful influence on agency decision making. From a normative perspective, this finding suggests that an important aspect of the US system of democratic governance is present within bureaucratic policymaking. The results, however, are not exclusively optimistic. Students of politics have long suspected that business interests may be advantaged during the policymaking process. This research demonstrates that a business advantage exists when predicting political efficacy, implying that “where you stand” may depend on “where you sit” with regard to one’s perceived degree of say over public agency decision making.

THEORETICAL CONTEXT AND ARGUMENT

This article seeks to understand whether citizens active in the rulemaking process perceive a meaningful voice during bureaucratic policymaking. Although past research has largely bypassed the empirical investigation of this topic, the scholarly literature has much to say about the topic’s general importance and the controversy surrounding it. I begin by highlighting insights from the literature on agency accountability. I then provide a brief description of the notice and comment rulemaking process. In doing so, I suggest that there is a duality in scholarly perspectives on efficacy. To some, rulemaking is a mechanism designed specifically to increase the involvement and efficacy of concerned individuals external to the agency. To others, the “bloom is off the rose.”

Accountability and Efficacy

The theme of political accountability has long dominated the bureaucratic politics literature. Much of this work focuses on how the US bureaucracy may be held accountable for its actions when the vast majority of its public servants do not stand for election. In the absence of elections, scholars have identified a number of mechanisms used by legislators, elected executives, and the courts to bring about some form of political accountability. Legislators, for instance, may set agency budgets (Wood and Waterman 1994), hold hearings (Aberbach 1990), or provide informal policy feedback to agency officials (Kelleher and Yackee 2006) to elicit accountability. In short, the idea that public agencies are, at times, accountable to multiple political principals is relatively well-tilled scholarly ground (e.g., Hammond and Knott 1996; Wood 1988; Wood and Waterman 1994).

Less well understood are the complexities that arise when one adds citizen efficacy more consciously to the basket of considerations, especially efficacy during the bureaucracy’s implementation of public policy, when the influence of political institutions is believed to be weaker than during legislative enactment (Meier, Wrinkle, and Polinard 1995). On the one hand, the belief that political institutions, such as legislatures, hold agencies accountable ought to increase feelings of self-efficacy. Under this scenario, one’s ability to have a “meaningful say” over public agencies is augmented by the belief that the constitutionally prescribed political institutions hold influence over agency decision making. On the other hand, perceptions of bureaucratic accountability to legislators, elected executives, and the courts may fail, in practice, to beget feelings of self-efficacy. For instance, citizens active during rulemaking may—rightly or wrongly—view political principals as crowding out their direct influence, thereby driving down feelings of self-efficacy.

A fuller picture, however, should also account for options that make agencies directly accountable to citizens. As John (2009) suggests, a variety of initiatives have developed in recent years to promote the direct accountability of public agencies to concerned citizens. He refers to these developments as “citizen governance” initiatives, whereas other scholars use the label “collaborative public management” (John 2009). Cooper, Bryer, and Meek (2006), for instance, highlight neighborhood councils in Los Angeles, which directly involve citizen stakeholders in government decision making. Although provocative and important, such collaborative public management efforts are best characterized as localized innovations, as opposed to widespread engagement options (Berry, Portney, and Thomson 1993). One main exception is notice and comment rulemaking.

Duality in Notice and Comment Rulemaking

Notice and comment rulemaking has developed into the most common type of lawmaking in the United States (Kerwin and Furlong 1992) and has been called “one of the greatest inventions of modern government” (Davis 1978). At the national level, Congress standardized rulemaking in the Administrative Procedure Act (APA) of 1946. This law requires all federal agencies to solicit public feedback during the writing of most regulations. Moreover, each of the state governments has similar procedures in place. As Jensen and McGrath (2011) find, all state legislatures have passed versions of their own APAs.

Today, notice and comment rulemaking is, arguably, the most frequently used form of collaborative public management. At the national level, agencies write many hundreds of legally binding rules each year, and the states are also major contributors. In Wisconsin, for instance, public agencies wrote more notice and comment regulations than the state legislature enacted Acts in 10 of the 15 years between 1995 and 2009, an average of more than 150 rules per year in just one state. At both the national and state levels, rulemaking typically begins with an agency drafting the text of a Proposed Rule, which must be open for public feedback. Agencies typically provide a 30- or 60-day comment period during which the public can send feedback to the respective agency. The agency may also hold a public hearing (or hearings) on the draft rule to solicit feedback. After considering any expressed opinions, an agency will generally promulgate a Final Rule. Although agencies are required to consider feedback from the public, they are not required to make changes to a regulation based on that feedback.

Given that citizens have, or can have, a voice within the decision making on these rules, one might expect those individuals active during rulemaking to have heightened feelings of political efficacy. Put differently, the people who participate during notice and comment rulemaking must believe their participation holds policy influence—they would not participate otherwise. Although this conclusion may appear commonsensical, there are also reasons to question it. In particular, I identify a “duality” in thinking on the topic, with (1) reasons to believe that participant efficacy may be increased as a result of notice and comment procedures, as well as (2) reasons to believe that citizen activity during rulemaking may not be associated with high levels of efficacy.

On the one hand, notice and comment procedures may increase participant efficacy during regulatory decision making at the state and national levels. Indeed, the very legitimacy of US bureaucratic policymaking rests, at least in part, upon the premise that rulemaking provides an outlet for public voice by concerned citizens and their groups (Croley 1998; Rosenbloom 2003). In general, the literature points to two rationales for this potential relationship: participation and equality. In terms of participation, unlike legislative policymaking in the United States—where there is no formal requirement that the public be asked to share their views—in rulemaking, the public has the potential to be directly involved and engaged in particular governance decisions. As Rossi writes (1997), the “courts, Congress, and scholars have elevated participation [in rulemaking] to a sacrosanct status...greater participation is generally viewed as contributing to the democracy.” And, in theory, there is a relatively low bar for participation in rulemaking; it only requires that one know of an agency’s rule proposal and then submit his or her opinions to the agency (Croley 1998).

Mashaw (1985) puts this topic in a broader context; he writes, “The notion of direct participation in administrative governance responds to deep strains of individualism and political equalitarianism in the American character.” This notion of equality in opportunity suggests an open public forum that allows those potentially affected to influence agency decision making. In short, it allows for voice (Pierce 1996). And there is some evidence that citizens believe this voice may be meaningful to agency decision making. Take the handful of important studies focusing on the perceived influence of interest group lobbying tactics during rulemaking. These studies, although not a direct assessment of overall participant efficacy, can provide some information towards understanding efficacy.2 Furlong (1997), for instance, focuses on the perceived influence of interest group “methods of participation,” such as coalition formation, the mobilization of grassroots support, and written comment submission. He finds moderate-to-high perceived influence for certain interest group tactics, especially coalition formation and the use of informal contacts with agency officials before a proposed rule is issued. However, other methods, such as participation in public hearings, score much lower. Furlong and Kerwin (2005; see also Kerwin and Furlong 2011) also study interest group lobbying tactics and draw similar conclusions.3

In other work, Furlong (1998) surveys agency officials for their impressions of the influence of external participants on the agency rulemaking process. He finds that interest groups hold about average influence (a 2.98 on a five-point ordinal scale), whereas the general public’s influence scores a 2.43. Although Furlong (1998) does not capture participant efficacy, which, by definition, must be measured from the participant’s perspective, his work does shed further light on perceived influence patterns, which may be related to efficacy. Similarly, an emerging body of research suggests that the actualization of voice during rulemaking may be influential over policy outputs. Yackee (2006), for example, finds that the overall messages found in public comments may affect the content of Final Rules.

Yet, on the other hand, there is also research implying that citizen activity during rulemaking may not be associated with high levels of participant efficacy. Put differently, participation in notice and comment rulemaking may not yield increased feelings of a “meaningful voice” over regulatory outcomes. This thinking rests in part upon research finding that few important policy changes occur during the notice and comment rulemaking process (Elliott 1992; Golden 1998; West 2004).4 If rulemaking participants recognize that such a pattern exists across rulemaking, then it may leave them disillusioned and believing that they lack voice during the process.

If this line of thinking is correct, then the question remains why an individual would formally participate during the notice and comment process if she or he believed that participation to be unimpactful. Several motives may be in play. Some may participate to gain standing later in the courts. As Klyza and Sousa (2008) report, savvy interest groups often view rulemaking decisions as reversible later in the courts. However, such groups need standing to contest agency rules, and such standing may only be obtained through formal participation during the notice and comment period. Still other citizens may participate to appease or to build existing social networks. Nelson and Yackee (2012) find that a large number of public participants during rulemaking report that their participation was encouraged by others. Stated differently, these participants—who Nelson and Yackee (2012) report were often new to the regulatory process—submitted a comment on a rule because other people or organizations in their networks encouraged them to do so.

Argument

I further the literature by exploring participant efficacy during rulemaking more closely. I begin by drawing on the distinction between two forms of political efficacy. First, there is “internal efficacy”—the “perception of the self as competent to influence government” (Iyengar 1980). Stated differently, it is the belief that one’s self is knowledgeable and can participate effectively in government (Niemi, Craig, and Mattei 1991). It is important to note that internal efficacy is politically important and can exist independent of whether actual participant influence occurs. This form of self-efficacy corresponds with my first research question: “Do citizens active in the rulemaking process believe they have, or can have, a meaningful voice? And if so, then why?” Second, there is “external efficacy”—“perceptions of the regime as responsive” (Iyengar 1980); the belief that government institutions and authorities are responsive to the stated demands put before them (Niemi, Craig, and Mattei 1991). This form of efficacy matches my second research question: “Do the factors driving internal efficacy also drive overall perceptions regarding who gets their stated demands addressed by public agencies during rulemaking?”

Internal Efficacy

I theorize that an individual’s capacity to make persuasive arguments to public agencies affects the level of self-efficacy among citizens participating in agency rulemaking. Given the focus on knowledge within the internal efficacy construct, it may come as no surprise that I theorize capacity to be the most important theoretical driver. Capacity here refers to a participant’s knowledge and expertise as applicable to agency rulemaking. Specifically, I hypothesize that people with higher capacity view their participation as more efficacious.

I point to two related reasons for this relationship. First, Congress is much more likely to delegate policymaking discretion to agencies when a policy topic is complex and expertise is needed (Epstein and O’Halloran 1999). Hence, administrative agencies are often called upon to make complex policy decisions based on technical information and subject matter expertise (West 1995; Carpenter 2010). Moreover, as Rourke (1984) writes, bureaucrats often obtain their employment through their technical knowledge and credentials. It is not remarkable, then, to suggest that citizens active in rulemaking may be more successful in influencing regulations when they are able to make more sophisticated arguments (see also Cuéllar 2005; Jewell and Bero 2007). Second, court review of agency rulemaking decisions reemphasizes this focus on information and expertise. The APA specifically instructs courts reviewing agency decision making to use the “arbitrary and capricious” standard. This means that, in general, courts only overturn a substantive agency decision when they find the rule’s rationale or factual assertions to be unreasonable or incorrect. In practice, this standard of review encourages agencies to direct more attention to the feedback of participants when that feedback holds policy-relevant information, data, and scientific evidence.

These stylized facts about agency decision making suggest that citizens active in rulemaking are more likely to feel efficacious when they are able to share policy-relevant information with agencies. Stated differently, the capacity to share information, data, and scientific evidence ought to affect the degree to which a participant believes that he or she can have a “meaningful say” during agency policymaking. It is worth emphasizing here that although the bureaucracy may be an outlier in terms of the value it places on technical information and expertise, political scientists have long known that knowledgeable citizens feel more efficacious within American political life more broadly (e.g., Verba, Schlozman, and Brady 1995).

Additionally, a number of individuals participate in rulemaking as part of their work duties, and I suggest that select professionals may be better suited to making persuasive, information-based arguments to public agencies. This includes individuals who work in other local, state, or federal government settings and participate in the notice and comment process as part of their jobs. It may also include people who work for nonprofit organizations, many of which depend on public funding and regulatory policy decisions and, as a result, may have expert knowledge to share with regulators. Additionally, given that businesses are frequently the targets of regulatory expansion (as well as of deregulatory efforts completed via the notice and comment process), individuals who work in the private, for-profit sector may have an increased capacity—particularly relative to members of the general public—to make persuasive, information-based arguments to public agency officials. After all, agency regulators often need information from regulated entities to write appropriate regulations, as well as to identify any unanticipated consequences attached to proposed rules. Thus, a connection to select professional occupations may closely align with having a meaningful say during agency deliberations.

External Efficacy

The second research question focuses on external efficacy, the belief that government institutions will be responsive to stated demands. I hypothesize that participant capacity will also be an important factor driving perceptions of this form of efficacy. I focus, in particular, on an individual’s professional connection to business or industry. As Jewell and Bero (2007) find in their examination of rulemaking in California, business contributors during the notice and comment process often provide distinctive types of information to agencies: business-related participants tend to ground their comments in evidence and hard data, employing what the authors call an “abstract-technical” frame (Jewell and Bero 2007). Moreover, I focus on this manifestation of capacity because most observers of the rulemaking process believe there to be an advantage for business participants during rulemaking, meaning that business and industry are more successful in getting their stated demands met during agency rulemaking than other types of participants, including the general public. Yet although there is a small empirical literature suggesting the importance of business participation in rulemaking (Kamieniecki 2006; Kerwin and Furlong 2011), the degree to which business interests may affect regulatory outcomes is debated (Golden 1998; Yackee and Yackee 2006). Here I am able to assess whether individuals participating in regulatory policymaking also perceive this type of bias, and by doing so help further our understanding of external efficacy during rulemaking.

TESTING THE ARGUMENT AND RESULTS

I assess these hypotheses with data drawn from a survey that gathered informed opinions and attitudes from citizens active in a recent rulemaking in the American state of Wisconsin. This state makes an excellent empirical case for studying participant efficacy for two substantive reasons. First, the same basic notice and comment rulemaking procedures used in Wisconsin are also used by the national government and the remaining 49 American states. Second, on all available state rulemaking indices, Wisconsin looks to be, roughly, “average.” For instance, Schwartz (2010) provides indices of legislative and gubernatorial rule review powers in 2010; in both cases, Wisconsin’s rule review powers match the 50-state medians. Additionally, Woods (2009) provides indices of public notification and public access requirements during the rulemaking process in the 50 states. Again, Wisconsin scores near the 50-state median in both cases.

Survey Data

To identify the sampling frame, I collected information on the volume and topics of all proposed rules in Wisconsin over a 10-year period. This analysis demonstrated that a large proportion of Wisconsin rules were health related, broadly construed. Indeed, almost 20% of the proposed rules issued by agencies between January 1, 2008 and June 30, 2010 were health related. To limit the cost of survey implementation, I focused on these proposed rules.

Focusing on health regulations provided two substantive benefits for the study. First, by concentrating on one policy area, I was able to also gather in-depth, contextual interview data on 29 rules from agency officials. These data aided in survey writing and provided a better understanding of the context of this research. Second, health regulation is a diverse area that is at times salient to the public and at other times less so.5 The sample rules reflect this diversity. For instance, topics include scope of practice restrictions for physician assistants, pesticide product restrictions, and specifications for food processing plants.

I used public agency rule dockets to identify citizen participation. The dockets listed all hearing attendees and public commenters, thus providing names and, in most cases, telephone numbers for each of the 679 eligible survey respondents.6 Ultimately, I received completed or partially completed telephone surveys from 388 citizens across 39 rules in 2011. Thus, the survey response rate was 57%. An analysis of the survey respondents and nonrespondents returned no meaningful differences.7 Six public agencies are represented in the data.

The survey posed three types of questions. Respondents were asked general questions about their perceptions of state rulemaking, including questions tapping their general sense of political efficacy. Additionally, respondents were reminded about their participation in a particular rule and were then asked questions about their attitudes and assessments of that rule. Several demographic questions were also posed. I use the resulting survey data to assess my argument.

Part 1: Internal Efficacy—Variables

As Craig, Niemi, and Silver (1990, 289) note, political efficacy is among the most frequently measured concepts in political surveys; yet its measurement often lacks “reliability and validity.” A major concern with regard to efficacy is interpersonal incomparability, which calls into question the validity with which survey respondents use ordinal scales (King and Wand 2007; King et al. 2004). For instance, critical measurement problems occur when two respondents have the same “true” level of internal efficacy, but one ranks his or her efficacy as low, whereas the other ranks it as high. Although this is a major problem in political science survey research, it is infrequently addressed by scholars due to the costs and time associated with tackling it (King and Wand 2007).

I address potential interpersonal incomparability problems with anchoring vignettes. Anchoring vignettes are short, concrete stories about hypothetical individuals. Following King et al. (2004), I composed five vignettes to measure different levels of efficacy—here tapping specifically the ability to have a meaningful voice during rulemaking. These vignettes were written to fall on an established ordered scale, from most to least efficacious. Appendix A presents the five vignettes. After hearing each vignette, a respondent was asked: “How much say does [randomized (she or he)] have in getting a state agency to address [randomized (her or his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?” As is displayed in Appendix A, “Amy” is the most efficacious; she scores a 5. In contrast, “Stephanie” is the least efficacious; she scores a 1. It is important to note that the respondents did not receive the vignettes in rank order; vignette order was randomized, forcing each respondent to think more carefully about assigning values to the vignettes. The names and genders used in the vignettes were also randomized during survey implementation.

After answering all five of the vignette questions, the respondent was asked the same question about his or her own level of efficacy: “In general, how much say do you have in getting a state agency to address your concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?” By comparing the self-score to the vignette answers, I am able to identify whether the respondent displays any incomparability problems. As King et al. (2004, 191) write, “Because the actual (but not necessarily reported) levels of the vignettes are invariant over respondents, variability in vignette answers reveals incomparability.” Simple recodes of the data are then used to adjust the respondent’s score in response to incomparability problems. “The idea is to recode the categorical self-assessment relative to the set of vignettes” (King et al. 2004, emphasis original).

The implementation of this recoding depends on two main factors. First, survey vignettes are needed, and they can be expensive to implement because they take up a great deal of survey time and space. Thus, vignettes ought to be reserved for central concepts that are likely to display incomparability problems, such as efficacy (King and Wand 2007). Second, the chosen vignettes must be carefully constructed to obtain equivalence, meaning that all respondents must understand them in the same way (King et al. 2004). As a sensitivity check on vignette equivalence, I use the mean replies of the respondents to see if they display the assumed ordered nature (King et al. 2004). There are no problems; the sensitivity analysis confirmed the rank ordering of the vignettes (i.e., Amy was ranked highest, followed by John, Mike, then Sara and Stephanie). Furthermore, as an additional check on this assumption, I also explore whether business and nonbusiness respondents display systematic differences in their responses to the vignettes. The data display no statistical evidence of this type of bias: difference of means tests show no difference between business and nonbusiness respondents on the adjusted efficacy variable.
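For readers who wish to reproduce these two diagnostics, they reduce to a few lines of code. The sketch below is a minimal illustration, assuming the responses sit in a pandas DataFrame; the column names are hypothetical placeholders, not the study’s actual variable names.

```python
# A sketch of the two equivalence checks described above. Column names
# ("vignette_stephanie", ..., "internal_efficacy", "business") are
# hypothetical placeholders for the study's actual variables.
import pandas as pd
from scipy import stats

# Vignette columns listed from the least to the most efficacious "true" level
VIGNETTES = ["vignette_stephanie", "vignette_sara", "vignette_mike",
             "vignette_john", "vignette_amy"]

def vignette_means_ordered(df: pd.DataFrame) -> bool:
    """Check that mean vignette ratings reproduce the assumed rank order."""
    means = df[VIGNETTES].mean()
    return bool(means.is_monotonic_increasing)

def business_difference_test(df: pd.DataFrame):
    """Difference-of-means test on adjusted efficacy, business vs. others."""
    biz = df.loc[df["business"] == 1, "internal_efficacy"].dropna()
    non = df.loc[df["business"] == 0, "internal_efficacy"].dropna()
    return stats.ttest_ind(biz, non, equal_var=False)  # Welch's t-test
```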

Figure 1 provides a simplified example that yields insight into this process. It portrays two actual survey respondents. Respondent 1 correctly orders the anchoring vignettes and uses the full five-point scale: Stephanie is ranked the lowest, Mike is ranked in the middle, whereas Amy is the most efficacious. The respondent’s self-score is a 4. Given that Respondent 1 displays no problems of incomparability relative to the vignettes, his or her self-score is not adjusted. In contrast, in the middle columns we see original survey responses of Respondent 2. Respondent 2 also correctly orders the vignettes, but he or she does so by compressing the scale.8 This is a common cause of interpersonal incomparability. He or she ranks his or her self-score as a 3, but also ranks Amy’s anchoring vignette as a 3. Given we know that Amy’s “true” score is a 5, an adjustment is made to correct for the incomparability. The third column displays the correction for Respondent 2.

Figure 1

Comparing and Adjusting Survey Responses Due to Interpersonal Incomparability Problems

Note: See text for data description and additional details. Figure 1 uses a display technique similar to those presented in King et al. (2004).
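A minimal sketch of the recode itself appears below, using Figure 1’s two respondents as test cases. The handling of ties and out-of-range self-scores is an illustrative assumption on my part (the half-point “between” categories are consistent with the adjusted variable described later), not a verbatim restatement of the article’s procedure.

```python
# A minimal sketch of the vignette-based recode, assuming vignette answers
# are supplied from the vignette with the lowest true efficacy (Stephanie,
# true level 1) to the highest (Amy, true level 5). Tie and "between"
# conventions here are assumptions, not the article's exact rules.

def adjust_self_score(self_score: float, vignette_reports: list) -> float:
    """Recode a self-assessment relative to a respondent's vignette answers."""
    J = len(vignette_reports)
    if self_score >= vignette_reports[-1]:     # at or above the top vignette
        return float(J)
    if self_score < vignette_reports[0]:       # below the lowest vignette
        return 1.0                             # floor assumed at the scale minimum
    for j in range(J):
        if self_score == vignette_reports[j]:  # tied with vignette j + 1
            return float(j + 1)
        if self_score < vignette_reports[j]:   # between vignettes j and j + 1
            return j + 0.5
    return float(J)

# Figure 1's respondents (the three ratings shown there are extended to all
# five vignettes for illustration):
print(adjust_self_score(4, [1, 2, 3, 4, 5]))   # Respondent 1: unchanged, 4.0
print(adjust_self_score(3, [1, 1, 2, 2, 3]))   # Respondent 2: corrected to 5.0
```

Respondent 2 rates the most efficacious vignette no higher than his or her own self-score, so the self-assessment is moved to the top of the vignettes’ true scale, matching the correction shown in Figure 1.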

The adjusted variable, Internal Efficacy, serves as the dependent variable in the first set of analyses. There are six predictor variables, all of which tap a respondent’s technical and information capacity. These include the respondent’s Education level, a four-point variable running from high school graduate or below, through some college and college graduate, to a graduate degree. The respondent’s Use of Technical Information captures the degree to which the respondent used technical arguments during his or her participation on the sample rule. This five-point measure is drawn from the survey question: “How much did your participation on this rule focus on the technical details: none, a little, some, a lot, or entirely?” Additionally, I include a variable capturing whether or not the respondent Shared Data. This dichotomous measure scores a one if the respondent shared any health or health policy data, or any non–health related data or information, with state agency officials during his or her rule participation. Finally, I include three dichotomous constructs tapping the following professional sectors: Government (including local, state, or federal government), Nonprofit Organizations, and Business (i.e., private, for-profit company).9 These variables gauge whether or not an individual represented one of these professional areas and, therefore, provide proxy measures for the policy-relevant information that select participants may bring to rulemaking.

I also include a host of control variables in the analyses. I include the respondent’s Gender, Age, and whether or not he or she lives in the Capital City Area. I also include the respondent’s Strength of Partisanship, which is measured with independents scoring a zero, weak partisans scoring a one, and strong partisans scoring a two. Additionally, I include Party I.D., which measures the party of the survey respondent, with a one equaling a Democrat and a zero equaling a Republican or Independent.10 Finally, I include one of two control measures for accountability, which assess the larger political environment.11 For Legislative Accountability, respondents were asked to rate the level of influence that members of the Wisconsin State Legislature or their staff had on the content of the sample rule. Legislative Accountability is an ordinal variable that scores a one for no influence, a two for little influence, a three for moderate influence, a four for very large influence, and a five for extremely large influence. Gubernatorial Accountability is an ordinal variable on the same scale, which measures the influence of the governor and the governor’s staff.12

To evaluate further the robustness of the article’s main results, I introduce several additional control variables on a one-by-one basis to the models. For instance, I include State Rulemaking Experience, a five-point variable drawn from the following question: “In the last five years, how often have you participated in other Wisconsin rulemakings: never, rarely, sometimes, very often, or extremely often?”13
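To make these variable constructions concrete, the sketch below shows how a few of the measures could be coded from raw survey answers. All column names and answer strings are hypothetical stand-ins for the instrument’s actual wording.

```python
# A sketch of coding survey items into model variables; every column name
# and answer string below is a hypothetical placeholder.
import pandas as pd

def code_variables(raw: pd.DataFrame) -> pd.DataFrame:
    df = pd.DataFrame(index=raw.index)
    # Four-point Education scale, high school or below through graduate degree
    edu_levels = ["high school or below", "some college",
                  "college graduate", "graduate degree"]
    df["education"] = raw["education"].map(
        {level: i + 1 for i, level in enumerate(edu_levels)})
    # Shared Data: one if any health or non-health data were shared
    df["shared_data"] = ((raw["shared_health_data"] == "yes") |
                         (raw["shared_other_data"] == "yes")).astype(int)
    # Strength of Partisanship: independents 0, weak partisans 1, strong 2
    df["partisan_strength"] = raw["partisanship"].map(
        {"independent": 0, "weak": 1, "strong": 2})
    return df
```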

Part 1: Internal Efficacy—Results

I use these data and variables to explore the first research question, which centers on the degree of perceived “meaningful voice” one has, or can have, during agency rulemaking. I find, on average, a high level of efficacy expressed during state agency rulemaking. The Internal Efficacy variable’s mean is 3.6, suggesting that respondents generally believe they have “a lot of say” in getting a state agency to address their concerns during rulemaking. This implies that, for this sample, participating in rulemaking is often an efficacious act.14 The variable ranges between one and five, and its standard deviation is 1.0. Additionally, it is worth emphasizing the value obtained from addressing the incomparability problems in the data. If I had not employed the anchoring vignettes—and thereby not undertaken this extra step in measurement—then, in all likelihood, I would have drawn incorrect conclusions about the average rate of participant efficacy in these data. In fact, the nonadjusted score averaged a 2.6, which is closer to “some say” on the measure. Thus, the nonadjusted score is a full point lower than the adjusted Internal Efficacy variable—a 20% difference.15 Descriptive statistics for all model variables are included in table 1.

Table 1

Internal Efficacy Analyses—Descriptive Statistics

Variable                          Mean      Standard Deviation   Minimum   Maximum
Dependent variable
 Internal efficacy                3.587     1.031                1.000     5.000
Predictor variables: capacity
 Education                        3.449     0.704                1.000     4.000
 Shared data                      0.661     0.474                0.000     1.000
 Use of technical information     2.984     1.253                1.000     5.000
 Business                         0.083     0.277                0.000     1.000
 Government                       0.255     0.437                0.000     1.000
 Nonprofit organizations          0.263     0.441                0.000     1.000
Controls
 Gender                           1.674     0.469                1.000     2.000
 Age                              51.886    10.472               24.000    83.000
 Capital city area                0.235     0.424                0.000     1.000
 Partisanship strength            0.601     0.490                0.000     1.000
 Party I.D.                       0.623     0.485                0.000     1.000
 Legislative accountability       2.679     1.156                1.000     5.000
 Gubernatorial accountability     2.420     1.216                1.000     5.000
 State rulemaking experience      2.397     1.015                1.000     5.000

Note: Data are described in the article’s text.

The hypothesis regarding the determinants of internal efficacy is investigated in table 2. I theorized that participant information capacity is a key determinant of efficacy. Thus, across the table, I expect the capacity variables to be positive and significant predictors. Given the ordered nature of the Internal Efficacy variable, I employ ordered probit estimation with standard errors clustered by rule.16
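Before turning to the table, a minimal sketch of this estimation strategy is shown below, assuming the coded variables sit in one DataFrame. Column names are hypothetical; the article’s rule-clustered standard errors are noted but not implemented here, since readers would need to verify whether their statsmodels version supports a cluster-robust covariance for OrderedModel or add a sandwich adjustment themselves.

```python
# A sketch of a Model 1-style ordered probit; column names are hypothetical
# and the article's clustering by rule is not reproduced here.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

PREDICTORS = ["education", "shared_data", "use_technical_info", "business",
              "government", "nonprofit", "gender", "age", "capital_city",
              "partisan_strength", "party_id", "legislative_accountability"]

def fit_internal_efficacy(df: pd.DataFrame):
    endog = df["internal_efficacy"].astype(
        pd.CategoricalDtype(ordered=True))        # ordered outcome categories
    model = OrderedModel(endog, df[PREDICTORS],
                         distr="probit")          # ordered probit link
    return model.fit(method="bfgs", disp=False)
```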

Table 2

Drivers of Participant Efficacy During Agency Rulemaking

Predictors                      Model 1     Model 2     Model 3     Model 4     Model 5     Model 6
(A) Capacity
 Education                      .007        −.011       −.009       .050        .024        .017
                                (.081)      (.077)      (.083)      (.074)      (.078)      (.083)
 Shared data                    .403**      .389**      .272*       .384**      .374**      .270*
                                (.153)      (.157)      (.155)      (.144)      (.143)      (.145)
 Use of technical information   .155**      .142**      .134**      .174**      .162**      .157**
                                (.059)      (.058)      (.053)      (.061)      (.060)      (.055)
 Business                       .497**      .462**      .400**      .487**      .434**      .373**
                                (.188)      (.183)      (.194)      (.198)      (.178)      (.188)
 Government                     −.012       .018        .043        −.073       −.032       −.021
                                (.111)      (.109)      (.105)      (.096)      (.100)      (.094)
 Nonprofit organizations        .240        .192        .143        .239*       .189        .140
                                (.166)      (.172)      (.169)      (.148)      (.139)      (.138)
(B) Controls
 Gender                         −.094       −.059       .010        −.120       −.087       −.007
                                (.145)      (.150)      (.155)      (.136)      (.164)      (.167)
 Age                            .000        .002        −.001       .000        .003        .001
                                (.006)      (.006)      (.007)      (.007)      (.007)      (.007)
 Capital city area              −.152       −.268       −.266       −.216       −.336*      −.319*
                                (.190)      (.198)      (.195)      (.188)      (.189)      (.185)
 Partisanship strength          −.038       −.057       .070        −.065       −.083       −.103
                                (.154)      (.145)      (.146)      (.147)      (.138)      (.140)
 Party I.D.                     −.016       .102        .076        −.034       .076        .050
                                (.104)      (.111)      (.120)      (.110)      (.108)      (.116)
 Legislative accountability     −.058       −.035       −.044       —           —           —
                                (.046)      (.047)      (.047)
 Gubernatorial accountability   —           —           —           −.016       .006        −.017
                                                                    (.045)      (.050)      (.057)
(C) Agency fixed effects        —           Yes         Yes         —           Yes         Yes
(D) Additional variable
 State rulemaking experience    —           —           .228**      —           —           .180**
                                                        (.061)                              (.073)
Sample size                     302         302         301         284         284         283
Wald chi2; Prob > chi2          130.3; 0.0  344.8; 0.0  471.9; 0.0  167.5; 0.0  500.0; 0.0  688.6; 0.0

Note: Data are described in the article’s text. Ordered probit coefficients are displayed with standard errors (clustered by rule) in parentheses. Statistical significance is established by **p ≤ .05, *p < .10, with two-tailed tests employed. Model cut points are: (1) −1.537; −1.159; −0.830; −0.425; 0.122; 0.592; 1.221; and 1.366; (2) −1.586; −1.199; −0.855; −0.454; 0.101; 0.576; 1.216; and 1.365; (3) −1.251; −0.863; −0.536; −0.126; 0.438; 0.924; 1.586; and 1.741; (4) −1.313; −0.958; −0.541; −0.102; 0.444; 0.947; 1.599; and 1.729; (5) −1.429; −1.064; −0.641; −0.195; 0.357; 0.864; 1.530; and 1.664; and (6) −1.172; −0.814; −0.417; 0.034; 0.592; 1.107; 1.792; and 1.930.

Models 1–3 explore the theorized relationships with the inclusion of the Legislative Accountability control measure. Model 1 provides a basic specification, whereas Model 2 incorporates state agency fixed effects. Model 3 introduces a further control measure. Across the first three models, half of the capacity constructs are statistically significant, and all significant variables run in the expected direction. For instance, the Use of Technical Information is significant across the models, with Model 1 probabilities suggesting that moving from participation not at all focused on technical details to participation entirely focused on technical details increases the probability of scoring at a four or above on the dependent variable by 24%. Similarly, a respondent who shared data and information with agency regulators as part of his or her rulemaking participation was 15% more likely to report “a lot of say” or “unlimited say” over getting a state agency to address his or her concerns during rulemaking. The results for Education, however, provide no support for my argument. This variable is insignificant across table 2.

Only one of the three capacity variables tapping professional sector is significant: Business. The participation of government and nonprofit organization representatives is insignificant. The Business results suggest that a connection to the private sector may bring about higher levels of perceived “meaningful say.” I generated the predicted probabilities associated with this variable to get a better sense of its substantive effect.17 Moving from a zero to a one on the Business variable in Model 1, for instance, increases the probability of scoring a four or above on the dependent variable (which equates with “a lot of say” or more over rulemaking) by 19%.18

Models 4–6 present the results with the inclusion of the Gubernatorial Accountability control measure. Here the findings are largely analogous to the earlier models. Business remains statistically significant and meaningful, with probability estimates drawn from Model 5 suggesting that being a representative of a for-profit business interest increases a participant’s perception that he or she has at least “a lot of say” over rulemaking by 16%. Also using the Model 5 estimates, the Use of Technical Information augments the probability of scoring at a four or above on the dependent variable by 24%, whereas Shared Data increases the probability of reporting between “a lot of say” and “unlimited say” by 14%. In Model 4, nonprofit organization representatives report higher efficacy scores; however, this finding fades with the inclusion of the agency fixed effects.

Overall, I demonstrate some support for the theorized relationships in these state health rulemaking data. Put differently, these results suggest that a participant’s capacity—in terms of his or her knowledge and expertise as applicable to agency rulemaking—can be an important predictor of perceived “voice” during agency rulemaking. In particular, the results demonstrate that Internal Efficacy is driven by the ability to use technical information and data, as well as by a background in the private sector, which, according to Jewell and Bero (2007), equates with making more information-driven arguments during agency rulemaking. A respondent’s general Education level, however, does not appear important within the context of internal efficacy during rulemaking. Similarly, although nonprofit organization representatives do show some signs of the hypothesized relationship, the lack of significance across model specifications provides less support for the robustness of this pattern.
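The predicted-probability contrasts reported above can be recovered from a fitted ordered probit by toggling the covariate of interest and averaging over the sample, as sketched below. This is an average-over-observations approach under my own assumptions; the article’s exact computation (e.g., holding other covariates at particular values) may differ, and how many adjusted-scale categories correspond to “a lot of say” or more is left as a parameter.

```python
# A sketch of the predicted-probability contrast for Business. `res` is a
# fitted OrderedModel result and `predictors` the list of model columns
# (both from the earlier sketch); "top_categories" is an assumption about
# which outcome categories count as "a lot of say" or more.
import numpy as np

def business_contrast(res, df, predictors, top_categories: int = 2) -> float:
    avg_top = {}
    for value in (0, 1):
        X = df[predictors].copy()
        X["business"] = value                 # counterfactual toggle
        probs = np.asarray(res.predict(X))    # one column per outcome category
        avg_top[value] = probs[:, -top_categories:].sum(axis=1).mean()
    return avg_top[1] - avg_top[0]            # change in Pr(top categories)
```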

Part 2: External Efficacy—Variables and Methods

I now explore the concept of external efficacy during agency rulemaking. Recall that external efficacy focuses not on the self, but on the general belief that government institutions and authorities will be responsive to the stated demands made to them. In the context of this article, I am thus investigating whether public participants in rulemaking believe state agencies to be responsive to those parties active during rulemaking. I theorized that the same factors driving internal self-efficacy also drive perceptions of external efficacy. Given the significance of the Business measure on Internal Efficacy established in table 2, and given disagreements regarding the degree of business influence in the broader literature (Golden 1998; Yackee and Yackee 2006), my focus in this section is on testing whether a perceived business advantage exists in perceptions of government responsiveness to participant activity during rulemaking.

To assess this, I designed an experiment embedded within the previously described survey. The respondents were first told: “In this section, I will describe a hypothetical scenario. Some parts of the scenario may strike you as important; other parts may seem unimportant.” The full vignette is provided in Appendix B. The respondents were informed, “A Wisconsin state agency is developing a rule that will address the topic of new cancer treatments.” Respondents were also told, “At the proposed rule’s public hearing, about 100 individuals testify either in support or against the rule.” A key randomization was then implemented: approximately half of the respondents were told that the general public made up the majority of hearing participants, whereas the others were told that insurance industry representatives were the main participants.19 The vignette closes by asking, “Given these facts, how much influence do you believe the following actors had on the content of the rule? For each please tell me if they are likely to have no influence at all, only a little influence, a moderate amount of influence, a very large amount of influence, or an extremely large amount of influence.” The actors included citizens and business interests.

Before analyzing the effects of the randomization, I confirmed that the two treatment groups were balanced on several key demographic and political covariates. I specifically analyzed whether the groups exhibited similar patterns in terms of respondent gender, age, education, and party identification.20 I also analyzed whether the groups were balanced in terms of the respondents’ past experience with rulemaking in Wisconsin, as well as their assessments of success in influencing the underlying study rule. Given that the experiment is focused on business bias, I also checked for equality in employment in the business sector. In all cases, the results displayed no statistical or substantive differences on average. In short, the randomization was successful. Following Tomz and Weeks (2010), this holds implications for the method of data analysis: because the randomization succeeded, treatment effects can be estimated without elaborate modeling or numerous control variables. “Consequently, there is little need for elaborate statistical models with control variables. One can obtain unbiased estimates of treatment effects via cross-tabulation...” (Tomz and Weeks 2010, 11).
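In practice, this cross-tabulation analysis reduces to a contingency table and a chi-square test, as sketched below with hypothetical column names.

```python
# A sketch of the cross-tabulation analysis behind Tables 3 and 4, assuming
# one row per respondent with a treatment indicator ("insurance" vs.
# "public") and a five-category influence rating; names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def treatment_crosstab(df: pd.DataFrame, rating_col: str = "business_influence"):
    counts = pd.crosstab(df["treatment"], df[rating_col])     # raw counts
    chi2, pval, dof, _ = chi2_contingency(counts)
    row_pct = counts.div(counts.sum(axis=1), axis=0) * 100    # percentages, as in the tables
    return row_pct, chi2, dof, pval
```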

Part 2: External Efficacy—Results

Tables 3 and 4 display the experimental results. Overall, my expectation—that a perceived business advantage exists among participants active in rulemaking—is well supported in the survey experiment. Table 3 provides the results for the perceived influence of business interests. Business interests are seen as moderately to extremely influential over the rule’s content. Furthermore, the randomization appears to have important implications for the respondents’ assessments of state agency responsiveness. Respondents who were told that the hearing attendees largely came from the insurance industry were significantly more likely to rate business influence as having a “very large” (45%) or “extremely large” (29%) effect on the rule’s content. In contrast, those with the general public treatment were more likely to see business interests as holding only “moderate” influence. The chi-square statistic for the analysis indicates statistically significant differences in the respondents’ assessments as a result of the randomization.

Table 3

The Effect of Participant Type on Perceptions of Business Influence on Agency Rulemaking Outcomes

Business Influence   Insurance Industry (%)   Effect of Participant Type (%)   General Public (%)   Effect of Participant Type (%)
None                 1.0                      5.4                              3.1                  19.3
Little               4.4                                                       16.2
Moderate             20.7                     20.7                             42.9                 42.9
Very large           45.3                     73.9                             29.2                 37.9
Extremely large      28.6                                                      8.7

Note: The table provides the percentage of respondents who expressed each preference. The “Effect of Participant Type” columns combine adjacent response categories (none with little; very large with extremely large). The sample size was 164 for the general public treatment and 203 for the insurance industry treatment. The chi-square statistic for the table is 53.4 with four degrees of freedom (p value is .00). See the text for data information.

Table 4

The Effect of Participant Type on Perceptions of Citizen Influence on Agency Rulemaking Outcomes

Citizen influence      Insurance Industry (%)   Effect of Participant Type (%)   General Public (%)   Effect of Participant Type (%)
None                           7.4                         63.1                         4.9                    37.2
Little                        55.7                                                     32.3
Moderate                      25.6                         25.6                         40.2                   40.2
Very large                     9.4                         11.3                         18.3                   22.6
Extremely large                2.0                                                      4.3

Note: The table provides the percentage of respondents who expressed each preference; the “Effect of Participant Type” columns combine adjacent response categories (none with little, and very large with extremely large). The sample size was 164 for the general public treatment and 203 for the insurance industry treatment. The chi-square statistic for the table is 24.9 with four degrees of freedom (p value is .00). See the text for data information.

Table 4 displays the findings for the perceived influence of individual citizens on rule content. Overall, individual citizen influence is ranked lower than business interest influence. Furthermore, perceived citizen influence is best characterized as low among respondents receiving the insurance industry treatment (the combined “none” and “little” categories total over 63%), whereas it rises to “moderate” when respondents were told that the general public made up most of the hearing participants. The survey randomization is again statistically significant (per the chi-square statistic) across the table.

When taken as a whole, these experimental results largely match theoretical expectations, and this evidence suggests that a business advantage is present with regard to perceptions of government responsiveness during rulemaking in this sample of respondents. Stated differently, a participant’s belief that the US bureaucracy will be responsive to the policy demands put before it during agency rulemaking appears to be colored by the perception of who is making those demands, be it business interests or the general public.

CONCLUSION

The belief that those citizens who participate can, and do, have a meaningful say within their system of government is a central tenet of US political life and a fundamental topic of study within the field of public administration. Indeed, “efficacy is a key concept in theories of political participation and democratic governance” (Acock, Clarke, and Stewart 1985). Consequently, it is not surprising that a large literature has developed around the concept. What is surprising is that we do not know more about efficacy during agency rulemaking. After all, in the United States “rules govern the purity of the food that we eat, the water that we drink, and the air we breathe. . . .they determine much about the health care available to us and practices used in banking, industry, agriculture, and many other areas of economic life” (Rosenbloom 2003). Moreover, the legitimacy necessary for unelected policymakers (i.e., bureaucrats) to develop these critical policies rests, at least in part, upon the fact that agency officials are required to take feedback from concerned citizens while writing regulations (Croley 1998; Kerwin and Furlong 2011). Yet, as Ulbig (2008) writes, “A voice that is perceived to have no influence can be more detrimental than not perceiving a voice at all.”

In this article, I ask: “Do citizens active in agency rulemaking believe they have a meaningful “say” over government policy outputs? And if so, then why?” Answers to these research questions remain in doubt because scholars have often bypassed the study of efficacy within bureaucratic policymaking. Instead, as Coglianese (2003) concludes, scholarly evaluations have tended to focus on measures that assess citizen satisfaction with regulatory policymaking, or alternatively, research has focused on the influence of specific interest group lobbying tactics (Furlong 1997; Furlong and Kerwin 2005; Golden 1998; Kerwin and Furlong 2011; West 2004; Yackee and Yackee 2006), as opposed to the broader construct of participant efficacy during rulemaking.

This article brings new theoretical and empirical traction to the study of rulemaking and participant “voice.” I hypothesize that internal efficacy—which focuses on the perceived ability of the self to have meaningful influence—is driven by a rulemaking participant’s capacity to make persuasive arguments to public agencies, as well as conditioned by one’s perceptions of existing political accountability relationships. To assess the argument, I gather survey data from almost 400 respondents active across recent rulemakings in Wisconsin. I employ traditional methods to assess the research questions alongside two noteworthy design advancements—(1) anchoring vignettes to improve the measurement of efficacy (see King and Wand 2007; King et al. 2004) and (2) a randomized survey vignette experiment to investigate the external efficacy hypothesis.

I find high levels of internal efficacy. Respondents report, on average, “a lot of say” in getting state agencies to address their concerns during rulemaking. Using ordered probit models, I also find that several of my theorized drivers are predictors of political efficacy. In particular, the results demonstrate that representatives of business or industry report higher internal efficacy, and that respondents who share technically detailed information and data also perceive a more meaningful voice. For external efficacy, the randomized experiment confirms a statistically significant perceived advantage for business interests over individual citizens in influencing the content of rules.

This article and its empirical results yield several policy-relevant implications. For instance, the findings suggest that participating in rulemaking is generally believed to be an efficacious act. Put differently, these results demonstrate that public participants do not just perceive a “say”; on average, they perceive a “meaningful say” over regulatory outcomes. From a normative perspective, this suggests that a hallmark of the US system of governance—political efficacy—is present within agency rulemaking. The article’s results, however, are not exclusively optimistic. We now have empirical evidence that a perceived business advantage exists and appears to color perceptions of whose stated demands are met during rulemaking. Nevertheless, more work is needed to fully understand these relationships. For instance, the theorizing here ought to be reevaluated in additional contexts, which is critical to assessing the external validity of the findings. Thus, although the results in this article are drawn from six public agencies in a state that is looked upon as “average” on a number of rule-related indicators, expanding the data collection to other American states, the federal government, and other nations will push these findings in new directions. Moreover, the concentration on health regulation needs reevaluation in other policy areas, where different constellations of participants may alter the results. By taking these steps and others, scholars will gain even more ground toward fully understanding the degree of “meaningful say” engaged citizens hold during policy decision making across US agencies, as well as during critical bureaucratic decision making in other nations.

Appendix 1

Anchoring Vignettes

Vignette 1: “Amy” = 5

[Randomized name]’s father had a heart attack two years ago. [randomized gender (she/he)] knows that a state agency is proposing a new rule that will reduce heart disease. [randomized gender (she/he)] is organizing a major write-in campaign in support of the rule. Based on [randomized gender (her/his)] efforts, it appears that so many people feel the same way as [randomized gender (her/him)] that the agency will listen to [randomized gender (her/his)] concerns. How much say does [randomized gender (she/he)] have in getting a state agency to address [randomized gender (her/his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?

Vignette 2: “John” = 4

[Randomized name]’s father had a heart attack two years ago. [randomized gender (she/he)] has been asked by a coworker to attend an upcoming public hearing on a new proposed rule that will reduce heart disease. [randomized gender (she/he)] is planning on attending a hearing and will stand up to voice [randomized gender (her/his)] views on the proposed rule. How much say does [randomized gender (she/he)] have in getting a state agency to address [randomized gender (her/his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?

Vignette 3: “Mike” = 3

[Randomized name]’s father had a heart attack two years ago. [randomized gender (she/he)] read in the newspaper that a state agency is considering a proposed rule aimed at reducing heart disease. [randomized gender (she/he)] is planning on submitting a short public comment to the agency. How much say does [randomized gender (she/he)] have in getting a state agency to address [randomized gender (her/his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?

Vignette 4: “Sara” = 2

[Randomized name]’s father had a heart attack two years ago. [randomized gender (she/he)] heard from a colleague that a state agency is considering a rule that will reduce heart disease. [randomized gender (she/he)] plans to attend the hearing, but [randomized gender (she/he)] will sit at the back of the room and will not share [randomized gender (her/his)] views. How much say does [randomized gender (she/he)] have in getting a state agency to address [randomized gender (her/his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?

Vignette 5: “Stephanie” = 1

[Randomized name]’s father had a heart attack two years ago. [randomized gender (she/he)] would like the government to do something about heart disease, but [randomized gender (she/he)] has no idea of how to voice his or her concerns. So [randomized gender (she/he)] is silent, hoping something will be done in the future. How much say does [randomized gender (she/he)] have in getting a state agency to address [randomized gender (her/his)] concerns during rulemaking: no say at all, a little say, some say, a lot of say, or unlimited say?

Appendix 2

Survey Vignette Experiment

“In this section, I will describe a hypothetical scenario. Some parts of the scenario may strike you as important; other parts may seem unimportant. After describing it, I will ask you a few questions.

A Wisconsin state agency is developing a rule that will address the topic of new cancer treatments. The proposed rule is likely to be controversial and is over five pages long. It is written in nontechnical language and encourages public participation. At the proposed rule’s public hearing, about 100 individuals testify either in support or against the rule. The hearing lasts about three hours.

Many of the hearing participants are from [randomized: the insurance industry OR the general public]. A good deal of the testimony concentrates on [randomized: the general benefits or costs of the rule to society OR technical health and health policy related data and information related to the rule]. Several months have now passed, and the state agency has just announced the text of the Final Rule.

Given these facts, how much influence do you believe the following actors had on the content of the rule? For each please tell me if they are likely to have no influence at all, only a little influence, a moderate amount of influence, a very large amount of influence, or an extremely large amount of influence.”
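The two bracketed elements above imply a simple factorial randomization. A minimal sketch of one way to implement the assignment follows, assuming independent, equal-probability draws; the article confirms only that roughly half of the respondents received each participant-type condition.

```python
# Sketch of the vignette's two randomized elements: participant type and
# testimony content. Independent, equal-probability assignment is an
# assumption; only the participant-type split is described in the text.
import random

PARTICIPANTS = ["the insurance industry", "the general public"]
TESTIMONY = [
    "the general benefits or costs of the rule to society",
    "technical health and health policy related data and information related to the rule",
]

def assign_vignette(rng: random.Random) -> dict:
    """Draw one respondent's version of the hypothetical scenario."""
    return {
        "participants": rng.choice(PARTICIPANTS),
        "testimony": rng.choice(TESTIMONY),
    }

rng = random.Random(42)  # seeded for reproducibility
print(assign_vignette(rng))
```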

References

Aberbach, Joel D. 1990. Keeping a watchful eye: The politics of congressional oversight. Washington, DC: Brookings Institution.

Acock, Alan, Harold D. Clarke, and Marianne C. Stewart. 1985. A new model for old measures: A covariance structure analysis of political efficacy. Journal of Politics 47:1062–84.

Berry, Jeffrey M., Kent E. Portney, and Ken Thomson. 1993. The rebirth of urban democracy. Washington, DC: Brookings Institution Press.

Campbell, Angus, Gerald Gurin, and Warren E. Miller. 1954. The voter decides. Evanston, IL: Row, Peterson.

Carpenter, Daniel. 2010. Reputation and power: Organizational image and pharmaceutical regulation at the FDA. Princeton, NJ: Princeton Univ. Press.

Coglianese, Cary. 2003. Is satisfaction success? Evaluating public participation in regulatory policy making. In The promise and performance of environmental conflict resolution, eds. Rosemary O’Leary and Lisa Bingham, 69–83. Washington, DC: Resources for the Future.

Cooper, Terry L., Thomas A. Bryer, and Jack W. Meek. 2006. Citizen-centered collaborative public management. Public Administration Review 66:76–88.

Craig, Stephen C., Richard G. Niemi, and Glenn E. Silver. 1990. Political efficacy and trust: A report on the NES pilot study items. Political Behavior 12:289–314.

Croley, Steven P. 1998. Theories of regulation: Incorporating the administrative process. Columbia Law Review 98:1–168.

Cuéllar, Mariano-Florentino. 2005. Rethinking regulatory democracy. Administrative Law Review 57:412–99.

Davis, Kenneth Culp. 1978. Administrative law treatise, 2nd ed. San Diego, CA: K.C. Davis Publishing Co.

DeHoog, Ruth Hoogland, David Lowery, and William E. Lyons. 1990. Citizen satisfaction with local governance: A test of individual, jurisdictional, and city-specific explanations. Journal of Politics 52:807–37.

Elliott, E. Donald. 1992. Re-inventing rulemaking. Duke Law Journal 41:1490–96.

Epstein, David, and Sharyn O’Halloran. 1999. Delegating powers: A transaction cost politics approach to policy making under separate powers. New York: Cambridge Univ. Press.

Furlong, Scott R. 1997. Interest group influence on rule making. Administration & Society 29:325–48.

Furlong, Scott R. 1998. Political influence on the bureaucracy: The bureaucracy speaks. Journal of Public Administration Research and Theory 8:39–65.

Furlong, Scott R., and Cornelius M. Kerwin. 2005. Interest group participation in rule making: A decade of change. Journal of Public Administration Research and Theory 15:353–70.

Golden, Marissa Martino. 1998. Interest groups in the rule-making process: Who participates? Whose voices get heard? Journal of Public Administration Research and Theory 8:245–70.

Hammond, Thomas H., and Jack H. Knott. 1996. Who controls the bureaucracy? Presidential power, congressional dominance, legal constraints, and bureaucratic autonomy in a model of multi-institutional policymaking. Journal of Law, Economics, & Organization 12:119–66.

Iyengar, Shanto. 1980. Subjective political efficacy as a measure of diffuse support. Public Opinion Quarterly 44:249–56.

Jensen, Christian B., and Robert J. McGrath. 2011. Making rules about rulemaking: A comparison of presidential and parliamentary systems. Political Research Quarterly 64:656–67.

Jewell, Christopher, and Lisa Bero. 2007. Public participation and claimsmaking: Evidence utilization and divergent policy frames in California’s ergonomics rulemaking. Journal of Public Administration Research and Theory 17:625–50.

John, Peter. 2009. Can citizen governance redress the representative bias of political participation? Public Administration Review 69:494–503.

Kamieniecki, Sheldon. 2006. Corporate America and environmental policy: How often does business get its way? Stanford, CA: Stanford Law and Politics Press.

Kelleher, Christine A., and Susan Webb Yackee. 2006. Who’s whispering in your ear? The influence of third parties over state agency decisions. Political Research Quarterly 59:629–43.

Kerwin, Cornelius M., and Scott R. Furlong. 1992. Time and rulemaking: An empirical test of theory. Journal of Public Administration Research and Theory 2:113–38.

———. 2011. Rulemaking: How government agencies write law and make policy, 4th ed. Washington, DC: CQ Press.

King, Gary, Christopher J. L. Murray, Joshua A. Salomon, and Ajay Tandon. 2004. Enhancing the validity and cross-cultural comparability of measurement in survey research. American Political Science Review 98:191–207.

King, Gary, and Jonathan Wand. 2007. Comparing incomparable survey responses: Evaluating and selecting anchoring vignettes. Political Analysis 15:46–66.

Klyza, Christopher, and David Sousa. 2008. American environmental policy, 1990–2006: Beyond gridlock. Cambridge, MA: MIT Press.

Mashaw, Jerry L. 1985. Due process in the administrative state. New Haven, CT: Yale Univ. Press.

Meier, Kenneth J., Robert Wrinkle, and J. L. Polinard. 1995. Politics, bureaucracy, and agricultural policy: An alternative view of political control. American Politics Quarterly 23:427–60.

Naughton, Keith, Celeste Schmid, Susan Webb Yackee, and Xueyong Zhan. 2009. Understanding commenter influence during agency rule development. Journal of Policy Analysis and Management 28:258–77.

Nelson, David, and Susan Webb Yackee. 2012. Lobbying coalitions and government policy change. Journal of Politics 74:339–53.

Niemi, Richard G., Stephen C. Craig, and Franco Mattei. 1991. Measuring internal political efficacy in the 1988 National Election Study. American Political Science Review 85:1407–13.

Pierce, Richard. 1996. Rulemaking and the APA. Tulsa Law Journal 32:185–201.

Rosenbloom, David H. 2003. Administrative law for public managers. Boulder, CO: Westview Press.

Rossi, Jim. 1997. Participation run amok: The costs of mass participation for deliberative agency decisionmaking. Northwestern University Law Review 92:173–249.

Rourke, Francis E. 1984. Bureaucracy, politics, and public policy, 3rd ed. Boston, MA: Little, Brown.

Schwartz, Jason A. 2010. 52 experiments with regulatory review: The political and economic inputs into state rulemakings. New York: Institute for Policy Integrity, New York University School of Law.

Stewart, Richard. 2003. Administrative law in the twenty-first century. New York University Law Review 78:437–60.

Tomz, Michael, and Jessica L. Weeks. 2010. An experimental investigation of the democratic peace. Paper presented at the Annual Meeting of the American Political Science Association, September 2–5, 2010.

Ulbig, Stacy G. 2008. Voice is not enough: The importance of influence in political trust and policy assessments. Public Opinion Quarterly 72:523–39.

Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and equality: Civic voluntarism in American politics. Cambridge, MA: Harvard Univ. Press.

West, William F. 1995. Controlling the bureaucracy: Institutional constraints in theory and practice. Armonk, NY: M.E. Sharpe.

———. 2004. Formal procedures, informal processes, accountability, and responsiveness in bureaucratic policy making: An institutional policy analysis. Public Administration Review 64:66–80.

Wood, B. Dan. 1988. Principals, bureaucrats, and responsiveness in clean air enforcements. American Political Science Review 82:213–34.

Wood, B. Dan, and Richard W. Waterman. 1994. Bureaucratic dynamics: The role of bureaucracy in a democracy. Boulder, CO: Westview Press.

Woods, Neal D. 2009. Promoting participation? An examination of rulemaking notification and access procedures. Public Administration Review 69:518–30.

Yackee, Jason Webb, and Susan Webb Yackee. 2006. A bias toward business? Assessing interest group influence on the bureaucracy. Journal of Politics 68:128–39.

———. 2010. Is agency rulemaking “ossified”? Testing congressional, presidential, and judicial procedural constraints. Journal of Public Administration Research and Theory 20:261–82.

1

Foundational research for this article includes Furlong (1997, 1998), Furlong and Kerwin (2005), and Kerwin and Furlong (2011); yet, it is notable that these important pieces focus on the perceived influence of specific interest group lobbying tactics during the creation of government regulations, as opposed to overall assessments of participant efficacy during the regulatory process.

2

It is worth emphasizing here that political efficacy, although a close cousin to influence, is treated as conceptually distinct from influence in the scholarly literature because efficacy centers on an individual’s more general beliefs regarding the ability to effect change in government.

3

I build upon and extend the work of Furlong and Kerwin. However, it is also worth noting the innovations present in this article that make it a novel contribution. First, this article’s focus on an overall measurement of efficacy from all participants (i.e., not exclusively interest groups) during rulemaking is unique and important. In doing so, it contrasts with the existing literature, such as Furlong and Kerwin’s measurement of the perceived influence attached to specific interest group lobbying tactics. Second, the response rates in the existing literature do not allow for firm generalizations. Furlong (1997), for example, had a response rate of just over 8%, whereas Furlong and Kerwin (2005) report response rates between 15% and just over 25%. As I detail in the pages that follow, this article has a strong response rate (57%), as well as a statistical assessment of nonrespondents. Third, as described more fully in the “testing the argument” section below, I employ survey anchoring vignettes to gauge participant efficacy, which is an improved measurement strategy over existing work and makes a novel contribution to the public administration literature more broadly.

4

This lack of impact may result because of a robust policy agenda-setting stage taking place before the start of the notice and comment period (Naughton et al. 2009; West 2004).

5

For instance, a handful of rules received five or fewer participants, whereas one rule received 185.

6

A person who attended a hearing and submitted a comment to a rule was counted as a single eligible survey respondent for the rule.

7

Survey respondents and nonrespondents appear substantially similar on several known dimensions. For instance, the location of respondents (in the capital city area or not) was similar, with 25% of respondents coming from the Madison, Wisconsin, area versus 24% of nonrespondents. Similarly, the type of participation was analogous: 70% of respondents submitted a comment and 31% were present at a hearing, whereas 72% of nonrespondents submitted a comment and 28% were present at a hearing.

8

Across a large survey, ties (where a respondent ranks more than one vignette in the same way) and inconsistencies in rank ordering are expected. When the self-score matches a vignette score, I follow the example provided in figure 1 and adjust the self-score. When there are ties across the vignettes that correspond with the self-score, I take the simple average. For instance, if the respondent scores Amy as a 4 and John as a 4 while he or she provides the self-score of a 4, then I adjust the self-score to a 4.5 to account for the vignette tie. As suggested by King et al. (2004), inconsistencies in rank ordering are treated as ties.
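A hypothetical reconstruction of this adjustment rule is sketched below. Because the exact procedure follows figure 1 of the article, which is not reproduced here, the handling of a single (non-tied) match is an assumption.

```python
# One reading of the note's self-score adjustment (hypothetical
# reconstruction). Vignette "true" values run from Amy = 5 down to
# Stephanie = 1; ratings are the respondent's own scores of the vignettes.
def adjust_self_score(self_score, vignette_ratings, vignette_values=(5, 4, 3, 2, 1)):
    """Relocate a self-score onto the vignette-anchored scale."""
    # Vignettes the respondent rated the same as his or her self-score.
    tied_values = [value for rating, value in zip(vignette_ratings, vignette_values)
                   if rating == self_score]
    if not tied_values:
        return float(self_score)  # no match: the raw self-score stands
    return sum(tied_values) / len(tied_values)  # matches: simple average

# The note's example: Amy and John both rated 4, self-score of 4 -> 4.5.
print(adjust_self_score(4, [4, 4, 3, 2, 1]))  # prints 4.5
```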

9

Other choice categories on the employment question included: self-employed, family business or farm, and not working for pay.

10

Analogous results occur if Republicans are scored one, while Democrats and Independents are scored zero.

11

I do not include the two measures simultaneously because they are correlated at 0.66.

12

For all variables on the survey, there were also “don’t know” and “refused” options offered to the respondents. As a result of these options, the sample size is diminished somewhat in the article’s analyses. I investigated this issue and found that the majority of this reduction is caused by respondents answering “don’t know” on the legislative or gubernatorial rule influence question. Given this fact, I undertook a sensitivity analysis. In all cases, the respondents used in the analyses look similar to the “don’t know” respondents. For instance, the averages for Internal Efficacy, State Rulemaking Experience, and Business indicate no statistical or substantive differences.

13

I included a variety of other control variables in the analyses on a one-by-one basis as well, but do not formally present the results. For instance, I included measures tapping how the respondent participated (attended a hearing, submitted a public comment, or both). In other models, I included a formal participation scale, which ran from attended a rule’s hearing but did not testify on the one extreme, to testified at a hearing and submitted written comments on the other extreme. I also incorporated a variable tapping whether the respondent used ex parte lobbying tactics on the rule. I then controlled for the use of coordinated participation during rulemaking. All these robustness checks returned results that are substantively and statistically similar to those presented later in the article.

14

One may argue that this descriptive statistic is driven exclusively by the most experienced or savvy of the sample respondents. Yet, sensitivity analyses do not suggest this to be the case. To draw this conclusion, I re-estimated the Internal Efficacy descriptive statistics for the cases that most closely mimicked the characteristics of “average citizens” in the data. For instance, I first looked at the descriptive statistics for respondents who attended but did not testify at a rule’s hearing. I also investigated respondents who had no previous rulemaking experience, and those with no more than a high school education. Results are substantially similar.

15

Although the majority of the adjustments increased the magnitude of the Internal Efficacy measure, the score for some respondents did adjust downwards.

16

There are no problems with the parallel regression assumption in the basic model specifications; however, when additional control variables are included, the tests suggest a potential problem. In my assessments of this assumption, though, I was unable to cluster the standard errors. Consequently, as a further sensitivity test, I ran all models using ordinary least squares regression, which does allow for clustered standard errors. No concerns were uncovered; the results are analogous to those presented across table 1.
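A minimal sketch of this OLS robustness check follows, using synthetic data and hypothetical variable names; statsmodels supports clustered standard errors via cov_type="cluster".

```python
# Sketch of the OLS re-estimation with clustered standard errors described
# in this note. The data below are synthetic; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 388
df = pd.DataFrame({
    "internal_efficacy": rng.integers(1, 6, n).astype(float),  # 5-point outcome
    "business": rng.integers(0, 2, n),        # business/industry representative
    "technical_info": rng.integers(0, 2, n),  # shared technical information
    "rule_id": rng.integers(0, 40, n),        # cluster identifier (the rule)
})

X = sm.add_constant(df[["business", "technical_info"]])
res = sm.OLS(df["internal_efficacy"], X).fit(
    cov_type="cluster", cov_kwds={"groups": df["rule_id"]})
print(res.summary())
```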

17

All predicted probability calculations in the article are completed with dichotomous variables set to their modal values, whereas other variables are set to their means.
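To illustrate this convention, the sketch below computes ordered-probit predicted probabilities at a profile that sets dichotomous variables to their modes and continuous variables to their means; the coefficients, cutpoints, and profile values are hypothetical, not the article’s estimates.

```python
# Ordered-probit predicted probabilities at a modal/mean covariate profile.
# Coefficients, cutpoints, and the profile itself are hypothetical.
import numpy as np
from scipy.stats import norm

beta = {"business": 0.45, "technical_info": 0.30, "age": 0.01}  # hypothetical
cutpoints = np.array([-0.5, 0.4, 1.3, 2.4])  # 4 cuts -> 5 response categories
profile = {"business": 1, "technical_info": 1, "age": 52.0}  # modes / means

xb = sum(beta[k] * profile[k] for k in beta)  # linear predictor

# P(y = j) = Phi(c_j - xb) - Phi(c_{j-1} - xb), with c_0 = -inf, c_5 = +inf.
cdf = norm.cdf(np.append(cutpoints - xb, np.inf))
probs = np.diff(np.concatenate(([0.0], cdf)))
labels = ["no say", "a little say", "some say", "a lot of say", "unlimited say"]
print(dict(zip(labels, probs.round(3))))
```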

18

To assess the potential interaction between information use and employment sector, I multiplied the Use of Technical Information with Business, Government, or Non-Profit Organizations, respectively. Similarly, I multiplied Shared Data with these sector variables. I included the interaction effects on a one-by-one basis into all model specifications. No relationships were uncovered; the interaction effects were statistically insignificant across all models.

19

The insurance industry acts as a proxy for a business/industry representative in this context.

20

When comparing the groups with the business treatment to those with the general public treatment, the mean comparisons are: gender (1.65/1.70), age (52 years/52 years), education (3.45/3.44), five-point party identification (2.33/2.38), business representative (0.07/0.10), and past Wisconsin rulemaking experience (2.43/2.36).