Bayesian Graphical Entity Resolution Using Exchangeable Random Partition Priors

Entity resolution (record linkage or deduplication) is the process of identifying and linking duplicate records in databases. In this paper, we propose a Bayesian graphical approach for entity resolution that links records to latent entities, where the prior representation on the linkage structure is exchangeable. First, we adopt a flexible and tractable set of priors for the linkage structure, which corresponds to a special class of random partition models. Second, we propose a more realistic distortion model for categorical/discrete record attributes, which corrects a logical inconsistency with the standard hit-miss model. Third, we incorporate hyperpriors to improve flexibility. Fourth, we employ a partially collapsed Gibbs sampler for inferential speedups. Using a selection of private and nonprivate data sets, we investigate the impact of our modeling contributions and compare our model with two alternative Bayesian models. In addition, we conduct a simulation study for household survey data, where we vary distortion, duplication rates and data set size. We find that our model performs more consistently than the alternatives across a variety of scenarios and typically achieves the highest entity resolution accuracy (F1 score). Open source software is available for our proposed methodology, and we provide a discussion regarding our work and future directions.


Introduction
As commonly known in the literature, entity resolution (ER; record linkage or de-duplication) is the process of taking large, noisy (dirty) databases and removing duplicate records (often in the absence of a unique identifier) (Doan et al., 2012; Elmagarmid et al., 2007; Naumann and Herschel, 2010; Getoor and Machanavajjhala, 2012; Christen, 2012a; Christophides et al., 2020; Ilyas and Chu, 2019; Papadakis et al., 2021; Binette and Steorts, 2022). This problem has become increasingly important in many fields, such as survey methodology, official statistics, computer science, political science, health care, human rights, and others. In this paper, we are motivated by several applications. For example, we consider a longitudinal health care survey, where information is categorical in nature due to privacy restrictions on the data set. This may be of interest to those in the survey methodology community who may face similar issues. In addition, we consider categorical and string- or text-based data sets (or surveys) such as bibliographic/citation documents, information from restaurants, and a traditional benchmark (synthetic) study. The goal of analyzing multiple data sets is to make the survey community more aware of data sets that are relevant for entity resolution methods. Other overarching goals are to extend recent Bayesian graphical ER methodology for these data sets, provide comparative analyses, simulation studies, and guidance to researchers. Moreover, we provide open-source software for the community for our proposed method and two recently proposed methods in the literature.
The idea of entity resolution dates back to Dunn (1946), who envisioned a "book of life" that would piece together information about an individual. Newcombe et al. (1959) proposed one of the first methods for performing ER, based on a heuristic statistical test. This method was later formalized by Fellegi and Sunter (1969), who developed a model based on agreement patterns between pairs of records, and a likelihood ratio test for classifying pairs as linked (referring to the same entity), possibly linked or non-linked. Under some strong assumptions, they showed that their method -now known as the Fellegi-Sunter (FS) method -is statistically optimal. The FS method has been advanced over many decades in the ER literature, especially in survey methodology due to its scalability, ease of use and simplicity (Winkler, 2006;Christen, 2012a;Sadinle and Fienberg, 2013;Enamorado et al., 2019). However, it has some inherent limitations: it makes inconsistent (intransitive) predictions, it does not naturally account for uncertainty, it cannot exploit patterns at the entity-level, and it is incompatible with generative modeling approaches (Tancredi and Liseo, 2011).
Some of these limitations can be addressed by adapting the FS method to a Bayesian setting, while also imposing consistency constraints on the links between records (Sadinle, 2014, 2017). For example, Sadinle (2014) proposed a Bayesian extension of the FS model for performing ER within a single database. It incorporates consistency (transitivity) constraints by requiring that records are partitioned into groups that are mutually linked. In addition, the model supports multiple levels of agreement, and incorporates priors on the links and the m/u probabilities (from the FS model). In contrast with traditional FS methods, quantification of ER uncertainty is possible by computing the posterior distribution on the links. However, despite these benefits, the model has not been widely examined in the literature, perhaps in part due to the lack of a publicly-available implementation. One of our goals in this paper is to evaluate Sadinle's model as a representative example of a Bayesian FS model, and compare it to another class of Bayesian entity resolution models, which we now describe.
In parallel to developments in Bayesian FS models, others have proposed a new class of generative models called Bayesian graphical entity resolution models (Tancredi and Liseo, 2011; Steorts, 2015; Steorts et al., 2016). In contrast with FS methods, these models do not operate on agreement patterns between pairs of records. Instead, they model a latent population of entities and the process by which records are generated from entities. They are known as "graphical" models because the fundamental objects in the model form a bipartite graph - the latent entities correspond to one set of vertices, the records correspond to another set of vertices, and the links are edges that connect the vertices. Tancredi and Liseo (2011) proposed one of the first models of this kind for performing ER across two databases. Subsequently, Steorts et al. (2016) proposed an extension to multiple databases, while optionally allowing for duplicates within each database. However, Steorts et al. (2016) discovered a limitation of their model and the model by Tancredi and Liseo (2011) - the uniform prior on the linkage structure is highly informative about the number of entities present in the data. They noted that future work ought to consider more appropriate priors on the linkage structure for ER.
The models by Tancredi and Liseo (2011) and Steorts et al. (2016) both assume entities are described by a set of latent categorical attributes (e.g., date of birth, gender) which are distorted in the records. However, their model of the distortion process is simple and is unable to capture realistic distortions in string-type attributes, such as names. Steorts (2015) addressed this problem by proposing a string pseudo-likelihood and an empirically-motivated prior in a model known as blink. The blink model was later used as a foundation by Marchant et al. (2021) for developing more scalable Bayesian graphical ER techniques. They proposed an end-to-end method that jointly performs blocking and ER, where inference can be distributed or parallelized at the block level. Importantly, this enables propagation of blocking uncertainty to the ER task. They observed a 200× speed-up, which allowed them to scale blink to a data set containing over one million records. However, the blink model uses the same uniform prior on the linkage structure as Tancredi and Liseo (2011) and Steorts et al. (2016), and suffers from the same limitations.
Motivated by the shortcomings of existing Bayesian graphical ER models, we propose and evaluate several modeling refinements in this paper. First, we propose a flexible and tractable set of priors for the linkage structure that are the Ewens-Pitman (EP) family of random partition models (Pitman, 2006). These are the most general family of priors that satisfy exchangeability (an elementary requirement) and they are more flexible than the uniform priors used in previous work. Second, we incorporate hyperpriors on the EP parameters to further increase flexibility and reduce the need for tuning. This is motivated by the informativeness of the uniform prior used in previous work (Steorts et al., 2016). Third, we propose a more nuanced distortion model for categorical/discrete attributes, which corrects an inconsistency with the standard hit-miss model used by Tancredi and Liseo (2011); Steorts et al. (2016); Steorts (2015). Fourth, we design a partially collapsed Gibbs sampler to fit our model which incorporates computational optimizations.
We evaluate our modeling contributions independently and jointly on a selection of private and non-private data sets, and compare our model with the Bayesian graphical ER model by Steorts (2015) and the Bayesian FS model by Sadinle (2014). We also evaluate our model (and the alternatives) in a controlled simulation study, where we generate synthetic household survey data, with varying numbers of records, levels of distortion, and rates of duplication. Overall, we find our model is more robust across the various scenarios tested, and it typically achieves superior ER accuracies. We provide open source software for all of the ER methods under evaluation, and we provide a discussion of our contributions and directions for future work.
The rest of the paper proceeds as follows. Section 2 provides background on ER and exchangeable random partitions and outlines notation used throughout the paper. Section 3 outlines our proposed Bayesian graphical ER model. Section 4 presents a partially collapsed Gibbs sampling algorithm for approximating the posterior distribution and other computational speedups. Section 5 presents an empirical study of our proposed distortion model and linkage structure priors and includes a comparison to two recent Bayesian ER models. Section 5.5 summarizes a controlled simulation study that is in the Appendix. Section 6 summarizes our findings.

Background and Notation
In this section, we provide notation, assumptions, and a review of exchangeable random partitions which are used as a prior in our model. Figure 2 includes an index of symbols used throughout the paper.

Notation and Assumptions
We review notation and assumptions used throughout the paper. We assume the data (from one or more sources) is structured, meaning that it has been standardized using schema alignment techniques. For the purposes of our paper, the data is represented in a tabular format, where rows correspond to records and columns correspond to attributes. This is in contrast to unstructured entity resolution, which deals with textual descriptions (paragraphs) or images. For a full review of these terms, see Papadakis et al. (2021).
Let s ∈ {1, . . . , S} be an index over the data sources and i ∈ {1, . . . , N} be an index over the records, which is unique across all sources. The source of the i-th record is denoted by s_i ∈ {1, . . . , S} and the record's attribute values are represented as a tuple x_i = (x_{i1}, . . . , x_{iA}) indexed by a ∈ {1, . . . , A}. Assume x_{ia} ∈ D_a for all i and a, where the domain of the a-th attribute D_a is a finite set of strings. Suppose there exists a (possibly infinite) population of entities indexed by j ∈ ℕ, which is represented in the data. Denote the entity referenced in the i-th record by λ_i ∈ ℕ and define the linkage structure as Λ = (λ_1, . . . , λ_N).
We consider the most general case where there are no constraints on Λ - i.e., we permit duplicates within sources and arbitrary links across sources. The linkage structure induces a partition of the records into clusters. We label the clusters according to their associated entities, allowing for empty clusters. The size of cluster j is denoted by n_j = Σ_{i=1}^{N} 1[λ_i = j] and the number of non-empty clusters is denoted by K = Σ_j 1[n_j ≥ 1]. We are interested in the fully unsupervised setting where no information is known about the linkage structure or the entities. Our goal is to infer the linkage structure based solely on the observed record attributes {x_1, . . . , x_N} and source identifiers {s_1, . . . , s_N}. Since we are working in a Bayesian setting, we seek a full posterior (not merely a point estimate) over the linkage structure so that uncertainty can be propagated to post-ER tasks, which may include regression, multiple systems estimation, among other examples (Kaplan et al., 2022; Tancredi and Liseo, 2015; Steorts et al., 2018; Tancredi et al., 2020; Sadinle, 2018). While the post-ER task is not a goal of this paper, the previous references propose recent approaches to such tasks.
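To make the bookkeeping concrete, the following minimal Python sketch (our own illustration; the function names are hypothetical and not part of the released software) computes the cluster sizes n_j and the number of non-empty clusters K from a linkage structure Λ:

```python
from collections import Counter

def cluster_sizes(linkage):
    """Cluster sizes n_j for each entity label j appearing in the linkage structure."""
    return Counter(linkage)

def num_entities(linkage):
    """Number of non-empty clusters K induced by the linkage structure."""
    return len(set(linkage))

# Five records linked to three distinct entities (labels need not be contiguous).
linkage = [7, 7, 2, 5, 2]
sizes = cluster_sizes(linkage)
print(sizes[7], sizes[2], sizes[5])  # 2 2 1
print(num_entities(linkage))         # 3
```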

Exchangeable Random Partitions
The linkage structure is the primary variable of interest for entity resolution, so we pay special attention to it when designing our model. Since we are working in a Bayesian setting, we must specify a prior on the linkage structure. We previously noted that the linkage structure can be interpreted as a partition of the records into subsets, where the records in each subset correspond to the same entity. This interpretation is convenient, as we can draw on related work on random partitions when considering potential priors. In this section, we review a special class of random partitions called the Ewens-Pitman (EP) family (Pitman, 2006, p. 62), which we use as a prior on the linkage structure in our model (see Section 3).
Before defining the EP family of random partitions, we define key concepts and notation. A partition of the records {1, . . . , N} is a collection of non-empty, disjoint subsets whose union is {1, . . . , N}. We can equivalently define a partition in terms of the linkage structure Λ = (λ_i)_{i=1...N}, where λ_i labels the subset (entity) record i is assigned to. Using this notation, the partitions {{1, 2}, {3}} and {{1}, {2}, {3}} for N = 3 could be written as Λ = (1, 1, 2) and Λ = (1, 2, 3).
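The labels themselves carry no meaning; only the induced grouping matters. A small Python sketch (ours, for illustration) canonicalizes labels by order of first appearance, so two linkage structures can be tested for inducing the same partition:

```python
def canonical_labels(linkage):
    """Relabel entities in order of first appearance, so that any two
    linkage structures inducing the same partition map to the same tuple."""
    mapping, out = {}, []
    for label in linkage:
        if label not in mapping:
            mapping[label] = len(mapping) + 1
        out.append(mapping[label])
    return tuple(out)

# (1, 1, 2) and (9, 9, 4) encode the same partition {{1, 2}, {3}}.
assert canonical_labels((1, 1, 2)) == canonical_labels((9, 9, 4)) == (1, 1, 2)
# (5, 1, 8) encodes the all-singletons partition {{1}, {2}, {3}}.
assert canonical_labels((5, 1, 8)) == (1, 2, 3)
```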
Let P_[N] denote the set of all partitions of [N] = {1, . . . , N}. A random partition of [N] is a random variable whose values lie in P_[N]. (One can use any labels to identify the entities; all that matters is that λ_i = λ_{i′} if records i and i′ are assigned to the same entity and λ_i ≠ λ_{i′} otherwise.) The EP family are the most general class of random partitions that satisfy the following two desirable properties:

1. Exchangeability. This means the distribution P_N over the partitions P_[N] is invariant under permutations of the record identifiers [N]. Or equivalently, the distribution over Λ is exchangeable. This is a reasonable requirement if the records have no natural ordering - e.g., it is not known whether one record was generated before or after another.

2. Consistency. This is a property of the distribution as N varies. It ensures the distribution is not altered when more records are observed. This is desirable because the model can be learned sequentially in a consistent manner. We say that the sequence of distributions P_1, P_2, . . . over P_[1], P_[2], . . . is consistent if the distribution on P_[N] induced by P_M for M > N is P_N. Mathematically, this means that for every partition ρ ∈ P_[N], P_N(ρ) = Σ_{ρ′} P_M(ρ′), where the sum ranges over partitions ρ′ ∈ P_[M] whose restriction to [N] is ρ. In fact, the EP family ensures these properties hold in a limiting sense as N → ∞ (Pitman, 2006, p. 62).
We can develop an intuitive understanding of the EP family by examining how a random partition is generated sequentially, one record at a time. Let (P_n)_{n=1,2,...} be a sequence of EP random partitions where P_n is a random partition of [n]. We begin at step 1 with P_1 = {{1}} - i.e., a single record assigned to a single entity. The random partition at any later step n > 1 is generated conditional on the random partition P_{n−1} at step n − 1. Let K be the number of subsets (occupied entities) in P_{n−1} and n_j be the size of subset (entity) j in P_{n−1}. Then P_n is generated by assigning record n to:

• an existing subset (entity) j with probability (n_j − α) / (n − 1 + θ), or

• a "new" subset (entity) with probability (θ + Kα) / (n − 1 + θ),

where α and θ are EP parameters. This construction is known as a two-parameter Chinese Restaurant Process and is visualized in Figure 1.
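The sequential construction can be simulated directly. The following Python sketch (our illustration, using the generic parameter names alpha and theta) draws an Ewens-Pitman partition via the two-parameter Chinese Restaurant Process:

```python
import random

def sample_ep_partition(n, alpha, theta, rng=None):
    """Sequentially sample an Ewens-Pitman partition of [n] via the
    two-parameter Chinese Restaurant Process; returns a linkage structure."""
    rng = rng or random.Random(0)
    linkage = [1]      # record 1 starts in its own cluster
    sizes = {1: 1}     # cluster sizes n_j
    for i in range(2, n + 1):
        k = len(sizes)  # number of occupied clusters K
        labels = list(sizes)
        # Weight n_j - alpha for each existing cluster, theta + K*alpha for a new one.
        weights = [sizes[j] - alpha for j in labels] + [theta + k * alpha]
        choice = rng.choices(range(k + 1), weights=weights)[0]
        if choice == k:            # open a new cluster
            sizes[k + 1] = 1
            linkage.append(k + 1)
        else:                      # join an existing cluster
            j = labels[choice]
            sizes[j] += 1
            linkage.append(j)
    return linkage

part = sample_ep_partition(100, alpha=0.5, theta=1.0)
print(len(set(part)))  # number of entities among 100 records
```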
The allowable values of the EP parameters fall into two regimes depending on the sign of α:

• α < 0 and θ = −κα for some κ ∈ ℕ. We refer to this regime as the generalized coupon partitions, since they are closely related to the coupon-collector's partition (Pitman, 2006, p. 46). These partitions are generated by sampling with replacement from a finite population of size κ, where the mixing proportions are drawn from a symmetric Dirichlet distribution with concentration parameter −α. The coupon-collector's partition is obtained in the limit α → −∞.
• 0 ≤ α ≤ 1 with θ > −α. These are called Pitman-Yor partitions after Pitman and Yor (1997), and are generated by sampling with replacement from an infinite population. The resulting partitions demonstrate preferential attachment behavior. The special case α = 0 corresponds to the Ewens partition (Kingman, 1978).

Figure 1: Illustration of the sequential construction of a Ewens-Pitman random partition. At step n a record (black circle) is assigned to one of the K occupied entities or a new entity (grey circles) conditioned on the assignments of the previous n − 1 records. The probabilities assigned to the entities are given inside the grey circles and depend on the Ewens-Pitman parameters α and θ.
To illustrate the varying behavior of the random partitions as a function of α, we can examine the asymptotic number of subsets (entities) K in the partition as N → ∞. Pitman (2006, p. 70) shows that, almost surely, K → κ when α < 0; K / log N → θ when α = 0; and K / N^α → S when 0 < α < 1, where S is a strictly positive random variable. Thus, by varying α, we can encode a prior belief that the number of entities is asymptotically constant, logarithmic, or sub-linear in N.
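For the Ewens case (α = 0), the expected number of entities has a simple closed form, E[K] = Σ_{i=1}^{N} θ/(θ + i − 1), since record i opens a new cluster with probability θ/(θ + i − 1) in the sequential construction. A quick Python check (our illustration) of the logarithmic growth:

```python
import math

def expected_num_entities_ewens(n, theta):
    """E[K] under the Ewens partition (alpha = 0): record i opens a new
    cluster with probability theta / (theta + i - 1)."""
    return sum(theta / (theta + i - 1) for i in range(1, n + 1))

print(round(expected_num_entities_ewens(3, 1.0), 4))  # 1 + 1/2 + 1/3 = 1.8333
# E[K] / log N approaches theta as N grows, illustrating logarithmic growth.
print(round(expected_num_entities_ewens(10_000, 1.0) / math.log(10_000), 3))
```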

Graphical Bayesian ER
In this section, we propose a generative model for entity resolution that incorporates a latent population of entities, each with a set of unknown attributes. Our model employs the Ewens-Pitman class of priors on the linkage structure and a modified record distortion model that deviates from the common "hit-miss" model used by Tancredi and Liseo (2011), Steorts (2015) and Steorts et al. (2016). We provide an index of the model's variables and an illustration of their dependence structure in Figure 2. We close the section with a discussion of our model's hyperparameters, including recommendations about how to set these values when limited prior information is available.

Model Specification
Entities. We assume each entity j ∈ {1, 2, . . .} is associated with a tuple of attribute values y_j = (y_{j1}, . . . , y_{jA}), drawn independent and identically distributed (i.i.d.) from an unknown distribution G with support on D = D_1 × · · · × D_A. To improve tractability, we assume correlations between attributes are negligible, and place independent Dirichlet Process (DP) priors on each component of G = (G_1, . . . , G_A):

G_a | η_a, φ_a ~ DP(η_a, φ_a), independently for a = 1, . . . , A, (1)

where η_a > 0 is a concentration parameter and φ_a is a base distribution on domain D_a.

[Figure 2: index of the model's variables - indices over records, sources, attributes and entities; record attributes and their distortion indicators and propensities; each record's source and linked entity; the distortion distributions with their concentrations and base distributions; the distortion probabilities per source and attribute; the entity attributes and their distributions; the mixing proportions π; the Ewens-Pitman parameters; the attribute domains and distance measures - together with an illustration of their dependence structure.]
Links. Each record i ∈ {1, . . . , N} is linked to an entity λ_i ∈ {1, 2, . . .} which is assumed to be drawn from the population with replacement, according to unknown mixing proportions π = (π_1, π_2, . . .). This process induces a partition of the records into clusters according to their linked entities. Following the discussion about exchangeability in Section 2.2, we assume the partition is drawn from the Ewens-Pitman (EP) family with parameters (α, θ). The corresponding distribution on the mixing proportions π depends on the sign of α, or equivalently, whether the population of entities is finite or infinite.
For the finite regime (generalized coupon partitions) we let α < 0 and θ = −κα for some κ ∈ ℕ. Our model with hyperpriors on α and κ is as follows:

−α | a^(0), a^(1) ~ Gamma(a^(0), a^(1)),
κ | μ, v ~ ShiftedNegBin(μ, v),
π | α, κ ~ Dirichlet(κ_{−α}), (2)
λ_i | π ~ i.i.d. Discrete(π), i = 1, . . . , N,

where a^(0), a^(1), μ > 0 and 0 < v ≤ 1 are hyperparameters, and κ_{−α} is a vector of length κ with identical entries −α. The hyperprior on κ is a shifted negative binomial distribution with density defined in Appendix A.4.2.
In the infinite regime (Pitman-Yor partitions) the mixing proportions are drawn from a two-parameter Poisson-Dirichlet distribution (Pitman and Yor, 1997). Our model with hyperpriors on θ and α is as follows:

θ | t^(0), t^(1) ~ Gamma(t^(0), t^(1)),
α | b^(0), b^(1) ~ Beta(b^(0), b^(1)),
π | α, θ ~ PoissonDirichlet(α, θ), (3)
λ_i | π ~ i.i.d. Discrete(π), i = 1, . . . , N,

where t^(0), t^(1), b^(0), b^(1) > 0 are hyperparameters. Here we assume θ > 0 and 0 < α < 1, which is a subset of the admissible parameter space: 0 ≤ α ≤ 1 and θ > −α. We also consider the case where α = 0 is fixed, which corresponds to the Ewens partition.
Remark. By placing hyperpriors on the EP parameters, we can improve robustness to misspecified hyperparameters, which are difficult to set in a non-informative manner. Special cases of the above priors have been used in other ER models, albeit with fixed hyperparameters. Tancredi and Liseo (2011), Steorts (2015) and Steorts et al. (2016) used a coupon-collector's partition with α → −∞ and κ fixed, which was shown to be highly informative for the observed population size. Steorts et al. (2018) used a Pitman-Yor partition with α and θ fixed.

Sources.
We assume the data source s_i ∈ {1, . . . , S} associated with record i is drawn i.i.d. from a discrete distribution ξ over the sources {1, . . . , S}. There is no need to specify ξ since it is independent of the other model parameters, and the source indicators are assumed to be fully observed.
Distortion. We assume the attributes x_i for record i are generated by distorting the associated entity attributes y_{λ_i}. For simplicity, we assume the distortion process occurs independently for each attribute. To decide whether the a-th attribute is distorted, a binary indicator z_{ia} is drawn which depends on the distortion propensity β_{a y_{λ_i a}} scaled by a source/attribute-level factor p_{s_i a}. We place a Beta prior on p_{sa} and assume the distortion propensity β_{ay} is deterministic given the true attribute value y. Concretely, we have

p_{sa} | c^(0)_{sa}, c^(1)_{sa} ~ Beta(c^(0)_{sa}, c^(1)_{sa}), (4)
z_{ia} | p_{s_i a}, y_{λ_i a} ~ Bernoulli(p_{s_i a} β_{a y_{λ_i a}}),

where β_{ay} ∈ [0, 1] is a deterministic function of the distances from y to the other values in the domain D_a. The distortion propensity accounts for the fact that some entity attribute values y ∈ D_a are more likely to be distorted than others. It makes use of prior information in the attribute distance measure dist_a(w, y) (see Section 3.2). If y is not close to any other values in the domain, it is unlikely to be distorted and β_{ay} approaches zero. On the other hand, if y is close to at least one other value in the domain, distortion can occur and β_{ay} approaches one. This logic is not included in a similar model by Steorts (2015), which effectively assumes β_{ay} = 1.
After drawing the distortion indicator z_{ia}, record attribute x_{ia} is generated by copying the linked entity attribute y_{λ_i a} directly (if z_{ia} = 0) or subject to distortion (if z_{ia} = 1). If z_{ia} = 1, the distorted value is drawn from a distortion distribution F_{a y_{λ_i a}} associated with the linked entity attribute y_{λ_i a}. We assume F_{ay} itself is drawn from a Dirichlet Process:

γ_a ~ Gamma(g^(0), g^(1)), (7)
F_{ay} | γ_a, ψ_{ay} ~ DP(γ_a, ψ_{ay}). (8)
Summarizing this symbolically, we have

x_{ia} | z_{ia}, y_{λ_i a}, F ~ (1 − z_{ia}) δ(y_{λ_i a}) + z_{ia} F_{a y_{λ_i a}}, (9)

where δ(y) denotes a point mass at y. This is reminiscent of a hit-miss model (Copas and Hilton, 1990). However, our construction differs, in that the record and entity attributes are forbidden from matching (x_{ia} ≠ y_{λ_i a}) if the record attribute is distorted (z_{ia} = 1).
Remark. The hit-miss model of Copas and Hilton (1990) was designed for modeling distortion of continuous attributes. For continuous attributes, the probability of drawing the non-distorted value y from the miss component is zero, assuming F is described by a continuous density function. This ensures the record value is always distorted (x_{ia} ≠ y_{λ_i a}) if z_{ia} = 1. Our proposal replicates the same behavior for discrete attributes by ensuring ψ_{ay} has no mass on y. This is especially important if F_{ay} were to place significant mass on y, as the line between distorted and non-distorted values would become blurred. Apart from the modeling advantages, excluding y from the support of F_{ay} also makes inference more tractable as we can collapse F_{ay} (see Appendices A.2 and A.3).
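The forbidden-match behavior is easy to mimic in a forward simulation. In this Python sketch (our own illustration; a uniform draw over the remaining domain stands in for the DP-distributed distortion distribution), a distorted draw always excludes the true entity value:

```python
import random

def sample_record_value(y, domain, distort_prob, rng=None):
    """Draw a record attribute value given entity value y: with probability
    distort_prob, draw a distorted value from the domain EXCLUDING y
    (uniform here, standing in for the distortion distribution F);
    otherwise copy y exactly. Returns (value, distortion indicator)."""
    rng = rng or random.Random(0)
    if rng.random() < distort_prob:
        candidates = [w for w in domain if w != y]  # y is forbidden when distorted
        return rng.choice(candidates), 1
    return y, 0

domain = ["smith", "smyth", "smithe"]
value, distorted = sample_record_value("smith", domain, distort_prob=1.0)
print(value, distorted)  # a value different from "smith", with flag 1
```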

Choice of Hyperparameters
In this section, we provide recommendations for setting the hyperparameters in our model.

Distance Measures. Our proposed distortion model is parameterized by a set of distance measures {dist_a}, one for each attribute a ∈ {1, . . . , A}. They encode prior knowledge about the likelihood that a record attribute value w appears as a distorted alternative to an entity attribute value y. The larger the distance dist_a(w, y), the less likely w is a distortion of y. Since the likelihood of distorting y to w may not be the same as the likelihood of distorting w to y, we do not require that the distance measures are symmetric. We recommend selecting the distance measures carefully, leveraging prior knowledge about the distortion process where possible. For instance, one might select edit distance to model typographic distortion in a generic string-type attribute. For categorical attributes, one could select a constant distance function dist_a(w, y) ≡ 0, which encodes the prior belief that all values in the domain are equally likely as a distorted alternative to y.
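To make the role of the distance measures concrete, the following Python sketch (ours, for illustration) builds a normalized base distribution over distorted alternatives in the softmax form recommended below (Equation (10)); with a constant distance it reduces to the uniform distribution over the domain minus the true value:

```python
import math

def softmax_base(y, domain, dist):
    """Base distribution psi_{ay}(w) proportional to 1[w != y] * exp(-dist(w, y)):
    the true value y is excluded, and closer values receive more weight."""
    weights = {w: math.exp(-dist(w, y)) for w in domain if w != y}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# With a constant distance, the base distribution is uniform over the domain minus y.
uniform = softmax_base("a", ["a", "b", "c"], dist=lambda w, y: 0.0)
print(uniform)  # {'b': 0.5, 'c': 0.5}
```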
Distortion Base Distribution. We recommend using the distance measures to set the base distribution ψ_{ay}(w) in Equation (8). Specifically, we recommend a softmax distribution

ψ_{ay}(w) ∝ 1[w ≠ y] exp(−dist_a(w, y)), (10)

where the temperature parameter is absorbed in the definition of the distance measure, and the indicator function excludes y from the support. This places more weight on values in the domain closer to y and less weight on values further away. Unlike Steorts (2015), we do not include a factor proportional to the empirical frequency of w, as distorted values (e.g., typographical errors) tend to be infrequent for the applications we consider. For a categorical attribute with dist_a(w, y) ≡ 0, Equation (10) reduces to the uniform distribution on D_a excluding y. In this case, it may be appropriate to incorporate a factor proportional to the empirical frequencies by setting ψ_{ay}(w) ∝ 1[w ≠ y] freq_a(w), where freq_a(w) is the empirical frequency of w in the data.

Other Hyperparameters. In the absence of prior knowledge, we recommend setting the remaining hyperparameters to yield vague priors - i.e., priors that provide little information relative to the experiment (Tiao and Box, 1973; Bernardo and Smith, 2009). We note that there are different views in the Bayesian community about how to specify vague and/or uninformative priors. For more on this, we refer to Syversveen (1998) and Irony and Singpurwalla (1997). Putting aside such debates, our recommendations are as follows:

• For the shifted negative binomial prior on κ, we set μ and v so that the prior mean is N and the prior variance is N².
• For the beta prior on α, we set b^(0) = b^(1) = 1 to yield a flat prior.
• For the Dirichlet Process priors on the entity attribute distributions, we recommend setting η_a = 1 and using a uniform base distribution φ_a for all a.
• For the beta priors on p_{sa}, we encode a weak prior belief of low distortion by setting c^(0)_{sa} = 1 and c^(1)_{sa} = 4 for all s, a.
Remark. The hyperparameters can be varied to encourage over-linkage (linking records that do not correspond to the same entity) or under-linkage (failing to link records that correspond to the same entity). Since perfect linkage is not always possible, practitioners may have to decide whether over-linkage or under-linkage is preferred for a given application. We can encourage over-linkage in our model by setting:

• η_a ≪ 1 (prior belief of low diversity in attribute a among entities),

• t^(0) ≪ t^(1) and b^(0) ≪ b^(1) (prior belief of more links for the Pitman-Yor prior), or

• μ close to 0 and v close to 1 (prior belief of more links for the generalized coupon prior).
Similarly, we can encourage under-linkage by reversing the inequalities above. We measure the extent to which our model over- or under-links in our empirical evaluation (Section 5) using precision and recall metrics defined in Equations (12) and (13).
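Over- and under-linkage show up directly in pairwise precision and recall. The following Python sketch (ours, using a common pairwise formulation that may differ in detail from Equations (12) and (13)) computes precision, recall and F1 from predicted and true linkage structures:

```python
from itertools import combinations

def matched_pairs(linkage):
    """Set of record index pairs assigned to the same entity."""
    return {(i, j) for i, j in combinations(range(len(linkage)), 2)
            if linkage[i] == linkage[j]}

def pairwise_metrics(predicted, truth):
    """Pairwise precision, recall and F1 of a predicted linkage structure."""
    pred, true = matched_pairs(predicted), matched_pairs(truth)
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(true) if true else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Over-linking: records 3 and 4 are merged with 1 and 2, although distinct entities.
p, r, f1 = pairwise_metrics([1, 1, 1, 4], [1, 1, 2, 3])
print(p, r, f1)  # low precision, perfect recall
```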

Inference
To perform entity resolution using our model, we must find the posterior distribution over the linkage structure conditional on the observed record attributes and their sources. Since the posterior is not analytically tractable, we propose an approximate inference scheme based on Markov chain Monte Carlo (MCMC).
MCMC produces approximate samples from the posterior distribution by constructing a Markov chain whose equilibrium distribution matches the posterior distribution. The samples produced by MCMC are approximate in the sense that they are autocorrelated, and they may only match the equilibrium (posterior) distribution asymptotically. Various algorithms exist within the MCMC framework -we refer the reader to Gamerman and Lopes (2006) or Brooks et al. (2011) for an introduction to the field.
In this paper, we use an MCMC algorithm called partially collapsed Gibbs (PCG) sampling (van Dyk and Park, 2008). It is a generalization of Gibbs sampling that reduces the extent of conditioning in the variable updates by collapsing (marginalizing out) variables and/or updating variables in groups. This can significantly improve convergence and reduce autocorrelation, so long as prescribed rules are followed to ensure the equilibrium distribution of the Markov chain is preserved.
Ideally, we would like to reduce the extent of conditioning as much as possible; however, this must be balanced with computational and mathematical constraints. In our proposed sampling scheme, we fully collapse the entity mixing proportions π and the distortion distributions F_{ay}. We partially collapse the distortion indicators z_{ia} in a joint update for the entity attributes y_j and for the distortion distribution concentration γ_a. By collapsing the mixing proportions, we obtain an urn-based scheme for updating the linkage structure similar to those used for nonparametric mixture models (Neal, 2000). In the remainder of this section, we highlight some less trivial aspects of inference - full details are provided in Appendix A.

Nonconjugacy
While we attempted to maintain conjugacy in our model, we were unable to avoid nonconjugate priors in some cases. This complicates inference, as the posterior conditional distributions used in Gibbs sampling are no longer of a standard form. There are several well-established methods for dealing with nonconjugacy, including Metropolis-Hastings algorithms (Chib and Greenberg, 1995), rejection sampling (Gilks and Wild, 1992) and auxiliary variable methods (Damlen et al., 1999). We opt to use auxiliary variable methods owing to their simplicity, as there is no need to design proposals or monitor acceptance rates.
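To illustrate the auxiliary variable idea in its simplest form, here is a generic univariate slice sampler in Python (our own illustration, not the specific updates derived in Appendix A): an auxiliary level u ~ Uniform(0, pdf(x)) is introduced, so no proposal design or acceptance monitoring is needed.

```python
import math
import random

def slice_sample(logpdf, x0, n_samples, width=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage: an auxiliary
    variable method requiring only pointwise evaluation of the log density."""
    rng = rng or random.Random(0)
    x, out = x0, []
    for _ in range(n_samples):
        # Auxiliary variable: slice level u ~ Uniform(0, pdf(x)), on the log scale.
        log_u = logpdf(x) - rng.expovariate(1.0)
        # Step out an interval until both ends fall outside the slice.
        lo = x - width * rng.random()
        hi = lo + width
        while logpdf(lo) > log_u:
            lo -= width
        while logpdf(hi) > log_u:
            hi += width
        # Shrinkage: sample within the interval until a point lies inside the slice.
        while True:
            x_new = rng.uniform(lo, hi)
            if logpdf(x_new) > log_u:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out.append(x)
    return out

# Target: standard normal (up to a constant); the sample mean should be near 0.
draws = slice_sample(lambda x: -0.5 * x * x, x0=0.0, n_samples=2000)
print(round(sum(draws) / len(draws), 2))
```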
There are three sets of parameters in our model for which nonconjugacy is an issue:

1. The distortion probabilities p_{sa} defined in Equation (4), where the incorporation of the distortion propensities breaks the conjugacy of the beta prior. We propose an auxiliary variable sampling scheme to update p_{sa} in Appendix A.1.

2. The EP parameters: θ and α defined in Equation (3), or α and κ defined in Equation (2), depending on the regime. We use an auxiliary variable scheme proposed by Teh (2006) to update θ and α under a gamma and beta prior, as summarized in Appendix A.4.1. We design an auxiliary variable update for α and κ under a gamma and shifted negative binomial prior in Appendix A.4.2.

3. The distortion distribution concentration γ_a defined in Equation (7). We design an auxiliary variable update for γ_a in Appendix A.5.

Collapsing the Distortion Indicators

Marchant et al. (2021) demonstrated the importance of collapsing the distortion indicators {z_{ia}} to improve convergence/mixing for a hit-miss model similar to Equation (9). The posterior factors involving z_{ia} factorize over i and a, so that collapsing yields:

x_{ia} | y_{λ_i a}, p, F ~ (1 − p_{s_i a} β_{a y_{λ_i a}}) δ(y_{λ_i a}) + p_{s_i a} β_{a y_{λ_i a}} F_{a y_{λ_i a}}.

We use this result to implement a collapsed update for the entity attributes {y_j}. While it is possible to implement a collapsed update for the linkage structure {λ_i}, we opt not to do so, since conditioning on the distortion indicators allows us to reduce computational complexity via indexing (see Section 4.3). This seems to be more efficient empirically (Marchant et al., 2021), so long as the level of distortion is not too high.

Computational Considerations
We now discuss ways of improving the computational complexity. The main bottleneck is the update for the linkage structure, which scales naïvely as O(N · E), where E is the number of instantiated entities. The update for the entity attributes may also be problematic for large domains D_a, as it scales as O(E · |D_a|) for the a-th attribute.
We are able to reduce the computational complexity of the linkage structure update by exploiting constraints imposed by the distortion model. Close inspection of the update for the entity linked to a given record (see Appendix A.3) reveals that some entities can be immediately excluded from consideration. Specifically, only those entities whose attributes match the corresponding nondistorted record attributes (those whose distortion indicator equals 0) may be linked to the record. In order to efficiently query this set of entities, we maintain inverted indices that map an attribute value in D to the set of entities instantiated with that value. This approach is considerably more efficient than iterating over all entities sequentially, so long as the level of distortion is relatively low. However, it is important to note that it relies crucially on not collapsing the distortion indicators.
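To make the indexing scheme concrete, here is a minimal Python sketch of an inverted index and a candidate-entity query. All class and function names are illustrative; they are not taken from the paper's software.

```python
from collections import defaultdict

class InvertedIndex:
    """Maps an attribute value to the set of entity ids instantiated with
    that value, so candidate entities for a record can be found without
    scanning all entities."""

    def __init__(self):
        self._index = defaultdict(set)

    def add(self, value, entity_id):
        self._index[value].add(entity_id)

    def remove(self, value, entity_id):
        self._index[value].discard(entity_id)
        if not self._index[value]:
            del self._index[value]

    def entities_with(self, value):
        return self._index.get(value, set())

def candidate_entities(record, distorted, indices):
    """Intersect, over all nondistorted attributes, the sets of entities
    whose attribute value matches the record's value. Returns None if no
    attribute constrains the match (i.e. everything is distorted)."""
    candidates = None
    for attr, value in record.items():
        if distorted[attr]:  # distorted attributes impose no constraint
            continue
        matches = indices[attr].entities_with(value)
        candidates = matches.copy() if candidates is None else candidates & matches
    return candidates
```

Maintaining the indices adds constant-time work to each attribute update, in exchange for avoiding a full scan over entities during the linkage structure update.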
To improve the complexity of the entity attribute update, we can impose a cut-off on the distance measures. (When stating time complexities in this section, we assume a categorical random variate can be drawn in time linear in the number of categories; the algorithm proposed by Vose (1991) satisfies this constraint.) Concretely, for each attribute we replace the "raw" distance measure dist by a truncated version that assigns infinite distance whenever dist exceeds a configurable cut-off in (0, ∞). This approximation eliminates the need to consider unlikely distortions from an entity attribute value to a record attribute value whose distance exceeds the cut-off. It plays a similar role to blocking in the record linkage literature (Christen, 2012b) and resembles an approach proposed by Marchant et al. (2021). To make use of this approximation, we build indices that can efficiently answer range queries, one for each attribute. The index for an attribute takes a query value in D and returns the set of entity attribute values whose distance from the query falls below the cut-off.
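The truncated distance and the range-query index can be sketched as follows. This is a brute-force illustration over a small domain; a real implementation would use a metric index rather than a linear scan, and the names here are ours, not the paper's.

```python
def truncated(dist, cutoff):
    """Wrap a raw distance measure so values above the cut-off are treated
    as infinite, i.e. the corresponding distortions are never considered."""
    def d(x, y):
        raw = dist(x, y)
        return raw if raw <= cutoff else float("inf")
    return d

class RangeIndex:
    """Answers 'which domain values lie within the cut-off of a query
    value?' by a linear scan over the attribute domain."""

    def __init__(self, domain, dist, cutoff):
        self.domain = list(domain)
        self.dist = dist
        self.cutoff = cutoff

    def within_cutoff(self, query):
        return {v for v in self.domain if self.dist(query, v) <= self.cutoff}
```

For example, with absolute difference on birth years and a cut-off of 2, a query for 1982 returns only the years 1980 through 1984, so the update never considers distortions to distant years.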

Model Comparisons
We conduct an empirical study of our ER model using data sets for which the true linkage structure is known. Section 5.1 describes the data sets used in the study, which are motivated by ER applications in private and non-private settings. We explain how our model (and baseline models) are evaluated in Section 5.2, by computing metrics that assess how well the posterior predictions align with the true linkage structure. Section 5.3 assesses the impact of our modeling contributions by varying the distortion model and the prior on the linkage structure. Section 5.4 compares our ER model against baselines proposed by Sadinle (2014) and Steorts (2015). Finally, Section 5.5 summarizes a controlled simulation study that can be found in Appendix C.

Data Sets
We study entity resolution in private and non-private settings, both of which are encountered by practitioners. The data sets we use in our study are summarized in Table 1.
Private Setting. In this setting the practitioner has access to de-identified data, where sensitive attributes such as names, addresses, phone numbers, etc. are removed. This can make ER quite challenging, as the remaining non-sensitive attributes may carry limited information about the identity of records. To study ER in this setting, we use data extracted from the National Long Term Care Survey (Manton, 2010), which we refer to as nltcs.
Our extract contains de-identified respondent records from the 1982, 1989 and 1994 waves of the survey in the U.S. state of Alabama. We use all of the available attributes for ER, which include date of birth (DOB_YEAR, DOB_MONTH, DOB_DAY), registration office (REGOFF) and sex (SEX). Since the data is well-curated, the only distortion that can occur is when a valid attribute value is replaced by another valid attribute value. We, therefore, model the attributes as categorical by employing a constant distance function.
Non-private Setting. In this setting, the practitioner has access to data with sensitive attributes, such as names. We assume unique identifiers such as social security numbers are not available, as ER would otherwise be trivial. Obtaining real survey data for a non-private setting with ground truth is challenging, so we use three publicly-available data sets from the ER literature. Although these data sets cover other domains, they exhibit characteristics one would expect to encounter in real survey data: namely, multiple "name-like" attributes, as well as different levels of variation and distortion. Below, we provide a brief description of each data set and the attributes used for ER:
• RLdata is a synthetic person data set, where 10% of the records are duplicates with random errors (Sariyar and Borg, 2010). We model the name attributes (fname_c1 and lname_c1) using the normalized Levenshtein distance measure. The attributes related to date of birth (bd, bm and by) are modeled as categorical attributes with a constant distance measure.
• cora is a collection of computer science citation records hosted on the RIDDLE repository (Bilenko, 2003). It is the "dirtiest" of all the data sets we consider, as it was extracted from various online sources with different citation styles. As a pre-processing step, we separate hyphenated words and remove punctuation. We also correct several erroneous ground truth labels. The title, venue and authors attributes generally contain multiple words with semantic and character-level variations, and are therefore modeled using a hybrid token/edit distance measure described in Appendix B. The year attribute is modeled using normalized Levenshtein distance.
• rest is a collection of restaurant records from the Fodor and Zagat restaurant guides hosted on the RIDDLE repository (Bilenko, 2003). It is not as "dirty" as cora as there are fewer sources and less variation between them. We applied the same pre-processing steps as for cora. The name and addr attributes generally contain multiple words and are therefore modeled using the same hybrid distance measure as for cora. The city and type (cuisine) attributes are modeled as categorical with a constant distance measure.
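As a concrete reference point, the two basic distance measures used above can be written in a few lines of Python. This is a standard dynamic-programming edit distance, not the paper's implementation.

```python
def constant_distance(x, y):
    """Distance for categorical attributes: 0 iff the values agree."""
    return 0.0 if x == y else 1.0

def levenshtein(s, t):
    """Standard dynamic-programming (Wagner-Fischer) edit distance."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(s, t):
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not s and not t:
        return 0.0
    return levenshtein(s, t) / max(len(s), len(t))
```

For instance, "smith" and "smyth" differ by one substitution out of five characters, giving a normalized distance of 0.2.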

Model Evaluation
We evaluate an ER model on a data set by comparing the inferred linkage structure Λ̂ to the true linkage structure Λ. Recall that Λ = (λ_1, . . . , λ_n) specifies the corresponding entity for each record in the data set. The agreement between Λ̂ and Λ can be measured using pairwise precision and recall. The pairwise precision is the proportion of record pairs linked in Λ̂ that are also linked in Λ:

precision(Λ̂, Λ) = |pairs(Λ̂) ∩ pairs(Λ)| / |pairs(Λ̂)|. (12)

It takes on values from 0 to 1, where larger values indicate fewer false positive errors. The pairwise recall is the proportion of record pairs linked in Λ that are also linked in Λ̂:

recall(Λ̂, Λ) = |pairs(Λ̂) ∩ pairs(Λ)| / |pairs(Λ)|. (13)

It takes on values from 0 to 1, where larger values indicate fewer false negative errors. It is rarely possible to achieve both high precision and high recall; one must usually make a trade-off depending on which types of errors are more costly in a given application. If precision and recall are equally important, then one can measure the agreement between Λ̂ and Λ using the pairwise F1 score, which is the harmonic mean of precision and recall:

F1(Λ̂, Λ) = 2 · precision · recall / (precision + recall). (14)

Since the models in our study are Bayesian, the inferred (posterior) linkage structure Λ̂ is a random variable, and the metrics in Equations (12)-(14) can be regarded as random variables as well. We estimate the distribution of the metrics under the posterior using samples generated via MCMC. In doing so, we are able to account for posterior uncertainty in our evaluation. For each metric, we report a point estimate using the median, along with a 95% equi-tailed credible interval. Further details about the MCMC implementation and configuration for each model are provided in Appendix E, and MCMC diagnostics are in Appendix G.
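The evaluation pipeline can be sketched directly from these definitions. In the sketch below, a linkage structure is represented as a list mapping each record index to an entity label; the function names are ours, not the paper's software.

```python
from itertools import combinations
from statistics import quantiles, median

def linked_pairs(linkage):
    """All unordered record pairs assigned to the same entity."""
    return {(i, j) for (i, j) in combinations(range(len(linkage)), 2)
            if linkage[i] == linkage[j]}

def pairwise_metrics(predicted, truth):
    """Pairwise precision, recall and F1 score for a predicted linkage
    structure against the ground truth."""
    pred, true = linked_pairs(predicted), linked_pairs(truth)
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(true) if true else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def posterior_summary(samples):
    """Median point estimate and 95% equi-tailed credible interval from
    metric values computed on posterior (MCMC) samples."""
    qs = quantiles(samples, n=40, method="inclusive")  # 2.5% steps
    return median(samples), (qs[0], qs[-1])
```

Applying `pairwise_metrics` to each MCMC sample of the linkage structure and summarizing with `posterior_summary` reproduces the median-plus-credible-interval reporting described above.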

Study of Linkage Structure Priors and Distortion Model
In this section, we study the effect of two modeling contributions proposed in Section 3.1 -the Ewens-Pitman (EP) linkage structure priors and the refined distortion model. Our objective is to determine the impact of each modeling contribution in isolation, using the blink model as a baseline. We summarize the results here and refer the reader to Appendix D for comprehensive results covering eight combinations of linkage structure priors and distortion models.
Linkage Structure Priors. We consider four linkage structure priors: 1. PY: Pitman-Yor regime with hyperpriors on the EP parameters. 2. Ewens: Ewens regime with hyperpriors on the EP parameters. 3. GenCoupon: generalized coupon regime (negative discount parameter) with hyperpriors on the EP parameters, as detailed in Equation (2). 4. Coupon: a particular instance of GenCoupon where the EP parameters are fixed. (In the results below, under-linkage is observed for cora, which is likely due to significant noise that is not well-captured by the distortion model.)
The first three priors are flexible, in the sense that hyperpriors are placed on the EP parameters. The last prior is a particular instance of GenCoupon where the EP parameters are fixed. It is used in models by Tancredi and Liseo (2011), Steorts (2015) and Steorts et al. (2016), and serves as a baseline here.
ER evaluation metrics are presented in Table 2 for the four linkage structure priors, assuming the rest of the model follows the specification in Section 3.1. Another perspective on ER accuracy is provided in Figure 3, which plots the relative error in the inferred number of entities. Both results demonstrate the benefit of placing hyperpriors on the EP parameters, as is done for PY, Ewens and GenCoupon. These linkage structure priors achieve superior F1 scores compared to Coupon, where the EP parameters are fixed. Figure S8 (Appendix D) is consistent with this finding, demonstrating that vastly different values of the EP parameters are inferred for each data set when hyperpriors are used. Another interesting observation is that ER accuracy is relatively similar among PY, Ewens and GenCoupon. This was unexpected at first, given that the three parameter regimes are known to exhibit distinct asymptotic behavior (see Equation (1)). This suggests all three regimes may be flexible enough to model the linkage structure of the data sets we consider here. It would be interesting to see whether these observations translate to much larger data sets, where the asymptotic behavior of the three regimes would become more apparent.
Distortion Model. We compare our proposed distortion model (specified in the latter part of Section 3.1) to the distortion model proposed by Steorts (2015). For brevity, we refer to our distortion model as Ours and that of Steorts (2015) as blink. Here, we report results for the GenCoupon linkage structure prior; the results for the other linkage structure priors are reported in Appendix D and exhibit similar trends. Figure 4 plots the inferred level of distortion for each attribute under both distortion models. It shows that blink tends to encourage high distortion, particularly for attributes with non-constant distance measures. (The attributes modeled with non-constant distance measures are: all attributes for cora, name and addr for rest, and fname_c1 and lname_c1 for RLdata.) For example, the fname_c1 and lname_c1 attributes for RLdata are predicted to be almost 100% distorted under the blink distortion model, which is inconsistent with expectations for this data set. Our distortion model does not appear to suffer from this problem, as it requires disagreement between entity and record attributes in order to classify them as "distorted". Since high distortion makes reliable linkage more challenging, we expect that our distortion model is likely to perform better in practice. Indeed, it achieves a better balance between precision and recall in our full results (see Figure S7 in Appendix D).

Table 2: Posterior evaluation metrics for our model under four linkage structure priors corresponding to distinct Ewens-Pitman (EP) parameter regimes. A point estimate for each evaluation metric is reported based on the median, along with a 95% equi-tailed credible interval. Similar performance is observed for the three regimes where the EP parameters are permitted to vary (PY, Ewens and GenCoupon). A significant drop in performance is observed for the Coupon regime on RLdata and cora.

Figure 4 (caption excerpt): The blink distortion model tends to favor higher levels of distortion, in some cases approaching 100 percent, which is not consistent with expectations.
Summary. We return to the original goal of this section and summarize what we have learned in this study. First, we have learned that our proposed linkage structure prior is generally more robust due to the use of hyperpriors. In addition, our inferences are relatively insensitive to the EP parameter regime, which may be due to the fact that the data sets are relatively small in size. This behavior also holds for the blink distortion model when combined with our proposed linkage structure priors. Second, when studying the performance of the distortion models (blink versus our proposed distortion model), we find that ours predicts more reasonable distortion rates and improves the linkage accuracy as measured by F1 score. Thus, based on this study, we would recommend our distortion model and linkage structure priors moving forward for data sets similar to those we have considered. However, we stress that further exploration is needed to provide more general recommendations for other data sets.

Comparison with Baseline Models
In this section, we study how our entity resolution model performs in comparison with the models of Steorts (2015) and Sadinle (2014). The blink model by Steorts (2015) is a natural baseline to consider, as it served as inspiration for our model. Compared to our model, blink is less Bayesian in its design, as many of its parameters are set empirically or arbitrarily. Both our model and blink are examples of direct modeling approaches to ER, i.e., they model how the observed records are generated, incorporating the linkage structure as a latent variable. In contrast, the model by Sadinle (2014) (which we refer to as Sadinle) adopts a comparison-based approach to ER. Instead of modeling the data directly, it models attribute-level comparisons between pairs of records, incorporating the presence/absence of a link between each pair as a latent variable. Sadinle (2018) compares direct and comparison-based approaches from a methodological perspective; however, we are not aware of any empirical comparisons in the literature. Our goal in this section is to provide such a comparison for the first time on a variety of data sets, while making no strong claims that our results generalize to all applications or data sets.
In order to make the comparison as fair as possible, we use the same distance functions to model the distortion in our model, blink, and Sadinle. For instance, if we use edit distance to model distortion for a name attribute in our model and blink, then we also use edit distance to compare the same name attribute in Sadinle. We set the distance cut-offs for our model and blink (see Section 4.3) to align with the blocking design used for Sadinle. Further information about our experimental setup is provided in Appendix E.
ER evaluation metrics are presented in Table 3 for all three models. For simplicity, we only provide results for our model under the GenCoupon prior, which is denoted Ours in the table. Our model achieves the highest (or equal-highest) F1 score for all four data sets. We expect the poorer performance of blink is due to its use of subjective (inflexible) priors and its distortion model, which tends to favor high distortion and over-linkage. Sadinle achieves the second-highest F1 score in the non-private setting (RLdata, cora, rest), and the lowest F1 score in the private setting (nltcs). The poorer performance for nltcs may be partly related to the blocking scheme, which is less aggressive, leaving the model more susceptible to over-linkage. Another important factor is the sensitivity of Sadinle to the truncation points for the priors on the match probabilities. We perform coarse-grained tuning of the truncation points in Appendix F; fine-grained tuning could result in additional performance gains.

Summary.
This study provides evidence that our model achieves a better balance between precision and recall than blink and Sadinle. We stress that our results are based on four data sets; further experimentation is required to determine whether our results generalize to other data sets and applications. We speculate that the better performance of our model is mainly due to improved flexibility resulting from the addition of priors and hyperpriors, which can be viewed as performing model selection.

Table 3: Posterior evaluation metrics for our model (Ours), blink (Steorts, 2015) and Sadinle (Sadinle, 2014). A point estimate for each evaluation metric is reported based on the median, along with a 95% equi-tailed credible interval. Our model achieves the highest (or equal-highest) F1 score within the credible intervals for all data sets.

Controlled Simulation Study
We conduct a simulation study to evaluate our model under controlled conditions, where we vary the size of the data set, the level of distortion, and the level of duplication. Due to space constraints, we summarize the study here; full details can be found in Appendix C. We design a simulator for household survey data sets, where responses are collected for individuals within households. Since the attributes of individuals within a household are dependent (e.g., the address is the same, family members may share the same last name, and the ages of individuals may be correlated), the simulated data follows a more complex generative process than our entity resolution model. This is intentional, as it allows us to evaluate our model in a more realistic setting where it is misspecified for the data. The data set simulator also incorporates a record generation process that is misspecified for our model: rather than sampling individuals from the population, it iterates over all individuals, randomly deciding whether to include each individual and, if so, how many distorted records to create.
We run entity resolution using our model on 16 simulated data sets, using the blink and Sadinle models as baselines. We summarize the results across three factors below:
• Duplication level. The level of duplication has minimal impact on the performance of our model. blink performs well for moderate to high levels of duplication; however, it over-links severely when the level of duplication is low. The performance of Sadinle does not seem to follow a consistent trend as the duplication varies; it achieves a lower F1 score than our model and blink in all cases.
• Distortion level. We find the more distorted data sets are more difficult to link. Specifically, we observe a drop in recall of around 10 percentage points for our model and blink when compared to the data sets with low distortion. Larger drops in recall of 15-20 percentage points are observed for Sadinle.
• Data set size. We find that our model performs similarly for both data set sizes (1000 and 10000 records). blink also performs similarly in most scenarios, however, we observe a drop in precision for the larger data set when the level of duplication is low. Sadinle performs significantly worse for the larger data sets in terms of precision, however, the recall is relatively stable.
In summary, the simulation study shows that our model achieves the most consistent performance across all scenarios tested. blink is also competitive, but it has poor performance in the low-duplication scenario. Sadinle achieves the lowest F1 score when the level of duplication is non-negligible, and is somewhat competitive when the level of distortion is low.

Discussion
In this section, we summarize our contributions and provide a discussion regarding future work. We have proposed a Bayesian model for entity resolution that addresses limitations of previous work (Steorts et al., 2016;Steorts, 2015;Marchant et al., 2021). Our model can be viewed as performing graphical entity resolution, where observed records are clustered to (unobserved) latent entities. To improve upon the scalability of previous work, we designed a partially collapsed Gibbs sampler with an optimized implementation that can handle data sets of around 10,000 records. This allowed us to provide comparisons with models by Steorts (2015) and Sadinle (2014), which was previously only possible for toy-sized data sets. We provided comparisons to real and synthetic data sets and a controlled simulation study. We observed that our model tends to be less sensitive to changes in the hyperparameters than competing models for the data sets considered. Further analysis is required to make more general conclusions beyond the data sets and simulations considered in this paper.
There are many potential avenues for future work. First, it would be of interest to explore scaling for our proposed model and the model by Sadinle (2014). This could be achieved by designing parallel/distributed inference algorithms, by investigating more efficient MCMC algorithms, or by resorting to blocking techniques. Another area of interest is exploring more diverse data sets to understand the strengths and weaknesses of each method in practice. Although we made recommendations based on four data sets and a simulation study, more comparisons would help to alleviate any selection bias regarding data sets and provide guidance to users. Finally, future work could consider microclustering priors (Miller et al., 2015) to assess their effectiveness compared to the infinitely exchangeable linkage structure priors considered here. This would require modifying the sampling scheme and selecting an appropriate class of microclustering priors.

A Gibbs Updates
In this appendix, we derive updates for the partially collapsed Gibbs sampler used to perform approximate inference for the ER model introduced in Section 3. Some of the updates are non-trivial due to non-conjugacy of the proposed model.

A.1 Update for the Distortion Probabilities
In this section, we provide the update for the distortion probability associated with each source and attribute. This update is complicated by the presence of the distortion propensity variables, which break the conjugacy of the beta prior. To overcome this problem, we introduce auxiliary variables and modify the conditional distribution for the distortion indicators accordingly. It is straightforward to show that one recovers the original model in Equation (5) when the auxiliary variables are marginalized out.
Observe that the contribution to the posterior involving is Thus, the distribution of conditional on the other variables is: Next, observe that the contribution to the posterior involving is Hence, the distribution of conditional on the other variables is: It is also straightforward to see that the distribution of conditional on the other variables is a point mass. In particular, we have In summary, to update the distortion probabilities, one would first compute the distortion indicators { } using Equation (S17). Then, conditional on the other variables, one would draw auxiliary variables { } using Equation (S15). Finally, one can update the distortion probabilities { } using Equation (S16). The updates for the other model parameters are unaffected by the introduction of the auxiliary variables { }.

A.2 Update for the Entity Attributes
In this section, we provide the update for the entity attributes. When updating entity attribute , we collapse the base distribution and distortion indicators Z.
The posterior factors involving after collapsing are as follows: where B(·) is the multivariate beta function, V = : = { } \ { } are the distorted record values for the -th attribute associated with entity , ( ) = : = [ = ] is the number of records linked to entity whose -th attribute is equal to and¯( ) = : = [ ≠ ] is the number of records linked to entity whose -th attribute is not equal to .
We can rewrite the above expression in a more computationally convenient form by repeatedly applying the recurrence relation for the Gamma function to yield: Observe that the above distribution may only have support on a subset of the full domain D when distance thresholds are applied, as discussed in Section 4.3. This fact can be used to implement the update more efficiently, since it is not necessary to construct a pmf over the entire domain D.

A.3 Update for the Linkage Structure
In this section, we provide the update for the linkage structure. When updating the linkage structure, we use an urn-based scheme as described by Neal (2000). In doing so, we only need to keep track of entities in the population that are linked to records -any isolated entities not linked to records are ignored. This is important, as the population may be infinite in size for some Ewens-Pitman parameter regimes (when ≥ 0).
To update the linked entity for record i, we remove the current link and allow the record to either join one of the remaining instantiated entities (those with at least one other record) or instantiate a "new" entity. The conditional distribution has the following form: where
• the leading factor is a normalization constant;
• λ_{-i} = (λ_1, . . . , λ_{i-1}, λ_{i+1}, . . . , λ_n) are the linked entities for all records excluding record i;
• ℍ_0 is the prior for the entity attribute; and
• ℍ_{-i} is the posterior for the entity attribute given the observed record attributes that are distorted and linked to the entity in question.
(Repeated application of the recurrence relation for the Gamma function yields Γ(z) = Γ(z + n + 1)/(z(z + 1) · · · (z + n)) for complex z, excluding zero and the negative integers, and non-negative integer n.)
Recall that the prior ℍ 0, for conditioned on and is Dirichlet( ψ ( )). Since is Categorical( ) if = 1 (and a point mass at if = 0), the posterior ℍ − , is also Dirichlet by conjugacy. In particular, one can show that for ∈ D \ { }. We can therefore simplify the integral in Equation (S18) with respect to ℍ − , as follows: By a similar argument, the integral in Equation (S18) with respect to ℍ 0, can be simplified to: Putting Equations (S19) and (S20) in (S18), then gives:

A.4 Update for the Ewens-Pitman Parameters
In this section, we describe the update for the Ewens-Pitman parameters. Since the priors on the Ewens-Pitman parameters and are non-conjugate, we cannot perform a direct Gibbs update. Thus, we describe tractable updates which require the introduction of auxiliary variables. The updates (and priors) differ depending on the range of . Teh (2006) proposed a scheme for beta/gamma priors when 0 ≤ < 1 and > 0, which is summarized in Section A.4.1. In Section A.4.2 we propose a similar scheme for gamma/shifted negative binomial priors when < 0.

A.4.1 Case 0 ≤ < 1 and > 0
Teh (2006) proposed an auxiliary variable scheme for the regime 0 ≤ < 1 and > 0 such that the priors ∼ Beta (0) , (1) and ∼ Gamma (0) , (1) are conjugate. We provide a summary of the scheme here, but refer the reader to (Teh, 2006) for further details. The scheme introduces the following sets of auxiliary variables conditional on the two parameters and : Here = |{ : = }| denotes the number of records linked to entity , and = [ > 1] denotes the number of entities linked to at least one record.
It follows that the posterior distributions of and conditional on the auxiliary variables and other model parameter are given by: Thus, to update and , one would first draw auxiliary variables , { } and { } conditional on the linkage structure and the old values of and using Equation (S21). Then, conditional on the auxiliary variables and the linkage structure, one would draw new values for and using Equation (S22).

A.4.2 Case < 0 and = for ∈ ℕ
We describe an auxiliary variable scheme for updating the Ewens-Pitman parameters in the regime < 0 and = , where ∈ ℕ and > 0. This scheme is inspired by Teh (2006). The likelihood factor associated with the partition of records into entities is as follows (Pitman, 2006): where "partition config" is a representation of the linkage structure as a partition , is the number of records linked to the -th entity, ( ) ↑ = is the falling factorial. We begin by expressing the denominator in this equation as which allows us to introduce the following auxiliary variable: Expressing each of the latter factors in Equation (S23) as 1− permits us to introduce the following additional auxiliary variables: With this representation, we can place conjugate priors on and , namely: ∼ Gamma( (0) , (1) ) and ∼ NegativeBinomial( , ) + 1.
The distribution on is a shifted negative binomial with support on the positive integers. The parameterization we adopt for the negative binomial is in terms of the number of failures ∈ {0, 1, 2, . . .} in a sequence of trials before a given number of successes > 0 occur. Each trial is an i.i.d. draw from a Bernoulli distribution with success probability . The density of is given by Finally, we combine the priors in Equation (S26) with the likelihood factors to obtain the following posterior distributions for the and , conditional on the other model parameters: Thus, to update and , one would first draw auxiliary variables and { } conditional on the linkage structure and the old values of and using Equations (S24) and (S25). Then, conditional on the auxiliary variables and the linkage structure, one would draw new values for and using Equation (S27).
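The shifted negative binomial prior described above can be sampled directly by simulating Bernoulli trials, matching the failures-before-successes parameterization. This is an illustrative sketch with our own names, not the paper's software.

```python
import random

def shifted_negative_binomial(num_successes, p, rng):
    """Draw r ~ NegativeBinomial(num_successes, p) + 1 by counting the
    i.i.d. Bernoulli(p) failures observed before num_successes successes
    occur, then shifting by one so the support is {1, 2, ...}."""
    failures = successes = 0
    while successes < num_successes:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures + 1
```

Direct simulation like this is adequate for a prior draw; a Gibbs update would instead sample from the conjugate posterior given the auxiliary variables.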

A.5 Update for the Distortion Distribution Concentration
In this section, we provide the update for the distortion distribution concentration . Since we cannot rely on conjugacy for the update, we propose an auxiliary variable scheme. When updating , we condition on the entity attribute values { } =1... , the record attribute values { } =1... and the links = { } =1... . We collapse the distortion distributions { } =1... . The contribution to the likelihood involving is: for all . We can also express each of the latter factors in Equation (S28) as which permits us to introduce the following auxiliary variables: for all , ∈ V and ∈ {1, . . . , ( )}.

Now, since the prior on the concentration parameter is gamma, we obtain the following posterior distribution conditional on the other parameters: Thus, to update the concentration, one would first draw the auxiliary variables conditional on the record attributes X, entity attributes Y, linkage structure, and the previous value of the concentration using Equations (A.5) and (S29). Then, conditional on the auxiliary variables and X, Y and the linkage structure, one would draw a new value for the concentration using Equation (S30).

B Hybrid Distance Measure
In this appendix, we describe a hybrid distance measure that is useful for comparing text strings containing multiple tokens (words), where individual tokens may be subject to distortion. We use the measure in this paper for comparing name and address attributes in the cora and rest data sets (see Section 5.1), however it may have wider applications beyond this paper. Our measure draws inspiration from a hybrid similarity measure proposed by Monge and Elkan (1996). However, unlike Monge and Elkan, we attempt to match the tokens in each string while incorporating penalties for tokens that are "missing" in one of the strings.
Suppose we would like to compare a pair of multi-token strings a and b. As a running example, we consider a = "University of California, San Diego" and b = "Univ. Calif., San Diego". Given a separator character (e.g., a space), we can map each string to a set of tokens. For example, string a from our running example would be mapped to A = {"California,", "Diego", "of", "San", "University"}.
Note that we have used capital to denote the token set representation of string -a convention we adopt throughout this appendix. Also note that is a lossy representation of , as it discards information about the token order. This is desirable for our applications to names and addresses , where permutation of the tokens does not significantly change the meaning of the strings.
We propose to measure the distance from to via a generalized edit distance on the token sets and . We consider three elementary edit operations: • token insertions where a token is appended to the input set; • token deletions where a token is removed from the input set; and • token substitutions where a token in the input set is replaced by a token ≠ .
Each elementary operation takes an input set A to an output set B, which we write as A → B, and has an associated cost c(A → B) ≥ 0. (Technically, we work with multisets, since we allow tokens to appear multiple times.) We set the cost of inserting a token w to w_ins · dist_inner(ε, w), the cost of deleting a token w to w_del · dist_inner(w, ε), and the cost of substituting a token w by a token w′ to w_sub · dist_inner(w, w′), where w_ins, w_del and w_sub are non-negative weights; ε is the null string; and dist_inner(·, ·) is an inner distance measure on tokens (strings). We then define the hybrid distance between s and t as the minimum average cost of transforming S into T via a sequence of elementary edit operations S → B_1, B_1 → B_2, . . . , B_{k−1} → T.

We can compute the hybrid distance using an off-the-shelf linear sum assignment problem (LSAP) solver (Crouse, 2016). In order to do so, we need to add null string tokens to S and T to account for all possible insertion and deletion operations. Concretely, we add |T| null tokens to S to allow for insertions and |S| null tokens to T to allow for deletions. We then construct a pairwise cost matrix by applying dist_inner to all pairs of tokens in the amended S and T. The resulting matrix is then passed to the LSAP solver, which returns the optimal set of edit operations and their cost. (In this paper, we apply the measure to the title, venue and authors attributes in cora, and the name and addr attributes in rest.)
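To make the computation concrete, the following is a minimal sketch of the hybrid distance. It assumes a normalized Levenshtein distance as dist_inner, unit weights, and normalization by the larger token count; the paper's exact inner distance, weights and normalization may differ. It also uses a brute-force assignment in place of an off-the-shelf LSAP solver, which is adequate for short strings.

```python
from itertools import permutations

def levenshtein(a, b):
    # classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dist_inner(a, b):
    # normalized edit distance in [0, 1]; "" plays the role of the null string
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def hybrid_distance(s, t, w_ins=1.0, w_del=1.0, w_sub=1.0, sep=" "):
    S, T = s.split(sep), t.split(sep)
    # pad with null tokens: |T| nulls in S allow insertions, |S| nulls in T deletions
    S_pad = S + [""] * len(T)
    T_pad = T + [""] * len(S)
    n = len(S_pad)

    def cost(a, b):
        if not a and not b:
            return 0.0                         # null matched to null: no operation
        if not a:
            return w_ins * dist_inner("", b)   # insertion of b
        if not b:
            return w_del * dist_inner(a, "")   # deletion of a
        return w_sub * dist_inner(a, b)        # substitution of a by b

    C = [[cost(a, b) for b in T_pad] for a in S_pad]
    # brute-force linear sum assignment (fine for short strings; for longer
    # strings, pass C to a dedicated LSAP solver instead)
    best = min(sum(C[i][p[i]] for i in range(n)) for p in permutations(range(n)))
    return best / max(len(S), len(T))  # normalization is a choice made here
```

For longer strings, the cost matrix C can be passed directly to an LSAP solver such as scipy.optimize.linear_sum_assignment, avoiding the factorial-time enumeration.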

C Simulation Study
In this appendix, we conduct a simulation study to understand how our model performs in controlled scenarios. Specifically, we simulate entity resolution data sets where we vary the number of records, the level of distortion and the level of duplication. Since our model is generative, we could use it to simulate data; however, the resulting data would have negligible specification error for our model, which is unrealistic. We therefore simulate data that is purposefully misspecified for our model, by adding additional dependencies between the entities and entity attributes, and by using a different process to generate records from entities. We were unable to find an existing data set simulator that generated such data, so we implemented our own.

C.1 Data Set Simulator
We provide an overview of our simulator, which generates personal records describing a population of households. For brevity, we omit low-level details here and refer the reader to the included Python script. Our simulator operates in two stages: in the first stage, it generates a population of households; in the second stage, it iterates over the individuals in all households, generating a random number of distorted records for each individual. By generating households rather than individuals in the first stage, we are able to incorporate additional dependencies between individuals (entities) that are not present in our ER model (see Section 3.1).
Generating Households. We now describe how households are generated in the first stage. In our simplified model, a household may be a couple, a single individual, a couple or single with children, or a group of unrelated adults. Individuals within a household are described by the following attributes: first and last name (first_name and last_name), date of birth (birth_year, birth_month, birth_day), gender and zipcode. The zipcode is constrained to be the same for all individuals within a household, and the first name is conditioned on the gender; the other attributes may vary as described below. Random values for the attributes are generated using the Faker Python library, which attempts to mimic real-world frequency distributions. We make the distributions more concentrated for the name and zipcode attributes to ensure that the entities are not too unique (otherwise entity resolution would be too easy).
We begin by generating the head(s) of the household: either a male-female couple (for simplicity) or a single male or female. If a couple is generated, they have a high chance of sharing the same last name, and their birth years are likely not too far apart. Next, we randomly decide whether to generate children. If children are generated, they share the same last name as the head(s) of the household (the parents), and there is an appropriate gap between their birth years and their parents' birth years. If no children are generated, then we randomly decide whether to generate unrelated adults who live with the head(s) of the household. The unrelated adults are constrained to be of a similar age to the head(s) of the household. When simulating the household composition, we attempt to follow aggregate statistics from the Current Population Survey (U.S. Census Bureau, 2016).
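The household-generation logic above can be sketched as follows. This is a heavily simplified toy version with small illustrative name and zipcode pools (the actual script draws from Faker distributions and models more household types); all names and probabilities here are hypothetical.

```python
import random

# small illustrative pools (the real simulator uses Faker distributions)
FIRST = {"M": ["JOHN", "DAVID", "MICHAEL"], "F": ["MARY", "LINDA", "SUSAN"]}
LAST = ["SMITH", "JONES", "BROWN", "DAVIS"]
ZIPS = ["90210", "10001", "60601"]

def make_person(rng, gender=None, last=None, birth_year=None, zipcode=None):
    gender = gender or rng.choice("MF")
    return {
        "first_name": rng.choice(FIRST[gender]),  # first name depends on gender
        "last_name": last if last is not None else rng.choice(LAST),
        "gender": gender,
        "birth_year": birth_year if birth_year is not None else rng.randrange(1940, 1990),
        "zipcode": zipcode,
    }

def make_household(rng):
    zipcode = rng.choice(ZIPS)  # shared by all members of the household
    head = make_person(rng, zipcode=zipcode)
    members = [head]
    if rng.random() < 0.6:  # generate a partner, who usually shares the last name
        last = head["last_name"] if rng.random() < 0.9 else None
        members.append(make_person(
            rng, gender="F" if head["gender"] == "M" else "M", last=last,
            birth_year=head["birth_year"] + rng.randrange(-5, 6), zipcode=zipcode))
        for _ in range(rng.randrange(3)):  # children share the parents' last name
            members.append(make_person(
                rng, last=head["last_name"],
                birth_year=head["birth_year"] + rng.randrange(20, 40), zipcode=zipcode))
    return members
```

Note how the shared zipcode and last names induce dependencies between entities that our ER model does not represent, which is exactly the misspecification the study aims for.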

Generating Records.
In the second stage, records are generated for the individuals across all households. We simulate a single database/file with duplicate records by including each individual with probability 0.9 and sampling the number of records for an included individual from a Poisson distribution with rate parameter λ, truncated to the interval [1, 4]. Each record is obtained by copying the entity attributes subject to distortion. This is done by iterating over the attributes in a random order and deciding whether to activate the distortion process, with a probability that varies for each attribute. The distortion process for birth day, birth month, gender and zipcode involves drawing a replacement value according to the distribution used in the first stage. The distortion process for birth year involves adding discrete Gaussian noise to the true birth year. The distortion process for first and last name may proceed in one of three ways: (1) making a random typographical error (character insertion, deletion, substitution or transposition); (2) replacing the name with a variant drawn uniformly at random; or (3) generating a replacement according to the distribution used in the first stage. Variant names for (2) are sourced from the WeRelate.org Variant Names Project.
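A minimal sketch of the record-generation step is given below. The interface (dicts mapping attribute names to distortion probabilities and replacement functions) is hypothetical, not the one used in the actual script; the truncated Poisson is sampled from its normalized probability mass on {1, ..., 4}.

```python
import math
import random

def truncated_poisson(lam, lo, hi, rng):
    # Poisson(lam) restricted to {lo, ..., hi}: the exp(-lam) factor cancels
    # when normalizing, so unnormalized weights lam^k / k! suffice
    ks = list(range(lo, hi + 1))
    weights = [lam ** k / math.factorial(k) for k in ks]
    return rng.choices(ks, weights=weights)[0]

def generate_records(entity, distort_prob, distorters, lam, rng):
    """Generate distorted records for one individual (hypothetical interface).

    entity: dict of attribute -> true value
    distort_prob: dict of attribute -> probability of activating distortion
    distorters: dict of attribute -> function (value, rng) -> replacement value
    """
    n = truncated_poisson(lam, 1, 4, rng)  # number of records in [1, 4]
    records = []
    for _ in range(n):
        rec = dict(entity)  # copy the entity attributes
        attrs = list(rec)
        rng.shuffle(attrs)  # iterate over attributes in a random order
        for a in attrs:
            if rng.random() < distort_prob[a]:
                rec[a] = distorters[a](rec[a], rng)  # attribute-specific distortion
        records.append(rec)
    return records
```

The attribute-specific distorters would implement the typo, variant-name and redraw mechanisms described above.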

C.2 Results
We generate 16 data sets using our simulator, one for each combination of the following variables:

[Figure S5: Distribution of records per entity (cluster size) for each level of duplication: low, medium, high and very high. The Poisson rate parameter for each level is given in parentheses.]
• Number of records. We consider data sets with 1000 and 10000 records in expectation. The number of records is random and depends on the number of individuals and the Poisson rate parameter. Since the Poisson rate parameter is fixed (see below), we control the number of records by varying the number of individuals generated in the first stage.
• Level of distortion. We consider two levels of record distortion which we refer to as "low" and "high". These correspond to different choices for the probabilities of activating the distortion process as detailed in Table S4.
• Level of duplication. We consider four levels of duplication, which we refer to as "low", "medium", "high" and "very high". These levels correspond to Poisson rate parameters λ of 0.1, 1, 8 and 100, respectively. When the duplication is "low" (λ = 0.1), over 95% of the entities represented in the data appear only once, whereas when the duplication is "very high" (λ = 100), over 95% of the entities represented in the data appear four times. The distribution of records per entity is plotted for each level in Figure S5.
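The "over 95%" figures quoted above follow directly from the probability mass function of the truncated Poisson distribution, which can be tabulated for each duplication level:

```python
import math

def truncated_poisson_pmf(lam, lo=1, hi=4):
    # pmf of Poisson(lam) restricted to {lo, ..., hi}; the exp(-lam) factor
    # cancels when normalizing, so unnormalized weights lam^k / k! suffice
    w = {k: lam ** k / math.factorial(k) for k in range(lo, hi + 1)}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

# tabulate the records-per-entity distribution for each duplication level
for label, lam in [("low", 0.1), ("medium", 1), ("high", 8), ("very high", 100)]:
    pmf = truncated_poisson_pmf(lam)
    print(label, {k: round(p, 3) for k, p in pmf.items()})
```

For λ = 0.1 the mass on one record exceeds 0.95, and for λ = 100 the mass on four records exceeds 0.95, matching the claims in the text.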
We perform a comparative evaluation of our model, blink, and Sadinle on the 16 simulated data sets. The model evaluation procedure is described in Section 5.2 and the blink and Sadinle models are introduced in Section 5.4. ER evaluation metrics are plotted for each data set in Figure S6. We now make several observations about the results.
First, we observe that our model and blink perform similarly when the duplication level is medium, high or very high. For these duplication levels, blink has a slight advantage in terms of recall when the distortion level is high. The largest difference is observed for the medium duplication/high distortion scenario, where the recall for blink is roughly 10 percentage points higher than for our model. This difference is due to the priors placed on the concentration parameters in our model, which favor high concentrations. This corresponds to a prior belief that distortions occur in the same way, rather than in multiple different ways. However, this is not true for distortions in the simulated data: e.g., an individual whose true first name is "JONATHON" may appear in the data with four distinct first names, "JOHN", "JOJN", "JONATHON" and "ALEX". If we wanted to exploit this knowledge, we could increase (0) for our model to favor lower concentrations.
Second, we observe that our model significantly outperforms blink in terms of precision when the duplication level is low. We believe this is due to the highly informative prior on the linkage structure used in blink: it uses a coupon prior whose first parameter is fixed to the number of records and whose second parameter is sent to ∞. However, our model under the generalized coupon prior selects a value of approximately 8× for the first parameter and approximately 100 for the second, which allows it to more accurately model a low duplication scenario.
Third, we observe that Sadinle achieves the lowest F1 score when the duplication level is medium, high or very high. For these duplication levels, the performance gap in F1 score is largest when the distortion is high (approximately 20 percentage points) and less significant when the distortion is low (around 5 percentage points). These differences in F1 score are mainly due to lower recall; the precision is generally competitive with the other models.
Fourth, we comment on the effect of the data set size, measured in terms of the expected number of records (1000 or 10000). We observe similar trends for all three models: the precision tends to be higher for the smaller data set, while the recall tends to be higher for the larger data set (with some exceptions). The difference in performance is most pronounced in the low duplication setting, where the precision of blink and Sadinle drops considerably for the larger data set, and the recall of Sadinle also drops considerably.

D Study of Linkage Structure Priors and the Distortion Model
In this appendix, we provide additional results for the study of linkage structure priors and distortion models presented in Section 5.3. Our goal is to study the impact of the modeling contributions independently, to determine whether each contribution is beneficial in its own right, and/or whether one contribution is more beneficial than the other.
Recall from Section 5.3 that we considered four parameter regimes for the linkage structure priors (PY, Ewens, GenCoupon and Coupon) and two distortion models (Ours, as proposed in Section 3.1, and blink, as proposed by Steorts (2015)). Thus, there are eight model variants to test: one for each combination of linkage structure prior and distortion model. Figure S7 presents pairwise evaluation metrics (F1 score, precision and recall) for the eight model variants and four data sets in a single plot. We interpret the results for each modeling contribution below.

Linkage Structure Prior. While we discuss the effect of the linkage structure prior in Section 5.3, we only present results there for our distortion model (Ours) due to space constraints. In Section 5.3, we draw two main conclusions from the results in Table 2, which are replicated in Figure S7 (represented by circular vermilion markers):

1. Our proposal to place vague hyperpriors on the EP parameters (for PY, Ewens and GenCoupon) improves robustness and yields the highest ER accuracy for three of the data sets, as measured by pairwise F1 score (nltcs is the exception). We observe significantly lower F1 scores when hyperpriors are not used (see Coupon in Figure S7), particularly for cora and RLdata. Figure S8 provides further justification for this argument, as it shows that vastly different values of the EP parameters are selected for each data set, facilitated by the vague hyperpriors.

2. Our inferences are relatively insensitive to the EP parameter regime (PY, Ewens or GenCoupon), despite the fact that each regime is known to exhibit distinct asymptotic behavior (see Section 2.2).

Figure S7 shows that these conclusions also hold for the blink distortion model (represented by triangular teal markers). We suspect the competitive performance for nltcs under the Coupon linkage prior may be a coincidence, as the population size under the prior is 3,387, which happens to be very close to the true value of 3,307 (see Table 1).
Distortion Model. In Section 5.3, we discuss the effect of the distortion prior under the GenCoupon linkage structure prior. We now extend the discussion to include results for the three other linkage structure priors (PY, Ewens and Coupon), as presented in Figure S7.
We find that our distortion model achieves the highest F1 score for all but one combination of data set and linkage structure prior. The exception is cora under the Coupon linkage structure prior, where the blink distortion model has a slight edge. An explanation for the improved performance of our model is given in Section 5.3, which we summarize here. The blink distortion model is susceptible to entering a high distortion mode, particularly for attributes with non-constant distance measures. This is because it allows a record attribute value to be marked as "distorted" even when it is not actually distorted. Our model corrects this inconsistency and, in doing so, appears to be more robust. In general, we expect the blink distortion model to result in over-linkage (high recall, low precision), while our model is expected to be more balanced. Figure S7 supports this argument, with the difference being most apparent for RLdata, where we see a difference of ∼0.7 in F1 score.

E Further Details of Experimental Setup
In this appendix, we provide further details about the experiments presented in Section 5.

Implementation and Hardware.
All experiments were conducted in R version 3.4.4, running on a local server fitted with two 28-core Intel Xeon Platinum 8180M CPUs and 12 TB of RAM.
We developed an open-source R package called exchanger which implements variants of our model (under different linkage structure priors and distortion models) in addition to the blink model (Steorts, 2015). Since an implementation of the model proposed by Sadinle (2014) was not publicly available, we developed our own, which we have released as an open-source R package called BDD. For efficiency reasons, we implemented inference for all models in C++ using the Rcpp interface (Eddelbuettel and François, 2011). The data set simulator described in Appendix C.1 was implemented as a Python script. A Pipfile is provided to specify the dependencies used when running the script.
Hyperparameter Settings. We followed the recommendations in Section 3.2 when setting hyperparameters for our model. When setting hyperparameters for the two baseline models, we attempted to follow the recommendations of the authors. For blink, we set the population size equal to the total number of records n for the coupon-collector's prior, and set the shape parameters of the Beta prior on the distortion probabilities to n/1000 and n/10. For Sadinle, we set the agreement levels by inspecting the distribution of distances for each attribute. We used truncated uniform priors on the m-probabilities and a uniform prior on the u-probabilities, as recommended by the author. We set the lower truncation points for the m-probabilities to 0.95, based on tuning experiments presented in Appendix F.

Initialization and MCMC.
For our model and blink, we initialized the linkage structure, the entity attributes Y and the distortion indicators Z by linking each record to a unique entity and copying the record attributes into the entity attributes, assuming no distortion. The distortion probabilities and entity attribute distributions G were initialized by drawing from their conditional distributions. The Ewens-Pitman parameters and the distortion distribution concentration parameters were initialized at their prior means.
A similar initialization was used for the Sadinle model: we assigned each record to a unique entity (cluster), and the m- and u-probabilities were initialized by drawing from their conditional distributions. When fitting each model, we ran Markov chain Monte Carlo (MCMC) for 2 × 10^5 iterations, discarding the first 10^5 iterations as burn-in and applying thinning with an interval of 10. This produced 10^4 approximate posterior samples.
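The bookkeeping behind these settings can be sketched as follows (the variable names and the exact convention for which post-burn-in iterations are stored are illustrative assumptions):

```python
# MCMC bookkeeping: 2e5 iterations, 1e5 burn-in, thinning interval 10
total_iters, burn_in, thin = 200_000, 100_000, 10
# iterations whose state is stored (one convention: the first post-burn-in
# iteration, then every 10th thereafter)
kept = range(burn_in, total_iters, thin)
n_samples = len(kept)  # yields 10^4 approximate posterior samples

# singleton initialization (hypothetical variable names): link each record to
# its own entity and switch all distortion indicators off
n_records = 5
links = list(range(n_records))     # record r is linked to entity links[r]
distorted = [False] * n_records    # no distortion assumed at initialization
```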

F Tuning Hyperparameters for Sadinle (2014)
Our aim in this appendix is to determine reasonable values for the hyperparameters of the entity resolution model by Sadinle (2014), which we refer to as Sadinle. We assume the distance functions (used to compare attributes) and the agreement levels (mappings from real-valued distances to discrete levels) are fixed, and that flat priors are used, as recommended by Sadinle. Given these assumptions, the only hyperparameters that remain unspecified are the lower truncation points for the m-probabilities.
The m-probabilities are a set of parameters {m_fl}, where m_fl is the probability that a pair of records referring to the same entity agree at level l on attribute f, given that they do not agree at levels 0, 1, . . . , l − 1. Sadinle recommends using truncated flat priors on m_fl, so that the allowed values lie in the interval [ℓ, 1], where ℓ ∈ [0, 1] is a hyperparameter typically close to 1. More specifically, he recommends setting ℓ = 0.95 if attribute f is a "nearly-accurate" quasi-identifier and ℓ = 0.85 if attribute f is an "inaccurate" quasi-identifier. Since it is not clear which setting is best for our data sets, we run the experiments described in Section 5.4 for four different values: ℓ = 0, 0.5, 0.85, 0.95. When setting ℓ, we use the same value for all attributes and agreement levels for simplicity.

The results are reported in Figure S9 and Table S5. Figure S9 plots the posterior values of the m-probabilities (on the y-axis) for each truncation point (corresponding to the horizontal panels). We observe that the posterior values of m_fl tend to be close to ℓ, especially for agreement level 0 and truncation points ℓ ≥ 0.5. This suggests that the model favors small values of m_fl, despite the fact that m_fl is expected to be close to 1. Consequently, the model has a tendency to "over-link", linking records that do not refer to the same entity. Understanding why the model exhibits this behavior would require further exploration and is beyond the scope of this paper. However, we speculate that it may be related to the use of flat priors, or to known stability issues with Fellegi-Sunter-type models (Goldstein et al., 2017). As a result, we conclude that the posterior value of m_fl is highly sensitive to the choice of ℓ.

Table S5 shows the impact of the truncation point on entity resolution performance. The performance is relatively stable for cora and rest as a function of ℓ, despite some variation in the m-probabilities (as shown in Figure S9).
The reason for this stability may be related to the blocking scheme used for these data sets, which rules out a relatively large number of potential links, thereby guarding against over-linkage. On the other hand, the performance for RLdata and nltcs is far less stable: we find that the precision drops considerably as ℓ is reduced, while the recall remains relatively stable. This is a sign of over-linkage, which is expected since the posterior m-probabilities are significantly smaller. Based on these results, we set ℓ = 0.95 as the default value for our other experiments, since it seems to achieve balanced performance. (The chain was slower to converge for the cora data set, so we increased the number of iterations to 2.5 × 10^5 and the burn-in interval to 1.5 × 10^5.)
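The truncated flat prior at the heart of this tuning exercise can be sketched as follows. For simplicity, each m-probability is drawn independently as Uniform(ℓ, 1); the function name and the shared truncation point are simplifying assumptions, as Sadinle's setup permits a separate truncation point per attribute and agreement level.

```python
import random

def draw_m_probs(levels_per_attr, ell, rng):
    # truncated flat prior: each m-probability ~ Uniform(ell, 1), independently
    # per attribute f and agreement level l (a sketch of the prior, not the
    # full conditional structure of the Sadinle model)
    return {(f, l): rng.uniform(ell, 1.0)
            for f, n_levels in levels_per_attr.items()
            for l in range(n_levels)}

rng = random.Random(42)
# hypothetical attributes with 3 and 2 agreement levels, truncation at 0.95
m = draw_m_probs({"name": 3, "zipcode": 2}, ell=0.95, rng=rng)
```

Setting ell = 0 recovers the untruncated flat prior, which is the regime where the over-linking behavior described above is most pronounced.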

G.1 Study of Linkage Structure Priors
Here we present convergence diagnostics for the models fitted in Section 5.3. We present Geweke diagnostic plots and trace plots for a selection of model variables for each data set, linkage structure prior and distortion model. Each pair of plots is preceded by a title of the form "Data set | Linkage structure prior | Distortion model". The Geweke diagnostic plot (on the left) depicts a Z-score on the x-axis for each variable on the y-axis. The Z-score tests for equality of the means of the first 10% and the final 50% of the Markov chain, and is typically expected to lie in the range [−2, 2] (Geweke, 1992). The trace plot (on the right) depicts the value of each variable (labeled in the right panel) at each step in the chain (on the x-axis). One of the variables denotes the number of instantiated entities. We replace integer indices for the attributes by named indices; for instance, 0,city refers to the distortion probability in source 0 of the attribute called "city".

G.2 Comparison with Baseline Models
Here we present convergence diagnostics for the models fitted in Section 5.4: Geweke diagnostic plots and trace plots for a selection of model variables for each data set and model. Each pair of plots is preceded by a title of the form "Data set | Model". The plots are read in the same way as those in Appendix G.1.