Econometrics, the application of statistical methods to economic data, can be instrumental in identifying anticompetitive behavior by supporting analyses such as the assessment of market power, the evaluation of competitive effects resulting from alleged anticompetitive conduct, and the quantification of damages.1
Difference-in-Differences (DiD) analysis has been a popular method in econometrics for estimating causal effects and is often employed in antitrust litigation. The essence of DiD lies in comparing the changes in outcome variables of interest (e.g., price) over time between a group that is exposed to the alleged anticompetitive conduct and a control group that is not (e.g., comparing different groups of consumers, different firms, or different geographic regions). It gets its name “difference-in-differences” because it essentially combines two types of variation—the first from a before-and-after analysis and the second from comparing an affected and an unaffected group.
The key advantage of DiD is its ability to control for time-invariant unobservable factors that may influence the outcome of interest. By differencing out the common time trends between the groups that are and are not affected by the anticompetitive conduct (i.e., “treatment” and “control” groups), DiD isolates the treatment effect by focusing on the differential changes in outcomes that occur after the introduction of the treatment. The DiD methodology has been implemented in antitrust analyses in various settings.2 In merger analysis, for example, DiD has often been used to estimate retrospectively the impact of past consolidations and inform future policy.3
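The mechanics of the two differences can be shown with a minimal sketch; the data, the function name, and the magnitudes below are entirely hypothetical and serve only to illustrate how the estimator nets out the common trend:

```python
from statistics import mean

def did_estimate(panel):
    """Canonical 2x2 difference-in-differences from unit-period outcomes.

    `panel` is a list of (group, period, outcome) tuples, where group is
    "treated" or "control" and period is "pre" or "post".
    """
    cell = {(g, p): [] for g in ("treated", "control") for p in ("pre", "post")}
    for g, p, y in panel:
        cell[(g, p)].append(y)
    # First difference: the before-and-after change within each group.
    d_treated = mean(cell[("treated", "post")]) - mean(cell[("treated", "pre")])
    d_control = mean(cell[("control", "post")]) - mean(cell[("control", "pre")])
    # Second difference: netting out the common time trend.
    return d_treated - d_control

# Hypothetical prices: both groups trend upward by about 2, but treated
# markets rise by an extra 3 after the alleged conduct begins.
data = [
    ("treated", "pre", 10), ("treated", "pre", 11),
    ("treated", "post", 15), ("treated", "post", 16),
    ("control", "pre", 10), ("control", "pre", 11),
    ("control", "post", 12), ("control", "post", 13),
]
print(did_estimate(data))  # 3.0
```

In practice, the same estimate is typically obtained as the coefficient on the interaction of a treatment-group indicator and a post-period indicator in a regression, which also accommodates standard errors and additional controls.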
Despite its strengths, DiD is not immune to potential biases. Choosing the right quantitative tool, such as DiD, in an antitrust setting involves careful consideration of various factors to ensure the validity of the causal inference. Under the Daubert Standard,4 it is important for an expert to demonstrate the adequacy of a chosen tool, such as regression, and the appropriateness of a chosen research design.5 Since biases in the canonical DiD may arise from the violation of distinct conditions, there is no one-size-fits-all solution, and experts need to analyze the case in question carefully.6
There may be situations where a simple pre- and post-treatment formulation is not enough to capture the dynamics. For example, a company’s pricing policy may go into effect in distinct regions at different times rather than being launched simultaneously. There might be a need to study the effect of successive acquisitions by the same company in different markets. A firm may choose to roll out a new policy to distinct groups of stakeholders at different times. In settings like these, applying the standard DiD can yield biased estimates, and the bias will be particularly problematic when there is heterogeneity in the treatment effect over time. Fortunately, a few methodological alternatives have been proposed in the literature,7 some of which have been used in litigation.8 One could, for example, use a matching algorithm in each period to pick the best control group (where only those units that are untreated in that period are candidates),9 and once the control groups are selected, proceed as usual.
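The matching idea can be illustrated with a stylized sketch. The units, periods, and outcomes below are hypothetical, and `staggered_did` is an illustrative name rather than a library function; for each treated unit, only units not yet treated in the relevant period are eligible controls:

```python
def staggered_did(units, horizon=0):
    """Sketch of period-by-period control selection under staggered adoption.

    `units` maps a unit name to (treatment_period_or_None, {period: outcome}).
    For each treated unit, candidates are other units still untreated at the
    evaluation period; we match on the pre-treatment outcome level (nearest
    neighbor) and average the matched two-by-two DiD estimates.
    """
    effects = []
    for name, (g, y) in units.items():
        if g is None:
            continue  # never-treated units serve only as controls
        pre, post = g - 1, g + horizon
        # Candidate controls: other units not yet treated by `post`.
        candidates = [yc for n, (gc, yc) in units.items()
                      if n != name and (gc is None or gc > post)]
        if not candidates:
            continue
        # Nearest-neighbor match on the pre-treatment outcome level.
        yc = min(candidates, key=lambda c: abs(c[pre] - y[pre]))
        effects.append((y[post] - y[pre]) - (yc[post] - yc[pre]))
    return sum(effects) / len(effects)

# Hypothetical markets treated at different times; "C" is never treated.
units = {
    "A": (2, {1: 10, 2: 13, 3: 14}),
    "B": (3, {1: 10, 2: 11, 3: 14}),
    "C": (None, {1: 10, 2: 11, 3: 12}),
}
print(staggered_did(units))  # 2.0
```

This is only a skeletal version of the approach described in Callaway & Sant’Anna (2021); the actual estimators aggregate group-time effects with appropriate weights and provide valid inference.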
DiD also requires the treatment and control groups to follow similar trends over time in the absence of the alleged anticompetitive conduct. In practice, this means that, absent a merger, and with everything else held constant, prices in markets where both merging parties are present (the treatment group) and in markets where at least one of them is not (the control group) would have trended in a similar fashion. A violation of this assumption need not be the end of a DiD analysis, but it does require one to adjust one’s specifications, as the regression will no longer produce consistent estimates merely by incorporating time-invariant variables. If the violation of parallel trends is driven by an observable factor, it is possible to extend the assumption by conditioning on variables that are observable pre-treatment.10
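One simple way to condition on an observable pre-treatment variable is regression adjustment: fit the control group’s outcome change on the covariate, then compare each treated unit’s observed change to its predicted counterfactual change. The sketch below uses hypothetical data, a single covariate, and an illustrative function name:

```python
def conditional_did(treated, control):
    """Regression-adjustment sketch of DiD under conditional parallel trends.

    Each unit is (x, y_pre, y_post), where x is a single pre-treatment
    covariate. We fit the change dy = a + b*x on the control group by
    simple least squares, predict each treated unit's counterfactual
    change, and average the gaps.
    """
    # Fit the control group's outcome change on the covariate (simple OLS).
    xs = [x for x, _, _ in control]
    ds = [y1 - y0 for _, y0, y1 in control]
    n = len(control)
    mx, md = sum(xs) / n, sum(ds) / n
    b = (sum((x - mx) * (d - md) for x, d in zip(xs, ds))
         / sum((x - mx) ** 2 for x in xs))
    a = md - b * mx
    # Average treated change minus the predicted (counterfactual) change.
    gaps = [(y1 - y0) - (a + b * x) for x, y0, y1 in treated]
    return sum(gaps) / len(gaps)

# Hypothetical units as (covariate, pre-period outcome, post-period outcome):
# control changes track the covariate one-for-one; treated units change by
# an extra 3 beyond what the covariate predicts.
control = [(1, 10, 11), (2, 10, 12), (3, 10, 13)]
treated = [(2, 10, 15), (3, 10, 16)]
print(conditional_did(treated, control))  # 3.0
```

Inverse probability weighting and doubly robust estimators, discussed in the note below, are alternative ways to implement the same conditional parallel trends assumption.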
Most of the DiD literature requires that a unit’s potential outcomes be unaffected by the treatment assignment of other units – in other words, the outcome of interest for a unit depends only on whether that unit itself has been exposed to the anticompetitive conduct, which guarantees independence and essentially rules out any spillover effects. In our earlier example, customers can only be affected if the conduct has occurred in their market, but ought to be unaffected otherwise, all else held constant. However, if individuals are connected by a network, there may be spillover effects. A growing literature has developed extensions of the general framework that account for these network effects,11 but there will likely be many more developments in this area, which may particularly impact how antitrust litigation views competition when platforms are involved.12 For example, one might consider how changes in Gen AI policy that apply only to European markets start affecting the way companies conduct business in the United States, despite the absence of any such policy change in the United States.
In conclusion, DiD remains a valuable tool for estimating causal effects, offering a quasi-experimental approach to understanding and estimating the economic implications of alleged anticompetitive practices. Recent econometric developments have significantly enhanced the method’s applicability, addressing concerns related to control group selection, unobserved heterogeneity, and group trends. By incorporating appropriate adjustments to their DiD specifications, antitrust experts can improve the robustness of their estimates, ensuring that antitrust enforcement remains grounded in sound economic principles and evidence-based reasoning. As econometrics continues to evolve, it is paramount that practitioners stay up to date with state-of-the-art quantitative techniques, allowing DiD analysis to contribute to more accurate and reliable causal inference in antitrust cases.
1See, for example, U.S. Department of Justice & Federal Trade Commission, Merger Guidelines (2023) (henceforth Merger Guidelines), §1 & n.7.
2See, for example, Messner v. Northshore University HealthSystem, 669 F.3d 802 (7th Cir. 2012) (concluding that experts can use a “difference-in-differences methodology to estimate [] anti-trust impact”); In re AMR Corp., 625 B.R. 215 (Bankr. S.D.N.Y. 2021); Mr. Dee’s Inc. v. Inmar Inc., No. 1:19cv141 (M.D.N.C. 2021); In re Dealer Management Systems Antitrust Litig., 581 F. Supp. 3d 1029 (N.D. Ill. 2022); Tevra Brands LLC v. Bayer Healthcare LLC, No. 19-cv-04312-BLF (N.D. Cal. Apr. 15, 2024).
3See, for example, Joseph Farrell et al., Economics at the FTC: Retrospective Merger Analysis with a Focus on Hospitals, 35(4 – Special Issue: Antitrust and Regulatory Review) Review of Industrial Organization, 369 (2009); Graeme Hunter et al., Merger Retrospective Studies: A Review, 23(1) Antitrust, 34 (2008); Dennis Carlton et al., Are Legacy Airline Mergers Pro- or Anti-Competitive? Evidence from Recent U.S. Airline Mergers, 62 International Journal of Industrial Organization, 58 (2019).
4The Daubert Standard was established in the U.S. Supreme Court case Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993), and provides a systematic framework for a trial court judge to assess the reliability and relevance of expert witness testimony before it is presented to a jury.
5See, for example, Mia. Prods. & Chem. Co. v. Olin Corp., No. 1:19-CV-00385 EAW (W.D.N.Y. Dec. 28, 2023), where a regression model was found “not methodologically sound, for multiple reasons”, including endogeneity and misclassified data; Reed Constr. Data Inc. v. McGraw-Hill Cos., 49 F. Supp. 3d 385 (S.D.N.Y. 2014), where a Daubert motion to exclude an expert’s regression analysis was granted due to significant failures, including faulty model design, omitted variable bias, and multicollinearity.
6There are some excellent papers that summarize the recent advances in the literature. See, notably, Jonathan Roth et al., What’s trending in difference-in-differences? A synthesis of the recent econometrics literature, 235(2) Journal of Econometrics, 2218 (2023) (henceforth “Roth et al. (2023)”).
7See, for example, Andrew Goodman-Bacon, Difference-in-differences with variation in treatment timing, 225(2) Journal of Econometrics, 254 (2021); Brantley Callaway & Pedro H.C. Sant’Anna, Difference-in-Differences with multiple time periods, 225(2) Journal of Econometrics, 200 (2021) (henceforth “Callaway & Sant’Anna (2021)”); Kirill Borusyak, Xavier Jaravel & Jann Spiess, Revisiting Event Study Designs: Robust and Efficient Estimation, arXiv preprint arXiv:2108.12419 (2021).
8See, for example, Ryan LLC v. Federal Trade Commission, Docket No. 3:24-cv-00986 (N.D. Tex. Apr 23, 2024), ECF 210.
9This is a broad-strokes overview of the methods described in Callaway & Sant’Anna (2021), supra note 7.
10The literature has proposed several ways to operationalize conditional parallel trends: i) regression adjustment, which entails including additional observable and measurable characteristics of each unit (covariates) in the regression model to control for potential confounding factors, and allows for a more nuanced analysis of the variable of interest (inference with this approach can become complicated with a fixed number of matches); ii) inverse probability weighting, which explicitly models the probability that each unit belongs to the treated or control group given some covariates (see Alberto Abadie, Semiparametric Difference-in-Differences Estimators, 72(1) The Review of Economic Studies, 1 (2005) for the original derivation); and iii) doubly robust estimators, which combine both of the preceding methods (see Pedro H.C. Sant’Anna & Jun Zhao, Doubly robust difference-in-differences estimators, 219(1) Journal of Econometrics, 101 (2020)).
11See, for example, Kyle Butts, JUE Insight: Difference-in-differences with geocoded microdata, 133 Journal of Urban Economics 103493 (2023); Martin Huber & Andreas Steinmayr, A framework for separating individual-level treatment effects from spillover effects, 39(2) Journal of Business & Economic Statistics 422 (2021).
12Competition agencies are increasingly concerned with potential spillover effects and the need to account for them in antitrust investigations. See, for example, Merger Guidelines, supra note 1, §2.9: “Network effects occur when platform participants contribute to the value of the platform for other participants and the operator. The value for groups of participants on one side may depend on the number of participants either on the same side (direct network effects) or on the other side(s) (indirect network effects).”