Published Papers



The Political Economy of Voluntary Standard Setting

Upstream, Downstream: Diffusion and Impact of the Universal Product Code (with Emek Basker)

We study the adoption, diffusion, and impacts of the Universal Product Code (UPC) between 1975 and 1992, during the initial years of the barcode system. We find evidence of network effects in the diffusion process. Matched-sample difference-in-difference estimates show that firm size and trademark registrations increase following UPC adoption by manufacturers. Industry-level import penetration also increases with domestic UPC adoption. Our findings suggest that barcodes, scanning, and related technologies helped stimulate variety-enhancing product innovation and encourage the growth of international retail supply chains.
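
For readers unfamiliar with the method, the matched-sample difference-in-differences logic above can be sketched with simulated data (all numbers hypothetical): compare adopters' pre/post change in an outcome to the change among matched non-adopters.

```python
import numpy as np

# Minimal sketch, with simulated (hypothetical) data, of a
# difference-in-differences comparison: the change among adopters minus
# the change among matched non-adopters recovers the treatment effect
# under a common-trend assumption.
rng = np.random.default_rng(2)
n = 400
adopter = rng.random(n) < 0.5            # adopters vs. matched controls
pre = rng.normal(10.0, 1.0, n)           # outcome before UPC adoption
true_effect = 0.8
post = pre + 0.5 + rng.normal(0.0, 1.0, n) + true_effect * adopter

did = (post[adopter] - pre[adopter]).mean() \
    - (post[~adopter] - pre[~adopter]).mean()
print(round(did, 2))                     # close to the true effect of 0.8
```

The key assumption is that, absent adoption, adopters and their matched controls would have trended in parallel.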

Differentiation Strategies in the Adoption of Environmental Standards: LEED from 2000-2014 (with Marc Rysman and Yanfei Wang)

We study the role of vertical differentiation in the adoption of LEED (Leadership in Energy & Environmental Design), a multi-tier environmental building certification system. Our identification strategy relies on the timing of adoption, and shows that builders seek to differentiate from each other when choosing a certification level. We estimate a model that incorporates both differentiation incentives and correlated market-level unobservables, and find that differentiation accounts for 16.5 percent of the variation due to observed factors. Finally, we use our estimates to simulate the impact of reducing the number of LEED tiers from four to two, and find that the impact on environmental investments depends upon the location of the threshold between levels.

Forking, Fragmentation and Splintering (with Jeremy Watson)

Although economic theory suggests that markets may tip towards a dominant platform or standard, there are many prominent examples of persistent incompatibility, inter-platform competition and standards proliferation. This paper examines the phenomena of forking, fragmentation and splintering in markets with network effects. We illustrate several causes of mis-coordination, as well as the tools that firms and industries use to fight it, through short cases of standardization in railroad gauges, modems, operating systems, instant messaging and Internet browsers. We conclude by discussing managerial implications and the potential welfare effects of efforts to promote inter-operability.

Standard Setting Committees: Consensus Governance for Shared Technology Platforms

Voluntary Standard Setting Organizations (SSOs) use a consensus process to create new compatibility standards. Practitioners have suggested that SSOs are increasingly politicized, and perhaps incapable of producing timely standards. This paper develops a simple model of standard setting committees and tests its predictions using data from the Internet Engineering Task Force, an SSO that produces many of the standards used to run the Internet. The results show that an observed slowdown in standards production between 1993 and 2003 can be linked to distributional conflicts created by the rapid commercialization of the Internet.

Four Paths to Compatibility (with Joe Farrell)

We describe four ways to achieve product compatibility: decentralized adoption, negotiation in a consensus Standard Setting Organization (SSO), following a leader, and using converters or multi-homing. Each means has costs and benefits in terms of the likelihood of coordination, the time and resources involved, and the implications for ex post competition and innovation. We discuss what determines which technologies follow which path to compatibility, and consider hybrid mechanisms that combine two or more paths.

Government Green Procurement Spillovers: Evidence from Municipal Building Policies in California, JEEM (with Mike Toffel)

We investigate whether government green procurement policies stimulate private-sector demand for similar products and the supply of complementary inputs. Specifically, we measure the impact of municipal policies requiring governments to construct green buildings on private-sector adoption of the US Green Building Council's Leadership in Energy and Environmental Design (LEED) standard. Using matching methods, panel data, and instrumental variables, we find that government procurement rules produce spillover effects that stimulate both private-sector adoption of the LEED standard and supplier investments in green building expertise. Our findings suggest that government procurement policies can accelerate the diffusion of new environmental standards that require coordinated complementary investments by various types of private adopters.

Governing the Anti-commons: Institutional Design for Standard Setting Organizations

Shared technology platforms are often governed by standard setting organizations (SSOs), where interested parties use a consensus process to address problems of technical coordination and platform provision. Economists have modeled SSOs as certification agents, bargaining forums, collective licensing arrangements and R&D consortia. This paper integrates these diverse perspectives by adapting Elinor Ostrom’s framework for analyzing collective self-governance of shared natural resources to the problem of managing shared technology platforms. There is an inherent symmetry between the natural resource commons problem (over-consumption) and the technology platform anti-commons problem (over-exclusion), leading to clear parallels in institutional design. Ostrom’s eight principles for governing common pool resources illuminate several common SSO practices, and provide useful guidance for resolving ongoing debates over SSO intellectual property rules and procedures.

Choosing the Rules for Consensus Standardization, RAND Journal of Economics (with Joe Farrell)

Consensus standardization—explicit agreement on compatibility standards—is marred by severe delays. We explore tradeoffs between speed and the quality of outcomes in a private-information model of the war of attrition. In this model, the consensus process can be excessively slow—even on an optimistic view of its quality-selection merits. However, we find that adding “vendor neutral” players can mitigate the tradeoff between screening and delay. We also show that intellectual property policies designed to reduce vested interest, and hence delays, do not necessarily weaken the players' incentive to innovate.

Appendix: Proofs and Calculations to accompany "Choosing the Rules for Consensus Standardization"

Modularity and The Evolution of the Internet

This chapter offers an empirical case study of the Internet architecture from an economic viewpoint. Data collected from the two main Internet standard setting organizations (IETF and W3C) demonstrate the modularity of the Internet architecture, and the specialized division of labor that produces it. An analysis of citations to Internet standards provides evidence on the diffusion and commercial applications of new protocols. I tie these observations together by arguing that modularity helps the Internet (and perhaps digital technology more broadly) avoid long-run decreasing returns to investments in innovation, by facilitating low-cost adaptation of a shared general-purpose technology to the demands of heterogeneous applications.

Patents and the Performance of Voluntary Standard Setting Organizations (with Marc Rysman)

This paper measures the technological significance of voluntary standard setting organizations (SSOs) by examining citations to patents disclosed in the standard setting process. We find that SSO patents are cited far more frequently than a set of control patents, and that SSO patents receive citations for a much longer period of time. Furthermore, we find a significant correlation between citation and the disclosure of a patent to an SSO, which may imply a marginal impact of disclosure. These results provide the first empirical look at patents disclosed to SSOs, and show that these organizations not only select important technologies, but may also play a role in establishing their significance.

A NAASTy Alternative to RAND Pricing Commitments (with Marc Rysman)

Voluntary standard setting organizations typically require participants to disclose their patents during the standard-setting process, and will only endorse a standard if patent holders commit to license them on “reasonable and non-discriminatory” or RAND terms. We argue that this policy is unworkable—the RAND standard is inherently ambiguous and thus extremely hard to adjudicate. As an alternative, we propose a policy of Non-Assertion After Specified Time, or NAAST pricing. Under our proposal, technology producers would be compensated, vendors would have access to standards and uncertainty due to litigation would be largely eliminated.

Competing on Standards? Entrepreneurship, Intellectual Property and Platform Technologies (with Stuart Graham and Maryann Feldman)

This paper studies the intellectual property strategy of firms that participate in the formal standards process. Specifically, we examine litigation rates in a sample of patents disclosed to thirteen voluntary Standard Setting Organizations (SSOs). We find that SSO patents have a relatively high litigation rate, and that SSO patents assigned to small firms are litigated more often than those of large publicly-traded firms. We also estimate a series of difference-in-differences models and find that small-firm litigation rates increase following a patent's disclosure to an SSO while those of large firms remain unchanged or decline. We interpret this result as evidence of a "platform paradox": while small entrepreneurial firms rely on open standards to lower the fixed cost of innovation, these firms are also more likely to pursue an aggressive IP strategy that may undermine the openness of a new standard.

Explaining the Increase in Intellectual Property Disclosure (“The Standards Edge, vol. 3”)

This short book chapter documents a large and rather sudden increase in intellectual property disclosure at nine standard setting organizations during the early 1990s. It also examines the specificity of disclosure statements, the significance of disclosed patents, and the differences between disclosing firms. After considering several possible explanations for the increase in disclosure, the paper concludes with a discussion of its policy implications.

Open Standards and Intellectual Property Rights (“Open Innovation: Researching a New Paradigm” OUP)

This is a book chapter that explores the tension between collaboration and competition in the non-market standard setting process—with particular emphasis on the role of intellectual property rights. The chapter develops a simple framework that emphasizes the distinction between standards, implementations, and products. The framework is used to explore a number of factors that influence the efficiency of the standards developing process. I also develop a simple taxonomy of “IPR strategies” for standard setting and close with a discussion of the ongoing policy debates about the hold-up problems created by IPR in standards.


Other Projects

Tax Credits and Small Firm R&D Spending (with Ajay Agrawal and Carlos Rosell)

We use a change in Canadian tax law to examine how small private firms respond to the R&D tax credit. Our estimates imply an R&D user-cost elasticity above unity. Contract R&D expenditures are more elastic than the R&D wage bill. Firms that perform contract research or recently invested in R&D capital are more responsive to a change in the after-tax cost of R&D. We interpret the latter findings as evidence of adjustment costs.

Learning From Testimony on Quantitative Research in Management (with Andrew King and Brent Goldfarb)

Published testimony in management, as in other sciences, includes cases where authors overstate the inferential value of their analysis. Where some scholars have diagnosed a current crisis, we detect an ongoing and universal difficulty: the epistemic problem of learning from testimony. Overcoming this difficulty will require responses suited to the conditions of management research. To that end, we review the philosophical literature on the epistemology of testimony, which describes the conditions under which common empirical claims provide a basis for knowledge, and we evaluate ways these conditions can be verified. We conclude that in many areas of management research, popular proposals such as pre-registration and replication are unlikely to be effective. We propose revised modes of testimony that could help researchers and readers avoid some barriers to learning from testimony. Finally, we consider the implications of our analysis for management scholarship and propose how new standards could come about.

Patent Policy and American Innovation After eBay: An Empirical Examination (with Filippo Mezzanotti)

The 2006 Supreme Court ruling in eBay v. MercExchange marked a sea change in U.S. patent policy. The eBay decision removed the presumption of injunctive relief. Subsequent legal and policy changes reduced the costs of challenging patent validity and narrowed the scope of patentable subject matter. Proponents of these changes argue that they have made the U.S. patent system more equitable, particularly for sectors such as information technology, where patent ownership is fragmented and innovation highly cumulative. Opponents suggest the same reforms have weakened intellectual property rights and curtailed innovation. After reviewing the legal background and relevant economic theory, we examine patenting, R&D spending, venture capital investment and productivity growth in the wake of the eBay decision. Overall, we find no evidence that changes in patent policy have harmed the American innovation system.

Patent Examiner Specialization (with Cesare Righi)

We study the matching of patent applications to examiners at the U.S. Patent and Trademark Office. The distribution of technology classes is more concentrated than would occur under random matching and F-tests reject the hypothesis that family size and claim scope are randomly distributed across examiners. Using the application text, we show that examiner specialization persists even after conditioning on technology sub-classes. Specialization is less pronounced in computers and software than other technology fields. More specialized examiners have a lower grant rate. These findings undermine the idea that random matching justifies instrumental variables based on examiner behaviors or characteristics.
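
The comparison to random matching described above can be illustrated with a small simulation (hypothetical numbers throughout): measure how concentrated technology classes are within each examiner's docket and compare that to a permutation benchmark in which applications are shuffled across examiners. Note the paper's actual tests are F-tests on family size and claim scope; this sketch uses a Herfindahl concentration index purely for illustration.

```python
import numpy as np

# Hypothetical sketch: does within-examiner concentration of technology
# classes exceed what random matching of applications would produce?
rng = np.random.default_rng(1)
n_apps, n_classes, n_examiners = 5000, 20, 50

# Simulated specialized assignment: each examiner draws mostly (70%)
# from a single "home" technology class.
home = rng.integers(0, n_classes, size=n_examiners)
examiner = rng.integers(0, n_examiners, size=n_apps)
specialized = rng.random(n_apps) < 0.7
tech = np.where(specialized, home[examiner],
                rng.integers(0, n_classes, size=n_apps))

def mean_hhi(exam, cls):
    """Average Herfindahl index of class shares within each examiner's docket."""
    hhis = []
    for e in np.unique(exam):
        counts = np.bincount(cls[exam == e], minlength=n_classes)
        shares = counts / counts.sum()
        hhis.append((shares ** 2).sum())
    return float(np.mean(hhis))

observed = mean_hhi(examiner, tech)
# Permutation benchmark: shuffle applications across examiners,
# destroying any examiner-class link while preserving both margins.
benchmark = mean_hhi(rng.permutation(examiner), tech)
print(observed > benchmark)  # specialization exceeds the random benchmark
```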

How Essential are Standard Essential Patents? (with Mark Lemley)

In this study, we explore what happens when standard-essential patents (SEPs) go to court. We expected that proving infringement of a SEP would be easy, but that the breadth of the patents might make them invalid. In fact, the evidence shows the opposite. Conditional on reaching judgment, SEPs are more likely to be held valid than a matched set of litigated non-SEP patents, but they are significantly less likely to be infringed. Standard-essential patents, then, don’t seem to be all that essential, at least when they make it to court. At least part of the explanation for this surprising result comes from another one of our findings: many SEPs asserted in court are asserted by non-practicing entities (NPEs), also known as patent trolls. NPEs do much worse in court, even when they assert SEPs. And the fact that they have acquired a large number of the SEPs enforced in court may bring the overall win rate down significantly.

Final Report of the Berkeley Center for Law & Technology Patent Damages Workshop (with Stuart Graham, Peter Menell and Carl Shapiro)

The determination of patent damages lies at the heart of patent law and policy, yet it remains one of the most contentious topics in this field, particularly as regards the calculation of a reasonable royalty. In March 2016, the authors convened a workshop of leading “insiders” (in-house counsel, litigators (from both the assertion and defense sides), patent licensing professionals, and testifying expert witnesses) and academics (both law professors and economists) to clarify areas of consensus and disagreement regarding the treatment of patent damages. This report summarizes the discussion, key findings, and ramifications for patent case management.

Identifying the Age Profile of Patent Citations: New Estimates of Knowledge Diffusion (with Aditi Mehta and Marc Rysman)

A growing body of research uses patent citations to analyze economic phenomena, and many of these papers are interested in the distribution of citations over the life of a patent. However, this question leads directly to the age-year-cohort identification problem, i.e. co-linearity between the birth year, citation year, and "age" of a patent. Existing research has relied on functional form assumptions to separate these three effects. This paper proposes an alternative non-parametric identification strategy which uses the lag between application and grant as a source of exogenous variation. We provide statistical evidence to support our assumption that the "citation clock" should not start ticking until a patent actually issues, and we examine the potential bias introduced by our method if the lag between application and grant is correlated with citation levels. Finally, we use our proposed identification strategy to re-examine some prior results on the citation age profile of patents from different technological fields and application-year cohorts.
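
The collinearity the abstract refers to is mechanical: for any citation, age equals citation year minus cohort year, so the three regressors span only two dimensions. A tiny numerical illustration (hypothetical data):

```python
import numpy as np

# The age-year-cohort identification problem: since
# age = citation_year - cohort by definition, the three regressors are
# perfectly collinear and their effects cannot be separately identified
# without further assumptions (such as the grant-lag exclusion above).
rng = np.random.default_rng(0)
cohort = rng.integers(1980, 1996, size=1000)             # grant-year cohort
citation_year = cohort + rng.integers(0, 15, size=1000)  # year citation arrives
age = citation_year - cohort                             # mechanical identity

X = np.column_stack([cohort, citation_year, age])
# The centered design matrix has rank 2, not 3: the middle column is
# the sum of the other two.
print(np.linalg.matrix_rank(X - X.mean(axis=0)))  # → 2
```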



CEO Overconfidence and Innovation (with Alberto Galasso)

Are CEOs’ attitudes and beliefs linked to their firms’ innovative performance? This paper uses Malmendier and Tate’s measure of overconfidence, based on CEO stock-option exercise, to study the relationship between a CEO’s "revealed beliefs" about future performance and standard measures of corporate innovation. We begin by developing a career concern model where CEOs innovate to provide evidence of their ability. The model predicts that overconfident CEOs, who underestimate the probability of failure, are more likely to pursue innovation, and that this effect is larger in more competitive industries. We test these predictions on a panel of large publicly traded firms for the years 1980 to 1994. We find a robust positive association between overconfidence and citation-weighted patent counts in both cross sectional and fixed-effect models. This effect is larger in more competitive industries. Our results suggest that overconfident CEOs are more likely to take their firms in a new technological direction.

Status, Quality and Attention: What’s in a (Missing) Name? (with David Waguespack)

How much are we influenced by an author’s identity when evaluating their work? This paper addresses this question in the context of open standards development. We exploit a natural experiment, whereby author names were occasionally replaced by "et al" in a series of email messages used to announce new submissions to the Internet Engineering Task Force (IETF). By comparing the effect of obscuring high versus low status author names, we measure the impact of status signals on the IETF publication process. Our results suggest that name-based signals can explain up to three-quarters of the difference in publication outcomes across status cohorts. However, this signaling effect disappears for a set of pre-screened proposals that receive more scrutiny than a typical submission. We also show that working papers from high status authors receive more attention on electronic discussion boards, which may help in developing these ideas and bringing them forward to publication.

Diversification, Diseconomies of Scope and Vertical Contracting: Evidence from the Taxicab Industry (with Evan Rawley)

This paper studies how firms reorganize following diversification, and proposes that firms use outsourcing, or vertical dis-integration, to manage diseconomies of scope. We also consider the origins of scope diseconomies, showing how different underlying mechanisms generate contrasting predictions about the link between within-firm task heterogeneity and the incentive to outsource following diversification. We test these propositions using micro-data on taxicab and limousine fleets from the Economic Census. The results show that taxicab fleets outsource, by shifting towards owner-operator drivers, when they diversify into the limousine business. The magnitude of this shift toward driver ownership is larger in less urban markets where the tasks performed by taxicab and limousine drivers are more similar. These findings suggest that: (1) firms use outsourcing to manage diseconomies of scope; and (2) inter-agent conflicts are an important source of scope diseconomies.

Information Technology, Productivity and Asset Ownership: Evidence from Taxicab Fleets (with Evan Rawley)

We develop a simple model that links the adoption of a productivity-enhancing technology to increased vertical integration and a less skilled workforce. We test the model’s key prediction using novel micro data on vehicle ownership patterns from the Economic Census during a period when computerized dispatching systems were first adopted by taxicab firms. Controlling for time-invariant firm-specific effects, firms increase the proportion of taxicabs under fleet-ownership by 12 percent when they adopt new computerized dispatching systems. These findings suggest that increasing a firm’s productivity can lead to increased vertical integration, even in the absence of asset specificity.

Who Benefits Most in Disease Management Programs? Improving Target Efficiency (with Maryaline Catillon and Paul Gertler)

Disease management programs aim to save costs by improving the quality of care for chronic diseases, but evidence for their effectiveness is mixed, and reducing health care spending by enough to cover program costs has proved particularly challenging. This study uses a difference-in-differences design to examine the impact of a diabetes disease management program for high-risk patients on preventive tests, health outcomes and the cost of care. We examine heterogeneity along two dimensions: severity (measured using the proxy of poor glycemic control) and preventive testing received in the baseline year. While disease management programs tend to focus on the sickest patients, the impact of this program is concentrated among people who had not received recommended tests in the pre-intervention period. If confirmed, these findings suggest a practical way to improve the cost-effectiveness of disease management programs: targeting subgroups defined both by severity and by (missing) test information.