Lucian A. Bebchuk, Robert J. Jackson Jr., James D. Nelson & Roberto Tallarita
This Article seeks to contribute to the heated debate on the disclosure of political spending by public companies. A rulemaking petition urging SEC rules requiring such disclosure has attracted over 1.2 million comments since its submission almost nine years ago, but the SEC has not yet made a decision on the petition. The petition has sparked a debate among academics, members of the investor and issuer communities, current and former SEC commissioners, and members of Congress. In the course of this debate, opponents of mandatory disclosure have put forward a wide range of objections to such SEC mandates. This Article provides a comprehensive and detailed analysis of these objections and shows that they fail to justify opposition to transparency in this area.
Among other things, we examine claims that disclosure of political spending would be counterproductive or at least unnecessary; that any beneficial provision of information would best come through companies' voluntary disclosures; and that the adoption of a disclosure rule by the SEC would violate the First Amendment or at least be institutionally inappropriate. We demonstrate that these objections, whether taken individually or collectively, do not provide a good basis for opposing a disclosure rule. The case for keeping political spending under the radar of investors, we conclude, is untenable.
Brian Broughman & Jesse M. Fried
Black & Gilson (1998) argue that an IPO-welcoming stock market stimulates venture deals by enabling VCs to give founders a valuable “call option on control.” We study 18,000 startups to investigate the value of this option. Among firms that reach IPO, 60% of founders are no longer CEO. With little voting power, only half of the remaining founder-CEOs survive three years in the role. At initial VC financing, the probability of obtaining real control of a public firm for three years is 0.4%. Our results shed light on how control evolves in startups and cast doubt on the plausibility of the call-option theory linking stock and VC markets.
John Armour & Horst Eidenmüller
What are the implications of artificial intelligence (AI) for corporate law? In this essay, we consider the trajectory of AI’s evolution, analyze the effects of its application on business practice, and investigate the impact of these developments on corporate law. Overall, we claim that the increasing use of AI in corporations implies a shift from viewing the enterprise as primarily private and facilitative toward a more public, regulatory conception of the law governing corporate activity. Today’s AI is dominated by machine-learning applications that assist and augment human decision-making. These raise multiple challenges for business organization, the management of which we collectively term “data governance.” The impact of today’s AI on corporate law is coming to be felt along two margins. First, we expect a reduction across many standard dimensions of internal agency and coordination costs. Second, the oversight challenges, and liability risks, at the top of the firm will rise significantly. Tomorrow’s AI may permit humans to be replaced even at the apex of corporate decision-making. This is likely to happen first in what we call “self-driving subsidiaries” performing very limited corporate functions. Replacing humans on corporate boards with machines implies a fundamental shift in focus: from controlling internal costs to designing appropriate strategies for controlling “algorithmic failure,” that is, unlawful acts by an algorithm with potentially severe negative effects (physical or financial harm) on external third parties. We discuss corporate goal-setting, which in the medium term is likely to become the center of gravity for debate on AI and corporate law. This debate will only intensify as technical progress moves toward the possibility of fully self-driving corporations. We outline potential regulatory strategies for their control.
The potential for regulatory competition weakens lawmakers’ ability to respond, and so even though the self-driving corporation is not yet a reality, we believe the regulatory issues deserve attention well before tomorrow’s AI becomes today’s.
Robert K. Rasmussen & Michael Simkovic
Many scholars and courts have championed a plain meaning approach to interpreting commercial contracts between sophisticated parties. These parties are assumed to carefully draft contracts to make their rights and obligations clear and knowable if the language is enforced as written. However, recent events in the commercial lending arena have raised questions about the efficacy of this approach. Aggressive parties have combed through reams of complex documents looking for ways around seemingly clear contractual barriers. For example, Hovnanian promised to intentionally default on a debt payment to one of its wholly-owned subsidiaries in exchange for favorable financing from a hedge fund whose substantial CDS short position would have otherwise become worthless. In another case, J. Crew, faced with financial distress, found a way to divert the crown jewels from the collateral package pledged to its lenders and instead use this value to prevent a default on unsecured notes that were coming due. Both of these transactions upended the expectations of those who put the original deals together. They raise the question: how can systems that depend on clear rules evolve, correct problems, and reduce unintended consequences without resorting to a subjective standard? One approach is to crowdsource error-checking to market participants by paying bounties to those who detect and publicize flaws in rules-based systems so that problems can be diagnosed and corrected (or, at least, their consequences mitigated) by subsequently revising the rules. This Article considers such an iterative approach in the context of the credit default swaps market and the syndicated loan market.
Hilary J. Allen
While safety concerns are at the forefront of the debate about driverless cars, such concerns seem to be less salient when it comes to the increasingly sophisticated algorithms driving the financial system. This Article argues, however, that a precautionary approach to sophisticated financial algorithms is justified by the potential enormity of the social costs of financial collapse. Using the algorithm-driven fintech business models of robo-investing, marketplace lending, high-frequency trading, and token offerings as case studies, this Article illustrates how increasingly sophisticated algorithms (particularly those capable of machine learning) can exponentially exacerbate complexity, speed, and correlation within the financial system, making the system more fragile. This Article also explores how such algorithms may undermine some of the regulatory reforms that were implemented in the wake of the 2008 financial crisis to make the financial system more robust. Through its analysis, this Article demonstrates that the algorithmic automation of finance (a phenomenon I refer to as “driverless finance”) deserves close attention from a financial stability perspective. This Article argues that regulators should become involved with the processes by which the relevant algorithms are created, and that such efforts should begin immediately, while the technology is still in its infancy and remains somewhat susceptible to regulatory influence.