AI Company Fraud Must Stop
Christopher Marquis: Professor of Management at the University of Cambridge and author of “The Profiteers: How Business Privatizes Profits and Socializes Costs” (PublicAffairs, 2024).
Cambridge — As artificial intelligence increasingly permeates our lives, it seems clear that it will neither create a technological utopia nor lead to the extinction of humanity. The most probable outcome lies somewhere in between — a future shaped by contingencies, compromises, and, importantly, the decisions we make now about how to constrain and guide AI development.
As a global leader in artificial intelligence, the United States plays a crucial role in shaping this future. However, the AI Action Plan recently announced by President Donald Trump dashed hopes for enhanced federal oversight, instead embracing a growth-at-all-costs approach to technology development. That makes it all the more urgent for state governments, investors, and the American public to focus on a less-discussed accountability tool: corporate governance.
As journalist Karen Hao documents in her book “Empire of AI,” leading companies in the field are already engaged in mass surveillance, worker exploitation, and practices that exacerbate climate change. Ironically, many of them are public benefit corporations (PBCs), a governance structure purportedly designed to prevent precisely such harms and protect humanity. Clearly, the structure is not functioning as intended.
Structuring AI companies as public benefit corporations has proved a highly effective form of “ethics washing.” By signaling virtue to regulators and the public, these companies create a veneer of accountability that shields their day-to-day practices — which remain opaque and potentially harmful — from closer systemic scrutiny.
For example, xAI, owned by Elon Musk, is a public benefit corporation whose stated mission is to “understand the universe.” However, the company’s actions — from secretly building a polluting supercomputer near a predominantly Black neighborhood in Memphis, Tennessee, to creating a chatbot that praises Hitler — demonstrate a disturbing level of disregard for transparency, ethical oversight, and the affected communities.
Public benefit corporations are a promising tool for enabling companies to serve the public good while pursuing profit. But in their current form — especially under Delaware law, where most large U.S. companies are incorporated — they are riddled with loopholes and weak enforcement mechanisms, leaving them incapable of providing the guardrails that AI development requires. To prevent harmful outcomes, improve oversight, and ensure that companies genuinely integrate the public good into their operating principles, state legislators, investors, and the public must demand reform of the PBC model and push to strengthen it.
It is impossible to evaluate or hold companies accountable in the absence of clear, measurable, and time-bound objectives. Yet PBCs in the AI sector rely on broad, vague statements of purpose to describe the benefits that supposedly guide their operations. OpenAI states that its goal is “to ensure that artificial general intelligence benefits all of humanity,” while Anthropic aims “to achieve the greatest amount of positive outcomes for humanity in the long run.” Such lofty aspirations are meant to inspire, but their vagueness can be used to justify nearly any course of action — including ones that endanger the public good.
Delaware law does not require companies to operationalize their stated public benefit through measurable criteria or independent assessments. And while it mandates biennial reports on benefit performance, it does not require companies to disclose the results publicly. Consequently, companies can meet their obligations — or neglect them — behind closed doors, without the public ever knowing.
As for enforcement, shareholders theoretically have the power to sue if they believe the board has failed to uphold the company’s public benefit mission. But this is a hollow avenue for redress, because the harms caused by AI tend to be diffuse, long-term, and largely invisible to shareholders. The stakeholders actually affected — such as marginalized communities and underpaid workers — have no practical legal standing to challenge companies in court.
To play a meaningful role in AI governance, the public benefit corporation model must become more than a reputational shield. That means rethinking how the “public good” is defined, governed, measured, and protected over time. In the absence of federal oversight, the model must be reformed at the state level.
Public benefit corporations should be required to commit to clear, measurable, and time-bound goals, written into their governing documents, supported by internal policies, and tied to performance reviews and compensation. For a company operating in the AI space, these goals could include ensuring the safety of foundation models, reducing bias in model outputs, minimizing the carbon footprint of training and deployment cycles, implementing fair labor practices, and training engineers and product managers in human rights, ethics, and participatory design. Clearly defined objectives, rather than vague aspirations, give companies a foundation for credible internal alignment and external accountability.
The structures of boards and oversight processes must also be reimagined. Boards should include directors with verifiable expertise in AI ethics, safety, social impact, and sustainability. Each company should appoint a chief ethics officer with a clear mandate, independent authority, and direct access to the board. These officers should oversee ethical review processes and be empowered to halt or reshape product plans when necessary.
Finally, AI companies structured as public benefit corporations should be required to publish detailed annual reports containing standardized data on safety and security, bias and fairness, social and environmental impact, and data governance. Independent audits — conducted by experts in AI, ethics, environmental science, and labor rights — should verify the accuracy of this data and assess the company’s governance practices, ensuring alignment with its public benefit goals.
Trump’s AI Action Plan underscored his administration’s unwillingness to regulate this fast-moving sector. Yet even in the absence of federal oversight, state legislators, investors, and the public can enhance corporate governance of AI by applying pressure for reforms to the public benefit corporation model. An increasing number of tech leaders seem to believe that ethics are optional. Americans must prove them wrong, or risk leaving deception, inequality, labor exploitation, and unchecked corporate power to shape the future of AI.