Who Owns the Machine’s Work? AI, Copyright, and the Supreme Court’s Coming Reckoning

The legal system is struggling to answer a question that would have seemed like science fiction just a decade ago: when an artificial intelligence creates something — a painting, a news article, a piece of code — who owns it? And when an AI system trains itself by reading millions of copyrighted works without permission, has it stolen something?

These are not theoretical puzzles. They are live disputes working their way through the courts right now, and at least some of them are destined to reach the Supreme Court. When they do, the stakes will be enormous — for artists, writers, journalists, software developers, and any company hoping to profit from generative AI. The question is whether the current Court is equipped to handle them wisely.

The Cases Already in Motion

The litigation is sprawling and moves fast. Getty Images sued Stability AI in both the United States and the United Kingdom, alleging that Stability’s image-generation system, Stable Diffusion, was trained on millions of Getty photographs without a license and can produce images bearing distorted versions of the Getty watermark. That case is proceeding in federal court in Delaware. Meanwhile, a group of visual artists filed a separate class action against Stability AI, Midjourney, and DeviantArt, claiming their copyrighted work was scraped without consent.

The biggest case may be the Times’ suit against Microsoft and OpenAI, captioned The New York Times Co. v. Microsoft Corp., filed in December 2023 in the Southern District of New York. The Times alleges that OpenAI and Microsoft used millions of its articles to train ChatGPT without permission and that the system can reproduce Times content nearly verbatim. OpenAI argues that training on publicly available text constitutes fair use. The outcome could determine the financial model for the entire AI industry.

On the output side — who owns what AI produces — the Copyright Office has already drawn a line. It has repeatedly refused to register works created autonomously by AI, most prominently in the case of Stephen Thaler, who sought copyright protection for an image generated entirely by his AI system, the “Creativity Machine.” The Copyright Office denied the application, the federal district court in Washington, D.C., upheld the denial in Thaler v. Perlmutter (2023), and the D.C. Circuit affirmed in 2025. Thaler petitioned the Supreme Court, which declined to hear the case — for now. The underlying question, however, has not gone away.

There is also the broader backdrop of the Authors Guild’s long-running battles with Google over the Google Books project, which the Second Circuit ultimately resolved in Google’s favor in 2015. That fair use ruling is already being cited by AI companies as precedent for training on copyrighted data — a stretch, critics say, since scanning books to enable search is different from ingesting them to produce competing commercial content.

What the Law Actually Says — and Doesn’t

Copyright law, codified in Title 17 of the U.S. Code, was written for human authors. The statute grants protection to “original works of authorship” — a phrase the Supreme Court interpreted in Feist Publications v. Rural Telephone Service (1991) to require at least a modicum of creativity, and that courts and the Copyright Office have long read to presuppose a human creator. The Copyright Office’s position, backed so far by the courts, is that AI alone cannot be an “author” under existing law.

Fair use is the more contested battlefield. Under 17 U.S.C. § 107, courts weigh four factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original work. The Supreme Court’s most recent major fair use ruling — Andy Warhol Foundation v. Goldsmith (2023) — cut against the idea that a commercial use can easily qualify as “transformative.” The majority opinion, written by Justice Sonia Sotomayor, held that the Warhol Foundation’s licensing of a Warhol image based on Lynn Goldsmith’s photograph of Prince was not fair use, because that use served substantially the same commercial purpose as Goldsmith’s original. AI companies training on scraped content will need to grapple with that precedent.

The Court’s Track Record on Technology

The Supreme Court’s record on technology should temper optimism about its ability to resolve AI questions wisely.

The Justices have demonstrated genuine confusion about how the internet works. In oral arguments for Gonzalez v. Google (2023) — a case about whether Section 230 shields YouTube from liability for algorithmically recommending terrorist content — several Justices struggled with basic concepts about how social media platforms function. The Court ultimately dismissed that case on narrow grounds, avoiding the harder questions. In Twitter v. Taamneh (2023), decided the same day, the Court unanimously ruled for Twitter but produced a ruling so narrow that it resolved little about platform liability going forward.

Before that, in Google LLC v. Oracle America (2021), the Court ruled 6-2 in Google’s favor (Justice Barrett took no part), holding that Google’s copying of the Java software interfaces was fair use — a genuinely important ruling. But critics found the majority’s reasoning internally inconsistent, and the decision has not produced the clarity that tech law needed.

The Jurisprudential Problem

Here is where the Court’s dominant interpretive philosophies become relevant — and troubling.

Originalism, associated most closely with the late Justice Antonin Scalia and now with Justices Clarence Thomas and Neil Gorsuch, asks what a legal text meant when it was enacted. Textualism asks what the statutory text says, read according to its ordinary meaning. Both approaches can produce sensible results with stable, well-understood technologies. They tend to produce confusion when applied to technologies the framers of a statute — let alone the Framers of the Constitution — could never have imagined.

Copyright law was first enacted in 1790 and has been periodically updated; the current statute dates to the comprehensive 1976 revision, amended since by laws such as the Digital Millennium Copyright Act of 1998. When Congress wrote “works of authorship,” it was thinking about human beings holding quill pens or striking typewriter keys. An originalist reading of that phrase produces the Copyright Office’s current position: AI is not an author and gets no protection. That outcome may be correct. But the same interpretive framework offers little guidance on the harder question of whether training a neural network on copyrighted works infringes those works — a technical process the statute’s drafters could not have anticipated.

The Court’s conservative supermajority has also shown, in other contexts, a willingness to reach beyond the facts before it to reshape legal doctrine. The overruling of Roe v. Wade in Dobbs v. Jackson Women’s Health Organization (2022) and the elimination of Chevron deference in Loper Bright Enterprises v. Raimondo (2024) both demonstrated that this Court does not shy away from major doctrinal shifts. Applied to AI and copyright, that ambition could produce either a principled new framework or a ruling that reflects the Justices’ limited technical understanding dressed up in constitutional language.

What Is Actually at Stake

The policy stakes are high enough to demand serious judicial engagement.

If training AI on copyrighted work is ruled fair use without compensation, creators lose control over their life’s work. Journalism, fiction, photography, music — all of it becomes raw material for systems that can then undercut the original creators in the marketplace. The economic model that sustains creative industries collapses.

If training AI is ruled infringement in all cases, the practical effect may be to entrench the companies that already trained their models on pre-regulation data, while locking out future competitors and open-source projects. The legal risk would be concentrated on smaller players who cannot afford licensing deals with major publishers.

The better answer almost certainly lies somewhere in between — something like a licensing framework, a statutory exception conditioned on compensation, or a new category of permissible use. But that kind of nuanced, forward-looking solution is not what courts are designed to produce. It requires legislation.

Congress has held hearings on AI and copyright. The Copyright Office has been issuing a series of reports on the subject: Part 1 (2024) recommended federal legislation to address AI-generated digital replicas of real people; Part 2 (2025) addressed the copyrightability of AI outputs. A third installment, covering training data and licensing liability, was still pending as of early 2025. Notably, the Office has expressed reservations about compulsory licensing regimes, preferring to let voluntary market-based licensing develop — a position that offers little immediate protection to creators whose work is already being ingested at scale. The legislative process has yet to produce results.

A Court That Must Do Better

The Supreme Court cannot avoid this issue forever. As the circuit courts continue to rule — and potentially reach conflicting conclusions — the pressure to grant certiorari will grow. When that moment comes, the Court will face a choice between careful, technically informed reasoning and the kind of outcome-driven doctrinal reach it has shown on other contested questions.

The history of the Court and technology does not inspire confidence. Creators, publishers, and the public deserve a Court that will take the time to understand what is actually at stake. Whether this one will is an open question — and a reason for continued attention to who sits on it, and how they got there.