

Dear Creative,
The rapid innovation and democratization of Generative Artificial Intelligence (GenAI) have brought the global creative industries to a critical juncture. The GenAI art market is expected to grow by 42% through 2029, potentially reaching an estimated size of over $2.5 billion. This technological shift challenges fundamental concepts of creativity, ownership, and copyright. The core dispute centers on GenAI firms' practice of training their systems on artists' existing works, enabling users to replicate established art styles at low cost, often without seeking permission from or providing payment to the original creators.
Under U.S. copyright law, a work must be an "original work of authorship" fixed in a "tangible medium of expression" to be copyrightable. The definition and application of "original works of authorship" are where GenAI technologies have sparked major conflict.

The U.S. Copyright Office (USCO) has repeatedly affirmed its long-standing human authorship requirement, holding that copyrightable works must "owe their origin to a human agent." USCO guidance maintains that outputs wholly generated by AI are not copyrightable. Furthermore, outputs produced through human prompt engineering generally do not satisfy the requirement either, because the GenAI model, rather than the human prompter, is seen as controlling or determining the output.
The USCO demonstrated this stance in early 2023 when it partially rescinded the registration granted to the comic book Zarya of the Dawn after discovering that Midjourney had been used to generate its images, leaving protection only for the author's selection, coordination, and arrangement of the work's elements.
The Enforcement Challenge: Difficulties arise because the USCO currently lacks the means to distinguish AI-generated elements from human-created ones, raising questions about its ability to enforce the requirement without voluntary disclosure from applicants. Artists may be disincentivized to disclose AI authorship if they know those portions will not be protected.
A related and critical unresolved issue is the application of the fair use doctrine to the use of copyrighted data for training GenAI models. This doctrine permits unlicensed use of copyrighted works for specific purposes (such as commentary or teaching).
GenAI developers argue that training models constitutes a transformative use, similar to how courts ruled in Authors Guild v. Google that scanning millions of books to create a searchable database was transformative because it enhanced discoverability. They contend that AI tools learn patterns from massive samples of human works and produce wholly new media, thus transforming the purpose for which the work is used.

However, creators and rights holders argue that AI systems are engineered to replicate creative styles and expressions, and that AI-generated content competes directly against human creators in commercial markets. This competition introduces the concept of market dilution under the fourth fair-use factor (effect on the market), where an influx of cheap, AI-generated material harms the market for original human works.

The appropriation of creative works for training AI has led to significant backlash and legal action:
Visual Artists' Lawsuits: At least 16 lawsuits have been filed against major AI companies over copyright infringement. In 2023, several visual artists, including Kelly McKernan and Karla Ortiz, filed a class action lawsuit against image GenAI firms Stability AI, Midjourney, and DeviantArt. The artists argue that their work was stolen to build models that now produce derivative works competing against their originals. McKernan noted that artists feel like "David against Goliath" as tech companies profit while creators struggle. An early district court ruling dismissed many of the claims against Stability AI, though in 2024 the court allowed artists to pursue claims that the image-generation systems themselves infringe their copyrights. Critics argued that the initial complaint was vague and rested on a misunderstanding of how AI training works.
Hollywood's Identity Crisis: Hollywood studios, agencies, and unions are fiercely fighting GenAI encroachment. The launch of OpenAI's Sora 2, which allows users to insert likenesses of real people and characters into AI-generated videos, sparked immediate alarm, and groups like SAG-AFTRA and agencies such as WME moved quickly to object.
The lack of legislative clarity on the data mining of copyrighted materials enables tech companies to exploit the gap between existing legal frameworks and market realities, creating economic insecurity for creators.

The Creator's Double Bind: Many creators, such as screenwriters in the Writers Guild of America (WGA), face a "double bind"—a desire to use AI for its potential alongside a fear of AI's capacity to replace them. The WGA navigated this by using private negotiations to safeguard their right to use AI tools while contractually requiring human writers and ensuring their works would not be used to train Large Language Models without consent.
A Shift to Labor Rights: Some scholars argue that prioritizing copyright extension primarily benefits corporate intermediaries (labels, publishers) rather than individual artists, who already receive minimal compensation under the existing system. They propose that fighting the battle on the terrain of labor law—treating artistic output as valuable labor rather than property—provides a better starting position for artists to secure rights and conditions, such as a living wage or compensation funds.
The Path Forward: Experts suggest that rather than relying solely on existing copyright law, flexible frameworks are needed. One potential solution is establishing a new category of copyright specifically for human-AI collaborative work, to meaningfully recognize human creative direction. Others insist that GenAI companies must move toward a licensing model, arguing that the notion that licensing is impossible is a myth and pointing to companies already built on fairly sourced data.
Global Protests: In the UK, the government consulted on a plan allowing AI developers to use copyrighted material freely unless the rights holder actively opts out. In protest, over a thousand musicians, including Paul McCartney and Annie Lennox, released a silent album, Is This What We Want?, to encourage the government to "enforce copyright laws" and not "legalize music theft". They argue that shifting the burden onto creators to opt out is nearly impossible given the thousands of developers worldwide.
Policymakers face the urgent task of balancing innovation with fair compensation and ensuring that human authorship remains both legally and economically viable in the evolving art market.