Their IP mountain was drilled away by AI 'mining' companies for profit. Are they owed anything?


Data ownership in the context of large language models is a thorny, complex and contentious issue. AI companies have unlocked huge productivity potential by training their gigantic models, but at the cost of freely appropriating data from original writers, artists, musicians and creators in general, who were never paid. How do we square the data ownership circle?


The rise of generative models, triggered by OpenAI’s ChatGPT, has ignited a revolution in creative expression. These models can produce eerily realistic text, images, and even music, blurring the lines between human and machine-made art. However, this exciting development comes with a crucial question: who owns the data used to train these models, and consequently, who has a right to the profits the models generate?

This essay delves into the complex issue of data ownership in the context of generative models. It will explore the ethical considerations, legal challenges, and societal implications arising from this technological shift. By examining the core of this debate, I will try to illuminate the potential pitfalls and see if we can build a future where innovation and creativity coexist harmoniously.

Central Question and Diverging Perspectives

At the heart of the issue lies the question of ownership. Traditionally, copyright and intellectual property laws have protected the creative works of individuals. However, generative models operate differently. They don’t simply copy existing content; they learn from vast datasets of text, images, and code, extracting patterns and generating entirely new outputs. This raises a crucial question: does the ownership of the training data automatically translate to ownership of the generated content?

The answer is far from clear-cut. On one hand, some argue that the model simply acts as a tool, and the true creators are the individuals who provided the data. They believe that the output reflects the collective creativity embedded in the training data, and therefore, ownership should be shared. This perspective aligns with the “fair use” doctrine, which allows for limited use of copyrighted material for transformative purposes.

On the other hand, others argue that the model itself is the true creator, utilizing its complex algorithms to transform the data into something entirely new. They believe that the developer of the model should hold ownership rights, similar to how a painter owns the final artwork even though they use pigments and brushes created by others.

Ethical Considerations

This debate extends beyond the legal realm and delves into the ethical considerations of data ownership. When AI models generate content that closely resembles human-created works, it raises concerns about plagiarism, fair use, and the essence of creativity. Does the model simply mimic existing styles, or does it possess a genuine spark of originality? Should the same standards of originality apply to AI-generated content as they do to human works?

The widespread adoption of generative models has already had significant societal implications. If AI models can produce content indistinguishable from human-made creations, it could lead to job displacement in creative industries. The potential for misuse of these models, such as creating deepfakes for malicious purposes, also poses a serious threat to societal trust and information integrity.

The legal landscape surrounding data ownership in the context of generative models is still largely uncharted territory. Existing copyright and intellectual property laws were designed for a world where humans were the sole creators, and they struggle to adapt to the complexities of AI-generated content. This lack of clarity creates uncertainties for both creators and developers, hindering innovation and potentially stifling artistic expression.

Competing Arguments

One key challenge lies in determining who holds the copyright for AI-generated works. Is it the individual who provided the training data, the developer of the model, or even the model itself? Current legal frameworks offer conflicting interpretations, leading to confusion and potential legal battles.

  • Case for Data Owners: Proponents of data ownership argue that the content generated by AI models is inherently derivative, relying on the patterns and structures present in the training data. They believe that the individuals who contributed their data should be compensated for its use and share ownership of the generated outputs. This perspective aligns with the “fair use” doctrine, but applying it to AI-generated content raises questions about the degree of transformation involved and the potential for commercial exploitation.
  • Case for Model Developers: On the other hand, some argue that the developer of the model should hold the copyright, similar to how a painter owns the final artwork even though they use pigments and brushes created by others. They highlight the significant investment of time, resources, and expertise required to develop and train these complex models. This perspective emphasizes the role of the developer’s creativity and skill in shaping the final output, even though it relies on pre-existing data.

The Gray Areas and Potential Solutions

Some models might heavily rely on specific pieces of data, while others might synthesize information from vast datasets in ways that are difficult to trace or attribute. Additionally, the level of human intervention in the creation process can vary greatly, further blurring the lines of ownership.

These complexities highlight the need for a nuanced approach to legal frameworks. One potential solution is to adopt a “multi-party” ownership model, recognizing the contributions of both data providers and model developers. This might involve revenue sharing mechanisms or licensing agreements that ensure fair compensation for all involved.
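To make the "multi-party" ownership idea concrete, here is a minimal sketch of one possible revenue-sharing mechanism: the developer keeps a fixed share, and the remainder is split pro-rata among data contributors by how much training data each supplied. The function name, the 60% developer rate, and the token counts are all illustrative assumptions, not a proposal from any real system.

```python
# Hypothetical sketch of a pro-rata revenue split between a model
# developer and data contributors. All names, rates, and numbers
# below are illustrative assumptions.

def split_revenue(total_revenue, developer_rate, contributions):
    """Return per-party payouts.

    total_revenue  -- revenue generated by the model (currency units)
    developer_rate -- fraction (0..1) retained by the model developer
    contributions  -- dict mapping contributor name -> tokens contributed
    """
    developer_cut = total_revenue * developer_rate
    pool = total_revenue - developer_cut          # shared among contributors
    total_tokens = sum(contributions.values())
    payouts = {"developer": developer_cut}
    for name, tokens in contributions.items():
        payouts[name] = pool * tokens / total_tokens  # pro-rata by volume
    return payouts

payouts = split_revenue(
    total_revenue=1_000_000,
    developer_rate=0.6,                        # assumed developer share
    contributions={"alice": 300, "bob": 100},  # tokens, illustrative
)
# developer: 600000.0, alice: 300000.0, bob: 100000.0
```

Of course, weighting by raw data volume is only one choice; real schemes would need to weigh data quality and influence on model outputs, which is far harder to measure.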

Shifting Focus: From Ownership to Collaboration

One must acknowledge that legal solutions alone cannot address the ethical and societal concerns surrounding data ownership in the age of generative models. It is important to engage in broader discussions about the very nature of creativity in the digital age.

The emergence of generative models forces us to confront fundamental questions about the nature of creativity and ownership. If machines can produce works indistinguishable from human-created art, does the concept of originality lose its meaning? What ethical considerations should guide the development and use of these powerful tools?

  • The Challenge of Authorship: One of the most pressing concerns is the potential for plagiarism. When an AI model generates content that closely resembles a specific artist’s style or even incorporates elements of their work directly, who is the true author? Does the model simply mimic existing styles, or can it be said to possess a genuine spark of its own creativity? Defining authorship becomes murky, raising questions about attribution and potential misuse. Answering them matters, however, if we want to preserve economic and legal protections for creative output.

  • Blurring Lines and Mitigating Misuse: The ease with which AI models can generate content raises concerns about the potential for misuse. Deepfakes, for example, can be used to create realistic but fabricated videos of people saying or doing things they never did. This poses a serious threat to trust and information integrity. While it was possible to create fake images and spread fake news in the past, the cost and effort of such operations have dropped by several orders of magnitude, allowing almost anyone to run an elaborate trolling and fake-news network from their backyard. To mitigate such risks, transparency becomes crucial. Developers must be upfront about the data they use and how their models work, while robust mechanisms are needed to hold them accountable for any misuse.

Beyond Ownership: A Collaborative Ecosystem

One way to move forward might be to shift the focus from singular ownership towards a collaborative ecosystem. It promises shared prosperity: a system where data contributors, model developers, artists, and consumers work together to create an interconnected network where value is equitably shared. This could involve:

  • Data Cooperatives: Allowing data providers to collectively manage and monetize their data, ensuring they have a say in how it’s used and receive fair compensation. This promotes data ownership and transparency within the ecosystem.

  • Open-Source Models: Encouraging transparent development practices where algorithms are accessible for scrutiny and improvement. This fosters community innovation, mitigates potential biases, and allows for wider collaboration on model development.

  • Creative Commons Licenses: Implementing flexible licensing frameworks that allow for responsible use and adaptation of AI-generated content, while ensuring proper attribution and protecting rights. This strikes a balance between encouraging use and protecting intellectual property.

It is, however, easier said than done. Organizing millions, if not billions, of individual contributors spread across decades is a hard task, and getting explicit consent for AI training on data already out there is practically impossible. A carefully trodden path might lead to some level of equity, but bad actors could ignore all safeguards, misuse the data for personal gain, and release models that undermine efforts towards collaborative development.

Upskilling and Reskilling for a Changing Landscape

Because the threat of job displacement in creative industries due to AI is real, we must prioritize proactive measures. Governments and educational institutions must collaborate to design upskilling and reskilling programs that equip individuals with the skills needed to adapt to the changing landscape, ensuring they thrive in the new ecosystem.


In this essay I raised some concerns about data ownership in the era of generative AI models, and proposed steps towards a solution. It is important to acknowledge the legal and ethical complexities, and to understand that the revolution is real: undermining it would be a real step back in terms of productivity gains, if it is possible at all, considering that large models are open-source or have been leaked. We must ensure AI empowers, rather than hinders, creativity, fuels progress, and benefits all members of society.

Progress must not come at the cost of the livelihoods of millions of artists and creators who have followed the letter of the law in expectation of getting rewarded. The owners of powerful computational tools must not be the sole winners of our productive future: we must guide ourselves by ethical principles and a shared vision of a brighter future.

Royalty-free stock image above from Pexels.
