How We Could Build a Better Future for Creators in the Age of AI

Whatever you think of AI right now (we are in Gartner's trough of disillusionment, after all), for those of us who create digital works – whether it's music, art, writing, or code – the easy availability of tools that can churn out vast amounts of similar work unleashes a mix of incredible opportunity and excitement on one hand, and significant challenge and fear on the other.

One of the biggest sources of friction right now is the feeling that creators' rights are being disrespected. Those who earn their income from creating digital works, in particular, are understandably angry that their work, often placed freely in public view, is being scraped and used to train big proprietary commercial models without permission or compensation.

People using AI models trained on this data can certainly diminish demand for the original creators' work, and thus their income (even if outright copying of their works almost never happens). The current competitive and greedy environment around AI technology companies incentivises them to scrape everything they possibly can, which not only tramples on many creators' wishes but also risks degrading the quality of our entire information ecosystem – and of future AI – by training on a lot of "trash" and "slop".

But before we go further, let's also be honest about the environment many creators have already been working in. The promise of the internet was a level playing field, but the reality for most is a global "winner-take-all" market. Take music streaming: platforms are valued in the tens of billions, yet artists earn fractions of a cent per stream, meaning only a tiny percentage of artists at the very top can support themselves with their music full-time. This dynamic is repeated across almost every creative field, producing a lucky few and a long tail of creators who are (maybe) passionate and (maybe) talented but unable to earn a sustainable income. The current system isn't actually working for most creators, and in my opinion it's not something to protect too dearly. Even banning AI outright wouldn't fix things.

Instead, I think this technological shift gives us a unique opportunity to design a much better, more equitable environment for creators than has ever existed before.

While it's necessary to understand the problems, that's just a step towards building solutions. How can we build a healthier ecosystem where AI is a powerful tool for creativity, not a threat to it?

Here’s a rough plan I've been formulating (as part of a larger Utopia I’m working on). I believe these are all achievable and necessary steps to rebalance the scales for creators:

  1. Enforce robots.txt in law. This existing technical standard lets any website state its scraping policy in a simple text file. Well-behaved search engines and scraper bots respect this file; badly-behaved bots ignore it. Giving this currently-abused convention the force of law would hand clear control back to publishers and creators, letting them decide if and how their content is used by AI crawlers (see the sketch after this list). Infringements and deceptions could be proved through logs or entrapment, and capable technical people should be available to judge the veracity of disputed cases.
  2. Mandate labelling of all AI-involved content. This is an absolutely urgent step, because we are reaching the point where AI output is becoming impossible to distinguish from human work. As I wrote in a previous post, "Giving credit where it's due", transparency is crucial. We must all fight for a future where we can easily know if and how AI was involved in the creation of any work (one possible machine-readable label is sketched after this list). This allows everyone to make their own informed choices.
  3. Demand transparency on training data. We need to require any organisation that builds and deploys an AI system to be transparent about the data it was trained on. This is a core requirement for true Open Source AI, as canonically defined at opensource.org, but I believe it should be required for any publicly-available AI. The information should be clearly accessible from the login and signup pages, in both a human-searchable and a machine-readable format (a possible shape is sketched after this list). Even AIs designed for internal use within an organisation, which are presumably full of sensitive knowledge, should still be required to disclose the general sources of their data.
  4. Create a real market for quality training data. Instead of a scraping free-for-all, we should establish trustworthy places where creators can voluntarily opt in to license their (naturally copyrighted) work into large public training datasets that are intentionally curated and labelled for quality. This could be managed through collective licensing systems, ensuring that the people who create quality foundational data are fairly compensated. Licensing costs for commercial AI companies should be set quite high, to raise real funds for creators.
  5. Publicly fund the creation of "missing" data. AI models are a reflection of their training data, and that data is full of historical and cultural biases. We can actively correct this by using public funds to support the creation of high-quality datasets covering underrepresented areas, like First Nations stories, minority languages, or crucial investigative journalism.
  6. Support public development of ethical AI. To avoid a future where this powerful technology is controlled by just a handful of large corporations (mostly in one or two countries with very particular cultures!), countries should invest in publicly-owned and non-profit AI systems, trained on local data, that can be deployed locally and ethically.
  7. Regulate AI services with smart incentives. Finally, we need thoughtful regulation for all public AI. This means creating laws and incentives that encourage AI companies to pay for ethically curated datasets, but it goes deeper than that. We need to actively prevent the clearly unhealthy uses of AI. What does that mean? It means discouraging applications that unnecessarily replace genuine human connection, enable mass surveillance, or are designed to influence people in ways they don't want. It means putting up strong barriers against using AI to hack or hurt people with things like deepfakes. On the other hand, we should be actively encouraging the healthy uses: AI as a tool to help us make sense of complex data, to polish our own rough drafts, to explore new concepts, and to take on the boring or dangerous jobs we don't want to do. The goal of regulation should be to ensure AI is a tool for human empowerment, not control or the further concentration of wealth.
  8. Tax AI companies to fund a safety net. Large AI companies should be taxed at a significant rate. This revenue could then be used to create a fund, perhaps a form of Universal Basic Income (UBI) or a creator's stipend, to support those whose livelihoods are disrupted by AI and to ensure a basic standard of living for everyone in a more automated future.
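
To make step 1 concrete, here's a minimal sketch of the mechanism a well-behaved crawler already uses, written with Python's standard library. The GPTBot (OpenAI) and CCBot (Common Crawl) user-agent tokens are real published crawler names; the site and policy here are made up for illustration.

```python
# A well-behaved crawler checks robots.txt before fetching anything.
# Python's standard library ships a parser for exactly this.
from urllib.robotparser import RobotFileParser

# Example policy a creator might publish at https://example.com/robots.txt:
# block known AI-training crawlers, allow everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# An AI-training crawler must respect the Disallow rule...
print(parser.can_fetch("GPTBot", "https://example.com/my-art/"))         # False
# ...while an ordinary crawler is still welcome.
print(parser.can_fetch("SomeSearchBot", "https://example.com/my-art/"))  # True
```

Giving this the force of law simply means the `False` above stops being a polite suggestion.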
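
For step 2, a machine-readable label might look something like the sketch below. Every field name and value here is a hypothetical illustration of how "if and how AI was involved" could be declared; it is not an existing standard.

```python
# A hypothetical "AI involvement" label that could be embedded in a file's
# metadata or served alongside it. All field names are illustrative only.
import json

ai_disclosure = {
    "work": "sunset-landscape.png",           # hypothetical work
    "ai_involvement": "assisted",             # e.g. none | assisted | generated
    "tools": ["HypotheticalImageModel v3"],   # made-up tool name
    "human_contribution": "composition, colour grading, final edits",
    "declared_by": "the creator",
    "declared_at": "2025-01-01",
}

print(json.dumps(ai_disclosure, indent=2))
```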
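
And for step 3, a training-data disclosure could be as simple as a manifest that both humans and machines can query. The model name, sources, and fields below are entirely made up, just to show the shape such a disclosure might take.

```python
# Sketch of a machine-readable training-data manifest, plus the trivial
# query a creator (or regulator) would want to run against it.
# Every name below is a made-up illustration, not any real disclosure.
training_data_manifest = {
    "model": "ExampleLM-1",
    "sources": [
        {"name": "Licensed news archive",  "licence": "paid collective licence", "opt_in": True},
        {"name": "Public-domain books",    "licence": "public domain",           "opt_in": None},
        {"name": "Opt-in creator dataset", "licence": "per-work royalties",      "opt_in": True},
    ],
}

def was_source_used(manifest: dict, source_name: str) -> bool:
    """Did this model train on a given source?"""
    return any(s["name"] == source_name for s in manifest["sources"])

print(was_source_used(training_data_manifest, "Licensed news archive"))  # True
print(was_source_used(training_data_manifest, "Scraped social media"))   # False
```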

See? Only 8 steps. Easy.

Look, obviously none of these ideas are simple to implement, but they are far from impossible for governments to make happen if they actually wanted to. Most governments (including my own country, Australia's) seem to lack any long-term plan, and that's what this is. The challenges are significant, but the opportunity is even greater.

By taking a proactive, principled approach, we can build a future where AI genuinely augments human creativity, rather than simply exploiting it. We have a chance to create a more equitable, transparent, and ultimately more creative world for everyone.

To make big changes, all we have to do is look at what we can do locally and go do it. None of us has to do everything alone. Human systems and organisations always change. All the money in the world ultimately comes from consumer and voter decisions, so let's make the right ones, push our systems to support all of us, and build this future together.