Hi Mythcreants! I’m a big fan of this blog, and have been using it for speculative fiction writing for several years. Thank you so much for what you do. The question I’m asking has to do with ChatGPT and other AI writing tools that are all the rage these days. Do you think there is something ethically or normatively wrong if a speculative fiction writer utilizes ChatGPT to assist in coming up with ideas, outlining, etc.?

Obviously ChatGPT cannot write prose for us, but if one uses the skeletons it can provide us, does that make one a kind of thief, hack, or pseudo-writer? I ask because some people are sharing the sentiment that anyone using these tools to spitball barebones outlines is not a “real writer” and thus should not use them. Thank you so much,

Luke

Hey Luke, thanks for writing in!

Oh boy, if it isn’t the most contentious topic in all of creative writing these days. AI in general is a constantly moving target, especially in the world of fiction, because the situation on the ground changes so rapidly. Any take we might have could be out of date by next week, if not sooner. That said, let’s give it a try. 

The first thing to understand is that we cannot offer simple condemnation or absolution in this matter. The issue is far too complex for that. What we can do is tell you some of the issues with generative AI as it currently operates and why the rapid advances are so worrying.

From our perspective, ChatGPT and similar systems represent a dangerous centralization of power. Generative AI isn’t alone in doing that, but it is one of the more extreme examples. It takes a lot of resources to run a system with ChatGPT’s capabilities, so only large entities like corporations or governments (which would open a whole new can of worms) can afford them. But at the same time, these models depend on work from the rest of us to generate all that text.

For sites like Mythcreants, this is an existential threat. ChatGPT can remix our content and deliver it to users without compensating us or even acknowledging the content came from us. The same users who find us by searching online today may never know we exist tomorrow. Mythcreants and other sites that focus on labor-intensive, unique content could end up shutting down, replaced with sites that churn out low-quality AI content. To be clear, some of these trends have already been happening, with Google taking snippets of content to feature in its preview text. But generative AI has massively accelerated this anti-competitive process.

We could end up in a situation where no one can make a living creating the content ChatGPT needs to generate its text. Then, we’ll see what it’s like when various AI models only have each other to iterate off of. This is basically the same problem digital artists are facing, which is why they’ve sued.

This doesn’t mean we don’t also appreciate the utility that AI offers. Wouldn’t it be neat if people could visit our site, type a question in a prompt, and get a custom answer based on everything we said in our articles? The problem is, right now, it would not be Mythcreants that benefits from this feature, but Google or Microsoft. We do not have the ability to demand compensation, get exposure, or even opt out of having our content scraped. What if the cost of this cool feature was all the future articles we might have written if we were in business?

Of course, the success of sites like Mythcreants isn’t all that matters here. It’s also worth thinking about what we want as fiction writers. It’s possible that in a few years AI will be able to generate novels that feel like they were written by Neil Gaiman or N.K. Jemisin. No doubt that would be cool in some ways, but do we want AI to mimic our personal voices, without our permission, to create a profit for a large company?

If AI manages to generate profitable novels, big publishers will likely stop working with unknown writers and instead pay editors to manage loads of AI stories generated based on market demand. These publishers have the funds for marketing and the connections for distribution. Independent writers usually do not. Generative novels could make professional fiction writing a thing of the past. 

Of course, we can’t know the future. Maybe in a year or two, AI companies will need to license the training data that feeds generative AI, spreading the economic benefits of AI more fairly. Maybe ChatGPT and similar systems will hit a wall and never push unique, labor-intensive content out of the market. But the models are getting more capable all the time. We can’t count on them forever being tools that are useful but can’t replace us.

In that context, is it ethically wrong for individuals to use and therefore support generative AI? Maybe. A little. We don’t want to point a finger at any individual who is using tools that help them, but we do want everyone to consider the implications. And we want everyone to understand why some people are so upset by AI – even if they don’t have the knowledge or expertise to properly articulate why they’re upset. There’s still time to influence the direction AI takes.

There are also practical implications to using generative AI like ChatGPT in your writing. These systems are much better at recreating the kinds of content that are already plentiful, which could make your stories more generic, even clichéd. And outlining is where we trust ChatGPT the least. Outlining is the most conceptual stage, which is where AI models have the most trouble. They can definitely generate an outline, but is it a good outline? Is it any better than one of the many premade outlines you’d find by Googling? If you’re going to put all that work into drafting a story, you don’t want to use an outline just because a program could spit it out in a second.

As to whether you can be a “real” writer while using ChatGPT, obviously that’s a loaded question with no objective answer. On a craft level, is using writing prompts from ChatGPT that different from using writing prompts from any other source? Probably not. If you get a premise or high-level concept from ChatGPT, you still have all the real work of writing left to do.

However, the best reporting we can find suggests that specialized AI models are actually much better at generating fictional prose than they are at the really abstract stuff. Granted, this is hard to verify, because said specialized models are behind paywalls, and we often don’t know what prompts were used to create the text in the first place. (A lot of people jealously guard their AI prompts, which is endlessly ironic.)

The more detailed your ChatGPT outline gets, and the more prose it writes for you, the more it becomes, in essence, another writer you are collaborating with. If you had a human doing that work, would they expect to be listed as an author? If so, then it would be a deceptive practice to hide your extensive use of AI. Just as it’s up to you whether you want to use AI tools, we think consumers should be able to choose whether they want to purchase and consume AI-generated works. 

We hope that answers your question, and that you have fair winds writing your story.
