Award-Winning Editor of Clarkesworld Magazine, Forever Magazine, The Best Science Fiction of the Year, and More

A Concerning Trend

Since the early days of the pandemic, I’ve observed an increase in the number of spammy submissions to Clarkesworld. What I mean by that is that there’s an honest interest in being published, but not in having to do the actual work. Up until recently, these were almost entirely cases of plagiarism, first by replacing the author’s name and later by using programs designed to “make it your own.” The latter often produces rather ham-fisted results like this one I received in 2021:

Sitting on its three years' experience, the fittest Shell was originally the size of more android subliminal observations than any other single subject in the Grandma. Obey three hundred retorts can't even a couple was issued for wages to the apparently that dropped the storage station.

These are the same sentences from the original story, “Human Error” by Raymond F. Jones, published in If (April 1956).

During its three years' existence, the first Wheel was probably the subject of more amateur astronomical observations than any other single object in the heavens. Over three hundred reports came in when a call was issued for witnesses to the accident that destroyed the space station.

These cases were often easy to spot and infrequent enough that they were only a minor nuisance. Sometimes it would pick up for a month or two, but overall growth was very slow and the number of cases stayed low. Anyone caught plagiarizing was banned from future submissions. Some even had the nerve to complain about it. “But I really need the money.”

Towards the end of 2022, there was another spike in plagiarism, and then “AI” chatbots started gaining attention, putting a new tool in the spammers’ arsenal and encouraging more people to give this “side hustle” a try. It quickly got out of hand:

The graph starts in June 2019 and displays monthly data through February 2023. Small bars start showing up in April 2020. The bars from mid-2021 through September 2022 are a bit higher, but growth turns sharp from there. Where months were typically below 20, the count hits 25 in November, 50 in December, over 100 in January, and nearly 350 so far in February 2023.

(Note: This is being published on the 15th of February. In 15 days, we’ve more than doubled the total for all of January.)

I’m not going to detail how I know these stories are “AI” spam or outline any of the data I have collected from these submissions. There are some very obvious patterns and I have no intention of helping those people become less likely to be caught. Furthermore, some of the patterns I’ve observed could be abused and paint legitimate authors with the same brush. Regional trends, for example.

What I can say is that the number of spam submissions resulting in bans has hit 38% this month. While rejecting and banning these submissions has been simple, it’s growing at a rate that will necessitate changes. To make matters worse, the technology is only going to get better, so detection will become more challenging. (I have no doubt that several rejected stories have already evaded detection or were cases where we simply erred on the side of caution.)

Yes, there are tools out there for detecting plagiarized and machine-written text, but they are prone to false negatives and positives. One of the companies selling these services is even playing both sides, offering a tool to help authors prevent detection. Even if used solely for preliminary scoring and later reviewed by staff, automating these third-party tools into a submissions process would be costly. I don’t think any of the short fiction markets can currently afford the expense.

I’ve reached out to several editors and the situation I’m experiencing is by no means unique. It does appear to be hitting higher-profile “always open” markets much harder than those with limited submission windows or lower pay rates. This isn’t terribly surprising since the websites and channels that promote “write for money” schemes tend to focus more attention on “always open” markets with higher per-word rates.

This might suggest to some that it is in the best interest of a market to have limited submission windows, but I have no doubt that such reprieves would be short-lived. (That, however, might be all some editors need.) Others might seek the safety of solicited submissions or offering private submission opportunities to a narrower set of “known” authors instead of open calls. Editors might even find themselves having to push back on the privacy-minded desire among some authors to provide less contact information. Some might resort to blocking submissions from sources that mask their location with a VPN or other services. Taken a step further, others might employ regional bans as a strategy–much as we have seen happen with financial transactions–due to the high percentage of fraudulent submissions coming from those places.

It’s clear that business as usual won’t be sustainable and I worry that this path will lead to an increased number of barriers for new and international authors. Short fiction needs these people.

It’s not just going to go away on its own and I don’t have a solution. I’m tinkering with some, but this isn’t a game of whack-a-mole that anyone can “win.” The best we can hope for is to bail enough water to stay afloat. (Like we needed one more thing to bail.)

If the field can’t find a way to address this situation, things will begin to break. Response times will get worse and I don’t even want to think about what will happen to my colleagues that offer feedback on submissions. No, it’s not the death of short fiction (please just stop that nonsense), but it is going to complicate things.

Edit 2/17/2023 — I’ve closed comments on this post. There are plenty of places to have fights about publishing or AI. The world doesn’t need one more.

Edit 2/20/2023 — Submissions spiked this morning–over 50 before noon–so I’ve temporarily closed submissions. Here’s a refreshed version of the above graph:

41 Comments

  1. I think it’s one thing to get machine-written text to give you some suggestions for outlines, to get you over that “blank page” problem most people struggle with, but asking it to write an entire story that you obviously can’t even be bothered to read is just crazy and insulting to the person you’re sending it to. Boggles the mind!

  2. This just makes me want to scream. Alas, I’ve encountered too many people (generally those in the “crack ’em out” mode) who really think that AI writing is a good thing. I don’t even support using AI for getting past the “blank page” struggle, or worldbuilding, just because of the dubious origins of the text.

    That said, I can see a real use for AI in tech writing. As a former special education teacher and case manager, I saw some really hideous drafting in student Individual Education Plans, especially the narrative section. AI could lead to better drafting of those documents with better potential outcomes for students as a result of having a more thorough plan developed.

    But fiction? No. Freaking. Way.

    • Nate

      First, using it for ideas, seeds, or even some direction to help with the “blank page” struggle is absolutely fine. The writer in all those cases is going to craft their own submission.

      Second, at least the AI-written submissions stand out so easily as bad writing. It probably cuts down on direct plagiarism too.

      • Kevin R S

        The “obvious” example given here is two years old. Clearly, current submissions are much more advanced and might pass at first glance. There is probably a whole range, from terrible to publishable, and the publishable may be missed if the editor hasn’t read the original that was published in a pulp magazine 60+ years ago.

    • Yuli

      I have to disagree there on one spot. Using AI to help with writing and ideas was a godsend to me to such an extent that it actually reinvigorated me to return to fiction writing.

      Having ChatGPT learn my ideas and help flesh them out or provide character dialog for me to converse with, that was everything I needed. To the point I generated a whole novella.

      Where I draw the line is at publishing an unaltered AI output as your own work. Indeed, I’d go so far as to say attribution is my entire problem with synthetic media. AI assistance is perfectly fine to me, though I place higher value on purely human-created writing. But my tolerance falls apart if you pass off any AI-assistance as your own work. Indeed, I have no intention of sharing that aforementioned novella unless I rewrite it myself.

      At the very least, that’s my opinion, and I respect yours.

      • spencer

        “I generated a whole novella”

        I find it telling that you, as the author of the novella you’re talking about, use the word “generated” to describe the process you engaged in, rather than “wrote.”

      • Snek

        If you need an AI to help you “generate” ideas and write a story, then you’re missing out on the achievement of being a writer. The struggle and the mastery of the form are a big part of doing it, a massive part. Short story writing is a flat-out creative life-and-death challenge, and it’s also a human challenge. If you can’t get it up and done yourself, you’re not ready, and it’s possible writing is not your ‘thing.’ Using tech is your thing, but creativity, it ain’t in ya.

    • We specifically state that we don’t want “AI” involved at all in the process. Assisted, written, co-written… whatever you want to call it. No.

      • There is likely to come a moment when a purely A.I.-written story is published somewhere prominent. I’d put money on that happening within the next 5 years. There is a fiction singularity coming.

    • I’ve worked in special education for over thirty years and if an educator can’t draft a well written IEP with proper verb tenses and complete sentences, what does that say about that teacher? As human beings we should always be learning and growing. Turning over the simple things to AI is lazy. And if we don’t keep challenging ourselves what will that do to the human brain fifty years down the road? Just food for thought.

  3. Relating to “whack-a-mole,” what happens when AI evolves beyond its current abilities and is good enough to generate well-crafted content? That reality may not be so far off.

  4. I come from an art background, and some of these bots really do output great-looking art that can fool anyone. But it is a series of pastiches built on the backs of real artists.

    The apologists for machine learning wrongly equate it with human learning. The difference is that masterful artists and storytellers and composers get beyond the learning-by-imitation stage and eventually create something beyond pastiche. Their works are not 100% derivative. They add something new to the genre, or they add some fresh take on some ongoing societal “conversation.”

    I think machines like ChatGPT are an accelerant to the devaluation of creativity and originality in the arts. It’s painful to watch. I see writers who cynically state–and who even celebrate–the supposed “fact” that everything is derivative of everything else.

    There is always room for new art and new stories and fresh ideas. I hope it becomes normal to value and celebrate those things again.

    • TJ

      I co-authored a few conference papers (IEEE, ACM) about computer poetry, along with a Comp Sci professor specializing in generative algorithms. We began with a Canadian poet, bpNichol, and worked from there. Seeing that he was writing poetry with hypertext and animations, publishing with a Shareware-style floppy disc distribution model around 1982ish, we began looking at him as an Ur-example of cybernetic poetry. (He also wrote episodes of Fraggle Rock, which is about as organic and Gaia-minded as I think tv ever gets).

      I am hardly, just barely, an expert. But I think that the spam-submitting, plagiarist users of ChatGPT and other supposedly “AI” poetry generators are not thinking about cognition or cybernetics or an honest attempt at writing. I think they are using a kind of litigious magical thinking about what constitutes a submission, and playing into a Post-Marxist (yikes that phrase) caricature of commodity fetishism where a story is whatever you can get paid for, and let the victims of the fraud try to catch up and get some kind of legal recourse.

      As a bit of a critical legal scholar as well (I am all over the map as you might see), we can obviously tell where governments and corporations want to steer the law and the most basic concepts of “fraud” and “intellectual property.” The surge of generative algorithm plagiarists has profound consequences that have already been baked into some conferences I attended at law schools and universities. The plagiarists have been given a sort of clear legal runway to land their cultural contributions, which makes editors and publishers very important gatekeepers. (Regardless of how we feel about the elitism of editors, their human sociopolitical biases, etc etc etc).

    • Yuli

      I disagree with the idea that this will devalue human artistry. On the contrary, such easily produced media ought to have a counter effect on human art, where we place GREATER value on the human-created. As someone who has extensively used synthetic media (I hate the term “AI art”), I’ve already noticed this phenomenon in myself and others. Synthetic media is neat to have and helps with projects greatly when budget and capital aren’t in my favor (at all), but all it really accomplished was making me and my followers value human-created art much more. Synthetic media rarely catches the same attention as human-created art, even if it’s of much higher quality.

      Personally I believe human-made and synthetic media ought to be segregated from each other, rather than one being obsolete or the other being banned.

      And if the AI “artists” are unwilling to play nice, we’ll inevitably get all our toys taken away, or rather profoundly regulated.

      • Aludren

        It needs to be regulated now, somehow, before there is no way to tell the difference and we lose actual human art forever. With improvements in 3D printing, even “analog art” won’t be safe when the 3D printer just prints oil paints on a canvas. Or prints the canvas, too.

    • Filip

      I have to disagree. I don’t think AI is going to devalue creativity. If anything, it’s going to increase its value.

      The way I figure, AI is a tool, nothing more. Yes, it is a spectacular tool, but it’s not capable of conceptual creativity. It’s like easy-bake cakes, the kind where you’d buy a powder, mix it with milk or water, toss it in the oven, and voila, cake!

      Not great cake, but competent. Good to serve in a pinch. Nothing you’d want at your wedding, but good enough.

      Easy-bake didn’t replace bakeries. It didn’t replace home baked cakes. It even faded from popularity. It’s still there, but there aren’t cookbooks and shows devoted to it any longer (there were, just as there were ones devoted to the microwave oven.)

      Same will happen with AI.

      I’m old enough to remember the massive protests when newspapers introduced the Macintosh as a replacement for human typesetters and layout artists. It was going to be the death of newspapers.

      It wasn’t. It was just another tool.

      The same will happen with AI. Some of us will use it. Some of us will abuse it (to quote the song). And some of us will shrug and keep writing. In ten years, you won’t be able to publish plebeian writing even in a for-the-love market, because AI is great at average. Its existence will push the limits of what is acceptable quality. Some of us won’t be able to keep up.

      But I believe that most of us will adapt, and the capital-A Arts will grow to encompass niches we haven’t even seen yet, niches made possible by a new tool, just like modern web design was made possible by the ideas brought by Apple computers in the 1980s.

      (Of course, I’m an optimist. YMMV.)

  5. TJ

    One way I protect myself from this is to write a long “short bio” that includes links to many years’ worth of publications. Also, I list a recent writing grant at the end of my bio, in case editors (or readers) want government verification that I exist. Editors, however, have been responding by demanding shorter bios, shortening it themselves, and probably rejecting me because of a bio longer than the poems I submit. Submittable also deletes all links and italics from bios, which might make my miniaturized CV look like a random citation generator, I suppose. Submittable needs to correct this incredible failure of their imaginations before it becomes useless as a spam sieve. Their lack of foresight in their own cybernetic bailiwick is stunning, TBH.

  6. Mike

    It is not surprising that the same skills you bring as an editor also allow you to easily separate out the AI stories.

  7. They came for the assembly line workers and I did nothing
    They came for the accountants and I did nothing
    They came for the truck drivers and I did nothing
    Then they came for me, and no one was there to help me.

    Quite the conundrum. You can look at this from many angles, including one that points out that one of the most popular themes in sf is whether or not to accept AIs as valid selves. I just finished reading Infinity Gate by M. R. Carey, and it makes the case quite strongly that any society that uses AIs to function but doesn’t recognize their validity as selves is based on slavery. No, we’re not there yet, but it’s not all that far off.

    On the other hand, maybe we can come up with a way to register ourselves as actual humans and provide some sort of certification with our submissions. Of course, that would require us to be honest, not a hardwired human trait.

    I don’t have an account with any of the chatbots, because I write for my amusement and hope that it connects with others. I have no doubt that an AI either could or will soon be able to provide better insights and more engaging prose than me, but that’s no reason for me to stop writing.

    (wow, that was almost confused enough to be written by a chatbot, except that an AI would have gotten a better score from grammarly)

    • Kevin R S

      These are not real “AI” though, not even close. These are just algorithms that take an input, for example a bunch of categorized text or images pulled from internet sources, iteratively and pseudorandomly mix and match bits, and offer an output. A human selects the best output, and possibly requests more iteration.
      In the example given, the human was probably either not an English speaker, or didn’t even read the output.

    • Jason P

      You’ve provided a helpful paradigm. I’m reminded of Heinlein’s “Jerry Was a Man” and its speculation on future legal systems’ considerations of what it means to be a person. If we have corporate personhood today, why not AI?

      I believe another perspective may be that we grow to have a symbiotic relationship with AI, where humans may leverage these tools and continue to refine them.

      Why can’t I become a muse for a chatbot by providing the uniquely refined prompt (with the right underlying dataset of knowledge / experience / memories) to generate a work that I can subsequently sculpt to reflect my own predilection and turns of phrase?

  8. JS Obtuss

    So I wonder if something like Git and version history tools could help with this? As a software engineer for years now, Git has been a great tool for collaborating and tracking the progress of coding projects. Each time you commit a piece of work to your Git repository, it is saved with cryptographic hashes; you end up with a digital paper trail of your work’s development from draft to finished product. So what if you have potential authors supply not just their work but their git log with all the associated hashes, so it can be checked as an authentic work history?

    You could have authors submit links to their online version control repositories on sites like GitHub, so there’s independent verification and tracking of the development of their work.

    You could have rules about a minimum number of commits required in a submission’s history, commits having to be spaced out over a length of time (hours, days, etc.), or a limit on how many words or line edits can be made before they have to commit their work in progress. The idea being that if writers want to use AI to get over blank-page syndrome, that’s fine, but then they work to shape the story themselves and record the crafting of their voice and their story for editors to review. It’s an artificial barrier to ensure only those committed to getting published keep going. (There’s a rough sketch of what such a check might look like at the end of this comment.)

    It’s not a perfect system, given that AI will definitely catch up to making fake histories for the development of such work. It feels like we’re at the start of a technological paradigm shift on par with the printing press, the wheel, and fire.
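    As a very rough sketch of that kind of check (Python, with purely made-up thresholds and nothing fancier than plain git; the repository path is hypothetical):

    ```python
    # Rough, hypothetical sketch of the commit-history check suggested above.
    # The thresholds are illustration values, not anyone's real policy, and it
    # assumes the submitted repository has already been cloned locally.
    import subprocess
    from datetime import datetime, timedelta

    MIN_COMMITS = 10                # require at least this many commits
    MIN_SPAN = timedelta(days=3)    # drafting spread over at least this long

    def commit_timestamps(repo_path: str) -> list[datetime]:
        """Author timestamps for every commit in the repo, oldest first."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--reverse", "--format=%at"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return [datetime.fromtimestamp(int(ts)) for ts in out]

    def looks_like_real_drafting(repo_path: str) -> bool:
        """Crude heuristic: enough commits, spread over enough elapsed time.
        (Per-commit word-count limits would need diffs and are omitted here.)"""
        stamps = commit_timestamps(repo_path)
        if len(stamps) < MIN_COMMITS:
            return False
        return (stamps[-1] - stamps[0]) >= MIN_SPAN
    ```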

    • I’ve had to teach people how to use Git. Thanks, but that’s not a path I’m interested in taking again.

  9. Hmm. Might this herald the return of paper submissions? On the surface, it seems unlikely that someone who isn’t bothering to write their own stories would bother to print them out and mail them. Yes, that’s an added barrier for writers in other countries which electronic submissions took down (sorry, folks), but our first responsibility has to be to the readers.

    • I really don’t see that happening and it would undo years of progress, particularly with international works and authors with limited means. Besides, it’s just a different type of submission fee where the postal service gets the money.

  10. Randy Tayler

    Do you tell people who’ve been banned that they’ve been banned, or is it a shadow ban? (I think the latter might cut down on workload – they don’t know to create new aliases and emails, and instead keep submitting, but now into the void.)

    • We don’t tell them. We used to, but they seemed to think that meant we were willing to talk about it.

      • Brent Kellmer

        How do authors know if they’ve been black-listed, then? Certainly the vast majority of those you flag are using AI, but what about false positives?

        • As I’ve said, we’re erring on the side of caution and only banning people we are certain have violated this policy. They can still log in and check the status of their submission (rejected), but they are blocked from submitting further and can still email us if they have questions.

          • Brent Kellmer

            Makes sense. Thank you for clarifying.

  11. A Curtis

    Charge a small fee per submission to cover the time to review it. $25 may be adequate to start. Refunded upon being printed or at the editor’s discretion.

    • Even if it was just $1, it would be dead on arrival. The SFF community is firmly entrenched when it comes to the concept of submission fees. They simply aren’t tolerated. Yog’s Law: “Money flows towards the writer.”

      It also creates a financial barrier to entry that locks out financially disadvantaged and many foreign authors (many have issues with international transactions).

  12. First, there was mass media. Very few creators (often well paid) and very many consumers.
    Second, there is internet bubble media. Many creators (often creating part-time and pro bono) serving small communities of consumers.
    Third, there may be an A.I. future where each individual has a Content A.I. who makes customized content in all art forms and genres that is perfectly optimized to what the consumer wants to experience. Everyone lives in their own content bubble and no two people will ever see the same book or movie unless they share it.

    I don’t like plotting those dots, but I can see that line.

  13. Programming Q&A website StackOverflow banned answers written by AI. People were flooding the site with ChatGPT-written answers (which were often wrong), to the point of overwhelming moderator resources.

    This definitely feels like a new thing. It was never possible to spam-generate creative writing (or “creative” “writing”) at scale before. Text written by ChatGPT is usually easy to spot, but someone still has to spot it.

  14. It’s definitely on our minds at our little journal (Space Squid). Some specific and time-honored formats are particularly susceptible to abuse (of course, not going to say which). Obviously hack writers will always be terrible, but we fear to think about what a skilled and unethical writer could do with these tools. And they’ll get better… the tools and the hacks.

    Have you guys considered requiring some kind of human verification for submissions? Perhaps 2FA, or even a credit card charge of a few cents that is immediately refunded after the writer verifies the amount? (There’s a bare-bones sketch of that second idea at the end of this comment.)

    As for the discussion of whether this will augment or devalue art — my vote is the latter. I think there will be some amazing AI-augmented work that we’ll all admire. They’ll be astounding. And of course the 1% of human writers will continue to flourish in the limelight. But the rank-and-file writers will suffer in obscurity, just like realist painters and advertising artists suffered when photography came to the fore. Follow the money. We the gatekeepers are already struggling to tell the fakes from the originals, and the fakes are basically FREE. That’s the collapse of the economy, people.

    As Neil noted, it strains resources that already are strained, making slushing a real chore. On the other hand, I have to admit that a lot of ChatGPT pieces are easier to read (if less meaningful) than our slush…!
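    Here’s the bare-bones sketch of that refunded micro-charge idea mentioned above (purely hypothetical; the actual charge/refund call to a payment processor is just a placeholder):

    ```python
    # Hypothetical illustration of "charge a few cents, refund it, and have the
    # writer confirm the amount." The payment call is a placeholder callback,
    # not any real processor's API.
    import secrets

    def issue_challenge(charge_and_refund) -> int:
        """Charge and immediately refund a random 1-99 cent amount; return it."""
        cents = secrets.randbelow(99) + 1    # 1..99 cents
        charge_and_refund(cents)             # placeholder for the processor call
        return cents

    def writer_is_verified(expected_cents: int, claimed_cents: int) -> bool:
        """The writer proves they control the card by reporting the exact amount."""
        return expected_cents == claimed_cents
    ```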

  15. Kevin Postlewaite

    Consider charging for submissions, $2 each, or your full actual cost if the problem continues to grow. Refunded for reasonable quality submissions, waived for previously published authors.

    • Harry Morse

      Yeah, this is the inevitability of the capitalization of literary writing, and editors choosing shitty content-building works by esteemed writers (generally rich best friends) and shitting on great works by unknowns. This is the logical conclusion of the current way litmags and publishers do business.

  16. Thurman

    Today, in the time it takes you to open up a submission, read enough to see that it’s AI, and ban the account, the submitter can make 10x more accounts and submissions.

    I see from reading the previous comments that charging is “dead on arrival”, but I would ask you to look at Craigslist and how they handle automotive sales. They charge $5 because without some sort of impediment to quickly creating a new posting, people are always gaming the system (in their case, sellers always want to be on page 1 and not wait 1 week to “renew”).

    You could create a forum where they need to have 5 posts, which would give some credence to their identity (it doesn’t have to be a real name, it just needs to match their submission name). Discourse offers free forum software that could be used, for example. This would do two things: allow you to build a community, and also give you a place to get that community’s feedback on delicate things such as this.

    It still boggles my mind that people are willing to do almost anything to be successful except to do the actual work. Sigh. I feel for you.

  17. Pedro Dias

    Not in any way advocating in favor of AI creation. But it feels a lot like we’re shoving fingers into dikes as the whole thing crumbles. It seems to me inevitable that a) AI will evolve to levels of proficiency that make the output commercially viable; and b) that at that point, *someone* will come along to publish that material – we don’t live in a world where money gets left on the table. The discourse in the writing community is entirely of the We Shall Overcome!, None Shall Pass variety. I think we’d be better off developing plans to survive, and hopefully thrive, when the (seems to me) inevitable comes to pass.
