Jonathan Moeller, Pulp Writer

The books of Jonathan Moeller


Should You Stop Writing Or Pursuing Creative Efforts Because Of AI?

Without major spoilers, the chief villain of the new MISSION IMPOSSIBLE movie is an evil artificial intelligence. That makes it timely to write another post about generative AI! 🙂

I recently saw a long, somewhat maundering social media post arguing that since AI would soon advance to the point that it could spit out a fully completed novel at the press of a button, there was no point in attempting to write any longer. (The post’s author claimed it was a “blackpilled” post, though in my experience “blackpilled” is usually Internet shorthand for “I will use my fears as an excuse to avoid action.”) I also saw a New York Times article about a father worried about encouraging his son’s creative interests because he feared that AI would soon replace all of that.

So that leads to the question – should you stop writing fiction because of AI? Or give up on any creative pursuit at all?

Short answer:

No. Get ahold of yourself. Maybe splash some cold water on your face.

Longer, more elaborate answer:

1.) Using fear of AI as a reason not to do something is excuse-making. In fact, this is a formal logical fallacy known as the Nirvana Fallacy, which holds that if conditions are not perfect, or the outcome will not be perfect, then something is not worth doing. The commonly cited example is seat belts: people wearing seat belts can still die in traffic accidents, therefore seat belts are not worth wearing. The counterpoint is that seat belts have been well proven to reduce traffic fatalities and injuries, and an improved but imperfect outcome is better than no improvement at all.

Writers, in general, seem to be prone to this. You will see many, many, many excuses for why writers do not want to write. Some of them are, of course, valid (illness, life crisis, etc.), but quite a few of them boil down to the Nirvana Fallacy – conditions are not perfect, or the outcome will not be perfect, therefore it is better not to start at all.

“Fear of AI” is merely the latest excuse to slot into the Nirvana Fallacy.

2.) AI is worse than you think it is.

It is regrettable that the various image generators and large language models get saddled with the term “AI”, because there’s nothing terribly intelligent about them. It’s basically Fancy Autocomplete, whether for pictures or words. Granted, further refinements in the technology have made it into Very Super-Duper Fancy Autocomplete, but there’s still nothing particularly intelligent about it.

AI is also a lot harder to use effectively than most people think. If you want to get a decent result out of an AI, you need to put a lot of work into refining the prompts. People can make some beautiful images in Midjourney, but for every beautiful image that comes out of Midjourney, there are like 40 billion terrible ones. Every really good image you see probably took a 400-word prompt and several hundred iterations. Getting acceptable fiction out of a chatbot is so much work that it’s easier to simply write it yourself.

Ironically, if you want fiction out of a chatbot, ask it about something factual. 🙂

Also, whenever people try to rely on AI to do something important, bad things seem to happen. A nonprofit website devoted to treating eating disorders got rid of its volunteer counselors and replaced them with a chatbot, only for the chatbot to start dispensing bad diet advice. Recently some lawyers in New York got in big trouble when they used ChatGPT for legal research, only for it to invent cases that had never happened. (To be fair, the lawyers in question apparently failed to double-check anything, and ChatGPT repeatedly said in its answers that it is a large language model and not a lawyer.)

As an amusing aside, the morning I wrote this paragraph I got a text from a teacher I know complaining how much he hates ChatGPT – it’s incredibly obvious when his summer school students use ChatGPT to do their homework because the answers are so similar. As it turns out, ChatGPT isn’t even good at cheating!

The point is that whenever there are situations that involve personal or criminal liability, using “AI” is a very bad idea. Obviously, writing a novel is a much lower-stakes endeavor. But that leads directly to our next point.

3.) You can’t see the future. Just because everyone says AI is the Next Big Thing doesn’t mean that it is.

The problem with a lot of tech CEOs is that they all want to be Steve Jobs.

Steve Jobs was unquestionably a major figure in tech history, but he’s been mythologized. His keynote presentations were masterpieces of showmanship, which means that people remember his career that way. Like, Steve Jobs strode onto stage, dramatically unveiled the transformative Next Big Thing – the iPod, the iPhone, the iPad – changed the world, and made billions of dollars in front of an applauding crowd. (To be fair, I typed this paragraph on a MacBook Air.)

But that overlooks the actual history, which is that Jobs failed at a whole lot of stuff. He got booted from Apple in the 1980s, his subsequent company NeXT Computer didn’t do all that great, and when Jobs returned to Apple in the late 90s the company was in such dire straits it needed a deal from Microsoft to stay afloat until the iMac came along. The “triumphant keynote” phase of his career was in many ways his second act as an older, wiser man after a lot of setbacks. And a lot of obsessive work went into all the Apple products mentioned above. The iPod and the iPhone in particular went through prototype after prototype, and were the work of large and skilled teams of engineers.

The trouble with remembering the mythology instead of the actual history is that people try to copy the mythology without doing the mountains of work that inspired the myth. Like, these tech CEOs want their products to be the Next Big Thing, but the problem is that the product 1.) often isn’t very good and is less of a product and more of an excuse to extract money from the customer, and 2.) isn’t actually all that useful.

Like, regardless of what one might think about an iPhone or an iPad, it cannot be denied that they are useful devices. I refused to use Apple devices at all in the 2000s because they were so expensive, a criticism that in my opinion remains valid, but in the mid-2010s a combination of job changes (I suddenly became responsible for a lot of Mac computers after a layoff) and just the sheer usefulness of many Apple devices meant I started using them. I still have an iPod Touch I use when I go running or when I do outdoor work, and since Apple doesn’t manufacture them any more, I will be sad when it finally dies.

By contrast, a lot of new tech products aren’t that good. Their CEOs forget that to extract money from the customer, you actually have to provide value in exchange. An iPad is expensive, but it does provide value.

NFTs are a good example of this phenomenon of failing to add value for the customer. For a while, all the Big Brains on social media were convinced that NFTs were going to be the Next Big Thing. The idea was that NFTs would create digital collectibles and artificial scarcity. People talked endlessly about minting their NFTs and how this was going to revolutionize online commerce, but I think it is safe to say that outside of a few niches, NFTs have been soundly rejected by the general public. They don’t add value. If you buy, for example, a collectible Boba Fett figure, it is a physical object that you own, and if anyone takes it without your permission, you can charge them with theft. By contrast, if you buy an NFT for a JPEG of Boba Fett artwork, you have an entry in a blockchain, and there’s nothing to stop people from copying the JPEG of Boba Fett. What’s the point of the NFT, then? And even if you take the Boba Fett figure out of its packaging and give it to a child as a toy, it still provides value in the form of entertaining the kid.

Cryptocurrency was another Next Big Thing – for a while some people were sure that crypto was going to end central banks and government-issued fiat currency. Of course, while there are many legitimate criticisms to be made of central banks and fiat currency, it turns out they do a good job of shutting down a lot of the scams that infested the crypto space. The late great science fiction author Jerry Pournelle used to say that unregulated capitalism inevitably led to the sale of human flesh in the market, and crypto seems to have proven that unregulated securities trading leads inevitably to FTX and crypto marketplace collapses.

The Metaverse is a much more expensive version of this. Mark Zuckerberg, worried about the future of Facebook, decided to pivot to his virtual reality Metaverse. Likely Mr. Zuckerberg thought the rise in remote work during the peak of the pandemic would permanently change social dynamics, and Facebook, if it acted right away, could be to virtual reality what Microsoft was to the personal computer and Google to search engines. Facebook changed its name to Meta, and burned a lot of money trying to develop the Metaverse. However, this plan had two major flaws. 1.) While some people preferred the new social arrangements during COVID, a vastly larger majority hated them and wanted things to go back to normal as soon as possible, and 2.) Meta spent something like $15 billion to build the Metaverse, but ended up with a crappier version of Second Life that required very expensive virtual reality goggles.

Meta ended up wiping out like two-thirds of its company value.

So, right now, generative AI is the Next Big Thing, but as the examples above show, this might not last.

4.) Public derision

Generative AI also could be following a track similar to that of NFTs and cryptocurrencies – an initial surge of enthusiasm, followed by widespread disdain and mockery and a retreat to a smaller niche.

For a while several big gaming companies were very excited about NFTs, and a smaller number were interested in cryptocurrency. NFTs would roll neatly into the growth of microtransactions, which the gaming industry really loves. Like, you would buy a new skin or avatar for your character, and then you’d also get an NFT saying that you had #359 out of 5,000, that kind of thing. Digital collectibles, as we mentioned above.

Except the backlash was immense, and people widely mocked every effort by game companies to insert NFTs into their product. It smacked too much of previous “extract money” efforts like microtransactions and loot boxes.

Cryptocurrency has likewise experienced an increasing level of public disdain – see how “crypto bros” have been mocked after the collapse of FTX and other large crypto companies.

Generative AI is very popular in some quarters, but it is beginning to experience a growing level of public disdain as well. One recent example was fantasy author Mark Lawrence’s self-publishing contest. An AI-designed cover won the competition, and the outrage was high enough that Mr. Lawrence canceled the cover competition in future years. (To be fair, part of the problem was that the artist lied about using AI.) The Marvel show SECRET INVASION used a bunch of AI-generated images for its title sequence, and there was backlash against that.

Various professional organizations have come out against generative AI, and apparently one of the sticking points in the Hollywood writers’ strike is restrictions on AI. The Screen Actors Guild also just went on strike, and one of their points of contention was use of AI. Though the dispute is less about AI itself and more about using AI to enable irrational greed – the studios want to be able to use an individual actor’s likeness in AI generation forever without payment. It’s too soon to say how it will turn out, but it appears that a significant portion of public opinion is on the side of the writers and the actors. It probably helps that the CEOs of major media companies invariably manage to come across as cartoon villains – David Zaslav of Warner Bros. Discovery is clearly there to just loot the company as efficiently as possible, and Bob Iger of Disney is currently dealing with all the very expensive mistakes he made during his previous tenure as CEO. Like, if these guys are excited about AI, why should anyone think it’s a good idea?

So it’s possible that the public derision against AI might push it into niche uses, which would be bad news for the companies that spent billions on it.

5.) Synthesis

All that said, generative AI is objectively more useful than NFTs and less likely to lose all your money than crypto (though it might have about the same low-level risk of getting sued if you use Midjourney for commercial purposes). I mean, those kids who were cheating on their homework above? If they had thought about it a little more and rewritten ChatGPT’s response a little bit, maybe thrown in a couple of typos, then they probably would have gotten away with it. To use a less unethical example, imagine you’re applying for jobs and you need to crank out thirty different customized cover letters. You could spend all day sweating over a handcrafted letter that some HR drone will glance at for a second before throwing away, or you could use ChatGPT to generate them. There are lots of tedious documents which no one enjoys writing but are necessary parts of daily life, and something like ChatGPT is ideal for them.

Or, for that matter, specialized chatbots – ones specifically designed to write marketing copy and nothing else. AI audio will probably end up at a point where it’s simply another feature integrated into ereaders – hit play and an AI voice will read in an accent of your choice, while the human-narrated version will be a premium product.

I think that generative AI will settle into a halfway point between AI Will Transform Everything hype and AI Will Destroy Civilization doomerism. That’s how things usually go – a new idea comes along (thesis), a backlash to it arises (antithesis), and after some struggle they settle into a halfway point (synthesis).

Then it becomes just another tool.

Photoshop offers some evidence for this position. Adobe has been integrating its Firefly generative AI stuff into Photoshop with the new Generative Fill tool. If you know anything about Adobe, you know that they are as corporate and litigious as it gets. The company isn’t exactly into taking big bold swings with its products – they’ve been incrementally updating Photoshop and the other Creative Suite products forever. So if Adobe feels safe integrating generative AI into its products, it’s probably not going anywhere for a while.

But here’s the important point. On social media, you see a lot of impressive images created with Generative Fill, but if you try it yourself, 99% of what it churns out is not very good. Refinement, iterations, and testing are vital. If AI doesn’t go away, I think that’s where it’s going – providing the raw materials for further refinement or improvement.

6.) The Conclusion

As you might guess from the tone of my posts on the subject, I don’t like generative AI very much and I don’t think it adds very much of value, though this might just be my overall grumpiness. If overreaching legislation came along that crippled further AI research, I don’t think much of value would be lost.

No one can see the future, as the many examples above demonstrate.

But, overall, I think generative AI is going to be just another tool, and one that will require more practice to use effectively than people think. Stopping writing (or ceasing to encourage a child in creative pursuits) is a bit like stopping carpentry because someone invented the electric saw.

And think about how many people you see every day who obviously don’t think things through at all. Encouraging a child in creative pursuits will definitely serve him or her well later in life, regardless of actual career.

-JM

One thought on “Should You Stop Writing Or Pursuing Creative Efforts Because Of AI?”

  • Mary Catelli

    The legal firm got into trouble AFTER it had hit the news about the false references it had made
