Imagine this: Everything Everywhere All at Once was partially scripted using ChatGPT. The writer prompted it to “write a screenplay about what James Cameron dreamt last night.” It goes on to win Best Picture.
Now what?
Singularity Shmingularity
Distinguishing what was created by humans from what was created by AI has become increasingly difficult. Many refer to this phenomenon as the singularity: the point where technology and humans become indistinguishable, and technology surpasses humans in ability.
Different feats are commonly cited throughout modern history to talk about this phenomenon.
In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov. In sum, a machine won a chess match by outmaneuvering the best human at his craft. The feat was impressive, but chess is a series of mathematical decisions based on probabilities; the task is not creative in nature, so onlookers were not threatened.
In 2016, an AI-written novella almost won a literary prize. In October of 2021, AI completed Beethoven’s unfinished 10th Symphony.
These feats were more impressive to onlookers because of their creative nature. Still, most humans couldn’t access the technology themselves; only the companies designing it had access to the functionality. Parsing out human from machine wasn’t a concern, because most humans weren’t able to use the machine in the first place.
In 2023, consumer AI applications like ChatGPT and DALL·E have taken the world by storm, because they assist with tasks most of us do in our everyday lives. This wave of consumer-focused AI applications can help with creative writing, coding, or generating images.
It’s a revolution of Copernican proportions: AI is the center of the universe, not us.
A board game designer recently won the Colorado State Fair’s annual art competition using Midjourney, an AI image generator.
Recapping the progression of the singularity:
Mathematical tasks are mastered by AI (ex: Deep Blue defeats Kasparov).
Creative tasks are performed well by AI, but are inaccessible to most people (ex: AI novella).
Many tasks are performed well by AI, and are accessible to everyone with a computer (ex: ChatGPT).
With great power comes great responsibility
Did that student really write their essay? Did Poe write that poem, or is it an AI-generated poem written in Poe’s style? Whose Line Is It Anyway?
As soon as the power of AI could be harnessed by the masses, AI had a human problem.
If AI can create art by way of mimicking the style of human creations, what’s stopping nefarious actors from harnessing AI to ascribe works to us that are not ours? Said differently, how can we say definitively whose work is whose?
Cryptographic stamps
The oldest known human art is a handprint dating back 45,000 years. We are not strangers to the importance of stamping our work:
Fred Wilson, co-founder of Union Square Ventures, explains why he believes it’s critical to sign everything, cryptographically:
I think we will need to sign everything to signify its validity. When I say sign, I am thinking cryptographically signed, like you sign a transaction in your web3 wallet.
You can see that “author address” and click on it to see that it is one of the various web3 addresses I own/control. That signifies that it was me who posted the blog.
That’s step one. The responsibility is on us humans to say “I did this, I’m a human, and I can prove it.”
For the uninitiated: this means connecting your crypto wallet and minting your article as an NFT (ERC-721) on a decentralized platform such as Mirror.xyz, which proves that I, Bryce Baker, wrote this, because it came from my wallet and I authorized it.
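To make that concrete, here is a minimal sketch of the underlying primitive using ethers.js (v6). This is not Mirror’s actual publishing flow; it just shows how a wallet signs a piece of text, and how anyone can later recover the signing address and compare it to the claimed author.

```typescript
// Minimal sketch (not Mirror's real flow): sign article text with a wallet,
// then verify that the signature came from the claimed author address.
import { Wallet, verifyMessage } from "ethers";

async function signArticle(authorPrivateKey: string, articleText: string) {
  const wallet = new Wallet(authorPrivateKey);
  const signature = await wallet.signMessage(articleText); // EIP-191 personal_sign
  return { authorAddress: wallet.address, signature };
}

function verifyArticle(articleText: string, signature: string, claimedAuthor: string): boolean {
  // Recover the address that produced the signature and compare it to the claim.
  return verifyMessage(articleText, signature).toLowerCase() === claimedAuthor.toLowerCase();
}
```

Minting the article as an ERC-721 layers a token on top of the same idea: the mint transaction itself is signed by the author’s wallet, so the “author address” is verifiable on-chain.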
Fred’s post inspired me to do the same and publish this on Mirror here:
Signing your work will become even more critical as AI gets better at mimicking our respective styles. The more we create, the more data the AI can train with, meaning our human output makes the AI more human, too.
For example, listen to Joe Rogan interviewing Steve Jobs here. Do it.
This interview never really happened, of course. However, both Rogan and Jobs have many clips for the AI to train with, yielding a result indistinguishable from “reality.”
While signing everything is a great first step, it only solves part of the equation. It allows us to:
Prove we are human
Prove we are who we say we are
Attach ourselves to a specific body of work
However, it does not tell us how to detect when humans plagiarize AI and pass its output off as their own work. We could hypothetically take an AI-generated work and sign it cryptographically, saying “I am human, and I created this,” which is one part true and one part false.
Preventing AI Plagiarism
Let’s say for my newsletter this month, I type into ChatGPT “write me a 500 word article about how blockchain can help AI attribute its creations to the right people, in the style of Ben Thompson”. For fun, here is the output:
The perspicacity of the output is mind-boggling. I can now copy and paste this text, edit it a little if I want, and sign it cryptographically.
Who should get the credit?
The answer for some is “who cares, it’s a tool, like a calculator or any other.” It’s certainly simpler from a legal perspective to say that it’s all fair game. This way, if someone writes a symphony or an award-winning screenplay with AI’s assistance, no due diligence is needed, because we all have access to the same tools.
Our job as creators would be to edit content, rather than create it from scratch.
However, this doesn’t seem fair to the person whose work is being utilized to teach the AI model. In the example I gave, I asked ChatGPT to mimic the style of Ben Thompson. Without his consent, AI utilizes his body of work to teach itself how to write something in his style, which I could publish and stamp as mine.
There are tools being built to help detect if people are “plagiarizing AI.” While they may not tell you what data was plagiarized, these tools can raise a red flag. Here’s a college kid demoing his creation:
Rest assured: another college kid will make a tool that defeats these AI-flagging tools. If the cybersecurity industry has taught us anything, it’s that hackers are always one step ahead of security.
Does that mean our only way out is the way of the Luddites?
Sign everything, including the use of AI services
The blockchain provides a way to make sure attributions are made. If, every time ChatGPT was used, it required you to sign cryptographically that you are using the tool, you could:
Establish a link between the person who generated the AI output and the content created using that output.
Monetize AI applications via micropayments drawn from the user’s wallet.
Establish a link:
The first step to figuring out how attribution or royalties should be applied in this new paradigm is establishing a link between the person who utilized AI to create the content and the content itself.
If we required cryptographic sign-ins to use AI, and then again to publish works online, you could click on a person’s wallet and see that this person pinged ChatGPT’s database. Depending on the level of metadata provided, you might even be able to see exactly what they entered into ChatGPT.
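Nothing like this exists in ChatGPT today, so treat the following as a hypothetical sketch: the user signs the prompt with their wallet, the service verifies the signature, and a fingerprint linking wallet, prompt, and output is recorded. The interfaces and field names below are mine, not any real API.

```typescript
// Hypothetical sketch: an AI service that only accepts signed prompts,
// so every generation is attributable to a wallet address.
import { Wallet, verifyMessage, keccak256, toUtf8Bytes } from "ethers";

interface SignedPrompt {
  prompt: string;
  userAddress: string;
  signature: string; // signature over the prompt text
}

// Client side: the user signs the prompt before sending it to the service.
async function buildSignedPrompt(wallet: Wallet, prompt: string): Promise<SignedPrompt> {
  return {
    prompt,
    userAddress: wallet.address,
    signature: await wallet.signMessage(prompt),
  };
}

// Service side: verify the signature, then log a record tying
// (wallet, prompt, output) together without storing the raw prompt.
function attributionRecord(req: SignedPrompt, output: string) {
  if (verifyMessage(req.prompt, req.signature) !== req.userAddress) {
    throw new Error("prompt was not signed by the claimed wallet");
  }
  return {
    user: req.userAddress,
    promptHash: keccak256(toUtf8Bytes(req.prompt)),
    outputHash: keccak256(toUtf8Bytes(output)),
    timestamp: Date.now(),
  };
}
```

Publishing only the hashes keeps the prompt private while still letting anyone check, later, whether a given piece of published text matches an output that wallet generated.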
Monetize AI applications via micropayments:
OpenAI just announced a paid tier of ChatGPT ($42/mo.) that, among other things, guarantees access during periods of high demand. While a subscription is certainly one way to monetize, another would be to charge users a small amount, say 10 cents, every time they used ChatGPT, with the amount adjusted based on the desired output, the compute power required, and the current level of demand.
Paying for computer processing power is foreign to most modern-day internet users, but for crypto enthusiasts, it’s daily routine. For example, whenever you do anything on the Ethereum network (e.g., buy an NFT, mint an NFT, send money), you pay a fee for using the network. These fees, called “gas fees,” are determined by demand at the time of use and the size of the transaction.
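For a feel of what per-use pricing could look like, here is a sketch using ethers.js: read the network’s current fee conditions, then send a tiny payment to the service before a generation. The RPC endpoint, the service address, and the “price per generation” are placeholders I made up, not a real AI billing API.

```typescript
// Sketch of per-use payment in the style of Ethereum gas fees.
import { JsonRpcProvider, Wallet, parseEther, formatEther } from "ethers";

const provider = new JsonRpcProvider("https://eth.example-rpc.com"); // placeholder RPC endpoint

async function payForGeneration(wallet: Wallet, serviceAddress: string) {
  // Current network fee conditions (EIP-1559 fields when available).
  const feeData = await provider.getFeeData();
  console.log("current maxFeePerGas:", feeData.maxFeePerGas?.toString());

  // Hypothetical price of one generation: a tiny flat payment to the service.
  const tx = await wallet.connect(provider).sendTransaction({
    to: serviceAddress,
    value: parseEther("0.00005"), // roughly a few cents, depending on ETH price
  });
  const receipt = await tx.wait();

  // What the user actually paid in network fees for this transfer.
  const networkFee = receipt!.gasUsed * receipt!.gasPrice;
  console.log("network fee paid:", formatEther(networkFee), "ETH");
}
```

In practice the service would price each call the way the network prices gas: by the size of the request and the demand at that moment.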
Where does this leave us?
While I don’t see consumer AI applications recording their usage on a blockchain anytime soon, the future may require this transition. When problems around AI copyright proliferate in meaningful ways, consumer AI applications will be forced to find solutions that fit this new paradigm.
If you disagree or agree with my writing, I’d love to hear from you.
Email me: bryce@enpassantdigital.com
-BRYCE BAKER