Recently there has been some controversy in the Chia community over the Space Marmots' decision to publish their NFTs directly to the blockchain. Bram Cohen himself was asked about it in a Twitter space and said “Just don’t do it”. I think that is an attitude born of not understanding the problem, which I am going to explain here. Buckle up, because this is going to be a long one.
The ‘tl;dr’ of the entire argument is that the only way to prevent coherence issues with the asset – one portion available and the other not – is to keep both pieces in the same place: the blockchain. An NFT normally consists of two pieces: the underlying asset (the picture, document, or song being tokenized) and the signed URI on the blockchain – the token itself. For most NFTs the token on the blockchain contains the unique signatures and code that make it an NFT, along with a pointer to a file on an external service. That service can range from a web server somewhere, to an IPFS address hosted by (hopefully) multiple nodes, to a blockchain storage service like Arweave or Filecoin. These carry different levels of risk that the NFT’s underlying asset will disappear, but all of them have a different risk profile than the blockchain hosting the token that gets traded.
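To make the two-piece structure concrete, here is a minimal sketch. The field names and helper are my own illustration, not Chia's actual NFT schema; the point is just that the on-chain half is a pointer plus a commitment, and the NFT is only coherent while the off-chain half is both reachable and unchanged.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class OnChainToken:
    """The half that lives on the blockchain: a pointer plus a commitment."""
    asset_uri: str     # e.g. an HTTPS, IPFS, or Arweave address
    asset_sha256: str  # hash of the underlying asset, fixed at mint time

def asset_is_coherent(token: OnChainToken, fetched: Optional[bytes]) -> bool:
    """The NFT is only whole if the off-chain asset is still reachable
    AND still matches the hash committed on-chain."""
    if fetched is None:  # host gone: unpaid bill, shutdown, 404...
        return False
    return hashlib.sha256(fetched).hexdigest() == token.asset_sha256

# At mint time the two halves agree:
art = b"space marmot pixels"
token = OnChainToken("https://example.com/marmot.png",
                     hashlib.sha256(art).hexdigest())
assert asset_is_coherent(token, art)       # both halves present: coherent
assert not asset_is_coherent(token, None)  # host disappeared: dead link
```

Note that the token half can outlive the asset half indefinitely, which is exactly the coherence problem the rest of this post is about.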
Running storage is hard. I have made this point before that running large amounts of storage comes with unique problems that don’t crop up when running a Chia farm the size of mine. But even harder than running lots of storage is running any amount of storage for a LONG time. And keeping storage accessible and available online for a long time is even harder.
Think about how files were stored 50 years ago. In 1971 / 1972 the very first floppy disks hit the market – coincidentally with about 175KB of usable space, exactly the same as we gave ourselves for testing the on-chain Marmots.
So for this thought experiment we put 120KB of Marmot on one of these brand new, cutting edge floppy disks. But this isn’t the kind of floppy disk you are thinking of, the 5 1/4″ or 3.5″ varieties. No, this is an 8″ floppy disk that even I have only seen in museums. So if it isn’t moved to a 5 1/4″, and then a few years later a 3.5″ floppy, there will soon be nothing that can read our 1972 Marmot JPG (the JPEG file format would not be invented for 20 more years, but you get my point). Or we can put it on one of the ancient hard drives they had back then. But those weren’t hard drives the way we know them; they were more like fixed reels of magnetic tape. Storage has changed a lot in 50 years.
So because of the rapid pace of technology we would spend the 70s and 80s and 90s moving our Marmot from storage medium to storage medium until it ended up on an IDE hard drive – something recognizable to computing enthusiasts. But that wouldn’t be its final resting place. There are very few IDE spinning hard drives left alive from the early days, and probably none that have been in active use since the late 80s. So a human person with a paycheck needs to keep moving that file, from storage location to storage location, until today it is sitting on cloud storage managed by a huge company.
But even storage managed by huge companies online has to be actively managed. And someone needs to pay for that active management. While there might be 20 year old web sites still around with all the same content they had at the beginning, there aren’t many and almost none of them would be running on the original hardware they started on. They would have required expensive and risky migrations, with teams of people planning and executing. They would be running on entirely new software platforms, and would have needed active human involvement to keep them compatible with the modern web. And hard drives simply do not last that long, so the data on them needs to be kept redundant with backups, and tests of those backups to ensure that it stays viable. More human-hours.
Now, a lot of this process is automated at this point, and files stored on cloud storage are a lot easier to keep safe than files stored on an 8″ floppy in 1972, but it’s not perfect or free. So imagine our Marmot JPG from 1972, which has survived over a dozen medium transfers so far, and how it will spend the next 50 years once it ends up in the cloud.
First, someone is paying the bills for that storage account – almost certainly the original creators of the NFT, or a service they paid to do it for them. And that is an active process. No cloud storage provider offers a “pay us X dollars and we will host your files exactly as-is in perpetuity” plan. You pay by the second, the month, or the year. So the first risk is that someone just stops paying the bills and Amazon or MS or Google shuts it off. And that assumes the cloud provider is even in the same business in 50 years as it is today. None of these companies existed in 1972. Microsoft is the oldest of them at only 47 years old, and certainly did not think it would end up in the business of cloud computing when it started. Obviously. So the second risk is that the cloud provider simply stops hosting your data.
And once an NFT has been minted using that data as an underlying asset, this risk cannot be mitigated, because the pointer in the NFT cannot be changed. Even if that pointer points to an IPFS address served by multiple nodes on different cloud providers (a very reliable architecture), a human person – or a system of them – will still need to manage that process long term. Keep those IPFS nodes up. Create new ones as necessary. Update them, and move them to more modern platforms, ad nauseam. None of this just happens on its own. And even if, somehow, for the next 50 years somebody does do all that work and pays those bills, there is the protocol problem. Does anyone still use IPFS in 50 years? Possibly not. Remember AFS, the Andrew File System? You probably don’t, even though it was a very popular distributed file system in the 80s and 90s. The reason I mention it is that I worked somewhere with a very large, old AFS network, and keeping old files available and in good condition was hard and took a lot of work. And it wasn’t perfect; stuff would get lost or damaged over the years and need restoration from backup and truly active management.
This is important because neither cloud providers nor IPFS nodes have a perfect track record for keeping files safe and available. Good records, sure. But not perfect. And none of them have been around as long as the AFS network I worked on. And good luck plugging into that AFS network now for modern data access – I am sure they have since moved away from it entirely through another costly migration. So the idea that a currently running storage solution will remain exactly as-is, unchanged enough that the URI in an NFT pointing to it continues to work, is a huge stretch. No one can know that, and if anything history leans against it.
All these risks add up and compound over the years. But through all this there is another component that needs to be accessible for any of it to matter – the NFT itself, the string of code written into the blockchain and shared amongst every node and constantly verified. If the blockchain goes away then it doesn’t matter if the underlying file survives, the NFT is gone. So that presents us with a unique opportunity to sidestep the question of coherence entirely and give both parts of the asset the same risk profile – blockchain storage.
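The "risks add up and compound" point can be put in rough numbers. The failure rate below is purely an assumption for illustration, not a measured statistic:

```python
# Illustrative only: assume a small, independent chance each year that the
# off-chain host disappears (unpaid bill, shutdown, failed migration, dead
# protocol). Even a modest annual risk compounds badly over decades.
annual_survival = 0.98  # i.e. an assumed 2% chance of loss per year
years = 50

p_alive = annual_survival ** years  # probability the asset is still reachable
print(f"Chance the off-chain asset survives {years} years: {p_alive:.1%}")
# A 2%/year risk leaves only about a one-in-three chance of survival,
# while the on-chain token half persists the entire time.
```

Whatever the real per-year number is, the exponent is what matters: the off-chain half faces fifty consecutive chances to fail, and the pointer only has to break once.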
Now, blockchains are meant to store data. That’s what they do. Normally this data is long strings of hashes representing chain state, coins, transactions, and wallet addresses. “Smart chains” like Chia or Ethereum also store code that is executed when transactions interact with them. This is all just data storage, so the argument isn’t whether data should be stored on the blockchain, it’s about which data should be stored there.
To a lot of people NFT assets aren’t valuable enough, and that is where the main disagreement comes in, because value is not an objective measurement; value is subjective to the beholder. Clearly people are spending upwards of a million US dollars on these things, and so value them greatly. I think to those people, who are equal participants in the public blockchain, storing their asset as permanently and securely as possible is worth the block space.
The argument against is that by storing it there you are “forcing” thousands of nodes to store and replicate that image forever. And you are – that is the point. Without that there is no way to ensure the image data will last as long as the pointer. This argument applies to anything: any transaction or smart contract. Everyone is forced to store all of it on full nodes, because that’s how a distributed, permissionless database works.
I predict that in the next few decades, if NFTs are not just a passing fad, the 404 errors that begin to plague popular NFTs as the underlying images go offline are going to become a gigantic issue. Assets currently worth millions will instantly be worth nothing, as there is no value in holding ownership of a dead link. And the fact that the assets are worth millions does nothing to incentivize all the work it is going to take to move those images around and keep them online over the decades. It’s not the original creators, who will all die at some point, who need the assets to stay online; it is the people who eventually own them. For the most popular projects this will take a while, and effort will be put into it. But how many companies are still around and doing the same thing from the 1600s? Meanwhile, we still have art from the 1600s, and it’s VERY valuable. Because it survived.
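The 404 scenario above is easy to sketch. This is a hypothetical audit tool, not anything that exists today: it walks the URIs recorded on-chain and reports the ones whose assets no longer resolve. The fetch function is injected (stubbed here with a dictionary) so the sketch stays offline; a real version might issue an HTTP HEAD request per URI.

```python
from typing import Callable, Dict, List

def find_dead_nfts(uris: List[str], status_of: Callable[[str], int]) -> List[str]:
    """Return the URIs whose underlying asset no longer resolves.
    `status_of` maps a URI to an HTTP-style status code; anything
    other than 200 means the token now points at nothing."""
    return [u for u in uris if status_of(u) != 200]

# A stubbed world, decades after minting (hypothetical hosts):
statuses: Dict[str, int] = {
    "https://old-startup.example/1.png": 404,  # company folded
    "https://cloud.example/2.png": 200,        # someone still pays the bill
    "https://ipfs.example/Qm3.png": 404,       # last pinning node retired
}
dead = find_dead_nfts(list(statuses), statuses.get)
print(dead)  # these tokens still trade on-chain, but point at nothing
```

The asymmetry is the whole problem: the tokens in the `dead` list are still perfectly valid on the blockchain; only their value has evaporated.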
If NFTs truly are another long term stage of art creation, like many expect them to be, then I think they must be stored together – pointer and asset – otherwise the risk makes them ephemeral data, and they should not be treated as permanent. But NFTs stored on-chain will last as long as the chain itself. The Space Marmots stored on Chia will last, intact, as long as Chia itself does. And Chia plans to be around a long time.