Right now there is a group of Chia users running Timelord nodes configured as “bluebox timelords” that are spending their CPU cycles to help all of us. What are these special Timelords? They are Timelord nodes configured to work back through the existing blocks of the blockchain and “compactify” them by running more computationally intensive proofs that take up less space. Basically compressing the chain, without actually compressing it. This increases the utility of the chain and decreases the sync time needed to set up new nodes. Awesome, right? Yes! Yes it is.
The problem is that it requires a ton of compute power, and the network needs more nodes! Right now, according to the Dev team, they are compacting about 0.5% of the blockchain a day. That’s awesome, but the chain is also growing, so at the current rate this will be a very slow process.
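To see why the current rate is slow, here is a minimal back-of-the-envelope sketch. It assumes the quoted 0.5%/day rate stays constant and is measured against a chain that isn’t growing; since the real chain adds new blocks every day, this is an optimistic lower bound.

```python
# Rough model of how long full compaction takes at the quoted rate.
# ASSUMPTION: 0.5% of the chain compacted per day, chain size held
# fixed (real growth makes the catch-up time longer than this).

compact_rate = 0.005               # fraction compacted per day
days_if_static = 1 / compact_rate  # days to cover the whole chain

print(f"~{days_if_static:.0f} days to compact a non-growing chain")
# → ~200 days to compact a non-growing chain
```

In other words, well over half a year even under generous assumptions, which is why more bluebox nodes are needed.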
So please, if you are running a farm on Linux and have spare CPU cycles either because you have a massive setup or are done plotting, please consider reading through those instructions and setting up a BlueBox Timelord.
There are some known issues with this kind of Timelord, and you can find answers in the #timelords channel in the public Chia Keybase. But as long as you have the equipment and expertise to run one without interfering with your farming or plotting setup, you will be doing all of us a favour by helping the compression effort catch up to the blockchain, so the Timelords can keep pace with new blocks.
The current sync time (Dec 2021) is about 40 hours, more or less irrespective of your download speed. That works out to an effective download speed of under 2 Mbps (about 250 KB/s). Also, if you monitor your node, it looks like it is getting data from one peer at a time, not from all of the peers that have a higher height than yours.
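The sub-2 Mbps figure can be sanity-checked with simple arithmetic. This sketch assumes a full-node database of roughly 35 GB, a plausible figure for Dec 2021 that is not stated in the post:

```python
# Effective sync bandwidth implied by a 40-hour sync.
# ASSUMPTION: blockchain db of ~35 GB (not stated in the post).

DB_SIZE_BYTES = 35e9   # assumed database size
SYNC_HOURS = 40        # sync time quoted above

seconds = SYNC_HOURS * 3600
bytes_per_second = DB_SIZE_BYTES / seconds
kb_per_s = bytes_per_second / 1000       # KB/s
mbps = bytes_per_second * 8 / 1e6        # megabits per second

print(f"~{kb_per_s:.0f} KB/s, ~{mbps:.1f} Mbps")
# → ~243 KB/s, ~1.9 Mbps
```

That lines up with the observed rate: even a modest home connection could download the raw data several times faster, which points at the sync process itself as the bottleneck.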
Therefore, it appears that sync time is not really a function of the physical blockchain size, but rather of the number of records it holds and how syncing is handled by your node: how efficiently the db is handled, how efficiently the download runs, and how efficiently those two processes are coordinated.
At the moment, more and more nodes are being pushed to host the blockchain db on NVMe drives, which implies we are close to the limits of how well those nodes can handle syncing. So let’s assume blockchain compression gives us 10% better handling: is that really going to make a difference? Or are we just pushing the problem a bit further down the road?