Yesterday there was a weird “network event” on the Chia Network: the netspace dipped slightly, there were fewer signage points, and the pool netspaces dropped a little. This wasn’t an event comparable to the dust storm, and it is unlikely to have caused serious issues for many folks, but it was enough to be noticeable and warrant an investigation from Chia Network engineers.
When you look at the 24-hour chart it does look like something was slowly taking down nodes, to the tune of a little under 1 exbibyte (EiB). However, when you look at the 48-hour chart it looks like little more than a big spike of noise and not some disaster on the network, especially when you consider that even the lowest point of the drop was still higher than the netspace from a few days ago.
After talking to Chia Network, I learned they have a couple of theories: one that is easy to address and one that is a little more difficult. The second is that there have been issues in AWS, either with the nodes being run by the pools or with the virtual networking connecting them to the rest of the internet. I don’t find this a particularly compelling argument, as an AWS problem of that scope would likely have caused a lot more issues around the world than just with Chia. But if it is the problem, then solving it is a matter of pool architecture and reducing reliance on a single AWS region, which is neither cheap nor easy.
Their primary theory, though, is that this is similar to the issues the dust storm caused on nodes running versions earlier than 1.2.11: those nodes fall a little behind when a transaction spike hits, in this case one related to the new issuances of tokens on the network. And once a node running older software gets behind, it doesn’t have the recent optimizations necessary to easily catch back up. It wouldn’t take a whole lot of nodes sitting slightly behind to cause netspace dips, so the solution here is to take advantage of those optimizations by upgrading your Chia node software.
Some older versions of the Chia software have been found to have trouble syncing up to the head of the chain during larger, specifically CAT-related, traffic spikes. Once the bug presents itself, these nodes find themselves unable to catch back up to the head of the chain after they fall behind. This is a bug that is hard to identify in flight, so it is highly recommended you upgrade to 1.2.11 immediately if you are not running it or are unsure about your node’s performance.
Justin England, Sr. Director of DevOps, Chia Network
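If you are unsure whether your node is keeping up, you can ask it directly. What follows is a minimal sketch, assuming a default mainnet install under ~/.chia and the standard full node RPC port of 8555, that queries the full node’s get_blockchain_state RPC endpoint and reports whether the node is synced or has fallen behind the peak.

```python
# Minimal sync check against a local Chia full node's RPC interface.
# Assumes default mainnet paths and port; adjust for your own setup.
from pathlib import Path

import requests
import urllib3

# The node's RPC uses a self-signed certificate authority, so we authenticate
# with the full node's client cert and skip CA verification (and silence the
# resulting warning).
urllib3.disable_warnings()

ssl_dir = Path.home() / ".chia" / "mainnet" / "config" / "ssl" / "full_node"
cert = (
    str(ssl_dir / "private_full_node.crt"),
    str(ssl_dir / "private_full_node.key"),
)

resp = requests.post(
    "https://localhost:8555/get_blockchain_state",
    json={},
    cert=cert,
    verify=False,
)
state = resp.json()["blockchain_state"]
sync = state["sync"]

if sync["synced"]:
    print(f"Synced at peak height {state['peak']['height']}")
elif sync["sync_mode"]:
    print(f"Syncing: {sync['sync_progress_height']} of {sync['sync_tip_height']}")
else:
    print("Not synced and not syncing; the node may be stuck behind the peak")
```

The same sync status is visible from the command line with chia show -s; the RPC route is just convenient if you want to watch more than one machine.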
The mempool has not seen spikes anywhere close to where it was during the dust storm, nor is there fee pressure on Chia transactions, so we aren’t seeing massive volume or anything. But I do still recommend upgrading, and I agree with Chia that it is a good idea. In fact, I have been running 1.2.10 because I haven’t had a reason to update until now, so I am going to take this opportunity to update to the latest version right now.
I don’t know if it’s going to help, but I do know that if you haven’t upgraded yet this is a perfect excuse to take the few minutes necessary and do the update. I just completed mine; everything went fine, and my node is back up and farming.
I had to downgrade due to bugs. I don’t update every release because every update causes different problems.
Solo farming is horrible! 490 TB and no wins in over a month.