Yeah, some kind of fork of the torrent protocol where you can advertise “I have X amount of space to donate”, plus a mechanism that hands you the most endangered bytes on the network. It would need to be a lot more granular than torrents, to account for the vast majority of nodes not wanting, or not being able, to get to “100%”.
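The assignment side of that could be fairly simple. Here’s a rough Python sketch, assuming some coordinator that tracks replica counts per chunk; the names and the greedy strategy are all hypothetical, just to show the “give me the rarest bytes that fit in my donated space” idea:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Chunk:
        replicas: int                        # nodes currently holding this chunk
        chunk_id: str = field(compare=False)
        size_bytes: int = field(compare=False)

    def assign_chunks(chunks, donated_bytes):
        """Greedily hand a node the least-replicated chunks that fit
        in the space it advertised."""
        heap = list(chunks)
        heapq.heapify(heap)                  # min-heap: rarest chunks first
        assigned, used = [], 0
        while heap and used < donated_bytes:
            chunk = heapq.heappop(heap)
            if used + chunk.size_bytes <= donated_bytes:
                assigned.append(chunk.chunk_id)
                used += chunk.size_bytes
        return assigned

    # A node offering 1 TiB gets the most endangered chunks first:
    inventory = [
        Chunk(replicas=1, chunk_id="chunk-00af", size_bytes=4 * 2**30),
        Chunk(replicas=9, chunk_id="chunk-13c2", size_bytes=4 * 2**30),
        Chunk(replicas=2, chunk_id="chunk-77de", size_bytes=4 * 2**30),
    ]
    print(assign_chunks(inventory, donated_bytes=1 * 2**40))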
I don’t think the technical aspects are insurmountable, and there’s at least some measure of a built-in audience, in that a lot of people already run ArchiveTeam Warrior containers/VMs. But storage is just so many orders of magnitude more expensive than letting a little CPU/bandwidth-limited process run in the background. I don’t know that enough people would be willing or able to donate enough to make it viable?
~70,000 data hoarders volunteering 1 TB each to form a 1:1 backup of the current archive.org isn’t a small number of people, and that only gets you a single extra copy. But it also isn’t an outrageously large number of people.
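The arithmetic there is just archive size over per-volunteer donation (treating ~70 PB for archive.org as a ballpark assumption, not an exact figure):

    # Back-of-the-envelope: volunteers needed for one full extra copy.
    archive_bytes  = 70e15   # ~70 PB for archive.org -- a ballpark assumption
    donation_bytes = 1e12    # 1 TB per volunteer
    extra_copies   = 1       # a single 1:1 backup

    volunteers = archive_bytes * extra_copies / donation_bytes
    print(f"{volunteers:,.0f} volunteers")   # -> 70,000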
You might not have to fork BitTorrent at all. Instead, run your own layer on top that groups and breaks the data into manageable chunks of a fixed size, where each chunk is an actual, complete torrent. Then you don’t have to worry about completion levels on individual torrents, and you can rely on the protocol to do its thing.
Instead of trying to modify the protocol, modify the process you use the protocol with.
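For what it’s worth, that layer could be tiny. A rough sketch, assuming you’d build the actual .torrent files separately with any ordinary tool (the 4 GiB chunk size and naming scheme here are arbitrary choices):

    import hashlib
    from pathlib import Path

    CHUNK_SIZE = 4 * 2**30   # e.g. 4 GiB per chunk; each chunk -> one torrent

    def chunk_dataset(path: str, out_dir: str):
        """Split a large file into fixed-size chunks. Each chunk is then
        published as its own ordinary, complete torrent, so per-torrent
        completion is always 0% or 100% and the swarm logic stays stock."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        manifest = []
        with open(path, "rb") as f:
            index = 0
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                chunk_path = out / f"chunk-{index:06d}-{digest[:12]}"
                chunk_path.write_bytes(chunk)
                manifest.append((chunk_path.name, digest, len(chunk)))
                index += 1
        return manifest   # maps chunks back to the original data

The manifest is the only new piece of state you’d have to maintain; everything below it is plain BitTorrent.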