Enshittification marches on
Google never made backups of the Internet, so why are we pretending it ever did? Cached webpages were a basic workaround for third-party website downtime: a way to reliably see the information you searched for even if the linked site was down. Each cached page was nothing more than a snapshot of whatever the crawler last saw, with the older copy permanently overwritten on every new crawl.
It was never an archival effort; it was a rotating cache. If you were under the impression all these years that Google was preserving Internet history, I don’t know why, because Google never claimed to be doing that. Maybe it’s time to reevaluate any other altruistic things you assume megacorporations are up to…
If possible, please use the Internet Archive’s browser extension and upload pages that have never been archived, or haven’t been archived in the last year.
Likewise, if you know of or use another archiving service, save pages there too!
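For anyone who’d rather script the “never archived, or not in the last year” check than eyeball it, here’s a rough Python sketch. It uses the Wayback Machine’s public availability API and the unauthenticated Save Page Now endpoint; the function names and the one-year cutoff are my own choices, not anything official, and I’m assuming the availability API’s default response (no timestamp parameter) points at the newest snapshot:

```python
# Rough sketch, not an official tool: check the Wayback Machine for a recent
# snapshot of a URL and request a fresh capture if there isn't one.
from datetime import datetime, timedelta, timezone

import requests

AVAILABILITY_API = "https://archive.org/wayback/available"
SAVE_ENDPOINT = "https://web.archive.org/save/"


def latest_snapshot_time(url: str) -> datetime | None:
    """Return the timestamp of the newest Wayback snapshot, or None if none exists."""
    resp = requests.get(AVAILABILITY_API, params={"url": url}, timeout=30)
    resp.raise_for_status()
    # With no timestamp parameter, "closest" should be the most recent capture.
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if not closest or not closest.get("available"):
        return None
    # Wayback timestamps look like "20240131235959" (UTC).
    return datetime.strptime(closest["timestamp"], "%Y%m%d%H%M%S").replace(
        tzinfo=timezone.utc
    )


def archive_if_stale(url: str, max_age_days: int = 365) -> None:
    """Request a capture if the page was never archived or the snapshot is old."""
    snapshot = latest_snapshot_time(url)
    if snapshot and datetime.now(timezone.utc) - snapshot < timedelta(days=max_age_days):
        print(f"{url}: fresh snapshot from {snapshot:%Y-%m-%d}, skipping")
        return
    print(f"{url}: no recent snapshot, requesting capture...")
    requests.get(SAVE_ENDPOINT + url, timeout=120)


if __name__ == "__main__":
    archive_if_stale("https://example.com/")
```

The unauthenticated save endpoint is rate-limited, so space out requests if you’re feeding it a whole bookmarks file; the authenticated Save Page Now API is the better fit for bulk jobs.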
Use SearXNG, which still gives you a cached option (via the Internet Archive). If no snapshot exists, you get the option to make a new one.
Keeping records of the things bad people say and do would be considered “not being evil,” so it makes sense.
Well, surely this means that archive.org will be allowed to exist in peace, since it would be ridiculous to make the information and culture produced in the year of our lord 20fucking24 the most ephemeral it has ever been in human history, right?
Right?
Dicks.
I wonder if this is related to why their searches have been going to hell. Like they changed how the engine indexes pages or something.
I feel like this is so they can later deny that they fed all the webpages they cached into their ‘AI’ training datasets when someone accuses them of it. Now, when asked about the copies of webpages they have, they can just go “What copies?” and end the conversation there.
I noticed this yesterday when I tried to load a cached version of a site. How disappointing.
Three guesses as to whether they even attempted to donate this data to the Internet Archive/Wayback Machine, and the first two don’t count.
Google’s cached content is pruned into a space-saving format and rotated out or deleted in under a year, so it would be pretty worthless to the IA.
The Internet Archive likely couldn’t handle it anyway. They’re already struggling as it is, and dumping a few petabytes of cached copies of the entire Internet onto them probably wouldn’t help.
This is the best summary I could come up with:
Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off.
The feature has been appearing and disappearing for some people since December, and currently, we don’t see any cache links in Google Search.
Cached links used to live under the drop-down menu next to every search result on Google’s page.
As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing.
That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data.
In 2020, Google switched its crawler to mobile-by-default, so, for instance, visiting a cached Ars Technica page would get you the mobile site.
The original article contains 438 words, the summary contains 139 words. Saved 68%. I’m a bot and I’m open source!
You can’t cache stuff; politicians and the media need ways to delete content whenever they please.