Epstein Files Jan 30, 2026
Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.
Please seed all torrent files to distribute and preserve this data.
Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK
Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK
Epstein Files Data Set 9 (Incomplete). Only contains 49 GB of 180 GB. Multiple reports of cutoff from DOJ server at offset 48995762176.
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions
- TORRENT MAGNET LINK (removed due to reports of CSAM)
Epstein Files Data Set 10 (78.64 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
- INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
- INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)
Epstein Files Data Set 11 (25.55 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 574950c0f86765e897268834ac6ef38b370cad2a
Epstein Files Data Set 12 (114.1 MB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
This list will be edited as more data becomes available, particularly with regard to Data Set 9. (EDIT: NOT ANYMORE)
EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.
Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.
I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.
Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files
Here is the download link for a text file that has all the original URLs: https://wormhole.app/PpjJ3P#SFfAOKm1bnCyi-h2YroRyA The link will only last for 24 hours.
I have never made a torrent file before, so feel free to correct me if it doesn’t work. Here is the magnet link for this as a torrent file so it’s up for more than an hour: magnet:?xt=urn:btih:694535d1e3879e899a53647769f1975276723db7&xt=urn:btmh:12207cf818f0f0110ca5e44614f2c65e016eca2fe7bc569810f9fb25e80ff608fc9b&dn=DOJ%20Epstein%20file%20urls.txt&xl=81991719&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
Has anyone made a Dataset 9 and 10 torrent file without the files in it that the NYT reported as potentially CSAM?
Do people here have the partial dataset 9, or are you all missing the entire set? There is a magnet link floating around for ~100 GB of it, the one removed in the OP.
I am trying to figure out exactly how many files dataset 9 is supposed to have in it. Before the zip file went dark, I was able to download about 2 GB of it. This was today, so maybe not the original zip file from Jan 30th. At the head of the zip file is an index file, VOL00009.OPT; you don’t need the full download in order to read this index file. The index file says there are 531,307 PDFs, and the 100 GB torrent has 531,256, so it’s missing 51 PDFs. I checked the 51 file names and they no longer exist as individual files on the DOJ website either. I’m assuming these are the CSAM.
Note that the 3M number of released documents != 3M PDFs; each PDF page is counted as a “document”. Dataset 9 contains 1,223,757 documents, and according to the index we are missing only 51 documents, none of them multi-page. In total, I have 2,731,789 documents from datasets 1-12, short of the 3M number. The index I got was also not missing any document ranges.
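For anyone who wants to reproduce these counts, here’s a minimal sketch of reading the OPT index. It assumes the standard Opticon load-file layout, one comma-separated line per Bates-numbered page with the document-break flag in the fourth field; that layout is an assumption, so eyeball the file first:

```python
# Minimal sketch: count pages, documents, and unique PDFs in an
# Opticon (.OPT) load file. Assumed standard 7-field layout:
#   BatesNumber,Volume,ImagePath,DocBreak(Y|blank),Box,Folder,PageCount
import csv

def summarize_opt(path):
    pages = 0     # one line per Bates-numbered page
    docs = 0      # 'Y' in the DocBreak field starts a new document
    pdfs = set()  # unique image paths = unique PDF files
    with open(path, newline="", encoding="utf-8", errors="replace") as fh:
        for row in csv.reader(fh):
            if not row:
                continue
            pages += 1
            if len(row) > 3 and row[3].strip().upper() == "Y":
                docs += 1
            pdfs.add(row[2].strip())
    return pages, docs, len(pdfs)

pages, docs, unique = summarize_opt("VOL00009.OPT")
print(f"{pages:,} pages, {docs:,} documents, {unique:,} unique PDFs")
```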
It’s curious that the zip file had an extra 80 GB when only 51 documents are missing. I’m currently scraping links from the DOJ webpage to double-check the filenames.
I used AI to analyze the ~36 GB I was able to download before they erased the zip file from the server.
Complete Volume Analysis
Based on the OPT metadata file, here’s what VOL00009 was supposed to contain:
Full Volume Specifications
- Total Bates-numbered pages: 1,223,757
- Total unique PDF files: 531,307 individual PDFs
- Bates number range: EFTA00039025 to EFTA01262781
- Subdirectory structure: IMAGES\0001\ through IMAGES\0532\ (532 folders)
- Expected size: ~180 GB (based on your download info)
What You Actually Got
- PDF files received: 90,982
- Subdirectories: 91 folders (0001 through ~0091)
- Current size: 37 GB
- Percentage received: ~17% of the files (91 of 532 folders)
The Math
- Expected: 531,307 PDF files / 180 GB / 532 folders
- Received: 90,982 PDF files / 37 GB / 91 folders
- Missing: 440,325 PDF files / 143 GB / 441 folders
Insight: you got approximately the first 17% of the volume before the server deleted it. The good news is that the DAT/OPT index files are complete, so you have a full manifest of what should be there. This means you know exactly which documents are missing (folders 0092-0532).
I haven’t looked into downloading the partials from archive.org yet to see if I have any useful files that archive.org doesn’t have yet from dataset 9.
That’s pretty cool…
Can you send me a DM of the 51? If I come across one and it isn’t some sketchy porn I’ll let you know.
I have heard it’s 186 GB.
The BBC is now reporting that “thousands” of documents have been removed because the DOJ improperly redacted information that can be used to identify the victims: https://www.bbc.com/news/articles/cn0k65pnxjxo
In regard to Dataset 9, it’s currently being shared on Dread (forum).
I have no idea if it’s legit or not, and Idc to find out after reading about what’s in it from NYT.
Where? I don’t see it here: https://dreadytognbh7m5nlmqsogzzlxjy75iuxkulewbhxcorupbqahact2yd.onion/
This dude on pastebin posted the file tree of his Epstein Ubuntu env. I have high confidence in whatever lives in his DataSet9Complete.zip file, haha.
No doubt. High confidence…. :)
@wild_cow_5769:matrix.org, in case someone has a group working on finding the dataset.
There are billions of people on earth. Someone downloaded dataset 9 before the link was taken down. We just have to find them :)
Someone mentioned a Matrix group. Can they DM and invite me? I want to help. Thx
Count me in!
same
Holy shit
The entire Court Records and FOIA page is completely gone too! Fuckers!
Have a scraper running on web.archive.org pulling all previously posted Court Records and FOIA material (docs, audio, etc.) from Jan 30th.
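For anyone running a similar scrape, here’s a minimal sketch against the Wayback Machine’s CDX API; the justice.gov path prefix below is an assumption, so point it at the actual Court Records / FOIA URL prefixes:

```python
# Minimal sketch: list Wayback Machine captures via the CDX API,
# then print direct snapshot URLs for a downloader to fetch.
import requests

CDX = "https://web.archive.org/cdx/search/cdx"

def captures(prefix, since="20260130"):
    params = {
        "url": prefix,
        "matchType": "prefix",
        "output": "json",
        "from": since,
        "collapse": "urlkey",      # one row per unique URL
        "fl": "timestamp,original,statuscode",
    }
    resp = requests.get(CDX, params=params, timeout=60)
    rows = resp.json() if resp.text.strip() else []
    return rows[1:]                # first row is the field header

for ts, url, status in captures("justice.gov/epstein/"):
    if status == "200":
        # the id_ flag serves the original bytes, not the Wayback wrapper
        print(f"https://web.archive.org/web/{ts}id_/{url}")
```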
I told you…
We need dataset 9…
While I feel hopeful that we will be able to reconstruct the archive and create some sort of baseline that can be put back out there, I also can’t stop thinking about the “and then what” aspect here. We’ve seen our elected officials do nothing with this info over and over again, and I’m worried this is going to repeat itself.
I’m fully open to input on this, but I think having a group path forward is useful here. These are the things I believe we can do to move the needle.
Right Now:
- Create a clean Data Archive for each of the known datasets (01-12). Something that is actually organized and accessible.
- Create a working Archive Directory containing an “itemized” reference list (SQL DB?) of the full Data Archive, with each document listed as a row with certain metadata. Imagining a GitHub repo that we can all contribute to as we work (see the schema sketch after this list):
  – File number
  – Dir. location
  – File type (image, legal record, flight log, email, video, etc.)
  – File status (Redacted bool, Missing bool, Flagged bool)
- Infill any MISSING records where possible.
- Extract images out of the PDF format, break out the “multi-file” PDFs, and rename images/docs by file number. (I made a quick script that does this reliably well.)
- Determine which files were flagged as CSAM and “redact” them ourselves, removing any liability on our part.
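If the SQL DB idea lands, here is one possible starting shape, sketched as SQLite; the table and column names are just guesses at the fields listed above, so swap in whatever the group standardizes on:

```python
# Minimal sketch of the proposed Archive Directory as a SQLite DB.
# Names are illustrative, mirroring the metadata fields above.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS documents (
    file_number  TEXT PRIMARY KEY,   -- e.g. EFTA00326497
    dataset      INTEGER NOT NULL,   -- 1..12
    dir_location TEXT,               -- e.g. IMAGES/0042/
    file_type    TEXT,               -- image, legal record, flight log, ...
    redacted     INTEGER DEFAULT 0,  -- bool
    missing      INTEGER DEFAULT 0,  -- bool
    flagged      INTEGER DEFAULT 0,  -- bool
    sha256       TEXT                -- ties each row to a verifiable file
);
CREATE INDEX IF NOT EXISTS idx_documents_dataset ON documents(dataset);
"""

con = sqlite3.connect("archive_directory.db")
con.executescript(SCHEMA)
con.commit()
con.close()
```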
What’s Next: Once we have the Archive and Archive Directory, we can begin safely and confidently walking through the Directory as a group effort and fill in as many files/blanks as possible.
- Identify and de-redact all documents with garbage redactions (remember the copy/paste DOJ blunders from December) & identify poorly positioned redaction bars to uncover obfuscated names
- LABELING! If we could start adding labels to each document in the form of tags that contain individuals, emails, locations, businesses - This would make it MUCH easier for people to “connect the dots”
- Event Timeline… This will be hard, but if we can apply a timeline ID to each document, we can put the archive in order of events
- Create some method for visualizing the timeline, searching, or making connections with labels.
We may not be detectives, legislators, or lawmen, but we are sleuth nerds, and the best thing we can do is get this data into a place that allows others to push for justice and put an end to this crap once and for all. It’s lofty, I know, but enough is enough. …Thoughts?
We definitely need a crowdsourced method for going through all the files. I am currently building a solo Cytoscape tool to try making an affiliation graph, but expanding this into a tool for a community, with authorization so only whitelisted individuals can work on it, is beyond my scope, and I can’t volunteer to make such an important tool alone. I am happy to offer my help building it, though, and I can convert my existing tool to a prototype if anyone wants to collaborate with me on it. I am an amateur, but I will spend all the Cursor credits on this.
GFD….
My 2 cents. As a father of only daughters…
If we don’t weed out this sick behavior as a society we never will.
My thoughts are enough is enough.
Once the files are gone there is little to no chance they are ever public again…
You expect me to believe that the “oh shit, we messed up” was an accident?
It’s the perfect excuse… so no one looks at the files.
That’s my 2 cents.
I’ve been thinking a lot about this whole thing. I don’t want to be worried or fearful here - we have done nothing wrong! Anything we have archived was provided to us directly by them in the first place. There are whispers all over the internet, random torrents being passed around, conspiracies, etc., but what are we actually doing other than freaking ourselves out (myself at least) and going viral with an endless stream of “OMG LOOK AT THIS FILE” videos/posts?
I vote to remove any of the ‘concerning’ files and backfill with blank placeholder PDFs with justification, then collect everything we have so far, create file hashes, and put out a clean, stable, safely indexed archive of everything we have so far. That wipes away any concerns, and we can proceed methodically through the trail of documents, resulting in an obvious and accessible collection of evidence. From there we can actually start organizing to create a tool that can be used to crowdsource tagging, timestamping, and parsing the data. I’m a developer and am happy to offer my skillset.
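On the “create file hashes” step specifically, here’s a minimal sketch of a manifest builder; the dataset path is illustrative:

```python
# Minimal sketch: walk a dataset tree, hash every file, and write a
# manifest CSV that can be published alongside the archive.
import csv
import hashlib
import os

def sha256_of(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(root, out_csv="manifest.csv"):
    with open(out_csv, "w", newline="") as out:
        w = csv.writer(out)
        w.writerow(["relative_path", "bytes", "sha256"])
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                w.writerow([rel, os.path.getsize(full), sha256_of(full)])

write_manifest("DataSet01")  # run once per dataset, then publish the CSVs
```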
Taking a step back: it’s fun to do the “digital sleuth” thing for a while, but then what? We have the files… (mostly)… great. We all have our own lives, jobs, and families, and taking actual time to dig into this and produce a real solution that can actually make a difference is a pretty big ask. That said, this feels like a moment where we finally can make an actual difference, and I think it’s worth committing to. If any of you are interested in helping beyond archival, please lmk.
I just downloaded Matrix, but I’m new to this, so I’m not sure how it all works. Happy to link up via Discord, Matrix, email, or whatever.
PSA: the paging bug has been fixed on the DOJ’s website. The site caps out at around page 9600, for ~197k files, way less than the 520k in the less-complete dataset 9 torrent. Scraping the website now to find out which files they took offline.
Correction: 9600 pages × 50 files per page is in the 480k ballpark. Much more than 197k, but still a lot less than the torrent’s 530k, let alone the expected 600k+ files that were supposed to be in there.
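For anyone who wants to run the same kind of scrape, a minimal sketch; the listing URL and the ?page= parameter are assumptions based on this thread, not a confirmed DOJ endpoint, so adjust to what the site actually serves:

```python
# Minimal sketch of the pagination scrape: walk the listing pages,
# collect EFTA-style PDF names, and dedupe across pages.
import re
import time
import requests

LISTING = "https://www.justice.gov/epstein"   # assumed listing URL
EFTA = re.compile(r"EFTA\d{8}")

def scrape(last_page=9600):
    seen = set()
    for page in range(last_page + 1):
        resp = requests.get(LISTING, params={"page": page}, timeout=60)
        resp.raise_for_status()
        seen.update(EFTA.findall(resp.text))
        time.sleep(0.5)                        # be polite to the server
    return seen

print(f"{len(scrape()):,} unique file names exposed via pagination")
```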
Can you explain to me what the exact problem is? Is it dataset 9? When I press the dataset 9 link on the DOJ gov site, I see a download start for a 180 GB zip file in the browser.
Yea, for me it fails after anywhere between 200 MB and 10-15 GB. Every time.
Same. Every damn time.
And what is the solution?
F if I know. I’ve been messing with it for days. I’ve tried chunking, different scripts, different cookies.
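For what it’s worth, here’s a minimal sketch of the chunking idea as resume-on-drop downloading with HTTP Range requests. Whether the DOJ server actually honors Range headers is not confirmed in this thread:

```python
# Minimal sketch: retry a large download, resuming from the current
# file size via a Range header after each dropped connection.
import os
import time
import requests

def resume_download(url, dest, chunk=1 << 20):
    while True:
        have = os.path.getsize(dest) if os.path.exists(dest) else 0
        headers = {"Range": f"bytes={have}-"} if have else {}
        try:
            with requests.get(url, headers=headers, stream=True,
                              timeout=120) as resp:
                resp.raise_for_status()
                # 206 = server resumed at our offset; 200 = full restart
                mode = "ab" if resp.status_code == 206 else "wb"
                with open(dest, mode) as fh:
                    for part in resp.iter_content(chunk):
                        fh.write(part)
            return                    # a pass completed without dropping
        except requests.RequestException:
            time.sleep(10)            # back off, then retry at new offset
```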
We’re working on some more complex solutions in an Element group. Not really sure where we stand at this moment, but it seems we can stitch a lot together from the large torrent files and from what we scraped from the DOJ’s website through a little bit of force.
Are you going after DS9?
trying
Check Available Pieces for the torrents. My guess is that you’ll see half of them are missing and unavailable.
where did the party move?
It’s still here. No one dropped a complete dataset 9 yet tho…
Hasn’t moved AFAIK, just going slowly.
This entire thing smells funny. Even OP turned ghost at the threat of suspect images that no one has seen…
Ask yourself: how did the Times, or whoever came up with this narrative, even find these “suspect” images in a few hours when it seems no one in the world could even download the zip…
A person made a website just to host links and thumbnails for a better interface to the videos on the DoJ website.
They deleted everything including their account the same day.
Everyone, I know the website is showing all blank. This is unfortunately the end of my little project. Due to certain circumstances, I had to take it down. Thank you everyone for supporting me and my effort.
Edit: Link
Link is dead
It still works for me. I can only see the comments on the post since it was deleted, but that’s what’s important here.
Some bad news: it looks like the dataset 9 zip file link doesn’t work anymore. They appear to have removed the file, so my download stopped at 36 GB. I’m not familiar with their site, so is it normal for them to remove the files and maybe put them back again once they’ve reorganized them, at the same link location? Or do we have to scrape each PDF like another user has been doing?
All the zip files are gone on the DOJ website. The links are gone.
Does anyone have the OTHER data sets from before? I’ve been lasered in on DS1-DS12 but haven’t looked at the other documents at all.
this is ridiculous. Good thing we got in when we did!
Need dataset 9
Me too. You know any place to get it?
All the zip download links are gone on the DOJ website.
It’s only a matter of time before all the files just go poof.
Is anyone else having issues getting dataset 1011* to start downloading? It has been sitting at 0 percent for a day while everything else is done and seeding. It shows connections to peers; rechecking does nothing, deleting and re-adding does nothing, asking the tracker for more peers does nothing.
I have been seeding all of the datasets since Sunday. The copy of set 9 has been the busiest, with set 10 a distant second. I plan on seeding them for quite a while yet, and also picking up a consolidated torrent when that becomes available.
Hopefully you are able to get connected via the swarm.
Is there something I am missing on why it isn’t connected, given how much time and how many attempts to redo it? Or is it just an “eventually” thing?
I’m getting errors for 1 and 8, all the rest went smooth.

I am not seeing any errors; it has just been stuck on “downloading” status with nothing going through. I originally added everything around the same time and all the other ones went through fine. I figured it was bugged or something, so I removed then re-added it several times to no avail. I am not sure what else to try.
It’s really strange, because on my other machine everything’s going fine.

read the OP
Regardless of OP removing the magnet links or not, the torrents are still out there, and that shouldn’t stop it. Secondly, I meant 11.

What is the name of the software you use for torrents?
Transmission
thank you
Ok everyone, I have done a complete indexing of the first 13,000 pages of the DOJ Data Set 9.
KEY FINDING: 3 files are listed but INACCESSIBLE
These appear in DOJ pagination but return error pages - potential evidence of removal:
EFTA00326497
EFTA00326501
EFTA00534391
You can try them yourself (they all fail):
https://www.justice.gov/epstein/files/DataSet 9/EFTA00326497.pdf
The 86 GB torrent is ~7x more complete than the DOJ website:
DOJ website exposes: 77,766 files
Torrent contains: 531,256 files
Page Range Min EFTA Max EFTA New Files
0-499 EFTA00039025 EFTA00267311 21,842
500-999 EFTA00267314 EFTA00337032 18,983
1000-1499 EFTA00067524 EFTA00380774 14,396
1500-1999 EFTA00092963 EFTA00413050 2,709
2000-2499 EFTA00083599 EFTA00426736 4,432
2500-2999 EFTA00218527 EFTA00423620 4,515
3000-3499 EFTA00203975 EFTA00539216 2,692
3500-3999 EFTA00137295 EFTA00313715 329
4000-4499 EFTA00078217 EFTA00338754 706
4500-4999 EFTA00338134 EFTA00384534 2,825
5000-5499 EFTA00377742 EFTA00415182 1,353
5500-5999 EFTA00416356 EFTA00432673 1,214
6000-6499 EFTA00213187 EFTA00270156 501
6500-6999 EFTA00068280 EFTA00281003 554
7000-7499 EFTA00154989 EFTA00425720 106
7500-7999 (no new files - all wraps/redundant)
8000-8499 (no new files - all wraps/redundant)
8500-8999 EFTA00168409 EFTA00169291 10
9000-9499 EFTA00154873 EFTA00154974 35
9500-9999 EFTA00139661 EFTA00377759 324
10000-10499 EFTA00140897 EFTA01262781 240
10500-12999 (no new files - all wraps/redundant)
TOTAL UNIQUE FILES: 77,766
Pagination limit discovered: page 184,467,440,737,095,516 (2^64/100)
I searched random pages between 13k and this limit - NO new documents found. The pagination is an infinite loop. All work at: https://github.com/degenai/Dataset9
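For anyone reproducing this from a local copy, a minimal sketch that diffs the OPT manifest against the PDFs actually on disk; paths are illustrative, and note the OPT’s first field is the page-level Bates number, which matches the PDF filename only for single-page documents (the gaps discussed here are single-page):

```python
# Minimal sketch: list Bates numbers present in the OPT index but
# missing as PDF files in a local copy of the torrent.
import os

def efta_ids_from_opt(opt_path):
    with open(opt_path, encoding="utf-8", errors="replace") as fh:
        return {line.split(",", 1)[0].strip() for line in fh if line.strip()}

def efta_ids_on_disk(root):
    return {os.path.splitext(name)[0].upper()
            for _, _, files in os.walk(root)
            for name in files if name.lower().endswith(".pdf")}

missing = efta_ids_from_opt("VOL00009.OPT") - efta_ids_on_disk("DataSet9/IMAGES")
for bates in sorted(missing):
    print(bates)
```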
DOJ Epstein Files: I found what’s around those 3 missing files (Part 2)
Follow-up to my Dataset 9 indexing post. I pulled the adjacent files from my local copy of the torrent. What I found is… notable.
TLDR
The 3 missing files aren’t random corruption. They all cluster around one event: Epstein’s girlfriend Karyna Shuliak leaving St. Thomas (the island) in April 2016. And one of the gaps sits directly next to an email where Epstein recommends a novel about a sympathetic pedophile to her, two days before the book was publicly released.
The Big Finding: Duplicate Processing Batches
Two of the missing files (326497 and 534391) are the same document processed twice—once with redactions, once without—208,000 files apart in the index.
Redacted Batch   | Unredacted Batch | Content
326494-326496    | 534388-534390    | AmEx travel booking, staff emails
326497 (MISSING) | 534391 (MISSING) | ???
326498-326500    | —                | Email chain continues
326501 (MISSING) | —                | ???
326502-326506    | —                | Reply + invoice
—                | 534392           | Epstein personal email
Random file corruption hitting the same logical document in two separate processing runs, 208,000 positions apart? That’s not how corruption works. That’s how removal works.
What’s Actually In These Files
I pulled everything around the gaps. It’s all one email chain from April 10, 2016:
The event: Karyna Shuliak (Epstein’s girlfriend) booked on Delta flight from Charlotte Amalie, St. Thomas → JFK on April 13, 2016.
St. Thomas is where you fly in/out to reach Little St. James. She was leaving the island.
The chain:
- 11:31 AM — AmEx Centurion (black card) sends confirmation to lesley.jee@gmail.com
- 11:33 AM — Lesley Groff (Epstein’s executive assistant) forwards to Shuliak, CC’s staff
- 11:35 AM — Shuliak replies “Thanks so much”
- 3:52 PM — Epstein personally emails Shuliak
- Next day — AmEx sends invoice
The unredacted batch (534xxx) reveals the email addresses that are blacked out in the redacted batch (326xxx):
- Lesley Groff: lesley.jee@gmail.com
- Ann Rodriquez: annrodriquez@yahoo.com
- Bella Klein: bklein575@gmail.com
- Karyna Shuliak: karynashuliak@icloud.com
The Epstein Email (EFTA00534392)
The document immediately after missing file 534391:
From: "jeffrey E." <jeevacation@gmail.com> To: Karyna Shuliak Date: Sun, 10 Apr 2016 19:52:13 +0000 order http://softskull.com/dd-product/undone/He’s telling her to buy a book. The same day she’s being booked to leave his island.
The Book
“Undone” by John Colapinto (Soft Skull Press)
On-sale date: April 12, 2016
Epstein’s email: April 10, 2016
He recommended it two days before public release.
Publisher’s description:
“Dez is a former lawyer and teacher—an ephebophile with a proclivity for teenage girls, hiding out in a trailer park with his latest conquest, Chloe. Having been in and out of courtrooms (and therapists’ offices) for a number of years, Dez is at odds with a society that persecutes him over his desires.”
The protagonist is a pedophile who resents society for judging him.
The author (John Colapinto) is a New Yorker staff writer, former Vanity Fair and Rolling Stone contributor. Exactly the media circles Epstein cultivated.
What’s Missing
So now we know the context:
- EFTA00326497 — Between the AmEx confirmation and Groff’s forward. Probably the PDF ticket attachment referenced in the emails.
- EFTA00326501 — Between the forward chain and Shuliak’s reply. Unknown.
- EFTA00534391 — Immediately before Epstein’s personal email about the pedo book. Unknown, but its position is notable.
Open Questions
- How did Epstein have this book before release? Advance copy? Does he know the author?
- What is 534391? It sits between staff logistics emails and Epstein’s direct correspondence. Another Epstein email? An attachment?
- Are there other Shuliak travel records with similar gaps? Is April 2016 unique, or part of a pattern?
- What else is in the corpus from jeevacation@gmail.com?
Verify It Yourself
Try the DOJ links (all return errors):
- https://www.justice.gov/epstein/files/DataSet 9/EFTA00326497.pdf
- https://www.justice.gov/epstein/files/DataSet 9/EFTA00326501.pdf
- https://www.justice.gov/epstein/files/DataSet 9/EFTA00534391.pdf
Check the torrent: Pull the EFTA numbers I listed. Confirm the gaps. Confirm the adjacencies.
Grep the corpus: search for “QWURMO” (booking reference), “Shuliak”, “jeevacation”, “Colapinto”.
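A minimal sketch of that grep step, using PyMuPDF to pull text out of the PDFs (pip install pymupdf). Scanned pages without a text layer will need OCR, so treat misses as inconclusive; the corpus path is illustrative:

```python
# Minimal sketch: extract text from each PDF in the corpus and report
# which files mention any of the search strings above.
import os
import fitz  # PyMuPDF

NEEDLES = ("QWURMO", "Shuliak", "jeevacation", "Colapinto")

def grep_corpus(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".pdf"):
                continue
            path = os.path.join(dirpath, name)
            try:
                doc = fitz.open(path)
            except Exception:
                continue  # skip corrupt or partial downloads
            text = " ".join(page.get_text() for page in doc).lower()
            for needle in NEEDLES:
                if needle.lower() in text:
                    print(f"{name}: {needle}")

grep_corpus("DataSet9/IMAGES")
```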
Summary
Three files missing from 531,256. All three cluster around one girlfriend’s April 2016 departure from St. Thomas. Same gaps appear in two processing batches 208,000 files apart. One gap sits adjacent to Epstein personally recommending a novel about a sympathetic pedophile, sent before the book was even publicly available.
This isn’t random corruption.
Full analysis + all code: https://github.com/degenai/Dataset9
If anyone has the torrent and wants to grep for Colapinto connections or other Shuliak trips, please do. This is open source for a reason.
Just skimming through, and I have file 534391, but it shows “No Images Produced”. Not sure if that was your reason as well, and apologies in advance! Here’s an image of said file: (https://lemmy.world/pictrs/image/d840f280-5e32-4417-a92e-ff281582080a.png)
That is new information! I wasn’t even able to get that “No Images Produced” page, so good to know, thank you. I just hit a file corruption error when I tried to download from the DOJ. I guess this means the content is still missing in a way, but at least accounted for.
YSK the page limit has been fixed; it caps out around 9600 for a total of ~197k file entries, way less than the largest torrent’s 530k. Scraping now to get a list of the files they kept on the DOJ site so we can determine which files they don’t want out there. Would be a good lead to further investigate the torrent.
Oh no… I didn’t know this. On one hand, now I need to run another scan, but on the other, it could reveal something; the torrent has 500k+ files, so there is still a gap. I will run the scraper again and do a new analysis in the next day or two.
Just like I said… In NO way do I trust DOJ… Our only hope is if someone drops the full data set 9 somewhere.
My question is: why is the total download size so large and the range of displayed documents so small? Only 15% of the known documents are individually served on the site, and some aren’t seen until page 10,000.
It’s an effort to obscure for sure.
Yup… hopefully someone is able to get the full zip
That’s why you need the full zip…
