Maybe I didn't run the programs properly or thoroughly enough, but in Recuva on Win 10, I still see the sensitive file titles pop up during its recovery routine. Now if I try and open them, they won't, but the file names still remain.
Tried both Cipher and Sdelete. I ran sdelete64.exe -z E:. Cipher was cipher /w:E:
Both programs seemed to run and I felt confident they were performing their tasks, that is until I ran Recuva.
What am I doing wrong? Not enough passes? Suggestions for remedy welcome and thanks in advance.
On Sun, 3/29/2026 4:32 PM, George Garth wrote:
Maybe I didn't run the programs properly or thoroughly enough, but in Recuva on Win 10, I still see the sensitive file titles pop up during its recovery routine. Now if I try and open them, they won't, but the file names still remain.
Tried both Cipher and Sdelete. I ran sdelete64.exe -z E:. Cipher was cipher /w:E:
Both programs seemed to run and I felt confident they were performing their tasks, that is until I ran Recuva.
What am I doing wrong? Not enough passes? Suggestions for remedy welcome and thanks in advance.
Let me start by saying, I'm not surprised.
I have done one or two forensic test cases, which failed.
I could not even justify why I got so many hits, let alone
why the erasers didn't work.
Since I don't know enough about NTFS to finish writing
my file lister program, let's ask CoPilot where that
filename is coming from.
Note that "the AI makes mistakes" does not cover exactly
what it is up to. You might notice a change of phraseology
part way through the answer. When it states things in
inconsistent ways, this is an indication of a low-confidence
observation. Usually, things with strong evidence are
stated the same way each time. While the AI does not try
to bore you by using the exact same sentence every time,
you have to watch for these little slips, as they
indicate you are going to have to verify a CoPilot "hypothesis"
when it constructs one. If CoPilot had been trained on an article
that exactly addressed this issue, it would not have to use
"constructive techniques" to answer the question. And when it
does do a bit of constructing, you can tell it's embellishing
a bit. This is not "hallucination"; it's the request to be
a "helpful assistant" plus overplaying its hand of cards.
If you add "work slowly and carefully, giving a precise
answer" to a question, you will find that CoPilot will
show you its hand of cards - what it is not sure about -
and this makes a refreshing contrast to the first
answer it crafted.
********** CoPilot Question ************
A user is using an NTFS file system. They delete a file.
The $MFT 1 kilobyte entry should indicate the slot is not in use.
When the Sysinternals sdelete.exe utility is run, it seems
unable to clean the $MFT entry (in particular, the filename).
When the Recuva utility is run, it is able to locate the
erased file (by listing its name).
Where might the filename that Recuva has found be located?
Is it in the directory entry and $I30? Is it somewhere else? Is sdelete.exe actually
able to clean deleted file $MFT 1 kilobyte slots?
********** CoPilot Answer **************
Here's the short version up front: **Recuva is not getting the filename from the $MFT entry that sdelete failed to wipe.**
It is getting it from **somewhere else** - and NTFS has *multiple* places where
a filename can survive even after the file record is marked unused.
Let's walk through the exact places the name can persist and why sdelete doesn't always remove them.
---
# Where Recuva Finds the Filename After Deletion
NTFS stores filenames in **more than one place**, and *none* of them are guaranteed to be wiped by sdelete.
Below are the real sources Recuva can use.
---
## 1. The directory index ($I30) - the most common culprit
Every NTFS directory contains a B-tree index called **$I30**, which stores:
- Filenames
- File reference numbers
- Timestamps
- Flags
When a file is deleted, the directory index entry is **not immediately erased**.
Instead, NTFS simply marks the entry as "not in use" inside the index.
**Recuva absolutely scans $I30 entries**, including deleted ones.
### Why sdelete doesn't wipe $I30
sdelete wipes:
- File contents
- The $MFT entry for the file
But it **does not** rewrite or purge directory index entries.
Microsoft never documented a supported API to rewrite $I30 safely, so sdelete avoids touching it.
So yes - **the filename Recuva found is very likely coming from the directory's $I30 index.**
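The mechanism can be sketched with a toy model (purely illustrative - invented names, not real NTFS code or the on-disk $I30 layout): deletion only clears an in-use flag on the index entry, so a scanner that ignores the flag still sees the name bytes.

```python
# Toy model of an NTFS-style directory index: "deleting" an entry
# only flips its in-use flag, so the filename bytes remain in the
# slot until a new entry happens to overwrite it.

class IndexEntry:
    def __init__(self, name):
        self.name = name        # name bytes stay here after "deletion"
        self.in_use = True

class DirectoryIndex:
    def __init__(self):
        self.entries = []

    def create(self, name):
        self.entries.append(IndexEntry(name))

    def delete(self, name):
        for e in self.entries:
            if e.in_use and e.name == name:
                e.in_use = False   # flag flipped; name NOT erased
                return

    def listing(self):
        # What a normal directory listing shows
        return [e.name for e in self.entries if e.in_use]

    def carve_deleted_names(self):
        # What a recovery tool effectively does: scan slots that are
        # marked free but still hold a name
        return [e.name for e in self.entries if not e.in_use]

d = DirectoryIndex()
d.create("taxes-2025.xlsx")
d.create("ssn.txt")
d.delete("ssn.txt")
print(d.listing())               # ['taxes-2025.xlsx']
print(d.carve_deleted_names())   # ['ssn.txt']
```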
---
## 2. The $MFT entry *may* still contain the filename
Even though you expect the 1 KB MFT record to be cleared, NTFS behavior is subtle:
- When a file is deleted, the MFT entry is marked free
- But the attribute data (including the filename attribute) is **not guaranteed to be overwritten**
- sdelete attempts to overwrite the MFT entry, but **it cannot always reclaim the exact MFT slot immediately**
Why?
Because NTFS may:
- Delay reuse of MFT entries
- Cache metadata
- Keep attributes until the record is reallocated
So sometimes the filename attribute persists in the MFT record until NTFS reuses that slot.
### Is sdelete able to clean MFT entries?
**Sometimes yes, sometimes no.**
It depends on whether NTFS reassigns the MFT record to sdelete's temporary files.
If NTFS does not reuse that exact MFT slot, sdelete cannot overwrite it.
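This reuse problem can also be sketched as a toy model (again illustrative only, with invented names): if the filesystem delays reuse of freed record slots, the churn of temporary files never lands on the slot holding the sensitive name.

```python
# Toy model of why "create lots of temp files to recycle MFT slots"
# is unreliable: this model never recycles a freed slot at all, it
# always appends a new record, standing in for a filesystem that
# delays slot reuse.

class ToyMft:
    def __init__(self):
        self.records = []  # each record: {"name": str, "in_use": bool}

    def create(self, name):
        # Delayed-reuse behavior: always take a brand-new slot
        self.records.append({"name": name, "in_use": True})
        return len(self.records) - 1

    def delete(self, slot):
        self.records[slot]["in_use"] = False   # name left behind

    def stale_names(self):
        # Names still readable in records marked free
        return [r["name"] for r in self.records if not r["in_use"]]

mft = ToyMft()
victim = mft.create("passwords.txt")
mft.delete(victim)
# sdelete-style cleanup attempt: churn temp files hoping to land on
# the freed slot and overwrite its name...
for i in range(5):
    tmp = mft.create(f"SDELTMP{i}")
    mft.delete(tmp)
# ...but with delayed reuse, the victim record was never overwritten:
print("passwords.txt" in mft.stale_names())   # True
```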
---
## 3. The USN Journal ($Extend\$UsnJrnl)
If the USN change journal is enabled (it usually is):
- Every file creation
- Every rename
- Every deletion
is logged, including the filename.
Recuva can read the USN journal and extract filenames of deleted files.
sdelete **does not** wipe the USN journal.
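The journal's role is easy to picture: it is append-only, so wiping the file itself does nothing to the log records that already mention the name (toy sketch, invented record format):

```python
# Toy model of a change journal: an append-only log of
# (reason, filename) records. Overwriting the file's content, or
# even its MFT record, does not touch entries already in the log.
journal = []

def log(reason, name):
    journal.append((reason, name))

log("CREATE", "ssn.txt")
log("DATA_OVERWRITE", "ssn.txt")   # sdelete overwriting the contents
log("DELETE", "ssn.txt")

# A recovery tool reading the journal still sees the name:
deleted = [name for reason, name in journal if reason == "DELETE"]
print(deleted)   # ['ssn.txt']
```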
---
## 4. Directory slack space
NTFS directory records can contain leftover entries in slack space.
Recuva scans slack space inside directory records.
sdelete does not touch directory slack.
---
# Summary Table
| Location | Contains filename? | Recuva reads it? | sdelete wipes it? |
|--------------------------|--------------------|------------------|-------------------|
| **$MFT entry** | Yes | Yes | *Sometimes* |
| **$I30 directory index** | Yes | Yes | **No** |
| **USN Journal** | Yes | Yes | **No** |
| **Directory slack** | Yes | Yes | No |
| **File content** | Maybe | Yes | Yes |
---
# Final Answers
### **Where is the filename that Recuva found?**
Most likely in:
- **The directory's $I30 index**, or
- **The USN Journal**, or
- **Slack space in directory records**
### **Is it in the directory entry and $I30?**
**Very likely yes.**
$I30 is the #1 source of "deleted file names" that recovery tools find.
### **Is it somewhere else?**
Yes - USN Journal and directory slack are also common sources.
### **Is sdelete able to clean deleted file $MFT slots?**
- **Only if NTFS reuses the same MFT record for sdelete's temporary files.**
- NTFS often does *not* reuse the slot immediately.
- Therefore, **sdelete cannot guarantee MFT metadata wiping**.
*******
The answer surprises me, more than a little bit. I know
NTFS leaks like a pig, but some of this answer sounds
a little far-fetched. When a $MFT 1KB slot has its in-use indicator
flipped, that's atomic: it indicates it's flipped now, not that it
will be flipped next week. If the flag were not flipped, the file
would not be erased (a functional failure).

Operations have to be thread safe. A thing which executes right after
another thing does a del file.ext should *not* find that the file
still exists. This is the beauty of the in-use flag: once flipped,
the job is done. Now, if there were a delay while a very large $I30
was walked, maybe I could understand a millisecond delay before
in-use is flipped. But on the other hand, any other attempt to access
the $MFT should block until the file system is in a consistent state.
The file system is serialized, to ensure it is consistent whenever
another process looks at it and makes a query. That's why it is so
slow. If the filesystem were a slop-bucket, it would go like snot,
but a lot of errors would be thrown about phantom conditions.
Instead, the filesystem is methodical, as near as I can tell from here.
Since Heidi Eraser knows how to erase entries (real-time erasure),
I suspect there is a way to do it. After the first release of Heidi
came out, there was around two years' worth of bug reports before
the various slack spaces were cleaned, and before people stopped finding
things. That's just to give you some idea of how long it takes,
with continuous feedback, to fix this stuff. The approach Heidi uses
can be expensive - if you have a 10GB file, it's going to need to
zero the whole file as part of the procedure. Recuva should not find anything, or you would hope it would not find anything.
The USN Journal can be disabled, but then you might have trouble
enabling it again. I don't know all the details of why I was getting
a failure there. I did at one time, disable USN and re-enable it again
and it worked. But later, I tried the same thing and I was getting
failures. As a result, using fsutil to trash USN, isn't on
my dinner menu any more.
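For reference, the fsutil incantations involved look like this (run from an elevated Command Prompt; this matches the fsutil builds I've seen, but check "fsutil usn" help on your own build before trusting it):

```bat
:: Show whether a change journal exists on E: and its current size
fsutil usn queryjournal E:

:: Delete the journal -- the step that has proven flaky in practice
fsutil usn deletejournal /d E:

:: Recreate it; m = MaximumSize, a = AllocationDelta, both in bytes
fsutil usn createjournal m=33554432 a=4194304 E:
```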
Paul
On 3/29/26 5:56 PM, Paul wrote:
Wow, that's quite an explanation and it seems to me at least that a full deletion will not be possible, at least if trying to "shred" the free space of the hard drive.
I suppose if I had to, I could transfer over the non-deleted files to a separate hard drive and then use something like DBAN to totally erase the drive. However, that's a LOT of work. I used DBAN several years ago to wipe some drives before ditching them. Recuva and other programs did not detect anything relevant after the wiping.
I wonder if anything in Linux might be up to the task? I use Clonezilla to somewhat regularly back up my hard drives. I had read that it simply zeroed free space when backing up. I decided to try backing up the drive in question but, yet again, Recuva was able to show the titles of the files!
On Sun, 29 Mar 2026 16:32:01 -0400, George Garth wrote:
Secure delete may not completely wipe all data if the storage is flash-memory
based with a wear-levelling feature (e.g. an SSD).
Such storage devices remap/shuffle physical disk sectors
around when a sector is about to be overwritten:
the physical sector which holds the old data is simply marked as unused, and another
physical sector - the one longest marked as unused - is used to
hold the new data.
https://en.wikipedia.org/wiki/Wear_leveling
AFAIK, it's not possible to disable the wear-levelling feature. And because it operates independently from the OS, software-based secure delete can't do anything to work around the problem.
For this problem, there's a low-level disk command for secure erase (it may only work on the whole disk instead of specific disk sectors). It requires special software, and a disk may not support the command if it's too old or of low quality.
https://en.wikipedia.org/wiki/HDDerase
The above tool doesn't directly delete the data. It simply tells the disk firmware to (secure-)erase its own data.
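The remapping behavior described above can be sketched with a toy model (illustrative only, with invented names; real flash translation layers are far more complex): "overwriting" a logical sector actually writes to a fresh physical sector and updates a mapping table, so the old data stays on the chip.

```python
# Toy model of flash wear-levelling: the drive translates logical
# sectors to physical sectors via a mapping table. Overwriting
# logical sector 7 writes to a FRESH physical sector and remaps;
# the old physical sector still holds the secret until the drive's
# own garbage collection gets to it.

class ToyFlash:
    def __init__(self, n_physical):
        self.phys = [None] * n_physical   # physical sector contents
        self.map = {}                     # logical -> physical sector
        self.next_free = 0

    def write(self, logical, data):
        p = self.next_free                # always take a fresh sector
        self.next_free += 1
        self.phys[p] = data
        self.map[logical] = p             # old mapping silently dropped

    def read(self, logical):
        return self.phys[self.map[logical]]

flash = ToyFlash(8)
flash.write(7, b"SECRET")
flash.write(7, b"\x00" * 6)      # software "secure wipe" of sector 7
print(flash.read(7))             # the OS sees only zeros...
print(b"SECRET" in flash.phys)   # ...but the old data is still there: True
```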
George Garth wrote:
I think you probably need to clean your shellbags:
https://www.google.com/search?client=firefox-b-1-d&q=what+is+a+shellbag+on+w10
Best program for doing that (that I'm aware of) is Privazer:
https://privazer.com/en/
On 3/30/26 5:48 AM, John C. wrote:
I think you probably need to clean your shellbags:
https://www.google.com/search?client=firefox-b-1-d&q=what+is+a+shellbag+on+w10
Best program for doing that (that I'm aware of) is Privazer:
https://privazer.com/en/
I was skeptical after trying the other programs, since file names would still show in Recuva. I'm happy to say that they no longer do after running the free version of this program! Thanks for suggesting it.
So I guess, should I have future files I want to truly delete, use something like Sdelete and then follow up with this program?
Well, at this point, I'm pondering a new backup. Clonezilla is set up to do a whole-drive to whole-drive clone. I'm assuming it will now copy the now-"cleaned" drive over the existing dirty copy; at least I'm keeping my fingers crossed that it will.
I think Paul had a good point about file names for future reference. The problem is that if I use my own "code" with them too much, I'll forget what they are in the future. Let's say I had my social security number in a text file, for example. "SSN" would be too obvious, but if I simply renamed it as "social" I won't know what that is (or was)... a party, a get-together, etc., so file renaming is a good idea as long as I don't make it too vague.
On Mon, 3/30/2026 7:15 AM, George Garth wrote:
I have known a couple of people with "steel trap minds", who can map any
number of arbitrary names to their activities on a computer. That's not
going to work for everyone. You can use multiple tags, but it's likely
the tags will be a little too obvious for other people.
klaatu-aubergine-duck.txt (maybe the first letter of each tag is the key...)
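The first-letter idea sketched in code (a hypothetical helper, just to show the scheme - the tag words and the derived key are arbitrary):

```python
# Derive a short key from a multi-tag filename: take the first
# letter of each hyphen-separated tag in the stem.
def key_of(filename):
    stem = filename.rsplit(".", 1)[0]          # drop the extension
    return "".join(word[0] for word in stem.split("-"))

print(key_of("klaatu-aubergine-duck.txt"))     # kad
```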
As far as I know, Clonezilla does a smart copy and does not copy free space.
For any file marked as "In-use", the clusters should get copied. It's quite likely
the $MFT is copied verbatim, with no cleanup or expunging of Not-In-Use
$MFT slots. Not many tools will consolidate the $MFT. I haven't done enough
testing to comment on a method I know works for sure.
Macrium does a smart copy, but also has a "dd like" forensic option as a tick box
in the options.
Not many backup tools are limited to just the "dumb" method of copying every sector.
If you copy every sector, the destination has exactly the same forensic problems
as the source would have.
I would think a lot of tools, if they don't have a handler for the file system,
switch to using "dd". Something like GParted, if it wants, can
have the source partition dismounted and copy the item with the "dd" approach.
One problem with GParted: it refuses to touch the "Microsoft Reserved"
partition, which contains no filesystem header sector. A smart tool
(like Macrium) copies Microsoft Reserved with "dd", whereas GParted
will tell you in effect "I ain't touching that" if you poke Microsoft Reserved.
Clonezilla should do a job the way a user expects it to be done.
It does not disambiguate GUIDs (as seen in "blkid" command on Linux),
but it at least should make decent copies without "excess copying time".
If you have 20GB of files on a 1TB partition, it should take the time
of 20GB of writes to the destination disk drive. Whereas with "dd"
you pay for 1TB worth of writes (of which only 20GB really mattered).
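Back-of-envelope numbers for that 20GB-used / 1TB-partition example, assuming (my assumption, not stated above) a sustained write rate of about 100 MB/s:

```python
# Smart copy writes only the used clusters; dd writes every sector.
GB = 1000**3
rate = 100 * 1000**2                  # assumed ~100 MB/s, in bytes/s

smart_copy = 20 * GB / rate           # Clonezilla-style
dd_copy = 1000 * GB / rate            # dd-style, whole partition

print(f"smart copy: ~{smart_copy/60:.0f} min")   # ~3 min
print(f"dd copy:    ~{dd_copy/3600:.1f} h")      # ~2.8 h
```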
Well, I tested this out today. After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it. Recuva showed identical results in both cases, the
original and the clone.
Paul wrote:
George Garth wrote:
John C. wrote:
George Garth wrote:
Maybe I didn't run the programs properly or thoroughly enough, but in >>>>> Recuva on Win 10, I still see the sensitive file titles pop up during >>>>> its recovery routine.˙ Now if I try and open them, they won't, but the >>>>> file names still remain.
Tried both Cipher and Sdelete.˙˙ I ran sdelete64.exe -z E:.˙ Cipher
was cipher /w:E:
Both programs seemed to run and I felt confident they were performing >>>>> their tasks, that is until I ran Recuva.
What am I doing wrong?˙ Not enough passes?˙ Suggestions for remedy
welcome and thanks in advance.
I think you probably need to clean your shellbags:
https://www.google.com/search?client=firefox-b-1-
d&q=what+is+a+shellbag+on+w10
Best program for doing that (that I'm aware of) is Privazer:
https://privazer.com/en/
I was skeptical after trying the other programs and file names would
still show in Recuva.˙ I'm happy to say that they no longer do after
running the free version of this program! Thanks for suggesting it.
So I guess, should I have future files I want to truly delete, use
something like Sdelete and then follow up with this program?
Well, at this point, I'm pondering a new back up.˙ Clonezilla is set
up to do a whole drive to whole drive clone. I'm assuming it will now
copy the now "cleaned" drive over the existing dirty copy, at least
keeping my fingers crossed that it will.
I think Paul had a good point about file names for future reference.
Problem is that if I use my own "code" with them too much, I'll
forget what they are in the future.˙ Let's say I had my social
security number in a text file, for example.˙ "SSN" would be too
obvious, but if I simply renamed it as "social" I won't know what
that is (or was)... a party, get together, etc etc, so file renaming
is a good idea as long as I don't make it too vague.
I have known a couple people with "steel trap minds", who can map any
number
of arbitrary names to their activities on a computer. That's not going
to work
for everyone. You can use multiple tags, but it's likely the tags will be
a little too obvious for other people. klaatu-aubergine-duck.txt
(maybe the first
letter of each tag is the key...)
As far as I know, Clonezilla does a smart copy and does not copy white
space.
Any file marked as "In-use", the clusters should get copied. It's
quite likely
the $MFT is copied verbatim, with no cleanup or expunging of Not-In-Use
$MFT slots. Not many tools will consolidate the $MFT. I haven't done
enough
testing to comment on a method I know works for sure.
Well, I tested this out today.˙ After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it.˙ Recuva showed identical results in both cases, the
original and the clone.
Macrium does a smart copy, but also has a "dd like" forensic option as
a tick box
in the options.
Not many backup tools are limited to just the "dumb" method of copying
every sector.
If you copy every sector, the destination has exactly the same
forensic problems
as the source would have.
I would think a lot of tools, if they don't have a handler for the
file system, switch to using "dd". Something like GParted can, if it
wants, have the source partition dismounted and copy the item with
the "dd" approach.
One problem with GParted: it refuses to touch the "Microsoft Reserved"
partition, which contains no filesystem header sector. A smart tool
(like Macrium) copies Microsoft Reserved with "dd", whereas GParted
will tell you, in effect, "I ain't touching that" if you poke Microsoft
Reserved.
Clonezilla should do the job the way a user expects it to be done.
It does not disambiguate GUIDs (as seen in the "blkid" command on Linux),
but it at least should make decent copies without excess copying time.
If you have 20GB of files on a 1TB partition, it should take the time
of 20GB of writes to the destination disk drive, whereas with "dd"
you pay for 1TB worth of writes (of which only 20GB really mattered).
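As a back-of-the-envelope check, assuming a sustained 150 MiB/s write speed (my assumption, not a measured figure):

```python
def copy_seconds(gib: float, mib_per_s: float = 150.0) -> float:
    """Rough write time for `gib` gibibytes at a sustained
    throughput of `mib_per_s` (150 MiB/s is an assumed HDD figure)."""
    return gib * 1024 / mib_per_s

print(round(copy_seconds(20) / 60))    # ~2 minutes for 20 GiB of used data
print(round(copy_seconds(1024) / 60))  # ~117 minutes for a full 1 TiB "dd"
```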
Even after running these tools (or practically any others available to
the typical consumer), if someone wants to badly enough, they could
still recover data. With my recent experience finding the "leakage"
left behind after running Sdelete/Cipher, I'm almost certain there will
be additional leakage present with the right software, even after
running this additional Privazer tool. If this experience teaches me
nothing else, it does remind me that nothing is secure once it exists in
the PC world.
On 3/31/2026 2:53 AM, George Garth wrote:
Well, I tested this out today. After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it. Recuva showed identical results in both cases, the
original and the clone.
You will be unknowingly wandering into the world of computer forensic tools!!! ;)
Is there some tool to delete all entries of deleted files from the
directory table? (Wiping free space of USB flash drive FAT32) : r/antiforensics <https://www.reddit.com/r/antiforensics/comments/v13q1b/is_there_some_tool_to_delete_all_entries_of/>
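In principle such a tool only has to walk the 32-byte FAT32 directory entries and wipe the ones whose first byte is the 0xE5 "deleted" marker. A rough Python sketch of the idea, operating on an in-memory byte string rather than a real volume (a real tool must also scrub the long-file-name entries that precede each short entry):

```python
DELETED = 0xE5  # first byte of a deleted FAT32 directory entry

def scrub_dir_table(table: bytes) -> bytes:
    """Overwrite the leftover bytes of deleted 32-byte FAT32
    directory entries, so old names no longer linger."""
    out = bytearray(table)
    for off in range(0, len(out), 32):
        if out[off] == DELETED:
            out[off + 1:off + 32] = bytes(31)  # keep the 0xE5 marker, wipe the rest
    return bytes(out)

# One live entry plus one deleted entry still carrying most of "SECRET  TXT".
live    = b"README  TXT" + bytes(21)
deleted = bytes([DELETED]) + b"ECRET  TXT" + bytes(21)
scrubbed = scrub_dir_table(live + deleted)
print(b"ECRET" in scrubbed)  # False: the old name is gone
```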
On 31/03/2026 2:52 pm, Mr. Man-wai Chang wrote:
On 3/31/2026 2:53 AM, George Garth wrote:
Well, I tested this out today. After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it. Recuva showed identical results in both cases, the
original and the clone.
You will be unknowingly wandering into the world of computer forensic tools!!! ;)
Is there some tool to delete all entries of deleted files from the directory table? (Wiping free space of USB flash drive FAT32) : r/antiforensics <https://www.reddit.com/r/antiforensics/comments/v13q1b/is_there_some_tool_to_delete_all_entries_of/>
Hmm! Don't know specifically about a USB drive but, back in Win7 days,
if you defragged your hard drive, that would move everything back
towards the start of the drive... and, apparently, zero out all the
rest of the drive.
Could apply to USB drives as well... but don't quote me.
Mr. Man-wai Chang wrote on 3/30/2026 10:52 PM:
On 3/31/2026 2:53 AM, George Garth wrote:
Well, I tested this out today. After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it. Recuva showed identical results in both cases, the
original and the clone.
You will be unknowingly wandering into the world of computer forensic
tools!!! ;)
Is there some tool to delete all entries of deleted files from the
directory table? (Wiping free space of USB flash drive FAT32) : r/antiforensics
<https://www.reddit.com/r/antiforensics/comments/v13q1b/is_there_some_tool_to_delete_all_entries_of/>
Yes. Purchase an Oxyacetylene torch. Wear goggles or safety glasses.
On 3/31/26 8:14 PM, Hank Rogers wrote:
Mr. Man-wai Chang wrote on 3/30/2026 10:52 PM:
On 3/31/2026 2:53 AM, George Garth wrote:
Well, I tested this out today. After running both Sdelete and Privazer,
and first checking with Recuva, I did a Clonezilla backup and then
restored it. Recuva showed identical results in both cases, the
original and the clone.
You will be unknowingly wandering into the world of computer forensic
tools!!! ;)
Is there some tool to delete all entries of deleted files from the
directory table? (Wiping free space of USB flash drive FAT32) : r/antiforensics
<https://www.reddit.com/r/antiforensics/comments/v13q1b/is_there_some_tool_to_delete_all_entries_of/>
Yes. Purchase an Oxyacetylene torch. Wear goggles or safety glasses.
I actually did just that one time, except I used one of the small torches
with little tanks, maybe a foot long. I had been given the tanks and
torch by someone not using them anymore and decided to try them out on
a 5.25" drive. Mind you, this was before SSDs, with the somewhat heavy
platter HDDs. Much to my surprise, it took a lot of torching! Even
after 10 minutes or so, about 40% of the drive was still mostly intact,
with the remainder being a partially molten mess. I suspect that if I
had used the larger industrial tanks with a similarly sized torch, the
job would have gone much more quickly with superior results. It was an
interesting experiment though!
| Sysop: | Tetrazocine |
|---|---|
| Location: | Melbourne, VIC, Australia |
| Users: | 13 |
| Nodes: | 8 (0 / 8) |
| Uptime: | 58:32:16 |
| Calls: | 211 |
| Files: | 21,502 |
| Messages: | 81,624 |