PureShredLib (flawed)
Posted: Mon Feb 11, 2008 1:03 am
by Inf0Byt3
Later edit:
This code was just made as a coding exercise, nothing more. I am offering it as-is and I do not claim it can securely delete files, so using it is your own responsibility. No warranties, express or implied, are given.
Posted: Mon Feb 11, 2008 8:27 pm
by oridan
Nice tools & Good job!
Thank you Inf0Byt3 for this interesting shared lib.
I will use it in my program Windows Cleaner!
bye
Posted: Mon Feb 11, 2008 8:41 pm
by Inf0Byt3
That would be nice. If you run into problems with it or need any help, drop me a line.
Maybe if I have some time in the future I'll add some more schemes, or even a low-level mode (that's a bit dangerous though).
Posted: Tue Feb 12, 2008 12:43 am
by Intrigued
Thanks from me as well...
Posted: Tue Feb 12, 2008 1:16 am
by Dare
Thanks Inf0Byt3.
Posted: Tue Feb 12, 2008 1:12 pm
by Inf0Byt3
My pleasure. I know it's not much, but somebody may need it sometime.
Posted: Tue Feb 12, 2008 1:25 pm
by srod
Looks nice, Info.
'Scuse the daft question (and excuse my lack of knowledge of the Windows file system, etc.), but when you overwrite a file like this, bearing in mind all the buffering going on (the PB file commands are certainly buffered), are we absolutely guaranteed to be overwriting the exact same bytes on the hard disc as those occupied by the file being overwritten?
Posted: Tue Feb 12, 2008 1:34 pm
by dell_jockey
srod wrote:Looks nice, Info.
'Scuse the daft question (and excuse my lack of knowledge of the Windows file system, etc.), but when you overwrite a file like this, bearing in mind all the buffering going on (the PB file commands are certainly buffered), are we absolutely guaranteed to be overwriting the exact same bytes on the hard disc as those occupied by the file being overwritten?
That's something that has been bothering me as well.
And there's more that's bothering me: why would one want to overwrite a file that many times? I mean, if you would overwrite each and every byte twice, once with $AA and once with $55 (or any other pair with similar properties), you'd be sure that all bits got flipped at least once. What's the purpose of multiple passes beyond that?
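For illustration, $AA is %10101010 and $55 is %01010101, so the two patterns are exact bit complements; a quick PureBasic check (not part of the lib) shows this:
Code: Select all
Debug Bin($AA)        ; "10101010"
Debug Bin($55)        ; "01010101"
Debug Bin($AA ! $55)  ; "11111111" - XOR of the two patterns sets every bit, so together they flip each bit at least once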
Posted: Tue Feb 12, 2008 1:58 pm
by Dare
dell_jockey wrote:Why would one want to overwrite a file that many times? I mean, if you would overwrite each and every byte twice, once with $AA and once with $55 (or any other pair with similar properties), you'd be sure that all bits got flipped at least once. What's the purpose of multiple passes beyond that?
I think it helps eliminate "ghost" signals, where the residual signal can be read or at least inferred.
Posted: Tue Feb 12, 2008 2:11 pm
by Inf0Byt3
when you overwrite a file like this, bearing in mind all the buffering going on (the PB file commands are certainly buffered), are we absolutely guaranteed to be overwriting the exact same bytes on the hard disc as those occupied by the file being overwritten?
Yup, I used FlushFileBuffers() after each written MB, so it should be safe.
Why would one want to overwrite a file that many times? I mean, if you would overwrite each and every byte twice, once with $AA and once with $55 (or any other pair with similar properties), you'd be sure that all bits got flipped at least once. What's the purpose of multiple passes beyond that?
Assuming you're talking about Gutmann: each pass was originally designed for a particular type of disk (HDD, floppy, etc.). What I did is, in a way, overkill... I implemented the whole method (35 passes), but 7 passes are more than enough on newer storage media. Depending on the write scheme of the hard disk, the new signals overwrite the old ones, weakening their magnetic field so much that the data can't be recovered. Since that magnetic field has to be strong in order to be reliable, a few passes are needed to kill the data.
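Roughly, one pass looks like this (a simplified sketch with hypothetical names, not the exact lib code; FlushFileBuffers_() is the Windows API call, so it's Windows-only):
Code: Select all
; Simplified sketch of a single overwrite pass: write the pass byte over the
; whole file in 1 MB blocks and flush the OS buffers after each block.
#BlockSize = 1024 * 1024

Procedure WipePass(FileName.s, PassByte)
  Protected File, *Pattern, Size.q, Written.q, Chunk.q, Fill

  *Pattern = AllocateMemory(#BlockSize)
  If *Pattern = 0 : ProcedureReturn #False : EndIf

  For Fill = 0 To #BlockSize - 1          ; fill the pattern buffer with the pass byte
    PokeB(*Pattern + Fill, PassByte)
  Next

  File = OpenFile(#PB_Any, FileName)
  If File
    Size = Lof(File)
    FileSeek(File, 0)                     ; overwrite from the start of the file
    While Written < Size
      Chunk = Size - Written
      If Chunk > #BlockSize : Chunk = #BlockSize : EndIf
      WriteData(File, *Pattern, Chunk)
      FlushFileBuffers_(FileID(File))     ; force the OS to commit the buffered data to disk
      Written + Chunk
    Wend
    CloseFile(File)
  EndIf

  FreeMemory(*Pattern)
  ProcedureReturn #True
EndProcedure
Calling WipePass("C:\test.tmp", $AA) and then WipePass("C:\test.tmp", $55) would give the two-pass $AA/$55 scheme mentioned above.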
Posted: Tue Feb 12, 2008 2:31 pm
by srod
Aye, the reason I asked is because I remember reading an article a couple of years ago about some of these algorithms, and the article moved on to implementation details on the Windows platform. It stated that the only sure-fire way of ensuring the exact same bytes were overwritten was to get down low, I mean really low, and talk to the disc controller directly. The code used all kinds of interrupts in order to get the job done.
Now whether this was done to avoid various 'protection utilities' (anti-virus perhaps) possibly interfering with the operation, I cannot remember.
Posted: Tue Feb 12, 2008 3:08 pm
by Foz
With FAT, FAT16 and FAT32, yes, these algorithms will work, simply because the file system is lazy: if there is more data, it just chains the extra data onto the first available slot, which makes writes very quick but also leaves the disk highly fragmented.
Now with NTFS, things are somewhat more complicated because of the Master File Table (MFT). Yes, you can delete the file, but a record of the file's existence will still remain until that section of the MFT is overwritten.
The *only* sure-fire way of removing evidence of a file's existence is to talk to the disk controller directly to wipe the areas involved, or to use a professional defragmenting program that will zero the free space on the disk.
Posted: Sun Oct 19, 2008 7:37 pm
by Jacobus
Hi, I see this code and I'd like to know why you add +Fill to the *Pattern here, for example:
Code: Select all
For Fill = 0 To #BlockSize
PokeB(*Pattern+Fill,%00100100)
Next
So if I reduce #BlockSize to 1024 instead of 1024 * 1024, I get an error at FreeMemory(*Pattern): invalid memory access. On the other hand, there is no error if I just write PokeB(*Pattern, %00100100).
@+
Posted: Sun Oct 19, 2008 10:07 pm
by Inf0Byt3
I'd like to know why you add +Fill to the *Pattern
I use that because we need to poke a byte, then increment the offset so we can poke the next byte, and so on. The variable "Fill" gets bigger and bigger until the whole buffer is filled. If you just use "PokeB(*Pattern, %00100100)" then you'll write every value at the first position in the buffer and the rest will remain unfilled.
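Here is a tiny standalone example of that idea (not the lib code). Note that with a buffer of #BlockSize bytes the offset has to stop at #BlockSize - 1; otherwise the last PokeB() writes one byte past the end of the buffer, which can corrupt the heap and make FreeMemory() fail:
Code: Select all
; Standalone example: fill a whole buffer byte by byte using *Pattern + Fill.
#BlockSize = 1024

*Pattern = AllocateMemory(#BlockSize)
If *Pattern
  For Fill = 0 To #BlockSize - 1          ; last valid offset is #BlockSize - 1
    PokeB(*Pattern + Fill, %00100100)     ; write the pattern byte at offset Fill
  Next
  Debug PeekB(*Pattern)                   ; 36 (= %00100100)
  Debug PeekB(*Pattern + #BlockSize - 1)  ; 36 - the last byte is filled too
  FreeMemory(*Pattern)
EndIf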
However, I wouldn't recommend using this code, as it doesn't work correctly in practice. Some parts of the file may still be recoverable, because you can't be sure whether Windows will overwrite the old sectors or allocate new ones for the data. A reliable shredder must use low-level access and wipe a file at the sector level, and even that might fail sometimes. Just like Foz said, the best way is to zero the free space on a drive a few times, and then the MFT as well. There was some C source code for this written by Mark Russinovich (Sysinternals). Just Google for SDelete; the code should be easy to track down.
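To illustrate the "zero the free space" idea, here's a simplified sketch (hypothetical procedure, nowhere near as thorough as SDelete, and it does not touch the MFT or cluster tips): grow a temporary file full of zeroed blocks until the volume runs out of space, flush it, then delete it.
Code: Select all
; Simplified sketch: fill a volume's free space with zeros by growing a
; temporary file until writing fails, then delete the file again.
#ZeroBlock = 1024 * 1024

Procedure ZeroFreeSpace(TempFile.s)
  Protected File, *Zeros

  *Zeros = AllocateMemory(#ZeroBlock)     ; AllocateMemory() returns zero-filled memory
  If *Zeros = 0 : ProcedureReturn #False : EndIf

  File = CreateFile(#PB_Any, TempFile)
  If File
    Repeat
      ; keep appending zeroed blocks until the volume has no free space left
    Until WriteData(File, *Zeros, #ZeroBlock) < #ZeroBlock
    FlushFileBuffers_(FileID(File))       ; make sure the zeros really hit the disk
    CloseFile(File)
    DeleteFile(TempFile)                  ; the previously free space now holds zeros
  EndIf

  FreeMemory(*Zeros)
  ProcedureReturn #True
EndProcedure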
Posted: Sun Oct 19, 2008 10:26 pm
by PB
Be very careful posting or using something like this. If your app offers such
a function to a user, and that user's data later DOES get recovered anyway,
then you are open to a big lawsuit because your app failed to do its job.