What happens during an unlink(2)

Daniel R. Levy levy at ttrdc.UUCP
Sat May 10 10:36:44 AEST 1986


In article <634 at ihdev.UUCP>, pdg at ihdev.UUCP (P. D. Guthrie) writes:
>In article <861 at ttrdc.UUCP> levy at ttrdc.UUCP (Daniel R. Levy) writes:
>>In article <438 at ukecc.UUCP>, edward at ukecc.UUCP (Edward C. Bennett) writes:
>>>In article <238 at chronon.chronon.UUCP>, eric at chronon.UUCP (Eric Black) writes:
>>>> >	[discussion of what unlink(2) does]
>>>> Spooks aren't the only people who might desire disks & memory to be
>>>> cleansed when released, by the way.
>>>	You're absolutely right. I never thought about it that way.
>>>Edward C. Bennett
>>Hmmmm.  Maybe there should be an option to 'rm' to cause it to zero out
>>files before unlinking them?  (like rm -e [for erase], similar to VMS's
>>DELETE/ERASE)
>>
>The trouble with this is that it really would have to be an option to
>unlink(2), which would make a lot of current software obsolete.  The
>only other way would be to have rm directly write to disk,  but there is
>too much margin for error or mass destruction here.

Why either problem?  The hypothetical 'rm -e' would check whether the
file was an ordinary file and, if so, try to open() it for writing.  If
the open succeeded, it would write nulls all the way to the end of the
file (rm does a stat() anyway, so the file length is easy to get) and
then blow it away with an unlink().  Or maybe vice versa: open the file,
unlink() it first, and then zero it through the still-open descriptor,
so that if the zeroing were interrupted (it would purposely be made hard
to interrupt by ignoring interrupts, quits, and hangups while it runs),
at least there wouldn't be a partially munged file left under its old
name.  No mass destruction is possible (at least nothing worse than the
present rm), since only ordinary user calls are used; there is no setuid
or direct device access involved.  If the file could be unlinked but not
opened for writing, 'rm -e' would warn the user [as rm does now when the
user cannot write the file] and ask for confirmation before unlinking
without zeroing.
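Roughly, the zeroing step might look like the sketch below.  This is my
own illustration, not code from any real rm; erase_file() is a made-up
name and the error handling is pared to the bone:

    #include <fcntl.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Hypothetical core of 'rm -e': overwrite an ordinary file with
       nulls, then unlink it.  Returns 0 on success, -1 on failure. */
    int
    erase_file(const char *path)
    {
        struct stat st;
        char buf[8192];
        off_t left;
        int fd;

        if (stat(path, &st) == -1 || !S_ISREG(st.st_mode))
            return -1;              /* only ordinary files get zeroed */
        if ((fd = open(path, O_WRONLY)) == -1)
            return -1;              /* caller warns, asks for confirmation */

        /* Make the zeroing hard to interrupt, as suggested above. */
        signal(SIGINT, SIG_IGN);
        signal(SIGQUIT, SIG_IGN);
        signal(SIGHUP, SIG_IGN);

        memset(buf, 0, sizeof buf);
        for (left = st.st_size; left > 0; ) {
            int chunk = left > (off_t)sizeof buf ? (int)sizeof buf : (int)left;
            if (write(fd, buf, chunk) != chunk) {
                close(fd);
                return -1;
            }
            left -= chunk;
        }
        close(fd);
        return unlink(path);
    }

For the unlink-first order, the unlink() call would simply move up to
just after the open(), since the open descriptor keeps the data
reachable until it is closed.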

The main disadvantage would be slowness.  Things could be sped up (for
the user) by forking a nohup'ed background process to finish off the
zeroing of each file (but an 'rm -e *' might run up against the per-user
process limit, so for big jobs the backgrounding might not happen, or
might be done in small batches of processes).

Directory files would be a different problem.  They could be "zeroed"
from user code by filling them with empty files like "a", "b", etc.
after deleting the contents, then unlinking those empty files, then doing
the rmdir.  Device files would not need zeroing, obviously (except maybe
for fifos--does the last data to go down them before close stay on disk?).
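A sketch of that directory trick might run as follows (scrub_dir() and
the pad-file names are made up; how many padding entries are really
needed would depend on how many names the directory once held):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NPAD 64                 /* guess at how many old slots to cover */

    /* Hypothetical directory "zeroing": after the directory's real
       contents are gone, create throwaway entries so the old names in
       the directory file are overwritten, then remove the padding and
       the directory itself. */
    int
    scrub_dir(const char *dir)
    {
        char path[1024];
        int i, fd;

        for (i = 0; i < NPAD; i++) {
            sprintf(path, "%s/pad%03d", dir, i);
            if ((fd = open(path, O_WRONLY | O_CREAT, 0600)) == -1)
                break;
            close(fd);
        }
        while (--i >= 0) {          /* now remove the padding again */
            sprintf(path, "%s/pad%03d", dir, i);
            unlink(path);
        }
        return rmdir(dir);
    }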
-- 
 -------------------------------    Disclaimer:  The views contained herein are
|       dan levy | yvel nad      |  my own and are not at all those of my em-
|         an engihacker @        |  ployer or the administrator of any computer
| at&t computer systems division |  upon which I may hack.
|        skokie, illinois        |
 --------------------------------   Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
						vax135}!ttrdc!levy



More information about the Comp.sources.bugs mailing list