Temporary folder that is automatically destroyed after process exit
Can we use temporary folders in the same way as temporary files, for example:
TMP=$(mktemp ... )
exec 3<>$TMP
rm $TMP
cat <&3
so that the folder is destroyed automatically after this shell exits?
Tags: file-descriptors, tmpfs
asked Nov 7 at 10:44 by Bob Johnson (new contributor)
Related: exit trap in dash vs ksh and bash – Stéphane Chazelas, Nov 8 at 8:09
3 Answers
Answer (score 12) – Kusalananda, answered Nov 7 at 11:06, edited Nov 8 at 7:47
In the case of a temporary file, your example in the question would create it, then unlink it from the directory (making it "disappear"); when the script closes the file descriptor (probably upon termination), the space taken by the file becomes reclaimable by the system. This is a common way to deal with temporary files in languages like C.
It is, as far as I know, not possible to open a directory in the same way, at least not in any way that would make the directory usable.
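To see why, here is a quick demonstration of the underlying limitation (a sketch; the exact error message may differ between systems):
# create and enter a temporary directory, then remove it
d=$(mktemp -d)
cd "$d"
rmdir "$d"

# the shell keeps the removed directory as its working directory,
# but no new entries can be created in it:
touch newfile    # fails with "No such file or directory"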
A common way to delete temporary files and directories at the termination of a script is to install a cleanup EXIT trap. The code examples given below avoid having to juggle file descriptors entirely.
tmpdir=$(mktemp -d)
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"; rm -rf "$tmpdir"' EXIT
# The rest of the script goes here.
Or you may call a cleanup function:
cleanup () {
    rm -f "$tmpfile"
    rm -rf "$tmpdir"
}
tmpdir=$(mktemp -d)
tmpfile=$(mktemp)
trap cleanup EXIT
# The rest of the script goes here.
The EXIT trap won't be executed upon receiving the KILL signal (which can't be trapped), which means that no cleanup will be performed in that case. It will, however, execute when terminating due to an INT or TERM signal (if running with bash or ksh; in other shells you may want to add these signals after EXIT in the trap command line), or when exiting normally, either by arriving at the end of the script or by executing an exit call.
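For shells where the EXIT trap alone does not cover signals, one possible pattern (a sketch, not taken verbatim from the answer) is to keep the cleanup on EXIT and let the signal traps exit explicitly, so the EXIT trap fires exactly once:
tmpdir=$(mktemp -d)
cleanup () { rm -rf "$tmpdir"; }

# cleanup runs on any normal exit; the signal traps exit explicitly,
# which in turn triggers the EXIT trap (exit codes follow the usual
# 128+signal convention)
trap cleanup EXIT
trap 'exit 129' HUP
trap 'exit 130' INT
trap 'exit 143' TERM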
It's not just shell that can't use already-unlinked temporary directories — neither can C programs. The problem is that unlinked directories can't have files in them. You can have an unlinked empty directory as your working directory, but any attempt to create a file will give an error. – derobert, Nov 7 at 18:04
@derobert And such an unlinked directory does not even have the . and .. entries. (Tested on Linux, I don't know if that's consistent across platforms.) – kasperd, Nov 7 at 19:03
unix.stackexchange.com/a/434437/5132 – JdeBP, Nov 8 at 0:07
@JdeBP, SE Comment Link Helper – Stéphane Chazelas, Nov 8 at 7:38
Note that the EXIT trap is not executed either if the script calls exec another-command, obviously. – Stéphane Chazelas, Nov 8 at 8:01
Answer (score 6) – Dirk Krijgsman, answered Nov 7 at 10:57, edited Nov 7 at 13:42
Write a shell function that will be executed when your script is finished. In the example below I call it 'cleanup' and set a trap to be executed on exit levels, like: 0 1 2 3 6
trap cleanup 0 1 2 3 6

cleanup()
{
    [ -d "$TMP" ] && rm -rf "$TMP"
}
See this post for more info.
Those are not "exit levels" but signal numbers, and the answer to the question you're linking to explains just that. The trap will run cleanup before a clean exit (0) and on receiving SIGHUP (1), SIGINT (2), SIGQUIT (3) and SIGABRT (6). It will not run cleanup when the script exits because of SIGTERM, SIGSEGV, SIGKILL, SIGPIPE, etc. This is clearly deficient. – mosvy, Nov 8 at 13:43
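For reference, the shell's kill builtin can translate those numbers into names (a quick check, assuming bash or ksh):
# print the signal names for numbers 1, 2, 3 and 6
# (expected on Linux: HUP INT QUIT ABRT)
kill -l 1 2 3 6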
Answer (score 6) – qubert, answered Nov 7 at 12:27, edited 18 hours ago
You can chdir into it and then remove it, provided that you don't try to use paths inside it afterwards:
#! /bin/sh
dir=`mktemp -d`
cd "$dir"
exec 4>file 3<file
rm -fr "$dir"
echo yes >&4 # OK
cat <&3 # OK
cat file # FAIL
echo yes > file # FAIL
I haven't checked, but it's most probably the same problem when using openat(2) in C with a directory that no longer exists in the file system.
If you're root and on Linux, you can play with a separate namespace, and mount -t tmpfs tmpfs /dir inside it.
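With util-linux's unshare(1), that idea can be sketched roughly as follows (run as root; /some/dir stands in for an existing mount point, and mount-propagation defaults may vary between distributions):
# start a command in a private mount namespace with a throwaway tmpfs;
# the mount and everything in it vanish when the command and its children exit
unshare --mount sh -c '
    mount -t tmpfs tmpfs /some/dir &&
    cd /some/dir &&
    exec "$0" "$@"
' command args ...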
The canonical answers (set a trap on EXIT) don't work if your script is forced into an unclean exit (e.g. with SIGKILL); that may leave sensitive data hanging around.
Update:
Here is a small utility which implements the namespace approach. It should be compiled with
cc -Wall -Os -s chtmp.c -o chtmp
and given CAP_SYS_ADMIN file capabilities (as root) with
setcap CAP_SYS_ADMIN+ep chtmp
When run (as a normal user) as
./chtmp command args ...
it will unshare its filesystem namespace, mount a tmpfs filesystem on /proc/sysvipc, chdir into it and run command with the given arguments. command will not inherit the CAP_SYS_ADMIN capabilities.
That filesystem will not be accessible from another process not started from command, and it will magically disappear (with all the files that were created inside it) when command and its children die, no matter how that happens. Notice that this is just unsharing the mount namespace -- there are no hard barriers between command and other processes run by the same user; they could still sneak inside its namespace via ptrace(2), /proc/PID/cwd or other means.
The hijacking of the "useless" /proc/sysvipc is, of course, silly, but the alternative would have been to spam /tmp with empty directories that would have to be removed, or to greatly complicate this small program with forks and waits. Alternatively, dir can be changed to e.g. /mnt/chtmp and created by root at installation; do not make it user-configurable and do not set it to a user-owned path, as that may expose you to symlink traps and other hairy stuff not worth spending time on.
chtmp.c
#define _GNU_SOURCE
#include <err.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(int argc, char **argv){
    char *dir = "/proc/sysvipc"; /* LOL */
    if(argc < 2 || !argv[1]) errx(1, "usage: %s prog args ...", *argv);
    argv++;
    if(unshare(CLONE_NEWNS)) err(1, "unshare(CLONE_NEWNS)");
    /* "modern" systemd remounts all mount points MS_SHARED;
       see the NOTES in mount_namespaces(7); YUCK */
    if(mount("none", "/", 0, MS_REC|MS_PRIVATE, 0))
        err(1, "mount(/, MS_REC|MS_PRIVATE)");
    if(mount("tmpfs", dir, "tmpfs", 0, 0)) err(1, "mount(tmpfs, %s)", dir);
    if(chdir(dir)) err(1, "chdir %s", dir);
    execvp(*argv, argv);
    err(1, "execvp %s", *argv);
}
Even if you're not root, you can do this with namespaces by creating a new user namespace and doing the tmpfs mount inside it. Smuggling access to the new dir out to the outside world is a bit tricky but should be possible. – R.., Nov 7 at 23:50
That still requires CAP_SYS_ADMIN. I have the idea of a small setcap-enabled utility that will do that, I will update the answer with it. – qubert, Nov 8 at 0:10
Unless the kernel has been locked down to disallow it, creation of user namespaces is not a privileged operation. The underlying design is such that it's supposed to be safe to allow ordinary users to do without any special capability. However there is sufficient attack surface/risk that many distros disable it, I think. – R.., Nov 8 at 0:15
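To illustrate that comment, a rootless variant using unshare(1)'s user-namespace support might look like this (a sketch; it only works where unprivileged user namespaces are enabled, and /mnt is just an example mount point):
# -U: new user namespace, -r: map the current user to root inside it,
# -m: new mount namespace; the tmpfs disappears when the command exits
unshare -U -r -m sh -c '
    mount -t tmpfs tmpfs /mnt &&
    cd /mnt &&
    exec "$0" "$@"
' command args ...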
I tried in a terminal. In some temporary dir, rm $PWD works, and the shell is still in that dir. But no new files can be put into this "folder". All you can do is read/write through file descriptors 3 and 4. So this is still a "temporary file", not a "temporary folder". – Bob Johnson, Nov 8 at 1:12
@BobJohnson That's not different from what I was already saying in my answer ;-) – qubert, Nov 8 at 2:06