Temporary folder that is automatically destroyed after process exit











Can we use temporary folders in the same way as temporary files,



TMP=$(mktemp ... )
exec 3<>"$TMP"
rm "$TMP"

cat <&3


so that the folder is destroyed automatically when this shell exits?










Tags: file-descriptors tmpfs






Asked Nov 7 at 10:44 by Bob Johnson.
  • Related: exit trap in dash vs ksh and bash
    – Stéphane Chazelas
    Nov 8 at 8:09


















3 Answers
12 votes. Answered Nov 7 at 11:06 by Kusalananda, edited Nov 8 at 7:47.













In the case of a temporary file, your example in the question would create it, then unlink it from the directory (making it "disappear"), and when the script closes the file descriptor (probably upon termination), the space taken by the file would be reclaimable by the system. This is a common way to deal with temporary files in languages like C.



It is, as far as I know, not possible to open a directory in the same way, at least not in any way that would make the directory usable.
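
A quick way to see this on Linux (a throwaway sketch; the names are arbitrary) is that the unlinked directory remains your working directory, but nothing new can be created in it:

d=$(mktemp -d)
cd "$d"
rmdir "$d"        # the directory is now unlinked, but it is still our cwd
touch somefile    # fails with "No such file or directory"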



A common way to delete temporary files and directories at the termination of a script is by installing a cleanup EXIT trap. The code examples given below avoid having to juggle file descriptors entirely.



tmpdir=$(mktemp -d)
tmpfile=$(mktemp)

trap 'rm -f "$tmpfile"; rm -rf "$tmpdir"' EXIT

# The rest of the script goes here.


Or you may call a cleanup function:



cleanup () {
    rm -f "$tmpfile"
    rm -rf "$tmpdir"
}

tmpdir=$(mktemp -d)
tmpfile=$(mktemp)

trap cleanup EXIT

# The rest of the script goes here.


The EXIT trap won't be executed upon receiving the KILL signal (which can't be trapped), which means that no cleanup will be performed in that case. It will, however, execute when the script terminates due to an INT or TERM signal (if running with bash or ksh; in other shells you may want to add these signals after EXIT in the trap command line), or when it exits normally, either by reaching the end of the script or by an explicit exit call.
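
As a minimal sketch of that note, the same handler can simply be listed for the signals as well as for EXIT (whether the handler should then also call exit explicitly depends on the shell and on the behaviour you want on interrupt):

# run cleanup on normal exit and also on HUP/INT/TERM;
# rm -f/-rf make it harmless if the handler ends up running twice
trap cleanup EXIT HUP INT TERM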






  • It's not just shell that can't use already-unlinked temporary directories — neither can C programs. Problem is that unlinked directories can't have files in them. You can have an unlinked empty directory as your working directory, but any attempt to create a file will give an error.
    – derobert
    Nov 7 at 18:04






  • @derobert And such an unlinked directory does not even have the . and .. entries. (Tested on Linux, I don't know if that's consistent across platforms.)
    – kasperd
    Nov 7 at 19:03










  • unix.stackexchange.com/a/434437/5132
    – JdeBP
    Nov 8 at 0:07










  • @JdeBP, SE Comment Link Helper
    – Stéphane Chazelas
    Nov 8 at 7:38






  • Note that the EXIT trap is not executed either if the script calls exec another-command, obviously.
    – Stéphane Chazelas
    Nov 8 at 8:01


















6 votes. Answered Nov 7 at 10:57 by Dirk Krijgsman, edited Nov 7 at 13:42.













Write a shell function that will be executed when your script is finished. In the example below I call it 'cleanup' and set a trap to be executed on exit levels, like: 0 1 2 3 6



trap cleanup 0 1 2 3 6

cleanup()
{
    [ -d "$TMP" ] && rm -rf "$TMP"
}


See this post for more info.






  • Those are not "exit levels" but signal numbers, and the answer to the question you're linking to explains just that. The trap will run cleanup before a clean exit (0) and on receiving SIGHUP(1), SIGINT(2), SIGQUIT(3) and SIGABRT(6). It will not run cleanup when the script exits because of SIGTERM, SIGSEGV, SIGKILL, SIGPIPE, etc. This is clearly deficient.
    – mosvy
    Nov 8 at 13:43




















6 votes. Answered Nov 7 at 12:27 by qubert, edited 18 hours ago.













You can chdir into it and then remove it, provided that you don't try to use paths inside it afterwards:



#! /bin/sh
dir=`mktemp -d`
cd "$dir"
exec 4>file 3<file
rm -fr "$dir"

echo yes >&4 # OK
cat <&3 # OK

cat file # FAIL
echo yes > file # FAIL


I haven't checked, but it's most probably the same problem when using openat(2) in C with a directory that no longer exists in the file system.



If you're root and on Linux, you can play with a separate namespace, and mount -t tmpfs tmpfs /dir inside it.
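
For instance, a rough sketch using util-linux's unshare (do_work_here is just a placeholder for whatever needs the scratch space):

unshare --mount sh -c '
    mount --make-rprivate / &&    # keep the tmpfs from propagating to the host
    mount -t tmpfs tmpfs /mnt &&
    cd /mnt &&
    do_work_here                  # everything under /mnt vanishes when this shell exits
'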



The canonical answers (set a trap on EXIT) don't work if your script is forced into an unclean exit (eg. with SIGKILL); that may leave sensitive data hanging around.



Update:



Here is a small utility which implements the namespace approach. It should be compiled with



cc -Wall -Os -s chtmp.c -o chtmp


and given CAP_SYS_ADMIN file capabilities (as root) with



setcap CAP_SYS_ADMIN+ep chtmp


When run (as a normal user) as



./chtmp command args ...


it will unshare its filesystem namespace, mount a tmpfs filesystem on /proc/sysvipc, chdir into it and run command with the given arguments. command will not inherit the CAP_SYS_ADMIN capabilities.
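
For example, a hypothetical invocation (myscript.sh is just a placeholder) would simply treat its current directory as the private scratch area:

./chtmp sh -c 'echo secret >scratch.txt && myscript.sh scratch.txt'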



That filesystem will not be accessible from another process not started from command, and it will magically disappear (with all the files that were created inside it) when command and its children die, no matter how that happens. Notice that this is just unsharing the mount namespace -- there are no hard barriers between command and other processes run by the same user; they could still sneak inside its namespace either via ptrace(2), /proc/PID/cwd or by other means.



The hijacking of the "useless" /proc/sysvipc is, of course, silly, but the alternative would've been to spam /tmp with empty directories that would have to be removed, or to greatly complicate this small program with forks and waits. Alternatively, dir can be changed to e.g. /mnt/chtmp and created by root at installation; do not make it user-configurable and do not set it to a user-owned path, as that may expose you to symlink traps and other hairy stuff not worth spending time on.



chtmp.c



#define _GNU_SOURCE
#include <err.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(int argc, char **argv){
    char *dir = "/proc/sysvipc"; /* LOL */
    if(argc < 2 || !argv[1]) errx(1, "usage: %s prog args ...", *argv);
    argv++;
    if(unshare(CLONE_NEWNS)) err(1, "unshare(CLONE_NEWNS)");
    /* "modern" systemd remounts all mount points MS_SHARED;
       see the NOTES in mount_namespaces(7); YUCK */
    if(mount("none", "/", 0, MS_REC|MS_PRIVATE, 0))
        err(1, "mount(/, MS_REC|MS_PRIVATE)");
    if(mount("tmpfs", dir, "tmpfs", 0, 0)) err(1, "mount(tmpfs, %s)", dir);
    if(chdir(dir)) err(1, "chdir %s", dir);
    execvp(*argv, argv);
    err(1, "execvp %s", *argv);
}





  • Even if you're not root, you can do this with namespaces by creating a new user namespace and doing the tmpfs mount inside it. Smuggling access to the new dir out to the outside world is a bit tricky but should be possible.
    – R..
    Nov 7 at 23:50










  • That still requires CAP_SYS_ADMIN. I have the idea of a small setcap-enabled utility that will do that, I will update the answer with it.
    – qubert
    Nov 8 at 0:10






  • Unless the kernel has been locked down to disallow it, creation of user namespaces is not a privileged operation. The underlying design is such that it's supposed to be safe to allow ordinary users to do without any special capability. However there is sufficient attack surface/risk that many distros disable it, I think.
    – R..
    Nov 8 at 0:15












  • I tried it in a terminal. In some temporary dir, rm $PWD works and the shell stays in that dir. But no new files can be put into this "folder"; all you can do is read/write through file descriptors 3 and 4. So this is still a "temporary file", not a "temporary folder".
    – Bob Johnson
    Nov 8 at 1:12










  • @BobJohnson That's not different from what I was already saying in my answer ;-)
    – qubert
    Nov 8 at 2:06










