These come up in CTFs all the time. One trick I don't see here is you can use `dd` to write into the `/proc` hierarchy to achieve all sorts of fuckery including patching shellcode into a running process.
If memory serves, I got creds for a machine where the git user was able to run `git diff` with setuid, so you could abuse the pager to escape into an elevated shell.
Hey you know what, I've used dd to write into process memory but haven't actually used it to disable KASLR, so it's possible I am misremembering. My bad.
You learn the most random ways to abuse program features. One I still remember, because of how long it took to figure out, was an HTB box that (after a long exploitation path) used NTFS ADS to hide the flag in an alternate stream of a decoy file; and of course the normal way to extract the stream was disabled, so I had to do some black magic with other binaries to get it.
Not just shell access, but the server would need to be configured to also enable your user to run any of these binaries as root (such as an administrator putting them in the sudoers file).
So they're a pretty niche attack vector, and oftentimes crop up as a result of lazy/incompetent sysadmins.
Like it says in the preamble on the site, don't think of this as a collection of exploits, but rather as a compendium of knowledge about escalation techniques for use in emergencies.
I can't tell you how many times I burned my fingers as a young Unix developer in the '80s by untar'ing things wrongly, or fat-fingering an 'rm -rf /', leaving a running system that would be catastrophic if I didn't fix it before reboot. Shell still active, and... what do? Consult this list of great advice and use it to rebuild the system and/or do things that need to be done that otherwise wouldn't be possible.
GTFOBins is not just for hacking. It's also for system repair and recovery. I'd be as likely to consult this knowledge base after a hacker attack as before, if not more so.
...or something that runs CGI commands. Bash scripts are like the glue of the internet, and many of them are poorly-written. Tons of stuff still runs on PHP or relies on little Python cron jobs behind the scenes. A lot of the way this stuff works depends on being able to chain vulns together...an unescaped query to a database that gets piped to a nightly cron job to sync or backup something becomes an attack vector.
A stereotypical example would be an SUID command that does something the user couldn't normally do, and can be tricked into launching one of these other commands.
A less typical example is giving a user restricted shell access where they only have access to a few binaries. I think people used to do access control like that in the 90s, but stopped because it's very hard to get right. It's still a very common challenge in CTFs because it's very easy to adjust the skill level and come up with new variations.
It's only relevant as a privilege escalation vector when you're able to execute those programs as root, but don't otherwise have root access on the server.
It's a pretty niche circumstance. Unless an admin allows users on a server to execute some of these random types of binaries as root, it's not going to be a concern. And, if it wasn't already obvious, distros are almost never configured this way OOTB
I've seen plenty of servers in companies configured to allow sudoers to run a restricted subset of binaries as root, usually without a password. Some of them were GTFObins that the admins were not aware of until I reached out to let them know. I've also seen a couple of restricted shell setups where users could only run a handful of commands. Can't recall if I checked to see if any of them were GTFObins.
I wouldn't say this is the most useful h4x0r tool ever, but I wouldn't say it's particularly niche, either. This kinda stuff is definitely relevant in older large enterprise-type Linux/Unix environments.
I think Docker has been abused for these things before. I remember some big service kept secrets in env vars, and shell access inside the Docker image, gained via an npm post-install script, let attackers exfiltrate those secrets.
I am confused. Is this saying that if you don't have access to `cat`, instead of `cat /path/to/input-file` you can use `base64 /path/to/input-file | base64 --decode`?
Or is it saying that `base64 /path/to/input-file | base64 --decode` can bypass read file permission flags?
The first thing. Invoked processes inherit the permissions of the user who invoked them (unless they have the setuid bit). It's just in case you land access to a computer which has all the standard Unix tools disabled to stop attackers from lateral movement.
If someone has the power to execute commands, they are already on the other side of the airtight hatch.

https://devblogs.microsoft.com/oldnewthing/20240102-00/?p=10...

Put your meagre and limited resources on keeping them outside the hatch.
If they get through the hatch, that is where you fucked up, not that you didn't remove every conceivable command from yourself should they get through. If they can remotely get some program to execute a shell, they can quite conceivably get the same program to just read them the files directly by writing different shellcode. Running a shell is just a convenience for them.
The number of setups that are insecure enough to allow remote shells by arbitrary attackers, but are secure because you disabled /bin/cat once they get in, is zero.
Typically you do things like this to either work in restricted envs (distroless) or to evade detection logic. It's not about bypassing a boundary, it's about getting things done in the env you have available.
Security is done in layers. Yes, we do our best to keep the adversaries outside the proverbial hatch. But even inside the hatch, the principle of least privilege is important in reducing the damage of attacks.
But you wouldn't, or shouldn't, take a patchwork approach to it.
If the software you're trying to secure actually depends on a full, working, intertwined unix system... you leave that as it is. You can certainly try reducing a process's access to the system it's running on (whether that be by containers, jail(8), SELinux, AppArmor, etc.), but you don't go around deleting 7-zip or your scripting languages or compilers, on the off-chance that'll thwart a hacker.
Sure, you can say "defense in depth", but if you have one layer that's actually holding up the security guarantees, and a second layer that is largely ineffectual (haha! I removed /bin/cat, now they can't read files! oh, and base64 too... and uuencode... and... and... and...), I wouldn't waste much time on the second layer.
I think you have the wrong end of the stick. The OP link is a resource for when you do get access to a process's environment which has already been reduced via containers, jails, or what have you.
If the environment is already restricted, but the process has, for example, access to the base64 tool, here's how you can use that to do something you otherwise aren't able to.
If there's a file your user does not have read access to, but you have the ability to run the `base64` binary as root, you can run `base64` on the file as root (encoding its contents as base64), then pipe the output to another base64 process to decode them.
So yes, the end result is just `cat` with extra steps.
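A minimal, unprivileged demo of that round-trip; the sudo line in the comment is the hypothetical privileged variant:

```shell
# base64 encodes, a second base64 decodes: the pipeline is cat with extra steps
printf 'secret token\n' > /tmp/demo-file
base64 /tmp/demo-file | base64 --decode
# prints: secret token

# if sudo only lets you run base64, the same pipeline reads root-only files:
#   sudo base64 /etc/shadow | base64 --decode
```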
I just grabbed one of the examples there which was readable and didn't require the reader to know all the extra flags passed. One that would illustrate the purpose of the website. One that Linux newbies who read the question and further answers here could follow along with. Not one that tried to be optimal.
It's the former. Not bypassing permissions but in shells that might be highly restricted to just a couple commands. Like others have said, very very common in CTFs.
I'm not sure I get it. base64 is on the list. That can't do anything but read a file to which the user already has access, I think. Am I mistaken or does "a curated list of Unix-like executables that can be used to bypass local security restrictions in misconfigured systems" not mean what I think it does?
I think the idea is that if you're given an improperly configured restricted shell/command access, you can use any of the listed tools to gain access to some subset of what that user would normally have access to in an unrestricted environment.
A very simple version of this would be if you set a user's default shell to "rbash" but the user can just run "bash" to get a real shell.
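You can try that rbash example with stock bash, since `bash -r` gives the same restricted mode:

```shell
# bash -r behaves like rbash: cd, redirections, and commands containing "/"
# are all refused
bash -r -c 'cd /tmp' 2>/dev/null && echo "cd allowed" || echo "cd blocked"
# prints: cd blocked

# but if an unrestricted bash is reachable via PATH, the jail is over,
# because the restriction is not inherited by the new process:
bash -r -c 'bash -c "cd /tmp && echo escaped"'
# prints: escaped
```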
Maybe sudoers is configured to allow you to run base64 as root. Why would someone do this? No idea. But if you are in such a situation, now you know how to bypass the intended permissions and read any file on the system.
Or maybe you give Claude Code permission to run `base64` without review without realizing this lets it read any file, including maybe your secrets in .env or something.
The former happens a lot when people try to block specific commands for sudo, instead of taking a "permit these only" approach. If your sudoers file says you can access "all these commands but not cat", the site points out that you can still use base64 to accomplish the same ends. The effective solution is to start from "you can run exactly these commands and no others", which at least allows you to reason about what the user can and can't do.
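A sketch of the two sudoers styles (user and commands are hypothetical; the blocklist line looks safe but blocks essentially nothing):

```
# blocklist: alice may run everything except cat -- but base64, less, vim,
# tar, etc. all still read and write files as root
alice ALL=(root) ALL, !/usr/bin/cat

# allowlist: alice may run exactly one command with pinned arguments
alice ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service
```

Even with the allowlist you still have to check every permitted binary against GTFOBins, and pinning the arguments matters: many tools only become dangerous once the caller controls their flags.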
A common situation is that you have access to a handful of tools that have root permissions, either because they're specifically allowed to be invoked (sudo -l) or because they're invoked by something else with root.
Seeing the confusion in the comments I want to provide some examples of situations where this might come up in a security or CTF context:
* You have a restricted shell or other way to execute a restricted set of commands or binaries, often with arbitrary parameters. You can use GTFOBins in interesting ways to read files, write files, or even execute commands and ultimately break out of your restricted context into a shell.
* Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.
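A concrete instance of the second bullet, using `find` (one of the classic GTFOBins): its `-exec` flag runs arbitrary programs, so sudo or SUID rights on find amount to root rights. The demo below runs unprivileged to show the mechanism:

```shell
# under sudo this would be a root shell:
#   sudo find . -exec /bin/sh -p \; -quit
# unprivileged demonstration of the same mechanism:
find /tmp -maxdepth 0 -exec sh -c 'echo code execution via find' \;
# prints: code execution via find
```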
This is pretty relevant for things like claude-code, which has a fairly rudimentary way of dealing with permissions, based on block-lists and allow-lists.
I once accidentally gave my claude "powershell" permissions in one session, and after that any time it found it was blocked from using a tool, e.g. git, it would write a powershell script that did the same thing and execute the script to work around the blocked permission.
Obviously no sane system would have "powershell" in a generic allow-list, but you could imagine some discrepancy in allowed levels between tools which can be worked around with the techniques on this page.
PowerShell or Python scripts to work around restrictions are the go-to for LLMs.
And it doesn't stop there.
Yesterday I was trying to figure out an icon issue in KDE Plasma (I know nothing about KDE). Both Claude and Codex would run complex D-Bus and debugging queries, and write and execute QML scripts, with more and more tools thrown into the mix.
There's no way to properly block them with just allow- and block-lists.
> Glad to see LLM re-discover this trick.

I imagine someone probably wrote very specifically about it in the training data that underwent lossy compression, and the LLM is decompressing that how-to.
So I'd say it's more like "surfacing" or "retrieving" than "re-discovering".
They scraped everything on Stackoverflow, likely IRC logs from Freenode, and every book written in the modern era courtesy of Sci-Hub / Library Genesis / Anna's Archive / Z Library.
RIP Aaron Swartz, they're generating trillions in shareholder value from the spiritual successors to the work they were going to imprison you for.
For the LLM it's a probabilistic set of strings that achieves the outcome: the highest-probability set didn't work, so try the next one until success or a threshold is met. A human sees that the obvious thing not working implies someone doesn't want you to do it, but an LLM, unless guided, doesn't see that subtext.
So chmod +x file didn't work, now try python -c "import os; os.chmod('file', 0o744)"
Humans and LLMs both only see that when given the right context. A tool not working in a corporate environment may be anything from an oversight or a malfunction all the way to a deliberate security block. Knowing which one it is takes a lot of implicit knowledge. Most people fail to provide this level of context to their LLMs and then wonder why they act so generic. But they are trained to act in the most generic way unless given context that would deviate from it.
> There's no way to properly block them with just allow- and block-lists.
Especially not when some harnesses rely on the reliability of the LLM to determine what's allowed or not, pretty much "You shouldn't do thing X" and then asking the LLM to itself evaluate if it should be able to do it or not when it comes up. Bananas.
The only right and productive way to run an agent on your computer is to isolate it properly somehow, then run it with "--sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox" or whatever. I myself use Docker containers, but there are lots of solutions out there.
You have to be extremely careful when you set up a dev container, lock down file access, do not give the agent the power to start other containers or "docker compose up", restrict network access to an allow-list etc. Just running the agent in a container does little to protect you. (Maybe you know this, but a lot of people don't!)
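A sketch of what that lockdown can look like with plain `docker run` flags (the image name is hypothetical; treat this as a starting point, not a complete policy):

```shell
# hypothetical hardened agent container: no network, read-only root fs,
# no capabilities, no privilege escalation, only the project dir writable
docker run --rm -it \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 256 \
  -v "$PWD:/workspace" -w /workspace \
  my-agent-image
```

In practice `--network none` is too strict for an agent that needs its API endpoint, so you would typically swap it for an egress proxy enforcing a host allow-list, which is exactly the kind of extra step the comment above is warning about.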
Most of those things are what happens by default. Sure, be careful, but by default it's secure enough to prevent most potential issues. No need to lock down file access for example, by default it only has access to files inside the container, and of course by default containers don't have access to start other containers, and so on.
Good word of caution though, make sure you actually isolate when you set out to isolate something :)
As mentioned, "podman/docker run -it $my-image codex" also actually has the requisite isolation by default, no need for special software. The biggest risk is accidental deletion of stuff, easily solved without running an entire VM, which "smol" machines seem to be. No doubt VMs have their uses too, but for simple isolation like this I personally would rather use already existing tooling.
Ok, YMMV, but a smolvm provides macOS-native, per-workload isolation -- vs trad container depending on a daemon and relying on namespaces (w/ a shared kernel). Easy "packing" into single-file executables, and a nice SDK, make it ~ideal for my needs; great balance of security:convenience.

1. https://smolmachines.com
https://smolmachines.com/#comparison
Cool ad bro, but stop claiming containers won't get you "per workload isolation" just because they share a kernel; in the context of this discussion it hardly matters, containers isolate enough for this.
A few years back, our support team needed to do some network capture with tcpdump. The quick and natural way to allow that was to add a sudo rule for it, with open arguments (I know it's a bit risky, but the TCP port and NIC could change).
Looks good enough? Well no...
With tcpdump, you can specify a compress command with the "-z" option. But nothing prevents you from running a "special" compress command and completely taking over the server:

> sudo tcpdump -i any -z '/home/despicable_me/evil_cmd.sh' -w /tmp/dontcare.pcap -G 1 -Z root
This seems trivial, but that's the kind of stuff that's really easy to miss. Even if, these days, security layers like AppArmor mitigate this risk (causing a few headaches along the way), it's still relatively easy to mess up.
> * Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.
Some enterprise security software that is designed to "mediate privilege elevation" includes an allowlist configured by the administrators. My experience seeing this rolled out at one company was that software on the allowlist no longer required a password to run with `sudo`. The allowlist initially included, of course, all kinds of broadly useful software that made its way onto this list (e.g., vim, bash).
I worked from home at this company, and I remember thinking it was a good thing, because this software deployed to "secure" my computer made it drastically weaker against someone walking up to it and trying to run something if I stepped away from the keyboard for a moment and forgot to lock it.
As someone who has had to do some grub editing on the computer in an AirBnB because peripherals were all messed up on the guest account (no internet, no sound, you could only see a tiny part of the screen, I honestly don't know how they had managed to do it) I am super pleased to see this resource. Stuff like this is a bit, you know, hopefully you never need this, but when you do, it is so useful to have it.
Well, now I feel a little vindicated tinkering so that my backup wouldn't run as root. Instead it runs as a regular user with read-all-files capabilities [0] and no login shell.
Of course, that's still probably overkill on my desktop, and any attacker that got that far would still be able to read basically every file on the computer and sneak backdoors into the backup...

[0] https://man7.org/linux/man-pages/man7/capabilities.7.html
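One way to wire up such a setup is a systemd unit (a sketch; unit, user, and binary names are made up) that grants only CAP_DAC_READ_SEARCH, the capability that bypasses file read permission checks, to an otherwise unprivileged user:

```ini
# /etc/systemd/system/backup.service (hypothetical)
[Service]
User=backup
Group=backup
# read any file on the system, but no other root powers
AmbientCapabilities=CAP_DAC_READ_SEARCH
CapabilityBoundingSet=CAP_DAC_READ_SEARCH
NoNewPrivileges=yes
ExecStart=/usr/local/bin/run-backup
```

The bounding set ensures the process can never regain capabilities beyond the one it was granted, and NoNewPrivileges blocks SUID escalation from inside the job.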
It does seem like an LLM's ability to see a constraint and just say "I'll write a quick helper to work around it" kinda wrecks some older-world assumptions. We know how to deal with remote human attackers, remote bot attackers, and to some extent local human attackers, but local self-coding bot attackers need more attention lately than they used to. It's not even the same category as malware.
I’ve been guilty myself of building containers where everything runs as root on the assumption that the container was the relevant domain
If LLMs are involved, I can’t tell whether OS level security is suddenly more relevant, or suddenly utterly obsolete
The last time I used anything similar to this was circa 1995 at secondary school, using Windows 3.11 computers that had been set up so you could only launch a small number of authorised applications.
One of those was Word.
In Word you could write macros and use shell to launch other applications.
Suddenly the locked down computer that exposed a handful of applications could run anything (well anything a Windows 3.11 machine in 1995 could run).
It was quite exciting at the time; I don't feel like I have hit the same sort of issues since. Occasionally I see people say that some touch-screen information displays (in shops/shopping centres etc.) have ways to escape from kiosk mode (locked to an app) so you can use them for anything; I guess that is similar.
Sounds super 1337 and I hope it's actually possible somehow.
https://github.com/arget13/DDexec
Question from a security newbie: why is it not used to hack all sorts of servers all the time, then?
It doesn't make it easier to "hack" servers, it's just a list of things that you could use once you're already inside.
But you can't "hack a server" using just these techniques: they would be a (small) part of a chain of exploits.
The Windows equivalent is LOLBAS (https://lolbas-project.github.io/).
Systems with capability-based security, such as seL4[0], do not suffer from this category of problem.
0. https://sel4.systems/About/
For example, ways to block getting a shell with more:
- setting SHELL to /bin/false before invoking more
- switching to less in secure mode
- if more runs via sudo, the NOEXEC tag
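The sudo NOEXEC idea from the last bullet looks like this in sudoers (hypothetical user and path). For dynamically linked pagers, sudo's preloaded noexec shim makes any exec() from the pager fail, so a `!sh` escape inside more dies instead of spawning a root shell:

```
support ALL=(root) NOEXEC: /usr/bin/more /var/log/*
```

Note that NOEXEC is a mitigation, not a boundary: it does nothing for statically linked binaries or tools that read files directly rather than exec'ing helpers.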
This makes me so worried for the future. AI is only useful _because_ it can pull from all these resources which already exist.