What is a Chromium Sandbox?

Reeshabh Choudhary
6 min read · Jan 28, 2024
Chrome Sandbox

👷‍♂️ Software Architecture Series — Part 21.

Before we discuss the Chromium sandbox, let us first understand what a sandbox is. Anything we do in software is a virtual representation of real-life entities and operations. Hence, we start from the inception of the idea called a “sandbox”.

A sandbox is usually a play area filled with sand for children, found in the backyard of a house or in common playgrounds. The space is designed so that children can play freely with their equipment, without fear of injury.

Based on this concept, a software sandbox is a safe area for running untrusted code in isolation from the rest of the system, so that a vulnerability in that code cannot affect the system as a whole. Isolation is achieved by limiting the resources and capabilities the sandboxed process has access to, such as the file system, network interfaces, and graphical interfaces. The sandbox is governed by an explicit policy designed to prevent unauthorized access to resources while allowing the sandboxed process to freely use CPU cycles and memory. This gives a great security advantage by containing potential threats: the containment minimizes the impact of security vulnerabilities and reduces the likelihood of unauthorized access or damage. For example, operating systems may run untrusted applications or processes in a sandboxed, controlled environment; during software development, we run code in a sandbox to contain any potential malfunction; and web browsers use sandboxes to isolate individual tabs or processes, so that if one tab becomes compromised, the impact is limited to that specific sandbox rather than the entire browser.

Chromium Sandbox

Security is one of the most important aspects of the Chromium project. Over the years, the Chromium codebase has grown multifold in size as well as diversity, which makes it very hard to predict all possible outcomes for the inputs given to the code. The sandbox's objective is to provide hard guarantees about what a piece of code ultimately can or cannot do, no matter what its inputs are. Hence, a basic assumption is made that anything running inside the sandbox is malicious code. The architecture and exact assurances a sandbox provides depend on the operating system: Chromium has separate sandbox implementations for Windows, macOS, and Linux.

The basic principle of the sandbox architecture is to operate at process-level granularity: anything that needs to be sandboxed must live in a separate process. In Chromium, this takes the form of a separate process for each tab. As discussed in an earlier article, Chrome follows a multi-process architecture. The renderer process controls the rendering of web pages, containing the logic for handling HTML, JavaScript, images, and so forth, and a new renderer process is created for each tab opened.

Using RPC

For the Chromium sandbox implementation, the renderer process was split into trusted and untrusted threads, allowing for selective execution of system calls through a form of remote procedure call (RPC). The trusted thread is given elevated privileges and is responsible for validating and executing system calls. The untrusted thread operates with restricted privileges and makes its system-call requests to the trusted thread. Each untrusted thread has a trusted helper thread running in the same process.

Before executing a requested system call, the trusted thread verifies the validity of the call and its arguments. This verification step is crucial for preventing malicious or inappropriate actions. The trusted thread can assess whether the requested system call aligns with the predefined policies and security constraints of the respective OS. It only allows system calls that are deemed “reasonable” based on the established security policies. Thus, the untrusted thread can perform certain actions, but they are always subject to strict validation and control.

Drawback

However, this approach has a drawback. The system calls being proxied over RPC are scattered throughout the renderer codebase, so converting each system call into an RPC is a complex and time-consuming task. The maintenance burden includes keeping the RPC mechanisms up to date, ensuring compatibility with changes in the Chromium codebase, and addressing any new system calls introduced by upstream projects (such as WebKit).

Use of Disassemblers

To overcome these limitations of the initial design, a dynamic solution was devised: finding and patching system calls at runtime by running a disassembler over the executable code. A disassembler is a tool that translates machine code (a binary executable) into assembly language or another human-readable representation. In Chromium's case, the disassembler is applied to the executable code to identify the locations of system calls. Once a system call is identified, the code is modified to replace it with an RPC mechanism that communicates with the trusted thread, which then handles the execution of the system call. The disassembler does not have to be perfect; some imperfections are acceptable as long as it works for the existing codebase.

In addition, the untrusted code runs in a restricted mode in which a process can make only a small, predefined set of system calls; any attempt to make other system calls results in the kernel aborting the thread. This adds a further layer of security: even if the dynamic disassembly and patching miss some system calls, the kernel ensures that any unauthorized system call leads to the termination of the untrusted thread, preventing security breaches.

In addition, a trusted process is introduced to handle specific aspects of system-call verification that cannot be performed effectively within the address space of the untrusted renderer. Its objective is to handle time-of-check-to-time-of-use (TOCTOU) race conditions. This vulnerability arises when there is a time gap between checking a condition and using the result, during which the condition may change. In the context of system calls, this can occur when arguments passed via pointers are modified by the untrusted thread after they have been checked for validity.

The trusted process shares a few pages of memory with each trusted thread. These memory pages are configured as read-only for the trusted thread and read-write for the trusted process. Some system calls have arguments that either reside in memory or require a verification process too complex to be reasonably implemented in assembly code within the trusted thread; the trusted process takes on the responsibility of handling such complex verification tasks.

When a system call cannot be fully handled by the trusted thread, its arguments are handed off to the trusted process, which copies them into its own address space, creating a clear separation between the untrusted code and the critical arguments of the system call. This ensures the arguments are immune to changes from the untrusted code and prevents race conditions in which the untrusted thread attempts to modify the arguments after they have been checked. The read-only configuration of the memory pages means the trusted thread cannot modify the shared memory containing the critical arguments, while the trusted process's read-write access allows it to update and validate the arguments without interference from the untrusted thread.

Ad-Hoc Decisions

The renderer process is run in a controlled environment, and the system calls it makes are monitored and recorded. Each observed system call is then evaluated for reasonableness based on criteria such as security implications, necessity for the application's functionality, and adherence to established policies. The decision-making process is ad hoc: it is based on specific instances and circumstances rather than a predefined, systematic methodology, with observed system calls addressed on a case-by-case basis. As the analysis progresses, a policy is formulated from the system calls that were observed and deemed reasonable. This policy serves as a set of rules for determining which system calls are allowed during the execution of the renderer.

While the current approach may not be as systematic as a formal verification process, it aims to mitigate risks by carefully considering the system calls and their implications. The emphasis is on avoiding unnecessary or potentially risky calls.


Reeshabh Choudhary

Software Architect and Developer | Author : Objects, Data & AI.