Last December (2018), I created an advent calendar on the Japanese site adventar.org after seeing some Japanese CTFers creating a PWN-focused calendar there.

You can find it here: https://adventar.org/calendars/3435

The general theme of my calendar was solving browser pwnables from recent CTFs, with a strong focus on V8. I tried to arrange the challenges so that the learning curve would be reasonable and I’d have enough time to solve them. Things got even better when 35C3CTF, which took place right near the end of December, featured a fun V8 challenge that I added to the list. Overall, I finished the last challenge sometime around the last week of January 2019.

Below I’ll briefly discuss each problem I completed. Many of these have been discussed in depth elsewhere on the internet, so I’ll try to keep my contributions short and focus on general thoughts. I freely admit this is not a tutorial post, but more of a summary of my calendar.

Warning, spoilers follow. If you are just interested in solve scripts, check the bottom of the post.

“Blazefox” (BlazeCTF 2018)

Blazefox was the sole non-V8 challenge on this list. It involved a straightforward method added onto the Array class that would directly set the underlying length field to 420. Since a corrupted length field on an array is more or less the end state that browser exploits converge to, it was a great starting point for understanding the underlying fundamentals (properties? elements? inline elements? maps? backing stores?). 0vercl0k just published a great blogpost on this challenge, so I won’t discuss it much here.

My strategy for browser bugs of this category (those that lead to a corrupted length field) is to use the corrupted array to directly manipulate an adjacent victim ArrayBuffer. ArrayBuffer objects consist of little beyond a “backing store” pointer, which points to a raw data buffer, and a length field. By overwriting the backing store, we upgrade our weaker relative read/write into an arbitrary read/write memory primitive. From there, I used the same method as described in this phoenhex article to overwrite a GOT entry in libxul.
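Once the backing store is under control, reads and writes go through a typed-array view, so leaked pointers arrive as doubles. A helper pair that most exploits of this style define converts between a double and its raw 64-bit pattern (the names `ftoi`/`itof` are the conventional ones, not necessarily those from my solve scripts):

```javascript
// Reinterpret bits between float64 and uint64 through a shared ArrayBuffer.
// ftoi/itof are the conventional helper names, assumed here for illustration.
const conv_buf = new ArrayBuffer(8);
const conv_f64 = new Float64Array(conv_buf);
const conv_u64 = new BigUint64Array(conv_buf);

function ftoi(f) {        // double -> raw 64-bit integer bit pattern
    conv_f64[0] = f;
    return conv_u64[0];
}

function itof(i) {        // raw 64-bit integer bit pattern -> double
    conv_u64[0] = i;
    return conv_f64[0];
}
```

A pointer leaked out of a corrupted double array comes back as a float; ftoi() recovers the raw address, which itof() can then turn back into a double to stuff into the victim ArrayBuffer’s backing-store field.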

V8 Challenge (CSAW 2018 Finals)

Unlike Blazefox, this challenge doesn’t directly hand us a bug. Rather, it defines a new builtin, Array.prototype.replaceIf(index, callbackfn, replacement), giving us a chance to do some small-scale bughunting. In this case, the bug is related to proxies and a lack of state re-validation after allowing JavaScript execution to occur. JavaScript proxies are objects that let us override normal object behavior for common operations (getter/setter/method calls), and they are a common source of bugs for code expecting default behavior. We can define a handler that overrides certain property accessors to fake out the length field when it is requested.

var handler = {
    get: function(obj, prop) {
        if (prop == 'length')
            return 0x1337;
        else
            return obj[prop];
    }
};

var idx = 0x33; // OOB index we want to overwrite
new Proxy(new Array(0x8), handler).replaceIf(idx, function(elem) {
        return (idx == 0x33);
    }, 0x13370000);
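The length-faking part of the trick is plain JavaScript and easy to sanity-check outside of the patched d8: the handler reports a length far larger than the real backing array.

```javascript
// Sanity check of the length-faking handler in stock JavaScript:
// the proxy claims 0x1337 elements while the real array holds only 8.
const handler = {
    get: function(obj, prop) {
        if (prop == 'length')
            return 0x1337;
        else
            return obj[prop];
    }
};

const arr = new Array(0x8);
const proxied = new Proxy(arr, handler);

console.log(proxied.length); // 0x1337 == 4919, the faked length
console.log(arr.length);     // 8, the real length
```

Any builtin that reads .length through the proxy, like the challenge’s replaceIf, is fooled into iterating past the real backing store.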

Now we can use the replaceIf function to read and write OOB from our array. At this point, the next few exploit steps are similar to Blazefox: find our victim ArrayBuffer, grab its backing store, construct our r64()/w64() functions, etc. How do we get PC control? As of 2018, V8 ships without RWX pages enabled by default in the renderer process; however, this challenge re-enables them for us. So we can walk class/structure offsets to reach the RWX page corresponding to a JSFunction and simply write our shellcode there.

“Roll a d8” (PlaidCTF 2018)

This challenge was the first n-day challenge of the calendar, targeting crbug 821137. Players were given just a V8 version and the following regression test:

// Copyright 2018 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// Tests that creating an iterator that shrinks the array populated by
// Array.from does not lead to out of bounds writes.
let oobArray = [];
let maxSize = 1028 * 8;
Array.from.call(function() { return oobArray }, {[Symbol.iterator] : _ => (
  {
    counter : 0,
    next() {
      let result = this.counter++;
      if (this.counter > maxSize) {
        oobArray.length = 0;
        return {done: true};
      } else {
        return {value: result, done: false};
      }
    }
  }
) });
assertEquals(oobArray.length, maxSize);
// iterator reset the length to 0 just before returning done, so this will crash
// if the backing store was not resized correctly.
oobArray[oobArray.length - 1] = 0x41414141;

Thanks to the comments, the bug is pretty obvious: shrinking the array you are iterating over, inside the iterator callback, changes the array length without resizing the backing store. Not much is different here from the previous challenges - the pattern should be familiar by now: corrupt array length -> overwrite victim -> clobber function code pointer -> shellcode. Besides implementing the weaponization again, the main difference was getting used to the Chromium project’s bug-reporting and regression system.
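The Array.from.call construction in the regression test is worth a closer look: passing a function as the this value makes Array.from use it as the constructor, so the iterator’s values land in an array we already hold a reference to. A minimal stand-alone version (harmless on any fixed engine):

```javascript
// Array.from invokes its `this` value as a constructor; returning an
// existing array from it lets the iterable populate an object we control.
let target = [];
let iterable = {
    [Symbol.iterator]: () => ({
        counter: 0,
        next() {
            return this.counter < 4
                ? { value: this.counter++, done: false }
                : { done: true };
        }
    })
};
let result = Array.from.call(function() { return target; }, iterable);

console.log(result === target); // true: Array.from filled *our* array
console.log(target);            // [ 0, 1, 2, 3 ]
```

This is what lets the regression test keep a handle (oobArray) on the array being populated, so the iterator can shrink it mid-iteration.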

“V9” (34C3CTF)

V9 represented a completely different direction from the previous browser challenges. It required an understanding of Chrome’s Turbofan JIT subsystem. This was an interesting opportunity to approach JIT bugs because the provided patchfile was quite small:

@@ -26,6 +26,7 @@ Reduction RedundancyElimination::Reduce(Node* node) {

@@ -167,6 +168,15 @@ bool CheckSubsumes(Node const* a, Node const* b) {
           }
           break;
         }
+        case IrOpcode::kCheckMaps: {
+            // CheckMaps are compatible if the first checks a subset of the second.
+            ZoneHandleSet<Map> const& a_maps = CheckMapsParametersOf(a->op()).maps();
+            ZoneHandleSet<Map> const& b_maps = CheckMapsParametersOf(b->op()).maps();
+            if (!b_maps.contains(a_maps)) {
+                return false;
+            }
+            break;
+        }

The challenge adds a new opcode to the list of those handled by RedundancyElimination, which is a JIT pass responsible for removing redundant nodes in the sea-of-nodes representation. The pass itself is invoked during the “early optimization” and “load elimination” phases of the Turbofan pipeline. We can visualize all Turbofan passes and node graphs using the Turbolizer tool, also available in V8’s git repo. In this case, the added code removes a later CheckMaps node when the maps checked by the first node are a subset of those checked by the second. You can imagine that situation occurring with code like this:

var x = [1.1, 2.2, 3.3, 4.4];
x[0] = 5.5; // [A]
console.log(x);
x[1] = 6.6; // [B]

At [A] and [B], a CheckMaps is emitted to ensure that the console.log(x) call has not transitioned x’s underlying map. Such a node might be emitted as a protection against an object changing from PACKED_DOUBLE_ELEMENTS to DICTIONARY_ELEMENTS, for example. However, the added reduction is unsound: the call between the two checks can run arbitrary JavaScript, so x can still transition, and after the second CheckMaps is eliminated the emitted fast element access will be incorrect. The following code transitions an Array in exactly that way (packed -> dictionary), resulting in an OOB access:

var x = [1.1, 1.1, 1.1, 1.1]; // declare a PACKED_DOUBLE_ELEMENTS array
x[3] = 1.1; // inlined StoreElement, protected by CheckMaps

x.length = 0x7f0000; // transition to DICTIONARY_ELEMENTS

// At this point, x has DICTIONARY_ELEMENTS, but the JIT thinks it is still PACKED.
// The following inlined StoreElement will incorrectly offset from the array, rather
// than resolving the lookup through the elements pointer.
x[20] = val; // OOB write
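The transition itself is observable from plain JavaScript, even though the OOB only manifests under the buggy JIT. Setting a huge length forces V8 to normalize the elements to dictionary mode, after which indexed stores resolve through the dictionary rather than a flat backing store (the elements-kind behavior described here is V8-specific):

```javascript
let x = [1.1, 1.1, 1.1, 1.1];  // starts as PACKED_DOUBLE_ELEMENTS in V8
x.length = 0x7f0000;           // huge length: V8 normalizes to dictionary elements

console.log(x[20]);            // undefined: index 20 is a hole on a correct engine
x[20] = 13.37;                 // store resolved through the element dictionary
console.log(x[20]);            // 13.37
console.log(x.length);         // 0x7f0000 == 8323072
```

Under the patched build, the JIT still believes the packed layout holds, so the same x[20] store is compiled as a raw offset from the object instead of a dictionary lookup.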

“krautflare” (35C3CTF)

Much has been written about krautflare elsewhere online, including some excellent writeups (here and here). The key problem in this challenge is how to delay optimization in V8 until the ConstantFoldingReducer is no longer invoked. Doing so prevents the typing bug, which can be induced in an early typing stage, from being optimized away before it can be used to generate buggy code. In theory, the answer is straightforward - prevent V8 from performing type analysis until a later pass has removed some intermediate construct. One such example, which I and others used, involves forcing a delay until escape analysis:

function diagonal(a) {
    return abs({x:a, y:a});
}

// After Escape Analysis...

function diagonal(a) {
    return Math.sqrt(a*a + a*a);
}
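Here abs is assumed to be a small helper computing the Euclidean length of the point (the name is from my notes, not from V8); with that assumption, the before/after equivalence is easy to check:

```javascript
// `abs` is assumed to compute the Euclidean norm of a point; after escape
// analysis the {x, y} object literal is scalar-replaced, abs is inlined,
// and the whole function reduces to the second form.
function abs(p) {
    return Math.sqrt(p.x * p.x + p.y * p.y);
}
function diagonal(a) {
    return abs({x: a, y: a});
}
function diagonal_optimized(a) {   // what Turbofan is left with
    return Math.sqrt(a * a + a * a);
}

console.log(diagonal(3) === diagonal_optimized(3)); // true
```

The object literal never escapes diagonal, so escape analysis can eliminate it entirely - which is exactly the later-pass cleanup we need the typer to wait for.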

I didn’t solve this challenge during the competition. I knew I had to wait until escape analysis to prevent early optimization, but I had trouble triggering it during the CTF. In the end, through a combination of child functions and hiding arguments, I got it to work as an OOB write. For some reason, Turbofan was not removing the CheckBounds on my OOB read attempts, which I think may be related to a Load node not being inlined, whereas the StoreElement node was lowered in a way that removed its internal bounds check.

One interesting thing to note is that constructions involving escaping object properties, like the following:

function x() {
    return {a: 1}.a;
}
var y = x();

…seem to be optimized during the “load elimination” stage if possible, right before “escape analysis”. Sufficient complexity or child functions will prevent that from happening. This means that contrary to the name of the phase, simple objects will undergo escape analysis optimization prior to the formal “escape analysis phase.” It’s also possible to prevent the “load elimination” phase from optimizing it by including a large number of class members (see kMaxTrackedFields, currently 32), which _tsuro utilized in his reference solution.

“Just-in-time” (GoogleCTF Finals 2018)

This challenge adds a small Reducer to the V8 pipeline, which is basically just a phase (like “dead code elimination”, or “load elimination” as we discussed above). The added buggy DuplicateAdditionReducer combines JSNumber operations on constant double values at JIT compile time. For example, expressions of the form 1.1 + (2.2 + 3.3) would be converted to 1.1 + 5.5. The combination was done by pulling out the underlying double values and adding them with C++ float semantics. Unfortunately, that doesn’t quite match JSNumber addition semantics. While most people online abused the fact that Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2, solving krautflare right before this made me think of

-Infinity + Number.MAX_VALUE + Number.MAX_VALUE == -Infinity

which correctly evaluates to -Infinity, since JavaScript adds left to right. However, the DuplicateAdditionReducer folds the two constants first, turning the expression into

-Infinity + Infinity == NaN

which creates an observable typing bug. Afterwards, the problem actually reduces to that of krautflare, just substituting Object.is(..., -0) with Object.is(..., NaN). In fact, my final buggy JITted function for this challenge is almost identical to my krautflare solution.
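The grouping difference is observable in any JavaScript engine, since IEEE 754 addition is not associative:

```javascript
// JS evaluates left to right: (-Infinity + MAX_VALUE) stays -Infinity.
let honest = -Infinity + Number.MAX_VALUE + Number.MAX_VALUE;
console.log(honest); // -Infinity

// What the buggy reducer computes: the two constants are folded first,
// MAX_VALUE + MAX_VALUE overflows to +Infinity, and -Infinity + Infinity is NaN.
let folded = -Infinity + (Number.MAX_VALUE + Number.MAX_VALUE);
console.log(folded); // NaN
```

The JITted code thus produces a NaN in a spot the typer has proven can never be NaN, which is the same shape of typing bug as krautflare’s -0.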

If you’re interested in reading more about this challenge, __x86 has a great post that dives deep into it here.

“Mr. Mojo Rising” (GoogleCTF Finals 2018)

After completing a series of renderer bugs, it seemed fitting to throw in at least one sandbox escape (SBX) challenge. This was an n-day bug discovered by Project Zero that allowed relative r/w off of a Mojo data pipe, which is basically an mmap’d shared-memory region. The Mojo documentation is pretty sparse, and I ended up spending a decent amount of time fiddling with ServiceWorkers to get things to play nice with headless Chrome. Eventually, I was able to trigger the primitives and write straightline exploit code with await. Ultimately, this was my most brittle exploit - it’s heavily offset- and allocation-order dependent. I abuse the predictable ordering of mmap allocations to overwrite a function in libc’s GOT to point to the magic gadget, a classic CTF trick.

All that work for this, an asciinema of it landing.

Parting Thoughts

I had a lot of fun completing the above challenges and will definitely continue working on browser exploitation. While I’m not sure how I feel about the recent trend of “weaponize-an-nday-as-a-challenge” in CTF, these problems provide convenient environments for practicing weaponization, with more emphasis on browser internals than on the environmental factors that complicate real-world targets. At the very least, it’s definitely good practice!

You can find all my solution scripts (as well as collected challenge readmes+patchfiles) here.