M-unition -

Using a Custom VDB Debugger for Exploit Analysis

February 14, 2013

Analyzing an exploit and understanding exactly how it lands can take a long time due to inadequate analysis tools. One way to speed up understanding how an exploit behaves is to use Vtrace and VDB. In this post, I explain how to create a custom VDB debugger in order to detect, analyze, and prevent execution of an exploit payload.

Background on Vtrace, VDB, the vulnerability and the exploit
Vtrace is a cross-platform and cross-architecture debugging API. VDB is a cross-platform and cross-architecture debugger built on Vtrace. Both tools are available at http://visi.kenshoto.com.

To illustrate why you may want to use VDB and Vtrace to create a custom debugger, I’ll use the NVIDIA exploit released on December 25th, 2012 on the full disclosure mailing list. (See http://seclists.org/fulldisclosure/2012/Dec/261 for the exploit itself). Thanks to @peterwintrsmith for providing me a fun bug and giving me a good example to demonstrate a few capabilities of Vtrace and VDB.

Without going into too much detail about the exploit, the NVIDIA driver installation package installs a service with description ‘NVIDIA Driver Helper Service’ and service name ‘NVSvc’. The service executable points at ‘%systemroot%\system32\nvvsvc.exe’. When the service is started, ‘nvvsvc.exe’ creates or uses an existing named pipe at ‘\\.\pipe\nvsr’. Next, the service waits for a client to connect to the named pipe. When a client connects to the named pipe, the service spawns a thread to read data from the pipe.

There are different types of messages a client can send to the pipe. The message type is specified in the first two bytes of the message. The exploit targets the state machine of the message processing code that parses messages with opcode 0x52. This message format appears to allow clients to send a Unicode registry key name and registry key value. The parts of the message relevant to the exploit are:

  • Opcode (message type)
  • Registry Key Name
  • Registry Key Value Size
  • Registry Key Value
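To make the message layout concrete, here is a sketch of building such a message in Python. The field order follows the list above, but the exact field widths, endianness, and offsets are assumptions for illustration, not the real wire format:

```python
import struct

def build_nvsr_message(key_name, value, opcode=0x52):
    """Build a message in the rough shape described above.
    The field widths and layout here are illustrative assumptions."""
    msg = struct.pack('<H', opcode)                    # message type (first two bytes)
    msg += key_name.encode('utf-16-le') + b'\x00\x00'  # NUL-terminated Unicode key name
    msg += struct.pack('<I', len(value))               # registry key value size (assumed 4 bytes)
    msg += value                                       # registry key value
    return msg

msg = build_nvsr_message(u'Software\\Example', b'A' * 8)
```

In the real exploit, of course, the attacker lies about the value size; the point here is only the shape of the message the parser expects.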

There are at least two problems with the ‘NVSvc’ service.

The first problem is with the permissions on the named pipe. Figure 1 shows the permissions of the named pipe. The permission FILE_ALL_ACCESS indicates that anyone can send messages to the pipe.

Figure 1: Using accesschk to show ‘nvsr’ pipe permissions

Reviewing the disassembly reveals the reason why the pipe permissions are wide open; the code creates the ‘nvsr’ named pipe with a NULL DACL that allows anyone to read and write to the named pipe.

The second problem is how ‘nvvsvc.exe’ handles messages with opcode 0x52; pseudocode is in Figure 2. The message processing code determines the message type from the opcode in the message, uses wcsnlen to obtain the length of the registry key name, and then uses that length to index into the message and retrieve the registry key value size. Next, the registry key value size is used, unchecked against the destination buffer size, in a memmove operation between two fixed-length local buffers allocated on the stack. After the memmove, the code writes data copied from the local buffer back to the pipe, again using the unchecked registry key value size. As described in the attachment to the email on the full disclosure list, the positioning of the two buffers in memory allows for memory disclosure, dynamically determining the version of the ‘nvvsvc’ binary, dynamically determining ROP gadgets, and, ultimately, gaining code execution.
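To see why the unchecked size yields a memory disclosure, here is a toy Python model of the handler. The buffer sizes are invented and the real stack layout in ‘nvvsvc.exe’ differs; the model only shows the effect of trusting the attacker-supplied size:

```python
def process_message(value, value_size, buf_a=16, buf_b=16):
    """Toy model of the flawed opcode-0x52 handler: two adjacent
    fixed-size 'stack' buffers; value_size is attacker-controlled
    and never checked against either buffer's size."""
    frame = bytearray(buf_a + buf_b)
    frame[buf_a:] = b'S' * buf_b       # stand-in for adjacent stack data
    frame[0:len(value)] = value        # the memmove with attacker data
    # The reply written back to the pipe also uses the unchecked size,
    # so an oversized value_size discloses bytes past the end of buf_a.
    return bytes(frame[0:value_size])

reply = process_message(b'A' * 4, 24)  # claim 24 bytes, send only 4
```

The reply contains the 4 attacker bytes followed by bytes from beyond the intended buffer, which is the disclosure primitive the exploit builds on.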

Figure 2: Pseudocode for the second problem

Writing a Custom VDB Debugger to Detect and Analyze the Exploit
The purpose of writing this custom VDB debugger is to demonstrate one way to detect and analyze an exploit. This section walks through figuring out how to detect and analyze the NVIDIA exploit released by @peterwintrsmith.

How can we detect that these vulnerabilities are being exploited? For this post, I assume that ANY time the program counter enters a memory map that is NOT backed by a file, that is “a bad thing.” Therefore, if the program counter ends up in the heap, stack, or another allocated region not backed by a file, I want to know about it.
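The check itself is simple. Vtrace exposes memory maps as (base, size, perms, filename) tuples, with an empty filename for anonymous maps; treat that exact tuple shape as an assumption here:

```python
def pc_in_file_backed_map(pc, maps):
    """Return True only if pc falls inside a map backed by a file.
    maps: iterable of (base, size, perms, filename) tuples; anonymous
    maps (heap, stack, VirtualAlloc'd regions) have an empty filename."""
    for base, size, perms, fname in maps:
        if base <= pc < base + size:
            return bool(fname)          # file-backed iff a backing path exists
    return False                        # pc outside every known map: also "a bad thing"

maps = [
    (0x00400000, 0x1000, 'r-x', r'c:\windows\system32\nvvsvc.exe'),
    (0x00200000, 0x1000, 'rwx', ''),   # anonymous (e.g., heap) map
]
```

Everything that follows is about arranging for this check to run at every control flow transfer the exploit can cause.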

Other software, such as OllyDbg, lets the user break when the program counter is inside or outside a certain range of memory; game-protection engines also use this technique to try to restrict hackers from arbitrarily calling ‘protected’ methods from injected code. [1] These approaches differ from mine in that they do not distinguish between file-backed and non-file-backed memory maps.

In order to analyze the exploit, I needed to find the vulnerable binaries, compile the exploit and get everything working. My procedure is documented in the following list:

  1. Installed the 64-bit driver package from NVIDIA (version 310.70) on a 64-bit system.
  2. Double-clicked the package so it extracted, but did not install it (so I didn’t need an actual NVIDIA graphics card on the test system)
  3. Navigated into the ‘Display.Driver’ directory, right-clicked and extracted (with 7zip or similar) ‘NvCplSetupInt.exe’
  4. At an administrator command prompt, changed directory into the directory extracted from ‘NvCplSetupInt.exe’ and ran ‘nvvsvc.exe -install’
  5. Copied the nvvsvc.exe binary to c:\windows\system32
  6. Started the service with services.msc or ‘net start nvsvc’
  7. Downloaded the exploit and redefined the shellcode payload as a run of 0x90 NOPs followed by a single 0xcc breakpoint; compiled the exploit
  8. Ran the exploit and verified it worked
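The replacement payload in step 7 can be as simple as a NOP sled ending in an int3, so the process traps cleanly if the payload ever executes (the sled length here is arbitrary):

```python
# NOP sled plus an int3 breakpoint; harmless if it runs under a debugger
NOP, INT3 = b'\x90', b'\xcc'
payload = NOP * 64 + INT3
```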

Next, I extended the stalker subsystem inside Vtrace and VDB to detect code executing in non-file-backed memory maps. If you haven’t used the stalker subsystem before: it performs dynamic disassembly from user-specified entry points and sets new breakpoints on the first instruction of each basic block it discovers. Depending on the type of instruction, the breakpoint is removed after being hit the first time. In addition, dynamic branch instructions in basic blocks get a ‘special’ breakpoint called a StalkerDynBreak. When one is hit, the targets of the dynamic branch are computed and new stalker entry point breakpoints are set on those targets. This is a partial description of stalker, but the minimum required to understand the rest of the post; for more detail, review the stalker code and see the wiki at visi.kenshoto.com.

If you think about the goal, you might wonder why stalker doesn’t already detect execution in non-file-backed memory maps. Stalker was designed for ‘well formed’ code, not code that manually messes with the stack to alter control flow. The issue is that return instructions do not have stalker breakpoints set on them; stalker assumes that if it disassembled a basic block containing a call, the program counter will later return to the instruction after the call and eventually hit another basic block that stalker has already set a breakpoint on. The NVIDIA exploit manipulates the stack directly to indirectly alter control flow; therefore, I needed to make stalker model all jmp and return instructions as dynamic stalker breaks.

Therefore, we will create a new type of stalker break called a ‘StalkerRetBreak’. Below is the relevant code for the ‘StalkerRetBreak’ class.

Figure 3: Code for StalkerRetBreak class

The ‘StalkerRetBreak’ breakpoint reads the return address off the stack and sets a new stalker breakpoint at that address. Therefore, if anything during execution of the function manipulated the return address, stalker still ‘sees’ the control flow transfer. Essentially, return instructions become dynamic stalker breaks. A similar change was required to model jumps; those changes were made directly in the StalkerDynBreak class. See [2] for all of the changes.
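The heart of that logic can be sketched as follows. The method names loosely follow Vtrace’s API but are simplified stand-ins, and the FakeTrace class exists only so the sketch runs outside a debugger; treat the interfaces as assumptions and see Figure 3 and [2] for the real code:

```python
class StalkerRetBreak(object):
    """On a ret instruction: peek the saved return address off the
    stack, then either flag a transfer into a non-file-backed map
    or keep stalking from the return target."""
    def notify(self, trace):
        sp = trace.getStackCounter()
        retaddr = trace.readPointer(sp)
        mmap = trace.getMemoryMap(retaddr)  # (base, size, perms, fname) or None
        if mmap is None or not mmap[3]:     # no map, or no backing file
            trace.setMeta('keepgoing', False)
        else:
            trace.addStalkerEntry(retaddr)  # breakpoint the return target

class FakeTrace(object):
    """Minimal stand-in trace so the sketch is demonstrable."""
    def __init__(self, retaddr, mmap):
        self._ret, self._map = retaddr, mmap
        self.meta, self.entries = {'keepgoing': True}, []
    def getStackCounter(self): return 0x1000
    def readPointer(self, va): return self._ret
    def getMemoryMap(self, va): return self._map
    def setMeta(self, key, val): self.meta[key] = val
    def addStalkerEntry(self, va): self.entries.append(va)

bad = FakeTrace(0x7fff0000, (0x7fff0000, 0x1000, 'rwx', ''))       # anonymous map
StalkerRetBreak().notify(bad)
ok = FakeTrace(0x00401000, (0x00400000, 0x2000, 'r-x', 'nvvsvc.exe'))  # file-backed
StalkerRetBreak().notify(ok)
```

A ‘return’ into an anonymous map clears the keepgoing flag; a normal return just becomes a new stalker entry point.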

Next we have to write the code that is the automated VDB debugger. Here is the code to do that:

Figure 4: Code for the automated VDB debugger

When the code in Figure 4 is run, it restarts the nvsvc service, attaches to the process, sets the initial stalker breakpoint at the start of the specified function (CreateThread), and ‘runs’ the debugger. We have a special meta variable that means ‘keep going until someone says to stop.’ When someone says stop, the code outputs the stalker hits and then runs the script ‘disas_hits.py’.

The ‘keepgoing’ variable is controlled by the StalkerRetBreak: if the breakpoint detects a transition into a non-file-backed memory map, it sets the variable and sends a break to the debugger, causing the while loop to exit.

You might be wondering what ‘disas_hits.py’ does. It iterates over each recorded stalker hit and, for each hit, disassembles the first 16 bytes, stopping early if it hits a return instruction. ‘disas_hits.py’ is responsible for creating the highlighted output at the bottom of Figure 5 that displays the memory map, program counter, and disassembly/gadget. Why isn’t this code part of the automated debugger? Because I wanted to be able to run it from the VDB PyQt GUI as well as from my standalone automated debugger. See [2] for the ‘disas_hits.py’ source code.
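A toy stand-in for the per-hit logic looks like this. Note the real script uses a disassembler (VDB uses envi) rather than byte-scanning, since 0xC3 can also occur inside a longer instruction; this sketch only illustrates the “take 16 bytes, stop at the first ret” idea:

```python
def snippet_until_ret(mem, va, maxlen=16):
    """Return up to maxlen bytes at va, truncating after the first
    0xC3 (ret) byte. Byte-scanning is only an approximation of
    disassembling until a return instruction."""
    buf = mem[va:va + maxlen]
    idx = buf.find(b'\xc3')
    return buf if idx < 0 else buf[:idx + 1]

# e.g. a pop ecx / ret gadget followed by unrelated bytes
gadget = snippet_until_ret(b'\x59\xc3\xcc\xcc\x90\x90', 0)
```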

What does the automated debugger output? Figure 5 shows the output after running the automated debugger (‘c:\python27\python.exe mydebugger.py’) and the exploit:

Figure 5: Output of the automated debugger

Notice that we detected the call to VirtualProtect (which corresponds to a gadget, identified by address), the gadgets specified in the exploit, and the exploit payload that I specified. The gadgets specified in the exploit are shown in Figure 6.

Figure 6: ROP gadgets in the source code of the exploit

The exploit ‘payload’ is never executed (though the ROP gadgets still are), since we detect the control flow transfer into the non-file-backed memory map; we just print out what *would* have executed, for reference.

See [2] for a ZIP that contains the source code and the patch against the public release of vdb_20121228.

Interested in more posts on VDB/Vtrace/vivisect? Leave me a comment or DM me @darkrelativity.

[1] http://www.gamedeception.net/archive/index.php?t-18635.html

[2] https://sites.google.com/site/mvdbcode/


Comments

    1. By Eric G on February 14 at 8:31 pm

      Awesome overview… I will have to pull this stuff down and try it out. Reversing exploits and malware is fascinating stuff.

    2. By Eric G on February 14 at 8:39 pm

      Awesome writeup… reversing malware and exploits has always been somewhat of a black art. Running malware in VMs like Cuckoo never seems to really “dig deep” enough. I’ll have to pull down the code and your examples and give this exercise a shot.

    3. By Juan on March 12 at 9:37 am

      Great Post. First Post I seen of this sort.

      Will definitively would love to see more of this.
      To be honest I was able to follow but will not fully understand most of it.
      I have made note to pull the code down and give it a shot. and see if i can replicate for another exploit.

      thanks again for sharing
