Anti-debug with VirtualAlloc’s write watch

A lesser-known feature of the Windows memory manager is that it can maintain write watches on allocations for debugging and profiling purposes. Passing the MEM_WRITE_WATCH flag to VirtualAlloc “causes the system to track pages that are written to in the allocated region”. The GetWriteWatch and ResetWriteWatch APIs can be used to manage the watch counter. This can be (ab)used to catch out debuggers and hooks that modify memory outside the expected pattern.

There are four primary ways to exploit this feature.

The first is a simple buffer manipulation check. Allocate a buffer with write watching enabled, write to it once, get the write count, and see if it’s greater than 1.

The second is an API buffer manipulation check. Allocate a buffer with write watching enabled, pass it as a parameter to an API that expects a buffer, but pass invalid values to other parameters. If an API hook doesn’t check parameters properly, or manipulates parameters, it may write to the buffer. Check the number of writes to the buffer after the call, and if it’s nonzero then there’s a hook in place. Any API will do as long as it writes to some memory. A particularly good trick is to use an API where there’s some kind of count value passed as a reference – in the real API the value will likely not be set, thus producing no memory writes, but in a hook there’s a bigger likelihood that they’ll set some placeholder value regardless.

Third, we can use the buffer to store the result of some check we care about, e.g. IsDebuggerPresent. If the write count is one and the value in the buffer is FALSE then we can assume that there’s no debugger attached and nobody tampered with the result of the call (or skipped the call).

Finally, we can allocate some memory with RWX protection and write watching enabled, copy some anti-debug check there, call ResetWriteWatch to ensure the write counter is zeroed, execute our payload, then check the write count.

Obviously in all cases these checks themselves can be skipped over, but it’s not a well-known trick and may be missed by novice reverse engineers.

I’ve contributed these tricks to al-khaser, a tool for testing VMs, debuggers, sandboxes, AV, etc. against many malware-like defences.


ASUS UEFI Update Driver Physical Memory Read/Write

A short while ago, slipstream/RoL dropped an exploit for the ASUS memory mapping driver (ASMMAP/ASMMAP64) which was vulnerable to complete physical memory access (read/write) to unprivileged users, allowing for local privilege escalation and all sorts of other problems. An aside to this was that there were also IOCTLs available to perform direct I/O operations (in/out instructions) directly from unprivileged usermode, which had additional interesting impacts for messing with system firmware without triggering AV heuristics.

When the PoC was released, I noted that I’d reported this to ASUS a while beforehand, later clarifying that I’d actually reported it to them in March 2015. To be fair to ASUS, they were initially very responsive via email, and particularly via Twitter DM on the @ASUSUK account, and it seems like they had some bad luck with both the engineer working on the fixes and the customer support advisor handling my tickets leaving the company, resulting in a significant delay in the triage and patch processes. However, promises to keep me in the loop were not kept, and I was always chasing them up for answers.

In addition to the ASMMAP bugs, I also reported the exact same bugs in their UEFI update driver (AsUpIO.sys). This driver is deployed as part of the usermode UEFI update toolset, and exposes almost identical functionality which (as slipstream/RoL pointed out) is likely from an NT3.1 example driver that was written long before Microsoft took steps to segregate malicious users from physical memory in any meaningful way.

One additional piece of functionality which I believe was missed from the original ASMMAP vulnerability release was the ability to read/write Model Specific Registers (MSRs) as an unprivileged user. This was, again, a function exposed as an IOCTL in the driver. For those of you not versed in MSRs, they’re implementation-specific registers which contain control and status values for the processor and supporting components (e.g. SMM). You can read more about them in chapter 35 of the Intel 64 Architecture Software Developer’s Manual Volume 3. MSRs are particularly powerful registers in that they offer the ability to enable or disable all sorts of internal functionality on the processor, and are at least theoretically capable of bricking hardware if you abuse them in the wrong way. One of the most interesting MSRs is the Extended Feature Enable Register (EFER) at index 0xC0000080, which contains the No-eXecute Enable (NXE) and Secure Virtual Machine Enable (SVME) bits. Switching the NXE bit off on a live VM in VirtualBox crashes the VM with a “Guru Meditation” error (there’s an age cutoff on people who will get that reference), which I suppose is a novel anti-VM trick on its own, not to mention the intended behaviour of switching off NX on real-steel hardware.

Rather than just providing you with a bit of PoC code, I thought I’d take the opportunity to go through exactly how I discovered the bugs and what approach I took towards reliable exploitation.

Generally speaking, Windows drivers have a number of interfaces through which usermode code may communicate with them. The most important are I/O Request Packets (IRPs), which are sent to a driver when code performs a particular operation on the driver’s device object. The exposed functions which IRPs are sent to are known as Major Functions, examples of which include open, close, read, write, and I/O control (otherwise known as IOCTL). The descriptor structure for a driver object contains an array of function pointers, each pointing to a dispatch function for a major function. These are fantastic targets for bug-hunting in drivers, since they’re usermode-accessible (and often accessible from non-admin accounts) and can often result in local privilege escalation to kernelmode.

The first key thing to look for is whether or not the driver object is accessible as a low-privilege user. It’s all well and good finding a bug which gets you kernel code execution, but if you’ve got to be an admin to exploit it it’s a bit of a non-issue. When a driver goes through its initialisation steps, it usually names itself and creates a device object using the IoCreateDevice API, then creates a symbolic link in the DosDevices object directory using the IoCreateSymbolicLink API. An example is as follows:

    PDEVICE_OBJECT pDeviceObject = NULL;
    UNICODE_STRING driverName, dosDeviceName;
    RtlInitUnicodeString(&driverName, L"\\Device\\Example");
    RtlInitUnicodeString(&dosDeviceName, L"\\DosDevices\\Example");
    status = IoCreateDevice(pDriverObject, 0, &driverName,
                            FILE_DEVICE_UNKNOWN, 0,
                            FALSE, &pDeviceObject);
    // ...
    IoCreateSymbolicLink(&dosDeviceName, &driverName);
    // ...
    return status;

In order to check whether or not the driver’s device object is accessible by low-privilege users, we need to know what name it picked for itself. There are a few approaches to this: we could debug the system and set a breakpoint on the IoCreateDevice API; we could reverse engineer the driver using a tool such as IDA; or we could simply extract all the strings from the binary and look for any that start with “\Device\”.
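Device names in driver binaries are stored as UTF-16LE, so a plain ASCII strings pass can miss them. The following portable sketch (helper name is mine; morally equivalent to `strings -e l driver.sys | grep Device`) scans a raw byte buffer for such a name:

```c
#include <stddef.h>
#include <string.h>

/* Scan a raw byte buffer for a UTF-16LE string starting with "\Device\"
 * and copy the ASCII-folded name into `out`. Returns 1 on a match. */
static int find_device_name(const unsigned char *buf, size_t len,
                            char *out, size_t outlen)
{
    static const char pat[] = "\\Device\\";
    const size_t plen = sizeof(pat) - 1;

    for (size_t i = 0; i + plen * 2 <= len; i++) {
        size_t j;
        for (j = 0; j < plen; j++)
            if (buf[i + j * 2] != (unsigned char)pat[j] ||
                buf[i + j * 2 + 1] != 0)
                break;
        if (j != plen)
            continue;

        /* matched: copy printable UTF-16LE chars until a terminator */
        size_t n = 0;
        for (size_t k = i; k + 1 < len && n + 1 < outlen; k += 2) {
            if (buf[k + 1] != 0 || buf[k] < 0x20 || buf[k] > 0x7E)
                break;
            out[n++] = (char)buf[k];
        }
        out[n] = '\0';
        return 1;
    }
    return 0;
}
```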

In the case of AsUpIO.sys, dropping it into IDA shows that it does exactly the above, using the name “AsUpdateio”:


This now tells us exactly what we should be looking for. In order to inspect the device object and view its discretionary access control list (DACL), we can use WinObj.


As we can see here, the Everyone group is given Read, Write, and Special permissions, allowing the device object to be interacted with directly from low-privilege usermode. Note that these ACEs are not set by the driver; this is a somewhat “hardened” permissions set applied by an up-to-date Windows 10 install, although it is still accessible by everyone. In Windows 8 and earlier, setting the DACL to null simply results in it having no ACEs, allowing everyone complete access, unless you apply a hardened security policy. This is because, prior to Windows 10, the root object namespace had no inheritable ACEs.

The snippet above also gives us the address of the HandleIoControl function which is assigned to handle the Create, Close, and IOCTL major functions. Reverse engineering this shows the IOCTL number which is used for mapping memory:


(note: the ASUS_MemMap name was set by me; I renamed it after analysing each function in this set of branches to work out their functions)

Now that we know the device object is accessible, we want to exploit the bugs. In this case the IOCTL for doing the memory mapping is 0xA040244C, which can be found by reverse engineering the HandleIoControl routine we found above. Just as in the original slipstream/RoL exploit, this IOCTL can be used to map any physical memory section to usermode. The downside, from an exploitation perspective, is that the function covers a wide range of potential memory locations, including addresses where the HAL has to translate to bus addresses rather than the usual physical memory. This is fine if we know a specific location we want to map and access, but it becomes a bit fraught if we want to read through all of physical memory; trying to map and read an area of memory reserved for certain hardware might crash or lock up the system, and that’s not much use for a privesc.
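As an aside, the IOCTL number itself encodes several fields, which is handy when triaging a driver's dispatch routine. A small portable sketch (field layout per the CTL_CODE macro in winioctl.h; the struct and function names are mine) decodes 0xA040244C into device type 0xA040 (high bit set, i.e. vendor-defined), FILE_ANY_ACCESS, function 0x913, METHOD_BUFFERED:

```c
#include <stdint.h>

/* Decode a Windows IOCTL code into its constituent fields, per the
 * CTL_CODE layout: DeviceType<<16 | Access<<14 | Function<<2 | Method. */
typedef struct {
    uint16_t device_type;
    uint8_t  access;     /* 0 = FILE_ANY_ACCESS */
    uint16_t function;
    uint8_t  method;     /* 0 = METHOD_BUFFERED */
} ioctl_fields;

static ioctl_fields decode_ioctl(uint32_t code)
{
    ioctl_fields f;
    f.device_type = (uint16_t)(code >> 16);
    f.access      = (uint8_t)((code >> 14) & 0x3);
    f.function    = (uint16_t)((code >> 2) & 0xFFF);
    f.method      = (uint8_t)(code & 0x3);
    return f;
}
```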

The approach I took was to find a specific location in kernel memory which I knew was safe, then map and read that, and use that single operation to gain the ability to reliably read memory over the lifetime of the system. The ideal object to gain control over is \Device\PhysicalMemory, as this gives us direct usermode access to physical memory. The first hurdle is that we need a kernel pointer leak to identify the address of that object’s descriptor.

First, we want to know which processes have a handle to this object. By running Process Explorer as an administrator (we don’t need to do this in an actual attack scenario), we can see that the System process keeps a handle to it open:


Using an undocumented feature of the NtQuerySystemInformation API, i.e. the SystemHandleInformation information class, we can pull out information about every single handle open on the system. The returned structure for each handle looks like the following:

    typedef struct _SYSTEM_HANDLE_TABLE_ENTRY_INFO {
        DWORD    dwProcessId;
        BYTE     bObjectType;
        BYTE     bFlags;
        WORD     wValue;
        PVOID    pAddress;
        DWORD    GrantedAccess;
    } SYSTEM_HANDLE_TABLE_ENTRY_INFO;

The pAddress field points to the kernel memory address of the object’s descriptor. By enumerating all open handles on the system and checking for dwProcessId == 4 (i.e. the System process) and bObjectType matching the object type ID of a section (this differs between Windows versions), we can find all the sections opened by the System process, one of which we know will be \Device\PhysicalMemory. In fact, System only has three handles open to sections in Windows 10, so we can just give ourselves access to all of them and not worry too much.

Of course, now that we have the address of the section descriptor in kernel memory, we still need to actually take control of that section object somehow. Let’s take a look at the header structure for the object, 0x30 bytes before the section descriptor, in WinDbg:

0: kd> dt nt!_OBJECT_HEADER 0xffffc001`cca13bd0-0x30
   +0x000 PointerCount     : 0n65537
   +0x008 HandleCount      : 0n2
   +0x008 NextToFree       : 0x00000000`00000002 Void
   +0x010 Lock             : _EX_PUSH_LOCK
   +0x018 TypeIndex        : 0x23 '#'
   +0x019 TraceFlags       : 0 ''
   +0x019 DbgRefTrace      : 0y0
   +0x019 DbgTracePermanent : 0y0
   +0x01a InfoMask         : 0x2 ''
   +0x01b Flags            : 0x16 ''
   +0x01b NewObject        : 0y0
   +0x01b KernelObject     : 0y1
   +0x01b KernelOnlyAccess : 0y1
   +0x01b ExclusiveObject  : 0y0
   +0x01b PermanentObject  : 0y1
   +0x01b DefaultSecurityQuota : 0y0
   +0x01b SingleHandleEntry : 0y0
   +0x01b DeletedInline    : 0y0
   +0x01c Spare            : 0x5000000
   +0x020 ObjectCreateInfo : 0x00000000`00000001 _OBJECT_CREATE_INFORMATION
   +0x020 QuotaBlockCharged : 0x00000000`00000001 Void
   +0x028 SecurityDescriptor : 0xffffc001`cca12273 Void
   +0x030 Body             : _QUAD

Now, remember earlier when I said that having a DACL set to null gives everyone access? The SecurityDescriptor field here is, in fact, exactly what gets set to null in such a situation. If we overwrite the field with zeroes, then (theoretically) everyone has access to the object. However, this object is a special case: it has the KernelOnlyAccess flag set. This means that no usermode processes can gain a handle to it. We need to switch this off too, so we set the Flags field to 0x10 to keep the PermanentObject flag but clear the rest:

0: kd> eb (0xffffc001`cca13bd0-0x30)+0x1b 0x10
0: kd> eq (0xffffc001`cca13bd0-0x30)+0x28 0
0: kd> dt nt!_OBJECT_HEADER 0xffffc001`cca13bd0-0x30
   +0x000 PointerCount     : 0n65537
   +0x008 HandleCount      : 0n2
   +0x008 NextToFree       : 0x00000000`00000002 Void
   +0x010 Lock             : _EX_PUSH_LOCK
   +0x018 TypeIndex        : 0x23 '#'
   +0x019 TraceFlags       : 0 ''
   +0x019 DbgRefTrace      : 0y0
   +0x019 DbgTracePermanent : 0y0
   +0x01a InfoMask         : 0x2 ''
   +0x01b Flags            : 0x10 ''
   +0x01b NewObject        : 0y0
   +0x01b KernelObject     : 0y0
   +0x01b KernelOnlyAccess : 0y0
   +0x01b ExclusiveObject  : 0y0
   +0x01b PermanentObject  : 0y1
   +0x01b DefaultSecurityQuota : 0y0
   +0x01b SingleHandleEntry : 0y0
   +0x01b DeletedInline    : 0y0
   +0x01c Spare            : 0x5000000
   +0x020 ObjectCreateInfo : 0x00000000`00000001 _OBJECT_CREATE_INFORMATION
   +0x020 QuotaBlockCharged : 0x00000000`00000001 Void
   +0x028 SecurityDescriptor : (null) 
   +0x030 Body             : _QUAD

Now the KernelOnlyAccess and SecurityDescriptor fields are zeroed out, we can gain access to the object from usermode as a non-administrative user:


In a real exploitation scenario we’d do these edits via the driver bug rather than WinDbg, mapping the page containing the object header and writing to it directly.

Disabling the flags and clearing the security descriptor allows us to map the PhysicalMemory object into any process and use it to gain further control over the system, without worrying about the weird intricacies of how the driver handles certain addresses. This can be done by scanning for EPROCESS structures within memory and identifying one, then jumping through the linked list to find your target process and a known SYSTEM process (e.g. lsass), then duplicating the Token field across to elevate your process. This part isn’t really that novel or interesting, so I won’t go into it here.

One tip I will mention is that you can make the exploitation process much more reliable if you set your process’ priority as high as possible and spin up threads which perform tight loops to keep the processor busy doing nothing, while you mess with memory. This helps keep kernel threads and other process threads from being scheduled as frequently, making it less likely for you to hit a race condition and bugcheck the machine. I only saw this happen once during the hundred or so debugging sessions I did, so it’s not critical, but still worth keeping in mind.

In closing, I hope the teardown of this bug and my exploitation process has been useful to you. While you certainly can directly exploit the bug, it’s not without potential peril, and it’s often safer to pivot to a more stable approach.

Take-home points for security people and driver developers in general:

  • WHQL does not mean the code is secure, nor does it even mean the code is stable or safe. Microsoft happily signed a number of these drivers with vulnerability-as-a-feature code within them. These bugs were trivially identifiable; this indicates that WHQL is likely an automated process to ensure adherence to little more than a “don’t use undocumented / unsupported functions” requirement.
  • Ensure that appropriate DACLs are placed on objects, particularly the device object, via the use of IoCreateDeviceSecure and the Attributes parameter to Create* calls (e.g. CreateMutex, CreateEvent, CreateSemaphore). A null DACL means anyone can access the object.
  • Drivers should not expose administrative functionality (e.g. UEFI updates) to non-administrative users (e.g. the Everyone group). Ensure that object DACLs reflect this.

Take-home points for ASUS:

  • Implement a security contact mailbox as per the guidance in RFC2142 and ensure that it is checked and managed by someone versed in security. Create a page on your website which lists this contact and outlines your expectations from researchers when reporting security issues.
  • Your Twitter support staff are better at communicating with customers than your support ticket people. You could stand to learn from their more informal and responsive model.
  • Ensure that anything assigned to someone who leaves the company is appropriately reassigned with guidance from that individual. This should help ensure that patches don’t end up delayed by 15 months.
  • Get your code assessed by a 3rd party security contractor before releasing it to customers, and ensure that your developers are given appropriate training on secure development practices. The vulnerable code used was likely copied from examples into a number of your drivers, which indicates that problems may be widespread.

Disclosure timeline:

  • 24th March 2015 – Submitted bug as ticket to ASUS (WTM20150324082900771)
  • 25th March 2015 – Acknowledgement from ASUS
  • 25th March 2015 – Sent reply email with additional information.
  • 27th March 2015 – Reply from “J” from ASUS, says engineer has a fix and is liaising with their own security researcher on the matter.
  • < I forgot about the issue for a long time >
  • 4th September 2015 – Sent email to query status of the issue.
  • 7th September 2015 – Reply from “Anthony” from ASUS, informing me that the agent I’d been interacting with before had left the company, asking for more details on the issue.
  • 7th September 2015 – Sent a response with another full report of the issue.
  • 21st September 2015 – No reply, sent a request for a status update.
  • 22nd September 2015 – Contacted @ASUSUK on Twitter. Had conversation via DM trying to get a status update.
  • 28th September 2015 – Chased up @ASUSUK for an update.
  • 29th September 2015 – Reply informing me that the HQ office in Taipei was closed due to a typhoon.
  • 7th October 2015 – Sent another chase-up message to @ASUSUK.
  • 7th October 2015 – Reply from them; no updates from the office but a promise to let me know when the patch is out.
  • 25th November 2015 – Another chase-up DM to @ASUSUK.
  • 25th November 2015 – HQ were offline, told I’d get a reply the next day. No reply came.
  • 9th May 2016 – Still nothing back from ASUS via email or Twitter, sent another chase-up email and DM informing them of my intent to disclose within 28 days due to the long delays in releasing.
  • 10th May 2016 – Told that Anthony is OOO until Monday.
  • 12th May 2016 – Told that the delays were due to the project leader at HQ leaving, they’re trying to source someone to fix it and push a fix out ASAP.
  • 12th May 2016 – Sent reply asking to be kept in the loop. ASUS replies saying they’ll keep me informed.
  • 12th June 2016 – Disclosed.

Vulnerable file details:

  • MD5: 1392B92179B07B672720763D9B1028A5
  • SHA1: 8B6AA5B2BFF44766EF7AFBE095966A71BC4183FA
  • Signing certificate serial number: 12 d5 c9 e2 94 9d 48 ab ac cd 35 14 f0 fb 22 ad

W^X policy violation affecting all Windows drivers compiled in Visual Studio 2013 and previous

Back in June, I was doing some analysis on a Windows driver and discovered that the INIT section had the read, write, and executable characteristics flags set. Windows executables (drivers included) use these flags to tell the kernel what memory protection should be applied to a section’s pages once the contents are mapped into memory. With these flags set, the memory pages become both writable and executable, which violates the W^X policy, the widely accepted security practice that memory should never be writable and executable at the same time. This is a security issue because it can give an attacker a place to write arbitrary code when staging an exploit, similar to how pre-NX exploits used the stack as a place to execute shellcode.

While investigating these section flags in the driver, I also noticed a slightly unusual flag was set: the DISCARDABLE flag. Marking a section as discardable in user-mode does nothing; the flag is meaningless. In kernel-mode, however, the flag causes the section’s pages to be unloaded after initialisation completes. There’s not a lot of documentation around this behaviour, but the best resource I discovered was an article on Raymond Chen’s “The Old New Thing” blog, which links off to some other pages that describe the usage and behaviour in various detail. I’d like to thank Hans Passant for giving me some pointers here, too.

The short version of the story is that the INIT section contains the DriverEntry code (think of this like the ‘main()’ function of a driver), and it is marked as discardable because it isn’t used after the DriverEntry function returns. From gathering together scraps of information on this behaviour, it seems that the compiler does this because the memory that backs the DriverEntry function must be pageable (though I’m not sure why), but any driver code which may run at DISPATCH_LEVEL or above must not try to access any memory pages that are pageable, because there’s no guarantee that the OS can service the memory access operation. This is further evidenced by the fact that the CODE section of drivers is always flagged with the NOT_PAGED characteristic, whereas INIT is not. By discarding the INIT section, there can be no attempt to execute this pageable memory outside of the initialisation phase. My understanding of this is incomplete, so if anyone has any input on this, please let me know.

The DISCARDABLE behaviour means that the window of exploitation for targeting the memory pages in the INIT section is much smaller – a vulnerability must be triggered during the initialisation phase of a driver (before the section is discarded), and that driver’s INIT section location must be known. This certainly isn’t a vulnerability on its own (you need at least a write-what-where bug to leverage this) but it is also certainly bad practice.

Here’s where things get fun: in order to compare the driver I was analysing to a “known good” sample, I looked into some other drivers I had on my system. Every single driver I investigated, including ones that are core parts of the operating system (e.g. win32k.sys), had the same protection flags. At this point I was a little stumped – perhaps I got something wrong, and the writable flag is needed for some reason? In order to check this, I manually cleared the writable flag on a test driver, and loaded it. It worked just fine, as did several other test samples, from which I can surmise that it is superfluous. I also deduced that this must be a compiler (or linker) issue, since both Microsoft drivers and 3rd party drivers had the same issue. I tried drivers compiled with VS2005, VS2010, and VS2013, and all seemed to be affected, meaning that pretty much every driver on Windows Vista to Windows 8.1 is guaranteed to suffer from this behaviour, and Windows XP drivers probably do too.
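Checking a driver for this yourself only requires looking at each section header's Characteristics field; the relevant IMAGE_SCN_* bit values below are from the PE specification, and the helper name is mine:

```c
#include <stdint.h>

/* IMAGE_SCN_* characteristics bits, per the PE/COFF specification. */
#define SCN_MEM_DISCARDABLE 0x02000000u
#define SCN_MEM_EXECUTE     0x20000000u
#define SCN_MEM_READ        0x40000000u
#define SCN_MEM_WRITE       0x80000000u

/* Returns 1 if a section's characteristics violate W^X, i.e. the
 * section is marked both writable and executable. */
static int violates_wx(uint32_t characteristics)
{
    return (characteristics & SCN_MEM_WRITE) &&
           (characteristics & SCN_MEM_EXECUTE);
}
```

For example, an affected INIT section carries read, write, execute, and discardable bits (0xE2000000), whereas a fixed one drops the write bit (0x62000000).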

INIT section of ATAPI Driver from Windows 8.1

While the target distribution appears to be pretty high, the only practical exploitation path I can think of is as follows:

  1. Application in unprivileged usermode can trigger a driver to be loaded on demand.
  2. Driver leaks a pointer (e.g. via debug output) during initialisation which can be used to determine the address of DriverEntry in memory.
  3. A write-what-where bug in another driver or in the kernel that is otherwise very difficult to exploit (e.g. due to KASLR, DEP, KPP, etc.) is triggered before the DriverEntry completes.
  4. Bug is used to overwrite the end of the DriverEntry function.
  5. Arbitrary code is executed in kernel.

This is a pretty tall order, but there are some things that make it more likely for some of the conditions to arise. First, since any driver can be used (they all have INIT marked as RWX) you only need to find one that you can trigger from unprivileged usermode. Ordinarily the race condition between step 1 and step 4 would be difficult to hit, but if the DriverEntry calls any kind of synchronisation routine (e.g. ZwWaitForSingleObject) then things get a lot easier, especially if the target sync object happens to have a poor or missing DACL, allowing for manipulation from unprivileged usermode code. These things make it a little easier, but it’s still not very likely.

Since I was utterly stumped at this point as to why drivers were being compiled in this way, I decided to contact Microsoft’s security team. Things were all quiet on that front for a long time; aside from an acknowledgement, I only actually heard back from them yesterday (2015/09/03). To be fair to them, though, it was a complicated issue and even I wasn’t very sure as to its impact, and I forgot all about it until their response email.

Their answer was as follows:

After discussing this issue internally, we have decided that no action will be taken at this time. I am unable to allocate more resources to answer your questions more specifically, but we do thank you for your concern and your commitment to computer security.

And I can’t blame them. Exploiting this issue would need a powerful attack vector already, and even then it’d be pretty rare to find the prerequisite conditions. The only thing I’m a bit bummed about is that they couldn’t get anyone to explain how it all works in full.

But the story doesn’t end there! In preparation for writing this blog post, I opened up a couple of Microsoft’s drivers on my Windows 10 box to refresh my memory, and found that they no longer had the execute flag set on the INIT section. It seems that Microsoft probably patched this issue in Visual Studio 2015, or in a hotfix for previous versions, so that it no longer happens. Makes me feel all warm and fuzzy inside. I should note, however, that 3rd party drivers such as Nvidia’s audio and video drivers still have the same issue, which implies that they haven’t been recompiled with a version of Visual Studio that contains the fix. I suspect that many vendor drivers will continue to have this issue.

I asked Microsoft whether it had been fixed in VS2015, but they wouldn’t comment on the matter. Since I don’t have a copy of VS2015 yet, I can’t verify my suspicion that they fixed it.

In closing, I’d like to invite anyone who knows more than me about this to provide more information about how/why the INIT section is used and discarded. If you’ve got a copy of VS2015 and can build a quick Hello World driver to test it out, I’d love to see whether it has the RWX issue on INIT.

Disclosure timeline:

  • 29th June 2015 – Discovered initial driver bug
  • 30th June 2015 – Discovered wider impact (all drivers affected)
  • 2nd July 2015 – Contacted Microsoft with report / query
  • 2nd July 2015 – Microsoft replied with acknowledgement
  • 6th July 2015 – Follow-up email sent to Microsoft
  • [ mostly forgot about this, so I didn’t chase it up ]
  • 3rd September 2015 – Microsoft respond (see above)
  • 3rd September 2015 – Acknowledgement email sent to Microsoft, querying fix status
  • 4th September 2015 – Microsoft respond, will not comment on fix status
  • 4th September 2015 – Disclosed

The Router Review: From nmap to firmware

When I moved into my flat, I found that the previous tenant had left behind his Sky Broadband router. Awesome – a new toy to break! Sadly I got bogged down with silly things like moving house and going to work, so I didn’t get a chance to play with it. Until now, that is.

This isn’t the first embedded device I’ve played with. Over the years I’ve desoldered EEPROMs from routers, done unspeakable things to photocopiers, and even overvolted an industrial UPS unit via SNMP. The router I shall be discussing in this post, however, was one of the easier and more generic bits of kit I’ve played with.

Now, a little about the device. The model is DG934, and the full part number is 272-10452-01. It’s an ADSL router supplied by Sky (also known as BSkyB) as part of their old broadband package, but it’s actually manufactured by Netgear. It’s got four ethernet ports, an ADSL (phone) port, and takes a 12V power supply. Internally, it runs on the Atheros chipset. Unfortunately, this being a UK-only device, there’s no FCC ID – if there had been, I could’ve looked it up on the FCC OET database and found all sorts of internal photos and test data, which is often valuable when looking at the hardware aspects.

My first job was to power it on and get into the config panel. Since the previous tenant clearly wasn’t security conscious, he’d kindly left the device in its default configuration and I was able to log into the configuration interface using the default admin / sky credentials. I exported the config file to my machine, and took a look. In this case it’s plaintext, so there’s nothing to break here, but it’s not exactly good practice – it includes the passwords for WiFi and the configuration interface.

I ran nmap against the device and got the following results:

80/tcp    open  http    BSkyB DG934G http config
5000/tcp  open  sip     BSkyB/1.0 UPnP/1.0 miniupnpd/1.0 (Status: 501 Not Implemented)
8080/tcp  open  http    BSkyB DG934G http config
32764/tcp open  unknown

Interestingly, the configuration site was available on both 80 and 8080. This seems to be the norm for many routers, but I have no idea why. UPnP on port 5000 is always a fun one to spot, and we’ll take a look at this shortly. Finally, there’s an unknown protocol running on port 32764.

For messing with UPnP, I have the UPnP Developer Tools for Windows. They’re mainly written in C# and are open source, so you can always port to Mono if you want. I used Device Spy to get the following info:

  • It’s a BSkyB DG934 Router.
  • The firmware date is 2007-08-27.
  • You can pull out stats such as total bytes sent/received, total packets sent/received, and uptime in seconds.
  • Port mapping functions are available.
  • SetEnabledForInternet isn’t present – shame, really, since it leads to a nice DoS condition.

Sadly there’s not much you can play with here.

Next, we’ll take a look at that weird unknown protocol on port 32764. When connecting to it, the string “MMcS” is returned, along with two binary IP address representations. I tried playing around with this, but honestly I have no idea what it’s for. Google returned a bunch of people asking what it was, and nobody with any real answers. Potentially it’s for Multimedia Class Schedule Server, but that’s speculation at best. Again, no luck at fun stuff here.

Finally, let’s dig into the firmware. Instead of taking the device apart, desoldering the firmware EEPROM, and interfacing to it with a BusPirate to rip the data off, I decided to go the easy route and download the openly available firmware from Netgear. The file provided is a flat binary, with some interesting data inside it. It’s partitioned into various sections, with conveniently obvious data offsets (e.g. 0x10000). In order to properly dissect the file, I used binwalk. In BackTrack 5 it’s located in /pentest/reverse-engineering/binwalk/ and requires you to manually set the magic file via the -m switch.

root@bt:~# binwalk -m /pentest/reverse-engineering/binwalk/magic.binwalk ~/dg834gt_1_02_09.img
1248 0x4E0 CFE boot loader
1288 0x508 CFE boot loader
4177 0x1051 LZMA compressed data, properties: 0xA4, dictionary size: 285474816 bytes, uncompressed size: 256 bytes
7951 0x1F0F LZMA compressed data, properties: 0xC2, dictionary size: 556793856 bytes, uncompressed size: 67108881 bytes
8087 0x1F97 LZMA compressed data, properties: 0x82, dictionary size: 556793856 bytes, uncompressed size: 67108881 bytes
8227 0x2023 LZMA compressed data, properties: 0xC2, dictionary size: 556793856 bytes, uncompressed size: 67108881 bytes
8371 0x20B3 LZMA compressed data, properties: 0x82, dictionary size: 556793856 bytes, uncompressed size: 67108881 bytes
10563 0x2943 LZMA compressed data, properties: 0xDF, dictionary size: 555220992 bytes, uncompressed size: 167272448 bytes
65792 0x10100 CramFS filesystem, big endian size 2879488 version #2 sorted_dirs CRC 0x51df60ff, edition 0, 1975 blocks, 938 files
1016865 0xF8421 ARJ archive data, v193, backup, original name: \230\346+\210\365 ... [snip]

This gives us a pretty good idea of what we’re dealing with. First, there’s a Common Firmware Environment (CFE) bootloader, which is Broadcom’s alternative to U-Boot. There’s some irony here in that Broadcom and Atheros are competitors, yet CFE is being used on an Atheros chipset device. Anyway, there’s a bunch of LZMA junk after that which looks like various bits of firmware and a Linux kernel image. The bit we’re really interested in is the CramFS data. As a side note here, it looks like binwalk was a bit overzealous in identifying an ARJ archive at the end (hence the corrupted original name) so we can assume that the CramFS block takes up the remainder of the file.

In order to extract the filesystem, we can use good old dd. The following should suffice:

dd bs=256 skip=257 count=20000 if=dg834gt_1_02_09.img of=firmware.cramfs
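If dd's block arithmetic feels fiddly, the same carve can be sketched in Python (my own alternative, not part of the original workflow; the offset comes from the binwalk output above):

```python
CRAMFS_OFFSET = 0x10100  # 65792, per binwalk: 256-byte blocks * 257 skipped

def carve(src_path: str, dst_path: str, offset: int = CRAMFS_OFFSET) -> int:
    """Copy everything from `offset` to end-of-file; returns bytes written."""
    with open(src_path, "rb") as src:
        src.seek(offset)
        data = src.read()
    with open(dst_path, "wb") as dst:
        dst.write(data)
    return len(data)
```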

Note that 257 * 256 = 65792, which is 0x10100, i.e. the offset of the data we want to pull out. I stuck a really big count in there because we’re reading to the end of the file. Now, you’re going to want to grab some tools to work with CramFS:

sudo apt-get install cramfsprogs fusecram

This provides you with the modules needed to mount CramFS volumes, as well as some tools to help you along the way. Now we can mount the filesystem:

root@bt:~# sudo mount -t cramfs -o loop ~/firmware.cramfs /media/firmware/
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

Hmmm, that’s odd. Let’s see what dmesg has to say about this…

root@bt:~# dmesg | tail -n 1
[ 4394.319907] cramfs: wrong endianess

Aha! A fun fact about CramFS is that file systems have endianness as per the architecture they were created on. Since the router is big-endian and my box is little-endian, I need to convert it. Thankfully, cramfsprogs includes a tool called cramfsswap that flips the endianness of a provided image. Side note: if you get “wrong magic” as an error, you didn’t extract the right blocks of data, or the file system isn’t CramFS.
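Both failure modes can be checked up front by peeking at the superblock magic, which for CramFS is 0x28cd3d45. A small sketch (my own, not from the original post):

```python
import struct

CRAMFS_MAGIC = 0x28CD3D45  # from the CramFS superblock definition

def cramfs_endianness(header: bytes):
    """Return 'little', 'big', or None ("wrong magic") for a CramFS image header."""
    (le,) = struct.unpack_from("<I", header)
    (be,) = struct.unpack_from(">I", header)
    if le == CRAMFS_MAGIC:
        return "little"
    if be == CRAMFS_MAGIC:
        return "big"
    return None  # not CramFS, or the wrong blocks were extracted
```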

root@bt:~# cramfsswap ./firmware.cramfs ./firmware-conv.cramfs
Filesystem is big endian, will be converted to little endian.
Filesystem contains 937 files.
CRC: 0xe86ad3b0
root@bt:~# sudo mount -t cramfs -o loop ~/firmware-conv.cramfs /media/firmware/

Excellent! Now to dig around inside the files.

root@bt:~# ls -l /media/firmware/
total 20
drwxr-xr-x 1 root root 452 1970-01-01 01:00 bin
drwxr-xr-x 1 root root 0 1970-01-01 01:00 dev
lrwxrwxrwx 1 root root 8 1970-01-01 01:00 etc -> /tmp/etc
drwxr-xr-x 1 root root 784 1970-01-01 01:00 lib
drwxr-xr-x 1 root root 0 1970-01-01 01:00 proc
drwxr-xr-x 1 root root 176 1970-01-01 01:00 sbin
drwxr-xr-x 1 root root 0 1970-01-01 01:00 tmp
drwxr-xr-x 1 root root 116 1970-01-01 01:00 usr
lrwxrwxrwx 1 root root 8 1970-01-01 01:00 var -> /tmp/var
lrwxrwxrwx 1 root root 8 1970-01-01 01:00 www -> /tmp/www
drwxr-xr-x 1 root root 3900 1970-01-01 01:00 www.deu
drwxr-xr-x 1 root root 3908 1970-01-01 01:00 www.eng
drwxr-xr-x 1 root root 3824 1970-01-01 01:00 www.fre

There’s a full listing on pastebin, if you’re interested. It’s worth noting that if you can mount the filesystem, can see the directories and files inside it, but can’t read the file data, then you probably didn’t copy the entire filesystem and it’s missing chunks of data. Anyway, this looks pretty typical. We can see a very basic file system that comprises all the runtime parts of the device, excluding the kernel and any ramfs stuff. Here’s what I found:

  • The three www prefixed directories contain the template files used for the administration panel.
  • /bin contains busybox binaries.
  • /lib contains the kinds of libraries you’d expect on a router, e.g. libcrypt, libupnp, libpppoe, etc.
  • /lib/modules contains various kernel modules for the router, such as the push button driver and Atheros HAL.
  • /sbin contains various binaries such as ifconfig, insmod, lsmod, etc.
  • /usr/bin contains four binaries, including one called test.
  • /usr/etc contains the default config files and various scripts.
  • /usr/sbin contains various binaries for daemons (including reaim and iptables), as well as some for performing maintenance operations, e.g. WiFi control operations.
  • /usr/upnp contains the definitions for the UPnP endpoint.

The most interesting directory was /usr/etc, which contains both passwd and what looks like an SVN info dump. The passwd file shows only root and nobody, which leads me to believe that all services run as root. The SVN dump has all sorts of interesting info in it:

Path: .
URL: file:///svn/Platform/DG834_PN/Source
Repository Root: file:///svn/Platform/DG834_PN
Repository UUID: 25bc2c04-8815-0410-823d-fa30465ac5aa
Revision: 93
Node Kind: directory
Schedule: normal
Last Changed Author: ethan
Last Changed Rev: 93
Last Changed Date: 2007-02-16 16:23:45 +0800 (Fri, 16 Feb 2007)

Boot Loader version: CFE version 1.0.37-5.11 for BCM96348

So we now know that Netgear use(d) SVN for their source control, that “Ethan” is the guy developing the firmware for the DG834, and that we’re running CFE 1.0.37-5.11 on the BCM96348 SoC IC. Hi, Ethan!

I’m going to leave this here for now, primarily because it’s almost 4am, but also because the point of this blog post was to show just how much information you can dig out of a device without even touching it with a screwdriver, or opening a manual. Keep in mind that the techniques I’ve shown here should apply to many routers and other small embedded devices. At some point in the future I’ll get around to digging into some of their custom binaries, as well as their HTTPD. If I find anything interesting, I’ll be sure to post an update. Also, let me know if you’ve got any spare routers you want me to dig into when I get a spare few hours – I’m always happy to take donations!

Preventing executable analysis – Part 1, Static Analysis

In this series of posts, I’m going to discuss executable analysis, the methods that are used and mechanisms to prevent them. There are three types of analysis that can be performed on executables:

  • Static – Analysis of the sample file on disk.
  • Emulated – Branch and stack analysis of the sample through an emulator.
  • Live – Analysis of the executing sample on a VM, usually using hooks.

I’m going to look at each type in detail, giving examples of techniques used in each and ways to make analysis difficult.

In this first post, I’ll look at static analysis. This type of analysis involves parsing the executable and disassembling the code and data contained within it, without ever running it. The benefit of this is that it’s safe, since it’s impossible for the code to cause any damage. The downside is that static analysis can’t really make assumptions about high-level behaviours.

Entry Point Check
The first method used to perform static analysis is simple header checks. If the entry point (EP) of the executable resides outside of a section marked as code, it is safe to assume that the application isn’t “normal”. To make this check harder to trigger, the executable should keep its BaseOfCode header pointing at the section the EP resides in, even when packed.

Executables are often packed – i.e. their code is encrypted in some way. We can analyse this using entropy calculations on each section, to discover how “random” the data looks. It’s often tempting for authors to try to create a good cipher for encrypting packed sections, but this often leads to a few problems. Firstly, entropy calculations will very quickly spot sections that look too random to be normal code or data. Secondly, there are many applications out there that will look for sequences of data and instructions that match known cryptographic algorithms; it’s relatively easy to spot magic numbers and S-box arrays.

In order to prevent this, a packing algorithm should be used that preserves the statistical signature of the original data. A good way to do this is to flip only the lowest two bits of each byte, or to simply shuffle the data rather than encrypting it with xor or a similar operation. By definition, a sample of data will have the same Shannon entropy regardless of how much you shuffle it. The usual way that analysis tools work is to split each section into blocks and compute an entropy graph across the file. By using a cipher that only shuffles bytes that are close, you can achieve an almost identical entropy graph:

Entropy Graph
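The block-wise entropy scan described above can be sketched as follows (my own illustration of the technique, not a specific tool's implementation):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (constant data) up to 8.0 (uniform)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def entropy_graph(section: bytes, block_size: int = 256) -> list:
    """Entropy of each block across a section, as plotted by analysis tools."""
    return [shannon_entropy(section[i:i + block_size])
            for i in range(0, len(section), block_size)]
```

Since a shuffle within a block only permutes bytes, that block's histogram, and therefore its entropy, is left untouched.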

Since instructions are multi-byte, shuffling completely destroys the code, making it impossible to read. It’s relatively simple to perform half-decent shuffling, given a reasonably large key:

for each byte k in key
	tmp = data[0]
	data[0] = data[k]
	data[k] = tmp

Loop the above over a sequence of data and you’ll get reasonable shuffling within each 256-byte block. OllyDbg doesn’t recognise this as packed, since its detection works on counts of particularly common bytes in code sections.
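The pseudocode above, made concrete in Python (a sketch under my own assumptions; indices wrap within the block, and decryption simply replays the key backwards):

```python
def shuffle(block: bytearray, key: bytes) -> None:
    """Swap block[0] with block[k] for each key byte k, in place."""
    for k in key:
        k %= len(block)
        block[0], block[k] = block[k], block[0]

def unshuffle(block: bytearray, key: bytes) -> None:
    """Each swap is its own inverse, so replaying the key in reverse undoes it."""
    for k in reversed(key):
        k %= len(block)
        block[0], block[k] = block[k], block[0]
```

Applied to each 256-byte block in turn, this scrambles the code while leaving every block's byte histogram, and therefore the entropy graph, identical.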

Jump Tables
Static analysis tools such as IDA Pro work by mapping sequences of jumps together. Some enhance this by performing heuristic analysis of jumps, for example turning jmp [file.exe+0x420c0] into an assumed jump based on the data at file offset 0x420c0. We can try to defeat this type of analysis by using jump tables. These are pointer tables generated at runtime, which are encrypted or obfuscated on disk. Jumps in the code are done by pointing to offsets in the jump table. Often this is further obfuscated by using jumps to register pointers, or stack jumps:

; ecx = function ID
mov eax, [ptrToTable+ecx*4] ; load the encrypted pointer into eax
xor eax, [ptrToKey+ecx*4]   ; xor with the key
push eax                    ; push address to stack
ret                         ; return (jump) to it, obfuscates the jump

Obviously there’s more we can do here – better encryption, values generated at runtime, more obfuscation, etc.
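The same idea, modelled in Python for clarity (functions stand in for code addresses, and the key generation here is purely illustrative):

```python
import os

def build_table(funcs):
    """Encrypt each function index with a random per-entry XOR key."""
    keys = [int.from_bytes(os.urandom(4), "little") for _ in funcs]
    table = [i ^ k for i, k in enumerate(keys)]
    return table, keys

def dispatch(funcs, table, keys, func_id, *args):
    """Decrypt the entry at call time, like the xor/push/ret sequence above."""
    index = table[func_id] ^ keys[func_id]
    return funcs[index](*args)
```

In the real thing, the table holds encrypted code pointers rather than indices, and the key material is derived at runtime instead of sitting next to the table.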

Control Flow Obfuscation
Some analysis tools focus on artifacts of compilers – i.e. the signatures of how common high level language constructs translate into assembly language. For example, some loops may be compiled down to a dec/jg loop, whereas others might use rep movs. It all depends on the high level construct in use. Altering these constructs, or using them in situations where they’re unusual, can confuse these heuristics. One example for short loops is using a switch:

for(int i=0; i<5; i++) {
	if(i%2==0) printf("%i is even\n", i);
	else printf("%i is odd\n", i);
	if(i==4) printf("done");
}

We can turn this into a switch statement that flattens out the flow, instead of being an obvious loop:

for(int i=0; i<5; i++) {
	switch(i) {
	case 0: printf("%i is even\n", i); break;
	case 1: printf("%i is odd\n", i); break;
	case 2: printf("%i is even\n", i); break;
	case 3: printf("%i is odd\n", i); break;
	case 4: printf("%i is even\n", i); printf("done"); break;
	}
}

Since this uses a switch, we can use a jump table that is easy to obfuscate.
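As a quick sanity check (my own, in Python rather than C), both shapes produce identical output:

```python
def looped():
    """The original loop with an if/else body."""
    out = []
    for i in range(5):
        out.append("%i is even" % i if i % 2 == 0 else "%i is odd" % i)
        if i == 4:
            out.append("done")
    return out

def flattened():
    """The switch-style version: handlers indexed like cases, case 4 also emits "done"."""
    cases = {
        0: lambda i: ["%i is even" % i],
        1: lambda i: ["%i is odd" % i],
        2: lambda i: ["%i is even" % i],
        3: lambda i: ["%i is odd" % i],
        4: lambda i: ["%i is even" % i, "done"],
    }
    out = []
    for i in range(5):
        out.extend(cases[i](i))
    return out
```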

There are many ways to break static analysis, some simple, some more complex. Employed together, these techniques make it very difficult for an analyst to decode and understand the code, and can also prevent automated tools from performing in-depth analysis. Understanding these methods helps both in implementing them and in circumventing them. In the next part, I’ll be looking at virtualised and emulated analysis, which uses virtual hardware to analyse and fingerprint software without executing the real application code live on a hardware processor.

Further Reading