Adventures in .NET references

Weak referencing is a really useful feature when you don’t mind if an object gets collected, but might still want to access it again in future. For those of you who aren’t familiar with the concept, I’ll describe it briefly. If you already know how it works then you can skip ahead.

.NET is a garbage-collected runtime, meaning that objects you create on the heap (e.g. with new) are automatically cleaned up by the garbage collector (GC) when they are no longer being used. “Being used” here means reachable: an object stays alive for as long as at least one strong reference to it can be reached from a root, such as a local variable or a static field. Here’s an example:

// we create an object instance and assign it to the variable 'foo'
// the instance now has one reference
var foo = new object();

// now we assign the value of foo (the instance) to bar
// the instance now has two references
var bar = foo;

// now we set foo to null.
// the instance now has one reference (bar)
foo = null;

When a variable goes out of scope it no longer roots the instance. Once no strong references to an object remain, the GC is free to collect it. The GC does this in passes and uses a generation-based model to periodically clean up unreachable objects, which means an object may survive on the heap for some time after the last reference to it disappears. Incidentally, this is why SecureString exists – if you put sensitive data into a string object there is no guarantee when, or even if, that string will be erased from memory. Strings are also immutable in .NET, so you can’t manually overwrite them.

What I haven’t mentioned so far is that there are two types of reference – strong and weak. Everything above refers to strong references. A weak reference is a special type of reference that does not prevent the GC from collecting the object, but still allows your code to access it (and take a new strong reference to it) if the GC has not yet collected it. This is useful for caching: objects that nothing else holds onto are reclaimed automatically as the GC runs under memory pressure, but remain available for cheap re-use in the meantime.
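To make the concept concrete, here’s roughly how WeakReference behaves on its own (assuming the usual System and System.Text usings; the StringBuilder is just a stand-in for something worth caching):

// hold only a weak reference to a new object
var weak = new WeakReference(new StringBuilder("cached data"));

// while the object is still alive we can recover a strong reference from it
var recovered = weak.Target as StringBuilder;
if (recovered != null)
    Console.WriteLine(recovered);   // still alive, safe to use

// once no strong references remain and a collection occurs,
// Target returns null and IsAlive returns false
recovered = null;
GC.Collect();
Console.WriteLine(weak.IsAlive);    // typically false now (debug builds can keep it alive longer, as we'll see)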

Mixing weak references with lazy initialisation

In some cases you may not know whether a code path is going to require access to a particular object at all, or whether it will be accessed just once or many times. If that object takes up quite a bit of memory on the heap it may be prohibitive to keep it around. You could opt to handle this manually with a caching scheme, but a mixture of lazy initialisation and weak referencing handles the situation in a way that avoids the allocation entirely when the object isn’t used, and automatically manages caching of that object based on memory pressure and age via the GC.

I ran into this situation when I wanted to parse the PE headers and various structures of a lot of executable files, then run a battery of tests against each. Most tests only access a few different sections of the executable, and some tests do not run at all against some files (e.g. some tests only run on 64-bit binaries). The parsed data can take up quite a bit of memory – particularly import tables and disassembled code – but it’s not expensive to regenerate the data, so it makes sense to only initialise it when we need it, and get rid of it if we’re running short of memory. For the latter we can use weak referencing, but for the former we want lazy initialisation. Luckily both of these features are available in the .NET framework and are thread-safe by default.
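As a quick refresher, Lazy<T> on its own defers construction until the first access, and that first access is thread-safe by default. Something like this (the file path is just a placeholder):

// nothing is constructed yet; the factory lambda only runs on first access to .Value
var lazyData = new Lazy<byte[]>(() => File.ReadAllBytes(@"C:\some\large\input.bin"));

Console.WriteLine(lazyData.IsValueCreated);  // False
var data = lazyData.Value;                   // factory runs here, exactly once
Console.WriteLine(lazyData.IsValueCreated);  // True

The expensive work only happens if and when the value is actually needed, which is exactly the behaviour we want to combine with weak referencing.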

For convenience I created a helper class that combines WeakReference with Lazy<T> into WeakLazy<T>:

public class WeakLazy<T> where T : class
{
    readonly Func<T> _constructor;
    readonly Lazy<WeakReference> _lazyRef;

    public WeakLazy(Func<T> constructor)
    {
        _constructor = constructor;
        _lazyRef = new Lazy<WeakReference>(() => new WeakReference(_constructor()));
    }

    public bool IsAlive
    {
        get
        {
            if (!_lazyRef.IsValueCreated)
                return false;
            return _lazyRef.Value.IsAlive;
        }
    }

    public T Value
    {
        get
        {
            // grab a strong reference to the target, if it still exists
            T obj = (T)_lazyRef.Value.Target;

            // if the object is still alive, return it
            // (checking the local copy avoids a race against the GC)
            if (obj != null)
                return obj;

            // the object was collected, so we need to create it again
            obj = _constructor();
            _lazyRef.Value.Target = obj;
            return obj;
        }
    }
}

This is fairly simple – the first time we access the Value property it initialises the object (this is the Lazy<T> functionality) and wraps it inside a WeakReference, so the wrapper itself never holds a strong reference to the object.

Here’s an example of how you might use it:

var peHeader = new WeakLazy<PEHeader>(() => new PEHeader(_file));

...

if (Is64bit)
{
    if (peHeader.Value.ImageBase < 0x100000000UL)
        Report.AddIssue(IssueMessages.MissingHiASLR, ...);
}

...

if (SomeOtherCondition)
{
    // some other access here
    if (peHeader.Value.??? ... )
        // ...?
}

In the first line we create a WeakLazy<T> wrapper around a PEHeader class, which represents the parsed PE headers (including the optional header) of some input file. At this point there is no PEHeader instance, as its initialisation is lazy.

If the executable is 64-bit we check for HiASLR by validating that the base address is above the 4GB boundary. If the branch is taken we reference peHeader.Value, which triggers lazy instantiation of the PEHeader object via the lambda we passed on the first line.

Later we potentially access peHeader.Value again, at which point there are three possible cases. The first case is that the original branch was not taken (not a 64-bit exe), so the PEHeader gets created for the first time. The second case is that the original branch was taken and the underlying PEHeader object still exists, so we just access it. The third case is that the original branch was taken, but a GC pass occurred between the first and second access and the object was collected in the interim, so it gets recreated.

Unit testing WeakLazy<T>

The above all look correct, so let’s cover things off with some unit tests! The first couple of tests verify that lazy instantiation works as intended:

class TestObject
{
    public TestObject()
    {
        Bar = 123;
    }
    
    public void Foo() { }

    public int Bar { get; set; }
}

[TestMethod]
public void TestInstantiateViaMethod()
{
    var wl = new WeakLazy<TestObject>(() => new TestObject());
    Assert.IsFalse(wl.IsAlive);
    wl.Value.Foo();
    Assert.IsTrue(wl.IsAlive);
}

[TestMethod]
public void TestInstantiateViaProperty()
{
    var wl = new WeakLazy<TestObject>(() => new TestObject());
    Assert.IsFalse(wl.IsAlive);
    Assert.AreEqual(123, wl.Value.Bar);
    Assert.IsTrue(wl.IsAlive);
}

These tests pass without problems. Next we want to test that weak referencing works:

[TestMethod]
public void TestWeakReferenceFinalize()
{
    var wl = new WeakLazy<TestObject>(() => new TestObject());
    wl.Value.Foo();
    Assert.IsTrue(wl.IsAlive);

    const int BLOWN = 1024;
    int fuse = 0;
    while (wl.IsAlive)
    {
        GC.Collect();
        if (++fuse == BLOWN)
            Assert.Fail("GC did not clear object.");
    }
}

This test first instantiates the object, then forces GC collection repeatedly (up to 1024 times) to make sure the object gets finalized. This test fails – the loop repeats until the assertion failure is hit. Can you see why? Here’s a hint: this unit test fails when the program is built as Debug, but not as Release.

Compiler magic or deeper behaviour?

What you might assume is that the compiler captures the result of wl.get_Value() into a local variable, thus “trapping” a reference to the TestObject instance. If you take a look at the compiled IL, you’ll find that this isn’t the case at all – the generated code is essentially the same barring some extra nops and unoptimised stloc/ldloc pairs in the debug code. In fact I spent quite a bit of time getting all confused about what was happening.

My first guess was that a strong reference was being kept in the CLR’s evaluation stack somewhere, but Visual Studio doesn’t allow you to see the evaluation stack in the CLR. I tried digging into this with mdbg but didn’t get much information out of that either. In the end I had to go hardcore and load up WinDbg.

It turns out that WinDbg has pretty solid support for .NET and CLR process internals via the sos extension. The extension is installed alongside the .NET Framework rather than WinDbg itself, and you load it manually with .loadby sos clr, which picks up sos.dll from the same directory as the loaded clr.dll. Once this is done you can start using the CLR debugging features. I found this cheat sheet to be incredibly helpful.

First I manually modified my code to include some pauses – just some Console.ReadKey calls – then verified that my changes did not alter the behaviour I observed previously. After that I used !threads to find the correct managed thread and switch to it. From there I inspected the stack with !clrstack to verify that everything was as I expected, with no weird calls out to magic debugging methods or anything out of place. At this point it made sense to just directly check what the GC was holding onto, using !gchandles:

          Handle  Type                 Object  Size    Data Type
000002d4945615e8  WeakShort  000002d496196da8    24    PolyutilsTests.WeakLazyTests+TestObject

...

              MT    Count    TotalSize  Class Name
00007ff85ddc6c20        1           24  PolyutilsTests.WeakLazyTests+TestObject

From this we can see that only a weak handle to the object exists, so there isn’t anything holding it back. Yet, despite this, the Debug build of this program refuses to finalize the object that we have a weak reference to, whereas the Release build gets rid of it without problems.

Debug vs. Release assemblies

At this point I was convinced that this was a CLR behaviour unique to debug builds of the application, but not one caused by the generated IL. Opening up the Debug and Release binaries in JustDecompile showed a difference in the flags on the DebuggableAttribute applied to the assembly.

From the Debug assembly:

[assembly: Debuggable(DebuggableAttribute.DebuggingModes.Default | DebuggableAttribute.DebuggingModes.DisableOptimizations | DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints | DebuggableAttribute.DebuggingModes.EnableEditAndContinue)]

From the Release assembly:

[assembly: Debuggable(DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints)]

Using the Reflexil plugin, I modified the Debug assembly’s DebuggableAttribute to match the Release assembly, and re-ran the test harness. This time it completed just fine, proving that this is a CLR behaviour directly related to debugging.

But which of these flags causes this difference in behaviour? For that I needed to go through and unset each flag, one by one, until the test passed. I immediately hit paydirt on my first try – removing the Default flag from the assembly made the test pass, even with the other options there. This doesn’t really make much sense to me, as the reference source says:

Default: Instructs the just-in-time (JIT) compiler to use its default behavior, which includes enabling optimizations, disabling Edit and Continue support, and using symbol store sequence points if present. In the .NET Framework version 2.0, JIT tracking information, the Microsoft intermediate language (MSIL) offset to the native-code offset within a method, is always generated.

The only behaviour I can see that is potentially relevant is further up in the same class:

/// <summary>Gets a value that indicates whether the runtime will track information during code generation for the debugger.</summary>
/// <returns>true if the runtime will track information during code generation for the debugger; otherwise, false.</returns>
/// <filterpriority>2</filterpriority>
public bool IsJITTrackingEnabled
{
    get
    {
        return (this.m_debuggingModes & DebuggableAttribute.DebuggingModes.Default) != DebuggableAttribute.DebuggingModes.None;
    }
}

I’m still not sure if JIT tracking is the cause or if it’s something else.

I found an issue on the CoreCLR project where they ran into the same problem as me, although it didn’t really shed much light on the subject other than informing me that the JIT can arbitrarily extend object lifetimes.

Conclusion

Builds with the Default flag set on the DebuggableAttribute for the assembly seem to force the GC to ignore weak handles that are held by the currently executing method. As for why, I’m not sure, but it might be due to JIT tracking being enabled.

Fixing this is easy – just move the object access into its own method and mark it with MethodImplOptions.NoInlining, so that the temporary reference’s lifetime is confined to that method rather than being extended into the calling method:

[TestMethod]
public void TestWeakReferenceFinalize()
{
    var wl = new WeakLazy<TestObject>(() => new TestObject());
    AccessTestObject(wl);
    Assert.IsTrue(wl.IsAlive);

    const int BLOWN = 1024;
    int fuse = 0;
    while (wl.IsAlive)
    {
        GC.Collect();
        if (++fuse == BLOWN)
            Assert.Fail("GC did not clear object.");
    }
}

[MethodImpl(MethodImplOptions.NoInlining)]
private void AccessTestObject(WeakLazy<TestObject> wl)
{
    wl.Value.Foo();
}

This causes the unit test to pass on both Debug and Release builds.

Anti-debug with VirtualAlloc’s write watch

A lesser-known feature of the Windows memory manager is that it can maintain write watches on allocations for debugging and profiling purposes. Passing the MEM_WRITE_WATCH flag to VirtualAlloc “causes the system to track pages that are written to in the allocated region”. The GetWriteWatch and ResetWriteWatch APIs can then be used to query which pages have been written to and to reset the tracking. This can be (ab)used to catch out debuggers and hooks that modify memory outside the expected pattern.

There are four primary ways to exploit this feature.

The first is a simple buffer manipulation check. Allocate a buffer with write watching enabled, write to it once, get the write count, and see if it’s greater than 1.

The second is an API buffer manipulation check. Allocate a buffer with write watching enabled, pass it as a parameter to an API that expects a buffer, but pass invalid values to other parameters. If an API hook doesn’t check parameters properly, or manipulates parameters, it may write to the buffer. Check the number of writes to the buffer after the call, and if it’s nonzero then there’s a hook in place. Any API will do as long as it writes to some memory. A particularly good trick is to use an API where there’s some kind of count value passed as a reference – in the real API the value will likely not be set, thus producing no memory writes, but in a hook there’s a bigger likelihood that they’ll set some placeholder value regardless.

Third, we can use the buffer to store the result of some check we care about, e.g. IsDebuggerPresent. If the write count is one and the value in the buffer is FALSE then we can assume that there’s no debugger attached and nobody tampered with the result of the call (or skipped the call).
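To give a rough idea of that third check, here’s a minimal P/Invoke sketch in C#. This isn’t the exact al-khaser code (that’s native), and the class and method names here are just for illustration:

using System;
using System.Runtime.InteropServices;

static class WriteWatchCheck
{
    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_WRITE_WATCH = 0x200000;
    const uint PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize, uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll")]
    static extern bool IsDebuggerPresent();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint GetWriteWatch(uint dwFlags, IntPtr lpBaseAddress, UIntPtr dwRegionSize,
        [Out] IntPtr[] lpAddresses, ref UIntPtr lpdwCount, out uint lpdwGranularity);

    // Returns true if something looks suspicious: either the stored result says a
    // debugger is attached, or the watched region was not written in the way we expect.
    public static bool LooksSuspicious()
    {
        // one page, write-watched from the moment of allocation
        IntPtr buffer = VirtualAlloc(IntPtr.Zero, new UIntPtr(4096),
            MEM_RESERVE | MEM_COMMIT | MEM_WRITE_WATCH, PAGE_READWRITE);
        if (buffer == IntPtr.Zero)
            return false; // allocation failed; can't conclude anything

        // store the result of the check we care about in the watched page
        Marshal.WriteInt32(buffer, 0, IsDebuggerPresent() ? 1 : 0);

        // ask the memory manager which pages in the region have been written to
        var addresses = new IntPtr[8];
        var count = new UIntPtr((uint)addresses.Length);
        uint granularity;
        if (GetWriteWatch(0, buffer, new UIntPtr(4096), addresses, ref count, out granularity) != 0)
            return false;

        // expect exactly one written page and a stored value of FALSE
        return count.ToUInt64() != 1 || Marshal.ReadInt32(buffer, 0) != 0;
    }
}

In practice you’d mix this in with the other variants described above, calling ResetWriteWatch between checks so each test starts from a clean slate.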

Finally, we can allocate some memory with RWX protection and write watching enabled, copy some anti-debug check there, call ResetWriteWatch to ensure the write counter is zeroed, execute our payload, then check the write count.

Obviously, in all cases the checks themselves can be skipped over, but this isn’t a well-known trick and may be missed by novice reverse engineers.

I’ve contributed these tricks to al-khaser, a tool for testing VMs, debuggers, sandboxes, AV, etc. against many malware-like defences.

Talking about Windows drivers at 44CON 2015’s Community Evening

I’ll be speaking at 44CON this year, at the community evening on Wednesday 9th September. The community evening is free to attend – you just need to register to attend if you don’t have a conference ticket. My talk is currently scheduled at 19:45, and I’m speaking about writing Windows drivers, with the goal of leaving you a bit more informed about how they work, and how to get started.

In addition to my talk, Saumil Shah will be speaking about Stegosploit, and Michael Boman will be running a workshop on anti-analysis techniques used in malware. After the talks, there will be a showing of the 20th anniversary edition of Hackers, which is guaranteed to be fun.

As usual, there will be drinks and good conversation. Hope to see you all there! 🙂

W^X policy violation affecting all Windows drivers compiled in Visual Studio 2013 and previous

Back in June, I was doing some analysis on a Windows driver and discovered that the INIT section had the read, write, and execute characteristics flags set. Windows executables (drivers included) use these flags to tell the kernel what memory protection should be applied to that section’s pages once the contents are mapped into memory. With these flags set, the pages become both writable and executable, violating the W^X principle – that memory should be writable or executable, but never both – which is generally considered good security practice. This matters because writable and executable memory can give an attacker a place to stage arbitrary code during an exploit, similar to how pre-NX exploits used the stack as a place to execute shellcode.

While investigating these section flags in the driver, I also noticed a slightly unusual flag was set: the DISCARDABLE flag. Marking a section as discardable in user-mode does nothing; the flag is meaningless there. In kernel-mode, however, it causes the section’s pages to be unloaded after initialisation completes. There isn’t a lot of documentation around this behaviour, but the best resource I discovered was an article on Raymond Chen’s “The Old New Thing” blog, which links off to some other pages that describe the usage and behaviour in varying detail. I’d like to thank Hans Passant for giving me some pointers here, too.

The short version of the story is that the INIT section contains the DriverEntry code (think of this as the ‘main()’ function of a driver), and it is marked as discardable because it isn’t used after the DriverEntry function returns. From gathering together scraps of information on this behaviour, it seems that the compiler does this because the memory that backs the DriverEntry function must be pageable (though I’m not sure why), but any driver code which may run at IRQL DISPATCH_LEVEL or above must not access pageable memory, because a page fault cannot be serviced at that IRQL. This is further evidenced by the fact that the code section of drivers is always flagged with the NOT_PAGED characteristic, whereas INIT is not. By discarding the INIT section, there can be no attempt to execute this pageable memory outside of the initialisation phase. My understanding of this is incomplete, so if anyone has any input on it, please let me know.

The DISCARDABLE behaviour means that the window of exploitation for targeting the memory pages in the INIT section is much smaller – a vulnerability must be triggered during the initialisation phase of a driver (before the section is discarded), and that driver’s INIT section location must be known. This certainly isn’t a vulnerability on its own (you need at least a write-what-where bug to leverage this) but it is also certainly bad practice.

Here’s where things get fun: in order to compare the driver I was analysing to a “known good” sample, I looked into some other drivers I had on my system. Every single driver I investigated, including ones that are core parts of the operating system (e.g. win32k.sys), had the same protection flags. At this point I was a little stumped – perhaps I had got something wrong, and the writable flag was needed for some reason? To check this, I manually cleared the writable flag on a test driver and loaded it. It worked just fine, as did several other test samples, from which I surmise that the flag is superfluous. I also deduced that this must be a compiler (or linker) issue, since both Microsoft drivers and 3rd party drivers had the same problem. I tried drivers compiled with VS2005, VS2010, and VS2013, and all appeared to be affected, meaning that pretty much every driver from Windows Vista through Windows 8.1 is likely to exhibit this behaviour, and Windows XP drivers probably do too.

[Screenshot: INIT section of the ATAPI driver from Windows 8.1]
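If you want to check drivers on your own system, a rough C# helper along these lines will walk a PE file’s section table and print each section’s characteristics, so a writable and executable INIT section stands out (the helper name and output format are just for illustration):

using System;
using System.IO;
using System.Text;

static class SectionFlagChecker
{
    const uint IMAGE_SCN_MEM_DISCARDABLE = 0x02000000;
    const uint IMAGE_SCN_MEM_EXECUTE     = 0x20000000;
    const uint IMAGE_SCN_MEM_READ        = 0x40000000;
    const uint IMAGE_SCN_MEM_WRITE       = 0x80000000;

    public static void DumpSections(string path)
    {
        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            // e_lfanew at offset 0x3C points to the PE signature
            br.BaseStream.Position = 0x3C;
            br.BaseStream.Position = br.ReadUInt32();

            if (br.ReadUInt32() != 0x00004550) // "PE\0\0"
                throw new InvalidDataException("Not a PE file.");

            br.ReadUInt16();                            // Machine
            ushort numberOfSections = br.ReadUInt16();  // NumberOfSections
            br.BaseStream.Position += 12;               // TimeDateStamp, PointerToSymbolTable, NumberOfSymbols
            ushort sizeOfOptionalHeader = br.ReadUInt16();
            br.ReadUInt16();                            // Characteristics

            // section headers follow the optional header; each one is 40 bytes
            br.BaseStream.Position += sizeOfOptionalHeader;
            for (int i = 0; i < numberOfSections; i++)
            {
                string name = Encoding.ASCII.GetString(br.ReadBytes(8)).TrimEnd('\0');
                br.BaseStream.Position += 28;           // skip to Characteristics (offset 36 in the section header)
                uint characteristics = br.ReadUInt32();

                Console.WriteLine("{0,-8} R:{1} W:{2} X:{3} Discardable:{4}",
                    name,
                    (characteristics & IMAGE_SCN_MEM_READ) != 0,
                    (characteristics & IMAGE_SCN_MEM_WRITE) != 0,
                    (characteristics & IMAGE_SCN_MEM_EXECUTE) != 0,
                    (characteristics & IMAGE_SCN_MEM_DISCARDABLE) != 0);
            }
        }
    }
}

On a driver built with an affected toolchain this should show INIT as readable, writable, executable and discardable.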

While the number of affected drivers appears to be very high, the only practical exploitation path I can think of is as follows:

  1. Application in unprivileged usermode can trigger a driver to be loaded on demand.
  2. Driver leaks a pointer (e.g. via debug output) during initialisation which can be used to determine the address of DriverEntry in memory.
  3. A write-what-where bug in another driver or in the kernel that is otherwise very difficult to exploit (e.g. due to KASLR, DEP, KPP, etc.) is triggered before the DriverEntry completes.
  4. Bug is used to overwrite the end of the DriverEntry function.
  5. Arbitrary code is executed in the kernel.

This is a pretty tall order, but there are some things that make it more likely for some of the conditions to arise. First, since any driver can be used (they all have INIT marked as RWX) you only need to find one that you can trigger from unprivileged usermode. Ordinarily the race condition between step 1 and step 4 would be difficult to hit, but if the DriverEntry calls any kind of synchronisation routine (e.g. ZwWaitForSingleObject) then things get a lot easier, especially if the target sync object happens to have a poor or missing DACL, allowing for manipulation from unprivileged usermode code. These things make it a little easier, but it’s still not very likely.

Since I was utterly stumped at this point as to why drivers were being compiled in this way, I decided to contact Microsoft’s security team. Things were all quiet on that front for a long time; aside from an acknowledgement, I only heard back from them yesterday (2015/09/03). To be fair to them, though, it was a complicated issue, even I wasn’t very sure of its impact, and I’d forgotten all about it until their response email arrived.

Their answer was as follows:

After discussing this issue internally, we have decided that no action will be taken at this time. I am unable to allocate more resources to answer your questions more specifically, but we do thank you for your concern and your commitment to computer security.

And I can’t blame them. Exploiting this issue would need a powerful attack vector already, and even then it’d be pretty rare to find the prerequisite conditions. The only thing I’m a bit bummed about is that they couldn’t get anyone to explain how it all works in full.

But the story doesn’t end there! In preparation for writing this blog post, I opened up a couple of Microsoft’s drivers on my Windows 10 box to refresh my memory, and found that they no longer had the execute flag set on the INIT section. It seems that Microsoft probably patched this issue in Visual Studio 2015, or in a hotfix for previous versions, so that it no longer happens. Makes me feel all warm and fuzzy inside. I should note, however, that 3rd party drivers such as Nvidia’s audio and video drivers still have the same issue, which implies that they haven’t been recompiled with a version of Visual Studio that contains the fix. I suspect that many vendor drivers will continue to have this issue.

I asked Microsoft whether it had been fixed in VS2015, but they wouldn’t comment on the matter. Since I don’t have a copy of VS2015 yet, I can’t verify my suspicion that they fixed it.

In closing, I’d like to invite anyone who knows more than me about this to provide more information about how/why the INIT section is used and discarded. If you’ve got a copy of VS2015 and can build a quick Hello World driver to test it out, I’d love to see whether it has the RWX issue on INIT.


Disclosure timeline:

  • 29th June 2015 – Discovered initial driver bug
  • 30th June 2015 – Discovered wider impact (all drivers affected)
  • 2nd July 2015 – Contacted Microsoft with report / query
  • 2nd July 2015 – Microsoft replied with acknowledgement
  • 6th July 2015 – Follow-up email sent to Microsoft
  • [ mostly forgot about this, so I didn’t chase it up ]
  • 3rd September 2015 – Microsoft responded (see above)
  • 3rd September 2015 – Acknowledgement email sent to Microsoft, querying fix status
  • 4th September 2015 – Microsoft responded, will not comment on fix status
  • 4th September 2015 – Disclosed