Coming from a C++ background, I always balk at designs that involve lots of small heap allocations. It’s something I’m gradually training myself to stop doing, because in managed code, it’s a completely irrational concern.
Given the constant usefulness of lambdas for things like this, it is worth reassuring myself (and others) that the overhead of such techniques is negligible. I already knew it probably was, but I had no idea just how negligible. It really is ridiculously unimportant.
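Here’s the sort of crude micro-benchmark I mean – the names are my own inventions, and Stopwatch timings like this are only a rough sketch (JIT warm-up, timer resolution), not a rigorous measurement:

```csharp
using System;
using System.Diagnostics;

static class LambdaOverhead
{
    // Plain loop: no delegates, no heap allocations.
    public static long SumDirect(int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += i;
        return total;
    }

    // Same work routed through a lambda: one small delegate allocation
    // up front, then an indirect call on every iteration.
    public static long SumViaLambda(int n)
    {
        Func<long, int, long> add = (acc, i) => acc + i;
        long total = 0;
        for (int i = 0; i < n; i++)
            total = add(total, i);
        return total;
    }

    public static void Main()
    {
        const int n = 10_000_000;
        SumDirect(n); SumViaLambda(n); // warm up the JIT first

        var sw = Stopwatch.StartNew();
        SumDirect(n);
        Console.WriteLine($"direct: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        SumViaLambda(n);
        Console.WriteLine($"lambda: {sw.ElapsedMilliseconds} ms");
    }
}
```

The difference between the two timings is the entire cost of the delegate machinery, which is what turns out to be so ridiculously small.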
I complain to anyone who will listen about the poor language support in C# for the IDisposable interface. Yeah, we’ve got the using statement, and that’s fine as far as it goes.
But compare that to what C++/CLI has: the most complete support of any .NET language. Not just the equivalent of the using statement, but also automatic implementation of IDisposable that takes care of disposing of nested owned objects, including inherited ones, like magic (relatively speaking – unless you’re a C++ programmer, in which case you’ve taken it for granted for a decade or two).
The relationship between deterministic clean-up (i.e. destructors) and garbage collection (i.e. finalizers) was quite vaguely understood until Herb Sutter clarified it as part of his work on C++/CLI. But now it’s all very clear how it should work – the only problem is that there doesn’t seem to be any movement towards fixing it in any future version of C#.
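To see what that means in practice, here is roughly the boilerplate C# makes you write by hand to get what C++/CLI generates automatically – the types here are invented purely for illustration:

```csharp
using System;

// An owned resource. In C++/CLI, declaring this as a member "by value"
// is enough for the containing class to dispose of it automatically.
class Connection : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

class Session : IDisposable
{
    private readonly Connection _connection = new Connection();
    private bool _disposed;

    public Connection Connection => _connection;

    // In C#, forwarding Dispose to every owned member is entirely manual.
    // Forget one, and it silently never gets cleaned up.
    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _connection.Dispose();
    }
}
```

With `using (var s = new Session()) { ... }` the outer object is disposed deterministically – but only because every level of the ownership chain remembered to forward the call, which is exactly the part C++/CLI does for you.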
Microsoft "released" the source of the .NET framework a while ago. The details are here:
But of course, it’s not that simple. You’re not supposed to be able to read the source willy-nilly; you’re only supposed to be able to see it when you’re debugging a program. So there are some instructions for setting it up with Visual Studio. I got this to work once, but since I applied Service Pack 1 it hasn’t worked – I just get blank files.
Then there’s this curious page:
CLR programs have memory leaks, just like unmanaged ones, but for a different reason. There are lots of opportunities to add your object to a list – the major example being enlisting for events. If the event source lives for the lifetime of the program, then any objects listening for that event will also last for the lifetime of the program. So if you accidentally leave an object enlisted on such an event, you have created a memory leak – much like forgetting to call delete after calling new in C++.
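A minimal sketch of that kind of leak, with invented types – the event subscription plays the role of new, and detaching plays the role of delete:

```csharp
using System;

// Lives for the whole program, e.g. a static settings or timer object.
class LongLivedSource
{
    public event EventHandler Changed;
    public void RaiseChanged() => Changed?.Invoke(this, EventArgs.Empty);
    public int ListenerCount => Changed?.GetInvocationList().Length ?? 0;
}

class Listener
{
    private readonly LongLivedSource _source;

    public Listener(LongLivedSource source)
    {
        _source = source;
        // The "new": the long-lived source now holds a reference to us.
        _source.Changed += OnChanged;
    }

    private void OnChanged(object sender, EventArgs e) { /* react */ }

    // The "delete": without this, the listener stays reachable from the
    // source's delegate list and can never be collected.
    public void Detach() => _source.Changed -= OnChanged;
}
```

Forget to call Detach and the Listener lives as long as the source does, however long ago you stopped caring about it.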
But what is far better about the CLR is that it can pretty much tell you exactly what is causing a leak, although the technique is only documented in a few blog posts here and there – and those seem to be written by people who prefer using windbg. But after some fooling around, I’ve made it work in Visual Studio, which is a lot more convenient.
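For the record, the core of the technique (as far as I can piece it together from those posts) goes through the SOS debugging extension, and the same commands can be issued from the Visual Studio Immediate window while you’re broken into the process – the type name and address below are just placeholders:

```
.load sos                      (load the SOS extension)
!dumpheap -stat                (histogram of the managed heap, by type)
!dumpheap -type MyListener     (addresses of instances of the suspect type)
!gcroot <address>              (the chain of references keeping one alive)
```

That last command is the payoff: it walks from the GC roots down to your leaked object, so you can see exactly which list (or event) it is still enlisted on.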
Say you’ve written a big complicated application in C++, composed of a large number of COM objects that chat to each other through custom interfaces declared in IDL.
After a decade or so of this, you’re not getting any younger and you don’t want to spend another decade looking at such ugly code. Or maybe you want to open up your product for extension by third parties but you want them to be able to use “modern” (.NET) languages. Either way, the interop support of the CLR is just what you need.
However, when you start reading about it, a question might occur to you. Well, it ought to, even if it doesn’t. How does garbage collection interact with COM reference counting?