Archive for February, 2015

TypeScript: Physical Code Organisation

February 21, 2015

When I first started reading about TypeScript, I had one main concern: how am I going to make this work with the weird mix of modular code and old-school JS libraries in my existing codebase?

The features of the language itself are very well covered. I found various great introductions (and now there’s the awesome official Handbook), but they all seemed to gloss over certain fundamental questions I had about the physical organisation of code. So I’m going to go through the basics here and try to answer those questions as I go.

Modularity in the Browser

Let’s get the controversial, opinionated bit out of the way. (Spoiler: turns out my opinions on this are irrelevant to TS!)

How should you physically transport your JS into your user’s browser? There are those who suggest you should asynchronously load individual module files on the fly. I am not one of them. Stitch your files together into one big chunk, minify it, let the web server gzip it, let the browser cache it. This means it gets onto the user’s machine in a single request, typically once, like any other binary resource.

The exception would be during the development process: the edit-refresh-debug cycle. Clearly it shouldn’t be minified here. Nor should it be cached by the browser (load the latest version, ya varmint!) And ideally it shouldn’t be one big file, though that’s not as much of an issue as it was a few years ago (even as late as version 9, IE used to crash if you tried to debug large files, and Chrome would get confused about breakpoints).

But I’ve found it pretty straightforward to put a conditional flag in my applications, a ?DEBUG mode, which controls how it serves up the source. In production it’s the fast, small version. In ?DEBUG, it’s the convenient version (separate files).

In neither situation does it need to be anything other than CommonJS. For about four years now I’ve been using CommonJS-style require/exports as my module API in the browser, and it’s the smoothest, best-of-all-worlds experience I could wish for.

So what’s the point of AMD? Apparently “… debugging multiple files that are concatenated into one file [has] practical weaknesses. Those weaknesses may be addressed in browser tooling some day…” In my house they were addressed in the browser in about 2011.

But anyway… deep breath, calms down… it turns out that TypeScript doesn’t care how you do this. It turns us all into Lilliputians arguing over which way up a boiled egg must be eaten.

The kinds of file in TypeScript

In TS, modules and physical files are not necessarily the same thing. If you want to work that way, you can. You can mix and match. So however you ended up with your codebase, TS can probably handle it.

If a TS file just contains things like:

var x = 5;
function f() {
    return x;
}
Then the compiler will output the same thing (exactly the same thing, in that example). You can start to make it modular (in a sense) without splitting into multiple files:

module MyStuff {
    var x = 5;
    export function f() {
        return x;
    }
}

var y = MyStuff.f();

That creates an object called MyStuff (or extends an existing one) with one property, f, because I prefixed it with export. Modules can nest. So just as in JavaScript there’s one big global namespace that your source contributes properties to, but you can achieve modularity by using objects to contain related things.
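For reference, the compiled output for the module above follows the familiar IIFE pattern, roughly like this (hand-written to match what the compiler emits; exact formatting may differ):

```typescript
var MyStuff;
(function (MyStuff) {
    var x = 5;
    function f() {
        return x;
    }
    // export adds the function as a property of the module object
    MyStuff.f = f;
})(MyStuff || (MyStuff = {}));

var y = MyStuff.f();
```

The `MyStuff || (MyStuff = {})` dance is what lets several files extend the same module object.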

You can at this point roll your own pattern: write lots of separate files in the above style, each file being responsible for wrapping its code in a named module, then pass them to the TS compiler and stitch the result into one file.

Now try using export at the top level in your file:

var x = 5;
export function f() {
    return x;
}
The compiler will complain that you haven’t told it what module system you want to use. You tell it with the flag --module commonjs (or --module amd if you’re crazy). Now it works and does exactly what you’d expect as a user of your chosen module system.

But what does this mean in terms of the static type system of TS and so on? It means that this particular file no longer contributes any properties to the global namespace. By just using the export prefix at the top level, you converted it into what TS calls an external module.

In order to make use of it from another module, you need to require it:

import myModule = require("super-modules/my-module");

(Subsequent versions of TS will add more flexible ways to write this, based on ES6.)

Nagging question that can’t be glossed over: What happens to the string "super-modules/my-module"? How is it interpreted? In the output JS it’s easy: it is just kept exactly as it is. So your module system better understand it. But the compiler also wants to find a TS file at compile time, to provide type information for the myModule variable.

Suppose the importing module is saved in the directory:

somewhere/awesome-code/not-so-much/domestic/
The compiler will try these paths, in this order, until one exists:

  • somewhere/awesome-code/not-so-much/domestic/super-modules/my-module.ts
  • somewhere/awesome-code/not-so-much/super-modules/my-module.ts
  • somewhere/awesome-code/super-modules/my-module.ts
  • somewhere/super-modules/my-module.ts

i.e. it searches up the tree until it runs out of parent directories. (It will also accept a file with the extension .d.ts, or it can be “tricked” into not searching at all, but we’ll get to that later).

This is a little different to node’s take on CommonJS, where you’d only get that behaviour if your import path started with ./ – otherwise it inserts node_modules in the middle. But this doesn’t matter, as we’ll see.

One advantage of external modules over the first pattern we tried is that it avoids name clashes. Every module decides what name it will use to “mount” modules into its own namespace. Also note that by importing an external module in this way, your module also becomes external. Nothing you declare globally will actually end up as properties of the global object (e.g. window) any more.

So we have two kinds of file: external modules, and what I’m going to call plain files. The latter just pollute the global namespace with whatever you define in them. The compiler classifies all files as plain files unless they make use of import or export at the top level.

How do you call JavaScript from TypeScript?

No need to explain why this is an important question, I guess. The first thing to note is that widely-used JS libraries are packaged in various ways, many of them having longer histories than any popular JS module systems.

What if you’re dealing with something like jQuery and in your own JS you’ve been blithely assuming that $ exists globally? What you’re wishing for is that someone would rewrite jQuery as a plain TS file that says something like:

function $(selector: any) {
    // Um...
}
No use of export, see? It’s a little trickier than that in reality because $ is not just a function; it has properties of its own. Don’t worry – TS has ways to declare that.
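For the record, the way TS describes a function-with-properties is a “hybrid” interface: a call signature alongside ordinary members. The names below are illustrative, not real jQuery typings:

```typescript
interface MiniDollar {
    (selector: string): string;   // callable like a function...
    version: string;              // ...but also carrying properties
}

// A tiny concrete stand-in, just to show the shape is realisable:
var mini = <MiniDollar>function (selector: string) {
    return "selected " + selector;
};
mini.version = "1.0";
```

An ambient `declare var $: MiniDollar;` would then describe a global with that shape without emitting any code.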

Of course, no one can be bothered to rewrite jQuery in TS and fortunately they don’t have to. TypeScript supports ambient declarations, which are prefixed with the keyword declare like this:

declare var x: number;
declare function f(): number; 

These tell the compiler that somehow arrangements will be made such that the global namespace has properties x and f with those particular shapes. Just trust that they’ll be there, Mr Compiler, and don’t ask any questions. In fact the compiler won’t generate any output code for ambient declarations. (If you’re familiar with the old world of C, think header files, prototypes and extern).

Note that I don’t initialise x or provide a body for f, which would not be allowed; as a result the compiler cannot infer their types. To make the declarations be worth a damn, I specify the type number where necessary.

Finally, you can make sure that a file contains only ambient declarations by naming it with the extension .d.ts. That way, you can tell at a glance whether a file emits code. Your linking process (whatever it is) never needs to know about these declaration files. (Again, by analogy to C, these are header files, except the compiler bans them from defining anything. They can only declare.)

(In case you’re panicking at this point, it isn’t necessary to write your own declarations for jQuery, or for many other libraries (whether in the browser or Node). See DefinitelyTyped for tons of already-written ones.)

What if third party code does use a module system such as CommonJS? For example, if you’re using TS in Node and you want to say:

import path = require("path");

You have a couple of options. The first, and least popular as far as I can tell, is to have a file called path.d.ts that you put somewhere so it can be found by the compiler’s searching algorithm. Inside that file you’d have declarations such as:

export declare function join(...path: string[]): string;

The other option is that you have a file called path.d.ts that you put anywhere you like, as long as you give it to the TS compiler to read. In terms of modules it will be a plain file, not an external module. So it can declare anything you want. But somewhere in it, you write a peculiar module declaration:

declare module "path" {
    export function join(...path: string[]): string;
}
Note how the module name is given as a quoted string. This tells the compiler: if anyone tries to import "path", use this module as the imported type structure. It effectively overrides the searching algorithm. This is by far the most popular approach.
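With such a declaration in view, the compiler accepts the import, and under Node the string is resolved at runtime by Node’s own module system. A minimal usage sketch (using the real path module):

```typescript
import path = require("path");

// The compiler type-checks this against the declared signature;
// Node supplies the real implementation at runtime.
var joined = path.join("super", "modules");
```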

Reference comments

In some TS code you’ll see comments at the top of the file like this:

///<reference path="something/blah.d.ts" />

This simply tells the compiler to add that file (specified relative to the containing directory of the current file) to the set of files it is compiling. It’s like a crummy substitute for project files. In some near-future version of TS the compiler will look for a tsconfig.json in the current directory, which will act as a true project file (the superb TypeStrong plugin for the Atom editor already reads and writes the proposed format).

In Visual Studio projects, just adding a .ts file to a project is sufficient to get the compiler to read it. The only reason nowadays to use reference comments is to impose an order in which declarations are read by the compiler, as TypeScript’s approach to overloading depends on the order in which declarations appear.

DefinitelyTyped and tsd

If you install node and then (with appropriate permissions) say:

npm install -g tsd

You’ll get a command-line tool that will find, and optionally download, type definition files for you. e.g.

tsd query knockout

Or if you actually want to download it:

tsd query knockout --action install

This will just write a single file at typings/knockout/knockout.d.ts relative to the current directory. You can also add the option --save:

tsd query knockout --action install --save

That will make it save a file called tsd.json recording the precise versions of what you’ve downloaded. They’re all coming from the same github repository, so they are versioned by changeset.


I uhmm-ed and ahhh-ed for a while trying to decide what approach to take with my existing JS code. Should I write type declarations and only write brand new code in TS? Should I convert the most “actively developed” existing JS into TS?

The apparent dilemma stems from the way that .d.ts files let you describe a module without rewriting it, and “rewriting” sounds risky.

But it turned out, in my experience, that this is a false dilemma. The “rewriting” necessary to make a JS file into a TS file is

  1. Not that risky, as most of the actual code flow is completely unmodified. You’re mostly just declaring interfaces, and adding types to the variable names wherever they’re introduced.
  2. Phenomenally, indescribably worth the effort. By putting the types right in the code, the TS compiler helps you ensure that everything is consistent. Contrast this with the external .d.ts which the compiler has to trust is an accurate description. A .d.ts is like a promise from a politician.

In the end, I decided that the maximum benefit would come from rewriting two kinds of existing JS:

  • Anything where we have a lot of churn.
  • Anything quite fundamental that lots of other modules depend on, even if it’s not churning all that much.

You may come to a different conclusion, but this is working out great for me so far. Now when someone on the team has to write something new, they do it in TS and they have plenty of existing code in TS to act as their ecosystem.

I think that’s everything. What have I missed?


Knockout.Clear: Fully automatic cleanup in KnockoutJS 3.3

February 21, 2015

Note: This is the background discussion to a library called knockout.clear.


Among the many libraries that try to help us manage the complexity of responsive web UIs, KnockoutJS takes a unique approach that ultimately relies on the classic observer pattern, but with the twist that it can automatically detect dependencies and do all subscribing for you. If you take full advantage of this, you end up with a pattern I call “eventless programming”, a concept which I explored in depth a while back by rebooting it in C#.

The fundamental problem with the Observer pattern

The observer pattern suffers from a problem known as the lapsed listener. In short, if thing A is dependent on thing B, the lifetime of A must be greater than or equal to the lifetime of B, because B is holding a reference to A on its list of observers. This means that if B lasts a very long time (say, forever), then A will too. The end result can be indistinguishable from a memory leak – the very thing that garbage collection is supposed to solve, dagnabbit.
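A hand-rolled sketch (no Knockout involved; the names are mine) shows the shape of the problem:

```typescript
// Minimal observer-pattern subject, for illustration only.
class Subject<T> {
    private subscribers: Array<(v: T) => void> = [];
    constructor(private value: T) {}
    get(): T { return this.value; }
    set(v: T): void {
        this.value = v;
        this.subscribers.forEach(s => s(v));
    }
    subscribe(fn: (v: T) => void): void {
        this.subscribers.push(fn);
    }
}

var str = new Subject("hello");                  // long-lived B
var observed = "";
str.subscribe(v => observed = "[" + v + "]");    // ephemeral A enlists on B
str.set("world");

// Even once we stop caring about that callback, str's subscriber list
// still references it, so it cannot be collected while str lives.
```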

When you are explicitly, manually writing code to subscribe, this is not a surprise: you wrote code to put something on a list, so you must write code to take it off the list. It’s still a pain, but it’s unsurprising.

Knockout’s surprising behaviour

By contrast, in Knockout you don’t have to write that code. Given an observable str containing a string, you can define an observable that contains the string in brackets:

var stringInBrackets = ko.computed(() => "[" + str() + "]");

Without you needing to say so, stringInBrackets has been silently placed on str‘s list of observers (called “subscribers” in Knockout), so when str changes, stringInBrackets gets recomputed. But what if str lives forever whereas stringInBrackets is an ephemeral bit of fluff that you cook up temporarily and then discard? Then you have a problem. And what’s worse, a counterintuitive one. It looks like I’m just getting the value of something. Why should that other thing stop my thing from getting cleaned up?

To solve it, you have to put this somewhere:

stringInBrackets.dispose();
When should you do that? When you are never going to need stringInBrackets again. This sounds easier to figure out than it sometimes is. In simple cases, it’s when you get rid of the DOM node that binds to it. But sometimes that’s only a temporary situation and you’ll later rebind it to another DOM node; if you’ve disposed it, it won’t work anymore.

Essentially, it’s the old problem of figuring out a hierarchical pattern of ownership, and making sure you don’t dispose too early, or not at all. Garbage collection is supposed to avoid this. Where you have the dispose pattern, you don’t have the benefits of GC.

Given this, it’s understandable that critics of Knockout sometimes accuse it of replacing one complexity problem with another.

Problem solved (mostly)

But in Knockout 3.2, an interesting new feature was added. We can change our code to say:

var stringInBrackets = ko.pureComputed(() => "[" + str() + "]");

Now when stringInBrackets is first created, it is asleep. Only when it gets its own first subscriber does it execute the evaluator function and become a subscriber to str, transitioning to being awake. Best of all, when stringInBrackets loses its final subscriber, it goes back to sleep, so it unsubscribes from str. Note that it can switch between the asleep/awake states as many times as required; this is in contrast to being disposed, which is a one-way street.

This makes all the difference in the world. Well-behaved UI bindings will take care of properly unsubscribing, which means that if your view model consists only of plain observables and pureComputeds you can wave goodbye to the lapsed listener problem!


… there’s a handy little trick you may have stumbled upon. I call it the “orphan computed”. It looks like this:

ko.computed(() => {
    items().forEach(item => {
        // do something with each item...
    });
});
It looks weird at first. It’s an observable that always has the value undefined. (In TypeScript, it’s of type void). And therefore nothing observes it; after all, what would be the point? So why do we need it? Because of its side-effects. It’s not a pure function that returns a value. It changes the state of other things. An example would be that it makes minimal updates to another structure (e.g. grouped items).

If you can live without such chicanery you have nothing to worry about. But realistically, side-effects are very handy. There’s a very important example on the Knockout wiki that shows how to automatically unwrap promises, bridging the synchronous and asynchronous worlds, but internally it uses an orphan computed.

Can we switch these to using pureComputed?

ko.pureComputed(() => {
    items().forEach(item => {
        // do something with each item...
    });
});
In a word: no. The name itself is a clue: pure functions don’t have side-effects. But it’s really not the side-effects that are the problem; it’s the fact that it’s an orphan. The pureComputed will begin in the asleep state. As nothing ever asks for its value, it never wakes up, so never executes its evaluator at all.

So pureComputed, which promised so much, would seem to be a bust. But hold your horses, my fine friend. What’s that I hear coming over the hill like so many tedious metaphors? It’s our old friend lateral thinking!

In many situations, the solution is simple: don’t make it an orphan. Make the view observe its value, and do nothing with it. You can do this with a trivial binding, which I call execute:

ko.bindingHandlers.execute = {
    init: function() {
        return { 'controlsDescendantBindings': true };
    },
    update: function (element, valueAccessor) {
        // Unwrap recursively - so binding can be to an array, etc.
        ko.toJS(valueAccessor());
    }
};
ko.virtualElements.allowedBindings.execute = true;

You can use it like any other binding in your template. If you use a comment binding, it will make no difference at all to the DOM structure:

<!-- ko execute: mySideEffects --><!-- /ko -->

You can give it any pureComputed (or an array of them) that you want to keep awake. It’s the easy-to-understand, kinda-hacky way to keep a pureComputed awake so you can enjoy its side effects. A lot of the time, this gets you where you want to be, and doesn’t involve any extra weird concepts. It’s just more of the same, and it’s very easy to get right.

An alternative for more demanding scenarios

With the arrival of Knockout 3.3 we get a subtle enhancement that is another game-changer.

An observable can emit other events besides change (which it emits to notify of its value changing). In 3.3 pureComputed now emits awake and asleep events, as described in issue #1576, so we can react to its state changing. I know, it doesn’t sound that earth-shattering at first, but we can use it to build a new utility, which I’ve taken to calling ko.execute.

Here it is in a simplified form:

ko.execute = function(pureComputed, evaluator, thisObj) {

    function wake() {
        if (!disposable) {
            disposable = ko.computed(function() {
      ;
            });
        }
    }

    var disposable;
    pureComputed.subscribe(wake, null, "awake");
    pureComputed.subscribe(function() {
        if (disposable) {
            disposable.dispose();
            disposable = null;
        }
    }, null, "asleep");
};

You use it to make orphans, just like you used to with ko.computed. The difference is that rather than having to remember to dispose at exactly the right time, instead you pass it a pureComputed which will keep your orphan awake:

ko.execute(stringInBrackets, () => {
    items().forEach(item => {
        // do something with each item...
    });
});
It’s an alternative to the execute binding where, instead of referring to your side-effector from a binding, you associate it with something else to which you’re already binding. I’m going to call the first argument to execute the nanny, because it wakes your orphan up and puts it to sleep again.

But it has two limitations:

  • The nanny must be a pureComputed. This is a slight pain; in Knockout 3.3 ordinary observables don’t fire events to tell you when they transition between asleep and awake.
  • Your ko.execute‘s evaluator function must not depend on its nanny.

The second restriction is perhaps surprising, but think about it: your orphan will stay awake if there are any subscriptions to its nanny. If the orphan itself subscribes to the nanny, it will keep itself awake, and it will be no different to a plain ko.computed.

The full implementation of ko.execute, which can be found here, checks for both these conditions, making it impossible to use it incorrectly without finding out.

Using ko.unpromise to tame your asynchronous calls

Now we can re-examine that important use-case I mentioned earlier, involving asynchronous calls. The Knockout wiki gives a simple example implementation:

function asyncComputed(evaluator, owner) {
    var result = ko.observable();

    ko.computed(function() {
, owner).done(result);
    });

    return result;
}

To make it work with modern standard-conforming promise libraries (as well as jQuery’s almost-promises) the done should be changed to then. But also we need to eliminate that ko.computed:

function asyncComputed(evaluator, owner) {
    var result = ko.observable();
    var wrapper = ko.pureComputed(result);

    ko.execute(wrapper, function() {
, owner).then(result);
    });

    return wrapper;
}

See how it’s done? We dress up the result in pureComputed clothes, and that’s what we return, so our caller will be able to depend on it and so wake it up. And internally, we use that same pureComputed to be the “nanny” of our ko.execute, so it will wake up in sync with the nanny. When we get a result from the evaluator function, we poke it into the result observable.

Note how we obey the rules of ko.execute: we pass it a pureComputed as the first argument, and the second argument is an evaluator function that returns nothing and does not depend on the first argument.

Introducing knockout.clear

These few simple facilities combine to form a framework in which you can use Knockout without ever needing to dispose anything manually. Instead of your observables transitioning from alive to dead (or disposed), which is a one-way street, they are now able to transition between asleep and awake as necessary. When they are asleep, they are potentially garbage-collectable, not kept alive by being a lingering subscriber.

I’ve put them together in a very small library: knockout.clear

Let me know if you find it useful!


TypeScript 1.6 – Async functions

February 1, 2015

Update: have retitled this post based on the roadmap, which excitingly now has generators and async/await slated for 1.6!

I realise that I’m in danger of writing the same blog post about once a year, and I am definitely going to start making notes on my experiences using TypeScript generally, now that I’m using it on an industrial scale (around 40,000 lines converted from JavaScript in the last month or so, and the latest features in 1.4 have taken it to a new level of brilliance).

But the fact is, we’re getting tantalisingly close to my holy grail of convenient async programming and static typing in one marvellous open source package, on JavaScript-enabled platforms. If you get TypeScript’s source:

git clone
cd TypeScript

And then switch to the prototypeAsync branch:

git checkout prototypeAsync

And do the usual steps to build the compiler:

npm install -g jake
npm install
jake local 

You now have a TypeScript 1.5-ish compiler that you can run with:

node built/local/tsc.js -t ES5 my-code.ts

The -t ES5 flag is important because for the async code generation the compiler otherwise assumes that you’re targeting ES6, which (as of now, in browsers and mainstream node) you probably aren’t.

And then things are very straightforward (assuming you have a promisified API to call):

    async function startup() {

        if (!await fs.exists(metabasePath)) {
            await fs.mkdir(metabasePath);
        }
        if (!await fs.exists(coverartPath)) {
            await fs.mkdir(coverartPath);
        }

        console.log("Loading metabase...");
        var metabaseJson: string;
        try {
            metabaseJson = await fs.readFile(metabaseFile, 'utf8');
        } catch (x) {
            console.log("No existing metabase found");
        }

        // and so on...
    }

This corresponds very closely to previous uses of yield (such as this), but without the need to manually wrap the function in a helper that makes a promise out of a generator.

As explained in the ES7 proposal the feature can be described in exactly those terms, and sure enough the TypeScript compiler structures its output as a function that makes a generator, wrapped in a function that turns a generator into a promise.

This of course made me assume that ES6 generator syntax would also be implemented, but it’s not yet. But no matter! As I previously demonstrated with C#, if a generator has been wrapped in a promise, we can wrap it back in a generator.

To keep the example short and sweet, I’m going to skip three details:

  • exception handling (which is really no different to returning values)
  • passing values into the generator so they are “returned” from the next use of yield (similarly, passing in an Error so it will be thrown out of yield)
  • returning a value at the end of the generator.

The first two are just more of the same, but the last one turned out to be technically tricky and I suspect is impossible. It’s a quirky and non-essential feature of ES6 generators anyway.

To start with I need type declarations for Promise and also (for reasons that will become clear) Thenable, so I grabbed es6-promise.d.ts from DefinitelyTyped.

Then we write a function, generator, that accepts a function and returns a generator object (albeit a simplified one that only has the next method):

    function generator<TYields>(
        impl: (yield: (val: TYields) => Thenable<void>) => Promise<void>) {

        var started = false,
            yielded: TYields,
            continuation: () => void;

        function start() {
            impl(val => {
                yielded = val;
                return {
                    then(onFulfilled?: () => void) {
                        continuation = onFulfilled;
                        return this;
                    }
                };
            });
        }

        return {
            next(): { value?: TYields; done: boolean } {
                if (!started) {
                    started = true;
                    start();
                } else if (continuation) {
                    var c = continuation;
                    continuation = null;
                    c();
                }
                return !continuation ? { done: true } 
                    : { value: yielded, done: false };
            }
        };
    }
The impl function would be written using async/await, e.g.:

    var g = generator<string>(async (yield) => {

        await yield("first");

        for (var n = 0; n < 5; n++) {
            await yield("Number: " + n);
        }

        await yield("last");
    });

Note how it accepts a parameter yield that is itself a function: this serves as the equivalent of the yield keyword, although we have to prefix it with await:

    await yield("first");

And then we can drive the progress of the generator g in the usual way, completely synchronously:

    for (var r; r =, !r.done;) {
        console.log("-- " + r.value);
    }

Which prints:

-- first
-- Number: 0
-- Number: 1
-- Number: 2
-- Number: 3
-- Number: 4
-- last

So how does this work? Well, firstly (and somewhat ironically) we have to avoid using promises as much as possible. The reason has to do with the terrifying Zalgo. As it says in the Promises/A+ spec, when you cause a promise to be resolved, this does not immediately (synchronously) trigger a call to any functions that have been registered via then. This is important because it ensures that such callbacks are always asynchronous.
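You can see that guarantee in miniature (this snippet assumes a Promises/A+ implementation such as the native ES6 Promise):

```typescript
var order: string[] = [];

// The promise is already resolved when then() is called...
Promise.resolve(1).then(() => order.push("then-callback"));
order.push("after-resolve");

// ...yet the callback is still deferred to a later tick,
// so the synchronous push always wins.
setTimeout(() => console.log(order.join(", ")), 0);
// → "after-resolve, then-callback"
```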

But this has nothing to do with generators, which do not inherently have anything to do with asynchronicity. In the above example, we must be able to create the generator and iterate through it to exhaustion, all in a single event loop tick. So if we rely on promises to carry messages back and forth between the code inside and outside the generator, it just ain’t gonna work. Our “driving” loop on the outside is purely synchronous. It doesn’t yield for anyone or anything.

Hence, observe that when generator calls impl:

    impl(val => {
        yielded = val;
        return {
            then(onFulfilled?: () => void) {
                continuation = onFulfilled;
                return this;
            }
        };
    });

it completely ignores the returned promise, and in the implementation of yield (which is that lambda that accepts val) it cooks up a poor man’s pseudo-promise that clearly does not implement Promises/A+. Technically this is known as a mere Thenable. It doesn’t implement proper chaining behaviour (fortunately unnecessary in this context), instead returning itself. The onFulfilled function is just stashed in the continuation variable for later use in next:

    if (!started) {
        started = true;
        start();
    } else if (continuation) {
        var c = continuation;
        continuation = null;
        c();
    }
    return !continuation ? { done: true } 
                         : { value: yielded, done: false };

The first part is trivial: if we haven’t started, then start and remember that we’ve done so. Then we come to the meat of the logic: if this is the second time next has been called, then we’ve started. That means that impl has been called, and it ran until it hit the first occurrence of await yield, i.e.:

    await yield("first");

The TypeScript compiler’s generated code will have received our Thenable, and enlisted on it by calling then, which means we have stashed that callback in continuation. To be sure we only call it once, we “swap” it out of continuation into a temporary variable before we call it:

    var c = continuation;
    continuation = null;

That (synchronously) executes another chunk of impl until the next await yield, but note that we left continuation set to null. This is important because what if impl runs out of code to execute? We can detect this, because continuation will remain null. And so the last part looks like this:

    return !continuation ? { done: true } 
                         : { value: yielded, done: false };

Why do we have to use this stateful trickery? To reiterate (pun!) the promise returned by impl is meant to signal to us when impl has finished, but it’s just no good to us, because it’s a well-behaved promise, so it wouldn’t execute our callback until the next event loop tick, which is way too late in good old synchronous generators.

But this means we can’t get the final return value (if any) of impl, as the only way to see that from the outside is by enlisting on the returned promise. And that’s why I can’t make that one feature of generators work in this example.

Anyway, hopefully soon this will just be of nerdy historical interest, once generators make it into TypeScript. What might be the stumbling block? Well, TypeScript is all about static typing. In an ES6 generator in plain JavaScript, all names (that can be bound to values) have the same static type, known in TypeScript as any, or in the vernacular as whatever:

    function *g() {
        var x = yield "hello";
        var y = yield 52;
        yield [x, y];
    }

    var i = g();
    var a =;
    var b =;
    var c ="humpty").value;

The runtime types are another matter: since the only assignments here are initialisations, each variable only ever contains one type of value, so we can analyse the code and associate a definite type with each:

    x: number = 61
    y: string = "humpty"
    a: string = "hello"
    b: number = 52
    c: [number, string] = [61, "humpty"]

But in TypeScript, we want the compiler to track this kind of stuff for us. Could it use type inference to do any good? The two questions to be answered are:

  • What is the type of the value accepted by next, which is also the type of the value “returned” by the yield operator inside the generator?
  • What is the type of the value returned in the value property of the object returned by next, which is also the type of value accepted by the yield operator?

The compiler could look at the types that the generator passes to yield. It could take the union of those types (string | number | [number, string]) and thus infer the type of the value property of the object returned by next. But the flow of information in the other direction isn’t so easy: the type of value “returned” from yield inside the generator depends on what the driving code passes to next. It’s not possible to tie down the type via inference alone.
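To make that one-way street concrete, here’s a sketch in TypeScript terms (the name YieldedFromG is mine, purely for illustration): the compiler could form the union of everything g yields, but nothing inside the generator’s body pins down what callers will pass to next.

```typescript
// The union the compiler could infer for the value property of next's result:
type YieldedFromG = string | number | [number, string];

var v: YieldedFromG = "hello";   // each yielded value fits the union
v = 52;
v = [61, "humpty"];
// But in `var x = yield "hello"`, x's type depends entirely on what the
// driving code passes to next - the body alone can't determine it.
```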

There are therefore two possibilities:

  • Leave the type as any. This is not great, especially not if (like me) you’re a noImplicitAny adherent. Static typing is the whole point!
  • Allow the programmer to fully specify the type signature of yield.

The latter is obviously my preference. Imagine you could write the interface of a generator:

    interface MyGeneratorFunc {
        *(blah: string): Generator<number, string>;
    }

Note the * prefix, which would tell the compiler that we’re describing a generator function, by analogy with function *. And because it’s a generator, the compiler requires us to follow up with the return type Generator, which would be a built-in interface. The two type parameters describe:

  • the values passed to yield (and the final return value of the whole generator)
  • the values “returned” from yield

Note that the first type covers two kinds of outputs from the generator, but they have to be described by the same type because both are emitted in the value property of the object returned by the generator object’s next method:

    function *g() {
        yield "eggs";
        yield "ham";
        return "lunch";
    }

    var i = g();
;   // {value: "eggs", done: false}
;   // {value: "ham", done: false}
;   // {value: "lunch", done: true} - Note: done === true

Therefore, if we need them to be different types, we’ll have to use a union type to munge them together. In the most common simple use cases, the second type argument would be void.

This would probably be adequate, but in reality it’s trickier than this. Supposing in a parallel universe this extension was already implemented, but async/await was still the stuff of nightmares, how might we use it to describe the use of generators to achieve asynchrony? It’d be quite tricky. How about:

    interface AsyncFunc {
        *(): Generator<Promise<?>, ?>;
    }

See what I mean? What replaces those question marks? What we’d like to say is that wherever yield occurs inside the generator, it should accept a promise of a T and give back a plain T, where those Ts are the same type for a single use of yield, and yet it can be a different T for each use of yield in the same generator.
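The relation we’re after is exactly what an ordinary generic function signature can express: each call site binds its own T. (awaitLike below is purely illustrative, standing in for the hypothetical per-yield typing; I’ve used a plain wrapper object rather than a promise so it runs synchronously.)

```typescript
// A generic function binds a fresh T at each call site - precisely the
// per-use-of-yield relation that Generator<Promise<?>, ?> can't express.
function awaitLike<T>(wrapped: { value: T }): T {
    return wrapped.value;
}

var n = awaitLike({ value: 42 });    // T = number for this call
var s = awaitLike({ value: "hi" });  // T = string for this call
```

Generators would need something like that per occurrence of yield, and there’s no way to hang it on a single pair of type arguments.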

The hypothetical declaration above just can’t capture that relation. It’s all getting messy. No wonder they’re doing async/await first. On the other hand, we’re not in that universe, so maybe this stuff doesn’t matter.

These details aside, given how mature the prototype appears to be, I’m very much hoping that it will be released soon, with or without ordinary generators. It’s solid enough for me to use it for my fun home project, and it’s obviously so much better than any other way of describing complex asynchronous operations that I am even happy to give up IDE integration in order to use it (though I’d be very interested to hear if it’s possible to get it working in Visual Studio or Eclipse).

(But so far I’m sticking to a strict policy of waiting for official releases before unleashing them on my co-workers, so for now my day job remains on 1.4. And so my next post will be about some of the fun I’m having with that.)