Archive for the ‘Uncategorized’ Category

To be continued…

March 10, 2016

I’ve finally lost all patience with WordPress!

For future episodes, see:

Go to


(or if that doesn’t work, try Google I guess…)

Categories: Uncategorized

Is TypeScript really a superset of JavaScript? And does it even matter?

July 11, 2015


  • What does it mean for a programming language to be a superset of another programming language?
  • What’s a programming language?
  • What’s a program?

In this discussion, a program, regardless of language, is a stream of characters.

If you generated a random stream of characters, it might be a valid program in some hypothetical language, just as the arrangement of stars in the night sky as viewed from Earth might happen to spell out an insulting message in some alien language we’ll never know about.

So a programming language is both:

  • the rules for deciding whether a given stream of characters is a valid program, from that language’s point of view, and,
  • the set of valid programs, because they are streams of characters that conform to those rules.

It’s the second (slightly surprising) formulation we’re interested in here, because it means that when we say “language A is a superset of language B”, we mean that A and B are sets of programs, and set A includes all the programs in set B. This is useful information, because it means all the programs we wrote in language B can immediately be used in language A, without us needing to change them.

People get very muddled about this, because they think of the programming language as a set of rules instead of a set of programs, and therefore assume that a superset would include all the rules of the subset language, plus some extra rules. This could make it stricter, rejecting some previously valid programs, or it could make it looser, allowing new syntactic forms. So without knowing the details of the extra rules in question, we wouldn’t know what’s happened.

So the “set of rules” sense is far less useful than the “set of programs” sense, which does actually tell us something about the compatibility between the languages.

The most common statement in introductions and tutorials about TypeScript is that it is a superset of JavaScript. Really? Here’s a valid JavaScript program:

var x = 5;
x = "hello";

Rename it to .ts and compile it with tsc and you’ll get an error message:

Type 'string' is not assignable to type 'number'.

We can fix it though:

var x: any = "hello";
x = 5;

We’ve stopped the compiler from inferring that x is specifically a string variable just because that’s what we initialised it with. Plain JavaScript can be retro-imagined as a version of TypeScript that assumes every variable is of type any.

In any case, one example is sufficient to show that TypeScript is not a superset of JavaScript in the more useful “set of valid programs” sense, and it seems we’ve found one. Except it’s a bit murkier than that.

If you looked in the folder containing your source file right after you tried to compile the “broken” version, you would have found an output .js file that the TypeScript compiler had generated quite happily.

TypeScript makes your source jump over two hurdles:

  1. Is it good enough to produce JavaScript output?
  2. Does it pass type checks?

If your source clears the first hurdle, you get a runnable JavaScript program as output even if it doesn’t clear the second hurdle. This quirk allows TypeScript to claim to be a superset of JavaScript in the set-of-programs sense.

But I’m not sure it counts for much. Is anyone seriously going to release a product or publish a site when it has type errors in compilation? They wouldn’t be getting any value from TypeScript (over any ES6 transpiler such as Babel). The compiler has a couple of switches that really should be enabled in any serious project:

  • --noEmitOnError – require both hurdles to be cleared (the word “error” here refers to type errors).
  • --noImplicitAny – when type inference can’t deduce something more specific than any, halt the compilation.
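Both switches are real compiler options, and can also live in a tsconfig.json so nobody forgets to pass them. A minimal sketch:

```json
{
    "compilerOptions": {
        "noEmitOnError": true,
        "noImplicitAny": true
    }
}
```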

If you’re going to use TypeScript (hint: you are) then use it properly.

And in that case, it is not a superset of JavaScript. And this is a Good Thing. The whole point is that existing JavaScript programs, due to the language’s dynamically typed looseness, very often contain mistakes that would be trapped by TypeScript. The example above, where a variable is reused for different types, might be a mistake but might not (in performance terms, it’s probably a mistake in that it stops modern JS runtimes from optimising code that accesses the variable).

When we want to use JavaScript, unchanged, as part of a TypeScript project, we just leave it as JavaScript and wrap it in d.ts declarations. It’s no big deal. You only change the extension to .ts because you want it to be more rigorously checked, so you know that the types make sense all the way down into the nitty gritty.
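For illustration, a declaration file for a hypothetical plain-JS module (the names here are invented) can be as small as this:

```typescript
// maths-utils.d.ts – hand-written types for an existing maths-utils.js,
// which itself stays as plain, unchanged JavaScript.
declare module "maths-utils" {
    export function double(n: number): number;
    export function mean(xs: number[]): number;
}
```

TypeScript code can then import the module with full type checking, while the implementation remains untouched JavaScript.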

The parallel with C++’s relationship to C is striking. In C (specifically, ANSI C prior to the 1999 standard) it was not necessary to declare a function before you called it. In C++ it became mandatory. This – amongst other differences – meant that C++ was never a superset of C.

But this didn’t matter too much, because C++ was a superset of well-written C – to the extent that every C example in the 2nd edition of K&R was valid C++.

So TypeScript, in any meaningful sense, is not a superset of JavaScript, but this is nothing to get hung up over; if it were a superset of JavaScript, it would be considerably less useful.

Categories: Uncategorized

TypeScript 1.5: Get the decorators in…

April 2, 2015

Update 2016-03-07: My new library, Doop, is a more practical demonstration of what you can do with decorators.

No great mysteries about what this class does:

class C {
    foo(n: number) {
        return n * 2;
    }
}

It’s a stupid example with a method that returns its only parameter multiplied by two. So this prints 46:

var c = new C();
console.log(c.foo(23));

A feature coming in TS 1.5 that has really caught my eye is decorators. We can decorate the foo method:

class C {
    @log
    foo(n: number) {
        return n * 2;
    }
}

That on its own will not compile, because we need to define log. Here’s my first try:

function log(target: Function, key: string, value: any) {
    return {
        value: function (...args: any[]) {

            var a = args.map(a => JSON.stringify(a)).join();
            var result = value.value.apply(this, args);
            var r = JSON.stringify(result);

            console.log(`Call: ${key}(${a}) => ${r}`);

            return result;
        }
    };
}
Note: previously I used arrow function syntax to declare value, but as pointed out in this answer on Stack Overflow, that interferes with the value of this, which ought to be passed straight through unchanged.

The three parameters, target, key and value, are passed in by the helper code generated by the TypeScript compiler. In this case:

  • target will be the prototype of C,
  • key will be the name of the method ("foo"),
  • value will be a property descriptor, which here will have a value property that is the function that doubles its input.

So my log function returns a new property descriptor that the TypeScript compiler will use as the new definition of foo. This means I can wrap the real function in one that logs information to the console, so here I log the arguments and the return value.
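To see the mechanism without any compiler magic, here is a sketch (not the compiler’s actual emitted helper) of what effectively happens: read foo’s property descriptor, pass it through log, and redefine foo with the result.

```typescript
// Sketch of the decorator mechanism, applied by hand instead of via @log.
class C {
    foo(n: number) {
        return n * 2;
    }
}

function log(target: any, key: string, value: PropertyDescriptor): PropertyDescriptor {
    return {
        value: function (this: any, ...args: any[]) {
            var a = args.map(x => JSON.stringify(x)).join();
            var result = value.value.apply(this, args);
            console.log(`Call: ${key}(${a}) => ${JSON.stringify(result)}`);
            return result;
        }
    };
}

// What the generated helper boils down to: replace foo's descriptor
// with whatever the decorator function returns.
var desc = Object.getOwnPropertyDescriptor(C.prototype, "foo")!;
Object.defineProperty(C.prototype, "foo", log(C.prototype, "foo", desc));

var c = new C();
console.log(c.foo(23)); // logs the Call line, then 46
```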

So when the resulting program runs in node, it prints two lines instead of just one:

Call: foo(23) => 46
46

This is really quite neat. The next thing I think is: can I decorate a simple value so that it turns into a property that is an observable that can be listened to for change events? I’m going to find out.

Acquisition Machines

March 29, 2015

This is off-topic from my usual technology geek focus, but it was prompted by reading a blog post Never Invent Here: the even-worse sibling of “Not Invented Here” by Michael O. Church, which reminded me of some things I’ve observed.

My experience of Never-Invent-Here has fortunately been brief and infrequent, and of a specific kind, but I know exactly why it happens and how to recognise it and what to do about it. It’s kind of inevitable.

When your exciting, innovation-happy employer has reached a kind of peak, and has a bunch of customers paying a steady revenue stream, and there’s no more inventing to do, they get acquired by what I call an Acquisition Machine (AM). These serve the opposite purpose to a VC, in that they get in at the top floor instead of the ground floor. They come in at the end of the story. They are 100% risk averse.

An AM has a large-ish technical staff, all former inventors from the mature companies they’ve gobbled up (having fired all the other staff). They never intentionally invent anything in-house. They almost never hire anyone new. The tech staff are only there to put out fires in old products for old customers (only the ones covered by maintenance contracts – that existing stream of revenue is the entire focus of the business).

So you may find yourself as one of those technical staff. You basically work in a museum now. The only projects that come up are “consolidation” efforts: making several old products look like one. In fact the management may just be inventing busy-work to keep you distracted during the down time. That’s cool! You can pitch approaches that involve the latest technologies, get skilled up. Play-act at inventing for a while.

Look at the world from the AM’s perspective. They’re terrified, risk-averse owners of shares that are never going to get back to what they were worth in 1999. They have a choice:

a) Bet on something invented in-house by one of these excitement-starved nerds we employ, and go to all the trouble of marketing it… OMG this sounds difficult and it probably won’t work… no way of estimating how much it will cost in total or how much we’ll ever make. Let’s not.

b) Buy an entire product-business-unit, something already proven, with customers already paying for it, something that is already clearly quantified with known cash inputs and outputs, whose outgoing management have already helpfully made a list of who you need to keep on and who you can safely lay off! Ready to plug into the machine and slowly drain down, generating cash to go on the pile, ready for the next acquisition.

Which are they going to go for? They are not in this for the excitement! It’s a slow way to grow, but it is a kind of growth. The target company’s old management are running out of exit routes, so they sell up cheap and so the AM does actually make a profit in the end. It just doesn’t do it by inventing anything. There’s no need.

I’m painting an extreme caricature of course – real companies may be somewhere on a spectrum from innovator to AM. Or they may be made up of parts with different degrees of maturity/stagnation. But the more stagnant it is, the worse it will be for anyone who wants to invent new stuff.

So if you are an inventor, and your employer gets bought, look for the signs that you’re stuck inside an AM. If you are, wait until you find a new opportunity elsewhere to start over, and then get the hell out. You can’t “fix” an AM. There’s nothing to fix. They’re just not the kind of business you want to be in. They have their own reason to exist.

React-ions – Part 2: Flux, The Easy Way

March 20, 2015

The second of a two-part series about React:

Catching up on Flux has been an amusing experience. It’s like reading about a dance craze in the 1950s. Instead of “The Twist”, everybody wants to do “The Flux”. People are nervously looking in the mirror as they try out the moves. They write to the newspaper agony aunt, “I tried to Flux, but am I doing it right?” They want a member of the priesthood to bless their efforts with holy jargon, and say “You are one of us, Daddio!”

You’d think someone with a software project would rather ask:

  • Did my app work, in the end?
  • Does it perform okay?
  • Was it easy?
  • How much boilerplate crap did I have to paste in from blog posts?
  • Do I feel comfortable with the complexity or did it get out of control?
  • Do I know how to extend it further without creating a mess?

And based on their own answers to those questions, they should be able to figure out whether an approach was worthwhile for them.

My executive summary of Flux is: it’s a niche approach at best. For a lot of (maybe most) dynamic interactive UI development it’s not the right choice, because it’s error prone and unwieldy without providing significant advantages.

So how did I reach this conclusion?

It’s extraordinary that of the many hundreds of blog posts about Flux, hardly any try to explain it or justify it. They just describe it, without reference to long-established patterns it partly resembles, and without clarifying why it takes the trouble to deviate from those patterns. (Even worse, most explanations give truncated examples of how it might be used which don’t proceed far enough to demonstrate its intended purpose.)

The three primary sources of information I’ve drawn on are:

Derived Data

From the official site:

We originally set out to deal correctly with derived data: for example, we wanted to show an unread count for message threads while another view showed a list of threads, with the unread ones highlighted. This was difficult to handle with MVC – marking a single thread as read would update the thread model, and then also need to update the unread count model. These dependencies and cascading updates often occur in a large MVC application, leading to a tangled weave of data flow and unpredictable results.

Holy hype alarm, Batman! The part about the tangled weave is absolutely not justified by the scenario being described. This is a very familiar situation: some data set B needs to be computed from some other data set A. How did Facebook end up with a tangled weave and unpredictable results? Did they visit the wrong wig shop?

A sensible approach would be to come up with a pure function that accepts A and returns B. If that is an unworkable technique that causes “unpredictable results”, then someone needs to let the applied mathematicians know. Then hopefully they can break it gently to the pure mathematicians, who will then need to rebuild their entire subject from scratch.

For an example of something that does this right, look no further than a React component, which has state and props, and if either of those changes then the render function is evaluated to generate a complete new virtual DOM tree – relevant portion of the video.

Key lesson: first, try writing a pure function. If the performance is unacceptable (in 99.9% of cases, it’ll be fine) then consider alternatives.

In the video there is an example of how Facebook’s chat feature got more complex as it evolved. The code you can see growing on the screen is a function that runs every time the user has a new message – it’s effectively a handler for that event. They interpret the event by dishing out modifications to several different parts of the UI that may or may not be interested in what happened, depending on their current state.

They’re right – it was a very complicated way of doing it. They weren’t using pure functions. They should do what React components do: update a single definitive model of data (the state from which everything else can be computed), and then let the other components know that something has changed (no need to be specific), so they can all recompute all their data from scratch as a pure function.

In this case, that means keep all the messages received so far on a list. When a new message arrives, add it to the list and notify anything that needs to update.

It’s a common reaction to think how wasteful and inefficient it is to do that. But the second half of the video (the half that is not about Flux) is devoted almost entirely to dispelling that belief, as the React DOM reconciliation approach assumes that there will be an insignificant cost to recomputing the entire new virtual DOM every time anything changes.

In this case, rather than going from component state to virtual DOM, we’re going from list-of-all-messages to (for example) count-of-unread-messages. The functional-reactive approach here is to scan the array of message objects and count how many have a boolean property called read that is true. On my notebook such an operation is too fast for the JS timer resolution for any realistic number of messages. For a million messages it takes 18 milliseconds.
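To make that concrete, here is a rough sketch of the kind of measurement involved (exact timings are machine-dependent; the point is only the order of magnitude):

```typescript
// Count unread messages by scanning a million-element array from scratch.
interface Msg { read: boolean; }

var msgs: Msg[] = [];
for (var i = 0; i < 1000000; i++) {
    msgs.push({ read: i % 3 === 0 }); // arbitrary mix of read/unread
}

var t0 = Date.now();
var unread = msgs.filter(m => !m.read).length;
var elapsed = Date.now() - t0;

console.log(unread + " unread, scanned in " + elapsed + "ms");
```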

The Root of All Evil

The problem here, as so often, is premature optimisation, which is the idea that you can achieve “high performance” by doing everything the hard way. It’s simply not true.

The single most important quality software can have is malleability. It must be easy to change without breaking it. This leads to high performance software because the best way to achieve that is to measure the performance with a profiler and make careful, valuable optimisations only to those specific spots that you have found to be genuine bottlenecks. Easy to change means easy to optimise.

If you lean heavily on pure functions this will be a huge help to you as you apply performance tricks, because you can use caching very easily. And a clear distinction between mutable and immutable data is also important because it makes it easy to know when you need to clear the cache. Again, React components demonstrate this perfectly.
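As a sketch of that caching idea (a hypothetical helper, not part of React): if the source data is immutable, a simple reference check is enough to know whether the cached result is stale.

```typescript
// Cache the most recent result of a pure function, keyed on input identity.
function memoizeLast<A, B>(f: (a: A) => B): (a: A) => B {
    var cache: { input: A; output: B } | null = null;
    return function (a: A): B {
        if (cache === null || cache.input !== a) { // new reference => recompute
            cache = { input: a, output: f(a) };
        }
        return cache.output;
    };
}

var computeCount = 0;
var unreadCount = memoizeLast(function (msgs: { read: boolean }[]) {
    computeCount++; // track how often we really recompute
    return msgs.filter(m => !m.read).length;
});

var inbox = [{ read: true }, { read: false }];
unreadCount(inbox);                          // computes
unreadCount(inbox);                          // same reference: served from cache
unreadCount(inbox.concat({ read: false }));  // new array: recomputes
```

This only works safely because the data is treated as immutable: a mutation in place would leave the reference unchanged and the cache stale, which is exactly why the mutable/immutable distinction matters.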

The Flux approach

Instead of radically simplifying and using pure functions like React, the aim of Flux is to stick with the difficult, fine-grained state-mutating approach shown in the video, where each time a message arrives, you run a very imperative set of code that mutates this, mutates that, mutates something else, in an attempt to bring them all up to a state that is consistent with what is now known.

In other words, if you want to keep doing it the hard way, Flux might just be the approach for you.

Here’s the simplest diagram:

Those arrows are effectively function calls, but via callbacks. So the thing on the right has registered a callback with the thing on the left, so that left can call right, and the influence of an action ripples through the layers.

  • An action is a plain JS object tagged with a string type property (someone loves writing switch statements!) representing a request to update some data. Think of it as an abstraction of a mutating function call on your data model. As well as type it can contain any other parameters needed by the notional mutating function.

  • Having constructed an action, you pass it to the Dispatcher, which is a global singleton(!). The dispatcher has a list of subscriber callbacks, the subscribers are known as “stores”, and they are also global singletons(!!). The dispatcher loops through the stores and passes the action to all of them.

  • Each store’s subscription callback has a switch statement (hello!) so it can handle specific action types. It makes selective mutations to its internal (global singleton) state according to the instructions in the action, and raises its own change event – each store is an event source.

  • A view is a React component that subscribes to one (or maybe more) stores in the traditional way, i.e. as an event sink. Not all components have to do this. They speak of “controller-views” (two buzzwords for the price of one) that are specific React components that take care of subscribing and then use props to pass the information down their subtree in the standard React way.

So actions carry instructions to mutate data, and they make it as far as the store, where they cause data to be mutated. The last arrow is slightly different: it is not an action being passed. It’s just a change event, so it carries no data. To actually get the current data, the subscribing React component must call a public method of the store to which it is subscribing.
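A minimal, hypothetical sketch of those moving parts (the real Flux dispatcher does more; the names here are invented):

```typescript
// An action: a plain object tagged with a string type.
interface Action { type: string; text?: string; }

// The dispatcher: a global list of subscriber callbacks.
class Dispatcher {
    private callbacks: ((action: Action) => void)[] = [];
    register(callback: (action: Action) => void) { this.callbacks.push(callback); }
    dispatch(action: Action) { this.callbacks.forEach(cb => cb(action)); }
}

var dispatcher = new Dispatcher();

// A store: registers with the dispatcher, mutates internal state,
// and raises its own (payload-free) change event.
class MessageStore {
    private messages: string[] = [];
    private listeners: (() => void)[] = [];
    constructor() {
        dispatcher.register(action => {
            switch (action.type) {                    // the switch statement (hello!)
                case "RECEIVE_MESSAGE":
                    this.messages.push(action.text!);
                    this.listeners.forEach(l => l()); // change event: carries no data
                    break;
            }
        });
    }
    addChangeListener(listener: () => void) { this.listeners.push(listener); }
    getAll() { return this.messages; }                // views pull via public methods
}

var messageStore = new MessageStore();
messageStore.addChangeListener(
    () => console.log("now have " + messageStore.getAll().length + " message(s)"));

dispatcher.dispatch({ type: "RECEIVE_MESSAGE", text: "hello" });
```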

The point of all this, from the web site:

Stores have no direct setter methods like setAsRead(), but instead have only a single way of getting new data into their self-contained world — the callback they register with the dispatcher.

Why are stores banned from having their own setter methods? Because otherwise it would be possible to update their contents without also notifying all other stores that need to update in sync. That’s how the “derived data” problem is to be solved. If you only had one data store interested in various actions, there would be no point to any of this. (So please, if you’re thinking of blogging on this topic, remember to include in your example several stores that respond to the same actions so they can make corresponding updates to remain in sync.)

The examples

There are currently two examples in the github repository: TodoMvc and Chat.

TodoMvc has a problem as an example: it only has one store. This means it doesn’t actually have the problem that Flux is intended to solve. If there’s only one store, there’s no need for separate actions that go via a dispatcher to let multiple stores listen in. It could just have a store with ordinary methods that mutate the state of the store and fire the change event.

Chat has three stores, and is based on the Facebook chat scenario covered in the talk, so it’s got potential to be a lot more applicable and illuminating.

In Chat, the three stores are:

  • MessageStore – a flat list of all messages across all threads
  • ThreadStore – a list of threads, with only the last message in each thread
  • UnreadThreadStore – an integer: the number of unread messages across all threads

The last one is more than a little ironic: if you look closely at the video, they were originally responding to events, and so decrementing/incrementing the unread message count. But in the Flux Chat example, even though they’re demo-ing a framework that is based on events so it can do exactly that kind of minimal mutation of the existing data, instead they’ve written the example so it recomputes the count by looping through all the messages (at the moment it doesn’t even cache the count).

If you’re going to be recomputing from scratch like that (and why wouldn’t you?) then the strict action-dispatching approach of Flux is not actually going to be serving any purpose. It’s just ceremony. You could just have primary stores that store data; they’d have simple methods you could call to make them (a) mutate their data and (b) fire their own change event. Then you could have secondary stores that recompute their data in response to change events from other stores (both primary and secondary).

The UnreadThreadStore also gives us a demonstration of a dispatcher function called waitFor. This gives stores control over the order in which stores handle actions, essentially by telling the dispatcher to run specific stores’ action handlers synchronously for the current action. The reason a store will do this is because it wants to read data from those other stores, and it needs to do this after the other stores have updated their state.

It would make more sense for it to listen to the other store’s change event. A project called Reflux suggests doing exactly that.
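A hypothetical sketch of that alternative: the derived store knows only that the primary store changed, not which action caused the change.

```typescript
// Primary store: owns per-thread unread counts and fires a change event.
class ThreadStore {
    private unreadByThread: { [threadID: string]: number } = {};
    private listeners: (() => void)[] = [];
    addChangeListener(listener: () => void) { this.listeners.push(listener); }
    private changed() { this.listeners.forEach(l => l()); }
    receiveMessage(threadID: string) {
        this.unreadByThread[threadID] = (this.unreadByThread[threadID] || 0) + 1;
        this.changed();
    }
    clickThread(threadID: string) {
        this.unreadByThread[threadID] = 0;
        this.changed();
    }
    getUnreadByThread() { return this.unreadByThread; }
}

// Secondary store: recomputes its total whenever the primary store changes,
// with no knowledge of which actions caused the change.
class UnreadThreadStore {
    private total = 0;
    constructor(threads: ThreadStore) {
        threads.addChangeListener(() => {
            var byThread = threads.getUnreadByThread();
            this.total = Object.keys(byThread)
                .reduce((sum, id) => sum + byThread[id], 0);
        });
    }
    getTotal() { return this.total; }
}

var threads = new ThreadStore();
var unread = new UnreadThreadStore(threads);
threads.receiveMessage("t1");
threads.receiveMessage("t2");
threads.clickThread("t1");
console.log(unread.getTotal()); // 1
```

If ThreadStore later responds to a new action, UnreadThreadStore needs no changes at all: it already reacts to every change event.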

I’ve seen discussions where people claimed that stores listening to other stores’ change events, perhaps through several layers, is the kind of “tangled weave of data flow and unpredictable results” that Flux is trying to avoid. But it’s just not. A tree of chained event handlers is a fine example of clean composition. If the arrangement is immutable (once constructed, a listener cannot be switched to listen to a different event source) then infinite loops are impossible.

The mess Facebook originally experienced was not due to chained event handling, but due to disorganised fine-grained mutation of lots of different states in a single event handler.

Through the use of waitFor, Flux effectively does have stores listening to other stores’ change events. But it’s worse than that, because notice how in the UnreadThreadStore it has to listen for the two actions that it knows will cause the ThreadStore to change its data:

case ActionTypes.CLICK_THREAD:


If someone changes ThreadStore so it responds to a new action by mutating its state, that someone will have to check all the other stores to see whether they also need to respond to the new action, because these other stores may depend on the state of ThreadStore which is now changing more often. If the other stores just listened on ThreadStore’s change event this would not be necessary. It would be an internal detail of ThreadStore, and the rest of the system would be isolated from that detail. By following the Flux approach, you are encouraged to spread around knowledge of how every store responds to actions. The intention is to maximise “performance”, but to reiterate, React itself does it much more simply: if any state or props of a component changes in any way, the whole component’s virtual DOM is re-computed by a single render function.

And to be clear, there is only a purpose to any of this if you are doing fine-grained mutation of stored state in response to the same actions in multiple stores, which UnreadThreadStore clearly doesn’t do.

So enough about UnreadThreadStore. How about ThreadStore and MessageStore? Again, they are peculiar. If you run the Chat demo site, you’ll notice that there is no page where you can see all the messages regardless of the thread they are in. What you see is a list of all the threads, and the messages in the currently selected thread.

It’s strange therefore that MessageStore maintains a list of all messages across all threads. It’s even stranger that it then has a function that, on demand, filters that list to get a list of just the messages for one thread! Again, this is the right way to do it, but it makes a mockery of the action-dispatch approach.

So ThreadStore is our last hope, and it delivers! It is actually another store that responds to some of the same actions as MessageStore and mutates its own data. Hurrah! But again, something really weird has happened. There’s a function getAllChrono that recomputes, from scratch, every time it is called, a sorted list of all the threads. And that’s the function that the associated React component calls so it can display the list.

Let’s consider some simpler alternatives:

  • One store that stores all the messages in a list. When you want to know the list of threads, scan through all the messages and gather that information on the fly, ditto the count of unread messages. These can be methods on that store.

This would probably be fine, even with absurdly large numbers of messages. But it might be both clearer and more understandable (as well as “faster”, though not meaningfully) this way:

  • One store that stores threads, which each have a list of their own messages. When a new message arrives, find the thread object and add it to that thread’s list of messages. This means you can now update and keep cached aggregate information about a thread. When you need to recompute it, you can scan just the messages for that thread.

But there’s a problem with both of these approaches: they don’t demonstrate Flux at all! They can have ordinary methods that mutate the single “source of truth” data. No need for actions or dispatchers.

It is true that both Todo and Chat are contrived examples – in fact there is a comment to that effect on this issue. And so we can expect there to be some unrealistic usages; they wanted an example that was simple to follow but which exercised all the key APIs.

However, this does mean that the creators of Flux have yet to provide an example that needs Flux. And in the various 3rd party examples I’ve looked at the situation is typically worse, in that most don’t even have multiple stores.

What do I conclude from this? I’m openminded enough that I would still be interested to see an example that really does something that would genuinely be harder to accomplish (and evolve further) without action-dispatching. But I suspect it would be a very niche, unusual application.

Categories: Uncategorized

React-ions – Part 1: Mostly Great

March 14, 2015

The first of a two-part series about React:

I’d been planning to leave React well alone until it settled down a lot more. But over the last week I’ve started idly playing with it while travelling and waiting around, and getting more and more into it. It’s been dividing opinions for over a year now – but then, they let just anyone post on the Internet, so it’s full of idiotic opinions, right?

A work in progress

Turns out I’m not that late to the party. React’s version number starts with a zero, which under semantic versioning means “Anything may change at any time. The public API should not be considered stable.” The React team is taking full advantage of this early stage of development. They are not totally ignoring backward compatibility, but they are making trade-offs, e.g. if they can be backward compatible for code that uses JSX, then it’s okay if they break code that doesn’t use JSX. And yet JSX is supposed to be optional… But this is fine. Some parts of the API are necessarily more stable than others. They’re learning as they go, and one thing that’s gradually influencing them is the importance of stating (and controlling) which things need to be immutable. Every version seems to make an advance in that respect.

On the abandonment of external templates

As a heavy user of such templates, no complaints from me on this. In Angular and Knockout we add extra attributes to standard HTML, and the attributes are themselves a kind of embedded DSL in the HTML. The theory is that this means that the view or “presentation layer” is written in a high-level declarative language, so it can be maintained by a non-programmer. In practice this is unworkable. A template with bindings is fragile against modification by a non-programmer. You really have to know what you’re doing before you touch heavily template-ized HTML. It only appears clean and simple in the most unrealistic examples.

An external HTML template may appear superficially to be “separate” from the view model it binds to, but in reality it is intimately connected to it, having a one-to-one dependency between tags and bindings in the HTML and properties in the view model. And this means that the appearance of separation is unhelpful rather than helpful.

So this is all music to my ears. I’ve long thought that technology layers are overused as a way to carve up systems. Accordingly during my first experiments in large-scale JS app development, I rolled my own library that built very formulaic CRUD-like UIs out of what I called “schemas” (these were actually JS arrays). There was no HTML template in this system. Instead there were “types of control”, such as integer, date-time, etc. and you composed them to make a “record editor” that was self-persisting to JSON. It was crude but adequate. I liked that it lent itself to modularity, and let me add new whole capabilities in one vertical slice that cut across several technology layers.

Shortly after that I got enamoured of Knockout, which emphasised having a separate view (HTML+bindings) and view model (JS+observables). But I rapidly realised that what I very often wanted was a way to build UIs out of components, so I wrote my own custom bindings to achieve this, based again around the idea of a “control”, which is a view model with a built-in HTML template. Knockout 3.2 has since added its own support for components. However, it encourages you to register components into a global namespace so they can be referred to by name in HTML templates. This cuts across any module system you’re using to organise your code; your whole app is one big namespace at the component level.

React components don’t have this problem. Everything is JS, and so it can build on JS scoping and modularity. There is no global behind-the-scenes module-ignorant namespace of registered plugins. In your render function you may refer to another component by name, but it’s just the name of a JS variable that has to be in scope, e.g. imported from another module via require.

Seriously, I’m all over this like a weird rash.

Static typing

Types are taking over JS, kick-started by TypeScript, which is growing rapidly in both user base and features, is already solidly mature and effective, and is prompting further research efforts such as Facebook’s Flow and Google’s SoundScript.

This is another area in which React has an advantage by doing everything in JS and not breaking out into external HTML templates. Checking static types inside the binding attributes in an HTML template requires compile-time understanding of how all the kinds of attribute work – not to mention special tooling to get design-time feedback and auto-completion in the editor. None of this is a problem for React.

Well, almost. The problem is there’s this strange thing called JSX.


Facebook’s own flavour of TypeScript, Flow (also not really ready for production), has built-in support for React’s JSX syntax (also from Facebook). What a fortunate coincidence! I think this is what they call synergy.

There have also been a couple of efforts to graft JSX support into a fork of TypeScript. But is this even necessary?

I find JSX to be a mere gimmick and distraction, with no discernible value. Indeed its existence may harm rather than help React adoption, because it’s so egregiously unjustifiable. Its only purpose is to look eye-catching in code snippets, providing a visual motif for people to mistake for the essence of React.

The story goes like this:

render() {
    return <div className="foo"></div>;
}

generates (at the moment, anyway):

render() {
    return React.createElement("div", { className: "foo" });
}

So at first glance JSX appears to be achieving significant boilerplate reduction. The React docs point us to a built-in shorthand for non-JSX users:

render() {
    return React.DOM.div({ className: "foo" });
}

Better, though still not that short. But if we’re going to be using div and span a lot, we could just import them into our namespace:

var div = React.DOM.div,
    span = React.DOM.span;

Now the “verbose” version is:

render() {
    return div({ className: "foo" });
}

i.e. not at all verbose, almost the same length as the JSX version, with the advantage of being just plain JS.

In any case, these simple examples are misleading. In a realistic example of a component that actually does something useful there will be conditional elements (shown or not depending on this.state) and repeated elements using Array#map, etc. These parts have to be written in JS, and it’s a sensible React principle that there’s no point inventing a second syntax for them.

So often at least half the code in render is not expressible in JSX anyway. I find that staying in one perfectly adequate syntax is actually more helpful than switching back and forth.
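To make that concrete, here’s a sketch of a more realistic render in plain JS/TS – the el factory below is a hypothetical stand-in for React’s element factories, just building description objects:

```typescript
// `el` is a stand-in factory (hypothetical, not React) that builds a plain
// description object, the way React.DOM.div etc. build elements.
type VNode = { tag: string; props: {}; children: (VNode | string)[] };
const el = (tag: string, props: {}, ...children: (VNode | string)[]): VNode =>
    ({ tag, props, children });

function renderList(items: string[], showHeader: boolean): VNode {
    return el("div", {},
        // conditional element: present or absent depending on a flag
        ...(showHeader ? [el("h1", {}, "Items")] : []),
        // repeated elements: just Array#map, no second syntax needed
        el("ul", {}, ...items.map(item => el("li", {}, item)))
    );
}

console.log(renderList(["a", "b"], true).children.length); // 2
```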

And as for succinctness, when you’re rendering to DOM elements it’s quite common to need to throw in some purely structural wrappers that only have a class attribute, which in React has to be written as className. So what if you used factory functions that could optionally take a string and expand it into an object with a className property?

render() {
    return div("foo");
}

Uh-oh. Way shorter than the JSX version!

So I threw together a library to make this effortless, but as I was using a rough cut of it and finding it super convenient, I naturally wondered: given how handy this is, why doesn’t the React library itself support passing a string instead of a properties object?

I submitted a pull-request to do just that, but they turned it down. I admire their desire to not absorb into the core things that can be added externally, which is a great general principle to adhere to. But I wouldn’t have applied that principle in this case; the simple sweetness of the string-as-className shortcut is undeniable; so much so that now I’ve thought of it, it feels like an accidental omission that the core library doesn’t already support it.
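For the record, the shortcut is trivial to build externally. Here’s a sketch of the idea (hypothetical names, not my actual library): a wrapper around a createElement-style factory that expands a bare string into a className property:

```typescript
type Props = { [key: string]: unknown };
type Factory = (tag: string, props: Props) => unknown;

// Wrap a createElement-style factory so a bare string becomes { className }.
function withClassShorthand(create: Factory, tag: string) {
    return (props?: string | Props) =>
        create(tag, typeof props === "string" ? { className: props } : (props || {}));
}

// Stand-in for React.createElement that just records its arguments:
const createElement: Factory = (tag, props) => ({ tag, props });
const div = withClassShorthand(createElement, "div");

console.log(div("foo"));          // props becomes { className: "foo" }
console.log(div({ id: "bar" }));  // plain objects pass through unchanged
```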

It’s clear that React would be technically stronger without JSX, but it may be weaker from a marketing perspective. JSX is something concrete and weird-looking that people can focus on as the Chemical X in React, even though that is fundamentally misleading. So there’s the classic marketing-vs.-reality tension.

Events, Observables, Dirty checking etc.

A view has to update itself when the data in the view model changes. There are broadly two ways to do this:

  1. Dirty checking
  2. Observables

Angular uses dirty checking: it keeps a snapshot of the model data. After various events likely to coincide with data changes (e.g. button clicks), Angular compares the model data with the snapshot to find out what has changed.

Pretty much everything else uses observables. An observable is the combination of a value and a change event that fires when the value changes. Obviously you have to call a setter function to set the value, so that the change event can be fired. What if the value is a complex object and you tweak a value inside it? That’s no good – you’re bypassing the mechanism that fires the change event. So a good principle to abide by is to only store immutable objects in observables. The whole observable can be mutated, but only by completely replacing its whole value via the setter function.
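A minimal observable along those lines can be sketched in a few lines (illustrative only, not any particular library’s implementation):

```typescript
// An observable: a value plus a change event, updated only by replacing
// the whole value through the setter.
class Observable<T> {
    private subscribers: Array<(value: T) => void> = [];
    constructor(private value: T) {}

    get(): T {
        return this.value;
    }
    set(next: T): void {
        this.value = next;                             // replace the whole value...
        for (const fn of this.subscribers) fn(next);   // ...then fire the change event
    }
    subscribe(fn: (value: T) => void): void {
        this.subscribers.push(fn);
    }
}

const label = new Observable("Ada");
label.subscribe(v => console.log("changed to", v));
label.set("Grace"); // logs: changed to Grace
```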

React is interesting because it sort of uses both these ideas, in very limited ways.

On the one hand, it does dirty checking, but not on plain model data; it holds a snapshot of a description of what the state of the DOM should be. This is a fantastic simplification compared with Angular, because React can make minimal updates to the DOM based on a fixed set of rules.
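As a toy illustration (nothing like React’s actual reconciliation algorithm), diffing two such descriptions against a fixed set of rules might look like this:

```typescript
type Desc = { tag: string; props: { [key: string]: string } };

// Compare the previous snapshot with the new description and emit the
// minimal property updates (or a full replacement if the tag changed).
function diffNode(prev: Desc, next: Desc): string[] {
    if (prev.tag !== next.tag) return ["replace node"];
    const patches: string[] = [];
    for (const key of Object.keys(next.props)) {
        if (prev.props[key] !== next.props[key]) {
            patches.push("set " + key + "=" + next.props[key]);
        }
    }
    return patches;
}

console.log(diffNode(
    { tag: "div", props: { className: "foo" } },
    { tag: "div", props: { className: "bar" } }
)); // one minimal patch: "set className=bar"
```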

And on the other hand, every component has an associated observable called its state. We know it’s an observable because we have to call setState to change it, and the documentation warns us not to mutate it any other way. However, there’s no public API to subscribe to a change event; the component itself is the only thing that directly subscribes to it.

There are small weaknesses in the React component API. The central one is that the current state is a public property of the component class, so the fact that you’re not supposed to modify it directly is not self-documenting: there’s a setState but no getState.

And maybe there shouldn’t be either of them. According to the docs there are situations where you aren’t allowed to update the state. So it might be better for each of the component methods to accept parameters providing the current state and – where applicable – a function to update the state. This would make it self-documenting w.r.t. which operations are allowed during a given method.
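A sketch of what that might look like (purely hypothetical – this is not React’s API): each method receives the current state, and an update function only where updating is legal:

```typescript
type CounterState = { count: number };

// render may read the state but gets no way to change it:
function render(state: Readonly<CounterState>): string {
    return "count: " + state.count;
}

// event handlers additionally receive a setState-like function:
function onClick(
    state: Readonly<CounterState>,
    setState: (next: CounterState) => void
): void {
    setState({ count: state.count + 1 });
}

// a trivial driver standing in for the component machinery:
let current: CounterState = { count: 0 };
onClick(current, next => { current = next; });
console.log(render(current)); // prints "count: 1"
```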

Tune in next time, wherein I confront the mysteries of Flux! What is it, really? And more to the point, what should it be?

Categories: Uncategorized Tags:

TypeScript: Physical Code Organisation

February 21, 2015 3 comments

When I first started reading about TypeScript, I had one main concern: how am I going to make this work with the weird mix of modular code and old-school JS libraries in my existing codebase?

The features of the language itself are very well covered. I found various great introductions (and now there’s the awesome official Handbook), but they all seemed to gloss over certain fundamental questions I had about the physical organisation of code. So I’m going to go through the basics here and try to answer those questions as I go.

Modularity in the Browser

Let’s get the controversial opinionated bit out of the way. (Spoiler: turns out my opinions on this are irrelevant to TS!)

How should you physically transport your JS into your user’s browser? There are those who suggest you should asynchronously load individual module files on the fly. I am not one of them. Stitch your files together into one big chunk, minify it, let the web server gzip it, let the browser cache it. This means it gets onto the user’s machine in a single request, typically once, like any other binary resource.

The exception would be during the development process: the edit-refresh-debug cycle. Clearly it shouldn’t be minified here. Nor should it be cached by the browser (load the latest version, ya varmint!) And ideally it shouldn’t be one big file, though that’s not as much of an issue as it was a few years ago (even as late as version 9, IE used to crash if you tried to debug large files, and Chrome would get confused about breakpoints).

But I’ve found it pretty straightforward to put a conditional flag in my applications, a ?DEBUG mode, which controls how it serves up the source. In production it’s the fast, small version. In ?DEBUG, it’s the convenient version (separate files).

In neither situation does it need to be anything other than CommonJS. For about four years now I’ve been using CommonJS-style require/exports as my module API in the browser, and it’s the smoothest, best-of-all-worlds experience I could wish for.

So what’s the point of AMD? Apparently “… debugging multiple files that are concatenated into one file [has] practical weaknesses. Those weaknesses may be addressed in browser tooling some day…” In my house they were addressed in the browser in about 2011.

But anyway… deep breath, calms down… it turns out that TypeScript doesn’t care how you do this. It turns us all into Lilliputians arguing over which way up a boiled egg must be eaten.

The kinds of file in TypeScript

In TS, modules and physical files are not necessarily the same thing. If you want to work that way, you can. You can mix and match. So however you ended up with your codebase, TS can probably handle it.

If a TS file just contains things like:

var x = 5;
function f() {
    return x;
}

Then the compiler will output the same thing (exactly the same thing, in that example). You can start to make it modular (in a sense) without splitting into multiple files:

module MyStuff {
    var x = 5;
    export function f() {
        return x;
    }
}

var y = MyStuff.f();
var y = MyStuff.f();

That makes an object called MyStuff (or extends an existing one) with one property, f, because I prefixed it with export. Modules can nest. So, just as in plain JavaScript, there’s one big global namespace that your source contributes properties to, and you achieve modularity by using objects to contain related things.
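For the curious, the compiler turns the module above into roughly this JavaScript (a sketch – exact output varies by compiler version; the type annotations are added only so it also compiles as TS):

```typescript
var MyStuff: any;
// The module becomes an IIFE that attaches exported members as properties
// of a shared object:
(function (MyStuff: any) {
    var x = 5;
    function f() {
        return x;
    }
    MyStuff.f = f;
})(MyStuff || (MyStuff = {}));

var y = MyStuff.f(); // 5
```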

You can at this point roll your own pattern: write lots of separate files in the above style, each file being responsible for wrapping its code in a named module, then pass them to the TS compiler and stitch the result into one file.

Now try using export at the top level in your file:

var x = 5;
export function f() {
    return x;
}

The compiler will complain that you haven’t told it what module system you want to use. You tell it with the flag --module commonjs (or --module amd if you’re crazy). Now it works and does exactly what you’d expect as a user of your chosen module system.

But what does this mean in terms of the static type system of TS and so on? It means that this particular file no longer contributes any properties to the global namespace. By just using the export prefix at the top level, you converted it into what TS calls an external module.

In order to make use of it from another module, you need to require it:

import myModule = require("super-modules/my-module");

(Subsequent versions of TS will add more flexible ways to write this, based on ES6.)

Nagging question that can’t be glossed over: What happens to the string "super-modules/my-module"? How is it interpreted? In the output JS it’s easy: it is just kept exactly as it is. So your module system better understand it. But the compiler also wants to find a TS file at compile time, to provide type information for the myModule variable.

Suppose the importing module is saved under the directory:

somewhere/awesome-code/not-so-much/domestic

The compiler will try these paths, in this order, until one exists:

  • somewhere/awesome-code/not-so-much/domestic/super-modules/my-module.ts
  • somewhere/awesome-code/not-so-much/super-modules/my-module.ts
  • somewhere/awesome-code/super-modules/my-module.ts
  • somewhere/super-modules/my-module.ts

i.e. it searches up the tree until it runs out of parent directories. (It will also accept a file with the extension .d.ts, or it can be “tricked” into not searching at all, but we’ll get to that later).

This is a little different to node’s take on CommonJS, where you’d only get that behaviour if your import path started with ./ – otherwise it inserts node_modules in the middle. But this doesn’t matter, as we’ll see.
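The search itself can be pictured as a simple walk up the directory tree (illustrative only – not the compiler’s real implementation):

```typescript
// Given the importer's directory and the import string, produce candidate
// .ts paths in the order the compiler would try them, walking up the tree.
function candidatePaths(importerDir: string, moduleName: string): string[] {
    const results: string[] = [];
    let parts = importerDir.split("/");
    while (parts.length > 0) {
        results.push(parts.join("/") + "/" + moduleName + ".ts");
        parts = parts.slice(0, -1); // drop the deepest directory and retry
    }
    return results;
}

console.log(candidatePaths(
    "somewhere/awesome-code/not-so-much/domestic",
    "super-modules/my-module"
));
```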

One advantage of external modules over the first pattern we tried is that it avoids name clashes. Every module decides what name it will use to “mount” modules into its own namespace. Also note that by importing an external module in this way, your module also becomes external. Nothing you declare globally will actually end up as properties of the global object (e.g. window) any more.

So we have two kinds of file: external modules, and what I’m going to call plain files. The latter just pollute the global namespace with whatever you define in them. The compiler classifies all files as plain files unless they make use of import or export at the top level.

How do you call JavaScript from TypeScript?

No need to explain why this is an important question, I guess. The first thing to note is that widely-used JS libraries are packaged in various ways, many of them having longer histories than any popular JS module systems.

What if you’re dealing with something like jQuery and in your own JS you’ve been blithely assuming that $ exists globally? What you’re wishing for is that someone would rewrite jQuery as a plain TS file that says something like:

function $(selector: any) {
    // Um...
}

No use of export, see? It’s a little trickier than that in reality because $ is not just a function; it has properties of its own. Don’t worry – TS has ways to declare that.
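For instance, one way TS describes a callable object with its own properties is a call signature inside an interface. Here’s a hand-rolled sketch (nothing like the full jQuery typings), with a stand-in implementation so the shape can actually be exercised:

```typescript
interface DollarLike {
    (selector: string): string[];   // callable like a function...
    version: string;                // ...but also carrying properties
}

// In a real .d.ts you'd just write `declare var $: DollarLike;` with no
// implementation. This stand-in exists only to show the shape in action:
const $ = ((selector: string) => [selector]) as DollarLike;
$.version = "1.0";

console.log($("div"), $.version);
```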

Of course, no one can be bothered to rewrite jQuery in TS and fortunately they don’t have to. TypeScript supports ambient declarations, which are prefixed with the keyword declare like this:

declare var x: number;
declare function f(): number; 

These tell the compiler that somehow arrangements will be made such that the global namespace has properties x and f with those particular shapes. Just trust that they’ll be there, Mr Compiler, and don’t ask any questions. In fact the compiler won’t generate any output code for ambient declarations. (If you’re familiar with the old world of C, think header files, prototypes and extern).

Note that I don’t initialise x or provide a body for f, which would not be allowed; as a result the compiler cannot infer their types. To make the declarations be worth a damn, I specify the type number where necessary.

Finally, you can make sure that a file contains only ambient declarations by naming it with the extension .d.ts. That way, you can tell at a glance whether a file emits code. Your linking process (whatever it is) never needs to know about these declaration files. (Again, by analogy to C, these are header files, except the compiler bans them from defining anything. They can only declare.)

(In case you’re panicking at this point, it isn’t necessary to write your own declarations for jQuery, or for many other libraries (whether in the browser or Node). See DefinitelyTyped for tons of already-written ones.)

What if third party code does use a module system such as CommonJS? For example, if you’re using TS in Node and you want to say:

import path = require("path");

You have a couple of options. The first, and least popular as far as I can tell, is to have a file called path.d.ts that you put somewhere so it can be found by the compiler’s searching algorithm. Inside that file you’d have declarations such as:

export declare function join(...path: string[]): string;

The other option is that you have a file called path.d.ts that you put anywhere you like, as long as you give it to the TS compiler to read. In terms of modules it will be a plain file, not an external module. So it can declare anything you want. But somewhere in it, you write a peculiar module declaration:

declare module "path" {
    export function join(...path: string[]): string;
}

Note how the module name is given as a quoted string. This tells the compiler: if anyone tries to import "path", use this module as the imported type structure. It effectively overrides the searching algorithm. This is by far the most popular approach.

Reference comments

In some TS code you’ll see comments at the top of the file like this:

///<reference path="something/blah.d.ts" />

This simply tells the compiler to add that file (specified relative to the containing directory of the current file) to the set of files it is compiling. It’s like a crummy substitute for project files. In some near-future version of TS the compiler will look for a tsconfig.json in the current directory, which will act as a true project file (the superb TypeStrong plugin for the Atom editor already reads and writes the proposed format).

In Visual Studio projects, just adding a .ts file to a project is sufficient to get the compiler to read it. The only reason nowadays to use reference comments is to impose an order in which declarations are read by the compiler, as TypeScript’s approach to overloading depends on the order in which declarations appear.

DefinitelyTyped and tsd

If you install node and then (with appropriate permissions) say:

npm install -g tsd

You’ll get a command-line tool that will find, and optionally download, type definition files for you. e.g.

tsd query knockout

Or if you actually want to download it:

tsd query knockout --action install

This will just write a single file at typings/knockout/knockout.d.ts relative to the current directory. You can also add the option --save:

tsd query knockout --action install --save

That will make it save a file called tsd.json recording the precise versions of what you’ve downloaded. They’re all coming from the same github repository, so they are versioned by changeset.


I uhmm-ed and ahhh-ed for a while trying to decide what approach to take with my existing JS code. Should I write type declarations and only write brand new code in TS? Should I convert the most “actively developed” existing JS into TS?

The apparent dilemma stems from the way that .d.ts files let you describe a module without rewriting it, and “rewriting” sounds risky.

But it turned out, in my experience, that this is a false dilemma. The “rewriting” necessary to make a JS file into a TS file is

  1. Not that risky, as most of the actual code flow is completely unmodified. You’re mostly just declaring interfaces, and adding types to the variable names wherever they’re introduced.
  2. Phenomenally, indescribably worth the effort. By putting the types right in the code, the TS compiler helps you ensure that everything is consistent. Contrast this with the external .d.ts which the compiler has to trust is an accurate description. A .d.ts is like a promise from a politician.

In the end, I decided that the maximum benefit would come from rewriting two kinds of existing JS:

  • Anything where we have a lot of churn.
  • Anything quite fundamental that lots of other modules depend on, even if it’s not churning all that much.

You may come to a different conclusion, but this is working out great for me so far. Now when someone on the team has to write something new, they do it in TS and they have plenty of existing code in TS to act as their ecosystem.

I think that’s everything. What have I missed?

Categories: Uncategorized Tags: