To be continued…

March 10, 2016

I’ve finally lost all patience with WordPress!

For future episodes, see:

Go to http://earwicker.dtdns.net


(or if that doesn’t work, try Google I guess…)


doop: Immutable classes for TypeScript

March 7, 2016

Update 2016-03-07 In version 0.9.2 I’ve updated the name of the decorator function to begin with lowercase, to fit with the convention (of course you can rename it to whatever you prefer when you import).

As great as Immutable.js is, especially with a TypeScript declaration included in the package, the Record class leaves me a little disappointed.

In an ordinary class with public properties we’re used to being able to say:

const a = new Animal();
a.hasTail = true;
a.legs = 2;

const tailPrefix = a.hasTail ? "a" : "no";
const description = `Has ${a.legs} legs and ${tailPrefix} tail.`;

That is, each property is a single named feature that can be used to set and get its value. But immutability makes things a little more complicated, because rather than changing a property of an object, we instead create a whole new object that has the same values on all its properties except for the one we want to change. It’s just a convenient version of “clone and update”. This is how it has to be with immutable data. You can’t change an object, but you can easily make a new object that is modified to your requirements.

Why is this hard to achieve in a statically typed way? This thread gives a nice quick background. In a nutshell, you use TypeScript because you want to statically declare the structure of your data. Immutable.js provides a class called Record that lets you define class-like data types, but at runtime rather than compile time. You can overlay TypeScript interface declarations onto them at compile time, but it’s a bit messy. Inheritance is troublesome, and there’s a stubborn set method that takes the property name as a string, so there’s nothing stopping you at compile-time from specifying the wrong property name or the wrong type of value.

The most complex suggestion in that thread is to use code generation to automatically generate a complete statically typed immutable class, from a simpler declaration in a TS-like syntax. This is certainly an option, but seems like a defeat for something so fundamental to programming as declaring the data structures we’re going to use in memory.

Really this kind of class declaration should be second nature. If we’re going to adopt immutable data as an approach, we’re going to be flinging these things around like there’s no tomorrow.

So I wanted to see if something simpler could be done using the built-in metaprogramming capabilities in TypeScript, namely decorators. And it can! And it’s not as ugly as it might be! And there’s a nice hack hiding under it!

How it looks

This is how to declare an immutable class with some properties and one method that reads the properties.

import { doop } from "../doop";

@doop
class Animal {

    @doop
    get hasTail() { return doop<boolean, this>(); }

    @doop
    get legs() { return doop<number, this>(); }

    @doop
    get food() { return doop<string, this>(); }

    constructor() {
        this.hasTail(true).legs(2);
    }

    describe() {
        const tail = this.hasTail() ? "a" : "no";
        return `Has ${this.legs()} legs, ${tail} tail and likes to eat ${this.food()}.`;
    }
}

The library doop exposes a single named feature, doop, and you can see it being used in three ways in the above snippet:

  • As a class decorator, right above the Animal class: this allows it to “finish off” the class definition when the code is loaded into the JS engine.
  • As a property decorator, above each property: this inserts a function that implements both get and set functionality.
  • As a helper function, called inside each property getter.

Although not visible in that snippet, there’s also a generic interface, Doop, returned by the helper function, and hence supported by each property:

interface Doop<V, O> {
    (): V;
    (newValue: V): O;
}

That’s a function object with two overloads. So to get the value of a property (as you can see happening in the describe method) you call it like a function with no arguments:

if (a.hasTail()) { ...

It’s a little annoying that you can’t just say:

if (a.hasTail) { ...

But that would rule out being able to “set” (make a modified clone) through the same named feature on the object. If the type of hasTail were boolean, we’d be stuck.

There’s a particular pattern you follow to create a property in an immutable class. You have to define it as a getter function (using the get prefix), and return the result of calling doop as a helper function, which is where you get to specify the type of the property. Note: you only need to define a getter; doop provides getting and pseudo-setting (i.e. cloning) via the same property, with static type checking.

See how the constructor is able to call its properties to supply them with initial values. This looks a lot like mutation, doesn’t it? Well, it is. But it’s okay because we’re in the constructor. doop won’t let this happen on properties of a class that has finished being constructed and therefore is in danger of being seen to change (NB. you can leak a reference to your unfinished object out of your constructor by passing this as an argument to some outside function… so don’t do that).

And in the describe method (which is just here as an example, not part of any mandatory pattern) you can see how we retrieve the values by calling properties as if they were methods, this time passing no parameters.

But what’s not demonstrated in this example is “setting” a value in an already-constructed object. It looks like this:

const a = new Animal();
expect(a.legs()).toEqual(2); // jasmine spec-style assertion

// create a modified clone
const b = a.legs(4);
expect(b.legs()).toEqual(4);

// original object is unaffected
expect(a.legs()).toEqual(2);

Inheritance is supported; a derived class can add more properties, and in its constructor (after calling super()) it can mutate the base class’s properties. The runtime performance of a derived class should be identical to that of an equivalent class that declares all the properties itself.
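For illustration, a derived class might look something like this (a sketch following the pattern above; Dog and barkVolume are invented for this example):

@doop
class Dog extends Animal {

    @doop
    get barkVolume() { return doop<number, this>(); }

    constructor() {
        super();
        // Construction hasn't finished yet, so mutation is still allowed,
        // including on properties declared by the base class:
        this.legs(4).food("biscuits").barkVolume(11);
    }
}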

One thing to be wary of is adding ordinary instance properties to a doop class. It would be difficult to effectively block this happening, and in any case there may occasionally be a good reason to do it, as long as you understand one basic limitation of it: ordinary instance properties belong to an instance. When you call a property to set its value, you are returned a new instance, and there is no magic that automatically copies or initialises any instance fields. Only the other doop properties will have the same values as in the original instance. Any plain instance fields in the new instance will have the value undefined.

For simplicity’s sake, just make sure in a doop class that all data is stored in doop properties.

Implementation

The implementation of the cloning is basically the one described here, so it’s super-fast.

I mentioned there’s a hack involved, and it’s this: I needed a way to generate, from a single declaration in the library user’s code, something that can perform two completely different operations: a simple get and a pseudo-set that returns a new instance. That means I need each property to be an object with two functions. But if I do that literally, then a get would look like this:

// A bit ugly
const v = a.legs.get();
const a2 = a.legs.set(4);

I don’t like the verbosity, for starters. But there’s a worse problem caused by legs being an extra object in the middle. Think about how this works in JS. Inside the get function this would point to legs, which is just some helper object stored in a property defined on the prototype used by all instances of the Animal class. It’s not associated with an instance. It doesn’t know what instance we’re trying to get a value from. I could fix this by creating a duplicate legs object as an instance property on every Animal instance, and then giving it a back-reference to the owning Animal, but that would entirely defeat the whole point of the really fast implementation, which uses a secret array so it can be rapidly cloned, whereas copying object properties is very much slower.

Or I could make legs, as a property getter, allocate a new middle object on the fly and pass through the this reference. So every time you so much as looked at a property, you’d be allocating an object that needs to be garbage collected. Modern GCs are amazing, but still, let’s not invent work for them.

So what if instead of properties, I made the user declare a function with two overloads for getting and setting? That solves the this problem, but greatly increases the boilerplate code overhead. The user would actually have to write two declarations for the overloads (stating the “property” type twice) and a third for the implementation:

// Ugh
@doop
legs(): number;
legs(v: number): this;
legs(v?: number): any { }

The function body would be empty because the doop decorator replaces it with a working version. But it’s just a big splurge of noise so it’s not good enough. And yet it’s the best usage syntax available. Ho hum.

Lateral thinking to the rescue: in TypeScript we can declare the interface to a function with two overloads. Here it is again:

export interface Doop<V, O> {
    (): V;
    (newValue: V): O;
}

Note that O is the type of the object that owns the property, as that’s what the “setter” overload has to return a new instance of.

Using a getter in the actual doop library looks like this:

const l: number = a.legs();

There are at least two possible interpretations of a.legs():

  • legs is a function that returns the number we want.
  • legs is a property backed by a getter function, that returns a function with at least one overload (): number, which when called returns the number we want.

To explain the second one more carefully: the part that says a.legs will actually call the getter function, which returns a second function, so a.legs() would actually make two calls. The returned function would need to be created on-the-fly so it has access to the relevant this, so this is very much like the GC-heavy option I described earlier.

But it’s not possible to tell which it is from the syntax. And that’s quite good. Because if we tell the TypeScript compiler that we’re declaring a getter function that returns a function, it will generate JavaScript such as a.legs(). But at runtime, we can use the simple implementation where legs is just a function. The doop decorator can make that switcheroo, and we get the best of both worlds: a one-liner declaration of a property getter, and a minimal overhead implementation.

Well, it seemed nifty to me when I realised it would work!

So this is what the doop property decorator does: the user has declared a property, and all we care about is its name. All properties are the same at runtime: just a function that can be called to either get or clone-mutate.
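Conceptually, the switcheroo amounts to something like this (a simplified sketch, not the actual doop source; the $values array and slot index bookkeeping are invented for illustration):

// The property decorator replaces the user's declared getter with a plain
// function stored on the prototype under the same name:
function makeProperty(index: number) {
    return function (newValue?: any): any {
        const self = this as any;
        if (arguments.length === 0) {
            return self.$values[index];            // get
        }
        // Pseudo-set: clone the secret array and return a new instance
        const clone = Object.create(Object.getPrototypeOf(self));
        clone.$values = self.$values.slice();      // fast array copy
        clone.$values[index] = newValue;
        return clone;
    };
}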

doop on GitHub

Is TypeScript really a superset of JavaScript? And does it even matter?

July 11, 2015

Questions:

  • What does it mean for a programming language to be a superset of another programming language?
  • What’s a programming language?
  • What’s a program?

In this discussion, a program, regardless of language, is a stream of characters.

If you generated a random stream of characters, it might be a valid program in some hypothetical language, just as the arrangement of stars in the night sky as viewed from Earth might happen to spell out an insulting message in some alien language we’ll never know about.

So a programming language is both:

  • the rules for deciding whether a given stream of characters is a valid program, from that language’s point of view, and,
  • the set of valid programs, because they are streams of characters that conform to those rules.

It’s the second (slightly surprising) formulation we’re interested in here, because it means that when we say “language A is a superset of language B”, we mean that A and B are sets of programs, and set A includes all the programs in set B. This is useful information, because it means all the programs we wrote in language B can immediately be used in language A, without us needing to change them.

People get very muddled about this, because they think of the programming language as a set of rules instead of a set of programs, and therefore assume that a superset would include all the rules of the subset language, plus some extra rules. This could make it stricter, rejecting some previously valid programs, or it could make it looser, allowing new syntactic forms. So without knowing the details of the extra rules in question, we wouldn’t know what’s happened.

So the “set of rules” sense is far less useful than the “set of programs” sense, which does actually tell us something about the compatibility between the languages.

The most common statement in introductions and tutorials about TypeScript is that it is a superset of JavaScript. Really? Here’s a valid JavaScript program:

var x = 5;
x = "hello";

Rename it to .ts and compile it with tsc and you’ll get an error message:

Type 'string' is not assignable to type 'number'.

We can fix it though:

var x: any = 5;
x = "hello";

We’ve stopped the compiler from inferring that x is specifically a number variable just because that’s what we initialised it with. Plain JavaScript can be retro-imagined as a version of TypeScript that assumes every variable is of type any.

In any case, one example is sufficient to show that TypeScript is not a superset of JavaScript in the more useful “set of valid programs” sense, and it seems we’ve found one. Except it’s a bit murkier than that.

If you looked in the folder containing your source file right after you tried to compile the “broken” version, you would have found an output .js file that the TypeScript compiler had generated quite happily.

TypeScript makes your source jump over two hurdles:

  1. Is it good enough to produce JavaScript output?
  2. Does it pass type checks?

If your source clears the first hurdle, you get a runnable JavaScript program as output even if it doesn’t clear the second hurdle. This quirk allows TypeScript to claim to be a superset of JavaScript in the set-of-programs sense.

But I’m not sure it counts for much. Is anyone seriously going to release a product or publish a site when it has type errors in compilation? They wouldn’t be getting any value from TypeScript (over any ES6 transpiler such as Babel). The compiler has a couple of switches that really should be enabled in any serious project:

  • --noEmitOnError – require both hurdles to be cleared (the word “error” here refers to type errors).
  • --noImplicitAny – when type inference can’t deduce something more specific than any, halt the compilation.
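As a quick illustration of the second switch, this compiles happily by default but is rejected under --noImplicitAny:

// Rejected with --noImplicitAny:
// error: Parameter 's' implicitly has an 'any' type.
function shout(s) {
    return s.toUpperCase() + "!";
}

// Fine: the parameter type is stated explicitly.
function shoutTyped(s: string) {
    return s.toUpperCase() + "!";
}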

If you’re going to use TypeScript (hint: you are) then use it properly.

And in that case, it is not a superset of JavaScript. And this is a Good Thing. The whole point is that existing JavaScript programs, due to the language’s dynamically typed looseness, very often contain mistakes that would be trapped by TypeScript. The example above, where a variable is reused for different types, might be a mistake but might not (in performance terms, it’s probably a mistake in that it stops modern JS runtimes from optimising code that accesses the variable).

When we want to use JavaScript, unchanged, as part of a TypeScript project, we just leave it as JavaScript and wrap it in d.ts declarations. It’s no big deal. You only change the extension to .ts because you want it to be more rigorously checked, so you know that the types make sense all the way down into the nitty gritty.

The parallel with C++’s relationship to C is striking. In C (specifically, ANSI C prior to the 1999 standard) it was not necessary to declare a function before you called it. In C++ it became mandatory. This – amongst other differences – meant that C++ was never a superset of C.

But this didn’t matter too much, because C++ was a superset of well-written C – to the extent that every C example in the 2nd edition of K&R was valid C++.

So TypeScript, in any meaningful sense, is not a superset of JavaScript, but this is nothing to get hung up over; if it were a superset of JavaScript, it would be considerably less useful.


TypeScript 1.5: Get the decorators in…

April 2, 2015

Update 2016-03-07: My new library, Doop, is a more practical demonstration of what you can do with decorators.

No great mysteries about what this class does:

class C {
    foo(n: number) {
        return n * 2;
    }
}

It’s a stupid example with a method that returns its only parameter multiplied by two. So this prints 46:

var c = new C();
console.log(c.foo(23));

A feature coming in TS 1.5 that has really caught my eye is decorators. We can decorate the foo method:

class C {
    @log
    foo(n: number) {
        return n * 2;
    }
}

That on its own will not compile, because we need to define log. Here’s my first try:

function log(target: any, key: string, value: any) {
    return {
        value: function (...args: any[]) {

            var a = args.map(a => JSON.stringify(a)).join();
            var result = value.value.apply(this, args);
            var r = JSON.stringify(result);

            console.log(`Call: ${key}(${a}) => ${r}`);

            return result;
        }
    };
}

Note: previously I used arrow function syntax to declare value, but as pointed out in this answer on Stack Overflow, that interferes with the value of this, which ought to be passed straight through unchanged.

The three parameters, target, key and value, are passed in by the helper code generated by the TypeScript compiler. In this case:

  • target will be the prototype of C,
  • key will be the name of the method ("foo"),
  • value will be a property descriptor, which here will have a value property that is the function that doubles its input.

So my log function returns a new property descriptor that the TypeScript compiler will use as the new definition of foo. This means I can wrap the real function in one that logs information to the console, so here I log the arguments and the return value.

So when the resulting program runs in node, it prints two lines instead of just one:

Call: foo(23) => 46
46

This is really quite neat. The next thing I think is: can I decorate a simple value so that it turns into a property that is an observable that can be listened to for change events? I’m going to find out.

Acquisition Machines

March 29, 2015

This is off-topic from my usual technology geek focus, but it was prompted by reading a blog post Never Invent Here: the even-worse sibling of “Not Invented Here” by Michael O. Church, which reminded me of some things I’ve observed.

My experience of Never-Invent-Here has fortunately been brief and infrequent, and of a specific kind, but I know exactly why it happens and how to recognise it and what to do about it. It’s kind of inevitable.

When your exciting, innovation-happy employer has reached a kind of peak, and has a bunch of customers paying a steady revenue stream, and there’s no more inventing to do, they get acquired by what I call an Acquisition Machine (AM). These serve the opposite purpose to a VC, in that they get in at the top floor instead of the ground floor. They come in at the end of the story. They are 100% risk averse.

An AM has a large-ish technical staff, all former inventors from the mature companies they’ve gobbled up (having fired all the other staff). They never intentionally invent anything in-house. They almost never hire anyone new. The tech staff are only there to put out fires in old products for old customers (only the ones covered by maintenance contracts – that existing stream of revenue is the entire focus of the business).

So you may find yourself as one of those technical staff. You basically work in a museum now. The only projects that come up are “consolidation” efforts: making several old products look like one. In fact the management may just be inventing busy-work to keep you distracted during the down time. That’s cool! You can pitch approaches that involve the latest technologies, get skilled up. Play-act at inventing for a while.

Look at the world from the AM’s perspective. They’re terrified, risk-averse owners of shares that are never going to get back to what they were worth in 1999. They have a choice:

a) Bet on something invented in-house by one of these excitement-starved nerds we employ, and go to all the trouble of marketing it… OMG this sounds difficult and it probably won’t work… no way of estimating how much it will cost in total or how much we’ll ever make. Let’s not.

b) Buy an entire product-business-unit, something already proven, with customers already paying for it, something that is already clearly quantified with known cash inputs and outputs, whose outgoing management have already helpfully made a list of who you need to keep on and who you can safely lay off! Ready to plug into the machine and slowly drain down, generating cash to go on the pile, ready for the next acquisition.

Which are they going to go for? They are not in this for the excitement! It’s a slow way to grow, but it is a kind of growth. The target company’s old management are running out of exit routes, so they sell up cheap and so the AM does actually make a profit in the end. It just doesn’t do it by inventing anything. There’s no need.

I’m painting an extreme caricature of course – real companies may be somewhere on a spectrum from innovator to AM. Or they may be made up of parts with different degrees of maturity/stagnation. But the more stagnant it is, the worse it will be for anyone who wants to invent new stuff.

So if you are an inventor, and your employer gets bought, look for the signs that you’re stuck inside an AM. If you are, wait until you find a new opportunity elsewhere to start over, and then get the hell out. You can’t “fix” an AM. There’s nothing to fix. They’re just not the kind of business you want to be in. They have their own reason to exist.

React-ions – Part 2: Flux, The Easy Way

March 20, 2015

The second of a two-part series about React:

  • Part 1: Mostly Great
  • Part 2: Flux, The Easy Way (this post)

Catching up on Flux has been an amusing experience. It’s like reading about a dance craze in the 1950s. Instead of “The Twist”, everybody wants to do “The Flux”. People are nervously looking in the mirror as they try out the moves. They write to the newspaper agony aunt, “I tried to Flux, but am I doing it right?” They want a member of the priesthood to bless their efforts with holy jargon, and say “You are one of us, Daddio!”

You’d think someone with a software project would rather ask:

  • Did my app work, in the end?
  • Does it perform okay?
  • Was it easy?
  • How much boilerplate crap did I have to paste in from blog posts?
  • Do I feel comfortable with the complexity or did it get out of control?
  • Do I know how to extend it further without creating a mess?

And based on their own answers to those questions, they should be able to figure out whether an approach was worthwhile for them.

My executive summary of Flux is: it’s a niche approach at best. For a lot of (maybe most) dynamic interactive UI development it’s not the right choice, because it’s error prone and unwieldy without providing significant advantages.

So how did I reach this conclusion?

It’s extraordinary that of the many hundreds of blog posts about Flux, hardly any try to explain it or justify it. They just describe it, without reference to long-established patterns it partly resembles, and without clarifying why it takes the trouble to deviate from those patterns. (Even worse, most explanations give truncated examples of how it might be used which don’t proceed far enough to demonstrate its intended purpose.)

The three primary sources of information I’ve drawn on are:

  • The official Flux site
  • The video of the Facebook talk introducing Flux (and React)
  • The examples in the Flux GitHub repository

Derived Data

From the official site:

We originally set out to deal correctly with derived data: for example, we wanted to show an unread count for message threads while another view showed a list of threads, with the unread ones highlighted. This was difficult to handle with MVC – marking a single thread as read would update the thread model, and then also need to update the unread count model. These dependencies and cascading updates often occur in a large MVC application, leading to a tangled weave of data flow and unpredictable results.

Holy hype alarm, Batman! The part about the tangled weave is absolutely not justified by the scenario being described. This is a very familiar situation: some data set B needs to be computed from some other data set A. How did Facebook end up with a tangled weave and unpredictable results? Did they visit the wrong wig shop?

A sensible approach would be to come up with a pure function that accepts A and returns B. If that is an unworkable technique that causes “unpredictable results”, then someone needs to let the applied mathematicians know. Then hopefully they can break it gently to the pure mathematicians who will then need to rebuild their entire subject from scratch.

For an example of something that does this right, look no further than a React component, which has state and props, and if either of those changes then the render function is evaluated to generate a complete new virtual DOM tree – relevant portion of the video.

Key lesson: first, try writing a pure function. If the performance is unacceptable (in 99.9% of cases, it’ll be fine) then consider alternatives.

In the video there is an example of how Facebook’s chat feature got more complex as it evolved. The code you can see growing on the screen is a function that runs every time the user has a new message – it’s effectively a handler for that event. They interpret the event by dishing out modifications to several different parts of the UI that may or may not be interested in what happened, depending on their current state.

They’re right – it was a very complicated way of doing it. They weren’t using pure functions. They should do what React components do: update a single definitive model of data (the state from which everything else can be computed), and then let the other components know that something has changed (no need to be specific), so they can all recompute all their data from scratch as a pure function.

In this case, that means keep all the messages received so far on a list. When a new message arrives, add it to the list and notify anything that needs to update.

It’s a common reaction to think how wasteful and inefficient it is to do that. But the second half of the video (the half that is not about Flux) is devoted almost entirely to dispelling that belief, as the React DOM reconciliation approach assumes that there will be an insignificant cost to recomputing the entire new virtual DOM every time anything changes.

In this case, rather than going from component state to virtual DOM, we’re going from list-of-all-messages to (for example) count-of-unread-messages. The functional-reactive approach here is to scan the array of message objects and count how many have a boolean property called read that is false. On my notebook such an operation is too fast for the JS timer resolution for any realistic number of messages. For a million messages it takes 18 milliseconds.
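In code, the derived data is just a pure function (the Message shape here is invented for the sketch):

interface Message { text: string; read: boolean; }

// List of all messages in, count of unread messages out.
function unreadCount(messages: Message[]): number {
    return messages.filter(m => !m.read).length;
}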

The Root of All Evil

The problem here, as so often, is premature optimisation, which is the idea that you can achieve “high performance” by doing everything the hard way. It’s simply not true.

The single most important quality software can have is malleability. It must be easy to change without breaking it. This leads to high performance software because the best way to achieve that is to measure the performance with a profiler and make careful, valuable optimisations only to those specific spots that you have found to be genuine bottlenecks. Easy to change means easy to optimise.

If you lean heavily on pure functions this will be a huge help to you as you apply performance tricks, because you can use caching very easily. And a clear distinction between mutable and immutable data is also important because it makes it easy to know when you need to clear the cache. Again, React components demonstrate this perfectly.

The Flux approach

Instead of radically simplifying and using pure functions like React, the aim of Flux is to stick with the difficult, fine-grained state-mutating approach shown in the video, where each time a message arrives, you run a very imperative set of code that mutates this, mutates that, mutates something else, in an attempt to bring them all up to a state that is consistent with what is now known.

In other words, if you want to keep doing it the hard way, Flux might just be the approach for you.

Here’s the simplest diagram: Action → Dispatcher → Store → View.

Those arrows are effectively function calls, but via callbacks. So the thing on the right has registered a callback with the thing on the left, so that left can call right, and the influence of an action ripples through the layers.

  • An action is a plain JS object tagged with a string type property (someone loves writing switch statements!) representing a request to update some data. Think of it as an abstraction of a mutating function call on your data model. As well as type it can contain any other parameters needed by the notional mutating function.

  • Having constructed an action, you pass it to the Dispatcher, which is a global singleton(!). The dispatcher has a list of subscriber callbacks, the subscribers are known as “stores”, and they are also global singletons(!!). The dispatcher loops through the stores and passes the action to all of them.

  • Each store’s subscription callback has a switch statement (hello!) so it can handle specific action types. It makes selective mutations to its internal (global singleton) state according to the instructions in the action, and raises its own change event – each store is an event source.

  • A view is a React component that subscribes to one (or maybe more) stores in the traditional way, i.e. as an event sink. Not all components have to do this. They speak of “controller-views” (two buzzwords for the price of one) that are specific React components that take care of subscribing and then use props to pass the information down their subtree in the standard React way.

So actions carry instructions to mutate data, and they make it as far as the store, where they cause data to be mutated. The last arrow is slightly different: it is not an action being passed. It’s just a change event, so it carries no data. To actually get the current data, the subscribing React component must call a public method of the store to which it is subscribing.
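Condensed into code, the machinery just described looks something like this (a stripped-down sketch with invented names, not the actual Flux source):

interface Message { text: string; read: boolean; }
interface Action { type: string; message?: Message; }

class Dispatcher {
    private callbacks: ((action: Action) => void)[] = [];
    register(callback: (action: Action) => void) { this.callbacks.push(callback); }
    dispatch(action: Action) { this.callbacks.forEach(cb => cb(action)); }
}

const dispatcher = new Dispatcher();    // global singleton(!)

class MessageStore {                    // also a global singleton(!!)
    private messages: Message[] = [];
    private listeners: (() => void)[] = [];

    constructor() {
        dispatcher.register(action => {
            switch (action.type) {      // the switch statement (hello!)
                case "RECEIVE_MESSAGE":
                    if (action.message) this.messages.push(action.message);
                    this.emitChange();  // change event carries no data
                    break;
            }
        });
    }

    getAll() { return this.messages; }  // views pull data via public methods
    addListener(listener: () => void) { this.listeners.push(listener); }
    private emitChange() { this.listeners.forEach(l => l()); }
}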

The point of all this, from the web site:

Stores have no direct setter methods like setAsRead(), but instead have only a single way of getting new data into their self-contained world — the callback they register with the dispatcher.

Why are stores banned from having their own setter methods? Because otherwise it would be possible to update their contents without also notifying all other stores that need to update in sync. That’s how the “derived data” problem is to be solved. If you only had one data store interested in various actions, there would be no point to any of this. (So please, if you’re thinking of blogging on this topic, remember to include in your example several stores that respond to the same actions so they can make corresponding updates to remain in sync.)

The examples

There are currently two examples in the github repository: TodoMvc and Chat.

TodoMvc has a problem as an example: it only has one store. This means it doesn’t actually have the problem that Flux is intended to solve. If there’s only one store, there’s no need for separate actions that go via a dispatcher to let multiple stores listen in. It could just have a store with ordinary methods that mutate the state of the store and fire the change event.

Chat has three stores, and is based on the Facebook chat scenario covered in the talk, so it’s got potential to be a lot more applicable and illuminating.

In Chat, the three stores are:

  • MessageStore – a flat list of all messages across all threads
  • ThreadStore – a list of threads, with only the last message in each thread
  • UnreadThreadStore – an integer: the number of unread threads

The last one is more than a little ironic: if you look closely at the video, they were originally responding to events, and so decrementing/incrementing the unread message count. But in the Flux Chat example, even though they’re demo-ing a framework that is based on events so it can do exactly that kind of minimal mutation of the existing data, instead they’ve written the example so it recomputes the count by looping through all the messages (at the moment it doesn’t even cache the count).

If you’re going to be recomputing from scratch like that (and why wouldn’t you?) then the strict action-dispatching approach of Flux is not actually going to be serving any purpose. It’s just ceremony. You could just have primary stores that store data; they’d have simple methods you could call to make them (a) mutate their data and (b) fire their own change event. Then you could have secondary stores that recompute their data in response to change events from other stores (both primary and secondary).
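Sketched in code (invented names again, reusing the MessageStore shape from the earlier sketch):

// A secondary store that derives its data from a primary store's change
// event, with no knowledge of which action caused the change:
class UnreadCountStore {
    private count = 0;
    private listeners: (() => void)[] = [];

    constructor(messages: MessageStore) {
        messages.addListener(() => {
            this.count = messages.getAll().filter(m => !m.read).length;
            this.emitChange();
        });
    }

    getCount() { return this.count; }
    addListener(listener: () => void) { this.listeners.push(listener); }
    private emitChange() { this.listeners.forEach(l => l()); }
}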

The UnreadThreadStore also gives us a demonstration of a dispatcher function called waitFor. This gives stores control over the order in which stores handle actions, essentially by telling the dispatcher to run specific stores’ action handlers synchronously for the current action. The reason a store will do this is because it wants to read data from those other stores, and it needs to do this after the other stores have updated their state.

It would make more sense for it to listen to the other store’s change event. A project called Reflux suggests doing exactly that.

I’ve seen discussions where people claimed that stores listening to other stores’ change events, perhaps through several layers, is the kind of “tangled weave of data flow and unpredictable results” that Flux is trying to avoid. But it’s just not. A tree of chained event handlers is a fine example of clean composition. If the arrangement is immutable (once constructed, a listener cannot be switched to listen to a different event source) then infinite loops are impossible.

The mess Facebook originally experienced was not due to chained event handling, but due to disorganised fine-grained mutation of lots of different states in a single event handler.

Through the use of waitFor, Flux effectively does have stores listening to other stores’ change events. But it’s worse than that, because notice how in the UnreadThreadStore it has to listen for the two actions that it knows will cause the ThreadStore to change its data:

case ActionTypes.CLICK_THREAD:
    UnreadThreadStore.emitChange();
    break;

case ActionTypes.RECEIVE_RAW_MESSAGES:
    UnreadThreadStore.emitChange();
    break;

If someone changes ThreadStore so it responds to a new action by mutating its state, that someone will have to check all the other stores to see whether they also need to respond to the new action, because these other stores may depend on the state of ThreadStore which is now changing more often. If the other stores just listened on ThreadStore‘s change event this would not be necessary. It would be an internal detail of ThreadStore, and the rest of the system would be isolated from that detail. By following the Flux approach, you are encouraged to spread around knowledge of how every store responds to actions. The intention is to maximise “performance”, but to reiterate, React itself does it much more simply: if any state or props of a component changes in any way, the whole component’s virtual DOM is re-computed by a single render function.

And to be clear, there is only a purpose to any of this if you are doing fine-grained mutation of stored state in response to the same actions in multiple stores, which UnreadThreadStore clearly doesn’t do.

So enough about UnreadThreadStore. How about ThreadStore and MessageStore? Again, they are peculiar. If you run the Chat demo site, you’ll notice that there is no page where you can see all the messages regardless of the thread they are in. What you see is a list of all the threads, and the messages in the currently selected thread.

It’s strange therefore that MessageStore maintains a list of all messages across all threads. It’s even stranger that it then has a function that, on demand, filters that list to get a list of just the messages for one thread! Again, this is the right way to do it, but it makes a mockery of the action-dispatch approach.

So ThreadStore is our last hope, and it delivers! It is actually another store that responds to some of the same actions as MessageStore and mutates its own data. Hurrah! But again, something really weird has happened. There’s a function getAllChrono that recomputes, from scratch, every time it is called, a sorted list of all the threads. And that’s the function that the associated React component calls so it can display the list.

Let’s consider some simpler alternatives:

  • One store that stores all the messages in a list. When you want to know the list of threads, scan through all the messages and gather that information on the fly, ditto the count of unread messages. These can be methods on that store.

This would probably be fine, even with absurdly large numbers of messages. But it might be both clearer and more understandable (as well as “faster”, though not meaningfully) this way:

  • One store that stores threads, which each have a list of their own messages. When a new message arrives, find the thread object and add it to that thread’s list of messages. This means you can now update and keep cached aggregate information about a thread. When you need to recompute it, you can scan just the messages for that thread.

But there’s a problem with both of these approaches: they don’t demonstrate Flux at all! They can have ordinary methods that mutate the single “source of truth” data. No need for actions or dispatchers.

It is true that both Todo and Chat are contrived examples – in fact there is a comment to that effect on this issue. And so we can expect there to be some unrealistic usages; they wanted an example that was simple to follow but which exercised all the key APIs.

However, this does mean that the creators of Flux have yet to provide an example that needs Flux. And in the various 3rd party examples I’ve looked at the situation is typically worse, in that most don’t even have multiple stores.

What do I conclude from this? I’m open-minded enough that I would still be interested to see an example that really does something that would genuinely be harder to accomplish (and evolve further) without action-dispatching. But I suspect it would be a very niche, unusual application.


React-ions – Part 1: Mostly Great

March 14, 2015

The first of a two-part series about React:

  • Part 1: Mostly Great (this post)
  • Part 2: Flux, The Easy Way

I’d been planning to leave React well alone until it settled down a lot more. But over the last week I’ve started idly playing with it while travelling and waiting around, and getting more and more into it. It’s been dividing opinions for over a year now – but then, they let just anyone post on the Internet, so it’s full of idiotic opinions, right?

A work in progress

Turns out I’m not that late to the party. React’s version number starts with a zero, which under semantic versioning means “Anything may change at any time. The public API should not be considered stable.” The React team is taking full advantage of this early stage of development. They are not totally ignoring backward compatibility, but they are making trade-offs, e.g. if they can be backward compatible for code that uses JSX, then it’s okay if they break code that doesn’t use JSX. And yet JSX is supposed to be optional… But this is fine. Some parts of the API are necessarily more stable than others. They’re learning as they go, and one thing that’s gradually influencing them is the importance of stating (and controlling) which things need to be immutable. Every version seems to make an advance in that respect.

On the abandonment of external templates

As a heavy user of such templates, no complaints from me on this. In Angular and Knockout we add extra attributes to standard HTML, and the attributes are themselves a kind of embedded DSL in the HTML. The theory is that this means that the view or “presentation layer” is written in a high-level declarative language, so it can be maintained by a non-programmer. In practice this is unworkable. A template with bindings is fragile against modification by a non-programmer. You really have to know what you’re doing before you touch heavily template-ized HTML. It only appears clean and simple in the most unrealistic examples.

An external HTML template may appear superficially to be “separate” from the view model it binds to, but in reality it is intimately connected to it, having a one-to-one dependency between tags and bindings in the HTML and properties in the view model. And this means that the appearance of separation is unhelpful rather than helpful.

So this is all music to my ears. I’ve long thought that technology layers are overused as a way to carve up systems. Accordingly during my first experiments in large-scale JS app development, I rolled my own library that built very formulaic CRUD-like UIs out of what I called “schemas” (these were actually JS arrays). There was no HTML template in this system. Instead there were “types of control”, such as integer, date-time, etc. and you composed them to make a “record editor” that was self-persisting to JSON. It was crude but adequate. I liked that it lent itself to modularity, and let me add new whole capabilities in one vertical slice that cut across several technology layers.

Shortly after that I got enamoured of Knockout, which emphasised having a separate view (HTML+bindings) and view model (JS+observables). But I rapidly realised that what I very often wanted was a way to build UIs out of components, so I wrote my own custom bindings to achieve this, based again around the idea of a “control”, which is a view model with a built-in HTML template. Knockout 3.2 has since added its own support for components. However, it encourages you to register components into a global namespace so they can be referred to by name in HTML templates. This cuts across any module system you’re using to organise your code; your whole app is one big namespace at the component level.

React components don’t have this problem. Everything is JS, and so it can build on JS scoping and modularity. There is no global behind-the-scenes module-ignorant namespace of registered plugins. In your render function you may refer to another component by name, but it’s just the name of a JS variable that has to be in scope, e.g. imported from another module via require.
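For example (a hypothetical sketch, in the require style of the time):

var React = require("react");

// ProgressBar is just a variable in scope, not a globally registered name:
var ProgressBar = require("./ProgressBar");

var Panel = React.createClass({
    render() {
        return React.createElement(ProgressBar, { value: 0.5 });
    }
});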

Seriously, I’m all over this like a weird rash.

Static typing

Types are taking over JS, kick-started by TypeScript, which is growing rapidly in both user base and features, is already solidly mature and effective, and is prompting further research efforts such as Facebook’s Flow and Google’s SoundScript.

This is another area in which React has an advantage by doing everything in JS and not breaking out into external HTML templates. Checking static types inside the binding attributes in an HTML template requires compile-time understanding of how all the kinds of attribute work. Not to mention special tooling to get design-time feedback, auto-completion in the editor. None of this is a problem for React.

Well, almost. The problem is there’s this strange thing called JSX.

JSX

Facebook’s own flavour of typed JavaScript, Flow (also not really ready for production), has built-in support for React’s JSX syntax (also from Facebook). What a fortunate coincidence! I think this is what they call synergy.

There have also been a couple of efforts to graft JSX support into a fork of TypeScript. But is this even necessary?

I find JSX to be a mere gimmick and distraction, with no discernible value. Indeed its existence may harm rather than help React adoption, because it’s so egregiously unjustifiable. Its only purpose is to look eye-catching in code snippets, providing a visual motif for people to mistake for the essence of React.

The story goes like this:

render() {
    return <div className="foo"></div>;
}

generates (at the moment, anyway):

render() {
    return React.createElement("div", { className: "foo" });
}

So at first glance JSX appears to be achieving significant boilerplate reduction. The React docs point us to a built-in shorthand for non-JSX users:

render() {
    return React.DOM.div({ className: "foo" });
}

Better, though still not that short. But if we’re going to be using div and span a lot, we could just import them into our namespace:

var div = React.DOM.div,
    span = React.DOM.span;

Now the “verbose” version is:

render() {
    return div({ className: "foo" });
}

i.e. not at all verbose, almost the same length as the JSX version, with the advantage of being just plain JS.

In any case, these simple examples are misleading. In a realistic example of a component that actually does something useful there will be conditional elements (shown or not depending on this.state) and repeated elements using Array#map, etc. These parts have to be written in JS, and it’s a sensible React principle that there’s no point inventing a second syntax for them.

So often at least half the code in render is not expressible in JSX anyway. I find that staying in one perfectly adequate syntax is actually more helpful than switching back and forth.

And as for succinctness, when you’re rendering to DOM elements it’s quite common to need to throw in some purely structural wrappers that only have a class attribute, which in React has to be written as className. So what if you used factory functions that could optionally take a string and expand it into an object with a className property?

render() {
    return div("foo");
}

Uh-oh. Way shorter than the JSX version!

So I threw together a library to make this effortless, but as I was using a rough cut of it and finding it super convenient, I naturally wondered: given how handy this is, why doesn’t the React library itself support passing a string instead of a properties object?
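The trick amounts to something like this (a rough sketch of the approach, not the actual library code):

// Wrap a React.DOM factory so that a string argument becomes { className }:
function shorthand(factory: any) {
    return function (props: any, ...children: any[]) {
        if (typeof props === "string") {
            props = { className: props };
        }
        return factory.apply(null, [props].concat(children));
    };
}

var div = shorthand(React.DOM.div);
var span = shorthand(React.DOM.span);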

I submitted a pull-request to do just that, but they turned it down. I admire their desire to not absorb into the core things that can be added externally, which is a great general principle to adhere to. But I wouldn’t have applied that principle in this case; the simple sweetness of the string-as-className shortcut is undeniable; so much so that now I’ve thought of it, it feels like an accidental omission that the core library doesn’t already support it.

It’s clear that React would be technically stronger without JSX, but it may be weaker from a marketing perspective. JSX is something concrete and weird-looking that people can focus on as the Chemical X in React, even though that is fundamentally misleading. So there’s the classic marketing-vs.-reality tension.

Events, Observables, Dirty checking etc.

A view has to update itself when the data in the view model changes. There are broadly two ways to do this:

  1. Dirty checking
  2. Observables

Angular uses dirty checking: it keeps a snapshot of the model data. After various events likely to coincide with data changes (e.g. button clicks), Angular compares the model data with the snapshot to find out what has changed.

Pretty much everything else uses observables. An observable is the combination of a value and a change event that fires when the value changes. Obviously you have to call a setter function to set the value, so that the change event can be fired. What if the value is a complex object and you tweak a value inside it? That’s no good – you’re bypassing the mechanism that fires the change event. So a good principle to abide by is to only store immutable objects in observables. The whole observable can be mutated, but only by completely replacing its whole value via the setter function.
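To make that concrete, a minimal observable might look like this (an illustrative sketch, not any particular library’s API):

class Observable<T> {
    private listeners: ((value: T) => void)[] = [];

    constructor(private value: T) { }

    get(): T { return this.value; }

    // The only way to change the value: replace it wholesale,
    // so the change event always fires.
    set(newValue: T): void {
        this.value = newValue;
        this.listeners.forEach(l => l(newValue));
    }

    subscribe(listener: (value: T) => void): void { this.listeners.push(listener); }
}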

React is interesting because it sort of uses both these ideas, in very limited ways.

On the one hand, it does dirty checking, but not on plain model data; it holds a snapshot of a description of what the state of the DOM should be. This is a fantastic simplification compared with Angular, because React can make minimal updates to the DOM based on a fixed set of rules.

And on the other hand, every component has an associated observable called its state. We know it’s an observable because we have to call setState to change it, and the documentation warns us not to mutate it any other way. Yet there’s no public API to subscribe to a change event. The component itself is the only thing that directly subscribes to it.

There are small weaknesses to the React component API. The central one is that the current state is a public property of the component class, so the fact that you’re not supposed to modify it directly is not self-documenting: there’s a setState but no getState.

And maybe there shouldn’t be either of them. According to the docs there are situations where you aren’t allowed to update the state. So it might be better for each of the component methods to accept parameters providing the current state and – where applicable – a function to update the state. This would make it self-documenting w.r.t. which operations are allowed during a given method.

Tune in next time, wherein I confront the mysteries of Flux! What is it, really? And more to the point, what should it be?
