Desktop Apps in JavaScript+HTML

March 7, 2014

Part of the reason I did Eventless (apart from it being fun and explanatory) was so I’d have something almost as convenient as Knockout available to me the next time I had to write a desktop app with a UI. It has now been many years since we could regard web UI development as a hairshirt chore, where we have to make do without the comforts of familiar mature environments and tools. Quite the opposite: whenever I have to write and support a desktop app I curse the fact that I can’t hit F12 and “inspect the DOM” of my UI while it’s running, or immediately debug the code on the tester’s machine when they find a problem.

In the decade-before-last, Microsoft quietly released an IE feature called HTA, or “HTML application”. It’s the neat (but trivially obvious) idea of switching off the usual browser security checks and allowing you to create any ActiveX scriptable objects, so you could write applications with HTML UIs. Neat, but horrible in practice because… well, you had to be there to appreciate the overall shoddiness. And so of course they had to rush down to the patent office.

But fast forward to the last few years. We can develop complex server or command-line applications in JavaScript using the Node platform and its ecosystem of libraries. We can use a variety of front end languages to fill in the deficiencies of raw JavaScript. The debugging experience inside modern browsers is the best debugging experience anywhere. And so on. It’s well past time for HTAs done right.

It being such an obvious idea, there are a few implementations floating around, but the best I’ve seen is node-webkit. Which is a fine technical name (because it tells you honestly that it’s just node mashed together with webkit), but I think they should pick some cool and memorable “marketing” name for it, because it’s just too brilliant a combination to go without a name of its own. I suggest calling it 长裤 (“trousers”). Or failing that, 内裤 (“underpants”).

The easiest way to get started with it is to install node, then the nodewebkit package with the -g flag (it’s not a module that extends node; rather, it’s a separate runtime that has its own copy of node embedded). Then you create a package.json with an HTML file as its main:

"main": "index.html"

From that HTML file you can pull in scripts using the script tag in the usual way. But inside those scripts you can use node’s require. Yup.

The sweet combination for me is the beautiful TypeScript, plus the phenomenal Knockout, plus whatever node modules I want to call (along with TypeScript declarations from DefinitelyTyped). This gives me the best of everything: static typing, Chrome-like debugging, the smoothest most scalable/flexible form of two-way binding, the whole works. So I’ll probably never use Eventless in a real project.

I actually started writing a UI for conveniently driving Selenium (which I hopefully will have time to describe soon) in C#/Windows Forms. After getting it all working, I trashed it and switched to node-webkit and it was ridiculous how quickly I was able to get back to the same spot, plus a huge momentum boost from the fun I was having.

(Though admittedly quite a lot of that fun was probably the first flush of joy from using TypeScript.)

(I’m a nerd, did I mention that?)


JavaScript for Java programmers

December 7, 2013

I just found on my hard drive a talk I gave over two years ago. If you’re a reasonably experienced Java programmer looking for a way to really understand how JavaScript works (especially functions as objects, closures, etc.) it may be of help to you:


Rich Text Editor in the HTML Canvas – Part 1: Introducing CAROTA

November 4, 2013

I’m developing a rich text editor from scratch in JavaScript, atop the HTML5 canvas. It’s called Carota (Latin for carrot, which sounds like “caret”, and I like carrots).

Here is the demo page, which is very self-explanatory, in that it presents a bunch of information about the editor, inside the editor itself, so you can fiddle with it and instantly see how it persists the text in JSON. As you can see, it’s quite far along. In fact I suspect it is already good enough for every way I currently make use of rich text in browser applications. If your browser is old, it will not work. (Hint: IE8 is way old.)

So… Why? What a crazy waste of time when browsers already have the marvellous contentEditable feature, right?

A quick survey of the state-of-the-art suggests otherwise. Google Docs uses its own text layout and rendering system, only using the DOM as a low-level display mechanism (the details on that link are very relevant and interesting). Go to Apple’s iCloud, which now has a beta of their Pages word processor, and use your browser to look at how they do it: the text is rendered using absolute, meticulously positioned SVG elements, so they too perform their own layout.

And having tried for the last year to get contentEditable to serve my purposes, in the same way on all browsers (actually, even one browser would be something), I can understand why the Twin Behemoths of the Cloud have taken control of their own text layout. So I’m going to do the same thing, but with Canvas. (My previous plan was to do a plugin for Windows so I’d be able to use the Win32 Rich Edit control, but that kind of plugin is about to die out.)

Before I got as far as drawing any text on screen, I had to be very careful to build up a fundamental model of how flowing text actually works. I wanted to end up with beautiful components that each do something extremely simple, and plug together into a working editor. That way it’s easier to change stuff to meet future needs. I’ve designed it from the ground up to be hacked by other people to do whatever they want.

So, hopefully I’ll be back soon to start describing how it works. In the meantime, fork me on github and you can also get the development set-up via the usual:

npm install carota

For a really quick minimal demo, try this jsfiddle, which just creates an editor in an empty DIV and then uses load and save for persistence.

per: composable forward-passing processor functions

October 31, 2013

The other day I tried implementing SICP streams in JavaScript as part of a fun project I’m tinkering with. I noticed that (unsurprisingly, given how they work) they generate a lot of memory garbage during the main loop of whatever operation you’re doing, and the overhead of this can actually become significant.

So I blogged them, sighed to myself, and then ripped them out of my project. What should I use instead? I want a combination of high performance and convenience. Also I miss generators, which of course I can’t (yet) assume are available on my target platforms, as they include browsers dating as far back as the ancient IE9 (ask your grandmother) and Chrome 30 without the about:flags experimental JavaScript features enabled.

My building block will be a function of this shape, which I’ll refer to as a processor:

function timesTwo(emit, value) {
    emit(value * 2); // this part is optional
}

It takes two parameters, the first being a function to which it can pass values forward, and the second being a value for it to process. So you have to call it to give it a value, and it may or may not emit values onwards to wherever you tell it.

Of course it can emit nothing, or many values:

function triplicate(emit, value) {
    emit(value);
    emit(value);
    emit(value);
}

If you’re into Ruby this will be very familiar to you as a way of representing a sequence of values, and if you implement this kind of function it’s nice and easy, like using yield in a true generator.

The difference here from the Ruby pattern is that we’re discussing a pure function, rather than a method on an object. So we take a second argument as an input that can be used to determine what we emit. For example, we could assume our values will be arrays, and “flatten” them:

function flatten(emit, value) {
    value.forEach(emit); // JS array's forEach fits perfectly
}

If you called this three times passing an array each time, emit would get all the elements of the three arrays as a flat stream of elements (in separate calls), not knowing which arrays they were originally from.

Alternatively we can ignore the second parameter (not even bothering to declare it) and so define a pure source of data that emits its sequence of values when called:

function chatter(emit) {
    emit('hello');
    emit('world');
}

So far, so patterny. But the pain with these kinds of building blocks is that they don’t directly compose. An intermediate processor like flatten wants two arguments: where to emit its output and what value to process. But any preceding step just wants a function called emit that accepts one argument: a value.

We can take any two processors and turn them into a single processor that chains the two (put on your higher-order functional programming spectacles now):

function compose(first, second) {
    return function(emit, value) {
        return first(function(firstValue) {
            return second(emit, firstValue);
        }, value);
    };
}

(Note: I pass back the return value through the layers because it has a use that we’ll go into later.)

See how it works? compose creates and returns a new function. It weaves first and second together. The new wrapper receives the outer emit, and that is given to second, so that’s where the final results will be sent. The first function is passed another new function to emit to, which is where we call second.
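As a quick sanity check (this usage example is mine, not from the original post), here is compose joining the two processors from earlier:

```javascript
function timesTwo(emit, value) { emit(value * 2); }
function triplicate(emit, value) { emit(value); emit(value); emit(value); }

function compose(first, second) {
    return function(emit, value) {
        return first(function(firstValue) {
            return second(emit, firstValue);
        }, value);
    };
}

// triplicate feeds timesTwo, so one input value produces three doubled outputs
var tripleDouble = compose(triplicate, timesTwo);
var results = [];
tripleDouble(function(v) { results.push(v); }, 5);
// results is now [10, 10, 10]
```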

That’s the beautiful version. But it means that we have to allocate a function object every time a value is passed in. And let’s be crazy C++ programmers for a moment and assume that’s Just Not Good Enough. We could rewrite compose to be butt-ugly and yet still do what we want, without doing any dynamic allocation after initial setup:

function compose(first, second) {
    var secondEmit;
    function firstEmit(firstVal) {
        return second(secondEmit, firstVal);
    }
    return function(emit, value) {
        secondEmit = emit;
        return first(firstEmit, value);
    };
}

Yes, EEWWW indeed. We use a local variable, secondEmit, in which we stash the outer emit as soon as we know it. And we create a firstEmit function once, so we can reuse it.

In simple scenarios this will behave the same as the beautiful version. But not always:

function troubleMaker(emit, value) {
    setInterval(function() { emit(value); }, 100);
}

var doubleTrouble = compose(troubleMaker, timesTwo);

doubleTrouble(function(value) { console.log(value); }, 3);
doubleTrouble(function(value) { document.write(value); }, 4);

Now we have the value 6 being printed to the console and the value 8 being appended to the document. Except… not if we use the second version of compose, because then the second call to doubleTrouble would redirect both streams to the new destination (by updating that secondEmit local variable). What a mess.

Fortunately, we’re not C++ programmers! Phew! This doesn’t mean that we don’t care about performance. It just means that we do some measuring before going crazy about imaginary overhead. And on V8 I find that the “faster” version is approximately… 1% faster. Screw that. Let’s stick with the beautiful, easy-to-predict version.

The one other interesting feature of the pattern is what we can use the return value for: to signal that the receiver doesn’t want to receive any more data. To do this they return true. So our flatten example should actually look like this:

function flatten(emit, value) {
    return value.some(emit);
}

That way, we stop looping unnecessarily when there’s no need to keep going (because that’s what a JavaScript array’s some method does: quits and returns true when the function you gave it returns true).

So that’s the pattern. What more is there to say? Well, although writing a processor (or pure value generator) is very easy, because you just write imperative code and call emit whenever you like, it’s not so convenient to use them as components. Yes, we have compose to efficiently pipeline processors together, but there are a lot of “standard” things we tend to do on sequences where it would be nice not to have to write boilerplate code. Especially when writing tests (shudder).

To make this really easy and succinct, I’ve cooked up a little library called per:

npm install per

(Source code)

It’s reminiscent of jQuery, in that it puts a lightweight wrapper around something so that we can call useful operations on it. Most operations return another instance of the wrapper, so calls can be chained. The only entry point into the whole library is a single function called per, so in node (or browserify or webmake) you’d say:

var per = require('per');

In the browser you could just load per.js via the script tag and then you get a global per. Then you can wrap functions like this:

var numbers = per(function(emit) {
    for (var n = 0; n < 100; n++) {
        emit(n);
    }
});

The simplest methods on a per are ways to capture and return the emitted values:

var f = numbers.first(),    // f == 0
    l = numbers.last(),     // l == 99
    a = numbers.all();      // array [0... 99]

The function wrapped by a per is available in a property called forEach, so named because for a simple generator you can treat it a little like an array:

numbers.forEach(function(value) {
    console.log(value);
});

To compose functions, there is a per method which is exactly like the compose function we figured out above.

var odds = numbers.per(function(emit, value) {
                           if (value % 2) {
                               emit(value);
                           }
                       });

The above pattern is an example of a filter, which evaluates an expression to decide whether or not to forward a value. This is such an obvious pattern that we should have a built-in formalization of it:

var odds = numbers.filter(function(value) {
                              return value % 2;
                          });

For extra brevity we support passing a string expression in terms of x:

var odds = numbers.filter('x%2');

Another common pattern is to transform a value and pass it on, which is captured by the map method:

var evens = numbers.filter('x%2').map('x-1');

How can we use the resulting combination? As always it has a forEach property, and because numbers doesn’t need a value parameter we only need to pass it a function to emit to (or we can use first, last or all).

There are operators skip and take that work like their corresponding Linq operators:

odds.skip(3).all()          // [7, 9, 11, 13...]
odds.skip(2).take(3).all()  // [5, 7, 9]

These are examples of stateful transformers, because they have an internal counter that controls their behaviour.
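To make "stateful" concrete, here is a sketch of how a take-like transformer could be written as a plain processor, with a counter held in a closure. This is my illustration of the pattern, not the per library's actual implementation:

```javascript
// A factory that builds a take(count) processor: forwards the first
// `count` values, then returns true to say "stop sending me values".
function makeTake(count) {
    var seen = 0; // internal state, one counter per transformer instance
    return function(emit, value) {
        if (seen >= count) {
            return true; // already full
        }
        seen++;
        var stop = emit(value);
        return seen >= count || stop;
    };
}

// Drive it by hand with Array#some, which stops when we return true:
var take2 = makeTake(2), out = [];
[1, 2, 3, 4].some(function(v) {
    return take2(function(x) { out.push(x); }, v);
});
// out is now [1, 2]
```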

We can also construct an initial per by passing an array:

var a = per([1, 2, 3, 4, 5]);

console.log(a.first()); // 1
console.log(a.last()); // 5

You can even pass per a simple value and it will act as a function that merely emits that one value. For example that value might be something complex, such as a whole document that you’re going to pass through several parsing stages.

The above examples all use a fairly simple structure: the first call to per provides an initial value (or stream of values), then there is a chain of transformations, then the final step collects the results. This is fine if you want to process a stream of data in a single quick operation.

But what if you have a source of data that occasionally delivers you a value, and you want to send it through a pipeline of transformers? That is, rather than a single deluge you have more of a drip, drip, drip of information. You can prepare a chain of transformers:

var p = per(foo1).per(foo2)
                 .listen(function(value) {
                     // each value passes through here
                 });

The listen is used to inject a function in the pipeline that just receives values, but otherwise has no effect on the stream. So in fact it’s like per but you don’t have to write emit(value) – that just happens automatically.
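In other words, listen(fn) behaves as if you had wrapped the observer in an ordinary processor that forwards everything unchanged. A sketch of that equivalence (mine, not the library's source):

```javascript
// What listen effectively does: observe each value, then pass it on.
function listen(fn) {
    return function(emit, value) {
        fn(value);          // observe, with no effect on the stream
        return emit(value); // forward unchanged (and propagate the stop signal)
    };
}

var seen = [], out = [];
var tap = listen(function(v) { seen.push(v); });
[1, 2, 3].forEach(function(v) {
    tap(function(x) { out.push(x); }, v);
});
// seen and out are both [1, 2, 3]
```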

So now you need a way to pass values into the front, for which you can use submit:

p.submit(42);
Sometimes you need to direct the flow of information through multiple separate sections simultaneously. This is where multicast comes in handy:

var p = per(foo1).per(foo2).multicast(foo3, foo4);

You can pass as many arguments as you like to multicast (they can be instances of per or plain transformer functions) and they will each receive every value. By the way, multicast is built on listen, so it doesn’t affect the stream, except that when all the recipients have returned true to indicate that they are full, then multicast itself will do the same.

But really the core idea is to break up a complex multi-stage task into functions like this:

function transformer(emit, value) {
    // do something interesting and maybe call emit
    // a few times, passing forward values to the next stage
}

The per library merely provides some common ways to stitch such functions together.


SICP-style Streams in JavaScript

October 29, 2013

In the not-famous-enough book Structure and Interpretation of Computer Programs (Abelson & Sussman, or “The Wizard book”) we learn about streams.

A stream is a tempting variation on the old school Lisp style of linked list. To get a plain old list, we can set up objects like this:

var a = {
    value: 'apple',
    next: null
};

var b = {
    value: 'banana',
    next: a
};

var c = {
    value: 'cantaloupe',
    next: b
};
So here our whole list is represented by c, and we can loop through it and print all the fruits:

for (var i = c; i != null; i = {
    console.log(i.value);
}

So far, so boring. The idea with a stream is very simple. Instead of storing the next object in the next property, we store a function that, if called, will return the next object. That is, we make it lazy. Note that our loop would still look much the same:

for (var i = c; i != null; i = {
    console.log(i.value);
}

The only difference is we call next() instead of just reading it. And to set up the objects we’d have to say:

var a = {
    value: 'apple',
    next: function() { return null; }
};

var b = {
    value: 'banana',
    next: function() { return a; }
};

var c = {
    value: 'cantaloupe',
    next: function() { return b; }
};

So far, so pointless. But the value of this does not come from silly hand-built examples. In real software you would use this to generate streams from other data sources, or from other streams. It’s like Linq-to-objects in C#, but the foundations are actually more purely functional, because even the iteration process involves only immutable objects, and so everything is repeatable, nothing is destroyed merely by using it. Part-way through a stream you can stash the current node, and come back to it later. It will still represent “the rest of the stream”, even though you already used it once.

It is this extreme level of generality that persuaded me to try using streams in a real JavaScript library. I want to write a rich text editor for HTML Canvas (more of that in a later post, hopefully). So I would have streams of characters, streams of words, streams of lines, etc. It seemed to fit, and also I have a week off work and it’s fun to re-invent the wheel.

I start with an object representing the empty stream. This is nicer than using null, because I want to provide member functions on streams. If you had to check whether a stream was null before calling methods on it, that would suck mightily.

var empty = {};

function getEmpty() {
    return empty;
}

Then we need a way to make a non-empty stream:

function create(value, next) {
    return Object.create(empty, {
        value: { value: value },
        next: { value: next || getEmpty }
    });
}

It uses the empty stream as its prototype, and adds immutable properties for value and the next function. If no next function is passed, we substitute getEmpty. So calling create('banana') would make a stream of just one item.

One very handy building block is range:

var range = function(start, limit) {
    return start >= limit ? empty : create(start, function() {
        return range(start + 1, limit);
    });
};

Note the pattern, as it is typical: the next works by calling the outer function with the arguments needed to make it do the next step. And you may be thinking – AHGGHGH! Stack overflow! But no, as long as we loop through the stream using our for-loop pattern, the stack will not get arbitrarily deep.

Here’s a favourite of mine, so often forgotten about:

var unfold = function(seed, increment, terminator) {
    return create(seed, function() {
        var next = increment(seed);
        return next === terminator ? empty :
            unfold(next, increment, terminator);
    });
};

You call it with a seed value, which becomes the first value of the stream, and also an increment function that knows how to get from one value to the next, and a terminator value that would be returned by the increment function when it has no more values. So in fact you could implement range in terms of unfold:

var range = function(start, limit) {
    return unfold(start, function(v) { return v + 1; }, limit);
};

It can also turn a traditional linked list into a stream:

var fromList = function(front) {
    return unfold(front, function(i) { return; }, null);
};

Groovy! Now we have several ways to originate a stream, so let’s add some methods. Recall that empty is the prototype for streams, so:

empty.forEach = function(each) {
    for (var s = this; s !== empty; s = {
        each(s.value);
    }
};

Nothing to it! And we can use forEach to get a stream into an array:

empty.toArray = function() {
    var ar = [];
    this.forEach(function(i) { ar.push(i); });
    return ar;
};

Of course, how could we live without the awesome power of map? = function(mapFunc) {
    var self = this;
    return self === empty ? empty : create(mapFunc(self.value), function() {
    });
};

Again, that lazy-recursive pattern. And now we can very easily implement converting an array into a stream:

var fromArray = function(ar) {
    return range(0, ar.length).map(function(i) {
        return ar[i];
    });
};

How about concat? Well, this has a slight wrinkle in that if the argument is a function, I treat it as a lazy way to get the second sequence:

empty.concat = function(other) {
    function next(item) {
        return item === empty
            ? (typeof other === 'function' ? other() : other)
            : create(item.value, function() { return next(; });
    }
    return next(this);
};

And with concat we can easily implement the holy grail of methods, bind (known as SelectMany in Linq and flatMap in Scala):

empty.bind = function(bindFunc) {
    var self = this;
    return self === empty ? empty : bindFunc(self.value).concat(function() {
    });
};

Think that one through – it’s a mind-bender. The bindFunc returns a sub-stream for each item in the outer stream, and we join them all together. So:

// ordinary array of numbers
var plain = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// making that same array in an interesting way
var joined = Stream.fromArray(
    [[1, 2, 3], [4], [5, 6], [], [7], [], [], [8, 9]]
).bind(function(ar) {
    return Stream.fromArray(ar);
}).toArray();

// joined contains the same numbers as plain

Anyway, I wrote my rich text layout engine using this stream foundation, and (as I like to do with these things) I set up an animation loop and watched it repeatedly carry out the entire word-break and line-wrap process from scratch in every frame, to see what frame rate I could get. Sadly, according to the browsers’ profilers, the runtime was spending a LOT of time creating and throwing away temporary objects, collecting garbage and all the other housekeeping tasks that I’d set for it just so I could use this cool stream concept. Interestingly, in terms of this raw crunching through objects, IE 10 was faster than Chrome 30. But I know that by using a simpler basic abstraction it would be much faster in both browsers.

How do I know? Well, because I found that I could speed up my program very easily by caching the stream of words in an ordinary array. And guess what… I could just use arrays in the first place. I am only scanning forward through streams and I definitely want to cache all my intermediate results. So I may as well just build arrays. (Even though I haven’t started the rewrite yet, I know it will be way faster because of what the profilers told me).

So, for now, we say: farewell streams, we hardly knew ye.

Eventless Programming – Part 5: Async/Await and Throttling

March 3, 2013

Posts in this series:

Last time we were able to throw a few more working features into our UI with relatively little effort, following very simple patterns. The result is that the UI automatically updates itself according to every change the user makes to the state of a control.

Which raises the question: what if that involves working a little too hard? If the user makes half a dozen changes in quick succession (say, they impatiently click the “Increase Radiation” button five times) then it makes little sense to do a ton of work to update the UI after every single click. It would be wiser to wait until they stop clicking it for half a second or something.

This is especially important when the updating work involves something really costly like downloading content over the network. And that’s another point – how do we integrate with asynchronous background calls? Oh no, my whole universe is starting to crumble!

Except it’s not. There are simple answers to both questions. Let’s start with the rapid-fire-recomputations problem. Recall that in our Notes view model, we cook up this:

SelectedNotes = Computed.From(
    () => AllNotes.Where(n => n.IsSelected.Value).ToList());

This means that every time you check or uncheck the selection CheckBox for a note, a Where/ToList is re-executed, and then various bits of UI infrastructure get rejigged as a result of SelectedNotes changing. In fact we can easily monitor how often this happens by adding this line:

Computed.Do(() => Debug.WriteLine("SelectedNotes: " + SelectedNotes.Value.Count()));

Now try clicking Select all and you’ll see the crazy results:

SelectedNotes: 0
SelectedNotes: 1
SelectedNotes: 2
SelectedNotes: 3
SelectedNotes: 4
SelectedNotes: 5
SelectedNotes: 6
SelectedNotes: 7
SelectedNotes: 8
SelectedNotes: 9
SelectedNotes: 10
SelectedNotes: 11
SelectedNotes: 12
SelectedNotes: 13
SelectedNotes: 14
SelectedNotes: 15
SelectedNotes: 16

Yes, as we loop through the notes setting IsSelected to true, the SelectedNotes list is recomputed every single time and so triggers UI updates. It may not be a problem in this little example, but it’s not too hard to see how it could become a problem if the UI update were more expensive.

So that’s why it needs to be this easy to fix it:

SelectedNotes = Computed.From(
    () => AllNotes.Where(n => n.IsSelected.Value).ToList()
).Throttle(100);

Yep, you just tack .Throttle(100) on the end, and it configures the Computed to wait until it hasn’t been notified of a change in its dependencies for at least 100 milliseconds before acting on it; that is, it waits for stability. There’s actually a second bool parameter that lets you select between two behaviours:

1. true: wait for n milliseconds of stability (the default)
2. false: recompute no more than once every n milliseconds during instability

The default behaviour means that if the user keeps making changes back and forth indefinitely, the UI will never update. But of course in reality they don’t do that, so it’s usually fine. But if you wanted to show continuous feedback while (say) the user drags a slider, maybe the second behaviour would be more suitable. There’s no hard and fast rule here, which is why it’s an option.

Now let’s do some asynchronous downloading. Say we have a TextBox where the user types a URL, and we are going to download it automatically as they type. In reality they’d be adjusting a more restricted set of parameters than just a raw URL, of course, but it serves as an example. First we’d bind the TextBox to a Setable.

var urlText = Setable.From(string.Empty);

Then from that we’d compute a Uri object, or null if the string isn’t yet a valid URI.

var uri = Computed.From(() =>
{
    Uri result;
    Uri.TryCreate(urlText.Value, UriKind.Absolute, out result);
    return result;
});

So far, so obvious. But it turns out the last part isn’t too surprising either:

var response = Computed.From(async () =>
{
    if (uri.Value == null)
        return "You haven't entered a valid url yet...";
    try
    {
        return await new HttpClient().GetStringAsync(uri.Value);
    }
    catch (HttpRequestException x)
    {
        return "Error: " + x.Message;
    }
}).Throttle(100);
You just put the async keyword in front of the lambda you pass into Computed.From (and of course you should probably use Throttle, as I’ve done here). So what is response? It’s an IGetable, as you’d expect, but what type of Value does it hold? By inspecting the various return statements of our lambda, it’s pretty clear that it returns a string, but as it is an async lambda it therefore returns Task&lt;string&gt;.

But Computed.From has a special overload for Task<T> that takes care of await-ing the result value. So the upshot is that response is simply an IGetable<string> and can immediately be bound to a control:


When used inside a UI application, async/await automatically ensures that if you launch a task from the UI thread and await it, the await will return on that same UI thread. So your UI code never has to worry about threads much at all, which is very reassuring (it’s not like the old days, when you had to manually use Control.Invoke to ensure that your UI updating operations happened on the right thread).

The complete source code for this series of articles is available here:

It’s a Visual Studio 2012 solution.

Next time… well, actually I’m going to pause here for a while. The next step would be to see how we can apply these ideas to a more current UI framework like Windows Runtime. In the meantime, I hope it’s been interesting to see how the core ideas of Knockout are applicable in environments that have nothing to do with HTML and JavaScript. It’s not actually about binding to the UI: the key thing is providing a highly productive way to transform your raw data. That’s what makes the binding part easy. Unless you make the data transformation easy, you’ve just moved the problem without solving it. Computed observables are what make it possible to declare your data transformations, and so enable eventless programming. They are the secret sauce of Knockout.


Eventless Programming – Part 4: More Views Than You Can Shake A Stick At

March 2, 2013

Posts in this series:

Last time we got a basic UI together. Next we’ll throw lots of typical ingredients into it, but the common theme will be: everything stays consistent. And there ain’t much code.

First of all, just to follow a tidy pattern, we’ll define a proper “view model” class for our main Form:

public class Notes
{
    public readonly ISetableList<Note> AllNotes = new SetableList<Note>();
    public readonly IGetable<IEnumerable<Note>> SelectedNotes;
    public readonly ISetable<Note> ActiveNote = new Setable<Note>();

    public Notes(IEnumerable<string> initialNotes)
    {
        AllNotes.AddRange(initialNotes.Select(
            text => new Note(this) { Text = { Value = text } }));

        SelectedNotes = Computed.From(
            () => AllNotes.Where(n => n.IsSelected.Value).ToList());
    }

    public void Add(string note)
    {
        AllNotes.Add(new Note(this) { Text = { Value = note } });
    }
}
This is just the same code as before, minus any bindings to UI controls. The idea is that you could then make a bunch of different views that bind to the same data (and of course, you can write unit tests). We won’t bother to do that with the whole Notes system, but we will do it for individual notes. I just wanted to neaten this up before continuing.

There is one new thing: the ActiveNote property. Nothing fancy, it just holds a single Note (or null, of course).

I’ve changed Note so it takes a reference to Notes in its constructor (oh yes, it has a constructor now):

public class Note
{
    public readonly ISetable<string> Text = new Setable<string>();
    public readonly ISetable<bool> IsSelected = new Setable<bool>();
    public readonly ISetable<NotePriority> Priority = new Setable<NotePriority>();
    public readonly ISetable<bool> IsActive;

    public Note(Notes notes)
    {
        IsActive = Computed.From(
            get: () => notes.ActiveNote.Value == this,
            set: value =>
            {
                if (value)
                    notes.ActiveNote.Value = this;
                else if (notes.ActiveNote.Value == this)
                    notes.ActiveNote.Value = null;
            });
    }
}

This allows it to set up its own new IsActive property that is true only if this note is currently the active one. It’s a Computed with a getter and a setter, so it provides a different way to interact with Notes.ActiveNote:

noteA.IsActive.Value = true;  // ActiveNote is now noteA
noteB.IsActive.Value = true;  // ActiveNote is now noteB
noteB.IsActive.Value = false; // ActiveNote is now null

So although it looks like each Note has an independent bool property, they are utterly co-dependent. Only one can be true at a time. Compare this to the typical hacky approach where you actually give each Note a separate bool and then try to keep them in line by manually setting them. How much more elegant to just declare how to simulate a bool property by using a single shared ActiveNote property. It’s pure poetry!
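Because the view model now has no UI references, that mutual exclusion is easy to verify in a test. Here’s a rough sketch; note that pulling notes out of AllNotes with First()/Last() (i.e. that ISetableList is enumerable) is an assumption on my part, not something shown in the post:

```csharp
// Hypothetical test sketch (assumes ISetableList<Note> can be enumerated).
var notes = new Notes(new[] { "first", "second" });
var noteA = notes.AllNotes.First();
var noteB = notes.AllNotes.Last();

noteA.IsActive.Value = true;            // ActiveNote is now noteA
noteB.IsActive.Value = true;            // activating B deactivates A
Debug.Assert(!noteA.IsActive.Value);
Debug.Assert(noteB.IsActive.Value);

noteB.IsActive.Value = false;           // nothing is active now
Debug.Assert(notes.ActiveNote.Value == null);
```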

The main NotesForm just sets up the binding to UI controls. In fact I’ve made it implement IBindsTo<Notes> just to follow the pattern. But really this is the same code as before, just rearranged. Oh, I added the concept of NotePriority:

public enum NotePriority
{
    Normal, Low, High
}

So Note has a setable Priority as well, which gives us a couple of new things to bind to in NoteListItemForm:

public void Bind(Note note)
{
    Computed.Do(() => BackColor = note.IsActive.Value
                            ? SystemColors.Highlight
                            : SystemColors.Window);

    Computed.Do(() => pictureBox.Image =
        note.Priority.Value == NotePriority.High ? Properties.Resources.high :
        note.Priority.Value == NotePriority.Low ? Properties.Resources.low :
        null);

    textBoxContent.Click += (s, ev) => note.IsActive.Value = true;
}

This is a chance to show how you can bind anything in your UI, even if there isn’t a special extension method for whatever it may be. The trick is to use Computed.Do, which we haven’t used so far. It’s just like Computed.From but it returns void and takes a plain Action. Now what the heck is the point of that? A computed observable that computes nothing and gives you back nothing to observe? Oh but it does! As with any void function, it returns a new version of the universe. Or more mundanely, it has side effects on this universe. Or even more mundanely, it has side effects on some control in our UI.

The beauty of it is that it runs the action once when you set it up, and then again whenever anything it depends on changes. So although you have to write imperative side-effecting statements, they are typically just assignments, and the right-hand side is an expression in terms of observables, just like you’d use in a binding. So it’s a way to roll your own kind of binding. Here I have two examples: the background color is different for the active note, and an icon is used for the two non-normal priorities:
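As another example of rolling your own binding (this one is my own illustration, not from the sample project, and the button name is made up), you could keep a delete button enabled only while at least one note is selected:

```csharp
// Hypothetical: Computed.Do re-runs this action whenever SelectedNotes
// changes, because it tracks whichever observables the action reads.
Computed.Do(() => buttonDelete.Enabled = notes.SelectedNotes.Value.Any());
```

No event subscription, no manual refresh; the dependency tracking does it all.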


There’s also a plain old Click event handler so the user has a way to set this Note to be the active one.

We can also display a more expanded view of the active note, using another small form called NoteEditingForm, which implements IBindsTo<Note> in the usual way.


Edit the text in the active view and it immediately changes in the list, and vice versa. Cool!

This is ridiculously easy. On the main form we add a new empty Panel, and then we bind it like this:
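(The snippet itself has been lost from the post; judging by the surrounding description, it would have been a single BindContent call, something like the following, where the panel name and the factory-delegate signature are my guesses:)

```csharp
// Guessed reconstruction: show a NoteEditingForm for the active note
// (or nothing while ActiveNote is null).
panelActiveNote.BindContent(notes.ActiveNote, () => new NoteEditingForm());
```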


That’s it. BindContent is actually a close relative of BindForEach, the difference being that it uses a single value that may be null, which we could think of as a list that has either zero items or one item. NoteEditingForm has one minor thing we haven’t seen previously: binding to radio buttons:

public void Bind(Note note)
{
    radioButtonHigh.BindChecked(note.Priority, NotePriority.High);
    radioButtonNormal.BindChecked(note.Priority, NotePriority.Normal);
    radioButtonLow.BindChecked(note.Priority, NotePriority.Low);
}

Easy-peasy. Okay, so what else could we do? What if there was an alternative view where we just show you a big scrollable list of these NoteEditingForm tiles? Sounds like a lot more work… No, you dummy! It’s just this:
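(Again the one-liner is missing from the post; it would presumably be a BindForEach over a scrollable panel, along these lines, with the panel name and exact signature assumed:)

```csharp
// Guessed reconstruction: one NoteEditingForm tile per note, added and
// removed automatically as AllNotes changes.
flowPanelAllNotes.BindForEach(notes.AllNotes, () => new NoteEditingForm());
```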


And immediately we have this:


Of course, you can edit in one view and it changes in all the others. They’re all bound to the same simple underlying view model, and the bindings are declarative, and therefore hard to get wrong. It makes the UI very malleable – you can very quickly try out ideas, move pieces around, and know that the pieces will keep working. And that in turn gives you the power to make a better UI, because experimentation (with feedback from real users, even if that’s just yourself) is the only way to make a better UI.

Code in the usual place. Next time, how we can integrate with async/await, and use throttling to reduce the rate of recomputation.

