
Variance expression in tell don’t ask programs

This time we’re going to explore how tell don’t ask (TDA) programs express variance, and hopefully come across an interesting corollary of our TDA definition.

We’ll consider two ubiquitous operations: getting and storing a value into a map. In both cases we’ll discuss the standard Java implementation before creating a speculative TDA version.

Example one: value retrieval

We want to look up the value associated with a particular key.

This operation has entropy two. Either the map contains the value or it does not.


V get(Object key);

Here the returned value is either:

  • null to signify absence
  • the present value

Amusingly, nothing prevents us from storing a null value other than some very hopeful javadoc.
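To see the ambiguity concretely, here is a minimal sketch (not from the original post) in which a stored null is indistinguishable from an absent key:

```java
import java.util.HashMap;
import java.util.Map;

public class NullAmbiguity {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("present-but-null", null); // nothing stops us

        // Both lookups return null; get() alone cannot distinguish
        // "key absent" from "key mapped to null".
        System.out.println(map.get("present-but-null")); // null
        System.out.println(map.get("never-stored"));     // null

        // Only a second question (containsKey) tells them apart.
        System.out.println(map.containsKey("present-but-null")); // true
        System.out.println(map.containsKey("never-stored"));     // false
    }
}
```

Note that disambiguating requires asking the map two questions – already a long way from tell don’t ask.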

Clearly this interface violates our TDA definition.

  • Should we be perturbed by this?

If such a core library concept is against us, perhaps we’ve chosen a poor language for TDA implementation; Java, after all, is frequently mocked for not being a tremendous OO language.

What about a lauded OO language, like Smalltalk? Well, Smalltalk’s dictionary class still provides value access via at:, which generates an error if a value isn’t present.

Even the more robust accessor

at: key ifAbsent: action

still returns the present value, rather than forwarding a message to it.
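Java 8’s Map grew comparable accessors, and they share the same property: each takes an action or fallback for the absent case, yet still hands the value back to the asker:

```java
import java.util.HashMap;
import java.util.Map;

public class StillReturning {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        // Both accept an 'absent' fallback, but return the value to
        // the caller rather than forwarding a message to it.
        int a = map.getOrDefault("missing", 0);          // returns the default
        int b = map.computeIfAbsent("missing", k -> 42); // computes, stores, and returns

        System.out.println(a); // 0
        System.out.println(b); // 42
    }
}
```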

Let’s persist with our TDA approach and see what happens.

TDA like we mean it

void forward(final K key, final Consumer<V> onPresent, final Runnable onAbsent);

Now we obey our TDA definition and end up passing in two more objects.

There’s that number two again! It sounds like we’ve expressed the variance in function parameters – we’ve given the map two functions that it could call depending on whether a value is present or not, and we trust it to call only one of them.
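A minimal implementation of forward, wrapping a plain HashMap (TdaMap is an illustrative name, not from any library), might look like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class TdaMap<K, V> {
    private final Map<K, V> delegate = new HashMap<>();

    public void put(K key, V value) {
        delegate.put(key, value);
    }

    // The map decides which of the two callbacks to invoke;
    // the caller never inspects the map's state directly.
    public void forward(final K key, final Consumer<V> onPresent, final Runnable onAbsent) {
        if (delegate.containsKey(key)) {
            onPresent.accept(delegate.get(key));
        } else {
            onAbsent.run();
        }
    }
}
```

A caller might then write map.forward(key, v -> render(v), () -> showError()) and trust the map to pick exactly one branch.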

Perhaps we could express the TDA variance as follows:

variance ⊂ downstream call graph

Example two: Storage

Now we want to put a new value into the map.

Once again, the entropy of this function sounds like it might be two:

  • There could be a value already present for that key; we overwrite it
  • There is no existing value and we just add the new key value pair.

In both cases the state of the map changes.

Java standard

V put(K key, V value)

Once again, not quite tell don’t ask compliant.

Side note: If one looks at the javadoc on java.util.Map, one notices that with the addition of exceptions and the possibilities of null the entropy is now far higher than two.

TDA like we mean it

void put(K key, V value, Consumer<V> oldValueExisted, Runnable noExistingValue);

Here again we have two extra parameters.

  • Does this new definition express the complete variance?

N.B. We could argue that oldValueExisted and noExistingValue don’t have access to the new key and value. Nothing in the TDA definition prevents us from binding this information into the implementations at construction time, though.
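A direct-dispatch implementation of this put (again wrapping a HashMap; TdaPut is an illustrative name) is straightforward:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class TdaPut<K, V> {
    private final Map<K, V> delegate = new HashMap<>();

    public void put(K key, V value, Consumer<V> oldValueExisted, Runnable noExistingValue) {
        // Check presence first, so a stored null doesn't masquerade as absence.
        final boolean wasPresent = delegate.containsKey(key);
        final V previous = delegate.put(key, value);
        if (wasPresent) {
            oldValueExisted.accept(previous);
        } else {
            noExistingValue.run();
        }
    }
}
```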

What’s really missing is the variance in the map’s state. This is left implicit – the caller still has a reference to the (now mutated) map, and the old version is gone, forever!

What if we wanted to use an immutable map in this style? A naive first attempt might look something like this:

Map<K,V> put(K key, V value, Consumer<V> onOldValue, Runnable noExistingValue);

The map decides which function to call next, but must also provide a new map to pass back to the original caller. This violates our TDA definition, however, and if either of the two callbacks want to play this game, we’re suddenly in trouble (they could be longer lived objects than a simple closure – how do we update their callers with their new values?).

It’s hard to see how we can scale this approach.

Conjecture: TDA mutation expression

What a pretentious subheading.

void put(K key, V value, Consumer<V> onOldValue, 
         Runnable noExistingValue, Consumer<Map<K,V>> onNewMap);

Now we’re fully tell don’t ask compliant, but what are we going to do with that new map?

Well, temporarily ignoring TDA, a caller could pass themselves as onNewMap. It could then tell its callers that it has changed, and so on, up the stack.
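A copy-on-write sketch of that idea (ImmutableTdaMap is an illustrative name; the onOldValue/noExistingValue callbacks are dropped for brevity): the caller supplies a Consumer that receives the new map, and can swap its own reference inside it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ImmutableTdaMap<K, V> {
    private final Map<K, V> contents;

    public ImmutableTdaMap() {
        this(new HashMap<K, V>());
    }

    private ImmutableTdaMap(Map<K, V> contents) {
        this.contents = contents;
    }

    // Rather than returning the new map, tell the interested party about it.
    public void put(K key, V value, Consumer<ImmutableTdaMap<K, V>> onNewMap) {
        final Map<K, V> copy = new HashMap<>(contents);
        copy.put(key, value);
        onNewMap.accept(new ImmutableTdaMap<>(copy));
    }

    public void forward(K key, Consumer<V> onPresent, Runnable onAbsent) {
        if (contents.containsKey(key)) {
            onPresent.accept(contents.get(key));
        } else {
            onAbsent.run();
        }
    }
}
```

The original map is untouched by put; whoever received onNewMap now holds the up-to-date version – and routing that fact to everyone else is exactly the problem discussed next.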

  • Can we do that and keep to the TDA definition?

We need the traditional extra layer of indirection.

Messages outgoing from our object can no longer be delivered directly, but will need to be captured/dispatched by some piece of architecture that knows where messages should be routed to and delivers them appropriately.

What’s the tapping noise I hear? Ye gods, we’ve just dug up the foundations of the actor model.

So, we have two choices:

  • Keep direct dispatch, but leave state change implicit
  • Abandon direct dispatch and build towards an actor like model.

Very important scoping note

In order to achieve TDA certification with the actor model, we’ll actually need to abandon synchronous dispatch (and qualify the TDA definition slightly).

Precisely why this is necessary is left (for now) as an exercise for the reader. We’ll spend the rest of this post discussing simple, direct dispatch TDA, and leave discussion of the actor model for another day.

The Consequences Of Implicit Mutation

This really ought to be the name of the next Blotted Science album.

We have roughly agreed that in TDA world:

variance ⊂ downstream call graph + (implicitly) mutable object state

This is perhaps more elegantly expressed as:

variance ⊂ downstream call graph for current method invocation +
           downstream call graph of future method invocations

That sounds quite worrying, no? We could see the variance of a call at the point of any future invocation. That’s a helluva lot of variance!

That is true, but our insistence on TDA at least allows each object to know precisely what messages it might get passed (and thus the possible states it can get into).

This fits with my experience in this area; as soon as we disregard TDA, understanding (and changing) code becomes extremely difficult.

On the up side

We do gain some immutability elsewhere. A large chunk of our object graph is often fixed at start time (spot the final fields of a given object).

For example, anyone using our TDA map could hold on to it for the entire program runtime, even though its contents (and thus behaviour) will evolve over time. The TDA map feels more like a subscription registry.


Caveat implementor!

TDA as we have defined it leads inexorably to mutable objects.

The only way to be sane (citation needed) in a mutable world is to encapsulate decisions made about state to the places where the state is. In a way, TDA is both the cause and solution of all of our problems.

Coming up

We are yet to discover the limits of TDA, and we haven’t really explored the effects of TDA at anything other than the unit level.

If TDA is such a good idea:

  • Why don’t library classes (like Map) obey our definition?
  • What effect does their lawlessness have on code that wants to be good?

We’ll try to make some headway on this soon.

In addition, we also need to cover:

  1. Variance expression and encapsulation in functional programs (FP).

    This should lead to a nice lemma on nesting FP in TDA (and vice versa). We’ll probably go here next, as a brief diversion (not least because I just extracted most of it from this post).

  2. The actor model

What is Object Oriented Programming, anyway?

Before I embark on a series of potentially ill judged posts regarding some aspects of OOP, perhaps it would be a good idea to try to define what it is.

This is somewhat dangerous territory. I am reminded of the final sections of Stewart Lee’s masterpiece “90’s comedian” where he draws a chalk circle around himself and the microphone, repeating a ritual practised by medieval clowns to protect themselves against persecution from heresy (I’ll hopefully be able to link to this soon).

Other people’s attempts

Wikipedia’s attempt is the best example to start with.

Object-oriented programming (OOP) is a programming paradigm that represents concepts as “objects” that have data fields (attributes that describe the object) and associated procedures known as methods. Objects, which are usually instances of classes, are used to interact with one another to design applications and computer programs.[1][2] C++, Objective-C, Smalltalk, Java, C#, Perl, Python, Ruby and PHP are examples of object-oriented programming languages.

A little taster there of the vagaries that are waiting for us. One can almost see the scars from previous edit wars.

What about the folks who came up with the idea?

The C2 Wiki has enough different definitions that there’s even an index page listing all the variants.

From all these pages, my favourite snippet:

OO is not a well defined concept — Jonathan Rees

I’m inclined to agree with that, on the above evidence.

Baby steps

If we are unable to define OOP, can we distil a sentence out of all that noise? Well, thankfully, someone already has:

Procedural code gets information then makes decisions. Object-oriented code tells objects to do things. — Alec Sharp – a bit of a meta reference, but there we go

Finally, some terseness! C2Wiki provides some more colour.

I’ve been trapped in OO land recently, and tell don’t ask (TDA) has proved a neat little principle to guide the design.

We’ll back off from adding another definition of OOP; we’ll even omit any explanation of why TDA is useful (more on this soon). Instead, we’ll start by attempting to pin down a rigorous description of TDA.

Definition : tell don’t ask

A function is tell don’t ask if information moves only from the caller to the callee.

This prevents the caller making decisions based on the callee’s state. This isn’t complicated or new, it’s just strict encapsulation.

I hear a cry at the back of the room: “That seems a bit dry. Can we have something a bit more concrete?”.

Ok, mysterious interloper, let’s explore some alternatives.

Other definitions (that don’t work)

Interloper: Hey! They do work! Why don’t we just say:

A function is tell don’t ask if it has void return type.

Narrator: Well, what about this counter-example: OutputStream‘s write

public void write(byte[] b) throws IOException

Narrator: Now, the caller can catch IOException and infer something about the status of the write, and thus the state of the OutputStream.
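As a sketch of that inference (writeSucceeded and brokenStream are illustrative helpers, not JDK API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class InferFromException {
    // An OutputStream whose write always fails, standing in for a broken pipe.
    static OutputStream brokenStream() {
        return new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                throw new IOException("broken");
            }
        };
    }

    // The caller of a void method still learns about the callee's state:
    // catching IOException answers the question "did the write work?".
    public static boolean writeSucceeded(OutputStream out, byte[] bytes) {
        try {
            out.write(bytes);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(writeSucceeded(new ByteArrayOutputStream(), new byte[]{1})); // true
        System.out.println(writeSucceeded(brokenStream(), new byte[]{1}));              // false
    }
}
```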

Interloper: Ok, Mr. Pedant, how about this:

A function is tell don’t ask if it has void return type and never throws

Narrator: Ho ho, we’re getting there, but you forgot about the cavepeople’s return value:

public class A {
    public void doThing(final B b) {
        final List<String> out = new LinkedList<String>();
        b.showThyself(out);
        // now we know all of B's strings. Stupid B!
    }
}

public class B {
    private final List<String> myStrings = ImmutableList.of("a", "b");

    public void showThyself(final List<? super String> out) {
        for (final String s : myStrings) out.add(s);
    }
}
Interloper: Oh man. People still do that?

Narrator: Yes. I know. What can you do?

Interloper: *sigh. That does give me another idea though, how about this:

A function is tell don’t ask if:

  1. It has void return type
  2. It never throws
  3. All the function’s parameters are either classes, whose functions all obey 1 and 2, or primitives

Narrator: Now we’re getting somewhere. Have you considered this case, though?

public class A {
    public void doThing(final B b) {
        b.doThing(this); // we pass ourselves in...
    }

    public void aha(final String theSecretOfB) {
        // now we have the secret of B! Stupid B!
    }
}

public class B {
    private final String secret;

    public B(String secret) {
        this.secret = secret;
    }

    public void doThing(final A a) {
        a.aha(secret); // ...and B obligingly calls us back
    }
}
Interloper: Oh man, this is getting seriously boring; ok, last go. Maybe add a clause to say that the caller can’t pass itself as a parameter?

Narrator: Well, yes, but then we can get around that by wrapping the caller inside some other class that we then pass into the function.

Interloper: How about we enforce that no parameter can contain the caller then?

Narrator: Well, that’s fine, but is the concrete definition that we now have more or less useful than the abstract one?

A function is tell don’t ask if:

  1. It has void return type
  2. It never throws
  3. All the function’s parameters are either classes, whose functions all obey 1 and 2, or primitives
  4. None of the function’s parameters may contain a reference to the caller, directly or indirectly

Interloper: Ok, I see your point. What about reflection, though?

Narrator: Well, once we start using reflection, all bets are off – we can’t even guarantee the privacy of fields, so attempting to define things with reflection in play is just pointless.


  • What about queries?

I’d argue that the whole concept of ‘query’ is exactly the opposite of TDA: we ‘ask’ a question and then act on the response. This is a deliberate exclusion from our toolbox; we want to see if we can get by without it.
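To make the contrast concrete – a sketch with illustrative names:

```java
public class Account {
    private int balance = 100;

    // Ask style: the caller extracts state and makes the decision itself.
    public int getBalance() {
        return balance;
    }

    // Tell style: information (the amount, the callbacks) flows only from
    // caller to account; the account decides and dispatches.
    public void withdraw(int amount, Runnable onSuccess, Runnable onInsufficientFunds) {
        if (amount <= balance) {
            balance -= amount;
            onSuccess.run();
        } else {
            onInsufficientFunds.run();
        }
    }
}
```

The query route would be if (account.getBalance() >= amount) { ... } – asking a question, then deciding on the account’s behalf.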

  • Can real programs really work this way?

Let’s find out! We’ll start with our tight definition of TDA, and perhaps we’ll find some species of program that are difficult to express. On the other hand, perhaps everything will work. Place your bets…now!

At this point, I am minded to reproduce some quotations on being restrictive that David Wheeler’s essay (on fixing unix/linux filenames) includes.

Seek freedom and become captive of your desires, seek discipline and find your liberty — Frank Herbert, Dune

Negative freedom is freedom from constraint, that is, permission to do things; Positive freedom is empowerment, that is, ability to do things… Negative and positive freedoms, it might seem, are two different descriptions of the same thing. No! Life is not so simple. There is reason to think that constraints (prohibitions, if you like) can actually help people to do things better. Constraints can enhance ability… — Angus Sibley, “Two Kinds of Freedom”

Next time

We’ll talk about how TDA programs express variance, and contrast that approach with how variance is expressed in functional programs. We might make the joke about non-functional programs, for the sake of tradition. See you then.

Declaring unit test bankruptcy

I have recently started a major stream of work centered on a particular application in the LMAX stack. This application has had plenty of features added to it over the last few years, but nothing has really required an overhaul.

Our work, however, is somewhat more involved; even finishing the simplest of our requirements has been taking a week or so – that’s a long time, for us.

Hitting the buffers

Our method, to begin with, looked something like the following:

  • Write acceptance tests for feature (we tend to batch these up – it helps us explore the story)
  • Write integration tests for our application, supporting the feature (these usually resemble the ATs)
  • Spike implementation within the application
  • Use knowledge gained from spike to drive refactoring
  • Repeat the last two steps until the ATs and ITs pass

We’re very much in the Kent Beck school of development here:

First refactor the program to make it easy to add the feature, then add the feature

Our problem was that refactoring the program was hard! We discovered that while making the ITs and ATs pass was easy, getting the unit tests to compile and pass was much harder.

This was frustrating; not least because the unit tests were of the overspecified mock interaction sort. If we moved even the smallest piece of validation, anywhere from one to a hundred unit tests would fail.

Symptom, not cause

We blamed the tests – they were stupid tests, we said; why had anyone bothered to write them? So, we tried to rewrite a couple of unit tests in a more lean style – just creating what we needed to test our new feature.

This felt a lot better right up until we finished, when we looked from our new tests to the old tests, and from the old tests to the new tests; but already it was impossible to say which was which.

These tests were a symptom that the code underneath was jumbled. Someone had attempted to break up large, core domain objects into separate responsibilities by pulling behaviour up into ‘processor’ objects, which had made things smaller but also broke encapsulation. More on this another day.

This was novel – here was a case where the wrong refactoring had painted us into a corner. The problematic tests this ill-judged refactoring wrought blocked every attempt to escape to a better place.

Declaring unit test bankruptcy

We decided to remove these unit tests. They were creating a catch-22: we couldn’t refactor the code without breaking the tests, and we couldn’t make the tests better without fixing the code.

We ended up working like this:

  • Write acceptance tests for feature
  • Write one integration test for the application (a deliberately smaller step)
  • Spike implementation within the application
  • Run unit tests with spike code to detect pain
  • Rewrite those unit tests as integration tests
  • Delete the painful unit tests
  • Revert the spike, and use knowledge gained from spike to drive refactoring
  • Make new integration tests pass with well factored code
  • Continue until all the ATs pass

This allows us to make swingeing refactorings safely; speeding our journey towards a place where we may one day be able to TDD all the way down.

Enabling factors

We were lucky to have:

  • A mature integration test framework. This made it easy to write new tests that assert only on the I/O events from our particular application.
  • A single threaded application (so integration tests are almost as quick as unit tests, and they don’t suffer from races)
  • Extensive AT coverage over the system as a whole.

Beware, though, for these are double-edged swords. Perhaps it is because our framework makes ATs and ITs so easy to create that we neglected the factoring of the code within.

It seems we have been guilty of declaring stories done when the acceptance tests all pass. If only life were that simple!


In TDD, at the unit level, the method is as follows:

  • (write new test) Red
  • (make test pass) Green
  • Refactor

Here, ‘refactor’ is usually removal of duplication, and separation of responsibilities into separate classes.

We need to execute the refactor step ‘all the way up’.

The refactor step for ITDD and ATDD

I wrote a sort of checklist of the things I think about, but the items were:

  • too specific
  • impenetrable
  • probably wrong

Instead, I advise that all one needs to do at this point is to stop and think. More specific advice is left as an exercise for the reader. (Hint: think of the principles you apply at the unit level – can you scale them up to the level of systems and applications?)


  • Listen to your tests! We could have avoided this whole affair if we had listened to the tests at the time of writing.
  • Make sure your definition of done includes the ‘refactor’ step.