Exploring the Nature of Software Delivery

My understanding of the nature of software development is undergoing change as I continue to investigate the works of such diverse thinkers as Karl Popper, Alan Turing, Dedre Gentner, David Easton and many others. This series of articles reflects on what I have learned. It is a work in progress. Your feedback is welcome. You can reach me via Twitter @logosity or through my professional online presence: Agnomia.

Part I: The Theory Under Test

Nevertheless there are still some who do believe that philosophy can pose genuine problems about things [...] and if by chance they find themselves unable to accept any of the existing creeds, all they can do is to begin afresh from the beginning.
Karl Popper in The Logic of Scientific Discovery

A few years ago, I started to write down some of the things I've learned about the nature of software delivery. I didn't make much progress beyond a couple of anecdotes, a few aphoristic tweets and drafts of some vaguely analytic concepts. They were easy enough to discuss in my daily life, but all of my attempts to generalize them for a wider audience were... disappointing. I was so frustrated, I almost gave up. But then, a funny thing happened: I started to become curious about why it was so difficult to write about this topic, and began to think (and write) about that instead.

This document is the result. It is both reflective and philosophical in character, matching the nature of my investigation and my preference for that sort of thing. As such, there is little in the way of immediately applicable practices, recommendations or advice - as will become clear, I feel we need to have a solid foundation for describing delivery before we can say how we ought to practice it.1 Instead, this exploration advances a sketch: a suggestion of a different model for thinking about and explaining delivery than you might be used to. That model is not an Agile - or even a Software Engineering - view of delivery, though it builds upon and (where appropriate) respectfully critiques both.

Yet, despite its incomplete nature, I already find it useful for explaining ideas. It has also given me a new way to look at my current work, so that I am starting to feel comfortable sharing it more widely.

Also, it has reached a point where if it is to grow further, I need to let it out into the world. Whether it survives and grows, or fades away is yet to be seen. But whatever its fate, I have already found value in the journey. And really, what more can we ask for than that?

The delivery systems model

The central goal of this enquiry - and thus of this series of articles - is to develop a useful way to describe a delivery effort in order to share knowledge about the delivery process. This raises the question: what is my definition of delivery? As you will see, I find this a difficult question to answer without resorting to concepts that I feel are poorly defined themselves. However, I offer the following informal definition as a placeholder:

By delivery, I am referring to the practice of one or more people procuring, creating and/or configuring one or more computational artifacts2 for the use of another. That is: all computing development where multiple people are involved3.

Conceptually, the model consists of just a few primitives that are related to one another in a systems view of delivery. Much like language primitives, these elements - and terms derived from them - are the basic building blocks for explaining delivery efforts using the model. I will define them - and explain some of the ways I envision using the model - as I recount how the model evolved.

delivery system
Figure 1: The Delivery System. Demand and support signals in an environment result in decisions and changes to computational artifacts. Such artifacts comprise a formal model of an implicit (yet empirical) theory that is subject to the possibilities and limitations of computer science.


In this first part, we're going to talk a little bit about context and then introduce the idea of the Theory Under Test by describing the philosophical concepts of falsification and demarcation.

The importance of context

The earliest and most obvious element was the central role of context. Ironically, the importance of context was one of the things I was finding difficult to write about. I had hoped to build on my speakerconf presentation in 2011, extending the idea that crossing context barriers added cost to delivery efforts. Now, I was starting to see that context not only impacted my delivery efforts themselves, but also my ability to write about them. For example:

  1. It is very difficult to separate the "wisdom" of a delivery effort from its context. What makes sense in one environment can be downright harmful in another. Relatedly, the widespread belief in the existence of "best" practices encourages an extra-contextual objectivity that I found unjustifiable, meaning I couldn't reach any conclusions without adding pages of qualifiers.

  2. Without context, many common software engineering terms (e.g. test, product, system, customer, feature & success to name just a few) are so vague as to be meaningless in the sense that a conclusion based on them is valid only to the extent that the audience shares the same context as the writer. I couldn't escape a sense that I would either be "preaching to the choir" or critically misunderstood, resulting in me adding even more disclaimers and qualifiers.

No wonder I almost gave up...

A way forward...

To be honest, I did kinda give up - in that I didn't set out right away to get beyond "It's all about context." But then in 2015, I decided to read "Fooled by Randomness" by Nassim Nicholas Taleb. It turned out to be a fateful decision because Taleb holds Karl Popper in high regard. And so I was exposed to Popper's ideas on falsifiability, which led me to learn more about the demarcation problem.

These two concepts play an important role in my story, so a quick summary is in order: The demarcation problem is a philosophical question that asks what criteria separate real-world scientific (i.e. empirical) knowledge from both non-scientific knowledge (e.g. mathematics & logic) and beliefs (e.g. pseudo-science & superstition). According to Popper, the demarcation boundary lies between the sets of statements that are falsifiable, and those that are not.

The demarcation boundary
Figure 2: One way to classify knowledge is whether it is arrived at by reasoning from known facts (i.e. deduction) or generalizing from experience (i.e. induction). One long-standing question in philosophy has been how to separate correct generalization from metaphysical beliefs (The Problem of Demarcation). Popper's insight was to reject inductive logic in favor of a demarcation based on falsifiability (red line).


The basic idea is that because all theoretical statements are generalized from individual examples, none are verifiable, but some are falsifiable. Popper thus rejected the idea that inductive logic could ever lead to knowledge. Instead, the demarcation boundary establishes a domain of certainty where deductive logic can proceed from theoretical premises provisionally accepted (i.e. not falsified) in order to say meaningful things about the world. The classic example is the existence of black swans:4 We can never prove (i.e. verify) the statement "all swans are white", because no matter how many white swans we encounter, the next one might be black. We can, however, disprove (i.e. falsify) the statement simply by finding a single black swan.5 Scientific statements are those that include the possibility of observing phenomena that allow us to reject them.

One of the reasons I have a philosophy degree (and that I am a programmer) is that I find logic & epistemology to be endlessly fascinating topics. Finally tackling Popper offered an opportunity to explore both. So, while I wasn't really thinking of the difficulties I'd encountered in my writing, Taleb inspired me to read Popper anyway - a chance encounter that triggered an avalanche of ideas, ultimately transforming my understanding of what delivery even means.

This was not obvious right away, however. When I decided to dig into "The Logic of Scientific Discovery" it was simply out of curiosity. But, almost immediately I saw that my earlier thoughts on context were a specific example of the problem of the intersubjectivity of knowledge. In a scientific context, theories are the vehicle of intersubjective knowledge - and as you might have guessed, Popper's claim is that for a theory to be considered scientific in character, it must include statements that are falsifiable.6

I was already starting to see how I might use this idea to improve my writing on the subject of delivery, when I encountered this:

Theories are nets cast to catch what we call ‘the world’: to rationalize, to explain, and to master it. We endeavour to make the mesh ever finer and finer.
Karl Popper in The Logic of Scientific Discovery (Kindle Locations 835-836)

I'd been using this metaphor of ever-finer mesh netting to describe how I practiced Test-Driven Development for over 15 years. To encounter it in this context was mind-blowing. What I was reading suggested a whole new way for me to think about writing software: that testing assumptions about the behavior of code is analogous to falsifying a scientific theory! Furthermore, code statements (like the statements of a theory) comprise a deductive system that lies entirely within the demarcation boundary, implying that the process of extending scientific theories has a lot in common with writing code. This reminded me, of course, of Edsger Dijkstra's famous observation that testing can show the presence of bugs, but never their absence - but I'd never heard anyone extend the idea to the nature of the development process itself. Yet Popper's take on the validation of scientific theories sounded so much like the process of writing software that I had to explore it further.
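
The analogy can be made concrete with a minimal sketch (the function and the test cases below are hypothetical, chosen purely for illustration): a test suite is a series of attempted falsifications of the implicit theory "this code is correct". Passing tests never verify the theory; they merely fail to refute it.

```python
# A tiny illustration of testing-as-falsification (hypothetical example).
# The "theory under test": is_prime correctly classifies integers.

def is_prime(n: int) -> bool:
    """Naive primality check - the code whose implicit theory we probe."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Each assertion is an attempted falsification. No number of passing
# assertions can verify the theory (cf. Dijkstra) - they only fail to
# refute it. A single counterexample would falsify it outright.
attempted_falsifications = [
    (1, False), (2, True), (3, True), (4, False), (9, False), (17, True),
]

for n, expected in attempted_falsifications:
    assert is_prime(n) == expected, f"theory falsified by n={n}"

print("theory survives: provisionally accepted, never proven")
```

In Popper's terms, a green test run leaves the theory not falsified rather than proven true - which is exactly the epistemic status Dijkstra assigned to tested software.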

Not only had my chance reading of Popper suggested a whole new way to understand why I was finding it difficult to write about delivery, it offered the promise of a whole new way to define it.

The nature of delivery...

And that's how "the theory under test" became part of my delivery model. Instead of a vaguely-defined "system" being the object of delivery, I now saw a delivery effort as concerned with creating and changing a formal model7 of a theory. Thus, anything that is true about the nature and limitations of scientific enquiry could also potentially be relevant to explaining or investigating a delivery effort.

But, as I investigated further, I noticed something strange. This idea seemed to be both aligned with, and at odds with, more traditional views of delivery. Take, for example, the role specification plays in this philosophy of computer science article (Section 2):

[A specification] provides the criteria of correctness and malfunction. From this perspective, the role of specification is a normative one. If one asks does the device work, it is the definition functioning as a specification that tells us whether it does. Indeed, without it the question would be moot. At all levels of abstraction, the logical role of specification is always the same: it provides a criterion for correctness and malfunction.

Later in this same section the author (quoting himself) argues that:

specifications are not intended to be a scientific theory of anything
Raymond Turner in The Philosophy of Computer Science

and:

[S]pecifications are taken to be prescribed in advance of the artifact construction; they guide the implementer. This might be taken to suggest a more substantive role for specification i.e., to provide a methodology for the construction of the artifact. However, the method by which we arrive at the artifact is a separate issue to its specification.
Raymond Turner in The Philosophy of Computer Science

Turner seems simultaneously to admit the existence of a scientific theory - and to insist that specs are not that theory. After some reflection and comparison to Popper and other sources on the structure of scientific theories, I concluded that the specification as Turner describes it plays the role of the model. The implementation plays a subordinate role, not directly tied to theory building at all. In other words, Turner sees programming as a software engineering activity. The correctness of the specification (and where it comes from) is thus out of scope when considering the nature of delivery.

And that is perhaps appropriate in an article on the philosophy of computer science. But Turner also seems to suggest that the origin of a specification is outside the purview of Software Engineering itself - yet remains critical to its correctness! If so, I reject this idea: it seems to me that something as important as "how do we know we are delivering the right thing" should not be considered out of scope for discussing the nature of delivery.

And the sentiment is not limited to Dr. Turner. It seems to be central to the formation of the SE discipline itself. Take, for example, the following, from a work cited on Wikipedia's Software Engineering page:

The discipline of software engineering was created to address poor quality of software, get projects exceeding time and budget under control, and ensure that software is built systematically, rigorously, measurably, on time, on budget, and within specification[12]
Matti Tedre in Science of Computing: Shaping a Discipline

Note the specification is again assumed to exist. Funny thing is, in twenty years of writing software professionally, I've seen a true old-school specification exactly ZERO times. As someone who discovered Extreme Programming in 1999 because I couldn't find anything useful on the topic of requirements management to help deal with a particularly difficult customer situation, this doesn't really surprise me. What does surprise me is that the existence of a specification as an input into the delivery process persists as a core idea in Software Engineering. Then again, many accepted truths of SE might actually be little more than folklore, so perhaps I shouldn't be surprised by that either.

Admittedly, there has been a focus on improving such practices8 throughout the history of the Agile movement, but Agile methods don't really challenge the existence of a specification so much as how it is discovered, refined and recorded. Or to put it another way, Agile is still part of the Software Engineering paradigm, still tied to the manufacturing metaphor (products, designs, builds, pipelines, productivity, architecture, etc.). And still subject to the same theoretical weakness when describing how to go about "building the right thing."9

On the other hand, SE (and Agile) have a lot of useful things to say about building things right. As I pondered this, it occurred to me that this might be related to the Problem of Demarcation. The delivery knowledge that has generalized well tends to be of a technical nature (e.g. how to optimize code, how to structure/modularize large-scale code bases, etc.).

This suggested the possibility that demarcation could be expressed more directly in computational terms. So, we will pick up the story there in the forthcoming part II...


Here's what I expect to cover in the next few segments:
  • Part II: Demarcation in CS terms (i.e. integration with computer science)
  • Part III: Appraising the role of metaphor and analogy in delivery
  • Part IV: Explaining the role of David Easton and systems theory

  1. Or to put that in theoretical terms: This investigation concerns the nature of a rigorous descriptive theory of delivery, which I feel is necessary to advance (and evaluate) a rigorous normative theory about how one ought to deliver technology solutions. 

  2. We will talk more about the nature of computational artifacts in Part II. For now, I will define it as any embodiment of a computation, either in hardware or software. 

  3. Some aspects of this model are relevant for the case where a single person conceives, creates and uses the computational system in question, but many of the most difficult elements of delivery are its social aspects, including knowledge-sharing. Thus, my informal definition of delivery is inherently social. 

  4. Taleb's decision to title his subsequent work "The Black Swan" is, among other things, clearly a nod to Popper, Hume and the classic tradition of the problem of induction (which, according to Popper, is addressed by his solution to the demarcation problem). 

  5. Logically, this is a propositional form known as Modus Tollens. Symbolically (where $s$ ranges over swans, $W(s)$ means "$s$ is white", and $B$ is the observation of a non-white swan): $(\forall s\, W(s)) \to \neg B;\ B \therefore \neg (\forall s\, W(s))$ 

  6. A valid or accepted theory must also have been subjected to - and survived - some amount of actual attempts to falsify it. Much of the rest of "The Logic of Scientific Discovery" is concerned with what such a process should look like; when theories should be rejected, and so forth. The character and limitations of such "research programs" (to use Lakatos' term) were a central question for philosophy of science throughout the 20th century, but we can mostly skip that for now. A good primer on the subject, for those who want to learn more, is: Theory and Reality: An Introduction to the Philosophy of Science by Peter Godfrey-Smith 

  7. One point of potential confusion in this topic, is that I talk about modeling in two ways - one conceptually, and one within the context of a delivery effort. To clarify: The Delivery System model has as its subject the concept of delivery efforts in general. Meanwhile, the role of a computational artifact within a particular delivery effort is to model the theory under test. 

  8. "Iteration", "communication" and "business value" are three fairly common concepts in all the Agile methods I'm aware of. 

  9. The Agile diaspora has weakened the concept a fair bit, as a quick googling of the phrase "the code is the design" will attest. With the Theory Under Test, I hope to take that one step further: both specification and design, as parts of the manufacturing metaphor underlying SE, are deemphasized in favor of something more like: the delivered computational artifact(s) are a formal model of the theory under test. The full impact of this shift in metaphor will hopefully become clear when we get to the systems-theoretical aspects of the delivery system model.