
Anti-Scale

A Manifesto for Sane Software Development

Posted on March 15, 2020

Introduction

Imagine there’s no infinite scale
It’s easy if you try
No elastic infrastructure below us
Above us only a few gentle users
Imagine all the developers writing simple, useful applications

Imagine there are no load balancers
It isn’t hard to do
Nothing to scale up or down
And no microservices framework too
Imagine all the developers writing value-adding code rather than boilerplate… You…

You may say I’m a dreamer
But I’m not the only one
I hope some day you’ll join us
And the world will be as one

Imagine no testing frameworks nor CI/CD pipelines
I wonder if you can
No need for code coverage or auto-deployments
A brotherhood of geeks
Imagine all the developers
Sharing all the world… You…

You may say I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And the world will live as one

Software has become too complex. The art of writing simple, effective, and fun computer programs has been lost. CI/CD pipelines, microservices architecture, enterprise application frameworks, Agile ceremonies, and compulsory unit testing are just a few of the impediments between us and yet-to-be-written, useful applications.

Not only are we expected to spend a Herculean effort building scaffolding before we can greet our users with “Hello World”, but we are also dictated the way in which we are supposed to interact with our fellow human beings.

We are no longer individuals with histories and rich, multi-dimensional personalities; we have become anonymous and disposable “squad members”. Our interactions are named and scheduled. “Stand ups” and “grooming sessions” are just some of the cynical and recurring events that have hijacked our engineering social space. We are supposed to reason by shuffling Post-it notes on Kanban boards rather than by collaborating in a way that is relevant for the problem at hand. This madness has to stop.

Why Anti-Scale?

The vast majority of businesses in the UK employ fewer than 10 people, according to a Business Briefing document published by the House of Commons Library in 2019. A similar distribution applies to most of the developed world—and an even lower average is expected in developing nations. Why are we then building software as though we were to become the next FAANG (Facebook, Amazon, Apple, Netflix, Google)?

Anti-Scale is not about burying one’s head in the sand; on the contrary, it is about hyper-rationality: asking tough questions about the actual, finite scale of the problem at hand.

Anti-Scale is about answering the quintessential question, “Will it scale?”, with a firm and eloquent answer: No, it won’t scale, there is a ceiling.

In a nutshell, we need an Anti-Scale perspective on software because most software serves a small, bounded audience, yet is built as though it had to serve millions.

Anti-Scale Principles

As in the Agile Manifesto, the principles are phrased using the “X over Y” structure. The first principles are about human scale, whereas the later ones are about technical scale. Unlike the Agile Manifesto, though, we tolerate but don’t necessarily value the things on the right.

I - Patrols over Teams

Two is company, three’s a crowd. Law enforcement seems to understand organisational units better than the software industry. A patrol, two police officers who complement each other—for example, when one drives, the other looks out for disturbances—is a proven arrangement of people. Defining a team’s size by the number of pizzas needed to feed it seems rather arbitrary. Moreover, conventional large teams (especially when their size is increased in the face of demand) often incur exponential friction as per Brooks’s Law, which is an observation on the effects of combinatorial explosion on people. For example, if we consider a typical “two pizza team” of seven members and apply the combinatorial formula, N(N-1)/2, where N is the team size, we end up with twenty-one communication lines; an order of magnitude higher than the number applicable to a patrol of two people—just one.
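The arithmetic can be checked in a couple of lines of Python (the helper function below is merely an illustration of the formula quoted above):

```python
# Communication lines in a team of size N, per the combinatorial
# formula N(N-1)/2: every pair of members is a potential channel.
def communication_lines(n: int) -> int:
    return n * (n - 1) // 2

print(communication_lines(2))  # a patrol: 1 line
print(communication_lines(7))  # a "two pizza team": 21 lines
```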

This principle is not a disguised appeal to apply pair programming techniques. Pair programming, if used at all, is a technique that is up to the patrol to employ or not, rather than a methodology for managers to enforce. Likewise, this principle does not require the mandatory presence of two people. A single, lone wolf developer is perfectly fine, especially if she has a combination of introversion (feels energised working alone) and conscientiousness (requires little supervision or external stimuli).

II - Engineering over Agile Ceremonies

Al-Nakbah, which means “catastrophe” or “cataclysm” in Arabic, is how Palestinians refer to the exodus they endured in 1948 during the establishment of Israel. No other word does better justice at describing the establishment of Agile “methodologies” (XP, Scrum, etc.) in the software engineering world.

The fundamental challenge in software engineering is not whether a line of code is written in response to a 200-page business requirements document or a ticket on Jira, but how to write software in and of itself. Writing software is becoming increasingly more difficult as the discipline and the industry mature. Being a software engineer is not a “personality trait”, as typically perceived by outsiders, where the engineering aspect is taken for granted and the actual skill gaps to be filled are in the management pseudoscience domain. What company do you know that has brought in the likes of Donald Knuth or Erik Meijer, as opposed to “Agile coaches”, to help fill their teams’ supposed “skill gaps”? I thought so.

Algorithms, data structures, complexity theory, discrete mathematics, and so on are just the tip of the iceberg in terms of the various disciplines that developers may need to master before they are able to write effective code. Every minute wasted on sprint planning sessions, stand-ups, grooming sessions, and other artificial ceremonies is a minute that does not go into the engineering of the product. Software engineers are not goldfish who can’t plan beyond two-week sprints, and business people are not illiterate minions whose range of English expression has to be reduced to filling in the blanks in “As … I want … so that I can achieve …”.

The two fundamental questions in software engineering are (1) what is the artefact that needs to be built, and (2) what is the elicitation engineering technique most conducive to capturing the relevant details pertaining to said artefact? It may be that the problem is one of human-computer interaction, where wireframes or mocks may be useful. It may be, instead, that the problem has to do with modelling a complex domain, in which case entity-relationship diagrams may be appropriate. A typical application will require the engineering of many different aspects at various times.

There is an inversion of control here. The requirements are not a flat-shaped output produced by business people (who have to be burdened with writing moronic user stories and run a raffle to see who ends up as “product owner”) but an input whose shape is determined by the developer, depending on the nature of the problem at hand.

The walls in an area where developers work would be full of engineering artefacts such as state machine diagrams, entity diagrams, flow charts, wireframes, mathematical equations, and so on. This is not “documentation” per se but the kind of material that the developers would likely produce to wrap their heads around the problem at hand, and verify their understanding with their sponsors. If a “requirements” question arises, the common ground would be one of these diagrams (or even a sketch drawn from scratch on a whiteboard) rather than a yellow Post-it note with a “user story” written on it.

If we want better software, we need to give developers more “cave time” to do proper engineering, and get rid of Agile rituals that only serve to provide the illusion of progress—and entertain middle management.

III - Prototyping over Premature Testing

Test Driven Development (TDD) is what every developer believes the world expects from them, regardless of whether this methodology is explicitly requested or not.

Imagine Jenna, a senior developer, who has just come home from work after a long, stressful day. Her husband Joe has set up the dinner table for her. She sits at the table but doesn’t eat or speak. Whilst she contemplates, motionless, the bowl of pea soup in front of her, Joe asks:

“What is wrong honey?”.

Jenna replies:

“Today I wrote a piece of code without writing a unit test for it first”.

How depressing. How on earth have we fallen this low? How many developers often feel like Jenna? But let’s continue…

Joe holds Jenna’s hands, looks directly into her eyes and asks:

“Why did you do that my love? I promise I won’t judge you”

Jenna hesitates but finally answers, shedding a tear:

“I just wanted to see if my function did what I thought it would”.

More often than not, just like Jenna, we write software to discover the shape of the problem rather than the guts of the solution. In turn, a new insight into the original problem emerges, which leads to a different take on the solution. This creative feedback loop is what allows great software to be conceived: software that surpasses expectations and results in a delightful experience for its users. Would Richard Stallman have conceived Emacs had he written unit tests for it first?

Once the problem has been framed, only then can we reason about the problem statement’s applicable behavioural boundaries and crystallise them in one or more unit tests. The priority should always be building a prototype that characterises the problem, rather than a solution that presumes a firm understanding of the problem by means of rigid class boundary conditions. The thing worth remembering here is that unit tests are clueless about users’ feelings. It is easier to build the wrong product, fully unit tested and with 100% code coverage, than it is to build the right product—notwithstanding serious bugs lurking around.
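A tiny sketch of this prototype-first loop, with a purely hypothetical helper (its name and its normalisation rules are invented for illustration):

```python
# Exploratory code, written only to discover the shape of the problem:
# how should a messy product label be normalised?
def normalise_label(raw: str) -> str:
    return " ".join(raw.split()).title()

# First, just like Jenna, we poke at the function to see if it does
# what we thought it would:
print(normalise_label("  blueberry   muffin "))

# Only once the behaviour feels right do we crystallise it as a unit test:
def test_normalise_label():
    assert normalise_label("  blueberry   muffin ") == "Blueberry Muffin"

test_normalise_label()
```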

IV - Local Processes over Distributed Applications

A modular solution should be composed of simple programs that integrate with one another using POSIX primitives such as command arguments, return codes, signals, and input/output/error streams.

For example, a component that creates a thumbnail image out of a large picture, called, say, big2small.py may take the large image as an input stream and return the new smaller, converted image on standard output. This allows a consuming main program (say, a blog application) to operate with the command without having to write and read temporary files. The command can be also be used on the command line in a standalone fashion like this:

big2small.py < big_cat.jpg > small_cat.jpg
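To illustrate the plumbing (not the image processing), here is a sketch of how a consuming program could drive such a filter without temporary files; the child process is a hypothetical stand-in for big2small.py that merely upper-cases its input:

```python
import subprocess
import sys

# Hypothetical stand-in for big2small.py: reads stdin, transforms, writes
# stdout. A real filter would decode an image and emit a thumbnail.
child_code = "import sys; sys.stdout.write(sys.stdin.read().upper())"

# The consuming program pipes data in and out directly: no temporary files.
result = subprocess.run(
    [sys.executable, "-c", child_code],
    input="big_cat.jpg bytes would go here",
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

The very same child program remains usable standalone on the command line, exactly as the shell invocation above shows.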

The fear that such building blocks may incur a fork() tax under load is a scaling fear. Having said this, if a given piece of functionality makes no sense outside of the main application, there is no need to create additional components and add more complexity for the sake of modularity alone.

Standalone monoliths are often simpler than a collection of daemons or interconnected processes. The main idea behind this principle is that, in a “divide and conquer” scenario, our building blocks should be simple commands rather than, say, RESTful microservices.

V - Local Files over Distributed Databases

Many developers grew up in the era of two-tier and three-tier architectures. No matter how small the application, there would be a front-end, say, WordPress running on the Apache web server, and a remote database, like MySQL. But things weren’t always this way. For nearly two decades, most small businesses relied on programs written in the likes of dBase, Clipper, and FoxPro, whose databases were local to the application—and the database engine was often embedded/compiled with it.

Distributed database management systems provide scale, concurrency, and multi-tenancy. What if we could trade these features for other advantages? Below are some of the benefits of using an embedded database engine—for example, SQLite—as opposed to a remote, over-the-network database—like Oracle:

  1. The application is self-contained and doesn’t require external components to be configured beforehand
  2. No need for credentials, complex connection strings, TLS certificates and so on
  3. No need for DBAs: the database is fully controlled by the developer
  4. Less cognitive load: the database merely supports the application as opposed to the database being a complex ecosystem in its own right
  5. Performing a backup is merely a matter of duplicating a file
  6. Having a “development” version of the database, is, again, just a matter of duplicating a file
  7. For small databases, the data can travel together with the application (it can be checked into the source control management system)
  8. Ultra low-latency and high-throughput. No network roundtrips
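As a minimal sketch of this approach, using the sqlite3 module from Python’s standard library (the file name and schema are illustrative):

```python
import sqlite3

# The whole database lives in one local file; no server, no credentials,
# no connection strings. Backing it up is just copying inventory.db.
conn = sqlite3.connect("inventory.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, batch INTEGER)")
conn.execute("INSERT INTO items VALUES (?, ?)", ("Blueberry Muffin", 8))
conn.commit()

rows = conn.execute("SELECT name, batch FROM items").fetchall()
print(rows)
conn.close()
```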

But the principle says local files, not local databases. Why is that? Because the first port of call when it comes to storing data should always be vanilla text files (CSV, TSV, Yaml, JSON, etc.). If we can give up indexes and advanced query languages, vanilla text files offer a whole new set of advantages on top of the ones listed above:

  1. Text files are human readable
  2. Text files can be edited with any simple editor such as Vi, EMACS, Notepad, etc.
  3. Text files don’t require a specific “engine” or a linked library—often written in a different language like C
  4. Text files are first class citizens in source control management systems
  5. Data can be moved between databases using copy-paste
  6. Comparing changes across versions can be accomplished with simple tools like the diff command
  7. Most common formats such as CSV, Yaml and JSON have mature parsing libraries in all mainstream programming languages
  8. Structured formats like Yaml and JSON have libraries that offer isomorphic mapping to local objects in most mainstream programming languages
  9. Tabular formats like CSV/TSV are the lingua franca to interoperate with spreadsheets, databases and data science toolkits like Pandas
  10. In the case of JSON, the file incurs little marshalling/unmarshalling overhead in a JavaScript environment like a web browser or a NodeJS backend
  11. Nothing beats the speed of appending data to a local file as opposed to inserting a record into an indexed database

When deciding between vanilla text files and an embedded database like SQLite, the answer does not lie in dogma, but in hyper-rationality, which is what the Anti-Scale manifesto is all about. Some of the questions to ask are:

  1. How many records?
  2. How many columns or attributes per record?
  3. What is the size of the average record?
  4. What are the query/search patterns, if any?

Let us say that we are designing an application for “Lovely Muffins”, a small bakery run by Jenna’s sister, Hanna. Hanna needs a mini ERP application to keep track of the inventory of ingredients (bags of flour and sugar, bottles of oil, etc.) and the types of finished muffins (Blueberry, Banana, etc.) that are sitting in the bakery’s inventory.

Let us say that each inventory item has a number of attributes and that the encoding is JSON so we have a record such as:

{ "name": "Blueberry Muffin",
  "size": "Medium",
  "batch": 8,
  "in_inventory": 20200203,
  "out_of_inventory": null, 
  ...
}

Let us assume that each record takes up to 512 bytes (roughly six lines of 80-column text) and that there may be up to 8,000 such records present in the file. The maximum file size is, thus, 512x8000 = 4,096,000 bytes, which translates to roughly 4MB.
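A sketch of what this could look like on disk, with one JSON document per line (the file name and the append step are illustrative):

```python
import json

# One inventory record in the format sketched above.
record = {
    "name": "Blueberry Muffin",
    "size": "Medium",
    "batch": 8,
    "in_inventory": 20200203,
    "out_of_inventory": None,
}

# Appending to the local data file: one JSON document per line.
line = json.dumps(record) + "\n"
with open("inventory.jsonl", "a", encoding="utf-8") as f:
    f.write(line)

# Back-of-envelope capacity check: 8,000 records of up to 512 bytes each.
max_bytes = 512 * 8000
print(len(line), "bytes per record;", max_bytes, "bytes maximum")
```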

What about search? Let’s make it tougher: what about searching for string fragments such as “ufi” in Muffin, or “ium” in Medium? Let’s assume no indexation either. Under this premise, every search is a DBA’s worst nightmare, a dreadful “full table scan”. Let’s also assume that locating every string requires traversing to the end of the 4MB file. How terrible (slow) is this? Well, a pragmatic way of finding out is by setting up an experiment. Let’s use the King James Bible from Project Gutenberg, which is 4.2MB in length, just a bit larger than the requirement for Lovely Muffins:

% curl http://www.gutenberg.org/cache/epub/10/pg10.txt -o bible.txt
% ls -l -h bible.txt 
-rw-r--r--  1 ernie  staff   4.2M 29 Feb 14:20 bible.txt

The next step is simply measuring the speed of a text search on that file—for example, using the grep command. We will use a non-repeating string expected to be found at the end of the file such as the word newsletter which only appears once in the very last sentence of the text:

% time grep newsletter -n bible.txt
100231:subscribe to our email newsletter to hear about new eBooks.
0.09s user 0.00s system 98% cpu 0.094 total

There it is, 0.09 seconds (on a MacBook Pro). In a non-multithreaded application, this means we can support, with low latency, up to ten searches per second. This is a very good scalability ceiling for Lovely Muffins’ inventory application, which will be used only by Hanna, and perhaps a couple of her assistants, at any given time.

Satan worshippers will appreciate that this scalability ceiling can be raised tenfold without too much effort by holding the file in memory, caching the 100 most searched results, and other tricks of the trade.

If you believe that 4MB is nothing, think again: the first IBM PC, on which the first developers cut their teeth creating useful applications for all sorts of small businesses, only had—optionally—a second floppy disk unit storing 360KB worth of data, rather than a hard disk drive capable of storing multiple megabytes.

VI - Explicit Source Code over Compilation

Once upon a time, developers who could write assembly would look down on BASIC programmers, until C took off and K&R fans would look down on both BASIC and Assembly programmers. Since then, there has always been an unwritten rule that says “compiled is better than interpreted”.

Today the rule seems to have degenerated to the point that we are no longer compiling with the objective of taking source code and producing machine language, but simply transpiling one form of source code into another form of source code. It is fair to say that, since the death of Enterprise Java Beans (RIP), and barring niche examples like Google Protocol Buffers or the likes of Template Haskell, this technique is not that pervasive on the server side; but on the front-end, oh boy, it is a completely different story.

Web developers today write anything but JavaScript, HTML, and CSS. The code that one sees when clicking “view source” could have come from the most bizarre, unthinkable, evil and twisted origins such as TypeScript, Angular, and SASS, respectively. As if this were not enough, it is possible that multiple files have been merged into a single file, that function names have been replaced with unintelligible tokens, and that the already indecipherable JavaScript source code is further convoluted with boilerplate code to emulate missing language extensions or gaps in, oh wait, yes, Internet Explorer 6. And no, the use of source maps is not the “get out of jail” card to justify this mess.

What about multiple-browser support, JavaScript backwards compatibility, modularity, and download speed? All concerns that arise in the face of human and technical scale. Pareto would be turning in his grave if he learned that some web developers spend 97% of their effort accommodating 3% of their user base.

But let’s cut to the chase. What does this principle mean in practice?

On the server side, interpreted languages like Python or Ruby, or compiled languages with a user-friendly “run from code” workflow like Go, should be preferred over languages that spit out binaries that normally travel independently from their source, as in the case of Java or C++.

On the front-end, instead, plain, official W3C syntax, using only lightweight helper libraries that can be imported directly from a CDN (or a local copy), should be preferred over tools that require NPM modules performing evil transformations behind the scenes. If the “framework” has a serve command, and changes cannot be seen by refreshing a local file directly from the web browser—only through the framework’s web server—it is a symptom that we’ve picked up some hipster FAANG framework that will make our lives miserable in the long run.

In a nutshell, the principle of Explicit Source Code over Compilation can be understood as What You See Is What You Get (WYSIWYG) or the Principle of Least Astonishment (POLA). In other words, what you write is what you run. If a bug is found, said bug is directly traceable to the source, without intermediate artefacts that may hinder root cause analysis. To see the effect of correcting the bug, just running the source directly is all that is required—no file watchers churning in the background!

VII - Primitive UIs over Complex GUIs

User experience is challenging. There are many aspects that require consideration at the same time: multiple devices, operating systems, screen sizes, varying user locales, accessibility, and so on. What would happen if Nestlé or Procter & Gamble were to make their online services unavailable to 1% of their customers who are still browsing the web on Internet Explorer 6, or have a Galaxy S2 running Android 2.3 Gingerbread? Yes, I know, all hell would break loose.

Another issue is that creating rich GUIs, in particular, is complex and requires “high-level” primitives such as grid/table components, pop-up calendars, and so on. This results in abstractions that sit on top of other abstractions. For example, Telerik’s Kendo UI widget toolkit, sitting on top of Angular, sitting, in turn, on top of TypeScript, and… turtles all the way down. Maintaining tool chains like this is not only a burden but also entails a steep learning curve.

With our Anti-Scale goggles on, we can think differently—and do better. Our aim is simplicity, not 100% market satisfaction. We can decide that some customers may not be served, and those who are served may not have a FAANG experience. The aim is to create the simplest form of user interaction that allows the intended task to be accomplished. Not every application has to be a web server with a fancy OAuth-capable login screen and a hamburger menu, all neatly packaged using a Material Design template.

The first question to ask is whether we can implement the application without building a UI in the first place. Maybe our application can wait for files to be placed in a folder, and produce results by writing them to another one. Maybe it can read input directly from an Excel file rather than forcing the user to paste data onto a web form.
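A sketch of such a UI-less design; the folder names and the “processing” step (a plain copy here) are illustrative:

```python
import shutil
from pathlib import Path

# The "user interface" is the filesystem: drop a file into inbox/,
# collect the result from outbox/.
INBOX = Path("inbox")
OUTBOX = Path("outbox")

def process_pending():
    INBOX.mkdir(exist_ok=True)
    OUTBOX.mkdir(exist_ok=True)
    handled = []
    for src in sorted(INBOX.iterdir()):
        if src.is_file():
            shutil.copy(src, OUTBOX / src.name)  # real work would go here
            src.unlink()  # consume the input so it is not reprocessed
            handled.append(src.name)
    return handled

# In production this would run under cron or a simple loop with a sleep;
# a single pass is enough to show the idea.
INBOX.mkdir(exist_ok=True)
(INBOX / "order1.csv").write_text("flour,2\n")
print(process_pending())  # → ['order1.csv']
```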

What about a console application? Are we sure that our users are so illiterate that they cannot interact with an application using a keyboard? Surely those guys and gals behind check-in counters at airports don’t require a PhD to use Amadeus effectively.

If the use case is the web, we can follow the same rigorous thought process. What is the value in a Single Page Application (SPA)? Do we really need to fill up all the available columns on a large screen? Is scrolling to be avoided at all costs for all device form factors? And, most importantly, can we ask the odd 1-3% of the user base to upgrade to a reasonable web browser or mobile phone rather than squandering engineering hours on backward compatibility hacks? Yes, there are polyfills and there is Babel, but these conflict with Principle VI.

Most users just want to get the job done with the least amount of fuss; the last thing they want is yet another app or another screen to log onto. By providing them with a simple interaction model, we can make both the users and the developers happy at the same time.

VIII - Localhost over Pre-Production Infrastructure

Multiple environments (dev, staging, pre-prod, etc.), build servers, and challenges such as environment affinity, asset integrity across environments, versioning, the need for automated deployments, and so on, are all evils that stem from high scalability and high availability needs. In the world of finite scale, the scale ceiling is the developer’s laptop. Nothing more computationally powerful than the developer’s workstation is to be expected; in most cases we will actually need to throttle the application to observe its behaviour in an environment that is less powerful than a laptop, such as a DigitalOcean $5/month droplet.

Given that the antecedent principles guarantee that applications and their data are local, a developer’s laptop should always be representative of production in every aspect—other than speed, where actual production is expected to be slower. The build experience should work in such a way that developers can always reproduce builds without replicating a complex pipeline mechanism that only makes sense in the presence of a central build system like Jenkins. In fact, a Continuous Integration (CI) server should never be a requirement. Modern tools like Docker or the Nix package manager (and even older ones like Vagrant) allow setting up 100% reproducible and reliable builds on a local machine.

To summarise, the whole of the production environment (including a recent snapshot of data) should run on a developer’s laptop. There should be no reason whatsoever that prevents code from being shipped directly from a laptop to production. This principle doesn’t imply that shipping to production necessarily means making new code available to users. New code may be served on a different port or URL before the “live” version is replaced.

Last but not least, the idea that targeting production, and testing directly in it, is a good thing is not held only by the author.

Final Words

The Anti-Scale manifesto may not apply to the reader. This is a manifesto that is relevant when, as the name suggests, scale is meant and guaranteed to be finite. Even if your scenario is finitely scalable, you may “know better”, or aspire to better; maybe you want to be hired by a FAANG in the future?

This is not a new paradigm, or a “new take” on the industry’s zeitgeist. It is the acknowledgement that there are a number of disenfranchised customers and developers who do not fit into the presumed industry in the first place.

Credits