Abstract: The story of software is a story of layered abstractions. Blockchain is not a new layer of abstraction, but a serverless trust API within an existing layer. History says: pick a hook-it-and-forget-it API provider (aka chain) by optimizing for durable security. Other frequently debated factors (governance, scalability, etc.) are only distractions. There is one viable API provider (chain) running today; two more may emerge sometime this year.
Problem
The capital of a software developer is her time and skill. When deploying this capital, she needs to optimize for:
(1) efficiency: minimum time to get her software out into the wild.
(2) effectiveness: maximum reach and durability of her software.
These optimization objectives are independent of the developer’s motivation, be it amassing wealth, chasing glory, or simply staving off boredom. They are also independent of the context, domain, or scale of the software in question.
There is nothing special about blockchain that makes either of these axes inapplicable or in need of amendment, supplement, or special consideration. These axes are also independent of whether the blockchain developer is working on system/infrastructure software (p2p networking, node clients, developer tools such as compilers and Web3 SDKs) or applications/smart contracts (DeFi, front-ends interfacing with and composing contracts such as DEX aggregators).
While these optimization axes make sense in the abstract, it can be fuzzy translating them into concrete development and design decisions when working in a tumultuous and fast-paced domain like blockchain. It can be difficult for a blockchain developer to make the right capital allocation decisions with clarity of thought in a sector with such a low signal-to-noise ratio:
- Which chain to build on?
- At what layer?
- Under what trust assumptions between the software and its users?
- And so on.
Fortunately, none of these questions matter. Not because they are unimportant or unworthy of consideration, but because their answers fall out effortlessly once we rise above the noise and examine the evolution of software over the past 6–7 decades to understand the “universal law” that has remained constant throughout.
Problem: What is the overarching domain-, context- and scale-independent law that has held over the past 6–7 decades of software evolution? What evolutionary adaptations have emerged to satisfy the efficiency and effectiveness constraints of software developers?
Approach
The story of software evolution is a story of abstraction. By harnessing high and low voltage to represent true/false, we abstracted the primitives of mathematical logic into hardware “gates” made up of resistors, diodes, and transistors. We abstracted arithmetic atop these logic gates and used it to shuttle bits and bytes around different motherboard compartments with absolute precision. Deep down in your device, the only thing happening for you to see this article on the screen is arithmetic operations simulated using Boolean logic, which is in turn simulated using high/low voltages that encode true/false (0/1).
We quickly abstracted away the low-level details of how arithmetic operations are executed using assembly language. We’re now four layers in: voltage, logic, arithmetic, and assembly. Assembly code boils down to moving bytes around and doing arithmetic on them. It translates into machine instructions, which translate into arithmetic operations, which translate into Boolean logic operations that are, phew, finally simulated with high and low voltages trapped in transistors.
On top of assembly sit compilers, and on top of those sit human-ish programming languages. We are now closer to the human than the machine. Even non-programmers could probably make a decent guess as to what a chunk of code in a highly readable language like Python is doing.
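To make the layering concrete, here is a toy sketch (TypeScript is used for all examples in this article purely as an illustration language) of the arithmetic layer expressed in nothing but Boolean logic: a full adder, the building block of binary addition. Nobody working at the Python layer ever thinks about this, which is exactly the point.

```typescript
// A toy illustration: addition built purely from Boolean "gates",
// the way the arithmetic layer is ultimately simulated by logic,
// which is in turn simulated by high/low voltages.
type Bit = 0 | 1;

const and = (a: Bit, b: Bit): Bit => (a === 1 && b === 1 ? 1 : 0);
const or  = (a: Bit, b: Bit): Bit => (a === 1 || b === 1 ? 1 : 0);
const xor = (a: Bit, b: Bit): Bit => (a !== b ? 1 : 0);

// One column of binary addition: sum = a XOR b XOR cin, carry = ab + cin(a XOR b)
function fullAdder(a: Bit, b: Bit, carryIn: Bit): { sum: Bit; carryOut: Bit } {
  const partial = xor(a, b);
  return {
    sum: xor(partial, carryIn),
    carryOut: or(and(a, b), and(carryIn, partial)),
  };
}

console.log(fullAdder(1, 1, 0)); // { sum: 0, carryOut: 1 }  (1 + 1 = binary 10)
```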
The abstraction stack: each layer spares the layer above from worrying about its details. The separation is so clean that, looking at what one layer translates down into the next, one cannot make out what it represents above. Technologists working in one layer are shielded from the details of the others.
The law: technologists working at any of these layers of abstraction almost never look up or down one layer, let alone two or more. Nor do they attend joint conferences or engage in debates about how each layer should do its job. Technologists optimize their respective layers to make life easier for those relying on them from above.
While a low-level systems engineer may descend from Rust (a high-level programming language) down to assembly to tweak a few knobs when programming, say, a hardware driver, that is the rare exception that proves the rule. In fact, if this becomes a regular (and frustrating) occurrence, the market simply creates a dedicated sub-layer and abstracts those concerns away forever (e.g. Apple creates a firmware team that abstracts the low-level boot-up and recovery concerns away from the macOS teams). In short, if jumping back and forth between layers becomes a regular thing, a new layer of abstraction emerges in between to eliminate the need.
Approach: Apply the universal law of outsourcing concerns through abstraction to blockchain development. What concerns can be outsourced to blockchain networks that could not otherwise have been abstracted away?
Design
The “concerns-want-to-be-abstracted” law manifests within each layer of the abstraction stack. Inside the high-level programming languages (HLPL) layer, for example, various concerns are further encapsulated and abstracted away in the form of libraries that are then accessed through application programming interfaces (APIs). Blockchain developers operate at this HLPL layer.
One of the first and most routine things practitioners do within this layer is “import” as many libraries as they can find to do the heavy lifting. Programming some machine learning thingy? Don’t roll your own statistical methods; use that highly optimized and time-tested library. Want to plot some charts? Don’t even think about doing that from scratch, you don’t have enough years left to build one. Use a graphics library to do the heavy lifting and focus on the concerns specific to your problem: manipulating your data, customizing colour palettes to your taste, automating the data-to-plot workflow to fit your data streaming frequencies, and so on. The sheer volume of open and free software libraries that developers build and gift to the world is staggering.
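As a minimal sketch of what this looks like in practice (assuming a TypeScript project with the d3-array package installed), two imported functions replace a hand-rolled, and probably subtly wrong, statistics implementation:

```typescript
// Outsourcing the heavy lifting: time-tested statistics from the d3-array
// library instead of rolling our own.
import { mean, deviation } from "d3-array";

const latenciesMs = [12, 15, 11, 48, 13, 14];
console.log(mean(latenciesMs));      // average
console.log(deviation(latenciesMs)); // sample standard deviation
```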
In the era of the cloud, the concept of an API expanded beyond an interface to software libraries that abstract away reusable functionality. It also came to denote interfaces to whole servers, platforms, applications, and runtimes hosted and maintained by someone else. You can see the law of abstraction in motion in the ever-increasing efficiency and availability of APIs that can quickly be glued together with minimal code. You can be sure that this “glue” code will keep shrinking under the gravitational force of abstraction.
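A hedged sketch of such glue code, using two hypothetical hosted APIs (the URLs and response shapes below are placeholders, not real services): a couple of fetch calls and a few lines to combine the results is often the entire application.

```typescript
// Cloud-era "glue" code: the heavy lifting lives behind someone else's API.
// Both endpoints and their response shapes are hypothetical placeholders.
async function dailyBriefing(city: string): Promise<string> {
  const weather = await fetch(`https://api.example-weather.test/v1/${city}`)
    .then((r) => r.json());
  const news = await fetch(`https://api.example-news.test/v1/headlines?city=${city}`)
    .then((r) => r.json());
  return `${city}: ${weather.summary}. Top story: ${news.headlines[0]}`;
}

dailyBriefing("Lisbon").then(console.log);
```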
There is, however, one API that remains very slow and very expensive for the cloud to provide: the trust API. What is unique about blockchain software development is that this fundamental concern of simply establishing / maintaining / ensuring / insuring trust, which costs so much capital and labour (courts, police, PDF forms, intermediaries, fraud-detection departments, payment processors, and so on), can now be abstracted away from developers.
Blockchain networks provide a trust API, which is just one of many abstracted legos at the disposal of software developers. It is a special kind of API, however, given how much cost and overhead trust usually incurs when it has to go through intermediaries. Before starting to cook her software recipe, the developer visits the FOSS garden and/or the cloud API offerings to pick the freshest ingredients. Blockchain networks are the most cost-effective, lowest-overhead provider of the trust API ingredient.
Blockchain networks are essentially trust API providers. A blockchain developer can disintermediate herself from the business logic of her application, outsourcing custody of funds to the network. She encodes the rules of interaction in p2p communication protocols and/or smart contracts, outsourcing trust, liability, and accounting to the network and its participants (see the sketch after this list):
Overhead: the developer is spared the costs of intermediating between users of her software. The users are spared the costs of having to trust the developer or each other.
Liability: because the developer no longer plays an intermediary role (for escrow of funds or dispute resolution or any other kind of coordination or intermediation) she no longer has any liability. She outsourced control to the network, the rules of the smart contract code, and the users themselves.
Accounting: The cost of using the software is pushed to the edges. The network hosts the software, and its users pay for access. The metering and accounting of that cost is outsourced to the network. And how convenient: blockchains have currencies/payments in their DNA. The cost of transacting and interacting is encoded as part of the rules of the smart contract so there is no risk of “hidden fees” suddenly appearing.
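A minimal sketch of what this looks like from the user’s side, assuming the ethers v6 library and a hypothetical escrow contract already deployed at ESCROW_ADDRESS (the address and ABI below are illustrative, not a real deployment). Note who is absent from the flow: the developer runs no server, holds no funds, and meters no payments.

```typescript
// The "trust API" in action: the user talks to the network directly; the
// contract (not the developer) custodies the funds and enforces the rules,
// and the user pays the metering cost (gas) at the edge.
// ESCROW_ADDRESS and the ABI are hypothetical, for illustration only.
import { Contract, JsonRpcProvider, Wallet, parseEther } from "ethers";

const ESCROW_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder
const ESCROW_ABI = [
  "function deposit(bytes32 dealId) payable",
  "function release(bytes32 dealId)",
];

// dealId must be a 0x-prefixed 32-byte hex string.
async function buyerDeposits(rpcUrl: string, buyerKey: string, dealId: string) {
  const provider = new JsonRpcProvider(rpcUrl);   // the trust API endpoint
  const buyer = new Wallet(buyerKey, provider);   // a user, not the developer
  const escrow = new Contract(ESCROW_ADDRESS, ESCROW_ABI, buyer);

  // Funds go into the contract, governed only by its encoded rules.
  const tx = await escrow.getFunction("deposit")(dealId, { value: parseEther("0.1") });
  await tx.wait();
}
```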
Design: outsource your trust concerns to a durably reliable and secure API provider, one that most satisfies the “hook-it-and-forget-it” principle.
Implementation
We established that:
(1) abstraction of concerns is the law that governs software evolution, and
(2) trust is the concern that only blockchain platforms could possibly abstract away completely.
Shopping for a blockchain platform to build on is now a straightforward process of elimination. Which blockchain platform provides a reliable, secure, and durable trust API? It is important to emphasize the “durability” aspect of the selection criteria: what matters is not only how well an API scores today, but how likely it is to keep scoring high into the future.
Ethereum is the most viable option available today. Because shared security across all shards is a pillar of their protocol design, Polkadot and NEAR are potential alternatives in the near future. They will have WASM support from day one, but their native assets will lack liquidity and monetary premium. There are ways to bootstrap some liquidity from day one, but strangely that doesn't seem to be a priority.
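As a hedged illustration of how little lock-in the client side actually has (assuming ethers v6; the RPC URLs below are placeholders), the code for talking to any EVM chain is identical; the only real choice the developer makes is how durably secure the network behind the URL is.

```typescript
// "Hook it and forget it": identical client code against different networks.
// The only meaningful differentiator is the security behind each endpoint.
// The RPC URLs are placeholders, not real or endorsed providers.
import { JsonRpcProvider } from "ethers";

const ENDPOINTS: Record<string, string> = {
  ethereum: "https://ethereum-rpc.example.test",
  otherEvmChain: "https://other-chain-rpc.example.test",
};

async function latestBlocks() {
  for (const [name, url] of Object.entries(ENDPOINTS)) {
    const provider = new JsonRpcProvider(url);
    const block = await provider.getBlock("latest");
    console.log(`${name}: block #${block?.number} at ${block?.timestamp}`);
  }
}

latestBlocks().catch(console.error);
```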
Implementation: Ethereum is the most durably secure platform today. Polkadot and NEAR may prove to be viable alternatives given their shared-security approach. Polkadot and NEAR need to catch up with Ethereum’s monetary premium (essential for security in PoS networks) and liquidity, while Ethereum needs to catch up with eWASM.
Falsification
Conveniently, the core thesis of the Cosmos project is the opposite of the one presented in this article, so it can serve as its antithesis. Sovereignty is a core design principle of the Cosmos ecosystem, meaning developers will want the choice to control all or some aspects of their applications, and so they will deploy their own app-specific chains. This gives developers the flexibility to tweak parameters to their needs: consensus, fee metering, virtual machine, scripting flexibility (or lack thereof), etc. But it also means the app-chain developers must roll their own security.
Falsification: If app-specific roll-your-own-security chains proliferate, the thesis presented in this article would be invalidated, and vice versa. Let’s revisit this thesis in 2–5 years and see how things played out.
Extensions
We can take a peek into the far future and see whether this thesis will continue to hold. Consider the following tectonic shifts:
(1) The browser is eating desktops to become the universal UI.
(2) WASM is eating the runtime to become the universal platform: compile to WASM and run anywhere (see the sketch after this list).
(3) A new layer of abstraction is emerging: no-code / low-code / visual programming. It will democratize programming for anyone with a good idea.
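A minimal sketch of shift (2), assuming Node.js and a hypothetical module.wasm compiled from any source language (Rust, C, AssemblyScript, ...) that exports an add function: the same bytes run in the browser, on the server, and increasingly inside blockchain virtual machines.

```typescript
// "Compile to WASM and run anywhere": load a module and call an exported
// function. The file and its exported `add` are hypothetical placeholders.
import { readFile } from "node:fs/promises";

async function main() {
  const bytes = await readFile("./module.wasm");
  const { instance } = await WebAssembly.instantiate(bytes);
  const add = instance.exports.add as unknown as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5
}

main().catch(console.error);
```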
As for the third shift, low-code / no-code: in the near future anyone will be able to program visually, dragging and dropping logic blocks (most of which will come from a catalog of open-source legos) and establishing relationships between them with arrows. What will the future grocery store manager program for the upcoming Black Friday sale?
We can foresee a non-technical person, say a grocery store manager, dragging and dropping a few objects on some no-code platform to implement a promotional offer on some extra stock she has. She clicks “Deploy”, pays a $0.10 transaction fee, and voila, her truly serverless, self-sustaining dApp is up and running. It is self-sustaining because the dApp’s access cost is pushed to the edge (the customers who interact with it in order to, say, buy the promotion’s NFT). The promotional dApp proves so well-designed that every other grocery store forks and deploys it with one click. A revolution ensues.
These trends should only entrench the law of abstraction of concerns. If people with no CS or programming backgrounds start to successfully build software systems, it would only be because more and more concerns continued to be abstracted away.
If developers have for decades opted to outsource their concerns to reliable libraries / APIs / networks, it will only be more true for the “non-developer” developers of the future.
Extensions: the law of abstraction of concerns will continue to shape software evolution into the future. We can observe glimpses of its future effects through the emerging new layers of abstractions, most notably WASM and low- / no- / visual-programming.
Conclusion
Looking at the evolution of software, we observe the emergence and persistence of a successful evolutionary adaptation: abstraction of concerns. There are many vertical layers of abstraction between watching a cat video on YouTube and the corresponding blinks of on/off transistors in the underlying hardware. Technologists operating at one layer neither know nor care how the layers below do their job.
Blockchain networks do not represent a new layer of abstraction. They are, architecturally speaking, at the same level as the serverless cloud. What they uniquely offer, however, is a trust API. Through it, developers can most effectively and efficiently abstract away concerns that could not otherwise have been outsourced: the overhead, liability, and cost accounting of coordinating agents.
A blockchain developer should therefore pick the most durably reliable and secure API provider(s). Ethereum is the only option today. Polkadot and NEAR represent potentially viable alternatives given their security focus. Many other networks have fundamental weak spots that make them high-risk to build on.
Conclusion: Blockchain networks are above all trust API providers. Ignore the noise (scalability, governance, etc). Focus on durable security. Choose minimal-security-overhead hook-it-and-forget-it blockchain networks.
In subsequent articles I will (1) elaborate on what makes some blockchain platforms non-viable and how to spot their flaws through the marketing smoke, and (2) examine “maximally secure and sufficiently scalable” technologies like sharding, rollups, and parathreads.
Find more writings from the author here: https://aliatiia.github.io/articles