What (else) did Bitcoin achieve – verifiable computing

(part 1 in a series on the impact of Satoshi’s invention. Part 2, Part 3.)
There are several things that Bitcoin achieved – the money and the invention of the blockchain being the obvious ones that people talk about. But it is in the process of achieving a few more. Isolating these factors has been an interesting exercise, so I started writing them down a few months back, with a view to sharing them. Why? Folks who attended the Coinscrum event last Tuesday will recognise at least one motivation, but there are more, including: why not?
Here’s one – Verifiable computing.
What are sometimes called smart contracts aren’t necessarily interesting because of their capability to do automation – we’ve always had that; it’s called computing. Nor because of their capability to do financial transactions, even leading up to contracts, as per the name – that too has always been there, under many acronyms and in many ventures. Writing in 1994, Nick Szabo put it this way:

Digital cash protocols[2,3] are fine examples of smart contracts. They enable online payment while honoring the characteristics desired of paper cash: unforgeability, confidentiality, and divisibility. When we take a second glance at digital cash protocols, considering them in the wider context of smart contract design, we see that these protocols can be used to implement a wide variety of electronic bearer securities, not just cash.

The difference this time is, I suspect, verifiable computing. What is that? It is simply the ability to compute, and to know we have computed correctly.
We’ve always been interested in this. It was an early, esoteric topic of computer science, and indeed the team in which I did my undergrad thesis worked on precisely that: replicated computing for verifiability and reliability. The space shuttle, then new technology, had five IBM “mainframe” computers on board – three in a voting loop, one monitoring and one spare. Or something; I forget the details.
Then and later, the capability to compute and to verify that the computing was done correctly was considered a pipe dream. The reason for the skepticism was that the favoured solution involved some form of voting. If done in hardware, the point was defeated because we now had a single point of failure – the voting machine – and if done in software, the voting itself was a fresh source of bugs. Further, the cost was not 3x but more like 10x, and for that money we could generally afford to build failure into the model instead. So the business never really took off; it was too much money for too elusive a result.
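As a minimal sketch of that voting pattern (in Python, purely for illustration; nothing here comes from any real flight system), note how the vote-counting function is exactly the single point of failure just described:

```python
from collections import Counter

def replicated_compute(replicas, x):
    """Run the same computation on every replica and majority-vote the result.

    `replicas` are functions that are supposed to compute the same thing;
    a faulty replica simply returns a wrong answer.
    """
    results = [replica(x) for replica in replicas]
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return winner

# Three replicas, one faulty: the vote recovers the answer, but the voter
# itself is unreplicated (a single point of failure), and the whole thing
# costs several times a single run.
ok = lambda x: x * x
bad = lambda x: x * x + 1
print(replicated_compute([ok, ok, bad], 7))  # -> 49
```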
Now, with Bitcoin’s blockchain, we can do verifiable computing. Flipping over to Casey’s blog, because it is his post that coalesced this thought in my mind:

Axis 1 => spectrum of verifiability

A python script is somewhat verifiable, I would say. If a python script is running on somebody else’s metal, your ability to verify what it’s doing is usually limited to observing its results.

Even if you could verify the code that was being run via some fingerprinting mechanism, you still wouldn’t necessarily be able to verify the execution environment of that script.

The environment is important because the same script can run in different ways depending on its environment. Scripts can read top-level environment variables, and run differently on a different version of their language. The practical upshot here is that nobody really has the capability to verify code that’s running on someone else’s metal.

And this is one of the powerful capabilities which smart contracts offer to users.

Smart contracts completely isolate the logic and data into a “casing” (provided by a blockchain) which is utterly verifiable. Every compute step along the logic sequence is verified by every node on the network.

Those nodes could be other banks within a consortium, internal audit, external audit, the business’s accounting department, your grandmother, or whomever is in the network. But all of these nodes will be checking each other’s work.

Simply put, all of the computation is performed (and, checked) by all of the (full) nodes on the network. Down to popping off the stack computes.

Now this is overkill for many, many computing requirements which an enterprise may have (indeed the vast majority of an enterprise’s computing requirements do not need this level of computation verifiability).

But for instances where one has a data driven relationship (whether that is a compliance relationship, a customer relationship, or a peer relationship) it may be a price which institutions are willing to pay. In some contexts.

But. And this is the key. It is certainly very different than a simple python script running on someone else’s metal.
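To make Casey’s environment point concrete, here is a minimal sketch of the problem; FEE_BPS is a purely hypothetical environment variable invented for the example:

```python
import os

def settle(amount: float) -> float:
    """Apply a fee read from the environment (FEE_BPS is hypothetical)."""
    fee_bps = int(os.environ.get("FEE_BPS", "0"))  # environment-dependent!
    return amount * (1 - fee_bps / 10_000)

# On one machine settle(100.0) returns 100.0; on metal where FEE_BPS=25 is
# set, it returns 99.75. Observing one run's results verifies neither the
# code nor the environment it will run in next time.
print(settle(100.0))
```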

 
That Bitcoin’s smart contracts achieved verifiable computing could almost be called an accidental result. It’s not clear that Satoshi Nakamoto was heading in this direction. He wanted smart contracts, but did he want verifiable computing? Of course it is easy to claim that he did, and that the result is the proof, but I wonder if there is some serendipity here.
There’s definitely been some blowback, as history has shown. The post-Nakamoto core team unwound many of the features, and in that disappointment sparked the breakaway that is Ethereum. The effort to get back to the full Turing mojo of a universal *and now verifiable* computer has since kicked back into the Bitcoin efforts: a while back, Blockstream released “Elements” with many of the goodies turned back on.
So, one thing we can say about smart contracts and verifiable computing is that this is not easy conceptual stuff. This is thinking and development on a plane with the original Turing-era work, much of which we now take for granted.
Indeed, if we look at what was historically written, picking up from that first cite:

We also see that to implement a full customer-vendor transaction, we need more than just the digital cash protocol; we need a protocol that guarantees that product will be delivered if payment is made, and vice versa. Current commercial systems use a wide variety of techniques to accomplish this, such as certified mail, face to face exchange, reliance on credit history and collection agencies to extend credit, etc.

And so on: Szabo is talking about guaranteeing the result. Indeed, if you ask anyone in the Bitcoin world what that is about, they’ll sing in chorus: multisig! Which he goes on to mention, and which is not what we’re talking about here.
These are tools to achieve verifiability over the results. If you like, we could call this transactional thinking. But verifiable computation goes beyond the results, as Casey said.  To repeat:

Simply put, all of the computation is performed (and, checked) by all of the (full) nodes on the network. Down to popping off the stack computes.
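As a rough illustration of what that means, here is a toy stack machine in the spirit of Bitcoin Script, though not Script itself; the opcodes and the `network_verify` helper are invented for the example. Every node re-runs every step, down to each pop of the stack, and a result stands only if all agree:

```python
def run_stack_program(program):
    """Deterministically evaluate a tiny stack program (a toy, not Bitcoin Script)."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

def network_verify(program, node_count=5):
    """Every (full) node re-executes the whole program; all must agree."""
    results = [run_stack_program(program) for _ in range(node_count)]
    if len(set(results)) != 1:
        raise RuntimeError("nodes diverged: computation not verified")
    return results[0]

program = [("push", 2), ("push", 3), ("add",), ("push", 10), ("mul",)]
print(network_verify(program))  # -> 50, identically on every node
```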

This overall approach is very much part and parcel of the original thinking of Nick Szabo, who conceptualised the smart contract as far back as 1994, when, if memory serves, he was working at DigiCash.
To me, this leaves an open question: was the vision of smart contracts linked to verifiable computing?

Smart contracts reference that property in a dynamic, often proactively enforced form, and provide much better observation and *verification* where proactive measures must fall short.

*Verification* (my emphasis) is certainly there, but the text is mostly visionary rather than particular, in computer science terms. And thus I’m not sure; but I can also see why it wouldn’t have been stressed. If Nick had said he was really talking about verifiable computing, he’d actually have made the job harder, because by then we already knew enough about it to call it a pipe dream.
Or so it seemed, until 2009.
This makes Bitcoin’s version of a smart contract a much more interesting concept, and a much more revolutionary one. Solving verifiable computing is definitely top-drawer stuff: we’ve gone from theoretically troubling and implausible to practically doable in one invention.
Why is this revolutionary? Here’s maybe why.
If we have verifiable computing, we now have a trusted computing platform! Or, to use today’s jargon, a trusted execution environment, or TEE. TTP (trusted third party), anyone? HSM (hardware security module)? OK, so this one isn’t going to be good at keeping secrets, but there are other things we need done in a fashion worthy of our trust.
Another example – instead of talking about IoT and toasters, let’s talk about running a space shuttle on a blockchain. If we construct our reliable platform as several competing devices that can only work on our personal blockchain, then *the problem of trusting our hardware goes away*.
See where this goes? Think of your car sharing computing power with the traffic. Think of jacking into the airplane’s entertainment system and borrowing some cycles from the fuel system to complete your render before touchdown; you can repay the plane as it computes its descent to land. Or: you’re arguing over the accounts for your group’s savings and can’t agree on whose machine to use, because someone always steals the money. To solve the dilemma, spin up a private, on-demand, dynamic chain (PODchain?) on everyone’s phones, communicating the accounts smart contract over Bluetooth, verifiably do the math, and shut it down.
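The PODchain is of course hypothetical, but the kernel of the idea fits in a few lines: every phone independently replays the same shared log and compares one digest. The transaction format below is invented for the example:

```python
import hashlib
import json

def replay_accounts(transactions):
    """Each phone independently replays the group's shared transaction log."""
    balances = {}
    for tx in transactions:
        balances[tx["from"]] = balances.get(tx["from"], 0) - tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return balances

def digest(balances):
    """One canonical hash, so phones compare a short value, not a whole ledger."""
    blob = json.dumps(balances, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

shared_log = [
    {"from": "alice", "to": "pot", "amount": 20},
    {"from": "bob", "to": "pot", "amount": 20},
]
# Everyone does the math; if every digest matches, nobody's machine could
# have quietly stolen the money. Agreement reached: shut the chain down.
digests = {phone: digest(replay_accounts(shared_log))
           for phone in ("alice", "bob", "carol")}
assert len(set(digests.values())) == 1
```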
Done!  Time for beer.  And share the costs with another PODchain including the cash register, right?  Verifiable computing in the form of the Bitcoin-inspired smart contract may well change our views about the Turing machine.   Or at least our understanding of the performance envelope of reliable computing.  That’s gotta be worth something, a prize or something 😉
(Top. Part 2. Part 3.)

Ian Grigg
Major project: Chamapesa.com Advisor at Akropolis, EOS, Mattereum, Knabu. Financial Cryptographer / Crypto-Plumber. Developed the Ricardo Transaction Engine from 1995-Present. Invented: Ricardian Contract, Triple-Entry Accounting.
