The Meaning of Decentralization

“Decentralization” is one of the words used most frequently in the cryptoeconomics space, and is often even viewed as a blockchain’s entire raison d’être, but it is also one of the words that is perhaps most poorly defined. Thousands of hours of research, and billions of dollars of hashpower, have been spent for the sole purpose of attempting to achieve decentralization, and to protect and improve it, and when discussions get rivalrous it is extremely common for proponents of one protocol (or protocol extension) to claim that the opposing proposals are “centralized” as the ultimate knockdown argument.

But there is often a lot of confusion as to what this word actually means. Consider, for example, the following completely unhelpful, but unfortunately all too common, diagram:

Now, consider the two answers on Quora for “what is the difference between distributed and decentralized”. The first essentially parrots the above diagram, whereas the second makes the entirely different claim that “distributed means not all the processing of the transactions is done in the same place”, whereas “decentralized means that not one single entity has control over all the processing”. Meanwhile, the top answer on the Ethereum stack exchange gives a very similar diagram, but with the words “decentralized” and “distributed” switched places! Clearly, a clarification is in order.

Three types of Decentralization

When people talk about software decentralization, there are actually three separate axes of centralization/decentralization that they may be talking about. While in some cases it is difficult to see how you can have one without the other, in general they are quite independent of each other. The axes are as follows:

  • Architectural (de)centralization — how many physical computers is a system made up of? How many of those computers can it tolerate breaking down at any single time?
  • Political (de)centralization — how many individuals or organizations ultimately control the computers that the system is made up of?
  • Logical (de)centralization — do the interface and data structures that the system presents and maintains look more like a single monolithic object, or an amorphous swarm? One simple heuristic is: if you cut the system in half, including both providers and users, will both halves continue to fully operate as independent units?

We can try to put these three dimensions into a chart:

Note that a lot of these placements are very rough and highly debatable. But let’s try going through a few of them:

  • Traditional corporations are politically centralized (one CEO), architecturally centralized (one head office) and logically centralized (can’t really split them in half)
  • Civil law relies on a centralized law-making body, whereas common law is built up of precedent made by many individual judges. Civil law still has some architectural decentralization as there are many courts that nevertheless have large discretion, but common law has more of it. Both are logically centralized (“the law is the law”).
  • Languages are logically decentralized; the English spoken between Alice and Bob and the English spoken between Charlie and David do not need to agree at all. There is no centralized infrastructure required for a language to exist, and the rules of English grammar are not created or controlled by any one single person (whereas Esperanto was originally invented by Ludwig Zamenhof, though now it functions more like a living language that evolves incrementally with no authority)
  • BitTorrent is logically decentralized similarly to how English is. Content delivery networks are similar, but are controlled by one single company.
  • Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer)

Many times when people talk about the virtues of a blockchain, they describe the convenience benefits of having “one central database”; that centralization is logical centralization, and it’s a kind of centralization that is arguably in many cases good (though Juan Benet from IPFS would also push for logical decentralization wherever possible, because logically decentralized systems tend to be good at surviving network partitions, work well in regions of the world that have poor connectivity, etc; see also this article from Scuttlebot explicitly advocating logical decentralization).

Architectural centralization often leads to political centralization, though not necessarily — in a formal democracy, politicians meet and hold votes in some physical governance chamber, but the maintainers of this chamber do not end up deriving any substantial amount of power over decision-making as a result. In computerized systems, architectural but not political decentralization might happen if there is an online community which uses a centralized forum for convenience, but where there is a widely agreed social contract that if the owners of the forum act maliciously then everyone will move to a different forum (communities that are formed around rebellion against what they see as censorship in another forum likely have this property in practice).

Logical centralization makes architectural decentralization harder, but not impossible — see how decentralized consensus networks have already been proven to work, but are more difficult than maintaining BitTorrent. And logical centralization makes political decentralization harder — in logically centralized systems, it’s harder to resolve contention by simply agreeing to “live and let live”.

Three reasons for Decentralization

The next question is, why is decentralization useful in the first place? There are generally several arguments raised:

  • Fault tolerance — decentralized systems are less likely to fail accidentally because they rely on many separate components that are unlikely to all fail at once.
  • Attack resistance — decentralized systems are more expensive to attack and destroy or manipulate because they lack sensitive central points that can be attacked at much lower cost than the economic size of the surrounding system.
  • Collusion resistance — it is much harder for participants in decentralized systems to collude to act in ways that benefit them at the expense of other participants, whereas the leaderships of corporations and governments collude in ways that benefit themselves but harm less well-coordinated citizens, customers, employees and the general public all the time.

All three arguments are important and valid, but all three arguments lead to some interesting and different conclusions once you start thinking about protocol decisions with the three individual perspectives in mind. Let us try to expand out each of these arguments one by one.


Regarding fault tolerance, the core argument is simple. What’s less likely to happen: one single computer failing, or five out of ten computers all failing at the same time? The principle is uncontroversial, and is used in real life in many situations, including jet engines, backup power generators particularly in places like hospitals, military infrastructure, financial portfolio diversification, and yes, computer networks.

However, this kind of decentralization, while still effective and highly important, often turns out to be far less of a panacea than a naive mathematical model would sometimes predict. The reason is common mode failure. Sure, four jet engines are less likely to fail than one jet engine, but what if all four engines were made in the same factory, and a fault was introduced in all four by the same rogue employee?
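The jet-engine intuition above can be made concrete with a little probability arithmetic. The sketch below is illustrative only (the 1% per-component failure rate and 0.1% common-mode rate are made-up numbers): independent redundancy drives the failure probability down dramatically, but a shared fault, like a bug introduced at the same factory, puts a hard floor under it that no amount of redundancy can remove.

```python
from math import comb

def p_at_least(n, k, p):
    """Probability that at least k of n independent components fail,
    each failing independently with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.01  # assumed per-component failure probability (illustrative)

# One computer failing vs. five out of ten all failing at the same time:
single = p                          # 0.01
five_of_ten = p_at_least(10, 5, p)  # on the order of 1e-8

# Common mode failure: if a shared fault (same factory, same bug) takes
# down ALL components together with probability q, redundancy cannot
# push the overall failure rate below q.
q = 0.001
effective = q + (1 - q) * five_of_ten  # floor set by the common mode
```

The gap between `five_of_ten` and `effective` is the whole point: once a common mode exists, the naive redundancy math stops describing the real system.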

Do blockchains as they are today manage to protect against common mode failure? Not necessarily. Consider the following scenarios:

  • All nodes in a blockchain run the same client software, and this client software turns out to have a bug.
  • All nodes in a blockchain run the same client software, and the development team of this software turns out to be socially corrupted.
  • The research team that is proposing protocol upgrades turns out to be socially corrupted.
  • In a proof of work blockchain, 70% of miners are in the same country, and the government of this country decides to seize all mining farms for national security purposes.
  • The majority of mining hardware is built by the same company, and this company gets bribed or coerced into implementing a backdoor that allows this hardware to be shut down at will.
  • In a proof of stake blockchain, 70% of the coins at stake are held at one exchange.

A holistic view of fault tolerance decentralization would look at all of these aspects, and see how they can be minimized. Some natural conclusions that arise are fairly obvious:

  • It is crucially important to have multiple competing implementations.
  • The knowledge of the technical considerations behind protocol upgrades must be democratized, so that more people can feel comfortable participating in research discussions and criticizing protocol changes that are clearly bad.
  • Core developers and researchers should be employed by multiple companies or organizations (or, alternatively, many of them can be volunteers).
  • Mining algorithms should be designed in a way that minimizes the risk of centralization.
  • Ideally we use proof of stake to move away from hardware centralization risk entirely (though we should also be cautious of new risks that pop up due to proof of stake).

Note that the fault tolerance requirement in its naive form focuses on architectural decentralization, but once you start thinking about fault tolerance of the community that governs the protocol’s ongoing development, then political decentralization is important too.


Now, let’s look at attack resistance. In some pure economic models, you sometimes get the result that decentralization does not even matter. If you create a protocol where the validators are guaranteed to lose $50 million if a 51% attack (ie. finality reversion) happens, then it doesn’t really matter if the validators are controlled by one company or 100 companies — $50 million economic security margin is $50 million economic security margin. In fact, there are deep game-theoretic reasons why centralization may even maximize this notion of economic security (the transaction selection model of existing blockchains reflects this insight, as transaction inclusion into blocks through miners/block proposers is actually a very rapidly rotating dictatorship).

However, once you adopt a richer economic model, and particularly one that admits the possibility of coercion (or much milder things like targeted DoS attacks against nodes), decentralization becomes more important. If you threaten one person with death, suddenly $50 million will not matter to them as much anymore. But if the $50 million is spread between ten people, then you have to threaten ten times as many people, and do it all at the same time. In general, the modern world is in many cases characterized by an attack/defense asymmetry in favor of the attacker — a building that costs $10 million to build may cost less than $100,000 to destroy. However, the attacker’s leverage is often sublinear: if a building that costs $10 million to build costs $100,000 to destroy, a building that costs $1 million to build may realistically cost perhaps $30,000 to destroy. Smaller gives better ratios.

What does this reasoning lead to? First of all, it pushes strongly in favor of proof of stake over proof of work, as computer hardware is easy to detect, regulate, or attack, whereas coins can be much more easily hidden (proof of stake also has strong attack resistance for other reasons). Second, it is a point in favor of having widely distributed development teams, including geographic distribution. Third, it implies that both the economic model and the fault-tolerance model need to be looked at when designing consensus protocols.


Finally, we can get to perhaps the most intricate argument of the three, collusion resistance. Collusion is difficult to define; perhaps the only truly valid way to put it is to simply say that collusion is “coordination that we don’t like”. There are many situations in real life where even though having perfect coordination between everyone would be ideal, one sub-group being able to coordinate while the others cannot is dangerous.

One simple example is antitrust law — deliberate regulatory barriers that get placed in order to make it more difficult for participants on one side of the marketplace to come together and act like a monopolist and get outsized profits at the expense of both the other side of the marketplace and general social welfare. Another example is rules against active coordination between candidates and super-PACs in the United States, though those have proven difficult to enforce in practice. A much smaller example is a rule in some chess tournaments preventing two players from playing many games against each other to try to raise one player’s score. No matter where you look, attempts to prevent undesired coordination in sophisticated institutions are everywhere.

In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than 1/3 of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference?

Blockchain advocates also make the point that blockchains are more secure to build on because they can’t just change their rules arbitrarily on a whim whenever they want to, but this case would be difficult to defend if the developers of the software and protocol were all working for one company, were part of one family and sat in one room. The whole point is that these systems should not act like self-interested unitary monopolies. Hence, you can certainly make a case that blockchains would be more secure if they were more discoordinated.

However, this presents a fundamental paradox. Many communities, including Ethereum’s, are often praised for having a strong community spirit and being able to coordinate quickly on implementing, releasing and activating a hard fork to fix denial-of-service issues in the protocol within six days. But how can we foster and improve this good kind of coordination, but at the same time prevent “bad coordination” that consists of miners trying to screw everyone else over by repeatedly coordinating 51% attacks?


There are three ways to answer this:

  • Don’t bother mitigating undesired coordination; instead, try to build protocols that can resist it.
  • Try to find a happy medium that allows enough coordination for a protocol to evolve and move forward, but not enough to enable attacks.
  • Try to make a distinction between beneficial coordination and harmful coordination, and make the former easier and the latter harder.

The first approach makes up a large part of the Casper design philosophy. However, it by itself is insufficient, as relying on economics alone fails to deal with the other two categories of concerns about decentralization. The second is difficult to engineer explicitly, especially for the long term, but it does often happen accidentally. For example, the fact that bitcoin’s core developers generally speak English but miners generally speak Chinese can be viewed as a happy accident, as it creates a kind of “bicameral” governance that makes coordination more difficult, with the side benefit of reducing the risk of common mode failure, as the English and Chinese communities will reason at least somewhat separately due to distance and communication difficulties and are therefore less likely to both make the same mistake.

The third is a social challenge more than anything else; solutions in this regard may include:

  • Social interventions that try to increase participants’ loyalty to the community around the blockchain as a whole and substitute or discourage the possibility of the players on one side of a market becoming directly loyal to each other.
  • Promoting communication between different “sides of the market” in the same context, so as to reduce the possibility that either validators or developers or miners begin to see themselves as a “class” that must coordinate to defend their interests against other classes.
  • Designing the protocol in such a way as to reduce the incentive for validators/miners to engage in one-to-one “special relationships”, centralized relay networks and other similar super-protocol mechanisms.
  • Clear norms about what fundamental properties the protocol is supposed to have, and what kinds of things should not be done, or at least should be done only under very extreme circumstances.

This third kind of decentralization, decentralization as undesired-coordination-avoidance, is thus perhaps the most difficult to achieve, and tradeoffs are unavoidable. Perhaps the best solution may be to rely heavily on the one group that is guaranteed to be fairly decentralized: the protocol’s users.

A Brief History of Blockchain

Many of the technologies we now take for granted were quiet revolutions in their time. Just think about how much smartphones have changed the way we live and work. It used to be that when people were out of the office, they were gone, because a telephone was tied to a place, not to a person. Now we have global nomads building new businesses straight from their phones. And to think: Smartphones have been around for merely a decade.

We’re now in the midst of another quiet revolution: blockchain, a distributed database that maintains a continuously growing list of ordered records, called “blocks.” Consider what’s happened in just the past 10 years:

  • The first major blockchain innovation was bitcoin, a digital currency experiment. The market cap of bitcoin now hovers between $10 billion and $20 billion, and it is used by millions of people for payments, including a large and growing remittances market.
  • The second innovation was called blockchain, which was essentially the realization that the underlying technology that operated bitcoin could be separated from the currency and used for all kinds of other interorganizational cooperation. Almost every major financial institution in the world is doing blockchain research at the moment, and 15% of banks are expected to be using blockchain in 2017.
  • The third innovation was called the “smart contract,” embodied in a second-generation blockchain system called ethereum, which built little computer programs directly into blockchain that allowed financial instruments, like loans or bonds, to be represented, rather than only the cash-like tokens of the bitcoin. The ethereum smart contract platform now has a market cap of around a billion dollars, with hundreds of projects headed toward the market.
  • The fourth major innovation, the current cutting edge of blockchain thinking, is called “proof of stake.” Current generation blockchains are secured by “proof of work,” in which the group with the largest total computing power makes the decisions. These groups are called “miners” and operate vast data centers to provide this security, in exchange for cryptocurrency payments. The new systems do away with these data centers, replacing them with complex financial instruments, for a similar or even higher degree of security. Proof-of-stake systems are expected to go live later this year.
  • The fifth major innovation on the horizon is called blockchain scaling. Right now, in the blockchain world, every computer in the network processes every transaction. This is slow. A scaled blockchain accelerates the process, without sacrificing security, by figuring out how many computers are necessary to validate each transaction and dividing up the work efficiently. To manage this without compromising the legendary security and robustness of blockchain is a difficult problem, but not an intractable one. A scaled blockchain is expected to be fast enough to power the internet of things and go head-to-head with the major payment middlemen (VISA and SWIFT) of the banking world.

This innovation landscape represents just 10 years of work by an elite group of computer scientists, cryptographers, and mathematicians. As the full potential of these breakthroughs hits society, things are sure to get a little weird. Self-driving cars and drones will use blockchains to pay for services like charging stations and landing pads. International currency transfers will go from taking days to an hour, and then to a few minutes, with a higher degree of reliability than the current system has been able to manage.

These changes, and others, represent a pervasive lowering of transaction costs. When transaction costs drop past invisible thresholds, there will be sudden, dramatic, hard-to-predict aggregations and disaggregations of existing business models. For example, auctions used to be narrow and local, rather than universal and global, as they are now on sites like eBay. As the costs of reaching people dropped, there was a sudden change in the system. Blockchain is reasonably expected to trigger as many of these cascades as e-commerce has done since it was invented, in the late 1990s.

Predicting what direction it will all take is hard. Did anybody see social media coming? Who would have predicted that clicking on our friends’ faces would replace time spent in front of the TV? Predictors usually overestimate how fast things will happen and underestimate the long-term impacts. But the sense of scale inside the blockchain industry is that the changes coming will be “as large as the original invention of the internet,” and this may not be overstated. What we can predict is that as blockchain matures and more people catch on to this new mode of collaboration, it will extend into everything from supply chains to provably fair internet dating (eliminating the possibility of fake profiles and other underhanded techniques). And given how far blockchain has come in 10 years, perhaps the future could indeed arrive sooner than any of us think.

Until the late 1990s it was impossible to process a credit card securely on the internet — e-commerce simply did not exist. How fast could blockchain bring about another revolutionary change? Consider that Dubai’s blockchain strategy (disclosure: I designed it) is to issue all government documents on blockchain by 2020, with substantial initial projects just announced to go live this year. The Internet of Agreements concept presented at the World Government Summit builds on this strategy to envision a substantial transformation of global trade, using blockchains to smooth out some of the bumps caused by Brexit and the recent U.S. withdrawal from the Trans-Pacific Partnership. These ambitious agendas will have to be proven in practice, but the expectation in Dubai is that cost savings and innovation benefits will more than justify the cost of experimentation. As Mariana Mazzucato teaches in The Entrepreneurial State, the cutting edge of innovation, particularly in infrastructure, is often in the hands of the state, and that seems destined to be true in the blockchain space.

Why is the Blockchain Considered Tamper-Proof?

From a computer science perspective, a blockchain is simply a data structure used to store data. There are many other data structures available, such as databases (rows, columns), comma-separated values (CSV files), text files, etc. However, the blockchain has some unique properties, which I will explain below.

The easiest way to imagine a blockchain is to think of it like a book. Just like a book has numerous pages, a blockchain has numerous blocks in linear order. In a book, each page has the contents (i.e. the story) and additional information in the header, such as the page number. Similarly, each block has the contents (i.e. the individual payment transactions) and additional information in the header to point to the previous block. This is where things get really interesting. Each block has a unique fingerprint, which is generated based on the contents of the block (i.e. the payment transactions) and the fingerprint of the previous block. The implication is that if anyone tried to remove, change or replace one of the blocks in the chain, then the fingerprint stored in the following block would no longer correctly reference it. The new fingerprint of the changed block would no longer link up to the next block in the sequence. The chain would be broken, and the intrusion would be detected.

The only way to change a block is to also change all of the blocks that follow it. The further in the past a block is located, the harder it is to change. Fortunately, because changing or adding new blocks is a very complex and expensive process, regenerating multiple blocks is simply not a feasible option. Hence the blockchain is considered to be immutable or tamper-proof. In a future article I will discuss how blocks are added to the blockchain, and the complexity involved.
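The book-and-fingerprint analogy above can be sketched in a few lines of Python. This is a toy model, not the actual block format of any real blockchain: SHA-256 stands in for the fingerprint, and each block commits to both its own contents and the previous block's fingerprint, so editing any one block breaks every link after it.

```python
import hashlib

def fingerprint(prev_fp: str, contents: str) -> str:
    """A block's fingerprint depends on its contents AND the previous
    block's fingerprint -- this is what chains the blocks together."""
    return hashlib.sha256((prev_fp + contents).encode()).hexdigest()

def build_chain(pages):
    chain, prev = [], "0" * 64  # genesis block has no predecessor
    for contents in pages:
        fp = fingerprint(prev, contents)
        chain.append({"contents": contents, "prev": prev, "fp": fp})
        prev = fp
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        # Recompute each fingerprint; any edit anywhere breaks the chain.
        if block["prev"] != prev or block["fp"] != fingerprint(prev, block["contents"]):
            return False  # intrusion detected
        prev = block["fp"]
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"])
assert verify(chain)

# Tamper with a block in the middle: verification now fails.
chain[1]["contents"] = "Bob pays Carol 200"
assert not verify(chain)
```

Real chains add much more (Merkle trees over transactions, proof of work in the header), but the detection mechanism is exactly this recomputation of linked fingerprints.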

Ethereum Official Documentation

What is Ethereum?

Ethereum is an open blockchain platform that lets anyone build and use decentralized applications that run on blockchain technology. Like Bitcoin, no one controls or owns Ethereum – it is an open-source project built by many people around the world. But unlike the Bitcoin protocol, Ethereum was designed to be adaptable and flexible. It is easy to create new applications on the Ethereum platform, and with the Homestead release, it is now safe for anyone to use those applications.

A next generation blockchain

Blockchain technology is the technological basis of Bitcoin, first described by its mysterious author Satoshi Nakamoto in his white paper “Bitcoin: A Peer-to-Peer Electronic Cash System”, published in 2008. While the use of blockchains for more general uses was already discussed in the original paper, it was not until a few years later that blockchain technology emerged as a generic term. A blockchain is a distributed computing architecture where every network node executes and records the same transactions, which are grouped into blocks. Only one block can be added at a time, and every block contains a mathematical proof that verifies that it follows in sequence from the previous block. In this way, the blockchain’s “distributed database” is kept in consensus across the whole network. Individual user interactions with the ledger (transactions) are secured by strong cryptography. Nodes that maintain and verify the network are incentivized by mathematically enforced economic incentives coded into the protocol.

In Bitcoin’s case the distributed database is conceived of as a table of account balances, a ledger, and transactions are transfers of the bitcoin token to facilitate trustless finance between individuals. But as bitcoin began attracting greater attention from developers and technologists, novel projects began to use the bitcoin network for purposes other than transfers of value tokens. Many of these took the form of “alt coins” – separate blockchains with cryptocurrencies of their own which improved on the original bitcoin protocol to add new features or capabilities. In late 2013, Ethereum’s inventor Vitalik Buterin proposed that a single blockchain with the capability to be reprogrammed to perform any arbitrarily complex computation could subsume these many other projects.

In 2014, Ethereum founders Vitalik Buterin, Gavin Wood and Jeffrey Wilcke began work on a next-generation blockchain that had the ambitions to implement a general, fully trustless smart contract platform.

Ethereum Virtual Machine

Ethereum is a programmable blockchain. Rather than give users a set of pre-defined operations (e.g. bitcoin transactions), Ethereum allows users to create their own operations of any complexity they wish. In this way, it serves as a platform for many different types of decentralized blockchain applications, including but not limited to cryptocurrencies.

Ethereum in the narrow sense refers to a suite of protocols that define a platform for decentralised applications. At the heart of it is the Ethereum Virtual Machine (“EVM”), which can execute code of arbitrary algorithmic complexity. In computer science terms, Ethereum is “Turing complete”. Developers can create applications that run on the EVM using friendly programming languages modelled on existing languages like JavaScript and Python.

Like any blockchain, Ethereum also includes a peer-to-peer network protocol. The Ethereum blockchain database is maintained and updated by many nodes connected to the network. Each and every node of the network runs the EVM and executes the same instructions. For this reason, Ethereum is sometimes described evocatively as a “world computer”.

This massive parallelisation of computing across the entire Ethereum network is not done to make computation more efficient. In fact, this process makes computation on Ethereum far slower and more expensive than on a traditional “computer”. Rather, every Ethereum node runs the EVM in order to maintain consensus across the blockchain. Decentralized consensus gives Ethereum extreme levels of fault tolerance, ensures zero downtime, and makes data stored on the blockchain forever unchangeable and censorship-resistant.

The Ethereum platform itself is featureless or value-agnostic. Similar to programming languages, it is up to entrepreneurs and developers to decide what it should be used for. However, it is clear that certain application types benefit more than others from Ethereum’s capabilities. Specifically, ethereum is suited for applications that automate direct interaction between peers or facilitate coordinated group action across a network. For instance, applications for coordinating peer-to-peer marketplaces, or the automation of complex financial contracts. Bitcoin allows for individuals to exchange cash without involving any middlemen like financial institutions, banks, or governments. Ethereum’s impact may be more far-reaching. In theory, financial interactions or exchanges of any complexity could be carried out automatically and reliably using code running on Ethereum. Beyond financial applications, any environments where trust, security, and permanence are important – for instance, asset-registries, voting, governance, and the internet of things – could be massively impacted by the Ethereum platform.

How does Ethereum work?

Ethereum incorporates many features and technologies that will be familiar to users of Bitcoin, while also introducing many modifications and innovations of its own.

Whereas the Bitcoin blockchain was purely a list of transactions, Ethereum’s basic unit is the account. The Ethereum blockchain tracks the state of every account, and all state transitions on the Ethereum blockchain are transfers of value and information between accounts. There are two types of accounts:

  • Externally Owned Accounts (EOAs), which are controlled by private keys
  • Contract Accounts, which are controlled by their contract code and can only be “activated” by an EOA

For most users, the basic difference between these is that human users control EOAs – because they can control the private keys which give control over an EOA. Contract accounts, on the other hand, are governed by their internal code. If they are “controlled” by a human user, it is because they are programmed to be controlled by an EOA with a certain address, which is in turn controlled by whoever holds the private keys that control that EOA. The popular term “smart contracts” refers to code in a Contract Account – programs that execute when a transaction is sent to that account. Users can create new contracts by deploying code to the blockchain.
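The account model described above can be sketched as a toy state machine. Everything here is hypothetical and heavily simplified (the account names, the `split` contract, and the flat state layout are illustrative, not Ethereum's actual data structures): state maps accounts to balances, a transaction is a state transition originated by an EOA, and contract code runs only when a transaction reaches it.

```python
# Hypothetical toy state: two EOAs and one contract account.
accounts = {
    "alice":    {"balance": 100, "code": None},     # EOA (private key holder)
    "bob":      {"balance": 50,  "code": None},     # EOA
    "splitter": {"balance": 0,   "code": "split"},  # contract account
}

def apply_transaction(sender, to, value):
    """A state transition: move value between accounts. If the target is
    a contract account, its code runs -- but only because an EOA sent a
    transaction; contract accounts never act on their own."""
    assert accounts[sender]["code"] is None, "only EOAs originate transactions"
    assert accounts[sender]["balance"] >= value, "insufficient balance"
    accounts[sender]["balance"] -= value
    accounts[to]["balance"] += value
    if accounts[to]["code"] == "split":
        # Deterministic contract logic: forward half the value to bob.
        half = value // 2
        accounts[to]["balance"] -= half
        accounts["bob"]["balance"] += half

apply_transaction("alice", "splitter", 10)
# State after the transition: alice 90, splitter 5, bob 55.
```

Note that the contract's behavior is fully deterministic, which is exactly the property the next paragraph explains: every node must reach the same resulting state from the same transaction.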

Contract accounts only perform an operation when instructed to do so by an EOA. So it is not possible for a contract account to perform native operations like random number generation or API calls – it can do these things only if prompted by an EOA. This is because Ethereum requires nodes to be able to agree on the outcome of every computation, which requires a guarantee of strictly deterministic execution.

Like in Bitcoin, users must pay small transaction fees to the network. This protects the Ethereum blockchain from frivolous or malicious computational tasks, like DDoS attacks or infinite loops. The sender of a transaction must pay for each step of the “program” they activated, including computation and memory storage. These fees are paid in amounts of Ethereum’s native value-token, ether.
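
As a rough sketch of this metering model, each step of a program consumes “gas”, and the sender pays gas used times the gas price in ether. The per-operation costs and the price below are illustrative, not the actual EVM gas schedule:

```python
# Hypothetical per-operation gas costs (illustrative, not the real EVM schedule).
GAS_COSTS = {"ADD": 3, "MUL": 5, "SSTORE": 20000}

def fee_in_ether(operations, gas_price_gwei):
    """Sum the gas for each step, then convert gas * price to ether."""
    gas_used = sum(GAS_COSTS[op] for op in operations)
    fee_wei = gas_used * gas_price_gwei * 10**9   # 1 gwei = 10^9 wei
    return fee_wei / 10**18                       # 1 ether = 10^18 wei

# A "program" that does one addition and one storage write:
fee = fee_in_ether(["ADD", "SSTORE"], gas_price_gwei=2)
```

Note that storage is far more expensive than arithmetic, which is what discourages frivolous use of the shared blockchain.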

These transaction fees are collected by the nodes that validate the network. These “miners” are nodes in the Ethereum network that receive, propagate, verify, and execute transactions. The miners then group the transactions – which include many updates to the “state” of accounts in the Ethereum blockchain – into what are called “blocks”, and miners then compete with one another for their block to be the next one to be added to the blockchain. Miners are rewarded with ether for each successful block they mine. This provides the economic incentive for people to dedicate hardware and electricity to the Ethereum network.

Just as in the Bitcoin network, miners are tasked with solving a complex mathematical problem in order to successfully “mine” a block. This is known as a “Proof of Work”. Any computational problem that requires orders of magnitude more resources to solve algorithmically than it takes to verify the solution is a good candidate for proof of work. In order to discourage centralization due to the use of specialized hardware (e.g. ASICs), as has occurred in the Bitcoin network, Ethereum chose a memory-hard computational problem. If the problem requires memory as well as CPU, the ideal hardware is in fact the general computer. This makes Ethereum’s Proof of Work ASIC-resistant, allowing a more decentralized distribution of security than blockchains whose mining is dominated by specialized hardware, like Bitcoin.
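
The solve/verify asymmetry can be illustrated with a minimal hash-based puzzle. This toy sketch is not Ethash, Ethereum’s actual memory-hard algorithm; it only shows why finding a valid nonce is expensive while checking one takes a single hash:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose hash falls below the target (expensive)."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a proposed solution takes a single hash (cheap)."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"block 1: alice pays bob 5 ether", difficulty_bits=16)
assert verify(b"block 1: alice pays bob 5 ether", nonce, 16)
```

Raising `difficulty_bits` roughly doubles the expected mining work per bit, while verification stays one hash regardless.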

Introduction to Self-Sovereign Identity

A gentle introduction to self-sovereign identity

In May 2017, the Indian Centre for Internet and Society think tank published a report detailing the ways in which India’s national identity database (Aadhaar) is leaking potentially compromising personal information. The information relates to over 130 million Indian nationals.  The leaks create a great opportunity for financial fraud, and cause irreversible harm to the privacy of the individuals concerned.

It is clear that the central identity repository model has deficiencies.  This post describes a new paradigm for managing our digital identities: self-sovereign identity.

Self-sovereign identity is the concept that people and businesses can store their own identity data on their own devices, and provide it efficiently to those who need to validate it, without relying on a central repository of identity data.  It’s a digital way of doing what we do today with bits of paper.  This has benefits compared with both current manual processes and central repositories such as India’s Aadhaar.

Efficient identification processes promote financial inclusion.  By lowering the cost to banks of opening accounts for small businesses, financing becomes profitable for the banks and therefore accessible for the small businesses.

What are important concepts in identity?

There are three parts to identity: claims, proofs, and attestations.

Claims

An identity claim is an assertion made by the person or business:

“My name is Antony and my date of birth is 1 Jan 1901”

Proofs

A proof is some form of document that provides evidence for the claim. Proofs come in all sorts of formats. Usually for individuals it’s photocopies of passports, birth certificates, and utility bills. For companies it’s a bundle of incorporation and ownership-structure documents.

Attestations

An attestation is when a third party validates that according to their records, the claims are true. For example a University may attest to the fact that someone studied there and earned a degree. An attestation from the right authority is more robust than a proof, which may be forged. However, attestations are a burden on the authority as the information can be sensitive. This means that the information needs to be maintained so that only specific people can access it.

What’s the identity problem?

Banks need to understand their new customers and business clients to check eligibility, and to prove to regulators that they (the banks) are not banking baddies. They also need to keep the information they have on their clients up to date.

The problems are:

  • Proofs are usually unstructured data, taking the form of images and photocopies. This means that someone in the bank has to manually read and scan the documents to extract the relevant data to type into a system for storage and processing.
  • When the data changes in real life (such as a change of address, or a change in a company’s ownership structure), the customer is obliged to tell the various financial service providers they have relationships with.
  • Some forms of proof (eg photocopies of original documents) can be easily faked, meaning extra steps to prove authenticity need to be taken, such as having photocopies notarised, leading to extra friction and expense.

This results in expensive, time-consuming and troublesome processes that annoy everyone.

[Image: the current KYC process]

What are the technical improvements?

Whatever style of overall solution is used, the three problems outlined above need to be solved technically. A combination of standards and digital signatures works well.

The technical solution for improving on unstructured data is for data to be stored and transported in a machine-readable structured format, ie text in boxes with standard labels.
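
For example, the facts behind a proof document might be captured as labelled, machine-readable fields that any system can parse without manual re-keying (the field names here are illustrative, not a real standard):

```python
import json

# The facts behind a proof document, as labelled machine-readable fields.
claim = {
    "full_name": "Antony",
    "date_of_birth": "1901-01-01",
    "address": {"line1": "1 Example Street", "city": "London", "country": "GB"},
}

structured = json.dumps(claim, sort_keys=True)   # ready to transport or parse
```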

The technical solution for managing changes in data is a common method for updating all the necessary entities: APIs that let you connect, authenticate yourself (proving it’s your account), and update your details.

The technical solution for proving authenticity of identity proofs is digitally signed attestations, possibly time-bound. A digitally signed proof is as good as an attestation because the digital signature cannot be forged. Digital signatures have two properties that make them inherently better than paper documents:

  1. Digital signatures become invalid if there are any changes to the signed document. In other words, they guarantee the integrity of the document.
  2. Digital signatures cannot be ‘lifted’ and copied from one document to another.
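
Both properties can be demonstrated in a few lines. Here a keyed MAC (HMAC) stands in for a real asymmetric signature scheme such as ECDSA or Ed25519; the tamper-evidence behaves the same way, though real digital signatures use public/private key pairs so anyone can verify without the secret:

```python
import hashlib
import hmac

def sign(key: bytes, document: bytes) -> bytes:
    """Produce a tag bound to both the key and the exact document bytes."""
    return hmac.new(key, document, hashlib.sha256).digest()

def verify(key: bytes, document: bytes, signature: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(key, document), signature)

key = b"attester-secret"
doc = b"Name: Antony, DOB: 1 Jan 1901"
sig = sign(key, doc)

assert verify(key, doc, sig)                                   # original verifies
assert not verify(key, b"Name: Antony, DOB: 1 Jan 1801", sig)  # property 1: any change invalidates
assert not verify(key, b"a different document", sig)           # property 2: cannot be 'lifted'
```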

What’s the centralised solution?

A common solution for identity management is a central repository. A third party owns and controls a repository of many people’s identities. The customer enters their facts into the system, and uploads supporting evidence. Whoever needs this can access this data (with permission from the client of course), and can systematically suck this data into their own systems. If details change, the customer updates it once, and can push the change to the connected banks.

[Image: centralised identity solutions]

This sounds wonderful, and it certainly offers some benefits. But there are problems with this model.

What are the problems with centralised solutions?

1. Toxic data

Being in charge of this identity repository is a double-edged sword. On the one hand, an operator can make money by charging for a convenient utility. On the other hand, this data is a liability: a central identity repository is a goldmine for hackers, and a cybersecurity headache for its operator.

If a hacker can get into the systems and copy the data, they can sell the digital identities and their documentary evidence to other baddies. These baddies can then steal the identities and commit fraud and crimes while using the names of the innocent. This can and does wreck the lives of the innocent, and creates a significant liability for the operator.

2. Jurisdictional politics

Regulators want personal data to be stored within the geographical boundaries of the jurisdiction under their control. So it can be difficult to create international identity repositories, because there is always an argument about which country should warehouse the data, who can access it, and from where.

3. Monopolistic tendencies

This isn’t a problem for the central repository operators, but it’s a problem for the users. If a utility operator gains enough traction, network effects lead to more users. The utility operator can become a quasi-monopoly. Operators of monopolistic functions tend to become resistant to change; they overcharge and don’t innovate due to a lack of competitive pressure. This is wonderful for the operator, but is at the expense of the users.

What’s the decentralised answer?

Is it a blockchain?

A blockchain is a type of distributed ledger where all data is replicated to all participants in real time. Should identity data be stored on a blockchain that is managed by a number of participating entities (say, the bigger banks)? No:

  1. Replicating all identity data to all parties breaks all kinds of regulations about keeping personal data onshore within a jurisdiction; only storing personal data that is relevant to the business; and only storing data that the customer has consented to.
  2. The cybersecurity risk is increased. If one central data store is difficult enough to secure, now you’re replicating this data to multiple parties, each with their own cybersecurity practices and gaps. This makes it easier for an attacker to steal the data.

What if the identity data were encrypted?

  1. Encrypted personal data can still fall foul of personal data regulations.
  2. Why would the parties (eg banks) store and manage a bunch of identity data that they can’t see or use? What’s the upside?

So what’s the answer?

The emerging answer is “self-sovereign identity”. This digital concept is very similar to the way we keep our non-digital identities today.

Today, we keep passports, birth certificates, utility bills at home under our own control, maybe in an “important drawer”, and we share them when needed. We don’t store these bits of paper with a third party. Self-sovereign identity is the digital equivalent of what we do with bits of paper now.

How would self-sovereign identity work for the user?

You would have an app on a smartphone or computer, some sort of “identity wallet” where identity data would be stored on the hard drive of your device, maybe backed up on another device or on a personal backup solution, but crucially not stored in a central repository.

Your identity wallet would start off empty, with only a self-generated identification number derived from a public key, and a corresponding private key (like a password, used to create digital signatures). This keypair differs from a username and password because it is created by the user by “rolling dice and doing some maths”, rather than by requesting a username/password combination from a third party.

At this stage, no one else in the world knows about this identification number. No one issued it to you; you created it yourself. It is self-sovereign. The mathematics of large random numbers ensures that no one else will generate the same identification number as you.
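
A minimal sketch of this self-generation, using a random secret and hashes as stand-ins for a real elliptic-curve key pair (real wallets derive the public key mathematically from the private key; the point here is only that everything happens locally, with no issuer):

```python
import hashlib
import secrets

# "Rolling dice": 256 bits of local randomness, no third party involved.
private_key = secrets.token_bytes(32)

# Stand-in derivation of a public key, and an identifier from that public key.
public_key = hashlib.sha256(b"public:" + private_key).digest()
identifier = hashlib.sha256(public_key).hexdigest()[:40]

# With 256 random bits, two users generating the same key is vanishingly
# unlikely (~2**-256), so the identifier is effectively unique.
```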

You then use this identification number, along with your identity claims, and get attestations from relevant authorities.

You can then use these attested claims as your identity information.

[Image: the self-sovereign identity public-key model]

Claims would be stored by typing text into standardised text fields, and saving photos or scans of documents.

Proofs would be stored by saving scans or photos of proof documents. However this would be for backward compatibility, because digitally signed attestations remove the need for proofs as we know them today.

Attestations – and here’s the neat bit – would be stored in this wallet too. These would be machine readable, digitally signed pieces of information, valid within certain time windows. The relevant authority would need to sign these with digital signatures – for example, passport agencies, hospitals, driving licence authorities, police, etc.

Need to know, but not more: Authorities could provide “bundles” of attested claims, such as “over 18”, “over 21”, “accredited investor”, “can drive cars” etc, for the user to use as they see fit. The identity owner would be able to choose which piece of information to pass to any requester. For example, if you need to prove you are over 18, you don’t need to share your date of birth, you just need a statement saying you are over 18, signed by the relevant authority.
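
A minimal sketch of what such an attested claim might look like as data. The field names are hypothetical, and an HMAC stands in for the authority’s real asymmetric signature; a real scheme would let anyone verify with the authority’s public key:

```python
import hashlib
import hmac
import json

authority_key = b"passport-agency-secret"   # hypothetical signing key

payload = {
    "subject": "did:example:antony",   # the holder's self-generated identifier
    "claim": {"over_18": True},        # no date of birth disclosed
    "valid_from": 1700000000,          # validity window (unix timestamps)
    "valid_until": 1731536000,
}
message = json.dumps(payload, sort_keys=True).encode()
attestation = {
    "payload": payload,
    "signature": hmac.new(authority_key, message, hashlib.sha256).hexdigest(),
}

def check(att: dict, key: bytes, now: int) -> bool:
    """Verify the signature and that the claim is inside its time window."""
    msg = json.dumps(att["payload"], sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    p = att["payload"]
    return (hmac.compare_digest(expected, att["signature"])
            and p["valid_from"] <= now <= p["valid_until"]
            and p["claim"].get("over_18", False))
```

The verifier learns only that the holder is over 18 within the validity window – nothing else.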

[Image: an attested “over 18” claim]

Sharing this kind of data is safer both for the identity provider and the recipient. The provider doesn’t need to overshare, and the recipient doesn’t need to store unnecessarily sensitive data – for example, if the recipient gets hacked, they are only storing “Over 18” flags, not dates of birth.

Even banks themselves could attest to the person having an account with them. We would first need to understand what liability they take on when they create these attestations. I would assume it would be no more than the liability they currently take on when they send you a bank statement, which you use as a proof of address elsewhere.

Data sharing

Data would be stored on the person’s device (as pieces of paper are stored at home today), and when requested, the person would approve a third party to collect specific data by tapping a notification on their device. We already have something similar to this: if you have ever used a service by “linking” your Facebook or LinkedIn account, this works the same way – but instead of the service going to Facebook’s servers to collect your personal data, it requests it from your phone, and you have granular control over what data is shared.

[Image: a self-sovereign identity platform]

Conclusion – and distributed ledgers

Who would orchestrate this? Well, perhaps this is where a distributed ledger may come in. The software, the network, and the workflow dance would need to be built, run, and maintained. Digital signatures require public and private keys that need to be managed, and certificates need to be issued, revoked, and refreshed. Identity data isn’t static; it needs to evolve according to some business logic.
