Appendix A: Cryptography & Encoding
This appendix contains various protocol details, covering cryptographic algorithms, auxiliary encodings, the chain specification, and erasure encoding.
A.1. Cryptographic Algorithms
A.1.1. Hash Functions
A.1.1.1. BLAKE2
BLAKE2 is a collection of cryptographic hash functions known for their high speed. Their design closely resembles that of BLAKE, which was a finalist in the SHA-3 competition.
Polkadot uses the Blake2b variant, which is optimized for 64-bit platforms. Unless otherwise specified, the Blake2b hash function with a 256-bit output is used whenever Blake2b is invoked in this document. The detailed specification and sample implementations of all variants of the Blake2 hash functions can be found in RFC 7693 (1).
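For illustration, Blake2b-256 can be computed with the Rust blake2 crate; a minimal sketch, assuming blake2 0.10:

    use blake2::{digest::consts::U32, Blake2b, Digest};

    // Blake2b parameterized with a 256-bit (32-byte) output,
    // the variant used throughout this document.
    type Blake2b256 = Blake2b<U32>;

    fn main() {
        let mut hasher = Blake2b256::new();
        hasher.update(b"example input");
        let hash = hasher.finalize();
        assert_eq!(hash.len(), 32);
        println!("{:02x?}", hash.as_slice());
    }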
A.1.2. Randomness
TBH
A.1.3. VRF
A Verifiable Random Function (VRF) is a mathematical operation that takes some input and produces a random number using a secret key along with a proof of authenticity that this random number was generated using the submitter’s secret key and the given input. Any challenger can verify the proof to ensure the random number generation is valid and has not been tampered with (for example, to the benefit of the submitter).
In Polkadot, VRFs are used for the BABE block production lottery (the Block-Production-Lottery algorithm) and the parachain approval voting mechanism (Section 8.5.). The VRF in use follows a mechanism similar to previously published VRF algorithms: it essentially generates a deterministic, elliptic-curve-based Schnorr signature as a verifiable random value. The elliptic curve group used in the VRF function is the Ristretto group.
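For illustration, the sr25519 VRF described below is implemented by the Rust schnorrkel crate; a minimal signing and verification sketch, assuming schnorrkel 0.11 and rand_core with the getrandom feature:

    use schnorrkel::{signing_context, Keypair};

    fn main() {
        let keypair = Keypair::generate_with(&mut rand_core::OsRng);

        // Build a domain-separated transcript from a context and a message
        // (see Definitions 184 and 185 below).
        let transcript = signing_context(b"substrate-babe-vrf").bytes(b"example input");

        // Produce the VRF output together with its DLEQ proof.
        let (io, proof, _batchable) = keypair.vrf_sign(transcript);

        // Any party holding the public key can verify output and proof.
        let transcript = signing_context(b"substrate-babe-vrf").bytes(b"example input");
        keypair
            .public
            .vrf_verify(transcript, &io.to_preout(), &proof)
            .expect("VRF output and proof should verify");
    }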
Definition 181. VRF Proof
The VRF proof proves the correctness of an associated VRF output. The VRF proof, P, is a data structure of the following format:

    P = (c, s)

where c is the challenge and s is the 32-byte Schnorr proof. Both are expressed as Curve25519 scalars as defined in Definition 182.
Definition 182. DLEQ Prove
The dleq_prove(t, i) function creates a proof for a given input, i, based on the provided transcript, t:

    dleq_prove(t, i) = (c, s)

First:

    t_1 = append(t, "proto-name", "DLEQProof")
    t_2 = append(t_1, "vrf:h", i)

Then the witness scalar, w, is calculated, where b is the 32-byte secret seed used for nonce generation in the context of sr25519:

    t_3 = meta-AD(t_2, "proving\00", more=False)
    t_4 = meta-AD(t_3, l_b, more=True)
    t_5 = KEY(t_4, b, more=False)
    t_6 = meta-AD(t_5, "rng", more=False)
    t_7 = KEY(t_6, r, more=False)
    t_8 = meta-AD(t_7, l_w, more=True)
    w   = PRF(t_8, more=False)

where l_b is the length of the witness, encoded as a 32-bit little-endian integer, r is a 32-byte array of randomness and the output, w, is a 32-byte array containing the secret witness scalar. The commitments are then appended:

    t_9  = append(t_8, "vrf:R", R)
    t_10 = append(t_9, "vrf:h^r", h_w)
    t_11 = append(t_10, "vrf:pk", pk)

where

- i is the compressed Ristretto point of the scalar input.
- pk is the compressed Ristretto point of the public key.
- h_w = w · i is the compressed Ristretto point of the scalar input multiplied by the witness.
- R is the compressed Ristretto point of the witness:

    R = w · B

where B is the Ristretto basepoint. For the 64-byte challenge:

    t_12 = meta-AD(t_11, "prove", more=False)
    t_13 = meta-AD(t_12, l_c, more=True)
    c    = PRF(t_13, more=False)

where l_c is the byte length of the challenge, 64, encoded as a 32-bit little-endian integer. And the Schnorr proof:

    s = w - c · sk

where sk is the secret key.
Definition 183. DLEQ Verify
The dleq_verify(i, o, P, pk) function verifies the VRF input, i, against the output, o, with the associated proof, P (Definition 181), and public key, pk:

    dleq_verify(i, o, P, pk) = True | False

where

R is calculated as:

    R = c · pk + s · B

where B is the Ristretto basepoint. h_r is calculated as:

    h_r = c · o + s · i

The verifier rebuilds the transcript of Definition 182, committing R and h_r in place of the prover's commitments, and recomputes the 64-byte challenge, c'. The challenge is valid if c' equals c:

    c' = c
A.1.3.1. Transcript
A VRF transcript serves as a domain-specific separator of cryptographic protocols and is represented as a mathematical object, as defined by Merlin, which defines how that object is generated and encoded. The usage of the transcript is implementation specific, such as for certain mechanisms in the Availability & Validity chapter (Chapter 8), and is therefore described in more detail in those protocols. The input value used to initiate the transcript is referred to as a context (Definition 184).
Definition 184. VRF Context
The VRF context is a constant byte array used to initiate the VRF transcript. The VRF context is constant for all users of the VRF for the specific context in which the VRF function is used. The context prevents VRF values generated by the same node for one purpose from being reused for another, unintended purpose. For example, the VRF context for the BABE block production lottery defined in Section 5.2. is set to "substrate-babe-vrf".
Definition 185. VRF Transcript
A transcript, or VRF transcript, is a STROBE object, T, as defined in the STROBE documentation, specifically section "5. State of a STROBE object":

    T = (S, pos, pos_begin, I_0)

where the duplex state, S, is a 200-byte array created by the keccak-f1600 sponge function on the initial STROBE state:

    S = keccak-f1600((0x01, R + 2, 0x01, 0x00, 0x01, 0x60) || "STROBEvX.Y.Z" || (0x00, ..., 0x00))

where R is of value 166 and X.Y.Z is of value 1.0.2. pos has the initial value of 0, pos_begin has the initial value of 0 and I_0 has the initial value of 0.

Then, the meta-AD operation (Definition 186) (where more=False) is used to add the protocol label "Merlin v1.0" to S, followed by appending (Section A.1.3.1.1.) the label "dom-sep" and its corresponding context, ctx, resulting in the final transcript, T. ctx serves as an arbitrary identifier/separator and its value is defined by the protocol specification individually. This transcript is treated just like a STROBE object, wherein any operations (Definition 186) on it modify its values such as S and pos.

Formally, when creating a transcript, we refer to it as Transcript(ctx).
Definition 186. STROBE Operations
STROBE operations are described in the STROBE specification, specifically section "6. Strobe operations". Operations are indicated by their corresponding bitfield, as described in section "6.2. Operations and flags", and implemented as described in section "7. Implementation of operations".
A.1.3.1.1. Messages
Appending messages, or "data", to the transcript (Definition 185) first requires meta-AD operations for the given label of the message, including the size of the message, followed by an AD operation on the message itself. The size of the message is a 4-byte, little-endian encoded integer:

    t_1 = meta-AD(t, l, more=False)
    t_2 = meta-AD(t_1, m_s, more=True)
    T   = AD(t_2, m, more=False)

where t is the transcript (Definition 185), l is the given label, m is the message and m_s its size. T is the resulting transcript with the appended data. STROBE operations are described in Definition 186.

Formally, when appending a message, we refer to it as append(t, l, m).
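For illustration, this construction is what the Rust merlin crate implements; a minimal sketch, assuming merlin 3.x:

    use merlin::Transcript;

    fn main() {
        // Transcript::new applies the "Merlin v1.0" protocol label and
        // appends the context under the "dom-sep" label (Definition 185).
        let mut transcript = Transcript::new(b"substrate-babe-vrf");

        // append_message performs the meta-AD/meta-AD(size)/AD sequence above.
        transcript.append_message(b"vrf:h", b"example input");

        // Deriving challenge bytes invokes the PRF operation on the state.
        let mut challenge = [0u8; 64];
        transcript.challenge_bytes(b"prove", &mut challenge);
        println!("{:02x?}", &challenge[..8]);
    }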
A.1.4. Cryptographic Keys
Various types of keys are used in Polkadot to prove the identity of the actors involved in the Polkadot Protocols. To improve the security of the users, each key type has its own unique function and must be treated differently, as described in this Section.
Definition 187. Account Key
An account key is a key pair of any of the schemes in the following table:
Table 2. List of public key schemes that can be used for an account key
Key Scheme | Description |
---|---|
sr25519 | Schnorr signature on Ristretto compressed ed25519 points as implemented in TODO |
ed25519 | The ed25519 signature complies with (4), except for the verification process, which adheres to the Ed25519 Zebra variant specified in (5). In short, the signature point is not assumed to be in the prime-order subgroup. As such, the verifier must explicitly clear the cofactor during the course of verifying the signature equation. |
secp256k1 | Only for outgoing transfer transactions. |
An account key can be used to sign transactions, among other account- and balance-related functions. Keys defined in Definition 187 and Definition 188 are created and managed by the user independently of the Polkadot implementation. The user notifies the network about the used keys by submitting a transaction.
Definition 188. Stash Key
The Stash key is a type of account that is intended to hold a large amount of funds. As a result, one may actively participate with a stash key, keeping the stash key offline in a secure location. It can also be used to designate a Proxy account to vote in governance proposals.
Controller accounts and controller keys are no longer supported. For more information about the deprecation, see the Polkadot wiki or a more detailed discussion in the Polkadot forum. If you want to know how to set up Stash and Staking Proxy Keys, you can also check the Polkadot wiki. The following definition will be removed soon.
Definition 189. Controller Key
The Controller key is a type of account key that acts on behalf of the Stash account. It signs transactions that make decisions regarding the nomination and the validation of the other keys. It is a key that will be in direct control of a user and should mostly be kept offline, used to submit manual extrinsics. It sets preferences like payout account and commission. If used for a validator, it certifies the session keys. It only needs the required funds to pay transaction fees [TODO: key needing fund needs to be defined].
Definition 190. Session Keys
Session keys are short-lived keys that are used to authenticate validator operations. Session keys are generated by the Polkadot Host and should be changed regularly for security reasons. Nonetheless, the Polkadot protocol enforces no validity period on session keys. The various types of keys used by the Polkadot Host are presented in Table 3:
Table 3. List of key schemes which are used for session keys depending on the protocol
Protocol | Key scheme |
---|---|
GRANDPA | ED25519 |
BABE | SR25519 |
I’m Online | SR25519 |
Parachain | SR25519 |
BEEFY | secp256k1 |
Session keys must be accessible by certain Polkadot Host APIs defined in Appendix B. Session keys are not meant to control the majority of the users’ funds and should only be used for their intended purpose.
A.1.4.1. Holding and staking funds
TBH
A.1.4.2. Designating a proxy for voting
TBH
A.2. Auxiliary Encodings
Definition 191. Unix Time
By Unix time, we refer to the unsigned, little-endian encoded 64-bit integer which stores the number of milliseconds that have elapsed since the Unix epoch, that is the time 00:00:00 UTC on 1 January 1970, minus leap seconds. Leap seconds are ignored, and every day is treated as if it contained exactly 86’400 seconds.
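A minimal Rust sketch of this encoding (the timestamp value is arbitrary):

    fn main() {
        // 2021-01-01 00:00:00 UTC expressed in milliseconds since the Unix epoch.
        let unix_time_ms: u64 = 1_609_459_200_000;
        // Stored as an 8-byte, little-endian array.
        let encoded: [u8; 8] = unix_time_ms.to_le_bytes();
        assert_eq!(u64::from_le_bytes(encoded), unix_time_ms);
    }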
A.2.1. Binary Encoding
Definition 192. Sequence of Bytes
By a sequence of bytes or a byte array, b, of length n, we refer to

    b = (b_0, b_1, ..., b_{n-1}),  0 ≤ b_i ≤ 255

We define B_n to be the set of all byte arrays of length n. Furthermore, we define:

    B = ⋃_{i=0}^∞ B_i

We represent the concatenation of byte arrays a = (a_0, ..., a_n) and b = (b_0, ..., b_m) by:

    a || b = (a_0, ..., a_n, b_0, ..., b_m)
Definition 193. Bitwise Representation
For a given byte b, 0 ≤ b ≤ 255, the bitwise representation in bits b_i ∈ {0, 1} is defined as:

    b = b_7 ... b_0

where

    b = 2^0 · b_0 + 2^1 · b_1 + ... + 2^7 · b_7
Definition 194. Little Endian
By the little-endian representation of a non-negative integer, I, represented as

    I = B_n B_{n-1} ... B_0

in base 256, we refer to a byte array B = (b_0, b_1, ..., b_n) such that b_i = B_i. Accordingly, we define the function Enc_LE:

    Enc_LE(I) = (B_0, B_1, ..., B_n)
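A minimal Rust sketch of Enc_LE using the language's built-in little-endian conversion:

    fn main() {
        // I = 0x010203 has base-256 digits B_2 B_1 B_0 = (1, 2, 3);
        // Enc_LE reverses them to (B_0, B_1, B_2) = (3, 2, 1).
        let i: u32 = 0x010203;
        assert_eq!(i.to_le_bytes(), [0x03, 0x02, 0x01, 0x00]);
    }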
Definition 195. UINT32
By UINT32, we refer to a non-negative integer stored in a byte array of length 4 using the little-endian encoding format.
A.2.2. SCALE Codec
The Polkadot Host uses the "Simple Concatenated Aggregate Little-Endian" (SCALE) codec to encode byte arrays as well as other data structures. SCALE provides a canonical encoding, which produces consistent hash values across implementations, including the Merkle hash proof for the State Storage.
Definition 196. Decoding
Dec_SC(d) refers to the decoding of a blob of data, d. Since the SCALE codec is not self-describing, it is up to the decoder to validate whether the blob of data can be deserialized into the given type or data structure.
It’s accepted behavior for the decoder to partially decode the blob of data. This means any additional data that does not fit into a data structure can be ignored.
Considering that the decoded data is never larger than the encoded message, this information can serve as a way to validate values that can vary in size, such as sequences (Definition 202). The decoder should strictly use the size of the encoded data as an upper bound when decoding in order to prevent denial of service attacks.
Definition 197. Tuple
The SCALE codec for a tuple, T, such that:

    T = (A_1, ..., A_n)

where the A_i's are values of different types, is defined as:

    Enc_SC(T) = Enc_SC(A_1) || Enc_SC(A_2) || ... || Enc_SC(A_n)
In the case of a tuple (or a structure), the knowledge of the shape of data is not encoded even though it is necessary for decoding. The decoder needs to derive that information from the context where the encoding/decoding is happening.
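For illustration, a minimal sketch with the Rust parity-scale-codec crate, assuming version 3.x with the derive feature:

    use parity_scale_codec::{Decode, Encode};

    // A structure is encoded as the mere concatenation of its fields;
    // no shape information is included (Definition 197).
    #[derive(Encode, Decode, PartialEq, Debug)]
    struct Example {
        a: u8,
        b: u32,
    }

    fn main() {
        let value = Example { a: 1, b: 2 };
        let encoded = value.encode();
        // One byte for `a`, then four little-endian bytes for `b`.
        assert_eq!(encoded, vec![1, 2, 0, 0, 0]);

        // The decoder must know the target type upfront (Definition 196).
        let decoded = Example::decode(&mut &encoded[..]).unwrap();
        assert_eq!(decoded, value);
    }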
Definition 198. Varying Data Type
We define a varying data type to be an ordered set of data types:

    T = {T_1, ..., T_n}

A value of varying data type T is a pair (i, v), where i = idx(T_j) for some T_j ∈ T and v is its value of type T_j, which can be empty. We define idx(T_j) = j - 1, unless it is explicitly defined as another value in the definition of a particular varying data type.

In particular, we define two specific varying data types which are frequently used in various parts of the Polkadot protocol: Option (Definition 200) and Result (Definition 201).
Definition 199. Encoding of Varying Data Type
The SCALE codec for a value A = (i, v) of varying data type T, formally referred to as Enc_SC(A), is defined as follows:

    Enc_SC(A) = Enc_SC(i) || Enc_SC(v)

where i is an 8-bit integer determining the type of A. In particular, for the Option type defined in Definition 200, we have:

    Enc_SC(None) = 0x00

The SCALE codec does not encode the correspondence between the value i and the data type it represents; the decoder needs prior knowledge of such correspondence to decode the data.
Definition 200. Option Type
The Option type is a varying data type of {None, Some} which indicates whether data of type T is available (referred to as the some state) or not (referred to as the empty, none or null state). The presence of type None, indicated by i = 0, implies that the data corresponding to type T is not available and contains no additional data, whereas the presence of type Some, indicated by i = 1, implies that the data is available.
Definition 201. Result Type
The Result type is a varying data type of {Ok, Err} which is used to indicate whether a certain operation or function was executed successfully (referred to as the "ok" state) or not (referred to as the "error" state). Ok, indicated by i = 0, implies success, whereas Err, indicated by i = 1, implies failure. Both types can either contain additional data or are defined as empty types otherwise.
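For illustration, a minimal sketch of both varying data types with parity-scale-codec, assuming version 3.x:

    use parity_scale_codec::Encode;

    fn main() {
        // Option: 0x00 for None; 0x01 followed by the value for Some.
        assert_eq!(None::<u16>.encode(), vec![0x00]);
        assert_eq!(Some(513u16).encode(), vec![0x01, 0x01, 0x02]);

        // Result: 0x00 prefix for Ok, 0x01 prefix for Err.
        let ok: Result<u8, u8> = Ok(42);
        let err: Result<u8, u8> = Err(9);
        assert_eq!(ok.encode(), vec![0x00, 42]);
        assert_eq!(err.encode(), vec![0x01, 9]);
    }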
Definition 202. Sequence
The SCALE codec for a sequence, S, such that:

    S = A_1, ..., A_n

where the A_i's are values of the same type (and the decoder is unable to infer the value of n from the context) is defined as:

    Enc_SC(S) = Enc_SC^Len(n) || Enc_SC(A_1) || Enc_SC(A_2) || ... || Enc_SC(A_n)

where Enc_SC^Len is defined in Definition 208.
In some cases, the length indicator is omitted if the length of the sequence is fixed and known by the decoder upfront. Such cases are explicitly stated by the definition of the corresponding type.
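For illustration, a minimal sequence-encoding sketch with parity-scale-codec, assuming version 3.x:

    use parity_scale_codec::Encode;

    fn main() {
        // The compact-encoded length (Definition 208) precedes the items:
        // Enc_SC^Len(3) = 3 << 2 = 0x0c.
        let seq: Vec<u8> = vec![1, 2, 3];
        assert_eq!(seq.encode(), vec![0x0c, 1, 2, 3]);

        // Strings are encoded as sequences of their UTF-8 bytes (Definition 205).
        assert_eq!("abc".to_string().encode(), vec![0x0c, b'a', b'b', b'c']);
    }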
Definition 203. Dictionary
The SCALE codec for a dictionary or hashtable, D, with key-value pairs (k_i, v_i) such that:

    D = {(k_1, v_1), ..., (k_n, v_n)}

is defined as the SCALE codec of D as a sequence of key-value pairs (as tuples):

    Enc_SC(D) = Enc_SC^Len(n) || Enc_SC((k_1, v_1)) || ... || Enc_SC((k_n, v_n))

where Enc_SC^Len(n) is encoded the same way as in Definition 202, but the argument n refers to the number of key-value pairs rather than the length.
Definition 204. Boolean
The SCALE codec for a boolean value, b, is defined as a single byte as follows:

    Enc_SC(b) = 0x01 if b = True
    Enc_SC(b) = 0x00 if b = False
Definition 205. String
The SCALE codec for a string value is an encoded sequence (Definition 202) consisting of UTF-8 encoded bytes.
Definition 206. Fixed Length
The SCALE codec, Enc_SC, for other types, such as fixed-length integers not otherwise defined here, is equal to the little-endian encoding of those values, as defined in Definition 194.
Definition 207. Empty
The SCALE codec, Enc_SC, for an empty type is defined as a byte array of zero length and depicted as ∅.
A.2.2.1. Length and Compact Encoding
SCALE length encoding is used to encode integer numbers of varying sizes, most prominently when encoding the length of arrays:
Definition 208. Length Encoding
SCALE length encoding, Enc_SC^Len, also known as compact encoding, of a non-negative integer n is defined as follows:

    Enc_SC^Len(n) = b = (b_0, b_1, ..., b_m)

where the two least significant bits of the first byte, b_0, of byte array b indicate the mode:

    b_0^1 b_0^0 = 00   if 0 ≤ n < 2^6        (single-byte mode)
    b_0^1 b_0^0 = 01   if 2^6 ≤ n < 2^14     (two-byte mode)
    b_0^1 b_0^0 = 10   if 2^14 ≤ n < 2^30    (four-byte mode)
    b_0^1 b_0^0 = 11   if 2^30 ≤ n           (big-integer mode)

and the rest of the bits of b store the value of n in little-endian format in base 2 as follows:

    n = b_m^7 ... b_0^2    if the mode is 00, 01 or 10
    n = b_m^7 ... b_1^0    if the mode is 11

such that:

    m = 0                      if the mode is 00
    m = 1                      if the mode is 01
    m = 3                      if the mode is 10
    m = b_0^7 ... b_0^2 + 4    if the mode is 11

Note that in big-integer mode the upper six bits of b_0 denote the length of the original integer being encoded, offset by 4, and do not include the extra byte describing the length. The encoding can be used for integers up to 2^536.
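A minimal Rust sketch of the four modes, restricted to values that fit in a u64 (the full big-integer mode extends beyond that):

    // SCALE compact (length) encoding of a non-negative integer.
    fn compact_encode(n: u64) -> Vec<u8> {
        match n {
            // Single-byte mode: the upper six bits hold the value.
            0..=0x3f => vec![(n as u8) << 2],
            // Two-byte mode: value shifted left by two, mode bits 01.
            0x40..=0x3fff => (((n << 2) as u16) | 0b01).to_le_bytes().to_vec(),
            // Four-byte mode: value shifted left by two, mode bits 10.
            0x4000..=0x3fff_ffff => (((n << 2) as u32) | 0b10).to_le_bytes().to_vec(),
            // Big-integer mode: the first byte stores (byte length - 4) and
            // mode bits 11, followed by the value in little-endian order.
            _ => {
                let len = 8 - (n.leading_zeros() as usize) / 8; // significant bytes
                let mut out = vec![(((len - 4) as u8) << 2) | 0b11];
                out.extend_from_slice(&n.to_le_bytes()[..len]);
                out
            }
        }
    }

    fn main() {
        assert_eq!(compact_encode(3), vec![0x0c]);        // 3 << 2
        assert_eq!(compact_encode(64), vec![0x01, 0x01]); // (64 << 2) | 0b01
        assert_eq!(compact_encode(1 << 30), vec![0x03, 0x00, 0x00, 0x00, 0x40]);
    }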
A.2.3. Hex Encoding
In practice, it is more convenient and efficient to store and process data as byte arrays. On the other hand, the trie keys are broken into 4-bit nibbles. Accordingly, we need a method to canonically encode sequences of 4-bit nibbles into byte arrays. To this aim, we define the hex encoding function as follows:
Definition 209. Hex Encoding
Suppose that PK = (k_1, ..., k_n) is a sequence of nibbles; then:

    Enc_HE(PK) = (16 · k_1 + k_2, ..., 16 · k_{2i-1} + k_{2i})         if n = 2i
    Enc_HE(PK) = (k_1, 16 · k_2 + k_3, ..., 16 · k_{2i} + k_{2i+1})    if n = 2i + 1
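A minimal Rust sketch of this packing (the helper name hex_encode is illustrative):

    // Packs a sequence of 4-bit nibbles into bytes per Definition 209;
    // with an odd count, the first nibble occupies its own byte.
    fn hex_encode(nibbles: &[u8]) -> Vec<u8> {
        assert!(nibbles.iter().all(|&k| k < 16), "inputs must be nibbles");
        let mut out = Vec::with_capacity((nibbles.len() + 1) / 2);
        let mut rest = nibbles;
        if rest.len() % 2 == 1 {
            out.push(rest[0]); // lone leading nibble
            rest = &rest[1..];
        }
        for pair in rest.chunks(2) {
            out.push(16 * pair[0] + pair[1]);
        }
        out
    }

    fn main() {
        assert_eq!(hex_encode(&[0xa, 0xb]), vec![0xab]);
        assert_eq!(hex_encode(&[0x1, 0xa, 0xb]), vec![0x01, 0xab]);
    }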
A.3. Chain Specification
Chain Specification (chainspec) is a collection of information that describes the blockchain network. It includes the information required for a host to connect to and sync with the Polkadot network, for example, the initial nodes to communicate with, the protocol identifier, the initial state that the hosts agree on, etc. There is a set of core fields required by the Host and a set of extensions used by optionally implemented features of the Host. The fields of the chain specification are categorized into three parts:
- ChainSpec
- ChainSpec Extensions
- Genesis State, which is the only mandatory part of the chainspec.
A.3.1. Chain Spec
Chain specification contains information used by the Host to communicate with network participants and optionally send data to telemetry endpoints.
The client specification contains the fields below. The values used for the Polkadot chain are given with each field:
- name: The human-readable name of the chain.

      "name": "Polkadot"

- id: The id of the chain.

      "id": "polkadot"

- chainType: Possible values are Live, Development, or Local.

      "chainType": "Live"

- bootNodes: A list of MultiAddress that belong to boot nodes of the chain. The list of boot nodes for Polkadot can be found here.

- telemetryEndpoints: Optional list of "(multiaddress, verbosity)" pairs of telemetry endpoints. The verbosity goes from 0 to 9, with 0 being the mode with the lowest verbosity.

- forkId: Optional fork id. Should most likely be left empty. Can be used to signal a fork on the network level when two chains have the same genesis hash.

      "forkId": {}

- properties: Optional additional properties of the chain as subfields, including token symbol, token decimals, and address formats.

      "properties": {
        "ss58Format": 0,
        "tokenDecimals": 10,
        "tokenSymbol": "DOT"
      }
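Putting these fields together, an abridged and purely illustrative chainspec fragment could look as follows (the boot node and telemetry entries are placeholders, not actual Polkadot values):

    {
      "name": "Polkadot",
      "id": "polkadot",
      "chainType": "Live",
      "bootNodes": [
        "/dns/boot.example.org/tcp/30333/p2p/12D3KooW..."
      ],
      "telemetryEndpoints": [
        ["/dns/telemetry.example.org/tcp/443/x-parity-wss/%2Fsubmit%2F", 0]
      ],
      "properties": {
        "ss58Format": 0,
        "tokenDecimals": 10,
        "tokenSymbol": "DOT"
      }
    }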
A.3.2. Chain Spec Extensions
ChainSpec Extensions are additional parameters customizable from the chainspec and correspond to optional features implemented in the Host.
Definition 210. Bad Blocks Header
BadBlocks describes a list of block header hashes that are known a priori to be bad (not belonging to the canonical chain), allowing the host to explicitly avoid importing them. These block headers are always considered invalid and are filtered out before the block is imported:

    BadBlocks = (h_1, ..., h_n)

where h_i is a known invalid block header hash.
Definition 211. Fork Blocks
ForkBlocks describes a list of expected block header hashes at certain block heights. They are used to set trusted checkpoints, i.e., the host will refuse to import a block with a different hash at the given height. ForkBlocks are a useful mechanism for guiding the host to the right fork in instances where the chain is bricked (possibly due to issues in runtime upgrades):

    ForkBlocks = ((n_1, h_1), ..., (n_k, h_k))

where h_i is an a priori known valid block header hash at block height n_i. The host is expected to accept no block other than h_i at height n_i.
lightSyncState describes a check-pointing format for light clients. Its specification is currently Work-In-Progress.
A.3.3. Genesis State
The genesis state is a set of key-value pairs representing the initial state of the Polkadot state storage. It can be retrieved from the Polkadot repository. While each of those key-value pairs offers important identifiable information to the Runtime, to the Polkadot Host they are a transparent set of arbitrary chain- and network-dependent keys and values. The only exceptions to this are the :code (Section 2.6.2.) and :heappages (Section 2.6.3.1.) keys, which are used by the Polkadot Host to initialize the WASM environment and its Runtime. The other keys and values are unspecified and depend solely on the chain and, respectively, its corresponding Runtime. On initialization, the data should be inserted into the state storage with the Host API (Section B.2.1.).
As such, Polkadot does not define a formal genesis block. Nonetheless, for compatibility reasons in several algorithms, the Polkadot Host defines the genesis header (Definition 212). By abuse of terminology, the "genesis block" refers to the hypothetical parent of block number 1, which holds the genesis header as its header.
Definition 212. Genesis Header
The Polkadot genesis header is a data structure conforming to the block header format (Definition 10). It contains the following values:
Table 4. Table of Genesis Header Values
Block header field | Genesis Header Value |
---|---|
parent_hash | A 32-byte array of all zeros |
number | 0 |
state_root | Merkle hash of the state storage trie (Definition 29) after inserting the genesis state in it. |
extrinsics_root | Merkle hash of an empty trie. |
digest | An empty digest. |
Definition 213. Code Substitutes
Code Substitutes is a list of pairs of a block number and wasm_code. The given WASM code will be used to substitute the on-chain WASM code starting at the given block number until the spec_version on-chain changes. The substitute code should be as close as possible to the on-chain WASM code. A substitute should be used to fix a bug that cannot be fixed with a runtime upgrade if, for example, the runtime is constantly panicking. Introducing new Runtime APIs isn't supported, because the node will read the runtime version from the on-chain WASM code. Use this functionality only when there is no other way around it, and only to patch the problematic bug; the rest should be done with an on-chain runtime upgrade.
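In the chainspec JSON, this is expressed as a map from block number to code blob; a hypothetical fragment (the field name codeSubstitutes follows common Substrate chainspec usage, the block number is illustrative, and the code blob stays elided):

    {
      "codeSubstitutes": {
        "1000000": "0x..."
      }
    }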
A.4. Erasure Encoding
A.4.1. Erasure Encoding
Erasure Encoding has not been documented yet.