Appendix A: Cryptography & Encoding
Appendix chapter containing various protocol details
A.1. Cryptographic Algorithms
A.1.1. Hash Functions
A.1.1.1. BLAKE2
BLAKE2 is a family of cryptographic hash functions known for their high speed. Their design closely resembles BLAKE, which was a finalist in the SHA-3 competition.
Polkadot uses the Blake2b variant, which is optimized for 64-bit platforms. Unless otherwise specified, the Blake2b hash function with a 256-bit output is used whenever Blake2b is invoked in this document. The detailed specification and sample implementations of all variants of the Blake2 hash functions can be found in RFC 7693 (1).
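For illustration, a Blake2b hash with a 256-bit (32-byte) digest can be computed with the Rust blake2 crate; the following is a minimal sketch assuming version 0.10 of that crate and its variable-output API:

```rust
use blake2::Blake2bVar;
use blake2::digest::{Update, VariableOutput};

fn blake2b_256(data: &[u8]) -> [u8; 32] {
    // 32 bytes = 256-bit output, the default digest size used in this document.
    let mut hasher = Blake2bVar::new(32).expect("32 is a valid Blake2b output size");
    hasher.update(data);
    let mut out = [0u8; 32];
    hasher
        .finalize_variable(&mut out)
        .expect("output buffer matches the requested digest size");
    out
}
```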
A.1.2. Randomness
TBH
A.1.3. VRF
A Verifiable Random Function (VRF) is a mathematical operation that takes some input and produces a random number using a secret key, along with a proof of authenticity that this random number was generated using the submitter’s secret key and the given input. The proof can be verified by any challenger to ensure the random number generation is valid and has not been tampered with (for example, to the benefit of the submitter).
In Polkadot, VRFs are used for the BABE block production lottery (see the Block-Production-Lottery algorithm) and the parachain approval voting mechanism (Section 8.5.). The VRF uses a mechanism similar to the algorithms introduced in the following papers:
It essentially generates a deterministic, elliptic-curve-based Schnorr signature as a verifiable random value. The elliptic curve group used in the VRF function is the Ristretto group specified in:
Definition 171. VRF Proof
The VRF proof proves the correctness of an associated VRF output. The VRF proof is a data structure of the following format:
where the first component is the challenge and the second is the 32-byte Schnorr proof. Both are expressed as Curve25519 scalars as defined in Definition 172.
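As an illustration of how such a proof is produced and checked in practice, the sketch below uses the schnorrkel crate (the Rust sr25519 implementation). The crate version (~0.11), its default features, and method names such as to_preout are assumptions, as the API has changed across releases:

```rust
use merlin::Transcript;
use schnorrkel::Keypair;

fn vrf_example() {
    // Generate a key pair; in Polkadot this would be a BABE session key.
    let keypair = Keypair::generate();

    // The transcript carries the VRF context and input (see Section A.1.3.1.).
    let (io, proof, _batchable) = keypair.vrf_sign(Transcript::new(b"substrate-babe-vrf"));

    // The random value is extracted from the VRF input/output pair;
    // the extraction label here is purely illustrative.
    let _randomness: [u8; 32] = io.make_bytes(b"example");

    // Any challenger can verify the output against the proof and public key.
    keypair
        .public
        .vrf_verify(Transcript::new(b"substrate-babe-vrf"), &io.to_preout(), &proof)
        .expect("proof verifies for this input and public key");
}
```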
Definition 172. DLEQ Prove
The function creates a proof for a given input based on the provided transcript.
First:
Then the witness scalar is calculated from the 32-byte secret seed used for nonce generation in the context of sr25519.
where the first value is the length of the witness, encoded as a 32-bit little-endian integer, and the second is a 32-byte array containing the secret witness scalar.
where
the first value is the compressed Ristretto point of the scalar input,
the second is the compressed Ristretto point of the public key, and
the third is the compressed Ristretto point of the witness:
For the 64-byte challenge:
And the Schnorr proof:
which is computed from the witness scalar, the challenge, and the secret key.
Definition 173. DLEQ Verify
The function verifies the VRF input against the output with the associated proof (Definition 171) and the public key.
where
the first value is calculated as:
where the generator is the Ristretto basepoint.
The second value is calculated as:
The challenge is valid if the recomputed challenge equals the challenge contained in the proof:
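To make the structure of this verification concrete, the sketch below recomputes the two commitment points from a proof (c, s) and re-derives the challenge with a Merlin transcript. It assumes the prover computed s = w - c·sk; the transcript labels are purely illustrative and do not match the exact labels used by schnorrkel:

```rust
use curve25519_dalek::constants::RISTRETTO_BASEPOINT_POINT;
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
use merlin::Transcript;

/// Structural sketch of a DLEQ verification: given the public key `pk`,
/// the VRF input point `input`, the claimed output point `output` and the
/// proof `(c, s)`, recompute the witness commitments and the challenge.
fn dleq_verify(
    pk: RistrettoPoint,
    input: RistrettoPoint,
    output: RistrettoPoint,
    c: Scalar,
    s: Scalar,
) -> bool {
    // R  = c * pk     + s * G   (G is the Ristretto basepoint)
    // Rm = c * output + s * input
    let r = c * pk + s * RISTRETTO_BASEPOINT_POINT;
    let rm = c * output + s * input;

    // Re-derive the challenge from a transcript over the same data
    // (illustrative labels, not the schnorrkel ones).
    let mut t = Transcript::new(b"illustrative-dleq");
    t.append_message(b"pk", pk.compress().as_bytes());
    t.append_message(b"input", input.compress().as_bytes());
    t.append_message(b"output", output.compress().as_bytes());
    t.append_message(b"R", r.compress().as_bytes());
    t.append_message(b"Rm", rm.compress().as_bytes());
    let mut buf = [0u8; 64];
    t.challenge_bytes(b"c", &mut buf);
    let c_recomputed = Scalar::from_bytes_mod_order_wide(&buf);

    // The proof is valid if the recomputed challenge equals the given one.
    c_recomputed == c
}
```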
A.1.3.1. Transcript
A VRF transcript serves as a domain-specific separator of cryptographic protocols and is represented as a mathematical object defined by Merlin, which specifies how that object is generated and encoded. The usage of the transcript is implementation specific, such as for certain mechanisms in the Availability & Validity chapter (Chapter 8), and is therefore described in more detail in those protocols. The input value used to initiate the transcript is referred to as a context (Definition 174).
Definition 174. VRF Context
The VRF context is a constant byte array used to initiate the VRF transcript. The VRF context is constant for all users of the VRF for the specific purpose for which the VRF function is used. The context prevents VRF values generated by the same nodes for other purposes from being reused for purposes they were not meant for. For example, the VRF context for the BABE block production lottery defined in Section 5.2. is set to "substrate-babe-vrf".
Definition 175. VRF Transcript
A transcript, or VRF transcript, is a STROBE object as defined in the STROBE documentation, section "5. State of a STROBE object".
where
The duplex state, S, is a 200-byte array created by the keccak-f[1600] sponge function on the initial STROBE state. Specifically, R is of value 166 and X.Y.Z is of value 1.0.2.
pos has the initial value of 0.
pos_begin has the initial value of 0.
I_0 has the initial value of 0.
Then, the meta-AD operation (Definition 176) (where more=False) is used to add the protocol label Merlin v1.0 to the transcript, followed by appending (Section A.1.3.1.1.) the label dom-sep and its corresponding context, resulting in the final transcript.
The context serves as an arbitrary identifier/separator and its value is defined individually by each protocol specification. This transcript is treated just like a STROBE object, wherein any operations (Definition 176) on it modify its values, such as the duplex state S and the position counters.
Formally, when creating a transcript we refer to it as .
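In practice, implementations do not manipulate the STROBE state directly but use the Merlin library, which performs the initialization described above (protocol label "Merlin v1.0", then the domain-separation label together with the context). A minimal sketch with the Rust merlin crate:

```rust
use merlin::Transcript;

fn main() {
    // Creating a transcript from a context: merlin initializes the STROBE
    // state with the protocol label "Merlin v1.0" and then appends the
    // "dom-sep" label together with the given context bytes.
    let _transcript = Transcript::new(b"substrate-babe-vrf");
}
```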
Definition 176. STROBE Operations
STROBE operations are described in the STROBE specification, section "6. Strobe operations". Operations are indicated by their corresponding bitfield, as described in section "6.2. Operations and flags", and implemented as described in section "7. Implementation of operations".
A.1.3.1.1. Messages
Appending messages, or "data", to the transcript (Definition 175) first requires a meta-AD operation for the given label of the message, including the size of the message, followed by an AD operation on the message itself. The size of the message is a 4-byte, little-endian encoded integer.
where the inputs are the transcript (Definition 175), the given label, and the message together with its size, and the result is the transcript with the appended data. STROBE operations are described in Definition 176.
Formally, when appending a message we refer to it as .
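A short sketch of appending a message and drawing a challenge with the merlin crate, which implements the meta-AD / AD pattern described above (the labels are illustrative):

```rust
use merlin::Transcript;

fn main() {
    let mut t = Transcript::new(b"substrate-babe-vrf");

    // append_message performs a meta-AD operation over the label and the
    // 4-byte little-endian message length, followed by an AD operation
    // over the message bytes themselves.
    t.append_message(b"slot number", &1234u64.to_le_bytes());

    // Challenges are later squeezed out of the transcript state.
    let mut challenge = [0u8; 32];
    t.challenge_bytes(b"vrf-challenge", &mut challenge);
}
```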
A.1.4. Cryptographic Keys
Various types of keys are used in Polkadot to prove the identity of the actors involved in the Polkadot Protocols. To improve the security of the users, each key type has its own unique function and must be treated differently, as described by this Section.
Definition 177. Account Key
An account key is a key pair of either of the schemes listed in the following table:
Table 2. List of the public key scheme which can be used for an account key
Key Scheme | Description |
---|---|
sr25519 | Schnorr signature on Ristretto compressed ed25519 points as implemented in TODO |
ed25519 | The ed25519 signature complies with [@josefsson_edwards-curve_2017] except for the verification process, which adheres to the Ed25519 Zebra variant specified in [@devalence_ed25519zebra_2020]. In short, the signature point is not assumed to be in the prime-order subgroup. As such, the verifier must explicitly clear the cofactor during the course of verifying the signature equation. |
secp256k1 | Only for outgoing transfer transactions. |
An account key can be used to sign transactions among other accounts and balance-related functions. There are two prominent subcategories of account keys, namely "stash keys" and "controller keys", each being used for a different function. The keys defined in Definition 177, Definition 178 and Definition 179 are created and managed by the user independently of the Polkadot implementation. The user notifies the network about the used keys by submitting a transaction, as defined in Section A.1.4.2. and Section A.1.4.5. respectively.
Definition 178. Stash Key
The Stash key is a type of account key that holds funds bonded for staking (described in Section A.1.4.1.) to a particular controller key (defined in Definition 179). As a result, one may actively participate with a stash key while keeping it offline in a secure location. It can also be used to designate a Proxy account to vote in governance proposals, as described in Section A.1.4.3.. The Stash key holds the majority of the users’ funds and should neither be shared with anyone, saved on an online device, nor used to submit extrinsics.
Definition 179. Controller Key
The Controller key is a type of account key that acts on behalf of the Stash account. It signs transactions that make decisions regarding the nomination and validation of other keys. It is a key that will be in direct control of a user, should mostly be kept offline, and is used to submit manual extrinsics. It sets preferences like the payout account and commission, as described in Section A.1.4.4.. If used for a validator, it certifies the session keys, as described in Section A.1.4.5.. It only needs the funds required to pay transaction fees. [TODO: key needing fund needs to be defined]
Definition 180. Session Keys
Session keys are short-lived keys that are used to authenticate validator operations. Session keys are generated by the Polkadot Host and should be changed regularly for security reasons. Nonetheless, no validity period is enforced by the Polkadot protocol on session keys. The various types of keys used by the Polkadot Host are presented in Table 3:
Table 3. List of key schemes which are used for session keys depending on the protocol
Protocol | Key scheme |
---|---|
GRANDPA | ED25519 |
BABE | SR25519 |
I’m Online | SR25519 |
Parachain | SR25519 |
Session keys must be accessible by certain Polkadot Host APIs defined in Appendix B. Session keys are not meant to control the majority of the users’ funds and should only be used for their intended purpose.
A.1.4.1. Holding and staking funds
TBH
A.1.4.2. Creating a Controller key
TBH
A.1.4.3. Designating a proxy for voting
TBH
A.1.4.4. Controller settings
TBH
A.1.4.5. Certifying keys
Due to security considerations and Runtime upgrades, the session keys are supposed to be changed regularly. As such, new session keys need to be certified by a controller key before being put to use. The controller only needs to create a certificate by signing a session public key and broadcasting this certificate via an extrinsic. [TODO: spec the detail of the data structure of the certificate etc.]
A.2. Auxiliary Encodings
Definition 181. Unix Time
By Unix time, we refer to the unsigned, little-endian encoded 64-bit integer which stores the number of milliseconds that have elapsed since the Unix epoch, that is the time 00:00:00 UTC on 1 January 1970, minus leap seconds. Leap seconds are ignored, and every day is treated as if it contained exactly 86’400 seconds.
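A brief sketch of producing this value in Rust using only the standard library:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn unix_time_ms() -> [u8; 8] {
    // Milliseconds since the Unix epoch as an unsigned 64-bit integer...
    let ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set after the Unix epoch")
        .as_millis() as u64;
    // ...stored in little-endian byte order, as required by the definition.
    ms.to_le_bytes()
}
```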
A.2.1. Binary Encoding
Definition 182. Sequence of Bytes
By a sequence of bytes, or a byte array, of a given length we refer to
We define the set of all byte arrays of a given length. Furthermore, we define:
We represent the concatenation of two byte arrays by:
Definition 183. Bitwise Representation
For a given byte the bitwise representation in bits is defined as:
where
Definition 184. Little Endian
By the little-endian representation of a non-negative integer, represented as
in base 256, we refer to a byte array such that
Accordingly, we define the corresponding encoding function:
Definition 185. UINT32
By UINT32 we refer to a non-negative integer stored in a byte array of length 4 using the little-endian encoding format.
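For example, the little-endian representation of the 32-bit value 0x01020304 places the least significant byte first; this is exactly the UINT32 format above:

```rust
fn main() {
    let n: u32 = 0x0102_0304;
    // Least significant byte first.
    assert_eq!(n.to_le_bytes(), [0x04, 0x03, 0x02, 0x01]);
    // Decoding reverses the operation.
    assert_eq!(u32::from_le_bytes([0x04, 0x03, 0x02, 0x01]), n);
}
```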
A.2.2. SCALE Codec
The Polkadot Host uses the "Simple Concatenated Aggregate Little-Endian" (SCALE) codec to encode byte arrays as well as other data structures. SCALE provides a canonical encoding, which produces consistent hash values across implementations, including the Merkle hash proof for the State Storage.
Definition 186. Decoding
Decoding refers to the deserialization of a blob of data. Since the SCALE codec is not self-describing, it is up to the decoder to validate whether the blob of data can be deserialized into the given type or data structure.
It is accepted behavior for the decoder to partially decode the blob of data, meaning that any additional data that does not fit into the data structure can be ignored.
Considering that the decoded data is never larger than the encoded message, this information can serve as a way to validate values that can vary in size, such as sequences (Definition 192). The decoder should strictly use the size of the encoded data as an upper bound when decoding in order to prevent denial-of-service attacks.
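A short sketch of the encode/decode round trip and the partial-decoding behavior, using the parity-scale-codec crate (the reference Rust implementation of SCALE):

```rust
use parity_scale_codec::{Decode, Encode};

fn main() {
    // Fixed-length integers are encoded as their little-endian bytes.
    let encoded = 0x0102_0304u32.encode();
    assert_eq!(encoded, vec![0x04, 0x03, 0x02, 0x01]);

    // Decoding consumes only as many bytes as the target type requires...
    let mut with_extra = encoded.clone();
    with_extra.extend_from_slice(&[0xff, 0xff]);
    // ...so trailing data is simply left unread.
    assert_eq!(u32::decode(&mut &with_extra[..]).unwrap(), 0x0102_0304);
}
```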
Definition 187. Tuple
The SCALE codec for a tuple such that:
where the components are values of potentially different types, is defined as:
In case of a tuple (or a structure), the knowledge of the shape of data is not encoded even though it is necessary for decoding. The decoder needs to derive that information from the context where the encoding/decoding is happening.
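For example, with the parity-scale-codec crate, a tuple encoding is just the concatenation of its fields' encodings and carries no type information:

```rust
use parity_scale_codec::Encode;

fn main() {
    // u8 -> one byte, u16 -> two little-endian bytes; no shape information.
    assert_eq!((3u8, 258u16).encode(), vec![0x03, 0x02, 0x01]);
}
```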
Definition 188. Varying Data Type
We define a varying data type to be an ordered set of data types.
A value of a varying data type is a pair of a type index and a value of the corresponding type, which can be empty. By default, the index of a type is determined by its position in the ordered set, unless it is explicitly defined as another value in the definition of a particular varying data type.
In particular, we define two specific varying data types which are frequently used in various parts of the Polkadot protocol: Option (Definition 190) and Result (Definition 191).
Definition 189. Encoding of Varying Data Type
The SCALE codec for a value of a varying data type is defined as follows:
where the leading byte is an 8-bit integer determining the type of the value. In particular, for the optional type defined in Definition 188, we have:
The SCALE codec does not encode the correspondence between the value and the data type it represents; the decoder needs prior knowledge of such correspondence to decode the data.
Definition 190. Option Type
The Option type is a varying data type which indicates whether data of a given type is available (referred to as the some state) or not (referred to as the empty, none or null state). The presence of the none type implies that the data corresponding to the given type is not available and contains no additional data, whereas the presence of the some type implies that the data is available.
Definition 191. Result Type
The Result type is a varying data type which is used to indicate whether a certain operation or function was executed successfully (referred to as the "ok" state) or not (referred to as the "error" state). The first type implies success, the second implies failure. Either type can contain additional data or is defined as the empty type otherwise.
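The following sketch shows the resulting byte strings for both varying data types using parity-scale-codec:

```rust
use parity_scale_codec::Encode;

fn main() {
    // Option<T>: one index byte (0 = None, 1 = Some) followed by the value.
    assert_eq!(None::<u32>.encode(), vec![0x00]);
    assert_eq!(Some(5u32).encode(), vec![0x01, 0x05, 0x00, 0x00, 0x00]);

    // Result<T, E>: index byte 0 for Ok, 1 for Err, followed by the payload.
    assert_eq!(Ok::<u32, u8>(42).encode(), vec![0x00, 0x2a, 0x00, 0x00, 0x00]);
    assert_eq!(Err::<u32, u8>(7).encode(), vec![0x01, 0x07]);
}
```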
Definition 192. Sequence
The SCALE codec for a sequence such that:
where the elements are values of the same type (and the decoder is unable to infer the number of elements from the context), is defined as:
where the length prefix is defined in Definition 198.
In some cases, the length indicator is omitted if the length of the sequence is fixed and known by the decoder upfront. Such cases are explicitly stated by the definition of the corresponding type.
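For instance, a three-element byte vector is prefixed with the compact-encoded length 3 (0x0c), and a string is encoded as the sequence of its UTF-8 bytes:

```rust
use parity_scale_codec::Encode;

fn main() {
    // Compact(3) = 0x0c, followed by the elements themselves.
    assert_eq!(vec![1u8, 2, 3].encode(), vec![0x0c, 0x01, 0x02, 0x03]);

    // Strings (Definition 195) are sequences of their UTF-8 bytes.
    assert_eq!("abc".to_string().encode(), vec![0x0c, b'a', b'b', b'c']);
}
```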
Definition 193. Dictionary
The SCALE codec for a dictionary or hashtable D with key-value pairs such that:
is defined as the SCALE codec of D as a sequence of key-value pairs (encoded as tuples):
where the size prefix is encoded the same way as for a sequence, but the argument refers to the number of key-value pairs rather than the length.
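A sketch of the dictionary encoding using a BTreeMap, which parity-scale-codec encodes as a compact number-of-pairs prefix followed by the key-value tuples:

```rust
use parity_scale_codec::Encode;
use std::collections::BTreeMap;

fn main() {
    let mut d = BTreeMap::new();
    d.insert(1u8, 10u8);
    d.insert(2u8, 20u8);
    // Compact(2) = 0x08, then the (key, value) tuples in order.
    assert_eq!(d.encode(), vec![0x08, 0x01, 0x0a, 0x02, 0x14]);
}
```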
Definition 194. Boolean
The SCALE codec for a boolean value is defined as a byte as follows:
Definition 195. String
The SCALE codec for a string value is an encoded sequence (Definition 192) consisting of UTF-8 encoded bytes.
Definition 196. Fixed Length
The SCALE codec for other types, such as fixed-length integers not otherwise defined here, is equal to the little-endian encoding of those values as defined in Definition 184.
Definition 197. Empty
The SCALE codec for an empty type is defined to be a byte array of zero length.
A.2.2.1. Length and Compact Encoding
SCALE length encoding is used to encode integer numbers of varying sizes, most prominently when encoding the length of arrays:
Definition 198. Length Encoding
The SCALE length encoding, also known as compact encoding, of a non-negative number is defined as follows:
where the least significant bits of the first byte of the byte array b are defined as follows:
and the rest of the bits of b store the value of the number in little-endian format in base-2 as follows:
such that:
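The mode selected by the two least significant bits of the first byte determines how many bytes follow; a few worked values using parity-scale-codec's Compact type:

```rust
use parity_scale_codec::{Compact, Encode};

fn main() {
    // Mode 0b00: values 0..=63 fit into a single byte (value << 2).
    assert_eq!(Compact(1u32).encode(), vec![0x04]);
    // Mode 0b01: values 64..=16383 use two bytes.
    assert_eq!(Compact(69u32).encode(), vec![0x15, 0x01]);
    // Mode 0b10: values up to 2^30 - 1 use four bytes.
    assert_eq!(Compact(16384u32).encode(), vec![0x02, 0x00, 0x01, 0x00]);
}
```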
A.2.3. Hex Encoding
Practically, it is more convenient and efficient to store and process data in byte arrays. On the other hand, the trie keys are broken into 4-bit nibbles. Accordingly, we need a method to canonically encode sequences of 4-bit nibbles into byte arrays. To this aim, we define the hex encoding function as follows:
Definition 199. Hex Encoding
Suppose we have a sequence of nibbles; then:
A.3. Genesis State
The genesis state is a set of key-value pairs representing the initial state of the Polkadot state storage. It can be retrieved from the Polkadot repository. While each of those key-value pairs offers important identifiable information to the Runtime, to the Polkadot Host they are a transparent set of arbitrary chain- and network-dependent keys and values. The only exceptions to this are the :code (Section 2.6.2.) and :heappages (Section 2.6.3.1.) keys, which are used by the Polkadot Host to initialize the WASM environment and its Runtime. The other keys and values are unspecified and solely depend on the chain and its corresponding Runtime. On initialization, the data should be inserted into the state storage with the Host API (Section B.2.1.).
As such, Polkadot does not define a formal genesis block. Nonetheless, for compatibility reasons in several algorithms, the Polkadot Host defines the genesis header (Definition 200). By abuse of terminology, the "genesis block" refers to the hypothetical parent of block number 1, which holds the genesis header as its header.
Definition 200. Genesis Header
The Polkadot genesis header is a data structure conforming to the block header format (Definition 10). It contains the following values:
Table 4. Table of Genesis Header Values
Block header field | Genesis Header Value |
---|---|
parent_hash | 0 |
number | 0 |
state_root | Merkle hash of the state storage trie (Definition 29) after inserting the genesis state in it. |
extrinsics_root | 0 |
digest | 0 |
A.4. Erasure Encoding
A.4.1. Erasure Encoding
Erasure Encoding has not been documented yet.