Neptune Documentation
The documentation for Neptune is distributed across the following categories.
Consensus
Neptune achieves succinctness by requiring STARK proofs to certify most of the consensus-critical logic. The particular computations that are proven (and verified) as part of consensus logic are documented in this category.
Neptune-Core
Neptune-Core is the name for Neptune's canonical client software. It denotes the binary/executable that runs a node on the Neptune network.
Triton VM
Neptune achieves succinctness by requiring STARK proofs to certify most of the consensus-critical logic. As a consequence, verifying and even running a full node is cheap. The tradeoff is that someone has to produce these STARK proofs, and this burden ultimately falls on the miner.
The particular proof system that Neptune uses is Triton VM. Triton VM is a standalone project and comes with its own documentation.
Contributing
Instructions and helpful information for people who want to contribute.
Consensus
Neptune achieves succinctness by requiring STARK proofs to certify most of the consensus-critical logic. As a consequence, verifying and even running a full node is cheap. The tradeoff is that someone has to produce these STARK proofs, and this burden ultimately falls most heavily on the miner (for aggregated block transactions) and to a lesser extent on the sender (for individual transactions).
The particular proof system that Neptune uses is Triton VM. The particular computations that are proven (and verified) as part of consensus logic are documented here.
Consensus is the feature of a network whose nodes overwhelmingly agree on the current contents of a database, typically a blockchain. This database is append-only. While reorganizations can happen they are expected to be rare and shallow. Every once in a while, a new block is added. The block body contains a single transaction that aggregates together all inputs and outputs of individual user transactions since the previous block. Blocks and Transactions are the key data objects that consensus pertains to. The consensus logic determines which blocks and transactions are valid and confirmable.
Note that there is a distinction between valid and confirmable. Validity refers to the internal consistency of a data object, whereas confirmability refers to its current relation to the rest of the blockchain. For example, including a double-spending transaction makes a block invalid. But a block can be both valid and unconfirmable, for instance if it has insufficient proof-of-work or if its timestamp is too far into the future. STARK proofs are capable of establishing validity but not confirmability.
Since both blocks and transactions come with STARK proofs certifying their validity, it is worthwhile to separate the kernel from the proof. The kernel is the actual payload data that appears on the blockchain, and the object that the proof asserts validity of. There can be different proofs certifying the validity of a block or transaction kernel. Proofs can typically be recursed away so that the marginal cost of storing them is zero.
Transaction
A transaction kernel consists of the following fields:
- inputs: Vec<RemovalRecord>. The commitments to the UTXOs that are consumed by this transaction.
- outputs: Vec<AdditionRecord>. The commitments to the UTXOs that are generated by this transaction.
- public_announcements: Vec<PublicAnnouncement>. A list of self-identifying strings broadcast to the world. These may contain encrypted secrets, but only the recipient(s) can ascertain that.
- fee: NativeCurrencyAmount. A reward for the miner who includes this transaction in a block.
- coinbase: Option<NativeCurrencyAmount>. The miner is allowed to set this field to a mining reward, which is determined by various variable network parameters.
- timestamp: Timestamp. When the transaction took or takes place.
- mutator_set_hash: Digest. A commitment to the mutator set that is to be updated by the transaction.
Note that while addition records and removal records are both commitments to UTXOs, they are different types of commitments. The removal record is an index set into the sliding-window Bloom filter (SWBF), with a supporting chunk dictionary, whereas the addition record is a hash digest.
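In Rust-like shorthand, the kernel can be pictured as the struct below. This is an illustrative sketch that simply mirrors the fields listed above; the stand-in type definitions are not the real ones from neptune-core.

```rust
// Stand-in types; the real definitions live in neptune-core.
struct RemovalRecord;
struct AdditionRecord;
struct PublicAnnouncement(Vec<u64>);
struct NativeCurrencyAmount(u128);
struct Timestamp(u64);
struct Digest([u64; 5]);

// Illustrative mirror of the transaction kernel fields listed above.
struct TransactionKernel {
    inputs: Vec<RemovalRecord>,
    outputs: Vec<AdditionRecord>,
    public_announcements: Vec<PublicAnnouncement>,
    fee: NativeCurrencyAmount,
    coinbase: Option<NativeCurrencyAmount>,
    timestamp: Timestamp,
    mutator_set_hash: Digest,
}
```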
Validity
Transaction validity is designed to check four conditions:
- The lock scripts of all input UTXOs halt gracefully.
- All involved type scripts halt gracefully.
- All input UTXOs are present in the mutator set's append-only commitment list (AOCL).
- No input UTXO is already marked as spent in the mutator set's sliding-window Bloom filter (SWBF).
A transaction is valid if (any of):
- a) it has a valid witness (including spending keys and mutator set membership proofs)
- b) it has valid proofs for each subprogram (subprograms establish things like: the owners consent to this transaction, there is no inflation, etc.)
- c) it has a single valid proof that the entire witness is valid (so, a multi-claim proof of all claims listed in (b))
- d) it has a single valid proof that the transaction originates from merging two valid transactions
- e) it has a single valid proof that the transaction belongs to an integral mempool, i.e., one to which only valid transactions were added
- f) it has a single valid proof that another single valid proof exists but under an older timestamp or mutator set accumulator
- g) it has a single valid proof that another single valid proof exists (but possibly with an older version of the proof system or different parameters).
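One way to picture alternatives (a) through (g) in code is as a single enum over acceptable validity witnesses, any one variant of which suffices. This is a hypothetical sketch; the variant names do not correspond one-to-one to the actual codebase.

```rust
// Hypothetical stand-ins for raw witness data and STARK proofs.
struct PrimitiveWitness;
struct Proof;

// Any one of these variants renders a transaction valid.
enum ValidityWitness {
    Witness(PrimitiveWitness),   // (a) raw witness, never broadcast
    ProofCollection(Vec<Proof>), // (b) one proof per subprogram
    MultiClaim(Proof),           // (c) one proof covering all subclaims
    Merger(Proof),               // (d) merger of two valid transactions
    IntegralMempool(Proof),      // (e) membership in an integral mempool
    Update(Proof),               // (f) reproof under newer timestamp/mutator set
    Reproof(Proof),              // (g) reproof under newer proof system
}
```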
For the purpose of describing computations and claims, the following notation is used. The symbol : denotes the type of an object, whereas :: denotes the type signature of a computation (interpreting the input and output streams as arguments and return values, respectively).
A: Witness Validity
The transaction witness represents all raw data necessary to prove a transaction valid. It does not contain any proof data. In the code this data structure is called PrimitiveWitness to highlight the fact that it does not elide any witness information.
A transaction witness is defined to be valid if, after deriving from it a set of claims as listed in (b) and nondeterminisms, all programs halt gracefully.
A transaction witness consists of the following fields:

- input_utxos: SaltedUtxos. A wrapper object combining a list of input Utxos with a salt, which is 3 BFieldElements.
- lock_scripts_and_witnesses: Vec<LockScriptAndWitness>. The lock scripts determine the spending policies of the input UTXOs; in the simplest case, whether their owners approve of the transaction.
- type_scripts_and_witnesses: Vec<TypeScriptAndWitness>. The scripts that authenticate the correct evolution of all token types involved.
- input_membership_proofs: Vec<MsMembershipProof>. Membership proofs in the mutator set for the input UTXOs.
- output_utxos: SaltedUtxos. A wrapper object combining a list of output Utxos with a salt, which is 3 BFieldElements.
- output_sender_randomnesses: Vec<Digest>. Senders' contributions to output commitment randomnesses.
- output_receiver_digests: Vec<Digest>. Receivers' contributions to output commitment randomnesses.
- mutator_set_accumulator: MutatorSetAccumulator. The mutator set accumulator, which is the anonymous accumulator.
- kernel: TransactionKernel. The transaction kernel that this witness attests to.
Note that a (transaction, valid witness) pair cannot be broadcast, because that would undermine both soundness and privacy.
B: Standard Decomposition into Subclaims
The motivation for splitting transaction validity into subclaims is that the induced subprograms can be proved individually, which might be cheaper than proving the whole thing in one go. Also, it is conceivable that components of a transaction are updated and do not invalidate all subproofs but only a subset of them. The subprograms are as follows.
- RemovalRecordsIntegrity :: (transaction_kernel_mast_hash : Digest) ⟶ (inputs_salted_utxos_hash : Digest)

  Establishes that all removal records (which themselves are commitments to input UTXOs) are correctly computed and applicable. Specifically:

  - divine the input UTXOs
  - divine the salt
  - divine the mutator set accumulator and authenticate it against the given transaction kernel MAST hash
  - for each input UTXO:
    - divine the receiver preimage
    - divine the sender randomness
    - compute the canonical commitment
    - verify the membership of the canonical commitment to the AOCL
    - compute the removal record index set
    - verify that the calculated removal record index set matches the claimed index set
  - hash the list of removal records and authenticate it against the given transaction kernel MAST hash
  - output the hash of the salted input UTXOs.

  Checks ensuring that each AOCL index is unique and that the published authentication paths are valid are delegated to the miner and, for performance reasons, do not belong here. The check that a removal record has not already been applied (i.e., no double-spend) is also delegated to the miner.

- KernelToOutputs :: (transaction_kernel_mast_hash : Digest) ⟶ (outputs_salted_utxos_hash : Digest)

  Collects the output UTXOs into a more digestible format. Specifically:

  - divine the output UTXOs
  - divine the salt
  - for each output UTXO:
    - divine the commitment randomness
    - compute the canonical commitment
  - hash the list of canonical commitments
  - authenticate the list of canonical commitments against the given transaction kernel MAST hash
  - output the hash of the salted UTXOs.

- CollectLockScripts :: (inputs_salted_utxos_hash : Digest) ⟶ (lock_script_hashes : [Digest])

  Collects the lock script hashes into a list. Specifically:

  - divine the input UTXOs
  - divine the salt
  - authenticate the salted UTXOs against the given hash digest
  - for each UTXO:
    - collect the lock script hash
  - output all lock script hashes.

- LockScript :: (transaction_kernel_mast_hash : Digest) ⟶ ∅

  Unlocks a single input UTXO. The concrete program value of a lock script depends on the UTXO. By default, this program is created by a generation address, in which case it asserts knowledge of a preimage to a hardcoded digest. The lock script of every input UTXO must halt gracefully.

- CollectTypeScripts :: (inputs_salted_utxos_hash : Digest) × (outputs_salted_utxos_hash : Digest) ⟶ (type_script_hashes : [Digest])

  Collects the type scripts into a more digestible format. Specifically:

  - divine all input UTXOs
  - divine the salt for input UTXOs
  - authenticate the salted input UTXOs against the given hash digest
  - divine all output UTXOs
  - divine the salt for output UTXOs
  - authenticate the salted output UTXOs against the given hash digest
  - for each input or output UTXO:
    - collect the type script hash
  - filter out duplicates
  - output the unique type script hashes.

- TypeScript :: (transaction_kernel_mast_hash : Digest) × (salted_input_utxos_hash : Digest) × (salted_output_utxos_hash : Digest) ⟶ ∅

  Authenticates the correct evolution of all UTXOs of a given type. The concrete program value depends on the token types involved in the transaction. For Neptune's native currency, Neptune Coins, the type script asserts that a) all output amounts are positive, and b) the sum of all input amounts is greater than or equal to the fee plus the sum of all output amounts. Every type script whose hash was returned by CollectTypeScripts must halt gracefully.
Diagram 1 shows how the explicit inputs and outputs of all the subprograms relate to each other. Single arrows denote inputs or outputs. Double lines indicate that the program(s) on the one end hash to the digest(s) on the other.
Diagram 1: Transaction validity.
All subprograms can be proven individually given access to the transaction's witness. The next table shows which fields of the transaction's PrimitiveWitness are (potentially) used in which subprogram.

| field | used by |
|---|---|
| input_utxos | RemovalRecordsIntegrity, CollectLockScripts, CollectTypeScripts |
| input_lock_scripts | CollectLockScripts, LockScript |
| type_scripts | CollectTypeScripts, TypeScript |
| lock_script_witnesses | LockScript |
| input_membership_proofs | RemovalRecordsIntegrity |
| output_utxos | KernelToOutputs, CollectTypeScripts |
| mutator_set_accumulator | RemovalRecordsIntegrity |
| kernel | RemovalRecordsIntegrity, KernelToOutputs, LockScript (?), TypeScript (?) |
Note that none of the subprograms require that each removal record lists one SWBF index that does not yet live in the mutator set SWBF. This absence is required for the transaction to be confirmable, but not for it to be valid. If the transaction has an input whose index set is already entirely contained by the mutator set SWBF, then this transaction can never be confirmed. Even if there is a reorganization that results in the absence criterion being satisfied, the transaction commits to the mutator set hash and this commitment cannot be undone.
C: Multi-Claim Proof
Where (b) generates a separate proof for every individual subclaim, (c) generates one proof for the batch of claims. The set of claims established is identical; the main benefit comes from having only one execution of the Triton VM prover.
D: Transaction Merger
Two transactions can be merged into one. Among other things, this operation replaces two proofs with just one. The program TransactionMerger :: (transaction_kernel_mast_hash : Digest) ⟶ ∅ verifies a transaction resulting from a merger as follows:

- divine txa : TransactionKernel
- verify the proof for txa (the proof is divined)
- divine txb : TransactionKernel
- verify the proof for txb (the proof is divined)
- for each removal record rr in txa.inputs:
  - verify that rr is not a member of txb.inputs
- for each removal record rr in txb.inputs:
  - verify that rr is not a member of txa.inputs
- verify that at most one of txa.coinbase and txb.coinbase is set
- verify that txa.mutator_set_hash == txb.mutator_set_hash
- compile a new TransactionKernel object kernel:
  - set kernel.inputs to txa.inputs || txb.inputs after shuffling randomly
  - set kernel.outputs to txa.outputs || txb.outputs after shuffling randomly
  - set kernel.public_announcements to txa.public_announcements || txb.public_announcements after shuffling randomly
  - set kernel.coinbase to txa.coinbase or txb.coinbase or to None
  - set kernel.fee to txa.fee + txb.fee
  - set kernel.timestamp to max(txa.timestamp, txb.timestamp)
  - set kernel.mutator_set_hash to txa.mutator_set_hash
- compute the MAST hash of kernel
- verify the computed hash against the given transaction_kernel_mast_hash.
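The kernel-compilation step can be sketched in plain Rust as follows. This is a minimal illustration of the rules above over simplified stand-in types; the real program additionally verifies the two divined proofs, shuffles the concatenated lists randomly, and runs inside Triton VM.

```rust
#[derive(Clone, PartialEq)]
struct RemovalRecord(u64);

#[derive(Clone)]
struct Kernel {
    inputs: Vec<RemovalRecord>,
    outputs: Vec<u64>,
    public_announcements: Vec<Vec<u64>>,
    coinbase: Option<u128>,
    fee: u128,
    timestamp: u64,
    mutator_set_hash: [u8; 32],
}

fn merge(txa: &Kernel, txb: &Kernel) -> Result<Kernel, &'static str> {
    // No removal record may occur in both transactions.
    if txa.inputs.iter().any(|rr| txb.inputs.contains(rr)) {
        return Err("shared removal record");
    }
    // At most one of the two coinbase fields may be set.
    if txa.coinbase.is_some() && txb.coinbase.is_some() {
        return Err("two coinbases");
    }
    // Both transactions must commit to the same mutator set.
    if txa.mutator_set_hash != txb.mutator_set_hash {
        return Err("mutator set mismatch");
    }
    Ok(Kernel {
        // The concatenated lists are shuffled randomly in the real program.
        inputs: [txa.inputs.clone(), txb.inputs.clone()].concat(),
        outputs: [txa.outputs.clone(), txb.outputs.clone()].concat(),
        public_announcements: [
            txa.public_announcements.clone(),
            txb.public_announcements.clone(),
        ]
        .concat(),
        coinbase: txa.coinbase.or(txb.coinbase),
        fee: txa.fee + txb.fee,
        timestamp: txa.timestamp.max(txb.timestamp),
        mutator_set_hash: txa.mutator_set_hash,
    })
}
```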
E: Proof of Integral Mempool Operation
A transaction is valid if it was ever added to an integral mempool. The motivating use case for this feature is that mempool operators can delete transaction proofs as long as they store and routinely update one proof of integral history.

An integral mempool is an MMR containing transaction kernels, along with a proof of integral history. The integral mempool can be updated in only one way: by appending a valid transaction.
append : (old_mmr : Mmr<TransactionKernel>) × (old_history_proof: StarkProof) × (tx : Transaction) ⟶ (new_mmr : Mmr<TransactionKernel>) × (new_history_proof : StarkProof)
The proof of integral history certifies that the MMR is the (left-hand side of the) output of some append operation. Specifically, the claim is the input-output-program triple:

- input: mmr : Mmr<TransactionKernel>
- output: ∅
- program:
  - if mmr is empty, halt gracefully; otherwise:
    - divine old_mmr
    - divine tx_kernel
    - verify tx_kernel with some divined transaction proof
    - append tx_kernel to old_mmr, resulting in new_mmr
    - assert that new_mmr == mmr.
The claim for certifying the validity of a transaction based on its inclusion in an integral mempool is induced by MemberOfIntegralMempool :: (transaction_kernel_mast_hash : Digest) ⟶ ∅, and the program executes the following logic:

- divine mmr : Mmr<TransactionKernel>
- verify mmr with some divined proof of integral history
- verify membership of transaction_kernel_mast_hash in mmr with a divined authentication path.
F: Transaction Data Update
A transaction is valid if another transaction that is identical except for an older mutator set hash or timestamp was valid. Specifically, the program TransactionDataUpdate :: (transaction_kernel_mast_hash : Digest) ⟶ ∅ verifies the update of transaction data as follows:

- divine old_kernel : TransactionKernel
- verify old_kernel with some divined proof
- create a new TransactionKernel object new_kernel
- set all fields of new_kernel to the matching fields of old_kernel, except:
  - set new_kernel.timestamp such that new_kernel.timestamp >= old_kernel.timestamp
  - set new_kernel.mutator_set_hash =/= old_kernel.mutator_set_hash only if the following instructions execute gracefully without crashing:
    - divine the mutator set AOCL MMR accumulator new_kernel_aocl
    - authenticate new_kernel_aocl against the mutator set MAST hash new_kernel.mutator_set_hash using a divined authentication path
    - divine the mutator set AOCL MMR accumulator old_kernel_aocl
    - authenticate old_kernel_aocl against the mutator set MAST hash old_kernel.mutator_set_hash using a divined authentication path
    - verify that there is a set of AOCL leafs whose addition sends old_kernel_aocl to new_kernel_aocl
  - set new_kernel.inputs to the following list, where each index set is identical to the matching index set from old_kernel:
    - read the chunks dictionary
    - for every index in the inactive part of the SWBF, verify that it lives in some chunk
    - for every chunk in the chunk dictionary, verify its authentication path (either from divine_sibling or from memory -- to be decided).
G: Transaction Proof Update
Triton VM allows proofs to be updated to a new version of the proof system or to new proof system parameters. However, this is a property of Triton VM proofs and not of Neptune transactions, so it is covered in the relevant documentation of Triton VM.
Putting Everything Together
Clauses (b)--(f) are presented as separate computations, but in reality the master program for transaction validity, TransactionIsValid :: (transaction_kernel_mast_hash : Digest) ⟶ ∅, is a disjunction of these clauses. Specifically:

- do any of:
  - verify all of the following claims, individually or via one multi-claim proof:
    - RemovalRecordsIntegrity :: (transaction_kernel_mast_hash : Digest) ⟶ (inputs_salted_utxos_hash : Digest)
    - CollectLockScripts :: (inputs_salted_utxos_hash : Digest) ⟶ (lock_script_hashes : [Digest])
    - LockScript :: (transaction_kernel_mast_hash : Digest) ⟶ ∅ for each lock script hash
    - KernelToOutputs :: (transaction_kernel_mast_hash : Digest) ⟶ (outputs_salted_utxos_hash : Digest)
    - CollectTypeScripts :: (inputs_salted_utxos_hash : Digest) × (outputs_salted_utxos_hash : Digest) ⟶ (type_script_hashes : [Digest])
    - TypeScript :: (transaction_kernel_mast_hash : Digest) ⟶ ∅ for each type script hash
  - verify the claim TransactionMerger :: (transaction_kernel_mast_hash : Digest) ⟶ ∅
  - verify the claim MemberOfIntegralMempool :: (transaction_kernel_mast_hash : Digest) ⟶ ∅
  - verify the claim TransactionDataUpdate :: (transaction_kernel_mast_hash : Digest) ⟶ ∅.
Block
A block kernel consists of a header, body, and an appendix.
The block header has constant size and consists of:

- version: the version of the Neptune protocol
- height: the block height, represented as a BFieldElement
- prev_block_digest: the hash of the block's predecessor
- timestamp: when the block was found
- nonce: randomness for proof-of-work
- cumulative_proof_of_work: approximate number of hashes computed in the block's entire lineage
- difficulty: approximate number of hashes required to find a block
- guesser_digest: a lock that prevents anyone but the guesser from spending guesser fees
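As a rough sketch, the header can be pictured as the struct below. The field names follow the list above; the concrete types are illustrative guesses, not the definitions from neptune-core.

```rust
// Illustrative types only; e.g., digests are really 5 BFieldElements.
struct BlockHeader {
    version: u32,
    height: u64,                    // stored as a BFieldElement on chain
    prev_block_digest: [u8; 32],
    timestamp: u64,
    nonce: [u8; 32],                // randomness for proof-of-work
    cumulative_proof_of_work: u128, // hashes in the block's entire lineage
    difficulty: u128,               // expected hashes to find a block
    guesser_digest: [u8; 32],       // locks guesser fees to the guesser
}
```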
The block body holds the variable-size data, consisting of:

- transaction_kernel: every block contains one transaction, which represents the merger of all broadcast transactions that the miner decided to confirm.
- mutator_set_accumulator: the mutator set is the data structure that holds the UTXOs. It is simultaneously an accumulator (giving rise to a compact representation and compact membership proofs) and an anonymity architecture (so that outputs from one transaction cannot be linked to inputs of another).
- lock_free_mmr_accumulator: the data structure holding lock-free UTXOs.
- block_mmr_accumulator: the peaks of a Merkle mountain range that contains all historical blocks in the current block's line.
The block appendix consists of a list of claims. The block program verifies the truth of all of these claims. The appendix can be extended in future soft forks.
Besides the kernel, blocks also contain proofs. The block proof is a STARK proof of correct execution of the BlockProgram, which validates a subset of the validity rules below. In addition to that, it validates all claims listed in the appendix.
Validity
Note: this section describes the validity rules for blocks at some future point when we have succinctness, not the current validity rules (although there is a significant overlap).
A block is valid if (any of):
- a) it is the genesis block
- b) the incremental validity conditions are satisfied
- c) it lives in the block_mmr_accumulator of a block that is valid.
A: Genesis Block
The genesis block is hardcoded in the source code; see genesis_block in block/mod.rs.
B: Incremental Validity
A block is incrementally valid if (all of):
- a) the transaction is valid
- b) the transaction's coinbase conforms with the block subsidy schedule
- c) all the inputs in the transaction either live in the lock-free UTXO MMR or have at least one index that is absent from the mutator set SWBF
- d) the mutator_set_accumulator results from applying all removal records and then all addition records to the previous block's mutator_set_accumulator
- e) the block_mmr_accumulator results from appending the previous block's hash to the previous block's block_mmr_accumulator
- f) there is an ancestor block luca of the current block such that for each uncle block uncle:
  - uncle is valid
  - luca is an ancestor of uncle
  - neither luca nor any of the blocks between luca and the current block list uncle as an uncle block
- g) the version matches that of its predecessor or is a member of a predefined list of exceptions
- h) the height is one greater than that of its predecessor
- i) the timestamp is greater than that of its predecessor
- j) the network statistics trackers are updated correctly
- k) the variable network parameters are updated correctly.
C: MMR Membership
A block is valid if it lives in the block_mmr_accumulator of a valid block. This feature ensures several things.
- It is possible to prove that one block is an ancestor of another.
- Archival nodes do not need to store old block proofs; storing the most recent block proof suffices.
- Non-tip blocks can be quickly verified to be valid and, if the receiver is synchronized to the tip, canonical as well.
- In case of reorganization, storing the now-abandoned tip proof continues to suffice to establish the validity of shared blocks. (That said, an archival node should also prove canonicity of shared blocks, and to do this it must synchronize and download all blocks on the new fork.)
Confirmability
A block is confirmable if (all of):

- a) it is valid
- b) its timestamp is less than 5 minutes into the future
- c) its size is less than MAX_BLOCK_SIZE in BFieldElements
- d) its hash is less than the previous block's target_difficulty.
Confirmability is not something that can be proven. It must be checked explicitly by the node upon receiving the block.
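A receiving node might therefore run something like the function below: a minimal sketch assuming the block's validity, size, and hash have already been computed, with illustrative constant values.

```rust
/// Sketch of the explicit confirmability check performed on block receipt.
fn is_confirmable(
    is_valid: bool,        // (a) the block is valid
    timestamp_ms: u64,     // block timestamp, in milliseconds
    now_ms: u64,           // local clock
    size_in_bfes: usize,   // block size in BFieldElements
    hash: [u8; 32],        // block hash (lexicographic comparison)
    prev_target: [u8; 32], // previous block's target difficulty
) -> bool {
    const MAX_FUTURE_MS: u64 = 5 * 60 * 1000; // five minutes
    const MAX_BLOCK_SIZE: usize = 250_000;    // illustrative value
    is_valid
        && timestamp_ms < now_ms + MAX_FUTURE_MS // (b)
        && size_in_bfes < MAX_BLOCK_SIZE         // (c)
        && hash < prev_target                    // (d)
}
```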
Canonicity
A block is canonical if it lives on the chain with the most cumulative proof-of-work. However, the fork chain rule is only evaluated if an incoming block has a different height than the current block.
UTXO
A UTXO is a collection of coins owned by some person in between two transactions, along with a set of conditions under which it can be spent. Every UTXO is generated as an output of a transaction and is consumed as an input of a transaction.
A UTXO can be lockable or lock-free. Lockable and lock-free UTXOs are stored in different data structures, the Mutator Set and an MMR respectively. Consequently, lockable UTXOs undergo mixing whereas lock-free UTXOs are traceable by design. Another difference is that lockable UTXOs have lock scripts whereas lock-free UTXOs do not.
A coin consists of state and a type script hash. A UTXO can have multiple coins, but for every type script hash it can have at most one. The state of a coin can be any string of BFieldElements; it relies on the type script for interpretation.
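A minimal sketch of the relationship between UTXOs and coins, with illustrative stand-in types:

```rust
struct BFieldElement(u64);
struct Digest([u64; 5]); // e.g., a Tip5 digest of 5 BFieldElements

struct Coin {
    type_script_hash: Digest,  // which type script interprets the state
    state: Vec<BFieldElement>, // opaque without that type script
}

struct Utxo {
    lock_script_hash: Digest, // spending condition (lockable UTXOs)
    coins: Vec<Coin>,         // at most one coin per type script hash
}
```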
Type scripts and lock scripts are programs that prevent invalid expenditures. They are written in Triton VM assembler ("tasm") and their graceful execution is attested to through a Triton STARK proof.
Lock Script
A lock script determines who, or more generally, under which conditions, a (lockable) UTXO can be spent. In the most basic case, the lock script verifies the presence or knowledge of secret key material that only the UTXO owner has access to, and crashes otherwise. Lock scripts can be arbitrarily complex, supporting shared ownership with quorums or even unlocking contingent upon certain cryptographic proofs unrelated to data.
The input to a lock script program is the transaction kernel MAST hash. As a result, a proof of graceful execution of a lock script is tailored to the transaction. Using nondeterminism, the program can divine features of the transaction and then authenticate that information against the kernel. In this way, a lock script can restrict the format of transactions that spend it.
Type Script
A type script determines how the state of coins of a particular type is allowed to evolve across transactions. For instance, a type script could interpret the states of all coins of its type as amounts, and then verify for all UTXOs involved in a transaction that the sum of inputs equals the sum of outputs and that no numbers are negative. This example captures accounting logic, and indeed, Neptune Coins embody this logic. Another example is a time lock: this type script verifies that the timestamp on a transaction is larger than some specified value.
The input to a type script program is the transaction kernel MAST hash, the hash of the salted list of input UTXOs, and the hash of the salted list of output UTXOs. It takes two more arguments than lock scripts do, in order to facilitate reasoning about UTXOs involved in the transaction.
The CollectTypeScripts program, which is part of a ProofCollection testifying to the validity of a transaction, establishes that all type scripts are satisfied, including in particular both the input UTXOs' coins and the output UTXOs' coins. It is necessary to include the output UTXOs' type scripts because otherwise it would be possible to generate a valid transaction whose inputs do not have any native currency coins but whose outputs do.
Neptune Coins
Neptune Coins refers to two things:

- the native currency coin type for Neptune;
- the unit in which quantities of the former are measured.

In the code, the struct NativeCurrencyAmount defines the unit. The native currency type script is encapsulated as a struct NativeCurrency implementing the trait ConsensusProgram in native_currency.rs.
The Unit
One Neptune Coin equals $10^{30} \times 2^2$ nau, which stands for Neptune Atomic Units. The conversion factor is such that:

- the largest possible amount, corresponding to 42'000'000 Neptune Coins, can be represented in 127 bits;
- a number of Neptune Coins with up to 30 decimal digits after the point can be represented exactly.

The struct NativeCurrencyAmount is a wrapper around a u128. It leaves 1 bit for testing positivity.
The Type Script
The Neptune Coins type script:

- computes the sum of all inputs, plus the coinbase if it is set;
- computes the sum of all outputs plus the fee;
- equates the two quantities.
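In pseudocode, the balance check amounts to the comparison below. This is a hypothetical sketch of the logic; the real type script is written in tasm and operates on authenticated, salted UTXO lists.

```rust
/// Illustrative balance check: inputs plus coinbase must equal
/// outputs plus fee.
fn native_currency_balances(
    input_amounts: &[u128],
    output_amounts: &[u128],
    fee: u128,
    coinbase: Option<u128>,
) -> bool {
    let total_in: u128 = input_amounts.iter().sum::<u128>() + coinbase.unwrap_or(0);
    let total_out: u128 = output_amounts.iter().sum::<u128>() + fee;
    total_in == total_out
}
```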
Additional Features
Transactions have two features that make the native currency type script special. The first is the fee field, which is the excess of the transaction balance that can be captured by the miner. The second is the optional coinbase field, which stipulates by how much a transaction is allowed to exceed the sum of input amounts, on account of being the only transaction in a block.
Two-Step Mining
Two-step mining entails separating two steps out of what can jointly be considered mining:
- composing, wherein transactions are assembled and a block proposal is composed;
- guessing, which is a search for a random number called a nonce that sends the block's hash below the target.
Composing
Composing involves making a selection of transactions, merging them, and producing a block proof. Because it involves proving, it requires beefy machinery.
Guessing
Making one guess involves sampling a random number and hashing 7 times using the Tip5 hash function. Very few computational resources are required to perform this step and as a result it should be possible on simple and cheap hardware.
Block Rewards
In the beginning of Neptune's life, every block is allowed to mint a certain number of Neptune Coins. This number is known as the block subsidy. The initial subsidy is set to INITIAL_BLOCK_SUBSIDY = 64. This subsidy is halved automatically every BLOCKS_PER_GENERATION = 321630 blocks, which corresponds to approximately three years.
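Under this schedule, the subsidy as a function of block height can be sketched as follows; this is an illustration in whole coins, whereas the real implementation halves a NativeCurrencyAmount.

```rust
const INITIAL_BLOCK_SUBSIDY: u64 = 64; // in whole Neptune Coins
const BLOCKS_PER_GENERATION: u64 = 321_630; // roughly three years

/// Illustrative halving schedule: one halving per generation.
fn block_subsidy(height: u64) -> u64 {
    let halvings = (height / BLOCKS_PER_GENERATION).min(63);
    INITIAL_BLOCK_SUBSIDY >> halvings
}
```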
In addition to the block subsidy, blocks also redistribute the transaction fees paid by the transactions included in their block. The sum of the block subsidy and the transaction fees is the block reward.
Half of the block reward is time-locked for MINING_REWARD_TIME_LOCK_PERIOD = 3 years; the other half is liquid immediately.
Distribution of Block Reward
The block reward is distributed between the composer and the guesser at a ratio determined solely by the composer. The composer claims (part of) the block reward by including into the block a transaction that spends it to UTXOs under his control. The guesser automatically receives the remaining portion upon finding the winning nonce.
Block composers can choose to disseminate block proposals, which are blocks without winning nonces. Guessers can pick the block proposal that is most favorable to them.
Neptune-Core
Neptune-Core is the name for Neptune's canonical client software. It denotes the binary/executable that runs a node on the Neptune network.
Neptune Core Overview
neptune-core uses the tokio async framework and tokio's multi-threaded executor, which assigns tasks to threads in a threadpool and requires the use of thread synchronization primitives. We refer to spawned tokio tasks as tasks, but you can think of them as threads if that fits your mental model better. Note that a tokio task may (or may not) run on a separate operating system thread from the task that spawned it, at tokio's discretion.
neptune-core connects to other clients through TCP/IP and accepts calls to its RPC server via tarpc, using JSON serialization over the serde_transport. The project also includes neptune-cli, a command-line client, and neptune-dashboard, a CLI/TUI wallet tool. Both interact with neptune-core via the tarpc RPC protocol.
Long-lived async tasks of neptune-core binary
There are four classes of tasks:

- main: handles init and main_loop
- peer[]: handles connect_to_peers and peer_loop
- mining: runs miner_loop; has a worker and a monitor task
- rpc_server[]: handles rpc_server for incoming RPC requests
Channels
Long-lived tasks can communicate with each other through channels provided by the tokio framework. All communication goes through the main task; for example, there is no way for the miner task to communicate directly with peer tasks.
The channels are:
- peer to main: mpsc ("multiple producer, single consumer").
- main to peer: broadcast. Messages can only be sent to all peer tasks. If you only want one peer task to act, the message must include an IP that identifies the peer for which the action is intended.
- miner to main: mpsc. Only one miner task (the monitor/master task) sends messages to main. Used to tell the main loop about newly found blocks.
- main to miner: watch. Used to tell the miner to mine on top of a new block; to shut down; or that the mempool has been updated, and that it therefore is safe to mine on the next block.
- rpc server to main: mpsc. Used, e.g., to send a transaction object that is built from client-controlled UTXOs to the main task, where it can be added to the mempool. This channel is also used to shut down the program when the shutdown command is called.
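The topology can be sketched with tokio's channel constructors as follows; the message types are stand-ins named after the events table further down.

```rust
use tokio::sync::{broadcast, mpsc, watch};

// Stand-in message types.
struct PeerTaskToMain;
#[derive(Clone)]
struct MainToPeerTask; // broadcast requires Clone
struct FromMinerToMain;
struct MainToMiner;

fn wire_up() {
    // peer tasks -> main: many senders, one receiver.
    let (_peer_tx, _main_rx) = mpsc::channel::<PeerTaskToMain>(128);
    // main -> all peer tasks: every subscriber sees every message.
    let (_main_tx, _peer_rx) = broadcast::channel::<MainToPeerTask>(128);
    // miner monitor task -> main.
    let (_miner_tx, _main_rx2) = mpsc::channel::<FromMinerToMain>(16);
    // main -> miner: the receiver only observes the most recent value.
    let (_to_miner_tx, _to_miner_rx) = watch::channel(MainToMiner);
}
```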
Global State
All tasks that are part of Neptune Core have access to the global state, and they can all read from it. Each type of task can also have its own local state that is not shared across tasks; such local state is not discussed here.
The global state has five fields and they each follow some rules:
- wallet_state: contains the information necessary to generate new transactions and print the user's balance.
- chain: blockchain state. Contains information about the state of the blockchain: block height, digest of the latest block, etc. Only the main task may update chain. chain consists of two fields:
  - light_state: ephemeral; contains only the latest block.
  - archival_state: persistent. archival_state consists of data stored both in a database and on disk. The blocks themselves are stored on disk, and meta-information about the blocks is stored in the block_index database. archival_state also contains the archival_mutator_set, which can be used to recover unsynced membership proofs for the mutator set.
- network: network state. Consists of peer_map, for storing in-memory info about all connected peers, and peer_databases, for persisting info about banned peers. Both of these can be written to by main or by peer tasks. network also contains a syncing value (which only main may write) and instance_id, which is read-only.
- cli: CLI arguments. The state carries around the CLI arguments. These are read-only.
- mempool: an in-memory data structure holding the set of transactions that have not yet been mined in a block. The miner reads from the mempool to find the most valuable transactions to mine. Only the main task may write to mempool. mempool comes with a concept of ordering such that only the transactions that pay the highest fee per size are remembered, and it enforces a max size so that its size is constrained.
Receiving a New Block
When a new block is received from a peer, it is first validated by the peer task. If the block is valid and more canonical than the current tip, it is sent to the main task. The main task is responsible for updating the GlobalState data structure to reflect the new block. This is done by write-acquiring the single GlobalStateLock and then calling the respective helper functions with this lock held throughout the updating process.

There are two pieces of code in the main loop that update the state with a new block: one for when new blocks are received from a peer, and one for when the block is found locally by the miner task. These two functionalities are somewhat similar. In this process all databases are flushed to ensure that the changes are persisted on disk. The individual steps of updating the global state with a new block are:
- If the block was found locally: send it to all peers before updating state. If the block was received from a peer: check whether sync mode is activated and whether we can leave sync mode (see below for an explanation of synchronization).
- write_block: write the block to disk and update the block_index database with the block's meta information.
- update_mutator_set: update the archival mutator set with the transaction (input and output UTXOs) from this block by applying all addition records and removal records contained in the block.
- update_wallet_state_with_new_block: check whether this block contains UTXOs spent by or sent to us. Also update membership proofs for unspent UTXOs that are managed by/relevant to/spendable by this client's wallet.
- mempool.update_with_block: remove transactions that were included in this block and update all mutator set data associated with all remaining transactions in the mempool.
- Update light_state with the latest block.
- Flush all databases.
- Tell the miner:
  - If the block was found locally: tell the miner that it can start working on the next block, since the mempool has now been updated with the latest block.
  - If the block was received from a peer: tell the miner to start building on top of the new chain tip.
Spending UTXOs
A transaction that spends UTXOs managed by the client can be made by calling the create_transaction method on the GlobalState instance. This function needs a synced wallet_db and a chain tip in light_state to produce a valid transaction.

For a working example, see the implementation of the send_to_many() RPC method.
Scheduled Tasks in Main Loop
Different tasks are scheduled in the main loop every N seconds. These currently handle: peer discovery, block (batch) synchronization, and mempool cleanup.
- Peer discovery: This is used to find new peers to connect to. The logic attempts to find peers that have a distance bigger than 2 in the network, where distance 0 is defined as yourself; distance 1 are the peers you connect to at startup, plus all incoming connections; distance 2 are your peers' peers; and so on.
- Synchronization: Synchronization is intended for nodes to catch up if they are more than N blocks behind the longest reported chain. When a client is in synchronization mode, it will batch-download blocks in sequential order to catch up with the longest reported chain.
- Mempool cleanup: Remove from the mempool transactions that are more than 72 hours old.
A task for recovering unsynced membership proofs would fit well in here.
Design Philosophies
- Avoid state-through-instruction-pointer. This means that a request/response exchange should be handled without nesting of, e.g., matched messages from another peer. When a peer task requests a block from another peer, it must return to the instruction pointer where it can receive any message from that peer, and not only work if the requested block is the next message. The reasoning is that a peer task must be able to respond to, e.g., a peer-discovery request from the same peer before that peer responds with the requested block.
Central Primitives
From tokio:

- spawn
- select!
- tokio::sync::RwLock

From the standard library:

- Arc

From neptune-core:

- neptune_core::locks::tokio::AtomicRw (wraps Arc<tokio::sync::RwLock>)
Persistent Memory
We use leveldb for our database layer, with custom wrappers that make it more async-friendly and type-safe, and that emulate multi-table transactions.

neptune_core::database::NeptuneLevelDb provides async wrappers for leveldb APIs to avoid blocking async tasks.

leveldb is a simple key/value store, meaning it only allows manipulating individual strings. It does, however, provide a batch-update facility. neptune_core::database::storage::storage_schema::DbSchema leverages these batch updates to provide vector and singleton types that can be manipulated in Rust code and then atomically written to leveldb as a single batch update (aka transaction).
Blocks are stored on disk and their position on disk is stored in the block_index database. Blocks are read from and written to disk using mmap. We wrap all file-system calls with tokio's spawn_blocking() so they will not block other async tasks.
Challenges
- Deadlocks. We only have a single RwLock over the GlobalState, encapsulated in the struct GlobalStateLock. This makes deadlocks pretty easy to avoid by following some simple rules (see the sketch after this list):
  - Avoid deadlocking yourself. If a function has read-acquired the global lock, the lock must be released before write-acquiring it. Likewise, never attempt to write-acquire the lock twice.
  - Avoid deadlocking others. Always be certain that the global lock will be released in a timely fashion. In other words, if you have some kind of long-running task with an event loop that needs to acquire the global lock, ensure that the lock gets acquired and released inside the loop rather than outside.
- Atomic writing to databases. neptune-core presently writes to the following databases: wallet_db, block_index_db, archival_mutator_set, peer_state. If one of the databases is updated but the others are not, this can leave data in an invalid state. We could fix this by storing all state in a single transactional database, but that might make the code base less modular.
Note: we should also add logic to rebuild the archival state from the block_index_db and the blocks stored on disk, since it can be derived from the blocks. This functionality could be contained in a separate binary, or a check could be performed at startup.
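The sketch below illustrates the locking discipline from the deadlock rules above: acquire and release inside the loop body, and never hold a read guard while write-acquiring. GlobalState here is a stand-in.

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::RwLock;

struct GlobalState {
    tip_height: u64,
}

async fn event_loop(state: Arc<RwLock<GlobalState>>) {
    loop {
        {
            let s = state.read().await; // read-acquire...
            let _ = s.tip_height;
        } // ...guard dropped here, before any write-acquire
        {
            let mut s = state.write().await; // write-acquire briefly
            s.tip_height += 1;
        } // write guard dropped here
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```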
Tracing
A structured way of inspecting a program when designing the RPC API is to use tracing, a logger suitable for programs with asynchronous control flow.
- Get a feeling for the core concepts.
- Read tokio's short tutorial.
- View the 3 different formatters.
- See what we can have eventually: https://tokio.rs/tokio/topics/tracing-next-steps
The main value proposition of tracing is that you can add the #[instrument] attribute to the function you are currently working on. This will print the nested trace!("") statements. More advanced usage is also possible:

```rust
use tracing::{debug, info, instrument, trace};

#[instrument(ret, skip_all, fields(particular_arg = inputarg1 * 2), level = "debug")]
fn my_func(inputarg1: u32, inputarg2: u32) -> u32 {
    debug!("This will be visible from `stdout`");
    info!("This prints");
    trace!("This does not print {:#?}", inputarg2);
    inputarg1 * 42 + inputarg2
}
```
This prints the return value but none of the arguments (the default behaviour is to print all arguments with the std::fmt::Debug formatter). It also creates a new key whose value is double inputarg1 and prints that.
It then prints everything at debug level or above, where trace < debug < info < warn < error, so here the trace!() statement is omitted. You configure the lowest level you want to see with the environment variable RUST_LOG=debug.
RPC
To develop a new RPC, it can be productive to view two terminals simultaneously and run one of the following commands in each:
```sh
# Window 1: the RPC server
XDG_DATA_HOME=~/.local/share/neptune-integration-test/0/ RUST_LOG=debug cargo run -- --compose --guess --network regtest
```

```sh
# Window 2: the RPC client
XDG_DATA_HOME=~/.local/share/neptune-integration-test/0/ RUST_LOG=trace cargo run --bin rpc_cli -- --server-addr 127.0.0.1:9799 send '[{"public_key": "0399bb06fa556962201e1647a7c5b231af6ff6dd6d1c1a8599309caa126526422e", "amount":{"values":[11,0,0,0]}}]'
```
Note that the client exits quickly, so the .pretty() tracing subscriber is suitable for it, while .compact() is perhaps better for the server.
neptune-cli client
neptune-cli is a separate program with a separate address space. This means the state object (see above) is not available, and all data from Neptune Core must be received via RPC.

neptune-cli does not have any long-lived tasks; rather, it receives individual commands via the CLI, sends a query to neptune-core, presents the response, and exits.
Events
neptune-core can be seen as an event-driven program. Below is a list of the events and the messages that each event creates.

| Description | Direct Task Messages | Indirect Task Messages | Spawned Network Messages |
|---|---|---|---|
| New block found locally | FromMinerToMain::NewBlock | MainToPeerTask::BlockFromMiner, PeerMessage::Block | PeerMessage::Block |
| New block received from peer (got PeerMessage::Block) | PeerTaskToMain::NewBlock | ToMiner::NewBlock, MainToPeerTask::Block | PeerMessage::BlockNotification |
| Block notification received from peer (got PeerMessage::BlockNotification) | MainToMiner::NewBlock | MainToPeerTask::Block | PeerMessage::BlockNotification |
Syncing
Syncing is different depending on the node type.
Synchronization for Archival Nodes
Synchronization describes the state that a blockchain client can be in.
Synchronization is motivated by the way that regular block downloading happens. If a client receives a new block from a peer, the client checks if it knows the parent of this block. If it does not know the parent, then the client requests the parent from the peer. If this parent block is also not known, it requests the parent of that and so on. In this process all blocks are received in opposite order from which they are mined, and the blocks whose parents are not known are kept in memory. To avoid overflowing the memory if thousands of blocks were to be fetched this way, synchronization was built.
When synchronization is active, the blocks are fetched in sequential order, from oldest to newest block.
State that is used to manage synchronization is stored in the main thread, which runs at startup. This thread ends up in main_loop.rs and stays there until program shutdown.

The MutableMainLoopState currently consists of two fields: a state to handle peer discovery and a state to handle synchronization. The SyncState records which blockchain heights the connected peers have reported, and it records the latest synchronization request that was sent by the client. When a peer is connected, the handshake for the connection contains the latest block header; if the height and proof-of-work-family values exceed the client's own values by a certain (configurable) threshold, synchronization mode is activated. The synchronization process runs once every N seconds (currently 15) and determines which kind of request for a batch of blocks should be sent to a peer. A client can request a batch of blocks from a peer using the PeerMessage::BlockRequestBatch type constructor. This type takes a list of block digests and a requested batch size as parameters. The list of block digests represents blocks that the client has already stored to its database.

The peer then responds with a list of transfers that follows the first digest it recognizes in the list of block digests the syncing node has sent.
Reorganization
Neptune is a blockchain which features recursive STARK proofs as part of its consensus mechanism. This implies that participants can synchronize trustlessly by simply downloading the latest block and verifying this. Unlike most other blockchains, it is not necessary to download all historical blocks to get a cryptographically verified view of the state of the blockchain.
It is possible, though, to run an archival node that downloads all historical blocks. An archival node comes with additional functionality, such as being able to reconstruct transactions' membership proofs, provide some historical transaction statistics, and allow other archival nodes to synchronize.
This document provides an overview of how different parts of the client's state handle reorganizations.
State overview
The client's state consists of the following parts:
- wallet
- light state
- archival state (optional)
- mempool
The wallet handles transactions that the client holds the spending keys for. The light state contains the latest block which verifies the validity of the entire history of the blockchain. The archival state is optional and allows, among other things, the client to re-synchronize wallets that are no longer up-to-date. The mempool keeps track of transactions that are not yet included in blocks, thus allowing miners to confirm transactions by picking some from the mempool to include in the next block.
Wallet
The wallet can handle reorganizations that are up to n blocks deep, where n can be controlled with the CLI argument number_of_mps_per_utxo. Reorganizations deeper than this will make the membership proofs of the transactions temporarily invalid, until they can be recovered either through the client's own archival state (if it exists) or through a peer's archival state. This recovery process happens automatically.
Light State
The light state only contains the latest block and thus can handle arbitrarily deep reorganizations.
Archival State
The archival state can handle arbitrarily deep reorganizations.
Mempool
The mempool currently cannot handle reorganizations. If a reorganization occurs, all transactions in the mempool are deleted, and the initiator of a transaction has to publish it again. Transactions that were included in blocks abandoned through the reorganization are not added back to the mempool either; they too have to be published again.
Keys and Addresses
neptune-core uses an extensible system of keys and addresses. This is accomplished via an abstract type for each. At present two types of keys are supported: Generation and Symmetric.
Abstraction layer
Three enums are provided for working with keys and addresses:

| enum | description |
|---|---|
| KeyType | enumerates available key/address implementations |
| SpendingKey | enumerates key types and provides methods |
| ReceivingAddress | enumerates address types and provides methods |
Note: it was decided to use enums rather than traits because the enums can be used within our RPC layer, while traits cannot.
Most public APIs use these types. That provides flexibility and should also make it easy to add new implementations in the future if necessary.
Root Wallet Seed
At present all supported key types are based on the same secret seed. The end-user can store/back up this seed using a BIP39-style mnemonic.
Key derivation
For each key-type, the neptune-core wallet keeps a counter which tracks the latest derived key.
To obtain the next unused address for a given key type, call the RPC method next_receiving_address(key_type). (Note: as of this writing it always returns the same address at index 0, but in the future it will work as described.)
An equivalent API for obtaining the next unused spending key is available in the neptune-core crate, but is not (yet?) exposed as an rpc API.
Available key types
Generation and Symmetric type keys are intended for different usages.
Generation keys and addresses
Generation keys are asymmetric keys, meaning that they use public-key cryptography to separate a secret key from a public key. They are primarily intended for sending funds to third-party wallets. They can also be used for sending funds back to the originating wallet, but when used in this context they waste unnecessary space and incur unnecessary fees on the part of the transaction initiator.
Generation keys and addresses use the lattice-based public-key encryption scheme described in Section 2.7 of this paper. This choice of cryptosystem was made because of its native compatibility with the Oxfoi prime, $2^{64} - 2^{32} + 1$, which is the field into which Neptune encodes all blockchain data. (It does this, in turn, because Triton VM only works over this field.) Furthermore, according to current understanding, the parameters and underlying mathematics guarantee security long into the future and, in particular, even against attacks mounted on quantum computers.
The address encodes the public key using bech32m. The human readable prefix "nolga" stands for "Neptune oxfoi lattice-based generation address". The public announcement encodes a ciphertext which, when decrypted with the correct key, yields the UTXO information.
Naming
These are called "Generation" keys because they are designed to be quantum-secure, and it is believed/hoped that the cryptography should be unbreakable for at least a generation, and hopefully many generations. If correct, it would be safe to put funds in a paper or metal wallet and ignore them for decades, perhaps until they are transferred to the original owner's children or grandchildren.
Symmetric keys and addresses
Symmetric keys are implemented with AES-256-GCM, a symmetric encryption scheme in which a single key is used both for encrypting and decrypting. Anyone holding the key can spend the associated funds. A symmetric key plays the role of a private key, and there is no equivalent of a public key.
They are primarily intended for sending funds (such as change outputs) back to the originating wallet. However additional use-cases exist such as sending between separate wallets owned by the same person or organization.
Data encrypted with Symmetric keys is smaller than data encrypted with asymmetric keys such as Generation. As such, it requires less blockchain space and should result in lower fees. For this reason, change output notifications are encrypted with a Symmetric key by default, and it is desirable to do the same for all outputs destined for the originating wallet.
Note that the Symmetric variants of the abstract types SpendingKey and ReceivingAddress both use the same underlying SymmetricKey, so they differ only in the methods available. For this reason, it is important never to give an "address" of the Symmetric type to an untrusted third party, because it is also the spending key.
Utxo Notification
When a sender creates a payment it is necessary to transfer some secrets to the recipient in order for the recipient to identify and claim the payment.
The secrets consist of a Utxo and a Digest that represents a random value created by the sender, called sender_randomness.
It does not matter how these secrets are transferred between sender and receiver so long as it is done in a secure, private fashion.
There are two broad possibilities:
- write the secrets to the blockchain, encrypted to the recipient
- do not write secrets to the blockchain. Use some out-of-band method instead.
neptune-core supports both of these; they are referred to as notification methods. An enum UtxoNotifyMethod exists and provides the variants OnChain and OffChain.
It is also important to recognize that sometimes the sender and receiver may be the same wallet or two wallets owned by the same person or organization.
OnChain Utxo transfers
OnChain transfers are performed with the struct PublicAnnouncement. It is an opaque list of fields of type BFieldElement that can hold arbitrary data. A list of PublicAnnouncements is attached to each Neptune Transaction and stored on the blockchain.
The Neptune key types leverage PublicAnnouncement to store the key_type in the first field and a unique receiver_id, derived from the receiving address, in the second field. These fields are plaintext, so anyone can read them. The remaining fields (variable length) are filled with encrypted ciphertext that holds the Utxo and sender_randomness, which are necessary to claim/spend the Utxo.
Identifying Utxos destined for our wallet
Illustrating the challenge.
Given that the notification secrets are encrypted, there exists a problem: how can a wallet identify which PublicAnnouncements are intended for it?

The simplest and most obvious solution is to attempt to decrypt the ciphertext of each. If the decryption succeeds, then we can proceed with claiming the Utxo. While this works, it is very inefficient. Each block may contain thousands of PublicAnnouncements. Further, our wallet may have hundreds or even thousands of keys that must be checked against each announcement, making this an n*m operation. While it may be feasible for a node to do this if it is online all the time, it becomes very expensive to scan the entire blockchain, as may be necessary when restoring an old wallet from a seed.
We can do better.
How `neptune-core` solves it.

This is where the `key_type` and `receiver_id` of the `PublicAnnouncement` come into play. Since these fields are plaintext, we can use them to identify notifications intended for our wallet prior to attempting decryption.

Each `SpendingKey` has a `receiver_identifier` field that is derived from the secret key. This uniquely identifies the key without giving away the secret. As such, it can be shared in the public announcement.
The algorithm looks like:

```text
for each key-type we support:
    for each known key in our wallet:
        for each public-announcement in the block-transaction:
            filter by key-type
            filter by key.receiver_id
            filter by key.decrypt(announcement.ciphertext) result
```
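Translated into a Rust sketch with stub types (not the real `neptune-core` API), the point is that the two cheap plaintext filters run before the expensive trial decryption:

```rust
// All types and methods here are stubs; the real neptune-core API differs.
#[derive(PartialEq)]
struct KeyType(u8);
#[derive(PartialEq)]
struct ReceiverId(u64);

struct PublicAnnouncement {
    key_type: KeyType,
    receiver_id: ReceiverId,
    ciphertext: Vec<u8>,
}

struct SpendingKey {
    key_type: KeyType,
    receiver_id: ReceiverId,
}

impl SpendingKey {
    // Stub: real decryption would return (Utxo, sender_randomness) on success.
    fn decrypt(&self, _ciphertext: &[u8]) -> Option<Vec<u8>> {
        None
    }
}

fn scan(keys: &[SpendingKey], announcements: &[PublicAnnouncement]) -> Vec<Vec<u8>> {
    let mut claimable = Vec::new();
    for key in keys {
        for announcement in announcements {
            if announcement.key_type != key.key_type {
                continue; // cheap plaintext filter 1
            }
            if announcement.receiver_id != key.receiver_id {
                continue; // cheap plaintext filter 2
            }
            if let Some(secrets) = key.decrypt(&announcement.ciphertext) {
                claimable.push(secrets); // decryption succeeded: ours to claim
            }
        }
    }
    claimable
}
```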
Privacy warning
It is important to note that this scheme makes it possible to link together multiple payments that are made to the same key. This mainly affects `Generation` keys, as the address (public key) is intended to be shared with third parties, and it is not possible to prevent third parties from making multiple payments to the same address.

Wallet owners can mitigate this risk somewhat by generating a unique receiving address for each payment and avoiding posting it in a public place. Of course this is not feasible for some use-cases, e.g. posting an address in a forum for the purpose of accepting donations.

It is planned to address this privacy concern, but that may not happen until after Neptune mainnet launches.
OffChain Utxo transfers
Many types of `OffChain` transfers are possible. Examples:

- Local state (never leaves source machine/wallet).
- Neptune p2p network
- External / Serialized (proposed)

In the future `neptune-core` or a 3rd party wallet might support using a decentralized storage mechanism such as IPFS. Decentralized storage may provide a solution for ongoing wallet backups or primary wallet storage to minimize risk of funds loss, as discussed below.
Warning! Risk of funds loss
It is important to recognize that all `OffChain` methods carry an extra risk of losing funds as compared to `OnChain` notification. Since the secrets do not exist anywhere on the blockchain, they can never be restored by the wallet if lost during or any time after the transfer.

For example, Bob performs an `OffChain` utxo transfer to Sally. Everything goes fine: Sally receives the notification, and her wallet successfully identifies and validates the funds. Six months later Sally's hard drive crashes and she doesn't have any backup except for her seed phrase. She imports the seed phrase into a new `neptune-core` wallet. The wallet then scans the blockchain for `Utxo` that belong to Sally. Unfortunately the wallet will not be able to recognize or claim any `Utxo` that she received via `OffChain` notification.

For this reason, it is crucial to maintain ongoing backups/redundancy of wallet data when receiving payments via `OffChain` notification, and/or to ensure that the `OffChain` mechanism can reasonably provide data storage indefinitely into the future.

Wallet authors should have strategies in mind to help prevent funds loss for recipients if providing off-chain send functionality. Using decentralized cloud storage for encrypted wallet files might be one such strategy.

With the scary stuff out of the way, let's look at some `OffChain` notification methods.
Local state.
note: `neptune-core` already supports `OffChain` notifications via local state.

Local state transfers are useful when a wallet makes a payment to itself. Self-payments occur for almost every transaction when a change output is created. Let's say that Bob has a single `Utxo` in his wallet worth 5 tokens. Bob pays Sally 3 tokens, so the 5-token `Utxo` gets split into two `Utxo` worth 3 and 2 respectively. The 2-token `Utxo` is called the change output, and it must be returned into Bob's wallet.

note: A wallet can send funds to itself for other reasons, but change outputs are predicted to be the most common use-case.

When a wallet is sending a `Utxo` to itself there is no need to announce this on the public blockchain. Instead the wallet simply stores a record, called an `ExpectedUtxo`, in local state (memory and disk), and once a block is mined that contains the transaction, the wallet can recognize the `Utxo`, verify it can be claimed, and add it to the list of wallet-owned `Utxo` called `monitored_utxos`.
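Such a record might be sketched as follows; the field types are illustrative stand-ins, not the actual `neptune-core` definition:

```rust
// Illustrative only; the actual neptune-core definition may differ.
pub struct ExpectedUtxo {
    pub utxo: Vec<u8>,               // stand-in encoding of the expected Utxo
    pub sender_randomness: [u8; 32], // stand-in for the Digest
}
```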
Neptune p2p network
note: concept only; not yet supported in `neptune-core`.

`Utxo` secrets that are destined for 3rd party wallets can be distributed via the neptune P2P network. This would use the same p2p protocol that distributes transactions and blocks; however, the secrets would be stored in a separate `UtxoNotificationPool` inside each `neptune-core` node.

There are challenges with keeping the data around in perpetuity, as this would place a great storage burden on p2p nodes. A solution outside the p2p network might be required for that.
External / Serialized
note: this is a proposed mechanism. It does not exist at the time of writing.

The idea here is that the transfer and ongoing storage take place completely outside of `neptune-core`.

- When a transaction is sent, `neptune-core` would provide a serialized data structure, e.g. `OffchainUtxoNotification`, containing the fields `key_type`, `receiver_identifier`, and `ciphertext(utxo, sender_randomness)` for each `OffChain` output. Note that these are the exact fields stored in `PublicAnnouncement` for on-chain notifications.
- Some external process then transfers the serialized data to the intended recipient.
- The recipient then invokes the `claim_utxos()` RPC API and passes in a list of serialized `OffchainUtxoNotification`. `neptune-core` then attempts to recognize and claim each one, just as if it had been found on the blockchain.
- Optionally, the recipient could pass a flag to `claim_utxos()` that would cause it to initiate a new `OnChain` payment into the recipient's wallet. This could serve a couple of purposes:
  - using `OnChain` notification minimizes future data-loss risk for the recipient;
  - if the funds were sent with a symmetric key, this prevents the sender from spending (stealing) the funds later.
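Since the mechanism is only proposed, any sketch of it is necessarily speculative; the fields below simply mirror the list above:

```rust
// Proposed/illustrative only -- this type does not exist in neptune-core
// at the time of writing.
pub struct OffchainUtxoNotification {
    pub key_type: u8,             // same plaintext tag as in PublicAnnouncement
    pub receiver_identifier: u64, // derived from the receiving address
    pub ciphertext: Vec<u8>,      // encrypts (utxo, sender_randomness)
}
```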
User Guides
Explainers and tutorials on how to use or get started using the various software packages that constitute the client.
Building the software, or installing it using a script, yields four executables. Two of these executables are user interfaces. The executables are:

- `neptune-core` is the daemon that runs the protocol.
- `triton-vm-prover` is a binary invoked by `neptune-core` for out-of-process proving tasks.
- `neptune-dashboard` is a terminal user interface that requires a running instance of `neptune-core`.
- `neptune-cli` is a command-line interface that might require a running instance of `neptune-core`, depending on the command.
Except for the installation instructions, the user guides in this section assume these executables are installed.
Installation
Compile from Source
Linux Debian/Ubuntu
- Open a terminal to run the following commands.
- Install curl: `sudo apt install curl`
- Install the rust compiler and accessories: `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y`
- Source the rust environment: `source "$HOME/.cargo/env"`
- Install build tools: `sudo apt install build-essential`
- Install LevelDB: `sudo apt install libleveldb-dev libsnappy-dev cmake`
- Download the repository: `git clone https://github.com/Neptune-Crypto/neptune-core.git`
- Enter the repository: `cd neptune-core`
- Check out the release branch: `git checkout release`. (Alternatively, for the unstable development branch, skip this step.)
- Build for release and put the binaries in your local path (`~/.cargo/bin/`): `cargo install --locked --path .` (needs at least 3 GB of RAM and a few minutes)
Windows
To install Rust and cargo on Windows, you can follow these instructions. Installing cargo might require you to install Visual Studio with some C++ support, but the cargo installer for Windows should handle that. With a functioning version of cargo, compilation on Windows should just work out-of-the-box with `cargo build` etc.

- Download and run the CMake installer from the website.
- Open PowerShell to run the following commands.
- Download the repository: `git clone https://github.com/Neptune-Crypto/neptune-core.git`
- Enter the repository: `cd neptune-core`
- Check out the release branch: `git checkout release`. (Alternatively, for the unstable development branch, skip this step.)
- Run `cargo install --locked --path .`
Automatic
Go to the releases page, scroll down to the section "Assets" and select and install the right package for your system.
Managing Secret Seeds
The wallet derives all spending keys and receiving addresses from a secret seed. It does this deterministically, meaning that with a back-up of the secret seed you can re-derive the exact same keys and addresses. Moreover, with the exception of off-chain UTXO notifications, all incoming payments have on-chain ciphertexts that (when decrypted) provide all necessary information to spend the funds at a later date. Put together, this construction means that (with the exception of payments with off-chain UTXO notifications):
a back-up of the secret seed, along with historical blockchain data, suffices to reclaim all funds.
Wallet File
By default, `neptune-core` stores the wallet secret seed to and reads it from `[data-dir]/neptune/[network]/wallet/wallet.dat`. Here `[data-dir]` is the data directory; this directory is the second line in the log output when running `neptune-core`. The `[network]` is `main` unless you are not on mainnet.

A convenient command is `> neptune-cli which-wallet`, which shows the location of the wallet file.
Warning: do not share your wallet file with other people, especially not other people claiming to help you.
Incoming Sender Randomness
There is another file in the same data directory called `incoming_randomness.dat`. It contains data that you also need to spend funds, but since this data is generated by the sender and not the receiver, it cannot be derived from the wallet's secret seed.

The incoming sender randomness is always part of the information payload sent (in encrypted form) to the beneficiary of a transaction, along with the amount of funds transferred. With the exception of off-chain UTXO transfers, this ciphertext lives on the blockchain; and so, with that exception, the blockchain serves to back up the information in `incoming_randomness.dat`.

If you do receive transactions with off-chain UTXO notifications, it is recommended to either a) back up this file, or b) consolidate your funds by sending them to yourself via a transaction with on-chain notification.
New Secret Seed
By default, `neptune-core` will read the wallet file. If none exists, it will generate one and populate it with a random secret seed.

To generate a new secret seed and `wallet.dat` without starting `neptune-core`, use the CLI: `> neptune-cli generate-wallet`.

Note that this command does nothing if the wallet file already exists. If you want to invoke this command even though a `wallet.dat` file already exists, rename the existing file first.
Secret Seed Phrase
Neptune supports BIP-39 secret seed phrases. A secret seed phrase consists of 18 simple English words, such as the ones shown below. Secret seeds can be exported to phrases and vice versa; the point is that the phrase is easier to back up, for instance by physically carving it into fire-proof stone.
1. toilet
2. trick
3. shiver
4. never
5. can
6. frown
7. gonna
8. mirror
9. mail
10. let
11. connect
12. oven
13. you
14. type
15. pill
16. down
17. vast
18. view
- To export a seed phrase: `> neptune-cli export-seed-phrase`. This command will read from the `wallet.dat` file and will fail if that file does not exist.
- To import a seed phrase: `> neptune-cli import-seed-phrase`. Note that this command will not do anything if a `wallet.dat` file already exists.
Generating Addresses
Generation Address
Presently, Neptune has two types of addresses, Generation Addresses and Symmetric Addresses. The latter type is used internally for sending funds to the same client from which they originate, like with change and composer UTXOs. Generation addresses are the type you want for addresses that third parties can send funds to.
A generation address looks like this:
nolgam1y9765mhpwlv3syglda3qrfjn97rw4chgtkv76c85z5z7wm5l2emkwnhkshmpwcwyxy0dnzd3e75k7fkr8khm86usptf4g2ztxl0hlnmulttnqeqf96zwgumr27jg96ymgr70s3ms4a2nmj5qku46m83hg8cr2awk9pq38vusn76v9w5j5gqcq06kp5njj7ndfvwyr8xn7k6crsh4hmj4xu9ayxx08xp68uquy9ecr37d9v38ef43ygcxmeveqw57acp306cu6wmxx4xt40js8emp5ly6tjla7t7xx4gsd0uxv8ukh36v4dv69jfj6wjrrud5wjmpv4l96dhl6uq5ynqkgyeg9njuxfm8qfuxu8y53pqwuxch2y2ujwggea3snfkyq8w8yxs7ls7h8fkmn48rahj0k5rtqak762tepd9ks7sy34rvfa74f24zzk8h52k6tq2ru35ua8m36pjlr9gppwa9un20na4654peju8u95zlg9ev09edlnedla5vry8h92gkrz7g8vrw7jtt6xjasgu0eshtvlywz25zr5408mxu3l8s9lmgh5fsylampmfkyna64kykp2jetmsr8cese5fxa6u3s785lswpy3n2mu4fptvfmw20w4gsdqxlj7qnns3eunwahhwp7x48dq8auqjh3lqgmhek9fkfewdnrmqxhf0h8z7le5e3pknvlh0nc92wqfd5wx2n75m8tss6j8nkl2ajk2apf9qqr44y40l5dqydwxe9pwfssrvu08swlt9e9dwrdhlukcjq82ch9e9jmttjw7xkchaqt0fz49d4shddldghhy53dk0uewf0zx2mhs2qa05uc4wgpsxnc2ravardxprjlq0pqgnn5rk2v855aw8j67h20cghlfw29a5ph9dyz5c6shamer48shun52hqhylkj7fxfkdq2hnafjty2n8aaam5luthtkd4gt8xw6w9mum92twdzme07ly2wx6gtm688ctdr8c9gvj7vkgflr8eacmarwev2r3g0rpthdgsa8x2prvjyuh4y3yh6gv6ll49h237uk55c6kzj9wc93n7lw9x4cpwy2f3czd2amrhtjv55rtvmngc9jel62e22q3a064agv9xg34ekzwxxvqx4n87alzt05rtdcndfefhw6w6et45nvp0ngvgpzuum9sfj5kpfse3hravn5hs22j84e4udlt4qpf60nvqc48squmxxeq7kljtfjawmag6quzdt0ec0yvz3wttkalja5wcd27c8lje3v0ver2etkevp4xkjdmprkg0ggf9smp5c0ecw66vfghe7uyv6smyr06l860wvhx0nrdryur6daneu6tk2fah9xw2ww8qcce00q37hkf6w23t5sc2sjthn9uljesw7gqrjsn64h4dhhezmzr7t08tmyu5pwejynt9jqtdhc6525nf6k0ex3e7r28re9ng28ralugdq52ke5zvvrm3u7wjayhlh0hajn2dykmqllc7kegt3lzcn2m3d0v7ede54hc7zxfwyzg0v07k02n8lpjyf7cv5hwuw3ug86zp8dctwpfgww9a405r58ekxkzlwsq02lvsc8q6safkp637km4vuqleew7x9at5cd86jv4rp3yvm4dplcjcnnyupjvq8k5qdds7ywm3lpxeyumyqykjja6casllszplfwgszj9wsqhvzhucjkmtj9u9qnj7jhphftnkscs7vx4yzp4ft6u6p9g7akfmqpqzyuw7vqgyr7ajdjuamvrzu7q20udznpv3yvs0e6wv7xq8hml0xyr3m9xh8aupunf5c34fk92gw7zlyjhntk9dlp4k2qu80r9vzgl4p2f9gpgn2gjxkmgfm7t7enwzhux65xz3nnqeu2m4sc9ml7cwrsrjkvllh8radkcsxr4vs27e03y95lexx4qmzsp20x58lymumpvu5kq6w4mmcrn3tsu5k8gygz98r748zvea8tdd3mwkdd9zhdsdlx6e6chssuxysvdmlhly5fw7husa7q6huama23rd3e8jchfks5v8jmp04jcy96ac0gys3k8jggjxlnsgervnrskzw88fxp29xapy7syyzvlszahgx62x8nag9tj9lek9v7hqknzgp5phm93lcxv6x4qsj6tl7sxuzdr2ad7jjez9trw750kzuc3jrqvd0gd9d0h2w86nvkj3g8xq0t6zfw3n58v2569mrrz9v9uxqrwsjlgu59lqxlnxdjmrpda0jeq5dlvedzyn34m6ru7rmy9t8u9pawl9c9zh28p76dneexjezge7d2hfuj5pjas26gdslgdh956zaxszwczmspnjkmnptjvxdhcv893mdrzn2eqaxlymhyrzulnjkykvgxtd7xsglk5nm9mqcx3e6m92k0wvzaqjun6x6vu4t2ay2n9p73pvputgxjk60yy39qp3p3t2pehctptfjrzmfch6cwxmp5e2m7dav249z96ax0jw0nk4weegr9td7cnvrqrn0dlft8865kfuaf078s4drj9uaux07lshp6xm804wcu60zhwnnlsezuadn803r3qnft92vpsa8xp33aje26prsvrqhyr3zkdpddld5sp2d9n9vtu6kc6u0wm50jpxhwvt0vcyku3a4m7f7mu9j0ju80mapzz003nlmecvthzsz9stxyv4j7jkcvg8kc49e0tpucu4tuntezq87mgzla24p00txs2wyf3yatyvmwgtl40lx729e9js2v44a2p8pxeh3uv0nwq0x4c3kyujzlnvc23c2d8jl4kma3tjnc5qmv24yhhxtqldrxf5jyyquc3sch7ecwwgz7aafjyktzutr0xs2xakkfqv669549ax69sjp2mc8r7dantffe2mgwgstt3xj9j48nmk3yn24m8ap40ljq2zgd04vvxa8at9snp8skwf93pj2c5h3d4lvmjsl2sh6x5zm3fmfa235pryhqr6yy9jypvgc3qt78x0wnegu7jqyufx5xv5mdm9pkx5fkypqus05mrk8sydmmjxzcjdlmk5y7f3cjd5npa8pjsmj4pxj6lvqdnagdwfxyewl9hjhwc96maa530ypasl4ts4d3p4f3s5cvqr2xpfxld52xjj3w3m989j2kgcsumz2ns7960ymssv4slz7uwgjt87a520eeq7t9lyedfahauanxc7zpjtr7ug2p9ggykmlqnz0wjsjez3kzae49896z4z97lzufw5tl8ar0nzqy0q3nhsk0d05nj36c0w9he93sedujmw2hc69yh9v4mr2w0frxf8chj84esh9z8kukqa5gsd28fl5fqmqle05x6h98hw9hny77fpc0muc2hu2mch8mjuppt0g2492mggtyt3f0sw3uapz0tug87xgnv64nxdt7jpcm53gd39dnhe0nxzdufmnzq9sh0dhk0n2falgsvuv4xtay3
Note the prefix `nolgam` and the separator `1` at the start. All generation addresses have this feature. The remaining part is the actual payload, and it is different from address to address.
The address is bech32m-encoded, meaning that a) it has (some) error detection capacity, and b) ambiguous characters like "1", "b", "i", and "o" have been removed (except for the separator). These features are useful for users who want to type the entire string character by character. While that's possible, given the length it is preferable to just copy and paste.
Why is it so long? In short, it is the cost of post-quantum security. The longer answer is that the address encodes among other things a lattice-based public key, which is used to encrypt information about the UTXO that only the intended recipient should know.
Privacy
Users should note that generation addresses contain a receiver identifier, which is copied into the transaction so as to alert the beneficiary of an incoming payment. This design choice obviates the need to scan all transactions announced in a certain time-window and trial-decrypt the associated ciphertexts. However, it also means that different payments to the same generation address can be linked as payments to the same person.
Even when repeated payments are made to the same generation address, in the general case not even the sender (let alone third parties) can tell whether either of the payments has since been spent.
Users who want to avoid incoming payments from being linked should either a) generate a new generation address for every incoming payment; or b) use off-chain UTXO notifications.
Public Announcements
By default, a transaction that sends funds to a generation address includes a public announcement which contains a ciphertext that only the recipient of the transferred funds can decrypt. This ciphertext contains UTXO info such as the amount but also the sender randomness, which is a key piece of information required by the mutator set. Besides the ciphertext, the public announcement also contains the recipient identifier.
The benefit of using public announcements to transmit this information is that the blockchain acts as a robust back-up solution: all funds can be recovered from the user's secret seed phrase and historical block data. The drawback is a) the (marginal) loss of privacy when generation addresses are reused; and b) the larger size of transactions, which under reasonable economic assumptions means more fees are required. Using off-chain UTXO notifications instead addresses both drawbacks.
Determinism
All generation addresses are deterministically generated from the wallet secret seed and a derivation index. The derivation index is initially set to 0 and increases by one each time a new address is generated. This construction ensures that if the wallet file should be lost, the exact same sequence of generation addresses can be reproduced from the backed-up seed phrase.
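As a toy illustration of this determinism (not the real derivation, which is cryptographic), any pure function of seed and index has the required reproducibility:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for address derivation: a pure function of (seed, index)
// is always reproducible from a backed-up seed. The actual neptune-core
// derivation is entirely different.
fn derive_toy_address(seed: &[u8; 32], index: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    index.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let seed = [7u8; 32];
    // Re-deriving with the same seed and index yields the same address.
    assert_eq!(derive_toy_address(&seed, 0), derive_toy_address(&seed, 0));
    // Incrementing the index yields a fresh address.
    assert_ne!(derive_toy_address(&seed, 0), derive_toy_address(&seed, 1));
}
```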
Using neptune-dashboard
- Make sure a node is running: `> neptune-core`.
- Start `neptune-dashboard` in a new console window: `> neptune-dashboard`.
- Navigate to the "Receive" tab. Your current generation address is shown.
- Press `Enter` to generate a new generation address.
- Press `C` to enter into console mode, where you can copy the address.
- Press `Enter` to exit console mode.
Using neptune-cli
Next Address
To generate the next generation address without going through the dashboard, `neptune-cli` is your friend:

- Make sure a node is running: `> neptune-core`.
- Run `> neptune-cli next-generation-address`.
Nth Address
The previous two methods require a running node in order to read the derivation index and increment it. To generate a generation address with a given derivation index, run `> neptune-cli nth-receiving-address n` and replace `n` by the index.
Premine Receiving Address
For premine recipients, the command is `> neptune-cli premine-receiving-address`.
Shamir Secret Sharing
Neptune Core supports Shamir secret sharing to distribute shares in the wallet secret.
How It Works
A \(t\)-out-of-\(n\) Shamir secret sharing scheme works as follows. Let \(S \in \mathbb{F}\) be the original secret. In the source code, we use `XFieldElement` as the field \(\mathbb{F}\) and `SecretKeyMaterial` as a wrapper around `XFieldElement`s when they are used for this purpose.
Sample a univariate polynomial \(f(X)\) of degree at most \(t-1\) uniformly at random except for the constant coefficient. Choose \(S\) for the constant coefficient, so that \(f(0) = S\).
With an implicit embedding \(\mathbb{N} \rightarrow \mathbb{F}\) we can associate the \(i\)th share with the point \((i, f(i))\). Note that \(i=0\) is disallowed since \((0, f(0))\) corresponds to the secret. To generate \(n\) shares we let \(i\) range from \(1\) to \(n\) (including the upper bound).
To reconstruct the original secret it suffices to have any \(t\) secret shares. Just reconstruct the polynomial and evaluate it at \(0\).
However, any selection of fewer than \(t\) secret shares contains no information about the original secret.
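For example, take \(t = 2\): sample \(a \in \mathbb{F}\) uniformly at random and set \(f(X) = S + aX\). Three shares are \((1, f(1))\), \((2, f(2))\), and \((3, f(3))\). Any two of them determine the line: from shares \(1\) and \(3\), Lagrange interpolation at \(0\) gives
\[ f(0) = \frac{3}{2} f(1) - \frac{1}{2} f(3) = \frac{3(S + a) - (S + 3a)}{2} = S. \]
A single share \((i, f(i))\), by contrast, is consistent with every candidate secret \(S'\): just choose \(a = (f(i) - S')/i\). This is why fewer than \(t\) shares reveal nothing.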
How to Use It
First, make sure you have a wallet installed.
- Whenever you run `neptune-core`, it will read the wallet file or create one if none is found. Unless you moved or removed this file, it is still there.
- To test if the wallet file is present, run `neptune-cli which-wallet`.
- To generate a wallet file without running `neptune-core`, try `neptune-cli generate-wallet`.
- To import a wallet from a seed phrase, first make sure there is no wallet file, and then run `neptune-cli import-seed-phrase`.

To generate \(n\) shares in a \(t\)-out-of-\(n\) scheme, run `neptune-cli shamir-share t n` and replace `t` and `n` with the values you want. This command generates \(n\) seed phrases. Note: be sure to record the share index ("`i/n`") along with each share, as you will need this information to reconstruct the original secret.

To reconstruct the original secret, first make sure the wallet file is absent. Then run `neptune-cli shamir-combine t` and replace `t` with the same value used earlier. This command will ask you for \(t\) secret shares (with index), which you can supply by writing the seed phrase words of each share.
Example
> neptune-cli shamir-share 2 3
Wallet for beta.
Read from file `[file name redacted]`.
Key share 1/3:
1. because
2. curtain
3. remove
4. marble
5. divide
6. what
7. early
8. tilt
9. debate
10. evidence
11. tag
12. ramp
13. acquire
14. side
15. tenant
16. cloud
17. nature
18. index
Key share 2/3:
1. twenty
2. pretty
3. shiver
4. position
5. panda
6. frown
7. cargo
8. target
9. country
10. deliver
11. remind
12. label
13. kick
14. call
15. exchange
16. vital
17. absent
18. barely
Key share 3/3:
1. senior
2. comfort
3. stomach
4. since
5. yard
6. dove
7. ability
8. okay
9. cloth
10. chaos
11. attack
12. enough
13. tilt
14. junk
15. risk
16. sail
17. horse
18. primary
> neptune-cli shamir-combine 2
Enter share index ("i/n"):
1/3
Enter seed phrase for key share 1/3:
1. because
2. curtain
3. remove
4. marble
5. divide
6. what
7. early
8. tilt
9. debate
10. evidence
11. tag
12. ramp
13. acquire
14. side
15. tenant
16. cloud
17. nature
18. index
Have shares {1}/3.
Enter share index ("i/n"):
3/3
Enter seed phrase for key share 3/3:
1. senior
2. comfort
3. stomach
4. since
5. yard
6. dove
7. ability
8. okay
9. cloth
10. chaos
11. attack
12. enough
13. tilt
14. junk
15. risk
16. sail
17. horse
18. primary
Have shares {1,3}/3.
Shamir recombination successful.
Saving wallet to disk at [file name redacted] ...
Success.
Contributing
Instructions and helpful information for people who want to contribute.
Git Workflow
Github Flow
We follow a standard GitHub Flow methodology with additional release branches.
It can be visualized like this:
--------
master / topic \
----*----------------------*--------------->
\ release \ release
------------------> --------->
\ hotfix /
--------
master branch (aka trunk)
The `master` branch represents the tip of current development. It is an integration branch, in the sense that developer changes from smaller topic branches get merged and integrated into `master`, and GitHub's CI performs testing for every pull-request.

The master branch of each crate should always build and should always pass all tests.

At present, any team member with repo write access may directly commit to the `master` branch. However, as we get closer to a mainnet launch, `master` should/will become locked so that all changes must go through the pull-request process and be peer reviewed.
topic branches
Even now, team members are encouraged to create a topic branch and pull-request for larger changes or anything that might be considered non-obvious or controversial.
tip: topic branches are sometimes called feature branches.
A topic branch typically branches off of `master` or another topic branch. It is intended for an individual feature or bug-fix. We should strive to keep each topic branch focused on a single change/feature and as short-lived as possible.
Third party contributors without repo write access must create a topic branch and submit a pull request for each change. This is accomplished by:
- fork the repo
- checkout and build the desired branch (usually master or a release branch)
- create a topic branch
- make your changes and commit them.
- push your topic branch to your forked repo
- submit the pull request.
Topic Branch Naming
When working on an open github issue, it is recommended to prefix the topic branch with the issue identifier.
When the branch is intended to become a pull request, it is recommended to add the suffix `_pr`.

If the branch exists in a triton/neptune official repo (as opposed to a personal fork), then it is recommended to prefix it with your github username followed by `/`.

So if working on issue `#232` and adding feature `walk_and_chew_gum`, one might name the branch `myuser/232_walk_and_chew_gum_pr`.
release branch
The `master` branch can contain changes that are not compatible with whatever network is currently live. Beta-testers looking for the branch that will synchronize with the network that is currently live need branch `release`. This branch may cherry-pick commits that are meant for `master` so long as they are backwards-compatible. However, when this task is too cumbersome, branch `release` will become effectively abandoned -- until the next network version is released.
TestNet Release Protocol
- Ensure that master builds against crates that live on crates.io. In particular, no dependencies on github repositories or revisions.
- Update `README.md` to make sure the installation instructions are up-to-date.
- Ensure that all tests pass.
- Bump the version in `Cargo.toml`.
- Create a commit with the subject line `v0.0.6` (or whatever the new version number is) and in the body list all the changes.
- Push to `master` on github.
- Add a tag marking the current commit with the version:
  - `git tag v0.0.6` (or whatever the next version is)
  - `git push --tags`
- Set branch `release` to point to `master`:
  - `git checkout release`
  - `git reset master`
  - `git push`
- Consider making an announcement.
Conventional Commits
It is preferred/requested that commit messages use the conventional commit format.
This aids readability of commit messages and facilitates automated generation of the ChangeLog.
For all but the most trivial changes, please provide some additional lines with a basic summary of the changes and also the reason/rationale for the changes.
A git template for assisting with creation of conventional commit messages can be found in the Git Message chapter. This template can be added globally to git with this command:

`git config --global commit.template /path/to/neptune-core/docs/src/contributing/.gitmessage`

It can also be added on a per-repository basis by omitting the `--global` flag.
Cargo dependencies
For published crate releases
When publishing a crate, and/or when making a release of `neptune-core`, all dependencies should/must reference a version published to crates.io. In particular, git repo references must not be used.
For development between crate releases.
Often parallel development will be occurring in multiple triton/neptune crates. In such cases there may be API or functionality changes that necessitate temporarily specifying a git dependency reference instead of a published crates.io version.
For this, we keep the original dependency line unchanged, and add a crates.io patch at the bottom of Cargo.toml.
Example:
[dependencies]
tasm-lib = "0.2.1"
[patch.crates-io]
# revision "f711ae27" is tip of tasm-lib master as of 2024-01-25
tasm-lib = { git = "https://github.com/TritonVM/tasm-lib.git", rev = "f711ae27" }
Note that:

- The dependency line remains `tasm-lib = "0.2.1"`. We do not use `{ git = "..." }` here.
- We specify a specific revision, rather than a branch name.
- We place a comment indicating the branch on which the revision resides, as of the placement date.

A branch name is a moving target. So if we were to specify a branch, our build might compile fine today, and tomorrow it no longer does.
The patch section docs have more detail. In particular take note that:
- Cargo only looks at the patch settings in the Cargo.toml manifest at the root of the workspace.
- Patch settings defined in dependencies will be ignored.
This blog article is also helpful.
Finally, all such temporary patches must be removed before publishing a crate or issuing a new release!
Git Message
# Title: Summary, imperative, start upper case, don't end with a period
# use conventional commit format. http://conventionalcommits.org
# <type>[optional scope]: <description>
# scope is in parens, eg: feat(lang): added polish language
# types: build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test
# No more than 60 chars. #### 60 chars is here: #
# Body: Explain *what* and *why* (not *how*). Include task ID (Jira issue).
# BREAKING CHANGE: a commit that has the text BREAKING CHANGE: at the
# beginning of its optional body or footer section introduces a breaking
# API change (correlating with MAJOR in semantic versioning).
# Wrap at 72 chars. ################################## which is here: #
# At the end: Include Co-authored-by for all contributors.
# Include at least one empty line before it. Format:
# Co-authored-by: name <user@users.noreply.github.com>
#
# How to Write a Git Commit Message:
# https://chris.beams.io/posts/git-commit/
#
# 1. Separate subject from body with a blank line
# 2. Limit the subject line to 50 characters
# 3. Capitalize the subject line
# 4. Do not end the subject line with a period
# 5. Use the imperative mood in the subject line
# 6. Wrap the body at 72 characters
# 7. Use the body to explain what and why vs. how
# Instructions to use this as a template. see
# https://gist.github.com/lisawolderiksen/a7b99d94c92c6671181611be1641c733
Sharing proofs for faster test execution
Many tests in `neptune-core` rely on cryptographic STARK proofs of correct program execution generated by Triton VM. It's time-consuming to generate all the proofs required for the test suite. For this reason, the tests that require STARK proofs should be deterministic, such that proofs can be reused across test runs.

In order to run the tests on machines that cannot produce the proofs easily, a proof server can be used. This proof server is a simple HTTP file server that has the proofs stored as files.
Getting the proofs from a proof server
Ask someone involved with the project for a URL and put the URL into the `proof_servers.txt` file.
Running a proof server
If you have a powerful machine, you can generate all proofs yourself. You can then run a file server that serves files matching the names of the files that were produced (and placed in `neptune-core/test_data/`) during the execution of the test suite.

Such a server can e.g. be run as an nginx file server with the following settings:
limit_req_zone $binary_remote_addr zone=file_rate_limit:10m rate=1r/s;
limit_req_zone $server_name zone=global_rate_limit:10m rate=2r/s;
server {
listen 42580; # IPv4 listener
listen [::]:42580; # IPv6 listener
server_name <ip_or_url>;
# Block access to the root URL
location = / {
return 404; # Return 404 for the root URL
}
# Serve .proof files from the directory
location ~* ^/[a-z0-9]+\.proof$ {
alias /var/www/neptune-core-proofs/;
autoindex off;
autoindex_exact_size off;
autoindex_localtime off;
# Limit allowed HTTP methods to GET
limit_except GET {
deny all; # Block all other methods
}
# Ensure no trailing slash is appended to the URL
try_files $uri =404;
# Per-client rate limit
limit_req zone=file_rate_limit burst=1 nodelay;
# Global rate limit
limit_req zone=global_rate_limit burst=1 nodelay;
}
# Restrictive robots.txt
location = /robots.txt {
return 200 "User-agent: *\nDisallow: /\n";
add_header Content-Type text/plain;
# Limit allowed HTTP methods to GET
limit_except GET {
deny all; # Block all other methods
}
# Per-client rate limit
limit_req zone=file_rate_limit burst=1 nodelay;
# Global rate limit
limit_req zone=global_rate_limit burst=1 nodelay;
}
}
If you want to serve your proofs directly from your `neptune-core` repository, you can change the `alias` argument above.
Releasing Neptune Core
This section describes the steps to publish a new version of Neptune Core, and to release & distribute its binary artifacts.
Pre-requisites
The following tools are used to ensure a high quality release.
- cargo-binstall – Faster installation of the needed tools (optional)
- cargo-semver-checks – Scans the crate for semver violations
- git cliff – Simplifies changelog creation
- cargo-release – Simplifies simultaneous publication of multiple crates
- dist – Creates installers and publishes them in a GitHub release
Use the following commands to install the needed tools.
If you decide against using `cargo binstall`, it's generally possible to just `cargo install` instead. Some tools might require `cargo install --locked`.
cargo install cargo-binstall
cargo binstall cargo-semver-checks
cargo binstall git-cliff
cargo binstall cargo-release
cargo binstall cargo-dist
Release Process Checklist
Not every step of the release process is (or should be) fully automated.
An example of a semi-automated step is changelog generation.
Tools like `git cliff` help, but a manual edit is necessary to reduce noise and achieve the polish appreciated by readers of the changelog.

An example of a fully automated step is the assembly and distribution of binaries by `dist`.
Set Working Directory to Workspace Root
Unless indicated otherwise, the current working directory is assumed to be the workspace root.
cd /path/to/neptune-core
Check Distribution Workflow Files
Run `dist init` to generate the latest GitHub workflow files that will take care of binary distribution.

The interface allows you to add or remove target platforms as well as installers.

Feel free to change those settings, but be aware that not all installers are equally well supported; you might want to inform yourself before changing anything.
Usually, the generated GitHub workflow files are identical to the existing ones.
In this case, move on to the next step.
If the workflow files have changed, commit them.
An appropriate commit message could be:
ci: Update release workflow files
Bump Version
Bump the version in `Cargo.toml` as appropriate.
Confirm Version Bump as Semantic
ℹ️ Because binaries cannot be used as a dependency, this step is only relevant if Neptune Core has library targets.
Make sure that the version bump conforms to semantic versioning.
cargo semver-checks
Generate Changelog Addition
Summarize the changes introduced since the last version.
Consistent use of Conventional Commits and `git cliff` gets you started:
git cliff v0.0.1..HEAD -t vX.Y.Z > /tmp/change_diff.md
# ~~~~~~~ ~~~~~~
# | the to-be-released version
# |
# at least 2 versions back for the GitHub “compare” link to work
If new commit types were introduced since the last release, `git cliff` will not know about them.

You can recognize the commit types unknown to `git cliff` by the missing associated emoji in the corresponding headline of the generated changelog addition.

Add the new commit types to `cliff.toml` and rerun the above command.
Polish the Changelog Addition
Make the changelog addition (`/tmp/change_diff.md`) concise. This is a manual step.
This is a manual step.
Feel free to delete entries generously. For example, a branch that builds up to a certain feature might have a series of commits that are relevant for development and review. Users of Neptune Core probably only care about the feature itself; they should not be bombarded with minute details of its development process. Should they be interested in more details, the changelog will have a link to the commit that introduced the feature. From there, they can start their own journey of discovery.
If you find an entry in the changelog addition confusing or irrelevant, then with high probability, so will users of Neptune Core; delete the changelog entry, or investigate its meaning and rewrite it.
Focus only on the new version, even though the changelog addition contains sections for older versions. The changelogs for those older versions are already in `CHANGELOG.md`, and should probably not be touched.
Amend CHANGELOG.md
Copy the now-polished changelog addition from `/tmp/change_diff.md` into `CHANGELOG.md`.
Commit
Add and commit the changed files.
git add Cargo.toml
git add CHANGELOG.md
git commit -m "chore: Release vX.Y.Z"
# ~~~~~
# the new version
Ensure that Tests Pass
Make sure all tests pass, preferably by waiting for GitHub's CI to finish. Alternatively, run them locally:
cargo test --all-targets
Publish to crates.io
The tool `cargo-release` helps to publish multiple, possibly inter-depending crates with a single command.

ℹ️ If the workspace has only one member, `cargo publish` (instead of `cargo release`) works fine. With `cargo publish`, you will need to create git tags manually.
cargo release --execute --no-push
# ~~~~~~~~~ ~~~~~~~~~
# | gives you time to review the created git tag(s)
# |
# omit this to get a dry run
Get Green Light from Continuous Integration
Create a new git branch with the release commit and push it to GitHub. Open a pull request from that branch. Wait for continuous integration to do its job.
Once CI gives the green light, fast-forward the master branch to the tip of the feature branch and push it.
Push Tag to GitHub
In a previous step, `cargo-release` automatically created one or multiple git tags. Edit them until you are happy, then push the tag(s) to GitHub.
cargo release --execute push
Set Branch release
By convention, branch `release` should always point to the latest stable commit compatible with the latest release.
git checkout release
git reset --hard master
git push --force-with-lease
Check Release Artifacts & Page
Pushing the git tag(s) triggers CI once more. After CI has done its job, check the release page to see if everything looks okay.
🎉 Congrats on the new release!
Documentation
The documentation for Neptune Cash lives in the `neptune-core` repository under `docs/`. It uses mdBook, a documentation-as-a-website engine popular with rust projects. The source material consists of Markdown files, and mdBook renders them as HTML pages.
Running Locally
- Make sure `mdbook` is installed: `cargo install mdbook`.
- Go to the `docs/` directory: `cd docs/`.
- (Optional:) use mdBook as an HTTP server: `mdbook serve` with an optional `--open` flag. This command is useful for verifying that everything compiles in good order. It also rebuilds the website every time there is a change to the source material.
- Build the files for a static website: `mdbook build`. The static files are located in `book/`.
Contributing
Due to resource constraints, this documentation is incomplete and may even deviate from the source code. Nevertheless, the goal is to have complete and accurate documentation. You are warmly invited to help out and add to it – or fix it, if necessary. To do this, please open a pull request on GitHub.