Commit messages

* Lock entire ProcessBlock
* Lock Consensus.processBlock

* fix discontinuous finalization height
* remove types.Block.Witness.Timestamp field
* add field: types.Block.Finalization.ParentHash
* fix log format of CRS
* switch round and chain in log of types.Position.

* separate test utility and interface implementation for
  test.Governance.
* add test.State.
* integrate test.State into test.Governance.
  test.State is mainly used to emulate state propagation on fullnode.

* remove sanity check when adding blocks.
* call VerifyBlock after lattice's sanity check.
* remove checkRelation flag.

* Change interface of Application.VerifyBlock

Besides adding equality checks, these fields are also renamed:
- PublicKeyShares.shares -> shareCaches
- PublicKeyShares.shareIndex -> shareCacheIndex
- rlpPublicKeyShares.Shares -> ShareCaches
- rlpPublicKeyShares.ShareIndexK -> ShareCacheIndexK
- rlpPublicKeyShares.ShareIndexV -> ShareCacheIndexV

Since all DKG set members may ProposeCRS, but only one proposal will
get through, we need to be able to tell which round a CRS is intended
for in order to skip CRS submissions for the same round.
This commit also adds a check to make sure we have enough master
public keys when initializing DKGGroupPublicKey.

* Register DKG after CRS is proposed
* Change round at only one place

* core: total-ordering: add test TestRunFromNonGenesis

* core: types: implement rlp.Encoder and rlp.Decoder
* crypto: dkg: fix PublicKey.Bytes

* Replace "log.*" with logger.
* Add a simple logger that logs with the log package (see the sketch
  after this list).
* Add debug logs to all calls to these interfaces:
  - core.Application
  - core.Governance
  - core.Network
* Add Stringer to these types:
  - types.DKGComplaint
  - types.AgreementResult
  - types.DKGMasterPublicKey
  - types.DKGFinalize
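
A minimal sketch of what such a logger could look like, backed by the
standard log package; the method set shown here is an assumption, not
the repo's exact interface:

    package core

    import "log"

    // Logger is a hypothetical leveled logging interface.
    type Logger interface {
        Debug(msg string, args ...interface{})
        Info(msg string, args ...interface{})
        Error(msg string, args ...interface{})
    }

    // simpleLogger forwards everything to the log package.
    type simpleLogger struct{}

    func (simpleLogger) Debug(msg string, args ...interface{}) {
        log.Println(append([]interface{}{"DEBUG:", msg}, args...)...)
    }

    func (simpleLogger) Info(msg string, args ...interface{}) {
        log.Println(append([]interface{}{"INFO:", msg}, args...)...)
    }

    func (simpleLogger) Error(msg string, args ...interface{}) {
        log.Println(append([]interface{}{"ERROR:", msg}, args...)...)
    }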

* core: total-ordering: change early flag to mode

* Remove publicKey struct
* PublicKey.Bytes() returns the uncompressed public key to match the
  address format of DEXON (Keccak256(pubBytes[1:])[12:], where
  pubBytes is the 65-byte uncompressed public key); see the sketch
  below.
* Rename ethcrypto to dexCrypto
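
A sketch of that address derivation, assuming golang.org/x/crypto/sha3
for the legacy Keccak256; the helper name is illustrative:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/sha3"
    )

    // addressFromPub derives a 20-byte address from a 65-byte
    // uncompressed public key: Keccak256(pubBytes[1:])[12:].
    func addressFromPub(pubBytes []byte) ([]byte, error) {
        if len(pubBytes) != 65 {
            return nil, fmt.Errorf("want 65 bytes, got %d", len(pubBytes))
        }
        h := sha3.NewLegacyKeccak256()
        h.Write(pubBytes[1:])       // drop the 0x04 prefix byte
        return h.Sum(nil)[12:], nil // keep the low 20 bytes
    }

    func main() {}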

* Implement flush
* Panic on all errors from total-ordering
* Fix test failure
  All DAGs generated by blocks-generator would trigger round switching.
* Add NewBlocksGeneratorConfig
* Add test cases for numChains changes
* Resize internal structures (see the sketch after this list)
* Perform total ordering based on the current numChains
* Fix 'not a valid DAG' checking
* Comparing blocks by height is not correct
* Fix blocks from future rounds being delivered first by the revealer
* Make sure only one candidate is picked per chain.
  Blocks on the same chain in different rounds would not have an
  acking relation.
* Fix miscellaneous issues
* Fix the issue that two candidates from the same chain are picked.
* Rework candidateChainMapping
* Add test case for phi, k changes
* Refine testing code for round change
* Add breakpoints in the global vector
* Remove 'not a valid DAG' checking.
* Add comments
* Add check for forward acking
* Fix vet failure
* Prepare height records with breakpoints
* Fixup: add check to make sure delivered round IDs are increasing.
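
A minimal sketch of the 'Resize internal structures' item: when
numChains changes between rounds, per-chain queues are grown or
truncated. All names are hypothetical stand-ins, not the repo's actual
fields:

    package main

    // block is a stand-in for types.Block.
    type block struct{ chainID uint32 }

    // totalOrdering keeps one pending-block queue per chain.
    type totalOrdering struct {
        numChains    uint32
        globalVector [][]*block
    }

    // resize adjusts the per-chain queues when numChains changes:
    // queues of surviving chains are kept, removed chains are
    // dropped, and new chains start empty.
    func (to *totalOrdering) resize(numChains uint32) {
        vec := make([][]*block, numChains)
        copy(vec, to.globalVector) // copies min(old, new) queues
        to.globalVector = vec
        to.numChains = numChains
    }

    func main() {}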

* core: consensus-timestamp: add sync
* core: consensus-timestamp: add config change
* fix Go comment
* add config change test
* fixup: add error case handling
* fixup: round interleave

* Refine the initial value for an empty time slot.
* Fix DATA RACE
  networkConnection is reset for each test; however, our Consensus
  instances are not stopped when a test finishes, so they might keep
  using the network interface for a while.

* core: consensus-timestamp: modify for round change
* core: consensus-timestamp: fix typos

* leader selector will choose the smaller hash if the distance to the
  CRS is the same (see the sketch below)
* Set initial value of aID in BA before starting
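
A sketch of that tie-breaking rule: smaller distance to the CRS wins,
and on an exact tie the smaller hash wins. Treating the distance as
the big-integer value of an XOR is an assumption here; only the
tie-break comes from the commit:

    package main

    import (
        "bytes"
        "math/big"
    )

    type hash [32]byte

    // dist is a hypothetical distance: big-integer value of crs XOR h.
    func dist(crs, h hash) *big.Int {
        var x [32]byte
        for i := range x {
            x[i] = crs[i] ^ h[i]
        }
        return new(big.Int).SetBytes(x[:])
    }

    // better reports whether candidate a beats candidate b.
    func better(crs, a, b hash) bool {
        if c := dist(crs, a).Cmp(dist(crs, b)); c != 0 {
            return c < 0
        }
        // Same distance: choose the smaller hash.
        return bytes.Compare(a[:], b[:]) < 0
    }

    func main() {}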

* Sync dMoment for all consensus cores
* App check for randomness ignores round 0

* No randomness for round 0
* Ignore round 0 randomness

* Block proposing is based on timestamps, instead of the count of
  blocks generated.
* Add a method to find the tips of each round in blockdb.
* Block proposing is based on the tips of the last round found in
  blockdb.

(#195)

* Add a new method to notify the full node about round cutting.
* Modify interface to return an error when preparing a block

* Add test for number-of-chains changes.
* Return error in latticeData.prepareBlock
* Compare two positions
* Modify chainStatus from height-based to index-based.
* Fix consensus to use the round variable
* Remove sanity check in chainStatus
* Fixup: refine sanity check
  - verify whether round switching is required by chainTip's config.
  - make the logic in sanity check clearer
  - postpone acking-relationship checks; they are more expensive.

* Extract types.FinalizationResult
* Change interface (see the sketch below):
  - Application.BlockConfirmed receives the whole block.
  - Application.BlockDelivered receives the partial result.
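
A sketch of the two changed callbacks; the parameter types are
stand-ins for the real types.Block and types.FinalizationResult:

    package core

    // Block and FinalizationResult are stand-ins for the real types.
    type Block struct{ Hash [32]byte }
    type FinalizationResult struct{ Height uint64 }

    // Application (excerpt): only the two callbacks touched by this
    // change.
    type Application interface {
        // BlockConfirmed hands over the whole confirmed block.
        BlockConfirmed(b Block)
        // BlockDelivered hands over the partial finalization result.
        BlockDelivered(hash [32]byte, result FinalizationResult)
    }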

When a block is confirmed, all its txs are permitted to be executed.
Therefore, exporting this method provides more and earlier information
about the txs to be executed, and helps the application layer process
txs more efficiently.

Use Ethereum-style nodeID generation (Keccak on the compressed public
key without the first byte).
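
A short sketch of that derivation; a compressed secp256k1 key is 33
bytes, so the Keccak input is the trailing 32 bytes. The helper name
is illustrative:

    package main

    import "golang.org/x/crypto/sha3"

    // nodeIDFromCompressed hashes a 33-byte compressed public key,
    // skipping its first (format) byte.
    func nodeIDFromCompressed(pub []byte) []byte {
        h := sha3.NewLegacyKeccak256()
        h.Write(pub[1:])
        return h.Sum(nil)
    }

    func main() {}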

common: add Bytes() method to Hash

* Make sure the block pool is large enough
  It's safe to use a larger blockPool when the number of chains is
  smaller.
* Construct latticeData via config.
* Seek acked blocks in blockdb when they can't be found in the memory
  cache (see the sketch below).
  In the previous implementation, we assumed the in-memory cache was
  enough to perform the DAG's sanity check. However, that's no longer
  true when the number of chains can change between rounds.
* Simplify purge.
  Stop purging blocks through chainStatus.
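
A sketch of that memory-then-blockdb lookup; the field and method
names are hypothetical:

    package main

    import "errors"

    type hash [32]byte
    type block struct{ parent hash }

    // reader is a stand-in for the blockdb read interface.
    type reader interface {
        Get(h hash) (block, error)
    }

    type latticeData struct {
        blockByHash map[hash]*block
        db          reader
    }

    // findBlock checks the in-memory cache first and falls back to
    // blockdb, since purged blocks may still be needed for sanity
    // checks once the number of chains changes between rounds.
    func (l *latticeData) findBlock(h hash) (*block, error) {
        if b, ok := l.blockByHash[h]; ok {
            return b, nil
        }
        b, err := l.db.Get(h)
        if err != nil {
            return nil, errors.New("block not found")
        }
        return &b, nil
    }

    func main() {}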

- Split latticeData into another file
- Remove areAllAcksInLattice

(#170)

NumWitnessSet is no longer required, as we no longer have a witness
set in the design.

verifyDKGMasterPublicKeySignature and verifyDKGComplaintSignature are
needed in the governance contract to verify signatures. Export them so
the fullnode can use them.

Also rename the argument of ProposeCRS.

Update data model:
1) Remove witness ack.
2) Add round to block.
3) Update governance interface.

* Refine core.Governance interface
  - Remove types.NodeID from the interface declaration.
  - All parameters should be round-based.
* Add core.NodeSetCache (see the sketch below)
* Agreement accepts a map of nodeIDs directly.
* test.Transport.Peers method returns public keys.
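
A sketch of a round-keyed node-set cache in the spirit of
core.NodeSetCache; the actual structure may differ:

    package main

    import "sync"

    type NodeID [32]byte

    // govReader is a stand-in for the round-based Governance queries.
    type govReader interface {
        NodeSet(round uint64) []NodeID
    }

    type NodeSetCache struct {
        mu    sync.RWMutex
        gov   govReader
        cache map[uint64]map[NodeID]struct{}
    }

    func newNodeSetCache(g govReader) *NodeSetCache {
        return &NodeSetCache{gov: g, cache: map[uint64]map[NodeID]struct{}{}}
    }

    // Get returns the node set of a round, querying governance only
    // on a cache miss.
    func (c *NodeSetCache) Get(round uint64) map[NodeID]struct{} {
        c.mu.RLock()
        set, ok := c.cache[round]
        c.mu.RUnlock()
        if ok {
            return set
        }
        c.mu.Lock()
        defer c.mu.Unlock()
        if set, ok = c.cache[round]; ok { // re-check after upgrade
            return set
        }
        set = make(map[NodeID]struct{})
        for _, id := range c.gov.NodeSet(round) {
            set[id] = struct{}{}
        }
        c.cache[round] = set
        return set
    }

    func main() {}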

1) Remove RoundHeight from config.
2) NotarySet no longer comes from the governance contract; we get the
   node set instead. The notary set is calculated from the NodeSet
   using the CRS (see the sketch below).
3) CRS is not in the governance interface.
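
One plausible way to derive a notary set from the node set and the
CRS: rank nodes by a CRS-seeded hash and take the smallest. This
scheme is an illustration, not necessarily the repo's exact rule:

    package main

    import (
        "bytes"
        "crypto/sha256"
        "sort"
    )

    type NodeID [32]byte

    // notarySet picks the size nodes whose Hash(crs || id) is
    // smallest. Note it reorders the input slice.
    func notarySet(crs []byte, nodes []NodeID, size int) []NodeID {
        sort.Slice(nodes, func(i, j int) bool {
            hi := sha256.Sum256(append(append([]byte{}, crs...), nodes[i][:]...))
            hj := sha256.Sum256(append(append([]byte{}, crs...), nodes[j][:]...))
            return bytes.Compare(hi[:], hj[:]) < 0
        })
        if size > len(nodes) {
            size = len(nodes)
        }
        return nodes[:size]
    }

    func main() {}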

- Move key-holder to authenticator
  Make core.keyHolder public as core.Authenticator; it is not required
  to make this part an interface.
- Make things private when there is no need to be public.
- Fix data race

- BlockDeliver -> BlockDelivered
- TotalOrderingDeliver -> TotalOrderingDelivered
- WitnessAckDeliver -> WitnessAckDelivered
- VerifyPayload -> VerifyPayloads

* Split interface
* Rename nonblocking-application to nonblocking
  More parts than the application need non-blocking handling.
* Implement core.nonBlocking based on the interface split
* Fix: the witness parent hash could be a parent on the compaction
  chain.
* Rename Application.DeliverBlock to BlockDeliver
  To sync with the naming of other methods.
* Change methods' signatures
  - BlockConfirmed provides the block hash only.
  - BlockDeliver provides a whole block.

The purpose of this module is to export the functionality to
sign/verify data without exposing the private key directly.

A shard is basically the set of DEXON v1 components, except the
strongly-acked part, including:
- maintaining the lattice structure
- total ordering
- generating the consensus timestamp

* Generate correct hashes/signatures when generating blocks.
* Refine naming and types.
  - the type of chainNum should be uint32 by default
  - rename blockCount to blockNum
  - rename nodeCount to chainNum

Since witness data needs to include data from the application after it
has processed a block (e.g. stateRoot), we should make witness data
generation asynchronous.
A method `BlockProcessedChan()` is added to the application interface
to return a channel for notifying the consensus core when a block is
processed. The notification object includes a byte slice (witness
data) which will be included in the final witness data object.
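
A sketch of that asynchronous flow; the notification type shown is an
assumption based on the description above:

    package core

    // BlockProcessed is a hypothetical notification the application
    // sends once it has processed a block.
    type BlockProcessed struct {
        Hash        [32]byte
        WitnessData []byte // e.g. the stateRoot after execution
    }

    // Application (excerpt): the consensus core drains this channel
    // and folds WitnessData into the final witness data object.
    type Application interface {
        BlockProcessedChan() <-chan BlockProcessed
    }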

core.blockPool is used to cache blocks that arrive out of order. Our
consensus should retry those blocks after the blocks they ack are
added to the lattice.

blocklattice replaces reliable broadcast, aiming to fix these
problems:
- The mechanism related to strong acks is no longer required.
- The sanity check of a block would pass even if a block it acks
  doesn't exist.
This commit doesn't include logic to handle out-of-order blocks; that
should be done in another PR.

Since we have a bunch of static configurations in the governance
contract, instead of using a Get* method for each of them, we
implement a GetConfiguration() method that returns a structure
containing all of them.
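
A sketch of the consolidated accessor; the field set here is
illustrative, not the real config:

    package core

    import "time"

    // Config bundles the static per-round configuration; the fields
    // shown are examples only.
    type Config struct {
        NumChains     uint32
        Lambda        time.Duration
        K             int
        NotarySetSize uint32
    }

    // Governance (excerpt): one call instead of a Get* method per
    // field.
    type Governance interface {
        GetConfiguration(round uint64) *Config
    }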

In order for a non-root Homebrew install to work, we need to set up
some custom CFLAGS and LDFLAGS variables; we also need to patch the
mcl library so we are able to build.
A PR has been sent to mcl upstream; the local patch will be removed
once the upstream PR is merged.

- remove BlockProposingInterval; it's replaced by lambda.
- remove the ticker parameter of NewConsensus; the ticker is derived
  from governance.
- leave a backdoor to hook the construction of the ticker.
- move the 'Lambda' config to consensus.

interface (#110)
Since third-party apps will possibly implement their own blockdb
class, it makes sense for the interface to be in core.
Also add GetNumShards to the governance interface.

- With context, we don't need stopChan (see the sketch below)
- Remove core.BlockChain.
- Remove unused variable.
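
A minimal sketch of the context-based shutdown replacing stopChan; the
struct and channel are stand-ins:

    package main

    import "context"

    type consensus struct {
        input chan int // stand-in for the real message channel
    }

    // run exits when the context is cancelled; no stopChan needed.
    func (c *consensus) run(ctx context.Context) {
        for {
            select {
            case <-ctx.Done():
                return
            case msg := <-c.input:
                _ = msg // process the message
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        c := &consensus{input: make(chan int)}
        go c.run(ctx)
        cancel() // stops the goroutine
    }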

Since we are using a byte slice to store the payload, VerifyPayload()
should also accept a byte slice.

* DKG API and test.
* Change naming
* Broadcast pubShares

Change payload type to []byte instead of [][]byte to make it more
generic. The user of the consensus core should marshal the payload into
a byte slice by themselves.

- Add marshaller for simulation using encoding/json
- Implement peer server based on test.TransportServer
- Remove network models; they are replaced with test.LatencyModel

The purpose of the transport layer is to abstract the way messages are
sent and connections are set up between peers in a p2p network. Peer
discovery is simulated by a hosted server: every peer sends its
address to a known server and, once enough peers are collected, the
whole peer list is sent back to all peers.
Changes:
- Add test.Transport interface (see the sketch below)
- Add a test.Transport implementation based on Go channels.
- Add a test.Transport implementation based on TCP connections.
- Move LatencyModel to the core/test package
- Add Marshaller interface
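
A sketch of what the transport abstraction could look like; the method
set is a guess from the description, not the exact test.Transport
interface:

    package test

    // Transport abstracts how messages move between peers, whether
    // over in-process channels or TCP.
    type Transport interface {
        // Join announces this peer to the peer server and blocks
        // until the full peer list is received.
        Join(serverAddr string) error
        // Send delivers a message to one peer; Broadcast to all.
        Send(peer string, msg interface{}) error
        Broadcast(msg interface{}) error
        // Close tears down the connections.
        Close() error
    }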

`Height` from `Block` (#89)

interface. (#84)

* Add chainID in simulation.Validator
* Change validatorID to chainID in rbModule

- Replace map with slice
  Compared to a slice, map access is slower and its memory usage is
  less efficient.

- Avoid using recursive functions in critical paths.
- Do not write through when using levelDB. Things put into levelDB
  are safe from panics even if we don't force a write-through every
  time.
- Dump the count of confirmed blocks proposed by self.
- Avoid allocating variables in loops.
- Return the length of the acking node set; that is all total
  ordering needs.
- Fix potential bug: make sure win records are updated when the
  acking height vectors of candidates change.
- Keep dirty validators in a slice.
- Add a cache for objects to ease the pressure on the garbage
  collector.
- Cache global acking status during total ordering.
- Add a method to recycle blocks.
- Marshal JSON once per broadcast.
- Make updateWinRecord run in parallel.
- Log the average / deviation of latencies when the simulation
  finishes.

To run a simulation with scheduler on k8s:
./run_scheduler.sh 61 5
Where *61* means the simulation contains 61 validators, and *5* means
the simulation utilizes 5 vCPUs and corresponding concurrent workers to run.

- Add a new field to test.Event: HistoryIndex
  HistoryIndex allows us to access events by their position in the
  event history.
- Record local time in test.App when receiving events.
- Add a statistics module for slices of test.Event.
- Add a new command-line utility, dexcon-simulation-with-scheduler,
  to verify the execution time of core.Consensus.

- Checking `internal stability` is more expensive than checking
  `len(ANS) == validatorCount`, so only check it when
  `len(ANS) != validatorCount`.
- Cache the result of `grade` between candidates.
- Cache the `acking height vector` of each candidate.
- Add a test on total ordering with different acking frequencies
  between blocks.

- Clone block once for each broadcast
- Add network latency model for TCPNetwork
- Fix concurrent map write

When simulating execution of core.Consensus by passing packets through
Go channels or real sockets, we need time.Sleep and time.Now to
simulate the required network/proposing latency. That's problematic
when we try to test a simulation with long network latency.
Instead, the Scheduler executes the event with the minimum timestamp,
so time.Sleep is replaced with Scheduler.nextTick, and time.Now is
replaced with Event.Time.
Changes:
- Add test.Scheduler (see the sketch below).
- Add test.Stopper interface to encapsulate different stop conditions
  for the scheduler.
- Add a reference implementation of test.Stopper that stops the
  scheduler once every validator has confirmed X blocks proposed by
  itself.
- Add a test scenario for core.Consensus in which no validator is
  byzantine.
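
A minimal sketch of executing events in timestamp order with
container/heap, replacing time.Sleep with "pop the earliest event";
names are illustrative:

    package main

    import (
        "container/heap"
        "time"
    )

    type event struct {
        when time.Time
        run  func()
    }

    type eventQueue []*event

    func (q eventQueue) Len() int           { return len(q) }
    func (q eventQueue) Less(i, j int) bool { return q[i].when.Before(q[j].when) }
    func (q eventQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

    func (q *eventQueue) Push(x interface{}) { *q = append(*q, x.(*event)) }
    func (q *eventQueue) Pop() interface{} {
        old := *q
        e := old[len(old)-1]
        *q = old[:len(old)-1]
        return e
    }

    // nextTick runs the event with the minimum timestamp, if any.
    func nextTick(q *eventQueue) bool {
        if q.Len() == 0 {
            return false
        }
        heap.Pop(q).(*event).run()
        return true
    }

    func main() {
        q := &eventQueue{}
        heap.Init(q)
        heap.Push(q, &event{when: time.Unix(2, 0), run: func() {}})
        heap.Push(q, &event{when: time.Unix(1, 0), run: func() {}})
        for nextTick(q) {
        }
    }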

* Add functionality to test.App
* Add test utility to generate slices of types.ValidatorID

Fix concurrent map write and also change k8s settings.

* Add SigToPub function in crypto

* Add hash to block
* Check block hash in Consensus.sanityCheck
* Add hashBlockFn in block generator.go

Delete all blocks in the received-blocks array to avoid using too much
memory.

* Fix the issue of processing the genesis block twice.
  - Restore the mechanism that avoids sending a block back to its
    proposer.
* Fix 'keep-alive' not working
  Quoting the comments of net/http.Request:
    For client requests, setting this field prevents re-use of
    TCP connections between requests to the same hosts, as if
    Transport.DisableKeepAlives were set.
* Remove useless field
* Fix the test bug: '3' should be provided when testing K=3
* Fixup: the parent hash of the genesis block should be zero

(#37)

* Refine peer server
* k8s ignore
* Keep peer server alive on k8s
* Stop validators from accepting new blocks after peer server has shut down.
* Add comment

* Make Sequencer return a slice of blocks.
* Fix naming issue
  The function 'getHeightVector' would return ackingStatusVector.
* Fix comment error.
* Add methods to collect info when proposing blocks.
* Add test.App
* Add test.Gov
* Move this type to core.types to avoid cyclic imports.
* Add core.Consensus
* Move getMedianTime, interpoTime to util
  These functions do not depend on members of core.consensusTimestamp
  and are required when testing core.Consensus.
* Make sure types.Block.Clone copies critical fields.
* Remove core.blocklattice
* Define 'infinity' in core/total-ordering
  This definition originally lived in core/blocklattice.
* Fix a bug when processing the same block twice.
* Integrate simulation with core.Consensus
  core.Consensus is a replacement for core.blocklattice.
* Fix the comment to use singular form.
* Move lock mechanisms to sub-modules.
* phi should be 2*fmax+1
* Fixup: should abort when the validator is already added
* Fix for new block fields
* Fix the bug that the total ordering sequence was wrong.

Force connection reuse and TCP keep-alive by using the same
http.Client for all requests (see the sketch below).
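
A short sketch of the shared-client pattern: one http.Client (and thus
one Transport) pools and reuses TCP connections across requests:

    package main

    import (
        "net/http"
        "time"
    )

    // One shared client for all requests; its Transport keeps idle
    // connections alive, giving keep-alive for free.
    var client = &http.Client{Timeout: 30 * time.Second}

    func fetch(url string) error {
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        // The body must be drained/closed, or the connection cannot
        // be reused.
        defer resp.Body.Close()
        return nil
    }

    func main() {}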

Rename these files:
- core/sequencer[_test].go -> core/total-ordering[_test].go
- core/acking[_test].go -> core/reliable-broadcast[_test].go
- core/timestamp[_test].go -> core/consensus-timestamp[_test].go
Rename these structs:
- core.sequencer -> core.totalOrdering
- core.acking -> core.reliableBroadcast
- core.timestamp -> core.consensusTimestamp

* Add blocks generator.
  This helper randomly generates blocks that form valid DAGs.
* Add revealer
  Revealer is an extension of blockdb.BlockIterator. The block
  sequence from the 'Next' method is either random (see
  RandomRevealer) or meets some specific condition (e.g. forming a
  DAG, see RandomDAGRevealer).
* Add test for sequencer based on random blocks.
* core: refine Application interface and add Governance interface (#24)
  Add a new Governance interface for interaction with the governance
  contract.
  Also remove the ValidateBlock call in the application interface, as
  the application should validate a block before putting it into the
  consensus module.
  A new BlockConverter interface is also added. The consensus module
  should accept the BlockConverter interface in future implementations
  and use the Block() function to get the underlying block info.

This commit makes the acking module in core unexported so other
packages cannot use it, and fixes underscored error messages.

Add a new Governance interface for interaction with the governance
contract.
Also remove the ValidateBlock call in the application interface, as
the application should validate a block before putting it into the
consensus module.
A new BlockConverter interface is also added. The consensus module
should accept the BlockConverter interface in future implementations
and use the Block() function to get the underlying block info.

- Allow dumping blockdb to a JSON file
  - Compared to levelDB, a JSON file is easier to trace.
- Add interfaces for the block database:
  - Close is required by databases that need cleanup.
  - BlockIterator is required when we need to access 'all' blocks;
    add a new method 'GetAll' as the constructor for iterators.
- Remove GetByValidatorAndHeight from blockdb.Reader
  - This function is not used anywhere; to keep the interface
    minimal, remove it.
- Fix typo: backend -> backed

- Add types.ByHeight to sort a slice of blocks by height (see the
  sketch below).
- Add test cases for the sorting methods of types.Block.
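
A sketch of such a helper via sort.Interface; the real types.ByHeight
presumably looks similar:

    package main

    import "sort"

    type block struct{ height uint64 }

    // byHeight sorts a slice of blocks by ascending height.
    type byHeight []*block

    func (b byHeight) Len() int           { return len(b) }
    func (b byHeight) Swap(i, j int)      { b[i], b[j] = b[j], b[i] }
    func (b byHeight) Less(i, j int) bool { return b[i].height < b[j].height }

    func main() {
        blocks := []*block{{3}, {1}, {2}}
        sort.Sort(byHeight(blocks))
    }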

Implement the K-level total ordering algorithm.
Besides the algorithm implementation, these concepts are included:
- The candidate set and the ackingStatusVector of each candidate
  won't be recalculated upon receiving each block.
- The component that calculates total ordering is more
  self-contained: access to block status is only required when
  receiving a new block.

* Refactor and add acking module
  Extract the acking module for unit testing. This commit splits
  functions into small pieces for better understanding and easier
  unit testing.