Commit message log

* core: Add DKGDelayRound constant
* core: use constant value
* core, utils: set DKGDelayRound for utils
* test: add dkgDelayRound to state
* core: do not run dkg and crs for round < DKGDelayRound
* fix test
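
A minimal sketch of the guard this commit describes; only the constant name comes from the commit, the helper and its value are illustrative:

```go
package sketch

// DKGDelayRound is the constant added by this commit; the value here is
// illustrative, not the repository's.
const DKGDelayRound uint64 = 1

// shouldRunDKGAndCRS is a hypothetical helper showing the guard: DKG and
// CRS generation are skipped for rounds before DKGDelayRound.
func shouldRunDKGAndCRS(round uint64) bool {
	return round >= DKGDelayRound
}
```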

* core: resetDKG skeleton
* Add Equal test
* Add TestLocal
* Add TestPacking

* integration_test: Add a byzantine test
* test: fix flaky TestPullVote test

* types: Add RLP Encode/Decode to DKGComplaint
* Add test
* fix state test
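
A hedged sketch of what RLP support for a type like DKGComplaint can look like with go-ethereum's rlp package; the struct fields here are illustrative, not the repository's actual layout:

```go
package sketch

import (
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// DKGComplaint is a simplified stand-in for the real type.
type DKGComplaint struct {
	ProposerID []byte
	Round      uint64
	Signature  []byte
}

// EncodeRLP implements rlp.Encoder.
func (c *DKGComplaint) EncodeRLP(w io.Writer) error {
	return rlp.Encode(w, []interface{}{c.ProposerID, c.Round, c.Signature})
}

// DecodeRLP implements rlp.Decoder.
func (c *DKGComplaint) DecodeRLP(s *rlp.Stream) error {
	var dec struct {
		ProposerID []byte
		Round      uint64
		Signature  []byte
	}
	if err := s.Decode(&dec); err != nil {
		return err
	}
	c.ProposerID, c.Round, c.Signature = dec.ProposerID, dec.Round, dec.Signature
	return nil
}
```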

* Add gosec to tools
* Run security check in CI
* Fix security issues

* Add Restart to Ticker
* Change pre-allocated size
* Return NextTime from lattice
* Few hacky fixes for BA
* PullVote in FastRollback state
* Add shallowBlock for agreementResult
* Extend period
* Fixup

- When confirmed blocks passed to core.Consensus aren't contiguous in
  position on some chain, pulling would skip the missing blocks.
- Fix: when some block is missing, avoid adding it and all blocks after
  it to core.Consensus.
- We need to keep the receive channel of the network module from
  filling up.
- Fix: while switching to core.Consensus, launch a dummy receiver to
  drain the receive channel of the network module.
- Fix: between the time core.Consensus is created and before it starts
  running, a dummy receiver is also required to drain the receive
  channel of the network module.
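
The dummy receiver is a plain channel-draining goroutine; a minimal sketch, with names assumed rather than taken from the repository:

```go
package sketch

// launchDummyReceiver drains the network module's receive channel so
// senders never block on a full channel, until stop is closed.
func launchDummyReceiver(recv <-chan interface{}, stop <-chan struct{}) {
	go func() {
		for {
			select {
			case <-recv: // drop the message; we only keep the channel drained
			case <-stop:
				return
			}
		}
	}()
}
```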

* Handshake with server dMoment
* Start simulation from dMoment
* Update k8s config

* simulation: fix k8s simulation stuff
* Default K to 0

* test: fix marshaling of the randomness pull request
* Add result for witness latency

* allow empty reqs
* Fix license

* Merge core.Consensus constructors
* Downgrade severity of logs
* Refine logic to add blocks from pool to lattice
* Add test.LaunchDummyReceiver

* Add PullRandomness to interface
* Add pendingBlocksWithoutRandomness to compactionChain
* Pull randomness every 1 second
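
A sketch of the periodic pull; only PullRandomness and the one-second period come from the commit, the other names are assumptions:

```go
package sketch

import "time"

// Hash and network are stand-ins for the repository's real types.
type Hash [32]byte

type network interface {
	PullRandomness(hashes []Hash)
}

// pullRandomnessLoop re-requests randomness every second for blocks that
// still lack it, until stop is closed.
func pullRandomnessLoop(n network, pending func() []Hash, stop <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if hashes := pending(); len(hashes) > 0 {
				n.PullRandomness(hashes)
			}
		case <-stop:
			return
		}
	}
}
```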

* core: simulation: add throughput and block event monitoring
  Added throughput and block event monitoring to the TCP-local network.
  This data is collected by nodes and reported to the peer server.
* fix issues
* fix sent time of throughput issue

* Add type DKGReady
* Add DKGReady to interface and state
* DKG will wait for MPK to be ready before running
* Modify test
* Check if self's MPK is registered
* Add test for delayed MPK addition
* Rename Ready to MPKReady
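
The waiting behavior can be as simple as polling until our own MPK shows up; a sketch with assumed names:

```go
package sketch

import "time"

// waitOwnMPK blocks until registered reports that our master public key
// is in place, checking once per tick; it returns false if aborted.
func waitOwnMPK(registered func() bool, tick time.Duration, abort <-chan struct{}) bool {
	t := time.NewTicker(tick)
	defer t.Stop()
	for !registered() {
		select {
		case <-t.C:
		case <-abort:
			return false
		}
	}
	return true
}
```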

* Replace JSON with RLP in the levelDB implementation
* Make sure blocks to sync follow the compaction chain tip

* Rename blockdb package to db
* Rename 'BlockDB' to 'DB'
* Make all methods in db specific to 'block'
* Rename db.BlockDatabase to db.Database
* Rename revealer to block-revealer
* Rename test.Revealer to test.BlockRevealer

* simulation: add benchmark features
* tmp
* simulation: modify Debug interface
* Added BlockReceived and BlockReady functions to the Debug interface.
* Added benchmark features.
* fix
* fix typos

* Only apply each request once
* Sort by time

* Broadcast to a set of nodes instead of broadcasting when attaching the cache
* Fix pull blocks

* Rename NonByzantineTestSuite to WithSchedulerTestsuite
* Add a method to query the latest delivered position
* Add integration test for core.Consensus
* Show detailed list of test cases in CI

* Add definition for test.PullRequest
* Cache notary sets for each round in network module
* Cache peers as nodeID in network module
* Implement pull blocks
* Implement pull vote

* Move simulation.Network to the test package
* Use test.Governance in simulation
* Pack/apply state requests in block payloads
* Add Governance.SwitchToRemoteMode
  This triggers governance to broadcast pending state change requests
  whenever they change.
* Allow marshalling/unmarshalling of packedStateChanges
* Attach test.Network and test.State

Make `test.StateChangeRequest` behave like a tx on Ethereum:
- It can be broadcast and cached in a pool.
- It is uniquely indexed, and removed once applied.
Changes:
- Make the cloneDKGx functions in test.State utilities.
- Add hash and timestamp fields to test.StateChangeRequest.
- Add two methods to test.State:
  - PackOwnRequests packs all pending change requests owned by this
    instance as a byte slice, and moves them to the global pending
    requests pool.
  - AddRequestsFromOthers adds pending change requests from others to
    the global pending requests pool.
- The method State.PackRequests now packs requests in the global
  pending requests pool.
- The method State.Apply removes the corresponding StateChangeRequest
  by hash.
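
A compact sketch of the tx-like pool semantics described above: requests are keyed by hash, merged from peers, and dropped once applied. The struct layout and names are illustrative:

```go
package sketch

import "sync"

// StateChangeRequest is a simplified stand-in for test.StateChangeRequest.
type StateChangeRequest struct {
	Hash      [32]byte
	Timestamp int64
	Payload   []byte
}

type requestPool struct {
	mu      sync.Mutex
	pending map[[32]byte]*StateChangeRequest
}

func newRequestPool() *requestPool {
	return &requestPool{pending: make(map[[32]byte]*StateChangeRequest)}
}

// add merges requests into the pool; duplicates collapse on hash.
func (p *requestPool) add(reqs ...*StateChangeRequest) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, r := range reqs {
		p.pending[r.Hash] = r
	}
}

// applied removes a request once it has been applied, like a mined tx.
func (p *requestPool) applied(hash [32]byte) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.pending, hash)
}
```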

This info is required when the application layer needs to do something
related to the underlying DAG, not just the compaction chain.

Besides making core.Lattice support config changes, this PR also
includes the first test for the following scenario:
- Configuration changes are registered before the test runs.
- Those changes are carried/broadcast as the payload of blocks.
- Only one node initiates these changes; however, all nodes finally
  receive/apply those changes to their own test.Governance instance.

* Fix dummy error
* Check validity before applying state changes
* Add RegisterConfigChange method to test.Governance
* Add SwitchToRemoteMode method to test.State

* Separate the test utility and the interface implementation of
  test.Governance.
* Add test.State.
* Integrate test.State into test.Governance.
  test.State is mainly used to emulate state propagation on a full node.

* Change interface of Application.VerifyBlock

Since all DKG set members may ProposeCRS but only one will get through,
we need to be able to tell which round a CRS is intended for, in order
to skip CRS submissions for the same round.
This commit also adds a check to make sure we have enough master public
keys when initializing DKGGroupPublicKey.
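
The added check amounts to refusing to build the group public key without enough MPKs; a sketch with assumed type names and signature:

```go
package sketch

import "fmt"

type MasterPublicKey struct{ /* omitted */ }
type GroupPublicKey struct{ /* omitted */ }

func newGroupPublicKey(mpks []MasterPublicKey, threshold int) (*GroupPublicKey, error) {
	if len(mpks) < threshold {
		return nil, fmt.Errorf("not enough master public keys: %d < %d",
			len(mpks), threshold)
	}
	// ... combine the master public keys (omitted) ...
	return &GroupPublicKey{}, nil
}
```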

* core: total-ordering: add test TestRunFromNonGenesis

* core: total-ordering: change early flag to mode

* Remove publicKey struct
* PublicKey.Bytes() returns the uncompressed public key to match the
  address format of DEXON (Keccak256(pubBytes[1:])[12:], where pubBytes
  is the 65-byte uncompressed public key)
* Rename ethcrypto to dexCrypto
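
A worked example of the quoted address format, Keccak256(pubBytes[1:])[12:], using golang.org/x/crypto/sha3; the helper name is ours:

```go
package sketch

import (
	"errors"

	"golang.org/x/crypto/sha3"
)

// addressFromUncompressed derives the 20-byte address from a 65-byte
// uncompressed secp256k1 public key (0x04 || X || Y).
func addressFromUncompressed(pubBytes []byte) ([]byte, error) {
	if len(pubBytes) != 65 || pubBytes[0] != 0x04 {
		return nil, errors.New("not an uncompressed secp256k1 public key")
	}
	h := sha3.NewLegacyKeccak256()
	h.Write(pubBytes[1:])       // drop the 0x04 prefix
	return h.Sum(nil)[12:], nil // keep the last 20 bytes
}
```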

* Implement flush
* Panic on all errors from total-ordering
* Fix test failure
  All DAGs generated by the blocks-generator trigger round switching.
* Add NewBlocksGeneratorConfig
* Add test cases for numChains changes
* Resize internal structures
* Perform total ordering based on the current numChains
* Fix the 'not a valid DAG' check
* Comparing blocks by height is not correct
* Fix blocks from a future round being delivered first by the revealer
* Make sure only one candidate is picked per chain
  Blocks on the same chain in different rounds have no acking relation.
* Fix stuff
* Fix the issue that two candidates from the same chain are picked
* Rework candidateChainMapping
* Add test case for phi and k changes
* Refine testing code for round change
* Add breakpoints in the global vector
* Remove the 'not a valid DAG' check
* Add comments
* Add check for forward acking
* Fix vet failure
* Prepare height records with breakpoints
* Fixup: add check to make sure delivered round IDs are increasing

* Propose blocks based on timestamp instead of the count of blocks
  generated
* Add a method to find the tips of each round in blockdb
* Propose blocks based on the tips of the last round found in blockdb

* Add a new method to notify the full node about round cutting
* Modify interface to return an error when preparing a block

* Extract types.FinalizationResult
* Change interface:
  - Application.BlockConfirmed receives the whole block.
  - Application.BlockDelivered receives the partial result.

Use Ethereum-style nodeID generation (keccak on the compressed public
key without the first byte).
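
The rule above, sketched as a helper (the function name is ours): hash the 33-byte compressed key with its prefix byte dropped:

```go
package sketch

import (
	"errors"

	"golang.org/x/crypto/sha3"
)

func nodeIDFromCompressed(compressed []byte) ([]byte, error) {
	if len(compressed) != 33 {
		return nil, errors.New("not a compressed secp256k1 public key")
	}
	h := sha3.NewLegacyKeccak256()
	h.Write(compressed[1:]) // skip the 0x02/0x03 prefix byte
	return h.Sum(nil), nil
}
```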

(#170)

NumWitnessSet is no longer required, as we don't have a witness set in
the design anymore.

Also rename the argument of ProposeCRS.

Update data model:
1) Remove witness ack.
2) Add round to block.
3) Update governance interface.

* Refine core.Governance interface
  - Remove types.NodeID from the interface declaration.
  - All parameters should be round-based.
* Add core.NodeSetCache
* Agreement accepts a map of nodeIDs directly
* test.Transport.Peers method returns public keys

1) Remove RoundHeight from config.
2) NotarySet no longer comes from the governance contract; we get the
   node set instead. The notary set is calculated from the NodeSet
   using the CRS.
3) CRS is now in the governance interface.
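
One way to calculate a notary set from the node set and a CRS is to rank nodes by a hash of (CRS || nodeID) and take the first k; this illustrates the idea only, not the repository's exact selection rule:

```go
package sketch

import (
	"bytes"
	"sort"

	"golang.org/x/crypto/sha3"
)

func notarySet(nodeIDs [][]byte, crs []byte, k int) [][]byte {
	type ranked struct{ id, rank []byte }
	rs := make([]ranked, 0, len(nodeIDs))
	for _, id := range nodeIDs {
		h := sha3.NewLegacyKeccak256()
		h.Write(crs)
		h.Write(id)
		rs = append(rs, ranked{id: id, rank: h.Sum(nil)})
	}
	// Deterministic for all nodes holding the same CRS and node set.
	sort.Slice(rs, func(i, j int) bool {
		return bytes.Compare(rs[i].rank, rs[j].rank) < 0
	})
	if k > len(rs) {
		k = len(rs)
	}
	out := make([][]byte, k)
	for i := range out {
		out[i] = rs[i].id
	}
	return out
}
```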

- Move key-holder to authenticator
  Make core.keyHolder public as core.Authenticator; it is not required
  to make this part an interface.
- Make things private when there is no need to be public.
- Fix data race

- BlockDeliver -> BlockDelivered
- TotalOrderingDeliver -> TotalOrderingDelivered
- WitnessAckDeliver -> WitnessAckDelivered
- VerifyPayload -> VerifyPayloads

* Split interface
* Rename nonblocking-application to nonblocking
  More parts than the application may need to be nonblocking.
* Implement core.nonBlocking based on the interface split
* Fix: the witness parent hash could be a parent on the compaction chain
* Rename Application.DeliverBlock to BlockDeliver
  To sync with the naming of other methods.
* Change methods' signatures
  - BlockConfirmed provides the block hash only.
  - BlockDeliver provides the whole block.

A shard is basically the DEXON v1 components, except the strongly-acked
part, including:
- maintaining the lattice structure
- total ordering
- generating the consensus timestamp

* Generate correct hash/signature when generating blocks
* Refine naming and types
  - the type of chainNum should be uint32 by default
  - rename blockCount to blockNum
  - rename nodeCount to chainNum

Since witness data needs to include data from the application after it
has processed a block (e.g. stateRoot), we should make the processing
of witness data asynchronous.
An interface method `BlockProcessedChan()` is added to the application
interface to return a channel for notifying the consensus core when a
block is processed. The notification object includes a byte slice
(witness data) which will be included in the final witness data object.
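
A sketch of this asynchronous witness flow: the application pushes a notification carrying its witness bytes, and the consensus side consumes the channel. Type and field names are illustrative:

```go
package sketch

// BlockProcessed is the notification object: a block hash plus the bytes
// (e.g. stateRoot) to fold into the final witness data.
type BlockProcessed struct {
	Hash        [32]byte
	WitnessData []byte
}

type Application interface {
	// BlockProcessedChan returns the channel the application uses to
	// tell the consensus core a block has been processed.
	BlockProcessedChan() <-chan BlockProcessed
}

// consumeWitness folds each notification into the final witness object.
func consumeWitness(app Application, attach func(BlockProcessed)) {
	for note := range app.BlockProcessedChan() {
		attach(note)
	}
}
```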

Since we have a bunch of static configurations in the governance
contract, instead of using a Get* method for each of them, we implement
a GetConfiguration() method that returns a struct of the
configurations.
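
The shape of the change, sketched with illustrative field names and signature: one accessor returning a struct instead of a getter per configuration item:

```go
package sketch

import "time"

// Config bundles the static configuration values; fields are illustrative.
type Config struct {
	NumChains uint32
	Lambda    time.Duration
	K         int
	PhiRatio  float32
}

type Governance interface {
	// GetConfiguration replaces the per-field Get* methods.
	GetConfiguration(round uint64) *Config
}
```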

- Remove BlockProposingInterval; it's replaced by lambda.
- Remove the ticker parameter of NewConsensus; the ticker is now
  derived from governance.
- Leave a backdoor to hook the construction of the ticker.
- Move the 'Lambda' config to consensus.

interface (#110)
Since third-party apps will possibly implement their own blockdb class,
it makes sense for the interface to be in core.
Also add GetNumShards to the governance interface.

Since we are using a byte slice for storing payloads, VerifyPayload()
should also accept a byte slice.

Change the payload type to []byte instead of [][]byte to make it more
generic. The user of the consensus core should marshal the payload into
a byte slice themselves.

- Add a marshaller for simulation via encoding/json
- Implement the peer server based on test.TransportServer
- Remove network models; they are replaced by test.LatencyModel

The purpose of the transport layer is to abstract the way messages are
sent and connections are set up between peers in a p2p network. Peer
discovery is simulated by a hosted server: every peer sends its address
to a known server. Once enough peers have registered, the server
responds with the whole peer list to all peers.
Changes:
- Add test.Transport interface
- Add a test.Transport implementation based on golang channels
- Add a test.transport implementation based on TCP connections
- Move LatencyModel to the core/test package
- Add Marshaller interface
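
A trimmed sketch of such a transport abstraction; the method set follows the description above rather than the exact interface:

```go
package sketch

type Transport interface {
	// Join registers this peer with the peer server and blocks until
	// the complete peer list has been received.
	Join(serverAddr string) error
	// Send delivers a message to a single peer.
	Send(peerID string, msg interface{}) error
	// Broadcast delivers a message to every known peer.
	Broadcast(msg interface{}) error
	// ReceiveChan exposes incoming messages.
	ReceiveChan() <-chan interface{}
}
```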

`Height` from `Block` (#89)

interface. (#84)

* Add chainID in simulation.Validator
* Change validatorid to chainID in rbModule

- Replace map with slice
  Compared to a slice, map access is slower and its memory usage is
  less efficient.

- Avoid using recursive functions in the critical path.
- Do not write through when using levelDB. Data put into levelDB is
  safe from panics even if we don't force a write-through every time.
- Dump the count of confirmed blocks proposed by self.
- Avoid allocating variables in loops.
- Return the length of the acking node set; we only need that during
  total ordering.
- Fix potential bug: make sure win records are updated when the acking
  height vectors of candidates change.
- Keep dirty validators in a slice.
- Add a cache for objects to ease the pressure on the garbage
  collector.
- Cache the global acking status during total ordering.
- Add a method to recycle blocks.
- Marshal JSON once per broadcast.
- Make updateWinRecord run in parallel.
- Log the average/deviation of latencies when the simulation finishes.
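
For the object cache and block recycling items, a natural Go shape is sync.Pool; a sketch with a stand-in Block type, not the repository's actual cache:

```go
package sketch

import "sync"

type Block struct{ /* fields omitted */ }

var blockPool = sync.Pool{
	New: func() interface{} { return new(Block) },
}

func newBlock() *Block { return blockPool.Get().(*Block) }

// recycleBlock clears a block and returns it to the pool so reuse never
// leaks stale fields, easing pressure on the garbage collector.
func recycleBlock(b *Block) {
	*b = Block{}
	blockPool.Put(b)
}
```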

- Add a new field to test.Event: HistoryIndex
  HistoryIndex allows us to access events by their position in the
  event history.
- Record local time in test.App when receiving events.
- Add a statistics module for slices of test.Event.
- Add a new command line utility, dexcon-simulation-with-scheduler, to
  verify the execution time of core.Consensus.

- Checking `internal stability` is more expensive than checking
  `len(ANS) == validatorCount`, so only check it when
  `len(ANS) != validatorCount`.
- Cache the result of `grade` between candidates.
- Cache the `acking height vector` of each candidate.
- Add a test on total ordering with different acking frequencies
  between blocks.

When simulating the execution of core.Consensus by passing packets
through golang channels or real sockets, we need time.Sleep and
time.Now to simulate the required network/proposing latency. That is
problematic when we try to test a simulation with long network latency.
Instead, the Scheduler executes the event with the minimum timestamp:
time.Sleep is replaced by Scheduler.nextTick, and time.Now is replaced
by Event.Time.
Changes:
- Add test.Scheduler.
- Add test.Stopper interface to encapsulate different stop conditions
  for the scheduler.
- Add a reference implementation of test.Stopper that stops the
  scheduler when all validators have confirmed X blocks proposed by
  themselves.
- Add a test scenario on core.Consensus where no validators are
  byzantine.
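
The core of such a scheduler is a priority queue ordered by event time: pop the earliest event and treat its timestamp as "now". A minimal sketch, simplified from what test.Scheduler would need (real schedulers also let Run enqueue new events):

```go
package sketch

import (
	"container/heap"
	"time"
)

type Event struct {
	Time time.Time
	Run  func(now time.Time) // uses the event time instead of time.Now
}

type eventQueue []*Event

func (q eventQueue) Len() int            { return len(q) }
func (q eventQueue) Less(i, j int) bool  { return q[i].Time.Before(q[j].Time) }
func (q eventQueue) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
func (q *eventQueue) Push(x interface{}) { *q = append(*q, x.(*Event)) }
func (q *eventQueue) Pop() interface{} {
	old := *q
	e := old[len(old)-1]
	*q = old[:len(old)-1]
	return e
}

// runScheduler replaces time.Sleep with "execute the event that has the
// minimum timestamp".
func runScheduler(events []*Event) {
	q := eventQueue(events)
	heap.Init(&q)
	for q.Len() > 0 {
		e := heap.Pop(&q).(*Event)
		e.Run(e.Time)
	}
}
```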

* Add functionality to test.App
* Add test utility to generate slices of types.ValidatorID

* Add hash to block
* Check block hash in Consensus.sanityCheck
* Add hashBlockFn in block generator.go
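
The sanity check reduces to recomputing the hash and comparing; a sketch with stand-in types, using an injected hashing function in the spirit of hashBlockFn above:

```go
package sketch

import "errors"

type Block struct {
	Hash [32]byte
	// other fields omitted
}

// verifyBlockHash recomputes the block hash and compares it to the one
// carried in the block.
func verifyBlockHash(b *Block, hashBlock func(*Block) [32]byte) error {
	if hashBlock(b) != b.Hash {
		return errors.New("block hash mismatch")
	}
	return nil
}
```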

(#37)

* Make Sequencer return a slice of blocks.
* Fix naming issue
  The function 'getHeightVector' returns an ackingStatusVector.
* Fix comment error.
* Add methods to collect info when proposing blocks.
* Add test.App
* Add test.Gov
* Move this type to core.types to avoid a cyclic import.
* Add core.Consensus
* Move getMedianTime and interpoTime to util
  These functions do not depend on members of core.consensusTimestamp
  and are required when testing core.Consensus.
* Make sure types.Block.Clone copies critical fields.
* Remove core.blocklattice
* Define 'infinity' in core/total-ordering
  This definition was originally in core/blocklattice.
* Fix a bug when processing the same block twice.
* Integrate simulation with core.Consensus
  core.Consensus is a replacement for core.Blocklattice.
* Fix the comment to use singular form.
* Move the lock mechanism to submodules.
* phi should be 2*fmax+1
* Fixup: should abort when the validator is added
* Fix for new block fields
* Fix the bug that the total ordering sequence is wrong.

* Add blocks generator.
  This helper randomly generates blocks that form valid DAGs.
* Add revealer
  Revealer is an extension of blockdb.BlockIterator. The block sequence
  from the 'Next' method would either be random (see RandomRevealer) or
  meet some specific condition (e.g. forming a DAG, see
  RandomDAGRevealer).
* Add test for sequencer based on random blocks.
* core: refine Application interface and add Governance interface (#24)
  Add a new Governance interface for interaction with the governance
  contract. Also remove the ValidateBlock call in the application
  interface, as the application should validate blocks before putting
  them into the consensus module. A new BlockConverter interface is
  also added. The consensus module should accept the BlockConverter
  interface in future implementations, and use the Block() function to
  get the underlying block info.