* core: syncer: deliver pending blocks
* fixup
|
* core: fix block timestamp (#529)
* Remove TODO
dMoment is still required when the block timestamp
of the genesis block still needs to be verified.
* Refine timestamp when preparing blocks
* Add timestamp checking in sanity check
* Revert code to patch position when preparing
* Remove TODOs that seem meaningless now
* Remove TODOs related to refactoring
* core: remove finalization (#531)
- Remove types.FinalizationResult; the randomness
field is moved to `types.Block` directly.
- Add a placeholder for types.Block.Randomness
field for blocks proposed from
round < DKGDelayRound. (refer to core.NoRand)
- Make the height of the genesis block start
from 1. (refer to types.GenesisHeight)
- The fullnode's behavior of
core.Governance.GetRoundHeight is (assuming a
round length of 100):
  - round 0 -> 0 (we need to work around this)
  - round 1 -> 101
  - round 2 -> 201
- test.Governance already simulates this
behavior, and the workaround is wrapped in
utils.GetRoundHeight (see the sketch after
this entry).
* core: fix issues (#536)
Fix code in these conditions:
- assigning positions without initializing them
and expecting they refer to genesis
- comparing height with 0
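
A minimal sketch of such a wrapper, assuming a
pared-down Governance interface; this is not the
actual utils.GetRoundHeight code:

    package sketch

    // Governance is a pared-down stand-in for
    // core.Governance.
    type Governance interface {
        // GetRoundHeight returns the height at which the
        // given round begins; the fullnode returns 0 for
        // round 0.
        GetRoundHeight(round uint64) uint64
    }

    // GetRoundHeight hides the round-0 quirk: with the
    // genesis height starting from 1 (types.GenesisHeight),
    // round 0 also begins at height 1.
    func GetRoundHeight(gov Governance, round uint64) uint64 {
        if round == 0 {
            return 1 // types.GenesisHeight
        }
        return gov.GetRoundHeight(round)
    }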
|
* core: bring back agreement result
* add logger
* Fix
* fixup
|
* core: remove agreement result for round with randomness
* remove the agreement-result test in syncer
* fixup
* remove randomness field from agreement result
* modify test
|
* core: goodbye DKGSet
* test logger
* temporary fix before the design is finalized
* core: Sign psig on commit vote
* Add syncer log
* fixup
|
* Avoid aborting the DKG protocol registered later
Although that DKG protocol would be registered
after the 1/2-round point, both are triggered in
separate goroutines and we shouldn't assume their
execution order (see the sketch below).
* Capitalize logs
* Add test
* Return aborted when not running
* Log DKG aborting result
* Remove duplicated DKG abort
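
One way to avoid aborting a protocol registered
later, sketched with hypothetical names (dkgRunner,
generation); this is not the actual dexon-consensus
code. Each registered DKG run is tagged with a
generation counter, and abort becomes a no-op when
the generation has moved on:

    package sketch

    import (
        "context"
        "sync"
    )

    // dkgRunner tags each registered DKG run with a
    // generation number so a stale abort cannot kill a run
    // registered after the abort was issued.
    type dkgRunner struct {
        mu         sync.Mutex
        generation uint64
        cancel     context.CancelFunc
    }

    // register installs a new run and returns its generation.
    func (r *dkgRunner) register(cancel context.CancelFunc) uint64 {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.generation++
        r.cancel = cancel
        return r.generation
    }

    // abort cancels the run only if gen still matches; an
    // abort racing with a later register does nothing.
    func (r *dkgRunner) abort(gen uint64) {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.generation != gen || r.cancel == nil {
            return
        }
        r.cancel()
        r.cancel = nil
    }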
|
* Implement Governance.GetRoundHeight
in test.Governance.
|
* Capitalize log
* Fix DKG aborting hangs
Make sure cc.dkg is reset to nil in runDKG.
* Remember to purge the tsig verifier too
* Replace abortCh with context.Context (see the
sketch below)
* Fix obvious bug
* Fixup: Wait blocks forever when runDKG is not
called
* Fixup: fix corner case
If the Add(1) were moved into runDKG under
cc.dkgLock, we might not catch it after unlocking
cc.dkgLock.
* fixup
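
A minimal sketch of the context.Context pattern
described above; the field names (dkgLock,
dkgRunning, dkgCancel) are illustrative, not the
actual configuration-chain internals:

    package sketch

    import (
        "context"
        "sync"
    )

    type configurationChain struct {
        dkgLock    sync.Mutex
        dkgRunning sync.WaitGroup
        dkgCancel  context.CancelFunc
    }

    // runDKG registers the run while holding dkgLock, so an
    // aborter that grabs the lock afterwards must observe
    // it. The Add(1) happens under dkgLock: if it were
    // delayed until after unlocking, abortDKG could call
    // Wait before the Add and return without waiting for
    // this run.
    func (cc *configurationChain) runDKG() {
        cc.dkgLock.Lock()
        ctx, cancel := context.WithCancel(context.Background())
        cc.dkgCancel = cancel
        cc.dkgRunning.Add(1)
        cc.dkgLock.Unlock()

        defer cc.dkgRunning.Done()
        <-ctx.Done() // placeholder for the real DKG steps
    }

    // abortDKG cancels the running DKG (if any) and waits
    // until it has fully stopped.
    func (cc *configurationChain) abortDKG() {
        cc.dkgLock.Lock()
        cancel := cc.dkgCancel
        cc.dkgLock.Unlock()
        if cancel != nil {
            cancel()
        }
        cc.dkgRunning.Wait()
    }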
|
* Make utils.RoundEvent.ValidateNextRound
non-blocking (see the sketch below)
* Make NotifyHeight a blocking call
* Trigger all height event handlers that should be
triggered by initBlock
* Fixup: forgot the syncer part
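
One way to make such a validation non-blocking,
sketched with hypothetical names rather than the
actual utils.RoundEvent API: run the check in its
own goroutine and report the outcome via a callback.

    package sketch

    // roundEvent is a stand-in for utils.RoundEvent; both
    // function fields are assumptions for this sketch.
    type roundEvent struct {
        validate func(height uint64) bool
        onDone   func(height uint64, ok bool)
    }

    // ValidateNextRound returns immediately; the potentially
    // slow validation runs in a separate goroutine and the
    // result is delivered through the onDone callback.
    func (e *roundEvent) ValidateNextRound(height uint64) {
        go func() {
            e.onDone(height, e.validate(height))
        }()
    }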
|
* Allow utils.NodeSetCache to purge by rounds
(see the sketch below).
* Purge utils.NodeSetCache when DKG resets.
* Add a utils.RoundEvent handler to abort all
previously running DKGs.
* Fix test.App hanging in BlockDelivered when
utils.RoundEvent is attached.
ValidateNextRound is a blocking call and would
block test.App.BlockDelivered.
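
A minimal sketch of purging a round-keyed cache on
DKG reset; the internals here are illustrative, not
the actual utils.NodeSetCache fields:

    package sketch

    import "sync"

    type nodeSet struct{} // stand-in for a cached node set

    type NodeSetCache struct {
        lock  sync.RWMutex
        cache map[uint64]*nodeSet
    }

    // Purge drops the cached node sets of the given round
    // and of every later round, so entries derived from a
    // stale CRS get rebuilt after a DKG reset.
    func (c *NodeSetCache) Purge(round uint64) {
        c.lock.Lock()
        defer c.lock.Unlock()
        for r := range c.cache {
            if r >= round {
                delete(c.cache, r)
            }
        }
    }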
|
Move the timeout configuration from a parameter of
`Start` to NewWatchCat, so it's easier for the
fullnode to configure the module (see the sketch
below).
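
The shape of the change, sketched; the actual
NewWatchCat parameters in core/syncer may differ:

    package sketch

    import "time"

    type WatchCat struct {
        timeout time.Duration
    }

    // NewWatchCat fixes the timeout at construction time,
    // so the fullnode configures it once while wiring up
    // the module instead of passing it to every Start call.
    func NewWatchCat(timeout time.Duration) *WatchCat {
        return &WatchCat{timeout: timeout}
    }

    // Start now reads w.timeout instead of taking it as a
    // parameter.
    func (w *WatchCat) Start() {
        _ = w.timeout // placeholder for the monitoring loop
    }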
|
* core: Add Recovery interface (see the sketch
below)
* core/syncer: modify the recovery interface
* core: fix the Recovery interface
* core/syncer: rename terminator to watchcat (#491)
* core/syncer: rename terminator to watchcat
* Add error log
* Rename Pat to Feed
* core/syncer: add force sync
* run prepareRandomness if round >= DKGDelayRound
* Add test for ForceSync
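
A sketch of what such a recovery interface could
look like; the method set here is a guess for
illustration, not necessarily the final core
interface:

    package sketch

    // Recovery lets a stalled node vote for consensus
    // recovery and observe how many peers agree.
    type Recovery interface {
        // ProposeSkipBlock proposes skipping to the given
        // height to recover from a stall.
        ProposeSkipBlock(height uint64) error
        // Votes returns the number of recovery votes
        // collected for the given height.
        Votes(height uint64) (uint64, error)
    }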
|
* core/syncer: rename terminator to watchcat
* Add error log
* Rename Pat to Feed
|
* core: Add Recovery Interface
* core/syncer: modify recovery interface
|
* Fix deadlock
* core/syncer: prevent selecting on a nil channel
(see the sketch below)
* Remove unnecessary read-lock holding
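
Receiving from a nil channel blocks forever, so a
select whose cases are all nil deadlocks. A guard in
the spirit of the fix, with illustrative names:

    package sketch

    // drain consumes ch until it is closed or done fires.
    func drain(ch <-chan int, done <-chan struct{}) {
        // A nil ch would silently disable its select case;
        // if every case were nil the select would block
        // forever, so bail out before selecting.
        if ch == nil {
            return
        }
        for {
            select {
            case _, ok := <-ch:
                if !ok {
                    return
                }
            case <-done:
                return
            }
        }
    }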
|
One possible attack on the syncer:
- Byzantine nodes periodically broadcast some very
old types.AgreementResults.
- If a syncer receives those types.AgreementResults,
it might consider itself synced while still falling
behind other nodes.
A quick workaround is to ignore
types.AgreementResults older than the chain tip
when creating the syncer.Consensus instance (see
the sketch below).
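
The workaround, sketched with simplified types
rather than the actual syncer code:

    package sketch

    // AgreementResult is pared down to the one field the
    // check needs.
    type AgreementResult struct {
        Height uint64 // height of the agreed position
    }

    // isStale reports whether a received agreement result
    // is no newer than the chain tip recorded when the
    // syncer.Consensus instance was created; stale results
    // are ignored instead of being applied.
    func isStale(r *AgreementResult, chainTipHeight uint64) bool {
        return r.Height <= chainTipHeight
    }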
|
* core: Add DKGDelayRound constant
* core: use constant value
* core, utils: set DKGDelayRound for utils.
* test: add dkgDelayRound to state
* core: do not run DKG and CRS for
round < DKGDelayRound (see the sketch below)
* fix test
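
The guard, sketched; DKGDelayRound is a real core
constant, but its value and the surrounding function
here are illustrative:

    package sketch

    // DKGDelayRound mirrors the core constant: rounds
    // before it run without DKG-based randomness.
    const DKGDelayRound uint64 = 1

    // shouldRunDKG gates DKG and CRS preparation so neither
    // runs for rounds earlier than DKGDelayRound.
    func shouldRunDKG(round uint64) bool {
        return round >= DKGDelayRound
    }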
|
* Fix syncer panic
We can't verify the correctness of randomness
results from rounds whose corresponding
configurations are not ready yet.
* Fix blocks not being confirmed when they should be
|
- When confirmed blocks passed to core.Consensus
aren't continuous in position on some chain,
pulling would skip the missing blocks.
- fix: when some block is missing, avoid adding
it, and all blocks after it, to core.Consensus.
- We need to keep the receive channel of the
network module from filling up.
- fix: while switching to core.Consensus, launch
a dummy receiver to drain the network module's
receive channel.
- fix: between the creation of core.Consensus
and its running, a dummy receiver is also
required to drain the network module's receive
channel (see the sketch below).
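
A minimal sketch of such a dummy receiver, in the
spirit of the test.LaunchDummyReceiver mentioned in
a later commit; the signature is illustrative:

    package sketch

    import "context"

    // LaunchDummyReceiver drains msgCh until ctx is
    // cancelled, so the network module's receive channel
    // never fills up while no real consumer is attached.
    func LaunchDummyReceiver(
        ctx context.Context, msgCh <-chan interface{},
    ) context.CancelFunc {
        ctx, cancel := context.WithCancel(ctx)
        go func() {
            for {
                select {
                case <-ctx.Done():
                    return
                case <-msgCh: // drop the message
                }
            }
        }()
        return cancel
    }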
|
This error would always be raised when trying to
sync from consensusHeight == 0. However, the error
is mostly meaningless to the fullnode; it just
needs to retry later.
|
NOTE: the consistency between a block's hash and
its position is ensured in core.Consensus.
|
Besides adding logs, this also includes minor
fixes:
* Access con.validatedChains under locking
* Access con.latticeLastRound under locking
* Fix incorrect waitGroup usage
* Remove an unused parameter in startAgreement
|
* Remove block recycling mechanism
* Return directly when previous DKG is not finished.
* Adjust some logging level
* Skip pulling when the set of hashes to pull is
empty
|
* Merge several CRS notifications into one.
* Sync config when new CRS is found
|
* Merge core.Consensus constructors
* Downgrade severity of logs
* Refine logic to add blocks from pool to lattice
* Add test.LaunchDummyReceiver
|
* Avoid panic when stopping multiple times.
* Fix syncer panic when round switching
* Add getCurrentConfig to total-ordering,
and panic with more info
* Avoid infinite loop.
|
* Panic when config/CRS is not ready
For calls to Governance.Configuration and
Governance.CRS made without checking their
returns, replace them with these newly added
helpers:
- utils.GetConfigurationWithPanic
- utils.GetCRSWithPanic
They check the returns and panic directly if the
value is not ready yet (see the sketch below).
* Fix a bug where the config is not ready when
syncing
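
A sketch of what such a helper could look like, with
a simplified Governance stand-in; the real helper's
signature may differ:

    package sketch

    import "fmt"

    type Config struct{}

    // Governance is a stand-in exposing only the call the
    // helper wraps; Configuration returns nil when the
    // round's config is not ready yet.
    type Governance interface {
        Configuration(round uint64) *Config
    }

    // GetConfigurationWithPanic returns a round's
    // configuration and panics if it is not ready, instead
    // of letting a nil config propagate silently.
    func GetConfigurationWithPanic(gov Governance, round uint64) *Config {
        if cfg := gov.Configuration(round); cfg != nil {
            return cfg
        }
        panic(fmt.Errorf("configuration not ready: round=%d", round))
    }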
|
* Add a new method: GetSyncedConsensus
This method avoids calling BlockDelivered in
SyncBlocks, which avoids a potential deadlock.
* Add a method to stop the syncer before it is
synced
* Enable nonBlockingApp for the synced Consensus
instance.
|
* Replace JSON with RLP in the levelDB
implementation (see the sketch below).
* Make sure blocks to sync follow the compaction
chain tip
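
RLP gives a deterministic, compact encoding compared
to JSON. A minimal round trip with the go-ethereum
rlp package (the block type here is illustrative):

    package sketch

    import "github.com/ethereum/go-ethereum/rlp"

    type block struct {
        Hash   [32]byte
        Height uint64
    }

    // encode/decode show the round trip a levelDB-backed
    // store performs: rlp.EncodeToBytes replaces
    // json.Marshal and rlp.DecodeBytes replaces
    // json.Unmarshal.
    func encode(b *block) ([]byte, error) {
        return rlp.EncodeToBytes(b)
    }

    func decode(data []byte) (*block, error) {
        var b block
        if err := rlp.DecodeBytes(data, &b); err != nil {
            return nil, err
        }
        return &b, nil
    }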
|
* Rename blockdb package to db
* Rename 'BlockDB' to 'DB'
* Make all methods in db specific to 'block'.
* Rename db.BlockDatabase to db.Database
* Rename revealer to block-revealer
* Rename test.Revealer to test.BlockRevealer
|
* return delivered blocks when processing finalized
blocks
* check the deliver sequence when processing
finalized blocks (see the sketch below)
* skip delivery of finalized blocks
* remove duplicated calls to BlockConfirmed
* add a numChains change to the test scenario
* fix a bug where restartNotary was triggered by a
block older than the current aID
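
The kind of sequence check involved, sketched in a
simplified form; the real check lives inside the
delivery path:

    package sketch

    import "fmt"

    // checkDeliverSequence verifies that finalized blocks
    // are delivered with strictly consecutive heights.
    func checkDeliverSequence(heights []uint64) error {
        for i := 1; i < len(heights); i++ {
            if heights[i] != heights[i-1]+1 {
                return fmt.Errorf(
                    "delivery not continuous: %d -> %d",
                    heights[i-1], heights[i])
            }
        }
        return nil
    }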
|
* core: syncer: fix round finding process
* Fix comment