path: root/core/syncer
Commit message (Author, Date, Files, Lines changed)
* core: syncer: add deliver pending blocks (#546) (Jimmy Hu, 2019-04-03, 1 file, -0/+31)
  * core: syncer: deliver pending blocks
  * fixup
* core: clean TODOs (#539) (Mission Liao, 2019-04-01, 2 files, -35/+29)
  * core: fix block timestamp (#529)
    - Remove TODO: dMoment is still required while the block timestamp of the genesis block still needs to be verified.
    - Refine timestamps when preparing blocks.
    - Add timestamp checking to the sanity check.
    - Revert code that patches position when preparing.
  * Remove TODOs that now seem meaningless.
  * Remove TODOs related to refactoring.
  * core: remove finalization (#531)
    - Remove types.FinalizationResult; the randomness field is moved into types.Block directly.
    - Add a placeholder for the types.Block.Randomness field of blocks proposed from round < DKGDelayRound (refer to core.NoRand).
    - Make the height of the genesis block start from 1 (refer to types.GenesisHeight).
    - The fullnode's behavior of core.Governance.GetRoundHeight (assuming a round length of 100) is: round 0 -> 0 (we need to work around this), round 1 -> 101, round 2 -> 201. test.Governance already simulates this behavior, and the workaround is wrapped in utils.GetRoundHeight.
  * core: fix issues (#536): fix code that assigns positions without initializing them while expecting genesis, or compares height with 0.
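The round-to-height mapping described above (round 1 -> 101, round 2 -> 201 for a round length of 100, plus the round-0 quirk) can be sketched as follows. All helper names here are illustrative, not the actual dexon-consensus API:

```go
package main

import "fmt"

// GenesisHeight mirrors the convention that the genesis block's
// height starts from 1 (types.GenesisHeight in the entry above).
const GenesisHeight = 1

// fullnodeRoundHeight mimics the fullnode behavior of
// core.Governance.GetRoundHeight described above: round 0 reports 0.
func fullnodeRoundHeight(round, roundLength uint64) uint64 {
	if round == 0 {
		return 0 // quirk that needs a workaround
	}
	return round*roundLength + GenesisHeight // round 1 -> 101, round 2 -> 201
}

// workaroundRoundHeight wraps the quirk the way utils.GetRoundHeight is
// said to: round 0 begins at the genesis height instead of 0.
func workaroundRoundHeight(round, roundLength uint64) uint64 {
	if round == 0 {
		return GenesisHeight
	}
	return fullnodeRoundHeight(round, roundLength)
}

func main() {
	for r := uint64(0); r <= 2; r++ {
		fmt.Printf("round %d: fullnode=%d workaround=%d\n",
			r, fullnodeRoundHeight(r, 100), workaroundRoundHeight(r, 100))
	}
}
```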
* syncer: confirmed block without randomness (#532) (Mission Liao, 2019-03-29, 1 file, -4/+8)
* core: bring back agreement result (#515) (Jimmy Hu, 2019-03-27, 1 file, -8/+35)
  * core: bring back agreement result
  * add logger
  * fix
  * fixup
* core: Remove agreement result (#514) (Jimmy Hu, 2019-03-27, 3 files, -138/+98)
  * core: remove agreement result for rounds with randomness
  * remove agreement test in syncer
  * fixup
  * remove randomness field from agreement result
  * modify test
* core: merge notarySet and DKGSet (#488) (Jimmy Hu, 2019-03-27, 1 file, -71/+12)
  * core: goodbye (さよなら) DKGSet
  * test logger
  * temporary fix before finalized
  * core: sign psig on commit vote
  * add syncer log
  * fixup
* core: refine DKG aborting (#512) (Mission Liao, 2019-03-23, 2 files, -17/+17)
  * Avoid aborting a DKG protocol that is registered later: although that DKG protocol would be registered after 1/2 round, both are triggered in separate goroutines and we shouldn't assume their execution order.
  * Capitalize logs
  * Add test
  * Return "aborted" when not running
  * Log DKG aborting result
  * Remove duplicated DKG abort
* core: remove initRoundBeginHeight parameter (Mission Liao, 2019-03-22, 1 file, -59/+14)
  * Implement Governance.GetRoundHeight in test.Governance.
* core: abort hung DKG (#508) (Mission Liao, 2019-03-22, 1 file, -0/+1)
  * Capitalize log
  * Fix DKG aborting hangs: make sure cc.dkg is reset to nil in runDKG
  * Remember to purge the tsig verifier too
  * Replace abortCh with context.Context
  * Fix obvious bug
  * Fixup: Wait blocks forever when runDKG is not called
  * Fixup: fix corner case: if the Add(1) is moved into runDKG under cc.dkgLock, we may not catch it after unlocking cc.dkgLock
  * fixup
* core: height event handlers are not called (#509) (Mission Liao, 2019-03-22, 1 file, -10/+4)
  * Make utils.RoundEvent.ValidateNextRound non-blocking
  * Make NotifyHeight a blocking call
  * Trigger all height event handlers that should be triggered by initBlock
  * Fixup: forgot the syncer part
* core: reset DKG (#502) (Mission Liao, 2019-03-20, 1 file, -0/+10)
  * Allow utils.NodeSetCache to purge by round.
  * Purge utils.NodeSetCache when DKG resets.
  * Add a utils.RoundEvent handler to abort all previously running DKGs.
  * Fix test.App hanging in BlockDelivered when utils.RoundEvent is attached: ValidateNextRound is a blocking call and would block test.App.BlockDelivered.
* core/syncer: fix a bug in ForceSync (#499) (Jimmy Hu, 2019-03-18, 1 file, -5/+8)
* syncer: watchcat: move timeout config to constructor (#494) (Wei-Ning Huang, 2019-03-16, 2 files, -11/+15)
  * Move the timeout configuration from the parameter of `Start` to `NewWatchCat` so it's easier for the fullnode to configure the module.
* core, syncer: integrate utils.RoundEvent (#490) (Mission Liao, 2019-03-16, 1 file, -61/+76)
* core/syncer: add force sync (#468) (Jimmy Hu, 2019-03-15, 1 file, -0/+25)
  * core: add Recovery interface
  * core/syncer: modify Recovery interface
  * core: fix Recovery interface
  * core/syncer: rename terminator to watchcat (#491): add error log; rename Pat to Feed
  * core/syncer: add force sync
  * run prepareRandomness if round >= DKGDelayRound
  * Add test for ForceSync
* core/syncer: rename terminator to watchcat (#491) (Jimmy Hu, 2019-03-15, 2 files, -54/+61)
  * core/syncer: rename terminator to watchcat
  * Add error log
  * Rename Pat to Feed
* core: Add Recovery Interface (#463) (Jimmy Hu, 2019-03-15, 2 files, -0/+260)
  * core: add Recovery interface
  * core/syncer: modify Recovery interface
* core/syncer: fix syncer deadlock (#479) (Mission Liao, 2019-03-12, 1 file, -20/+24)
  * Fix deadlock
  * core/syncer: prevent selecting on a nil channel
  * Remove unnecessary read-lock holding
* syncer: avoid attacks by older AgreementResults when syncing (#471) (Mission Liao, 2019-03-08, 1 file, -0/+8)
  * One possible attack on the syncer: byzantine nodes periodically broadcast very old types.AgreementResults. If a syncer receives those types.AgreementResults, it might treat itself as synced while still falling behind other nodes. A quick workaround is to ignore types.AgreementResults older than the chain tip when creating the syncer.Consensus instance.
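The workaround above amounts to a staleness check against the chain tip before accepting an incoming result. A minimal sketch with simplified, hypothetical types (the real types.Position and types.AgreementResult carry more fields):

```go
package main

import "fmt"

// position locates a block on the chain (simplified sketch).
type position struct {
	Round  uint64
	Height uint64
}

type agreementResult struct {
	Position position
}

// isStale reports whether an incoming AgreementResult is at or behind
// the chain tip known when the syncer.Consensus instance was created,
// and should therefore be ignored.
func isStale(r agreementResult, tip position) bool {
	if r.Position.Round != tip.Round {
		return r.Position.Round < tip.Round
	}
	return r.Position.Height <= tip.Height
}

func main() {
	tip := position{Round: 2, Height: 250}
	fmt.Println(isStale(agreementResult{position{1, 90}}, tip))  // old round: ignored
	fmt.Println(isStale(agreementResult{position{2, 300}}, tip)) // ahead of tip: processed
}
```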
* core: first few rounds will not have DKG (#455) (Jimmy Hu, 2019-03-04, 1 file, -2/+2)
  * core: add DKGDelayRound constant
  * core: use the constant value
  * core, utils: set DKGDelayRound for utils
  * test: add dkgDelayRound to state
  * core: do not run DKG and CRS for round < DKGDelayRound
  * fix test
* syncer: fix syncer panic (#456) (Mission Liao, 2019-02-27, 3 files, -3/+127)
  * Fix syncer panic: we can't verify the correctness of randomness results from rounds whose corresponding configurations are not ready yet.
  * Fix blocks not being confirmed when they should be.
* core: Change RoundInterval to RoundLength (#453) (Jimmy Hu, 2019-02-26, 1 file, -1/+1)
* core: remove acks (#451) (Mission Liao, 2019-02-22, 1 file, -1/+0)
* core: switch round by block height (#450) (Mission Liao, 2019-02-20, 1 file, -30/+12)
* big-bang: single chain (#446) (Mission Liao, 2019-02-19, 1 file, -444/+128)
* syncer: fix issues when switching to core.Consensus (#418) (Mission Liao, 2019-01-11, 2 files, -35/+54)
  * When confirmed blocks passed to core.Consensus aren't continuous in position on some chain, the pulling would skip the missing blocks. Fix: when some block is missing, avoid adding it and all blocks after it to core.Consensus.
  * We need to keep the receive channel of the network module from filling up. Fix: while switching to core.Consensus, launch a dummy receiver to drain the network module's receive channel. Fix: between the time core.Consensus is created and the time it starts running, a dummy receiver is also required to drain that channel.
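The dummy-receiver fix above can be sketched as a goroutine that drains the network module's receive channel until the real consumer takes over. A hypothetical shape, not the actual test.LaunchDummyReceiver signature:

```go
package main

import "fmt"

// launchDummyReceiver drains a network receive channel so the network
// module never blocks on a full channel while the consensus core is
// being created or switched over. It stops when done is closed, at
// which point the real receiver is expected to take over.
func launchDummyReceiver(recv <-chan interface{}, done <-chan struct{}) {
	go func() {
		for {
			select {
			case <-done:
				return
			case msg := <-recv:
				_ = msg // drop the message; nobody consumes it yet
			}
		}
	}()
}

func main() {
	recv := make(chan interface{}) // unbuffered: senders block without a reader
	done := make(chan struct{})
	launchDummyReceiver(recv, done)
	for i := 0; i < 3; i++ {
		recv <- i // would block forever without the dummy receiver
	}
	close(done)
	fmt.Println("network channel drained without blocking")
}
```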
* syncer: skip error (#412) (Mission Liao, 2019-01-08, 1 file, -0/+4)
  * This error is always raised when trying to sync from consensusHeight == 0. However, it is mostly meaningless to the fullnode, which just needs to retry later.
* sync: Verify randomness results before caching them (#392) (Mission Liao, 2019-01-08, 1 file, -9/+34)
  * NOTE: the consistency between a block's hash and its position is checked in core.Consensus.
* sync: add log for syncer to debug hanging issue (#407) (Mission Liao, 2019-01-07, 2 files, -76/+143)
  * Besides adding logs, also include these minor fixes:
    - Access con.validatedChains under locking
    - Access con.latticeLastRound under locking
    - Fix incorrect WaitGroup usage
    - Remove an unused parameter in startAgreement
* core: fix stuffs (#401) (Mission Liao, 2019-01-05, 2 files, -7/+7)
  * Remove block recycling mechanism
  * Return directly when the previous DKG is not finished
  * Adjust some logging levels
  * Skip pulling when the set of hashes to pull is empty
* core: syncer: safely spawn goroutines (#399) (wmin0, 2019-01-04, 2 files, -10/+6)
* sync: fix panic (#388) (Mission Liao, 2018-12-28, 1 file, -32/+47)
  * Merge several CRS notifications into one
  * Sync config when a new CRS is found
* sync: filter duplicated randomness (#387) (Mission Liao, 2018-12-28, 1 file, -12/+34)
* core: fix stuffs (#383) (Mission Liao, 2018-12-26, 2 files, -8/+8)
  * Merge core.Consensus constructors
  * Downgrade severity of some logs
  * Refine the logic for adding blocks from the pool to the lattice
  * Add test.LaunchDummyReceiver
* core: fix issues found when testing syncing (#379) (Mission Liao, 2018-12-24, 1 file, -11/+13)
  * Avoid panic when stopping multiple times
  * Fix syncer panic when switching rounds
  * Add getCurrentConfig to total-ordering and panic with more info
  * Avoid an infinite loop
* utils: move authenticator to utils package (#378) (Mission Liao, 2018-12-22, 1 file, -1/+1)
* misc: panic when not ready (#374) (Mission Liao, 2018-12-18, 1 file, -7/+10)
  * Panic when config/CRS is not ready. For calls to Governance.Configuration and Governance.CRS that don't check their returns, replace them with the newly added helpers utils.GetConfigurationWithPanic and utils.GetCRSWithPanic, which check the returns and panic directly if the value is not ready yet.
  * Fix a bug where config is not ready when syncing
* syncer: fix stuffs (#373) (Mission Liao, 2018-12-18, 1 file, -46/+71)
  * Add a new method, GetSyncedConsensus, which avoids calling BlockDelivered inside SyncBlocks and thus avoids a potential deadlock
  * Add a method to stop the syncer before it is synced
  * Enable nonBlockingApp for the synced Consensus instance
* db: cache compaction chain tip in db (#369) (Mission Liao, 2018-12-13, 1 file, -1/+23)
  * Replace JSON with RLP in the levelDB implementation
  * Make sure blocks to sync follow the compaction chain tip
* db: rename blockdb to db (#367) (Mission Liao, 2018-12-13, 1 file, -11/+11)
  * Rename the blockdb package to db
  * Rename 'BlockDB' to 'DB'
  * Make all methods in db specific to 'block'
  * Rename db.BlockDatabase to db.Database
  * Rename revealer to block-revealer
  * Rename test.Revealer to test.BlockRevealer
* syncer: fix stuffs (#366) (Mission Liao, 2018-12-12, 1 file, -22/+67)
  * Return delivered blocks when processing finalized blocks
  * Check the delivery sequence when processing finalized blocks
  * Skip delivery of finalized blocks
  * Remove duplicated calls to BlockConfirmed
  * Add a numChains change to the test scenario
  * Fix the bug that restartNotary is triggered by a block older than the current aID
* core: syncer: fix round finding process (#357) (haoping-ku, 2018-12-05, 1 file, -3/+3)
  * core: syncer: fix round finding process
  * Fix comment
* core: construct consensus from syncer (#352) (Mission Liao, 2018-12-04, 2 files, -6/+40)
* core: syncer: add syncer (#346) (haoping-ku, 2018-11-29, 2 files, -0/+799)