Documentation ¶
Index ¶
- Constants
- func HeightPlaceholderKey(prefix string, height uint64) string
- type Cache
- type CacheManager
- type Manager
- type PendingData
- func (pd *PendingData) GetLastSubmittedDataHeight() uint64
- func (pd *PendingData) GetPendingData(ctx context.Context) ([]*types.Data, [][]byte, error)
- func (pd *PendingData) NumPendingData() uint64
- func (pd *PendingData) SetLastSubmittedDataHeight(ctx context.Context, newLastSubmittedDataHeight uint64)
- type PendingHeaders
- func (ph *PendingHeaders) GetLastSubmittedHeaderHeight() uint64
- func (ph *PendingHeaders) GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
- func (ph *PendingHeaders) NumPendingHeaders() uint64
- func (ph *PendingHeaders) SetLastSubmittedHeaderHeight(ctx context.Context, newLastSubmittedHeaderHeight uint64)
- type PendingManager
Constants ¶
const (
	// DefaultItemsCacheSize is the default size for the items cache.
	DefaultItemsCacheSize = 200_000
	// DefaultHashesCacheSize is the default size for hash tracking.
	DefaultHashesCacheSize = 200_000
	// DefaultDAIncludedCacheSize is the default size for DA inclusion tracking.
	DefaultDAIncludedCacheSize = 200_000
)
const (
	// HeaderDAIncludedPrefix is the store key prefix for header DA inclusion tracking.
	HeaderDAIncludedPrefix = "cache/header-da-included/"
	// DataDAIncludedPrefix is the store key prefix for data DA inclusion tracking.
	DataDAIncludedPrefix = "cache/data-da-included/"
	// DefaultTxCacheRetention is the default time to keep transaction hashes in cache.
	DefaultTxCacheRetention = 24 * time.Hour
)
const DefaultPendingCacheSize = 200_000
DefaultPendingCacheSize is the default size for the pending items cache.
const LastSubmittedDataHeightKey = "last-submitted-data-height"
LastSubmittedDataHeightKey is the key used for persisting the last submitted data height in store.
Variables ¶
This section is empty.
Functions ¶
func HeightPlaceholderKey ¶
HeightPlaceholderKey returns a store key for a height-indexed DA inclusion entry used when the real content hash is unavailable (e.g. after restore). Format: "<prefix>__h/<height_hex_16>" — cannot collide with real 64-char hashes.
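The documented key format can be sketched as follows. The helper below is a hypothetical re-implementation for illustration only, not the package's actual code:

```go
package main

import "fmt"

// heightPlaceholderKey mirrors the documented format
// "<prefix>__h/<height_hex_16>": a zero-padded 16-digit hex height,
// which can never collide with a real 64-character content hash.
func heightPlaceholderKey(prefix string, height uint64) string {
	return fmt.Sprintf("%s__h/%016x", prefix, height)
}

func main() {
	// The height portion is always exactly 16 hex characters.
	fmt.Println(heightPlaceholderKey("cache/header-da-included/", 42))
}
```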
Types ¶
type Cache ¶
type Cache[T any] struct { // contains filtered or unexported fields }
Cache tracks seen blocks and DA inclusion status using bounded LRU caches.
func NewCache ¶
NewCache creates a Cache. When store and keyPrefix are set, mutations persist a snapshot so RestoreFromStore can recover in-flight state.
func (*Cache[T]) ClearFromStore ¶
ClearFromStore deletes the snapshot key from the store.
func (*Cache[T]) RestoreFromStore ¶
RestoreFromStore loads the in-flight snapshot with a single store read. Each entry is installed as a height placeholder; real hashes replace them once the DA retriever re-fires SetHeaderDAIncluded after startup. Missing snapshot key is treated as a no-op (fresh node or pre-snapshot version).
type CacheManager ¶
type CacheManager interface {
DaHeight() uint64
// Header operations
IsHeaderSeen(hash string) bool
SetHeaderSeen(hash string, blockHeight uint64)
GetHeaderDAIncludedByHash(hash string) (uint64, bool)
GetHeaderDAIncludedByHeight(blockHeight uint64) (uint64, bool)
SetHeaderDAIncluded(hash string, daHeight uint64, blockHeight uint64)
RemoveHeaderDAIncluded(hash string)
// Data operations
IsDataSeen(hash string) bool
SetDataSeen(hash string, blockHeight uint64)
GetDataDAIncludedByHash(daCommitmentHash string) (uint64, bool)
GetDataDAIncludedByHeight(blockHeight uint64) (uint64, bool)
SetDataDAIncluded(daCommitmentHash string, daHeight uint64, blockHeight uint64)
RemoveDataDAIncluded(hash string)
// Transaction operations
IsTxSeen(hash string) bool
SetTxSeen(hash string)
CleanupOldTxs(olderThan time.Duration) int
// Pending events syncing coordination
GetNextPendingEvent(blockHeight uint64) *common.DAHeightEvent
SetPendingEvent(blockHeight uint64, event *common.DAHeightEvent)
// Store operations
SaveToStore() error
RestoreFromStore() error
// Cleanup operations
DeleteHeight(blockHeight uint64)
}
CacheManager provides thread-safe cache operations for tracking seen blocks and DA inclusion status.
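A typical call ordering against this interface is: dedupe on the "seen" methods, then record DA inclusion once the header lands on the DA layer. The sketch below stubs just the methods it uses with a hypothetical `fakeCache`; a real node would use the package's Manager implementation instead:

```go
package main

import "fmt"

// fakeCache stubs a subset of CacheManager for illustration.
type fakeCache struct {
	seen       map[string]bool
	daIncluded map[string]uint64 // content hash -> DA height
}

func (c *fakeCache) IsHeaderSeen(hash string) bool              { return c.seen[hash] }
func (c *fakeCache) SetHeaderSeen(hash string, blockHeight uint64) { c.seen[hash] = true }
func (c *fakeCache) SetHeaderDAIncluded(hash string, daHeight, blockHeight uint64) {
	c.daIncluded[hash] = daHeight
}
func (c *fakeCache) GetHeaderDAIncludedByHash(hash string) (uint64, bool) {
	h, ok := c.daIncluded[hash]
	return h, ok
}

// trackHeader dedupes on "seen", then records DA inclusion.
// Returns false when the header was already processed.
func trackHeader(c *fakeCache, hash string, blockHeight, daHeight uint64) bool {
	if c.IsHeaderSeen(hash) {
		return false // duplicate, skip
	}
	c.SetHeaderSeen(hash, blockHeight)
	c.SetHeaderDAIncluded(hash, daHeight, blockHeight)
	return true
}

func main() {
	c := &fakeCache{seen: map[string]bool{}, daIncluded: map[string]uint64{}}
	fmt.Println(trackHeader(c, "h1", 5, 100)) // first sighting: tracked
	fmt.Println(trackHeader(c, "h1", 5, 100)) // deduplicated
}
```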
type Manager ¶
type Manager interface {
CacheManager
PendingManager
}
Manager combines CacheManager and PendingManager.
type PendingData ¶
type PendingData struct {
// contains filtered or unexported fields
}
PendingData maintains Data that needs to be published to the DA layer.

Important assertions:
- data is safely stored in the database before submission to DA
- data is always pushed to DA in order (by height)
- DA submission of multiple data is atomic: it is impossible to submit only part of a batch

lastSubmittedDataHeight is updated only after receiving confirmation from DA. The worst-case scenario is when data was successfully submitted to DA but the confirmation was not received (e.g. the node was restarted or a networking issue occurred). In this case the data is re-submitted to DA (at extra cost). evolve is able to skip duplicate data, so this shouldn't affect full nodes.

Note: Submission of pending data to DA should account for the DA max blob size.
func NewPendingData ¶
NewPendingData returns a new PendingData struct
func (*PendingData) GetLastSubmittedDataHeight ¶
func (pd *PendingData) GetLastSubmittedDataHeight() uint64
func (*PendingData) GetPendingData ¶
GetPendingData returns a sorted slice of pending Data along with their marshalled bytes.
func (*PendingData) NumPendingData ¶
func (pd *PendingData) NumPendingData() uint64
func (*PendingData) SetLastSubmittedDataHeight ¶
func (pd *PendingData) SetLastSubmittedDataHeight(ctx context.Context, newLastSubmittedDataHeight uint64)
type PendingHeaders ¶
type PendingHeaders struct {
// contains filtered or unexported fields
}
PendingHeaders maintains headers that need to be published to the DA layer.

Important assertions:
- headers are safely stored in the database before submission to DA
- headers are always pushed to DA in order (by height)
- DA submission of multiple headers is atomic: it is impossible to submit only part of a batch

lastSubmittedHeaderHeight is updated only after receiving confirmation from DA. The worst-case scenario is when headers were successfully submitted to DA but the confirmation was not received (e.g. the node was restarted or a networking issue occurred). In this case the headers are re-submitted to DA (at extra cost). evolve is able to skip duplicate headers, so this shouldn't affect full nodes.
func NewPendingHeaders ¶
NewPendingHeaders returns a new PendingHeaders struct
func (*PendingHeaders) GetLastSubmittedHeaderHeight ¶
func (ph *PendingHeaders) GetLastSubmittedHeaderHeight() uint64
func (*PendingHeaders) GetPendingHeaders ¶
func (ph *PendingHeaders) GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
GetPendingHeaders returns a sorted slice of pending headers along with their marshalled bytes.
func (*PendingHeaders) NumPendingHeaders ¶
func (ph *PendingHeaders) NumPendingHeaders() uint64
func (*PendingHeaders) SetLastSubmittedHeaderHeight ¶
func (ph *PendingHeaders) SetLastSubmittedHeaderHeight(ctx context.Context, newLastSubmittedHeaderHeight uint64)
type PendingManager ¶
type PendingManager interface {
GetPendingHeaders(ctx context.Context) ([]*types.SignedHeader, [][]byte, error)
GetPendingData(ctx context.Context) ([]*types.SignedData, [][]byte, error)
SetLastSubmittedHeaderHeight(ctx context.Context, height uint64)
GetLastSubmittedHeaderHeight() uint64
SetLastSubmittedDataHeight(ctx context.Context, height uint64)
GetLastSubmittedDataHeight() uint64
NumPendingHeaders() uint64
NumPendingData() uint64
}
PendingManager provides operations for managing pending headers and data.