EntropyEngine::Core::Concurrency::AdaptiveRankingScheduler
Adaptive scheduler that learns from workload patterns to optimize distribution.
#include <AdaptiveRankingScheduler.h>
Inherits from EntropyEngine::Core::Concurrency::IWorkScheduler
Public Functions
| Name | |
|---|---|
| ~AdaptiveRankingScheduler() override =default | |
| virtual ScheduleResult | selectNextGroup(const std::vector< WorkContractGroup * > & groups) override Selects next group using adaptive ranking algorithm. |
| virtual void | reset() override Resets all state including thread-local data. |
| virtual void | notifyWorkExecuted(WorkContractGroup * group, size_t threadId) override Updates execution counters for affinity tracking. |
| virtual void | notifyGroupsChanged(const std::vector< WorkContractGroup * > & newGroups) override Increments generation counter to invalidate cached rankings. |
| virtual const char * | getName() const override Returns “AdaptiveRanking”. |
| AdaptiveRankingScheduler(const Config & config) Constructs adaptive ranking scheduler with given configuration. |
Additional inherited members
Public Classes inherited from EntropyEngine::Core::Concurrency::IWorkScheduler
| Name | |
|---|---|
| struct | ScheduleResult Result of a scheduling decision. |
| struct | Config Configuration for scheduler behavior. |
Public Functions inherited from EntropyEngine::Core::Concurrency::IWorkScheduler
| Name | |
|---|---|
| virtual | ~IWorkScheduler() =default |
Detailed Description
class EntropyEngine::Core::Concurrency::AdaptiveRankingScheduler;
Adaptive scheduler that learns from workload patterns to optimize distribution.
The AdaptiveRankingScheduler serves as the default scheduler implementation. It maintains thread affinity for cache locality while preventing any single group from monopolizing thread resources. The scheduler functions as an adaptive load balancer that responds dynamically to changing work patterns.
The ranking algorithm: rank = (scheduledWork / (executingWork + 1)) * (1 - executingWork / totalThreads)
This formula produces the following behavior:
- Groups with high work volume but few threads receive maximum priority
- Groups with existing thread allocation receive proportionally lower priority
- Groups consuming excessive thread resources relative to total threads are penalized
- Groups without pending work are excluded from consideration
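To make these weights concrete, here is a minimal standalone sketch that evaluates the documented ranking formula for a few hypothetical group states. The function and variable names are purely illustrative and are not part of the scheduler's API.
```cpp
#include <cstddef>
#include <cstdio>

// Illustrative only: evaluates the documented ranking formula for a
// hypothetical group state. Higher rank means higher scheduling priority.
double computeRank(std::size_t scheduledWork, std::size_t executingWork,
                   std::size_t totalThreads) {
    double pressure = static_cast<double>(scheduledWork) /
                      (static_cast<double>(executingWork) + 1.0);
    double saturationPenalty = 1.0 - static_cast<double>(executingWork) /
                                     static_cast<double>(totalThreads);
    return pressure * saturationPenalty;
}

int main() {
    const std::size_t totalThreads = 8;
    // Backlogged group with no threads serving it yet: highest rank.
    std::printf("backlogged, idle:  %.2f\n", computeRank(100, 0, totalThreads));
    // Same backlog already served by 4 of 8 threads: rank drops sharply.
    std::printf("backlogged, busy:  %.2f\n", computeRank(100, 4, totalThreads));
    // Group consuming nearly all threads: heavily penalized.
    std::printf("near-saturated:    %.2f\n", computeRank(100, 7, totalThreads));
}
```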
Thread affinity mechanism: Threads maintain affinity to their selected group for up to maxConsecutiveExecutionCount executions. Threads relinquish affinity when the group exhausts work or after reaching the consecutive execution limit.
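A minimal sketch of that affinity rule, assuming each worker tracks its current group and a consecutive-execution counter; the struct and field names below are illustrative, not the scheduler's internal members.
```cpp
#include <cstddef>

class WorkContractGroup;   // forward declaration, for illustration only

// Illustrative pseudologic for the affinity rule: a thread stays with its
// current group until the group runs out of work or the consecutive
// execution limit is reached.
struct ThreadAffinity {
    WorkContractGroup* group = nullptr;     // group this thread is stuck to
    std::size_t consecutiveExecutions = 0;  // bumped as work is executed
};

bool shouldKeepAffinity(const ThreadAffinity& affinity,
                        std::size_t maxConsecutiveExecutionCount,
                        bool groupHasPendingWork) {
    return affinity.group != nullptr
        && groupHasPendingWork
        && affinity.consecutiveExecutions < maxConsecutiveExecutionCount;
}
```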
Each thread maintains an independent view of group rankings through thread-local caching, updating only when necessary to minimize synchronization.
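The generation-counter pattern this implies can be sketched roughly as follows, assuming an atomic shared generation and a per-thread cached copy; the names and structure are illustrative only.
```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Illustrative generation-counter pattern: each thread keeps its own ranked
// view and rebuilds it only when the shared generation has advanced.
std::atomic<std::uint64_t> g_rankingGeneration{0};

struct LocalRankingCache {
    std::uint64_t seenGeneration = 0;
    std::vector<int> rankedGroupIndices;   // cached order, thread-local
};

void refreshIfStale(LocalRankingCache& cache) {
    const std::uint64_t current = g_rankingGeneration.load(std::memory_order_acquire);
    if (cache.seenGeneration != current) {
        // Recompute rankings for this thread only; no lock is taken.
        cache.rankedGroupIndices.clear();  // ...rebuild from current group stats...
        cache.seenGeneration = current;
    }
}

// Elsewhere (e.g. when the group list changes), the shared counter is bumped:
// g_rankingGeneration.fetch_add(1, std::memory_order_release);
```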
Recommended use cases: Optimal for heterogeneous workloads where groups exhibit varying work volumes or when work patterns change dynamically during execution.
Not recommended when: All groups maintain equal work distribution consistently, or when strict round-robin fairness is required. Consider RoundRobinScheduler for these scenarios.
```cpp
// Configure for shorter sticky periods (more responsive to work changes)
IWorkScheduler::Config config;
config.maxConsecutiveExecutionCount = 4; // Default is 8
config.updateCycleInterval = 8;          // Update rankings more often

auto scheduler = std::make_unique<AdaptiveRankingScheduler>(config);
WorkService service(wsConfig, std::move(scheduler));
```
Public Functions Documentation
function ~AdaptiveRankingScheduler
~AdaptiveRankingScheduler() override =default
function selectNextGroup
virtual ScheduleResult selectNextGroup(const std::vector< WorkContractGroup * > & groups) override
Selects next group using adaptive ranking algorithm.
Parameters:
- groups Available work groups
Return: Selected group or nullptr if no work available
Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::selectNextGroup
Checks current affinity group first, then traverses ranked list. Recomputes rankings when stale.
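A hedged sketch of how a worker loop might drive this method together with notifyWorkExecuted. It assumes ScheduleResult exposes the selected group as a `group` member (inferred from the return description above), and leaves actual work execution as a comment since that API belongs to WorkContractGroup, not this class.
```cpp
#include <AdaptiveRankingScheduler.h>
#include <cstddef>
#include <vector>

using namespace EntropyEngine::Core::Concurrency;

// Illustrative worker-loop shape, not the actual WorkService implementation.
// Assumes ScheduleResult exposes the selected group as a `group` member.
void workerLoop(IWorkScheduler& scheduler,
                const std::vector<WorkContractGroup*>& groups,
                std::size_t threadId) {
    while (true) {
        auto result = scheduler.selectNextGroup(groups);
        if (result.group == nullptr) {
            break;                              // no work available anywhere
        }
        // ... execute one unit of work from result.group (API not shown) ...
        scheduler.notifyWorkExecuted(result.group, threadId);
    }
}
```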
function reset
virtual void reset() override
Resets all state including thread-local data.
Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::reset
Only resets the calling thread's state; other threads reset lazily on their next scheduling call.
function notifyWorkExecuted
virtual void notifyWorkExecuted(WorkContractGroup * group, size_t threadId) override
Updates execution counters for affinity tracking.
Parameters:
- group Group that work was executed from
- threadId Thread that executed the work
Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::notifyWorkExecuted
Tracks consecutive executions to determine when to release affinity.
function notifyGroupsChanged
virtual void notifyGroupsChanged(const std::vector< WorkContractGroup * > & newGroups) override
Increments generation counter to invalidate cached rankings.
Parameters:
- newGroups Updated group list
Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::notifyGroupsChanged
Threads detect the generation change and rebuild their rankings on their next scheduling pass; consistency is maintained without locks.
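A small caller-side sketch, assuming the caller owns the group list: whenever that list changes, it is handed back to the scheduler so each thread's cached ranking is invalidated. The surrounding bookkeeping is illustrative.
```cpp
#include <AdaptiveRankingScheduler.h>
#include <vector>

using namespace EntropyEngine::Core::Concurrency;

// Illustrative caller-side bookkeeping: after adding (or removing) a group,
// pass the updated list so each thread's cached ranking is rebuilt lazily.
std::vector<WorkContractGroup*> activeGroups;   // owned by the caller in practice

void addGroup(IWorkScheduler& scheduler, WorkContractGroup* newGroup) {
    activeGroups.push_back(newGroup);
    scheduler.notifyGroupsChanged(activeGroups);  // bumps the generation counter
}
```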
function getName
inline virtual const char * getName() const override
Returns “AdaptiveRanking”.
Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::getName
function AdaptiveRankingScheduler
explicit AdaptiveRankingScheduler(const Config & config)
Constructs adaptive ranking scheduler with given configuration.
Parameters:
- config Scheduler configuration
Key parameters: maxConsecutiveExecutionCount (thread stickiness), updateCycleInterval (ranking refresh rate).