EntropyEngine::Core::Concurrency::AdaptiveRankingScheduler

Adaptive scheduler that learns from workload patterns to optimize distribution.

#include <AdaptiveRankingScheduler.h>

Inherits from EntropyEngine::Core::Concurrency::IWorkScheduler

Public Functions

Name
~AdaptiveRankingScheduler() override = default
virtual ScheduleResult selectNextGroup(const std::vector< WorkContractGroup * > & groups) override
Selects next group using adaptive ranking algorithm.
virtual void reset() override
Resets all state including thread-local data.
virtual void notifyWorkExecuted(WorkContractGroup * group, size_t threadId) override
Updates execution counters for affinity tracking.
virtual void notifyGroupsChanged(const std::vector< WorkContractGroup * > & newGroups) override
Increments generation counter to invalidate cached rankings.
virtual const char * getName() const override
Returns “AdaptiveRanking”.
AdaptiveRankingScheduler(const Config & config)
Constructs adaptive ranking scheduler with given configuration.

Public Classes inherited from EntropyEngine::Core::Concurrency::IWorkScheduler

Name
struct ScheduleResult
Result of a scheduling decision.
struct Config
Configuration for scheduler behavior.

Public Functions inherited from EntropyEngine::Core::Concurrency::IWorkScheduler

Name
virtual ~IWorkScheduler() = default
class EntropyEngine::Core::Concurrency::AdaptiveRankingScheduler;

Adaptive scheduler that learns from workload patterns to optimize distribution.

The AdaptiveRankingScheduler serves as the default scheduler implementation. It maintains thread affinity for cache locality while preventing any single group from monopolizing thread resources. The scheduler functions as an adaptive load balancer that responds dynamically to changing work patterns.

The ranking algorithm: rank = (scheduledWork / (executingWork + 1)) * (1 - executingWork / totalThreads)

This formula produces the following behavior:

  • Groups with high work volume but few threads receive maximum priority
  • Groups with existing thread allocation receive proportionally lower priority
  • Groups consuming excessive thread resources relative to total threads are penalized
  • Groups without pending work are excluded from consideration
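
As a rough illustration, the rank for a single group could be computed as below. The GroupLoad struct and its field names are placeholders for whatever per-group counters the scheduler actually tracks; this is a sketch of the documented formula, not the class's implementation.

#include <cstddef>

// Hypothetical snapshot of a group's load; field names are illustrative only.
struct GroupLoad {
    std::size_t scheduledWork;   // contracts waiting to run
    std::size_t executingWork;   // contracts currently running on worker threads
};

// Mirrors the documented formula:
// rank = (scheduledWork / (executingWork + 1)) * (1 - executingWork / totalThreads)
double computeRank(const GroupLoad& load, std::size_t totalThreads) {
    if (load.scheduledWork == 0) return 0.0;  // no pending work: effectively excluded
    double pressure = static_cast<double>(load.scheduledWork) /
                      static_cast<double>(load.executingWork + 1);
    double headroom = 1.0 - static_cast<double>(load.executingWork) /
                            static_cast<double>(totalThreads);
    return pressure * headroom;
}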

Thread affinity mechanism: Threads maintain affinity to their selected group for up to maxConsecutiveExecutionCount executions. Threads relinquish affinity when the group exhausts work or after reaching the consecutive execution limit.

Each thread maintains an independent view of group rankings through thread-local caching, updating only when necessary to minimize synchronization.
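
A minimal sketch of that caching scheme, assuming an atomic generation counter bumped by notifyGroupsChanged(); the names below are illustrative, not the scheduler's actual members.

#include <atomic>
#include <cstdint>

// Bumped whenever the group list changes; threads compare against their own copy.
std::atomic<std::uint64_t> groupListGeneration{0};

struct ThreadLocalRankings {
    std::uint64_t seenGeneration = 0;
    // ... cached, rank-ordered group pointers would live here ...

    bool stale() const {
        return seenGeneration != groupListGeneration.load(std::memory_order_acquire);
    }
    void markFresh() {
        seenGeneration = groupListGeneration.load(std::memory_order_acquire);
    }
};

thread_local ThreadLocalRankings rankings;  // each thread keeps its own independent view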

Recommended use cases: Optimal for heterogeneous workloads where groups exhibit varying work volumes or when work patterns change dynamically during execution.

Not recommended when: All groups maintain equal work distribution consistently, or when strict round-robin fairness is required. Consider RoundRobinScheduler for these scenarios.

// Configure for shorter sticky periods (more responsive to work changes)
IWorkScheduler::Config config;
config.maxConsecutiveExecutionCount = 4; // Default is 8
config.updateCycleInterval = 8; // Update rankings more often
auto scheduler = std::make_unique<AdaptiveRankingScheduler>(config);
WorkService service(wsConfig, std::move(scheduler));
~AdaptiveRankingScheduler() override = default
virtual ScheduleResult selectNextGroup(
const std::vector< WorkContractGroup * > & groups
) override

Selects next group using adaptive ranking algorithm.

Parameters:

  • groups Available work groups
  • context Current thread context

Return: Selected group or nullptr if no work available

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::selectNextGroup

Checks the current affinity group first, then traverses the ranked list, recomputing rankings when they are stale.
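
A schematic of that selection order, with WorkContractGroup forward-declared and the pending-work check supplied by the caller (both are stand-ins, not the scheduler's real interface):

#include <vector>
#include <functional>

class WorkContractGroup;  // forward declaration; only pointers are used here

// Schematic selection order: affinity group first, then the cached ranked list.
WorkContractGroup* pickGroup(
    const std::vector<WorkContractGroup*>& rankedGroups,  // assumed refreshed if stale
    WorkContractGroup* affinityGroup,                     // group this thread is stuck to
    bool affinityBudgetLeft,                              // consecutive-execution budget remaining
    const std::function<bool(WorkContractGroup*)>& hasPendingWork)
{
    // 1. Stay with the affinity group while it has work and budget remains.
    if (affinityGroup && affinityBudgetLeft && hasPendingWork(affinityGroup)) {
        return affinityGroup;
    }
    // 2. Otherwise take the first ranked group that still has pending work.
    for (WorkContractGroup* candidate : rankedGroups) {
        if (hasPendingWork(candidate)) {
            return candidate;
        }
    }
    return nullptr;  // no work available anywhere
}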

virtual void reset() override

Resets all state including thread-local data.

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::reset

Only the calling thread's state is reset immediately; other threads reset lazily on their next scheduling call.

virtual void notifyWorkExecuted(
WorkContractGroup * group,
size_t threadId
) override

Updates execution counters for affinity tracking.

Parameters:

  • group Group that work was executed from
  • threadId Thread that executed the work

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::notifyWorkExecuted

Tracks consecutive executions to determine when to release affinity.
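
One way that per-thread bookkeeping could look, with hypothetical names; this is a sketch of the technique, not the class's actual state.

#include <cstddef>

class WorkContractGroup;  // only pointers are needed here

// Hypothetical per-thread affinity record updated from notifyWorkExecuted().
struct ThreadAffinity {
    WorkContractGroup* group = nullptr;  // group this thread currently prefers
    std::size_t consecutive = 0;         // executions served from that group in a row

    // Returns true when affinity should be released so other groups get a turn.
    bool recordExecution(WorkContractGroup* executedFrom, std::size_t maxConsecutive) {
        if (executedFrom != group) {     // switched groups: restart the streak
            group = executedFrom;
            consecutive = 0;
        }
        if (++consecutive >= maxConsecutive) {
            group = nullptr;             // hit the limit; relinquish affinity
            consecutive = 0;
            return true;
        }
        return false;
    }
};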

virtual void notifyGroupsChanged(
const std::vector< WorkContractGroup * > & newGroups
) override

Increments generation counter to invalidate cached rankings.

Parameters:

  • newGroups Updated group list

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::notifyGroupsChanged

Threads detect the generation change and update their rankings; consistency is maintained without locks.

inline virtual const char * getName() const override

Returns “AdaptiveRanking”.

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::getName

explicit AdaptiveRankingScheduler(
const Config & config
)

Constructs adaptive ranking scheduler with given configuration.

Parameters:

  • config Scheduler configuration

Key parameters: maxConsecutiveExecutionCount (thread stickiness), updateCycleInterval (ranking refresh rate).
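
For contrast with the earlier example, a configuration biased toward longer affinity runs might look like this (the values are illustrative, not recommended defaults):

IWorkScheduler::Config config;
config.maxConsecutiveExecutionCount = 16; // stick to a group longer for better cache locality
config.updateCycleInterval = 32;          // refresh rankings less often to reduce overhead
auto scheduler = std::make_unique<AdaptiveRankingScheduler>(config);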

