EntropyEngine::Core::Concurrency::DirectScheduler

The “just give me work!” scheduler - absolute minimum overhead.

#include <DirectScheduler.h>

Inherits from EntropyEngine::Core::Concurrency::IWorkScheduler

Name
~DirectScheduler() override = default
virtual ScheduleResult selectNextGroup(const std::vector< WorkContractGroup * > & groups) override
Finds work by scanning from the start.
virtual const char * getName() const override
Returns “Direct”.
DirectScheduler(const Config & config)
Constructs the world’s simplest scheduler.

Public Classes inherited from EntropyEngine::Core::Concurrency::IWorkScheduler

Name
struct ScheduleResult
Result of a scheduling decision.
struct Config
Configuration for scheduler behavior.

Public Functions inherited from EntropyEngine::Core::Concurrency::IWorkScheduler

Name
virtual ~IWorkScheduler() = default
virtual void reset()
Resets scheduler to initial state.
virtual void notifyWorkExecuted(WorkContractGroup * group, size_t threadId)
Notifies scheduler that work was successfully executed.
virtual void notifyGroupsChanged(const std::vector< WorkContractGroup * > & newGroups)
Notifies scheduler that the group list has changed.
class EntropyEngine::Core::Concurrency::DirectScheduler;

The “just give me work!” scheduler - absolute minimum overhead.

This is the scheduler equivalent of a greedy algorithm. It scans from the beginning and grabs the first group with work. No fancy logic, no state, no optimization. Just pure, simple work-finding.

This scheduler was created to isolate scheduling logic from other system overheads in benchmarking scenarios.
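
In code terms, the whole policy is a single forward scan. The stand-alone sketch below approximates that behavior with placeholder types; the real ScheduleResult and WorkContractGroup members are not shown on this page, so the hasWork() check and the group pointer field are illustrative assumptions, not the library API.

// Stand-alone sketch of the greedy scan, using placeholder types rather than
// the real EntropyEngine headers. Member names here are assumptions.
#include <cstddef>
#include <vector>

struct WorkContractGroup {
    std::size_t pendingWork = 0;                       // hypothetical count of schedulable contracts
    bool hasWork() const { return pendingWork > 0; }
};

struct ScheduleResult {
    WorkContractGroup* group = nullptr;                // nullptr means "no work anywhere"
};

// Scan front to back and take the first group that has work - that is the
// entire scheduling policy.
ScheduleResult selectNextGroupDirect(const std::vector<WorkContractGroup*>& groups) {
    for (WorkContractGroup* g : groups) {
        if (g && g->hasWork()) {
            return ScheduleResult{g};                  // the earliest group always wins
        }
    }
    return ScheduleResult{};                           // nothing to do
}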

The Good:

  • No state means no memory allocation
  • Dead simple to understand and debug
  • First groups get priority (which might be what you want)

The Bad:

  • Terrible work distribution - first groups get hammered
  • No load balancing whatsoever
  • Later groups might starve if early groups always have work
  • All threads pile onto the same groups

When to use this:

  • Benchmarking to establish absolute minimum overhead
  • Debugging to eliminate scheduler as a variable
  • When you have only one or two groups anyway
  • Testing worst-case contention scenarios

When NOT to use this:

  • Production systems (seriously, don’t)
  • Multiple groups that need fair execution
  • Any time you care about performance beyond raw overhead
// Only use this for testing!
auto scheduler = std::make_unique<DirectScheduler>(config);
// Now all threads will pile onto group[0] if it has work
~DirectScheduler() override = default
inline virtual ScheduleResult selectNextGroup(
const std::vector< WorkContractGroup * > & groups
) override

Finds work by scanning from the start.

Parameters:

  • groups Groups to scan (in order)

Return: First group with work, or nullptr

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::selectNextGroup

Returns the first group that has work. All threads converge on the same group - bad for performance, good for measuring overhead.
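
As a rough usage sketch, a worker loop driving this scheduler might look like the following. selectNextGroup and notifyWorkExecuted are the documented IWorkScheduler calls; the result.group field, the threadId plumbing, and the workerLoop helper itself are assumptions for illustration, since ScheduleResult’s members are not listed on this page.

// Hypothetical worker loop: every thread that runs this scans the same group
// order, so all of them converge on the earliest group that still has work.
#include <DirectScheduler.h>
#include <cstddef>
#include <vector>

using namespace EntropyEngine::Core::Concurrency;

void workerLoop(IWorkScheduler& scheduler,
                const std::vector<WorkContractGroup*>& groups,
                std::size_t threadId)
{
    for (;;) {
        auto result = scheduler.selectNextGroup(groups);
        if (result.group == nullptr) {   // field name assumed; not documented on this page
            break;                       // no group has pending work
        }
        // ... execute one contract from result.group here (execution API omitted) ...
        scheduler.notifyWorkExecuted(result.group, threadId);  // documented IWorkScheduler hook
    }
}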

inline virtual const char * getName() const override

Returns “Direct”.

Reimplements: EntropyEngine::Core::Concurrency::IWorkScheduler::getName

inline explicit DirectScheduler(
const Config & config
)

Constructs the world’s simplest scheduler.

Parameters:

  • config Ignored entirely

Config is ignored - this scheduler needs no configuration.
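
A minimal construction sketch, assuming the inherited IWorkScheduler::Config listed above is default-constructible (its fields are not shown on this page):

#include <DirectScheduler.h>
#include <memory>

using namespace EntropyEngine::Core::Concurrency;

std::unique_ptr<IWorkScheduler> makeDirectScheduler() {
    IWorkScheduler::Config config{};  // contents are irrelevant: DirectScheduler ignores them
    return std::make_unique<DirectScheduler>(config);
}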

