How is the domain scheduler supposed to work with SMP configurations?

Is the domain scheduler supposed to have a separate instance per SMP node, or a single global instance?

If I understand the SMP code correctly, there is currently one single domain schedule that all cores follow in lock-step.

This may be purely historical: the domain scheduler was written for uni-core systems, and the current version is probably just the simplest generalisation to SMP. It is unclear to me how well tested the combination of SMP and the domain scheduler is. Is there anyone from the systems team who could shed light on that?

From the infoflow perspective, the lock-step behaviour makes sense, but there are no proofs about it.

The current implementation doesn’t seem correct under either interpretation: a single global domain schedule or a per-core schedule.

If it is a single global instance, then domain time should be tracked on only one designated CPU node. Currently, the time consumed on every core is billed against the single domain-time variable, so a domain’s budget drains roughly N times too fast on N cores. Additionally, on a domain switch only the core that observes the change makes a rescheduling decision; the other cores keep running threads from the old domain until the scheduler next happens to be invoked on them.

If it is a separate instance per CPU node, then the scheduler’s global variables are missing per-core declarations and are incorrectly shared across all cores.

This may well be a never-looked-at configuration combination.

AFAIK the domain scheduler implementation for SMP was made to compile and that’s it.

To be consistent with ARINC 653, it should be a single global domain schedule, I think.

I don’t think a per-core domain scheduler would make sense anyway, as it would no longer provide isolation between scheduling domains.