Originally Posted by micke_b
2. When a rt-task is running on the CPU the function task_tick_rt(..) is called every tick, 1 tick = 1ns by default. Is this correct?

I surely hope that you don't get a tick interrupt every 1 ns - even a modern dual-core processor would spend 100% of its time handling interrupts at that rate - the clock cycles on a 2 GHz processor are 0.5 ns long, so you would be performing an interrupt every other clock cycle. The time may be measured in nanoseconds, but the kernel certainly isn't interrupting on every one of them - it's either accounting for chunks of, say, 500-10000 ns at a time, or, if more precise time is required, it's using other ways to determine the time (e.g. the TSC).

Originally Posted by micke_b
3. When a rt-task gets blocked the function dequeue_task_rt(..) is called, is this correct?

Makes sense to me.

Originally Posted by micke_b
4. When a rt-task wakes up from being blocked, either enqueue_task_rt(..) or requeue_task_rt(..) is called, is this correct?

Not sure, but it sounds about right.

Originally Posted by micke_b
5. If I want to account for CPU-time in a rt-task, which clock to use? Will sched_clock() be a good choice?

If you are on a single processor core, I'd use the TSC - it's relatively precise, and as long as you are not measuring TOO often, I'd use just a RDTSC and store the 64-bit result. If you want really short intervals with good precision, you'd need a serializing instruction to ensure that the RDTSC instruction isn't "misplaced" by out-of-order execution. For example, "CPUID; RDTSC" would do the job - but this increases the overhead of RDTSC - so if you don't need a very precise measurement, you are probably better off just using RDTSC on its own.

Originally Posted by micke_b
I'm pretty sure about 1 and 2, but not so sure about 3, 4 and 5, any help is welcome.