-
milliSleep systemcall
Hi everyone.
I want to write a milliSleep system call for my kernel, but I'm not supposed to just call the nanosleep function internally with a clever parameter tweak :)
My millisleep function should look like this: int millisleep(int *amount). I've looked at the system call for nanosleep in timer.c, but it takes two struct timespec pointers as input. Here is the system call code for nanosleep:
Code:
asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp)
{
	struct timespec t;
	unsigned long expire;

	if (copy_from_user(&t, rqtp, sizeof(struct timespec)))
		return -EFAULT;

	if (t.tv_nsec >= 1000000000L || t.tv_nsec < 0 || t.tv_sec < 0)
		return -EINVAL;

	if (t.tv_sec == 0 && t.tv_nsec <= 2000000L &&
	    current->policy != SCHED_OTHER)
	{
		/*
		 * Short delay requests up to 2 ms will be handled with
		 * high precision by a busy wait for all real-time processes.
		 *
		 * It's important on SMP not to do this holding locks.
		 */
		udelay((t.tv_nsec + 999) / 1000);
		return 0;
	}

	expire = timespec_to_jiffies(&t) + (t.tv_sec || t.tv_nsec);

	current->state = TASK_INTERRUPTIBLE;
	expire = schedule_timeout(expire);

	if (expire) {
		if (rmtp) {
			jiffies_to_timespec(expire, &t);
			if (copy_to_user(rmtp, &t, sizeof(struct timespec)))
				return -EFAULT;
		}
		return -EINTR;
	}
	return 0;
}
And struct timespec is defined in time.h like this:
Code:
#ifndef _STRUCT_TIMESPEC
#define _STRUCT_TIMESPEC
struct timespec {
	time_t	tv_sec;		/* seconds */
	long	tv_nsec;	/* nanoseconds */
};
#endif /* _STRUCT_TIMESPEC */
I want to convert the nanosleep code into my millisleep function in the given format, but I don't understand the meaning of nanosleep's two arguments. My idea was to make the function sleep for 1 ns and then call it in a for loop amount*1000000 times. How can I convert this code, and what is the meaning of the arguments of the nanosleep function?
(By the way, sorry for my English :) there may be some mistakes in the text.)
-
Using select() with only a timeout specified is another way of getting reasonably high-resolution (sub-second) timeouts.
-
You may want to read this:
SourceForge.net: POSIX timers - cpwiki
"Unfortunately, while accurately timing an event in usecs is possible, on a normal Linux system scheduling latency makes it impossible to accurately ask for a delay with finer granularity than 10 milliseconds. You can test this yourself by calling nanosleep with a 1000000 nanosecond (1 millisecond) delay 10000 times -- it will work out to much more than 10 seconds. However, if you ask for 10000000 nsecs (1/100th second) 1000 times, you will get exactly 10 seconds."
-
So is there any way to convert the algorithm? And can you explain the meaning of the arguments of the nanosleep function? There are two struct pointer arguments (and the struct itself has two members -- tv_sec and tv_nsec).
-
Okay, well here is the example from that page, which I actually wrote, so hopefully I understand it :p
Code:
#include <time.h>

void gap (int secs, int hundredths) {
	struct timespec interval;
	interval.tv_sec = secs;
	interval.tv_nsec = hundredths * 10000000L;  /* one hundredth = 10000000 ns */
	nanosleep(&interval, NULL);
}
I have not fooled around with the second arg, the remaining time, and it is not essential, so here it's just NULL. The first arg is the duration: you declare a struct timespec and set its values appropriately. I think it is pretty self-explanatory -- tv_sec is whole seconds, tv_nsec is a fraction of a second in billionths, but you should round it off to the nearest 10000 nanosecs since the kernel scheduler will not give you a resolution higher than that. In other words:
Code:
gap(2,25);
would be a 2.25 second sleep.
-
Thanks for your help :)
I also found this, which works fine:
Code:
#include <stdio.h>
#include <sys/time.h>

void millisleep(int i)
{
	struct timeval tv;
	tv.tv_sec = 0;
	tv.tv_usec = i * 1000;
	select(0, NULL, NULL, NULL, &tv);
}
But when I try to add it as a system call, it gives me an error while recompiling the kernel.
I added a <linux/milliSleep> header (which I wrote for the new kernel entry). The other header file is not working correctly in my compilation: it gives an error for milliSleep.o in the /usr/src/linux/kernel directory. I think I couldn't include the <sys/time.h> header correctly.
-
I don't do a lot of kernel programming, but I do know that the entire standard library is not available there; e.g., even stdio.h is off limits, if I recall...
Actually, if you want to implement a delay in kernel space, google "bogomips" and "jiffies" -- I think bogomips is the number of jiffies that have elapsed since boot; there is a kernel-space global variable for it. Jiffies per second is basically your clock frequency, determined at boot time.
-
Thanks for your great help, MK27. I will post the solution as soon as I correct my errors :)
-
One way to wait (and ensure that you know what is happening during the wait) is to execute a wait_event_timeout() with a 0 as the second parameter. You'll have to use something like
Code:
#define msec(x) ((x)*HZ/1000)
as the last parameter (this converts milliseconds to jiffies, I believe). Also, you'll need a wait queue as the first parameter.
-
Guess it depends on what the definition of HZ is ;) but probably it is something like this. Going by "Essential Linux Device Drivers" (good book), a "jiffy" is the time interval between two ticks of the system timer; jiffies is actually the number of such intervals since boot (this is a correction of my last post). BogoMIPS (Bogus Millions of Instructions Per Second) is based on a variable called loops_per_jiffy, which is initially set to 4096 and then calibrated upward something like this:
Code:
while ((loops_per_jiffy <<= 1) != 0) {
	ticks = jiffies;
	__delay(loops_per_jiffy);	/* current calibration */
	ticks = jiffies - ticks;
	if (ticks)
		break;
}
__delay() is the loop that runs loops_per_jiffy times. There is more to this; values for the lower bits are calculated after the MSB is found, like remainders.
Apparently BogoMIPS then equals loops_per_jiffy * jiffies per second * number of processor instructions in __delay(), all divided by one million. This is the number the kernel reports at boot (you might have to turn your splash screen off to notice it, or check /var/log/dmesg), and it should correspond to your processor frequency; e.g., mine reports 4400.26 BogoMIPS on a 2x2.2 GHz processor.
There is a section on Kernel Timers in that book too, and in every kernel programming oriented book I've seen. I would not want to bother trying to hack kernel space without one!
jiffies = ticks since boot
HZ = jiffies per second
A jiffy is not the lowest granularity of time, however. The kernel API includes:
mdelay() -- millisecond sleep
udelay() -- microsecond sleep
ndelay() -- nanosecond sleep
I haven't tried them, but I would assume that because you are not mediated by the kernel scheduler in kernel space itself, these should be really accurate. However, they are by necessity busy waits -- unlike the user-space sleep functions, they do hog the processor!
-
I tried to modify the nanosleep system call, but it gave a segmentation fault. My code is:
Code:
asmlinkage int sys_millisleep(int amount)
{
	struct timespec *rqtp;
	struct timespec *rmtp;
	struct timespec t;
	unsigned long expire;

	rmtp = NULL;
	rqtp->tv_sec = 0;
	rqtp->tv_nsec = 1000000 * (long)amount;

	if (copy_from_user(&t, rqtp, sizeof(struct timespec)))
		return -EFAULT;
	...
The rest of the code is the same as nanosleep. MK27 said that udelay can cause overflows for big values, so I changed rqtp->tv_nsec = 1000000*(long)amount to rqtp->tv_nsec = 1000000 and repeated the rest of the process "amount" times in a for loop. The segmentation fault disappeared, but now when I call millisleep(10) it waits much more than 10 milliseconds (I feel like it waits 1 sec :) ). What can I do to correct that?
-
Dude! You don't have any memory!!! struct timespec *rqtp should have some memory associated with it. So do something like removing the *, and in copy_from_user pass it via &.
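In other words, something along these lines (a sketch only, modeled on the 2.4-era sys_nanosleep quoted earlier in the thread and not tested against a real tree; since the timespec is now built entirely in kernel space, the copy_from_user() step can be dropped rather than fixed):

```c
asmlinkage long sys_millisleep(int amount)
{
	struct timespec t;
	unsigned long expire;

	if (amount < 0)
		return -EINVAL;

	/* Build the request on the kernel stack -- no user pointer involved. */
	t.tv_sec = amount / 1000;
	t.tv_nsec = (amount % 1000) * 1000000L;

	expire = timespec_to_jiffies(&t) + (t.tv_sec || t.tv_nsec);
	current->state = TASK_INTERRUPTIBLE;
	expire = schedule_timeout(expire);

	return expire ? -EINTR : 0;
}
```

The tail mirrors sys_nanosleep minus the rmtp copy-back, since this interface has no "remaining time" output parameter.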