[TriLUG] High resolution timer calls and the kernel
P. L. Charles Fischer
PLCFischer at nc.rr.com
Tue Jan 13 13:46:22 EST 2004
I have had some strange timing problems as well. Try the following call
when running as root.
/************************************************************************/
#include <pthread.h>
#include <sched.h>

int osi_thread_priority_type(int realtime)
{
    pthread_t self;
    int error;
    int policy;
    struct sched_param param;
    int min, max;

    self = pthread_self();
    error = pthread_getschedparam(self, &policy, &param);
    if (error != 0)
        return(-1);

    if (realtime == 0)
        error = pthread_setschedparam(self, SCHED_OTHER, &param);
    else
    {
        min = sched_get_priority_min(SCHED_RR);
        max = sched_get_priority_max(SCHED_RR);
        param.sched_priority = (max + min) / 2;
        error = pthread_setschedparam(self, SCHED_RR, &param);
    }

    if (error != 0)
        return(-2);
    return(0);
}
/************************************************************************/
This will switch the calling thread to round-robin (SCHED_RR) scheduling,
which I have found works much better with real-time programs.
As for the gettimeofday problem, I think you will find that on most systems
it is +/-10 msec. It is worse on dual-processor systems (about +/-15 msec).
Good luck
Charles Fischer
At 12:41 PM 1/13/2004 -0500, you wrote:
>I have a real-time computer vision project I'm working on that does some
>analysis of incoming 10 fps video from a pair of webcams. The cycle of the
>cameras is reliable so each frame comes in 100 msec after the last. The only
>exception is the first frame which seems to take a while as the camera
>initializes.
>
>So what I want to do is as much processing as possible on the existing frame
>before switching to the next frame. I have an algorithm that can be cut
>short and still be useful. Rather than alter the parameters to get the frame
>rate I want, I have the frame rate and want to tune the parameters in
>real-time to get as much done as possible without exceeding a 100 msec
>curfew.
>
>I ran into a few strange things when it came to keeping track of the time in
>small increments. Was hoping someone could explain.
>
>I think the timer call I used was gettimeofday(). One call takes about 2
>usec, which is small compared to the 1300 usec or so for one frame of video
>capture, or the curfew of 100 msec = 100000 usec.
>
>If I make a loop of repeated calls for the time, however, the duration of
>these calls increases. I made a counter for how many times I could call for
>the time before breaking 100 msec, along with capturing video at that rate.
>It looks like this:
>
>Frame Loops
>0 1
>1 399935
>2 383604
>3 367687
>...
>21 1178
>22 1
>23 1
>
>So if you do nothing but ask for the time, like an annoying kid screaming
>"are we there yet?", the response goes from immediate to quite slow. I don't
>think this happens when there is real processing that puts some delay between
>the calls. I tried to simulate this with nanosleep() but that actually
>sleeps a good bit more than I requested, so I didn't get very far with a
>control to compare to.
>
>My guess was the kernel throttled back on my process because it was making
>too many calls that required a kernel response. Not knowing crap about kernels I
>thought I'd raise the question here and see if I could get help. Hardware is
>Logitech QuickCam 4000, kernel tested was a while back, 2.4.20 or so.
>
>I recently heard that gettimeofday() is only accurate to 18 msec or so. I
>thought it was working better for me but I need to double-check. The
>behavior above is still relevant.
>
>Cheers
>Sam
>
>--
>TriLUG mailing list : http://www.trilug.org/mailman/listinfo/trilug
>TriLUG Organizational FAQ : http://trilug.org/faq/
>TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
>TriLUG PGP Keyring : http://trilug.org/~chrish/trilug.asc