Re: Re(2): Re(2): Re(2): Quickcam device driver for Linux
>> The
>> viewer reads in an image via the parallel port; each read of the parallel
>> port consists of a context switch from user space to kernel space and back
>> to user space again.
Is the switch to kernel space necessary in Linux? I assumed that the
ioperm() call gives a user-mode process permission to access the parallel
port i/o addresses directly.
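For reference, here's a minimal sketch of what I mean, assuming an x86 box
with glibc's <sys/io.h> helpers and the first parallel port at 0x378 (that
base address is an assumption; adjust for your hardware). Once ioperm()
succeeds, inb()/outb() execute entirely in user space, with no system call
per port read:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/io.h>     /* ioperm(), inb(), outb() on x86 Linux/glibc */

    #define LPT_BASE 0x378  /* first parallel port -- an assumption */

    int main(void)
    {
        /* Grant this process access to the three parallel-port registers
           (data, status, control).  Needs root privileges. */
        if (ioperm(LPT_BASE, 3, 1) < 0) {
            perror("ioperm");
            return EXIT_FAILURE;
        }

        /* Port I/O from user space: no kernel transition per access. */
        outb(0x00, LPT_BASE);                       /* data register   */
        unsigned char status = inb(LPT_BASE + 1);   /* status register */
        printf("status register: 0x%02x\n", status);

        /* Give the permissions back before exiting. */
        ioperm(LPT_BASE, 3, 0);
        return EXIT_SUCCESS;
    }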
>> Then the server pipes the data upstream to the viewer,
>> where each write on the pipe constitutes another context switch from user
>> space to kernel space and back again,
Of course the server only sends data up the pipe when it has
captured a whole image. If the image has an interesting depth and
size, then this write isn't going to happen more than five times
a second, so the kernel overhead is insignificant.
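Something along these lines is what I have in mind; the frame size and the
capture_frame() stand-in are hypothetical, but the point is that the server
issues one write() per complete frame, i.e. only a few system calls per
second:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical frame geometry; the real camera supports several
       resolutions and bit depths. */
    #define WIDTH  320
    #define HEIGHT 240

    /* Stand-in for the real code that clocks pixels in off the parallel
       port; here it just fills the buffer with a test pattern. */
    static void capture_frame(unsigned char *buf, size_t len)
    {
        memset(buf, 0x80, len);
    }

    /* One write() per whole frame: at a few frames per second the
       user/kernel transitions on the pipe are negligible. */
    static void serve_frames(int pipe_fd, int nframes)
    {
        static unsigned char frame[WIDTH * HEIGHT];

        while (nframes-- > 0) {
            capture_frame(frame, sizeof frame);
            if (write(pipe_fd, frame, sizeof frame) != (ssize_t)sizeof frame) {
                perror("write");
                return;
            }
        }
    }

    int main(void)
    {
        serve_frames(STDOUT_FILENO, 5);  /* e.g. stdout piped to the viewer */
        return 0;
    }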
Of course I'm just talking about the Quickcam. There are plenty of
other video devices out there, most based on add-on boards with an
attached video camera, and each with its own low-level programming
interface. I doubt that there is enough in common between them
and the Quickcam that you will be able to write a single /dev/camera
driver, or even a common programming interface that really expresses
the unique features of each camera (although you could settle for
the Video for Windows API, as that's known to work).