
EyeToy Driver

Posted: Mon Dec 27, 2004 12:04 pm
by LionX

I'm currently working on the driver for the EyeToy. I just got a small part of it to work. The EyeToy uses a standard chip inside. Here is an ELF sample of it: //-- the link was removed because of copyrights --//

Re: EyeToy Driver

Posted: Wed Dec 29, 2004 10:02 am
by LionX
here is the driver
http://cvs.ps2dev.org/ps2cam/




Re: EyeToy Driver

Posted: Tue Feb 15, 2005 12:21 pm
by LionX
Well, here is the first sample to show that ps2dev.org's EyeToy driver is almost done:
// the link was removed because of copyrights

Posted: Fri Feb 25, 2005 10:34 am
by modman
Wow, really cool!

Any progress? It runs for less than 30 seconds on my PS2... I'm guessing this was just a proof of concept.

eyetoy driver

Posted: Sun Mar 13, 2005 10:21 pm
by LionX
Well, it looks like v1.0 is done (I hope I didn't leave anything out). The files are in CVS in ps2sdk:


ps2sdk/ee/rpc/ps2cam
ps2sdk/iop/usb/ps2cam

ps2 eyetoy driver

Posted: Sun Mar 13, 2005 10:24 pm
by LionX
next project: PS2 Entertainment Center

Posted: Mon Mar 14, 2005 3:40 am
by J.F.
It doesn't use the IPU, but it's better than nothing. :)

It would be nice if the folks doing the driver using the IPU (over on ps2linux) got permission to release it.

ipu

Posted: Mon Mar 14, 2005 11:09 pm
by LionX
This driver most likely will not use the IPU; it's up to the JPEG decompression library to use the IPU.

Posted: Tue Mar 15, 2005 1:15 am
by mrbrown
The IPU is useless for JPEG decompression.

Posted: Tue Mar 15, 2005 2:22 am
by Guest
mrbrown wrote:The IPU is useless for JPEG decompression.
Why is this? Are the various methods of JPEG encoding outside the capabilities of the IPU?

I am just curious, because if you take away the motion compensation function of MPEG-2 decoding, you are left with essentially a JPEG. The IPU does NOT do motion compensation anyway (it must be done on the EE core), so it seems that the IPU is nothing more than a fancy JPEG decompressor.

While I am sure your experience with PS2 technology allows you to understand clearly why the IPU is useless for JPEG decompression, I am interested to learn more about what these limitations are. :)

From the MPEG2 FAQ:
41. How do MPEG and JPEG differ?

A. The most fundamental difference is MPEG's use of block-based motion
compensated prediction (MCP)---a method falling into the general category of
temporal DPCM.

The second most fundamental difference is in the target application.
JPEG adopts a general purpose philosophy: independence from color space
(up to 255 components per frame) and quantization tables for each
component. Extended modes in JPEG include two sample precision (8 and
12 bit sample accuracy), combinations of frequency progressive, spatial
hierarchically progressive, and amplitude (point transform) progressive
scanning modes. Further color independence is made possible thanks to
downloadable Huffman tables (up to one for each component.)

Since MPEG is targeted for a set of specific applications, there is only
one color space (4:2:0 YCbCr), one sample precision (8 bits), and one
scanning mode (sequential). Luminance and chrominance share quantization
and VLC tables. MPEG adds adaptive quantization at the macroblock (16 x
16 pixel area) layer. This permits both smoother bit rate control and
more perceptually uniform quantization throughout the picture and image
sequence. However, adaptive quantization is part of the Enhanced JPEG
charter (ISO/IEC 10918-3) currently in verification stage. MPEG variable
length coding tables are non-downloadable, and are therefore optimized
for a limited range of compression ratios appropriate for the target
applications.

The local spatial decorrelation methods in MPEG and JPEG are very
similar. Picture data is block transform coded with the two-dimensional
orthonormal 8x8 DCT, with asymmetric basis vectors about time (aka DCT-
II). The resulting 63 AC transform coefficients are mapped in a zig-zag
pattern (or alternative scan pattern in MPEG-2) to statistically
increase the runs of zeros. Coefficients of the vector are then
uniformly scalar quantized, run-length coded, and finally the run-length
symbols are variable length coded using a canonical (JPEG) or modified
Huffman (MPEG) scheme. Global frame redundancy is reduced by 1-D DPCM
of the block DC coefficients, followed by quantization and variable
length entropy coding of the quantized DC coefficient.

Posted: Tue Mar 15, 2005 3:42 am
by mrbrown
See the EE docs for IPU limitations. And no, sorry, taking away motion compensation from MPEG2 does not leave you with JPEG :).

Posted: Tue Mar 15, 2005 3:47 am
by Guest
mrbrown wrote:See the EE docs for IPU limitations. And no, sorry, taking away motion compensation from MPEG2 does not leave you with JPEG :).
Damn. ;)

Posted: Tue Mar 15, 2005 6:20 am
by bigboss
The Sony EyeToy driver captures IPU frames directly, sends them to the EE with sifcmd, and then you only need the IPU to get RGBA format to send to the GS, so no extra libjpg stuff is needed. The main differences from your driver are:

- sifcmd stuff instead of RPC to get frames
- IPU on the EE side instead of libjpg
- isoc transfers: Sony uses a multi-isoc transfer, so they get one callback every 8 isoc frames instead of one callback per isoc frame. The usbd from Napalm doesn't have this function, so you can only use it by loading the Sony IMG first
- 896 bytes per isoc frame request instead of 384

On the other hand, the IOP side of your driver is GPL-based, so I believe it may be incompatible with the AFL license of ps2sdk.

Posted: Thu Mar 17, 2005 8:45 am
by Saotome
Hey Lion,
Could you please commit another example to CVS showing how to read a video stream (i.e. not just one frame) from the EyeToy?
I'm having some problems with your driver:
I'm initializing it the same way you do in the test example. I'm using 320x240 at 25Hz and I get a picture from the EyeToy, but the PS2CamExtractFrame() function seems pretty slow (>30,000,000 cycles), and if I call it too often it hangs after a few seconds.

Maybe I did something wrong when compiling the IRX (?); I've never compiled IRXs before. And I'm not using the newest version of ps2sdk; could that be a problem?

Posted: Fri Mar 18, 2005 12:49 am
by Saotome
OK, looks like it was a compiler issue. The IRX I got from the ps2cam directory in CVS works (without freezing), but it's still slow. Here are some lines of the inlink log:

Code:

extframe cycles:46931560
extframe cycles:33658300
extframe cycles:50477188
extframe cycles:36317470
extframe cycles:53219548
extframe cycles:33662128
i get the cycle-count with this:

Code:

startPs2Perf();
camfrmsiz = PS2CamExtractFrame(camdevid, picbuf, 16384);
stopPs2Perf();
printf("extframe cycles:%d \n", getPs2PerfPC0());
lion: if this is normal, I hope you have some ideas for optimizing it ;)

Posted: Fri Mar 18, 2005 6:18 am
by bigboss
Saotome, mavy will be glad to see someone using Ps2Perf :P

About your issue: lion should redesign the driver a little. Right now you get frames like snapshots, so you have to wait too long to get each one from the IOP, and while you are doing other things on the EE side the driver just sits idle, only checking for commands. If you want real streaming support, the IOP side needs to do the work quickly: grab the frames from the EyeToy and send the IPU frames to the EE through a double buffer using sifcmd, leaving the RPC interface only to enable/disable streaming or take a static snapshot. That would let you do other work on the EE side without having to wait for the IOP to capture and send a new frame like you do now.

The other thing for lion to do is use the IPU instead of libjpg.