EyeToy Driver
I'm currently working on a driver for the EyeToy. I just got a small part of it working. The EyeToy uses a standard chip inside. Here is a sample ELF: //-- the link was removed because of copyrights --//
Last edited by LionX on Mon Mar 14, 2005 1:15 am, edited 1 time in total.
Re: EyeToy Driver
here is the driver
http://cvs.ps2dev.org/ps2cam/
[quote="LionX"]
I'm currently working on a driver for the EyeToy. I just got a small part of it working. The EyeToy uses a standard chip inside. Here is a sample ELF:
//-- the link was removed because of copyrights --//
[/quote]
Last edited by LionX on Mon Mar 14, 2005 1:16 am, edited 2 times in total.
Re: EyeToy Driver
Well, here is the first sample to show that ps2dev.org's EyeToy driver is almost done:
// the link was removed because of copyrights
Last edited by LionX on Mon Mar 14, 2005 1:15 am, edited 1 time in total.
Wow, really cool!
Any progress? It runs < 30 seconds on my PS2... I'm guessing this was just a proof of concept.
SCPH-50001/N
HD SCPH-20401 U
Eyetoy SLEH-00031
Network Adaptor SCPH-10281
Logitech Z680 via Fiber w00t!
Sony Wega TV + USB Keyboard
http://staff.philau.edu/barberej/
eyetoy driver
Well, it looks like v1.0 is done (I hope I didn't leave anything out). The files are in CVS, in ps2sdk:
ps2sdk/ee/rpc/ps2cam
ps2sdk/iop/usb/ps2cam
ps2 eyetoy driver
next project: PS2 Entertainment Center
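Since only the CVS paths are given here, a rough usage sketch may help. Of the calls below, only PS2CamExtractFrame() and its arguments actually appear later in this thread; the module load order and the init/open names are assumptions from memory of the ps2cam headers, so verify them against ps2sdk/ee/rpc/ps2cam before relying on them.
Code: Select all
#include <kernel.h>
#include <sifrpc.h>
#include <loadfile.h>
#include <ps2cam.h>

static unsigned char picbuf[16384] __attribute__((aligned(64)));

int main(void)
{
    int camdevid, camfrmsiz;

    SifInitRpc(0);
    /* usbd.irx must already be resident before the camera module */
    SifLoadModule("host:ps2cam.irx", 0, NULL);

    /* assumed init/open sequence -- check ps2cam.h for the real names */
    PS2CamInit(0);
    camdevid = PS2CamOpenDevice(0);

    for (;;) {
        /* blocks until the IOP has captured and transferred one frame */
        camfrmsiz = PS2CamExtractFrame(camdevid, picbuf, sizeof(picbuf));
        if (camfrmsiz > 0) {
            /* picbuf now holds one compressed frame of camfrmsiz bytes */
        }
    }
}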
mrbrown wrote: The IPU is useless for JPEG decompression.
Why is this? Are the various methods of JPEG encoding outside the capabilities of the IPU?
I am just curious, because if you take away the motion compensation function of MPEG2 decoding, you are left with essentially a JPEG. The IPU does NOT do motion compensation anyway (it must be done on the EE core), so it seems that the IPU is nothing more than a fancy JPEG decompressor.
While I am sure your experience with PS2 technology allows you to understand clearly why the IPU is useless for JPEG decompression, I am interested to learn more about what these limitations are. :)
From the MPEG2 FAQ:
41. How do MPEG and JPEG differ?
A. The most fundamental difference is MPEG's use of block-based motion compensated prediction (MCP), a method falling into the general category of temporal DPCM.

The second most fundamental difference is in the target application. JPEG adopts a general purpose philosophy: independence from color space (up to 255 components per frame) and quantization tables for each component. Extended modes in JPEG include two sample precisions (8 and 12 bit sample accuracy), combinations of frequency progressive, spatial hierarchically progressive, and amplitude (point transform) progressive scanning modes. Further color independence is made possible thanks to downloadable Huffman tables (up to one for each component).

Since MPEG is targeted for a set of specific applications, there is only one color space (4:2:0 YCbCr), one sample precision (8 bits), and one scanning mode (sequential). Luminance and chrominance share quantization and VLC tables. MPEG adds adaptive quantization at the macroblock (16 x 16 pixel area) layer. This permits both smoother bit rate control and more perceptually uniform quantization throughout the picture and image sequence. However, adaptive quantization is part of the Enhanced JPEG charter (ISO/IEC 10918-3), currently in verification stage. MPEG variable length coding tables are non-downloadable, and are therefore optimized for a limited range of compression ratios appropriate for the target applications.

The local spatial decorrelation methods in MPEG and JPEG are very similar. Picture data is block transform coded with the two-dimensional orthonormal 8x8 DCT, with asymmetric basis vectors about time (aka DCT-II). The resulting 63 AC transform coefficients are mapped in a zig-zag pattern (or alternative scan pattern in MPEG-2) to statistically increase the runs of zeros. Coefficients of the vector are then uniformly scalar quantized, run-length coded, and finally the run-length symbols are variable length coded using a canonical (JPEG) or modified Huffman (MPEG) scheme. Global frame redundancy is reduced by 1-D DPCM of the block DC coefficients, followed by quantization and variable length entropy coding of the quantized DC coefficient.
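To make the shared zig-zag step concrete, here is a small, self-contained C illustration. The scan-order table is the standard one used by both baseline JPEG and MPEG; the function name is just for illustration.
Code: Select all
/* Standard 8x8 zig-zag scan order shared by baseline JPEG and MPEG:
 * entry i gives the raster position (row*8 + col) of the i-th
 * coefficient in the encoded stream. */
static const int zigzag[64] = {
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63
};

/* Undo the zig-zag: scatter the 64 coefficients decoded from the
 * bitstream back into raster order, ready for the inverse DCT. */
void dezigzag(const short in[64], short out[64])
{
    int i;
    for (i = 0; i < 64; i++)
        out[zigzag[i]] = in[i];
}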
The Sony EyeToy driver captures IPU frames directly and sends them to the EE with sifcmd; then you only need the IPU to convert to RGBA format to send to the GS, so no extra libjpg stuff is needed. The main differences from your driver are:
- sifcmd stuff instead of RPC to get frames
- IPU on the EE side instead of libjpg
- isoc transfers: Sony uses a multi-isoc transfer, so they get one callback every 8 isoc frames instead of one callback per isoc frame. The usbd from Napalm does not have this function, so you can only get it by loading Sony's IMG first
- 896 bytes per isoc frame request instead of 384
On the other hand, the IOP side of your driver is GPL-based, so I believe it may be incompatible with ps2sdk's AFL license.
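As a rough sketch of what the sifcmd approach looks like from the EE side: the command id, packet fields, and handler body below are invented for illustration, and only SifAddCmdHandler/SifCmdHeader_t are assumed from ps2sdk's sifcmd API, so check sifcmd.h for exact signatures.
Code: Select all
#include <kernel.h>
#include <sifcmd.h>

#define CAM_FRAME_CMD 0x20       /* hypothetical command id for frame packets */

typedef struct {
    SifCmdHeader_t header;       /* mandatory sifcmd packet header */
    int frame;                   /* frame counter from the IOP */
    int size;                    /* bytes of IPU data DMA'd with this packet */
} cam_frame_pkt_t;

static volatile int last_frame = -1;

/* Runs at interrupt level on the EE each time the IOP pushes a packet;
 * the frame payload itself arrives via the SIF DMA attached to the
 * packet. Keep this short: note the new frame and wake the main loop. */
static void cam_frame_handler(void *data, void *harg)
{
    cam_frame_pkt_t *pkt = (cam_frame_pkt_t *)data;
    (void)harg;
    last_frame = pkt->frame;
    /* e.g. iSignalSema(frame_sema); */
}

void cam_stream_setup(void)
{
    SifAddCmdHandler(CAM_FRAME_CMD, cam_frame_handler, NULL);
}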
Hey Lion,
Could you commit another example to the CVS please, on how to read a video stream (i.e. not just one frame) from the eyetoy?
I'm having some problems with your driver:
I'm initializing it the same way as in your test example, using 320x240 at 25 Hz, and I get a picture from the EyeToy. But the PS2CamExtractFrame() function seems pretty slow (>30,000,000 cycles), and if I call it too often it hangs after a few seconds.
Maybe I did something wrong when compiling the IRX (I've never compiled IRXs before). Also, I'm not using the newest version of ps2sdk; could this be a problem?
infj
OK, looks like it was a compiler issue. The IRX I got from the ps2cam directory in CVS works (without freezing), but it's still slow. Here are some lines from the ps2link log:
Code: Select all
extframe cycles:46931560
extframe cycles:33658300
extframe cycles:50477188
extframe cycles:36317470
extframe cycles:53219548
extframe cycles:33662128
I get the cycle count with this:
Code: Select all
startPs2Perf();
camfrmsiz = PS2CamExtractFrame(camdevid, picbuf, 16384);
stopPs2Perf();
printf("extframe cycles:%d \n", getPs2PerfPC0());
Lion: if this is normal, I hope you have some ideas for optimizing this ;)
infj
saotome, mavy will be glad to see someone using Ps2Perf :P
About your issue: Lion should redesign the driver a little. Right now you get frames like snapshots, so you have to wait too long for each one from the IOP, and while you are doing other things on the EE side the driver just sits there checking for commands. For real streaming support, the IOP side needs to do the work on its own: grab the frames from the EyeToy quickly and send the IPU frames to the EE through a double buffer with sifcmd, leaving the RPC interface only for enabling/disabling streaming or grabbing a static snapshot (see the sketch below). That would let you do other work on the EE side without having to wait for the IOP to capture and send a new frame like now.
The other thing Lion should do is use the IPU instead of libjpg.
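Here is a minimal sketch of that double-buffered IOP side, assuming ps2sdk's sceSifSendCmd (which copies a small packet and DMAs the attached data to an EE address). The buffer size, command id, and the frame-complete hook are invented for illustration.
Code: Select all
#include <sifcmd.h>

#define CAM_FRAME_CMD 0x20      /* must match the EE-side handler id */
#define FRAME_MAX     16384     /* hypothetical per-frame buffer size */

static unsigned char frame_buf[2][FRAME_MAX];
static int fill = 0;            /* buffer currently filled by isoc callbacks */

extern void *ee_frame_addr;     /* EE destination, registered via RPC beforehand */

static struct {
    SifCmdHeader_t header;
    int frame;
    int size;
} pkt;

/* Hypothetical hook: called once a complete frame has been assembled
 * in frame_buf[fill] from the incoming isoc transfers. */
void on_frame_complete(int frame_no, int frame_size)
{
    pkt.frame = frame_no;
    pkt.size  = frame_size;

    /* queue the packet + frame data to the EE, then immediately flip
     * buffers so capture continues while the SIF DMA runs */
    sceSifSendCmd(CAM_FRAME_CMD, &pkt, sizeof(pkt),
                  frame_buf[fill], ee_frame_addr, frame_size);
    fill ^= 1;
}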