Little help with augmented reality
I'm trying to experiment a bit with augmented reality. I got my USB cam working. The next step is to 'read' what is shown on the screen.
So:
Can anyone help me with getting the color of a pixel shown on the screen?
Should be pretty simple, like:
color = getPixelColor( x, y);
I have some experience with 3d programming, but not so much with the PSP (which explains my noob question..)
Please help,
thanx
This is a good question, and you would have to get a bit creative to do it. I can only think of an expensive way of doing this right now. Well, maybe; I'm thinking this up as I go. What you want to do is have a pointer to the current display buffer. That will be the final image currently being drawn to the screen. You then want to use pointer arithmetic. I think your pointer will point to the first pixel in your display buffer. Now, keep in mind that your display buffer's dimensions in memory are 512x272 (the visible area is 480x272, but each line is padded to 512 pixels). The size of the framebuffer is 512 * 272 * (bitsPerPixel / 8), so if your framebuffer is 32-bit, it is 512 * 272 * 4 = 557056 bytes. The "x" in your getPixelColor() function will probably range from 0 to 479 and the "y" from 0 to 271; zero is the first pixel. Now, look at the pixels on the screen as a single-dimension array, since the framebuffer is stored linearly in memory: the pixel's index is (y * 512) + x. Then you multiply that by the number of bytes each pixel takes up in memory, which is bitsPerPixel / 8.
Here is how to get to that pixel's relative address in a 32-bit framebuffer:
Code: Select all
pixelAddrRel = ( (y * 512) + x) * 4;
Now, pretty much all you do is add pixelAddrRel to your display buffer pointer's address, and I think that is what you are looking for. It might just be the relative offset shown above that you need (someone help me out here). Then you return the value at that address. Keep in mind that the pointer should be a 32-bit pointer if the framebuffer is 32-bit, or a 16-bit pointer if the framebuffer is 16-bit. This was off the top of my head, so I'd like a second opinion on it:
Code: Select all
// use this for 32-bit framebuffers
unsigned int get32BitPixel(void *framePtr, int x, int y)
{
    // clamp the coordinates if they are beyond the screen dimensions
    if (x < 0) x = 0;
    if (x > 479) x = 479;
    if (y < 0) y = 0;
    if (y > 271) y = 271;

    // find the relative byte address of the desired pixel (512-pixel lines, 4 bytes per pixel)
    int pixelRelAddr = ( (y * 512) + x) * 4;

    // add pixelRelAddr to framePtr's address, then read the 32-bit pixel stored there
    unsigned int *pixel = (unsigned int *)((unsigned char *)framePtr + pixelRelAddr);

    // return pixel value
    return *pixel;
}
That might be right, but it was off the top of my head and I can't check it because I'm not home right now. Hope it works!
Depending on your video mode, "Color" can be a pointer to some kind of structure holding the pixel color, probably RGBA; in that case you can define a struct as
Code: Select all
// remember u8 means "unsigned 8 bit integer", or "byte"
// (field order assumes the PSP's 32-bit color mode, where red is the lowest byte in memory)
typedef struct {
    u8 r;
    u8 g;
    u8 b;
    u8 a;
} Color;
and read a pixel with something like
Code: Select all
Color getPixelScreen(int x, int y)
{
    Color* vram = getVramDrawBuffer();
    return vram[PSP_LINE_SIZE * y + x];
}
getVramDrawBuffer() gets a pointer to the start of the video framebuffer; in the code above it is what would be passed to the function as the first parameter "framePtr". The line return vram[PSP_LINE_SIZE * y + x]; computes the position in the framebuffer depending on the size of the Color struct (not to mention that it cannot be changed at runtime). On a side note: it would perform better to call getVramDrawBuffer() a single time, then pass it as a parameter (like in vincent's code) or store it in a global variable.
PS: I did my example relying on the fact that you are in a 32-bit RGBA video mode; in the code above there's
Code: Select all
int pixelRelAddr = ( (y * 512) + x) * 4;
and in that "*4" you can see that that code is for 32-bit, too.
augmented reality
I presume that when you mention augmented reality you mean that you are trying to get input from the camera, then use that data and augment it to create a better reality. For example, how the eyecreate software allows capturing real-time images and then doing things with them. I also presume that the first step you are trying to do is to track something moving in the camera image, and that is why you want a pointer into the display buffer (or the camera image buffer).
Well, the easiest way to capture something on the display buffer (that I can think of right now) is to divide the screen into coarse square blocks and then get the value of a pixel in the center of each of those squares. You do this maybe 60 times a second, each time the frame changes. So, for example, say you want to track a finger moving close to the camera, or a bright light moving in the scene. When you take the fourth dimension (time) into account along with your 2D grid of center pixel values, you will see that the item you are tracking (the color of your fingertip, or the bright light of a light pen) will show up in one of the squares, then disappear, then show up in another square. If your check loop is fast enough, you can divide the display buffer into smaller squares, or check multiple pixels inside a square.
I think the easiest way is to track bright objects, or objects of a color you know the scene will have very little of. Bright objects have values of 230 or higher in all of Red, Green, and Blue, so you just check the RGB values of the pixels in the middle of the squares. Similarly, you can check the RGB value of your finger and track that. Because the finger is close to the camera you know it is large, so to track it you would first check the center pixel of a square, and if it matches the color, check adjacent pixels a small distance away within a circle. The closer your finger, the bigger the radius of the circle you check for similar color. If many of the adjacent pixels match the center pixel color, then you know that you have a lock on the finger. Of course, the smaller the squares you check, the more accurate the location of the tracking, and the smaller the radius of the circle, the further away your finger can be from the camera.
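A rough sketch of that coarse-grid idea, assuming a 480x272 camera image stored with no line padding (like the framebuffer[] array later in this thread), 32-bit pixels with red in the lowest byte, and made-up names (isBright, trackBright, BLOCK); a display buffer would use a 512-pixel line width instead:
Code: Select all
#define CAM_W 480
#define CAM_H 272
#define BLOCK 16   // size of the coarse squares; smaller squares = more accurate but slower

// treat a pixel as "bright" when Red, Green and Blue are all 230 or higher
static int isBright(unsigned int color)
{
    unsigned int r = (color >>  0) & 0xFF;
    unsigned int g = (color >>  8) & 0xFF;
    unsigned int b = (color >> 16) & 0xFF;
    return (r >= 230 && g >= 230 && b >= 230);
}

// check one pixel in the center of each square; call this once per captured frame
static int trackBright(const unsigned int *frame, int *outX, int *outY)
{
    int x, y;
    for (y = BLOCK / 2; y < CAM_H; y += BLOCK) {
        for (x = BLOCK / 2; x < CAM_W; x += BLOCK) {
            if (isBright(frame[y * CAM_W + x])) {
                *outX = x;   // square where the bright object showed up this frame
                *outY = y;
                return 1;
            }
        }
    }
    return 0;   // nothing bright found in this frame
}
Watching which square lights up from one frame to the next then gives you the movement over time.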
Here is the post related to this subject:
http://www.edepot.com/forums/viewtopic.php?f=8&t=13
Indeed edepot, that's what I'm trying to do.
And now I can trace a red dot on a white piece of paper!
I had to read the camera input instead of the screen (otherwise, if I start drawing on the screen I can no longer read from it.. obviously..)
The important parts of the code look like this:
Code: Select all
static u32 framebuffer[480*272] __attribute__((aligned(64))); // the buffer the camera 'writes' into (aligned to 64 bytes; no idea what that means..)
[... a lot of other code ...]
Code: Select all
// read the camera input for 480 x 272
for (x = 0; x < 480; x++) {
    for (y = 0; y < 272; y++) {
        m = y * 480;                  // the camera buffer is 480 pixels per line
        color = framebuffer[m + x];
        if (color > traceColorMIN && color < traceColorMAX) {
            playerx = x;
            playery = y;
        }
    }
}
And I draw a cross at the playerx, playery position!
I now use 'unsigned int' for the color code. My guess is that it uses 32 bits of color. Does anyone know how to display this 'unsigned int' as hex, like 0xFFFFFF? That would really help with reading the color.
No no, that is not going to work. If you put a red dot on a piece of paper, then that means you are stuck with needing to hold a piece of paper in front of the camera. Also, your min and max values for checking the pixel are not right either. If you only want to check for a red dot, you have to check only the red component of the value read from the camera buffer. So you are going to have to break it down into RGB components and then check the value of the red (it should be between 0 and 255 if you are in 24-bit/pixel mode). To break it down you mask ("and") each eight bits of the full RGB value from the camera buffer, so you end up with three values for R, G, and B. Of course you need to shift two of the broken-down color components down to the range 0-255 (one by 8 bits, the other by 16 bits, assuming the X in XRGB is in the most significant byte). I suggest you do what J.F. does and post your whole source code somewhere so people can grab it and check out what you are trying to say. Just the basic whole source that anyone can compile and that takes input from the Sony GoCamera.
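A minimal sketch of that breakdown, assuming 32-bit pixels with red in the lowest byte (0xAABBGGRR, the PSP's usual 8888 layout); if the camera really delivers XRGB as described above, red would be (color >> 16) & 0xFF instead. printPixel is just a made-up helper that uses the debug screen:
Code: Select all
#include <pspdebug.h>

// split a 32-bit pixel into its R, G and B components and show the raw value in hex
// (pspDebugScreenInit() must have been called somewhere before this)
static void printPixel(unsigned int color)
{
    unsigned int r = (color >>  0) & 0xFF;   // red,   0-255
    unsigned int g = (color >>  8) & 0xFF;   // green, 0-255
    unsigned int b = (color >> 16) & 0xFF;   // blue,  0-255

    pspDebugScreenPrintf("color=0x%08X  R=%u G=%u B=%u\n", color, r, g, b);
}
Printing with %08X also covers the hex question above: you see the raw value in hex and can read each channel byte directly.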
I will put my code online somewhere, but I'm not home right now..
I use the traceColorMAX and traceColorMIN because it isn't always red. At the beginning of the program you can aim at the color you want to trace, so maybe you want it to be green. Red just works best.
But you're right; I'm using an integer, which doesn't really work well.
For example:
if you want to trace a red dot, the trace color is around 5000; to make sure the camera 'finds' that color, it searches in a range between 5000-2000 and 5000+2000, which are the traceColorMIN and traceColorMAX. Not really the best solution..
(black = 0, white is over 600000).
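Checking each channel separately against the sampled color would make that range much more meaningful than one big min/max on the whole integer. A sketch, again assuming 32-bit pixels with red in the lowest byte; traceColor would be the value sampled while aiming and tol the allowed per-channel difference (both names are made up here):
Code: Select all
// return 1 when every channel of color is within tol of the reference traceColor
static int colorMatches(unsigned int color, unsigned int traceColor, int tol)
{
    int dr = (int)((color >>  0) & 0xFF) - (int)((traceColor >>  0) & 0xFF);
    int dg = (int)((color >>  8) & 0xFF) - (int)((traceColor >>  8) & 0xFF);
    int db = (int)((color >> 16) & 0xFF) - (int)((traceColor >> 16) & 0xFF);

    if (dr < -tol || dr > tol) return 0;
    if (dg < -tol || dg > tol) return 0;
    if (db < -tol || db > tol) return 0;
    return 1;   // all three channels are close enough
}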
I am really having some problems with drawing. When the USB cam is active the colors get screwed up.. The camera image is fine, but when I want to draw a red line (for example), the line isn't red but changes color depending on the video image behind it.. really annoying.
But I will put my code online soon (when I am home).