Buffer width and u32

Discuss the development of new homebrew software, tools and libraries.

Moderators: cheriff, TyRaNiD

ddgFFco
Posts: 11
Joined: Tue Sep 20, 2005 5:35 pm

Buffer width and u32

Post by ddgFFco »

Hey, I’m not new to programming but have recently been learning C/C++, reading books and looking over sample PSP code. I’ve managed to get quite a few things to compile and run on my PSP, but there are a few things still confusing me:

From: pspdisplay.h

Code: Select all

/**
 * Display set framebuf
 *
 * @param topaddr - address of start of framebuffer
 * @param bufferwidth - buffer width (must be power of 2)
 * @param pixelformat - One of ::PspDisplayPixelFormats.
 * @param sync - One of ::PspDisplaySetBufSync
 */
void sceDisplaySetFrameBuf(void *topaddr, int bufferwidth, int pixelformat, int sync);
How do I calculate the bufferwidth I’m meant to use if I’m using the 32-bit RGBA pixel format (PSP_DISPLAY_PIXEL_FORMAT_8888)? A lot of the examples I’ve been looking at seem to have this as 512, but I just can’t understand why (512 what?).

What are the u16/u32 (etc) data types for? What are they meant to contain? I haven’t come across them in any C/C++ books I’ve been reading yet.

Thanks.
Bytrix
Posts: 72
Joined: Wed Sep 14, 2005 7:26 pm
Location: England

Post by Bytrix »

I'm not sure about the bufferwidth; possibly it's the pixel size (in bytes) multiplied by the number of pixels used (480 * 272 * 4), which is just under 512 KB.

u16 and u32 are just unsigned integer types; look for the chapter about unsigned values in your books to see why they're useful.
rinco
Posts: 255
Joined: Fri Jan 21, 2005 2:12 pm
Location: Canberra, Australia

Post by rinco »

u16/u32:

Code: Select all

/usr/local/pspdev/psp/sdk/include/psptypes.h:typedef unsigned short                     u16;
/usr/local/pspdev/psp/sdk/include/psptypes.h:typedef unsigned int                       u32;
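So they're just fixed-size unsigned integer types: a u16 always holds 0 to 65535 and a u32 holds 0 to 4294967295, whatever size plain int happens to be on a given platform. A quick made-up example:

Code: Select all

#include <psptypes.h>

u16 pixel16 = 0xFFFF;      /* 16-bit value, e.g. one 5650/5551 pixel */
u32 pixel32 = 0xFFFF00FF;  /* 32-bit value, e.g. one 8888 pixel      */
/* sizeof(pixel16) == 2 and sizeof(pixel32) == 4 with the PSP toolchain */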
You calculate the buffer width by taking the width of the screen (480 pixels) and rounding up to the nearest power of 2 (512 pixels). The last 32 pixels are not used, and http://www.scorpioncity.com/dj2.html briefly explains why (and has a nice ASCII diagram):
Even though the screen resolution might be, say, 640x480x32, this does not necessarily mean that each row of pixels will take up 640*4 bytes in memory. For speed reasons, graphics cards often store surfaces wider than their logical width (a trade-off of memory for speed.) For example, a graphics card that supports a maximum of 1024x768 might store all modes from 320x200 up to 1024x768 as 1024x768 internally. This leaves a "margin" on the right side of a surface. This actual allocated width for a surface is known as the pitch or stride of the surface.
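To make that concrete, here's a rough sketch of how the 512 stride is used with the 8888 format. It assumes the frame buffer sits at the start of VRAM via sceGeEdramGetAddr(), and it leaves out the PSP_MODULE_INFO/exit-callback boilerplate a real homebrew program needs:

Code: Select all

#include <psptypes.h>
#include <pspdisplay.h>
#include <pspge.h>

#define BUF_WIDTH  512   /* stride in pixels: 480 rounded up to a power of 2 */
#define SCR_WIDTH  480   /* visible width in pixels */
#define SCR_HEIGHT 272   /* visible height in pixels */

int main(void)
{
    /* With the 8888 format every pixel is one u32, so a row is
       BUF_WIDTH u32s (2048 bytes), of which only 480 are visible. */
    u32 *vram = (u32 *)sceGeEdramGetAddr();

    sceDisplaySetFrameBuf(vram, BUF_WIDTH, PSP_DISPLAY_PIXEL_FORMAT_8888,
                          PSP_DISPLAY_SETBUF_NEXTFRAME);

    /* Plot a pixel at (x, y): step BUF_WIDTH per row, not SCR_WIDTH. */
    int x = 10, y = 20;
    vram[y * BUF_WIDTH + x] = 0xFF0000FF;   /* opaque red */

    sceDisplayWaitVblankStart();
    return 0;
}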
ddgFFco
Posts: 11
Joined: Tue Sep 20, 2005 5:35 pm

Post by ddgFFco »

Thanks, makes sense now.
StouffR
Posts: 11
Joined: Thu Sep 15, 2005 11:43 pm

Post by StouffR »

For u32 or u16 values you can use hexadecimal notation, like an HTML colour code:

0xRRGGBBAA (R = red, G = green, B = blue, A = alpha)

4 x 8 bits = 32 bits (u32)
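One caveat though: if I'm reading the GU_RGBA macro in pspgu.h right, the PSP's 8888 format actually puts red in the lowest byte and alpha in the highest, so as a u32 a pixel reads 0xAABBGGRR rather than HTML-style 0xRRGGBBAA. A little helper along those lines (the name rgba8888 is just made up for illustration):

Code: Select all

#include <psptypes.h>

/* Pack 8-bit channels into one 32-bit pixel for PSP_DISPLAY_PIXEL_FORMAT_8888.
   Red lands in the low byte, alpha in the high byte (0xAABBGGRR). */
static u32 rgba8888(u32 r, u32 g, u32 b, u32 a)
{
    return (a << 24) | (b << 16) | (g << 8) | r;
}

/* rgba8888(0x00, 0xFF, 0x00, 0xFF) -> 0xFF00FF00, opaque green */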
CyberBill
Posts: 86
Joined: Tue Jul 26, 2005 3:53 pm
Location: Redmond, WA

Post by CyberBill »

I'm sorry, I might be completely wrong here, but I don't believe it's in bytes; I believe it's in PIXELS. In which case it should always be 512, unless you are... for some reason... not using a 480x272 screen.

It does the internal conversion since it knows the size of each pixel.
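In other words, the byte size of a row just falls out of the pixel format. A rough illustration (row_bytes is a made-up helper, and I've only handled the 16-bit and 32-bit cases):

Code: Select all

#include <psptypes.h>
#include <pspdisplay.h>

/* With bufferwidth = 512:
     PSP_DISPLAY_PIXEL_FORMAT_565/5551/4444 -> 512 * 2 = 1024 bytes per row
     PSP_DISPLAY_PIXEL_FORMAT_8888          -> 512 * 4 = 2048 bytes per row */
static u32 row_bytes(int bufferwidth, int pixelformat)
{
    int bytes_per_pixel = (pixelformat == PSP_DISPLAY_PIXEL_FORMAT_8888) ? 4 : 2;
    return bufferwidth * bytes_per_pixel;
}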