Framerate
Does anyone have experience on how much FPS you get for C++ code using the SDL layer on the PSP?
I've ported my C++ (not C) game to the PSP using the SDL layer for graphics. When I turn on the FPS counter, it shows I only get 20 frames per second, which seems very low for a simple 2D game.
(Libs: -lstdc++ -lSDLmain -lSDL_mixer -lSDL_image -lSDL_ttf -lFreeType -lpng -lz -ljpeg)
It might be that my counter is off, but judging by the screen updates, it's right on...
My questions are:
1. Is this a normal framerate, or is it indeed way too low?
2. Could this be caused by linking -lstdc++ in the makefile?
3. Does anyone have any hints & tips on things I could try?
Some more info:
The game loads around 1.5 megs of graphics, and 600k of SoundFX.
No music is loaded yet.
Many thanks in advance for your time,
Michael
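For reference, a typical FPS counter of the kind Michael mentions can be built on SDL_GetTicks(); this is just a sketch with made-up names, not his actual code:
Code: Select all
#include <stdio.h>
#include <SDL/SDL.h>

/* Call once per frame, right after SDL_Flip(); prints the FPS once a second. */
void CountFrame(void)
{
    static Uint32 frames = 0;
    static Uint32 lastReport = 0;

    frames++;
    if (SDL_GetTicks() - lastReport >= 1000) {
        printf("FPS: %u\n", (unsigned)frames);
        frames = 0;
        lastReport = SDL_GetTicks();
    }
}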
Hello,
I am noticing a similar issue with my program. My source is all written in C. I set my screen up to use HW acceleration, fullscreen mode and double buffering.
I have a main loop that uses SDL to check for user input, and move a single sprite across the screen. Each time through the main loop, I re-draw my background (SDL_BlitSurface), draw my sprite, then call SDL_Flip to update the screen. The background image I am using is 480x272. I also used SDL_DisplayFormat on my background and sprite to help speed up the process.
If I do not re-draw the background each time, the program runs significantly faster. If I do re-draw it, it's terribly slow. Is there something I can do to speed up the re-drawing of my background, like maybe have SDL use the video RAM to store it?
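For reference, the loop Garak describes boils down to something like this sketch; the surface and variable names are placeholders:
Code: Select all
/* pScreen, pBackground, pSprite: assumed set up elsewhere
   (SDL_SetVideoMode, SDL_LoadBMP + SDL_DisplayFormat). */
SDL_Event event;
SDL_Rect spriteRect = { 0, 0, 0, 0 };
int running = 1;

while (running) {
    /* 1. check for user input */
    while (SDL_PollEvent(&event)) {
        if (event.type == SDL_QUIT)
            running = 0;
    }

    /* 2. re-draw the full 480x272 background */
    SDL_BlitSurface(pBackground, NULL, pScreen, NULL);

    /* 3. draw the sprite at its current position */
    spriteRect.x += 1;  /* move it across the screen */
    SDL_BlitSurface(pSprite, NULL, pScreen, &spriteRect);

    /* 4. swap the double buffer */
    SDL_Flip(pScreen);
}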
SDL hardware surfaces do not use any type of hardware acceleration. The screen surface sits in VRAM, and any blits are done in software.
SDL software surfaces use libgu for accelerated blits into VRAM. If you are using hardware surfaces, consider switching to software (you also get more pixel formats to play with).
Perhaps we need to update README.PSP with this information.
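In other words, going by mrbrown's description, the counter-intuitive fast path on the PSP port is to request a software surface, e.g.:
Code: Select all
/* On the PSP port, SDL_SWSURFACE blits go through libgu (accelerated),
   while SDL_HWSURFACE blits end up done in software. */
screen = SDL_SetVideoMode(480, 272, 32, SDL_SWSURFACE | SDL_DOUBLEBUF);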
When you refer to the SDL Surfaces, are you referring to the Screen's SDL Surface, or the various image files we have stored into SDL surfaces?
I believe all of my SDL_Surfaces for images are in system memory. I checked their flags, and that was what they indicated. Is there a way to even specify that a given SDL surface is created in VRAM vs. system memory (aside from the screen, which we set up via a call to SDL_SetVideoMode)?
I tried setting my screen up with both hardware surfaces and software surfaces, and it was very slow both ways. In both cases I used a double buffer. I noticed that if I set my screen up in 16 bit mode, the blitting speed seems to double.
Right now my two main questions are:
1. Is there some way to make the blitting of 32 bit surfaces faster?
2. I still get a small tearing effect as the background slides behind my character. Why didn't the double buffer fix this?
mrbrown, thanks for the help. Any idea what I am missing?
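The flag check Garak mentions might look something like this (pImgSurface is a placeholder name):
Code: Select all
/* A surface's flags bitmask tells you where it lives. */
if (pImgSurface->flags & SDL_HWSURFACE)
    printf("surface lives in video memory\n");
else
    printf("surface lives in system memory\n");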
I did some tests yesterday, and I have the same results as Gar.
SW or HW surface doesn't seem to have any impact at all.
I do a full background image blit, and then a lot of blits of 8x8 tiles (the blocks in the Tetris level), the next and next-next block (8 blits), plus the scores etc., also around 20 blits.
The weird part is, 16bpp didn't seem to make a difference either (I'll have to retest that to make sure).
But I noticed that the call to SDL_DisplayFormat() was incorrect on the image surfaces; I fixed that and will retest this as well. The 16BPP result could be related to this, since the screen surface was 32BPP or 16BPP while the image surfaces were 24BPP.
Not sure if surface sizes make a difference too...
I will test some more this weekend...
PS.
Is there a reason that I don't see much OO code floating around? Is it because most people are still coding C instead of C++?
I was just wondering if that might be related somehow...
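A quick way to spot the kind of format mismatch MadButch describes is to compare the bits-per-pixel of the screen and each image surface, for example:
Code: Select all
/* If these differ, every blit pays for an on-the-fly pixel conversion. */
printf("screen: %d bpp, image: %d bpp\n",
       pScreenSurface->format->BitsPerPixel,
       pImgSurface->format->BitsPerPixel);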
You won't see a whole lot of OO code for the PSP (or any hand-held) because most of us developers who've worked with hand-helds for a long time have a background in assembly and C, and we try to keep things as fast and simple as possible. OO evolved as PCs became more and more powerful; plain C code is often a better fit for these low-memory, low-speed systems.
Garak and/or Madbutch: Perhaps you could submit what you have to svn.pspdev.org... ?
When porting SMC (written in C++), I found SDL_framerate (in SDL_gfx) smoothed things out. The framerate control that SMC originally employed was jerky.
Bytrix: Have you considered an OO approach using standard C? That's what cool kids do for compact fast code. On second thought... let's not derail this topic.
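For anyone curious, basic SDL_framerate usage looks roughly like this; the cap of 60 FPS here is just an example value:
Code: Select all
#include <SDL/SDL_framerate.h>  /* from SDL_gfx */

FPSmanager fpsManager;
SDL_initFramerate(&fpsManager);     /* defaults to 30 FPS */
SDL_setFramerate(&fpsManager, 60);  /* cap at 60 FPS */

/* ...at the end of each pass through the main loop... */
SDL_framerateDelay(&fpsManager);    /* sleeps just long enough to hold the target rate */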
I'll submit it if you want, but I bet everyone is gonna hate it. Last time the comments were: "112 files for a Tetris game? It should be 112 lines!"
Still... I'll put everything available for download for you guys first this weekend, since I'm just finishing things up now (cleaning up some graphics here and there), but the game is all there.
I'm up to 50 FPS now, which seems good.
What did I do:
- Set the screen surface to 16BPP.
- For each image I load, set the surface format to the screen surfaces format.
EDIT: Up to 76 FPS by enabling the double buffering (forgot I had turned it off).
Using 32BPP instead of 16BPP brings it down to 41 FPS.
SDL_HWSURFACE or SDL_SWSURFACE doesn't seem to have any impact at all...
And thanks for the OO comments.
So the big win was using 16BPP and making sure the surfaces all have the same format (doh!).
Here is some code for those wondering what I mean. I removed all the error handling; you should obviously check that the pointers are filled after each of those calls.
Code: Select all
pScreenSurface = SDL_SetVideoMode( 480, 272, 16, SDL_SWSURFACE|SDL_DOUBLEBUF );
pImgSurface = SDL_LoadBMP( "InsertFilename.BMP" );
pImgSurface = SDL_DisplayFormat( pImgSurface );
The last call looks kinda scary too, since we receive the pointer to the surface in the same variable we loaded the image into. This could be a memory leak, if the SDL layer didn't clean it up right.
It seems SDL does everything right tho, since no mem leaks pop up. But if you don't trust it, then you should use a temp surface to load the image on, and free it after you received the new image surface in the correct format.
Hope this clarifies it somewhat :)
And here is a link to my game: HaCKaH_PSP.zip
And here is the source, for whoever wants to take a peek: HaCKaH_PSP_Source.zip
Sorry to dig up an old post, but I was hoping MadButch has this subscribed...
MadButch wrote:
So the big win was using 16BPP and making sure the surfaces all have the same format (doh!).
Here is some code for those wondering what I mean. I removed all the error handling; you should obviously check that the pointers are filled after each of those calls.
Code: Select all
pScreenSurface = SDL_SetVideoMode( 480, 272, 16, SDL_SWSURFACE|SDL_DOUBLEBUF );
pImgSurface = SDL_LoadBMP( "InsertFilename.BMP" );
pImgSurface = SDL_DisplayFormat( pImgSurface );
The last call looks kinda scary too, since we receive the pointer to the surface in the same variable we loaded the image into. This could be a memory leak, if the SDL layer didn't clean it up right.
It seems SDL does everything right tho, since no mem leaks pop up. But if you don't trust it, then you should use a temp surface to load the image on, and free it after you received the new image surface in the correct format.
Hope this clarifies it somewhat :)
I'm curious if by:
Code: Select all
pScreenSurface = SDL_SetVideoMode( 480, 272, 16, SDL_SWSURFACE|SDL_DOUBLEBUF );
pImgSurface = SDL_LoadBMP( "InsertFilename.BMP" );
pImgSurface = SDL_DisplayFormat( pImgSurface );
You meant:
Code: Select all
pScreenSurface = SDL_SetVideoMode( 480, 272, 16, SDL_SWSURFACE|SDL_DOUBLEBUF );
pImgSurface = SDL_LoadBMP( "InsertFilename.BMP" );
pImgSurface = SDL_DisplayFormat( pScreenSurface );
If not, I misinterpreted what was going on here; could you please clarify why it's not as I stated?
Check out http://www.libsdl.org/cgi/docwiki.cgi/S ... playFormat.
Don't forget the mandatory error checks though ;)
Additionally: "This function takes a surface and copies it to a new surface of the pixel format and colors of the video framebuffer, suitable for fast blitting onto the display surface. It calls SDL_ConvertSurface."
Remember to use a different variable for the returned surface, otherwise you have a memory leak, since the original surface isn't freed.
Code: Select all
surface = SDL_DisplayFormat(surface);  // memory leak!!

// correct version
NewSurface = SDL_DisplayFormat(surface);
SDL_FreeSurface(surface);
The proper call for MadButch's sample would be:
Code: Select all
pScreenSurface = SDL_SetVideoMode( 480, 272, 16, SDL_SWSURFACE|SDL_DOUBLEBUF );
pTmpImgSurface = SDL_LoadBMP( "InsertFilename.BMP" );
pImgSurface = SDL_DisplayFormat( pTmpImgSurface );
SDL_FreeSurface(pTmpImgSurface);