vertex normal format
For vertex normals, GU_NORMAL_32BITF indicates a floating-point value for the normals, which makes perfect sense.
Now for GU_NORMAL_16BIT does that use 8.8 fixed point or 1.15 fixed point or something else entirely?
Same question goes for GU_NORMAL_8BIT.
Thanks in advance for any answers.
When using 8-bit or 16-bit fixed values, they are mapped into the unit range (-1..1), which you can then scale to the proper range. For UV coordinates use sceGuTexScale() / sceGuTexOffset(), and for vertex positions you can use the GU_MODEL matrix to apply a scale/translation to fit the model.
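As a sketch of that conversion, assuming the mapping is symmetric with scale factors 127 and 32767 (these exact constants are my assumption from the unit-range description above, not something the SDK documents):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a float in [-1, 1] into the signed fixed-point formats that
 * GU_NORMAL_8BIT / GU_NORMAL_16BIT presumably expect. The 127 and
 * 32767 scale factors are an assumption based on the unit-range
 * mapping described in the reply above. */
static int8_t norm_to_s8(float n)
{
    return (int8_t)(n * 127.0f);
}

static int16_t norm_to_s16(float n)
{
    return (int16_t)(n * 32767.0f);
}
```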
GE Dominator
-
- Posts: 75
- Joined: Mon Sep 19, 2005 5:41 am
Since storing them as 16-bit only affects memory performance, it's better to look at the big picture (the whole vertex). The optimal size for vertices is between 8 and 12 bytes; if you stray outside this limit you will get performance hits. Storing the buffer in VRAM buys you some flexibility, but not much.
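For concreteness, a sketch of one vertex layout that lands inside that 8-12 byte window; the format flags and the texcoord/color/position field order are taken from later posts in this thread, and the struct name is hypothetical:

```c
#include <stdint.h>

/* One possible 12-byte vertex, corresponding to
 * GU_TEXTURE_16BIT | GU_COLOR_5551 | GU_VERTEX_16BIT.
 * Field order (uv, then color, then xyz) follows the GE convention
 * assumed elsewhere in this thread. */
typedef struct {
    int16_t  u, v;     /* 4 bytes: 16-bit texture coordinates */
    uint16_t color;    /* 2 bytes: 5-5-5-1 packed color */
    int16_t  x, y, z;  /* 6 bytes: 16-bit positions */
} Vertex12;

_Static_assert(sizeof(Vertex12) == 12, "vertex should pack to 12 bytes");
```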
Any chance you can share a real-world best-case vertex format that you have been able to use for a 3D game?
I am currently using:
pos(x, y, z) == 3 floats == 12 bytes
texcoord(u, v) == 2 floats == 8 bytes
CPV == 1 DWORD == 4 bytes
====
24 bytes total.
Dropping CPV to 16-bit is quick and easy, but it needs more changes to be effective, since vertex data needs to be in multiples of 32 bits.
My game is up for an award at the Independent Games Festival in a couple of weeks and will be on display; I am currently running at 30 fps and need to get to 60 fps.
I make heavy use of CLUT's - mostly 4 bit to keep things trim, and swizzle everything. I only send CPV or Normals - and not both. Mostly CPV's.
I subdivide my polys beforehand and keep the camera high enough so I don't need to clip.
I use one display list and was wondering if double-buffering might help. My display list is in system memory; I don't think I have enough VRAM unless I start cutting it up smaller.
I was wondering if there is a way, like on PS2Linux, to have pre-built DMA chains sitting around so there is no need for sceGumDrawArray() - just pre-built message buffers and sceGuFinish(). My vertex buffers are pre-built, but looking at the SDK it appears that sceGumDrawArray() further packetizes them?
Any help is greatly appreciated!!
1) Can you drop your position to 16-bit and use the model-transform to scale back to size? That would get it down to 6 bytes.
2) Dropping texcoords to 8-bit and transforming them using sceGuTexScale()/sceGuTexOffset() would get texture coordinates down to 2 bytes.
Using that, you would get it down to 12 bytes (6+2+4), which is optimal from the memory POV.
However, are you sure it's vertex-bound? If you don't double-buffer your display list, I can see a big stall right there. Don't do it like the samples do - they aim to keep things simple, not optimal in performance.
Yes, you can pre-build DMA chains; sceGuCallList() lets you call a previously generated list. If you do, however, be aware that the sceGum*() functions do some caching behind the scenes, so transforms might not end up where you expect when you run your list, and any transforms inside a pre-built chain will possibly break the caching logic and require you to re-load all affected matrices when returning.
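A sketch of the arithmetic behind point 2, assuming 8-bit texcoords map symmetrically with a factor of 127 (an assumption; the helper names are hypothetical). On the GE you would pair this with sceGuTexScale()/sceGuTexOffset() rather than dequantizing on the CPU - the dequantize helper is only here to show the round trip:

```c
#include <math.h>
#include <stdint.h>

/* Quantize a UV in [0, max_uv] into int8, assuming GU_TEXTURE_8BIT
 * maps values symmetrically with a factor of 127 (an assumption). */
static int8_t quantize_uv(float uv, float max_uv)
{
    return (int8_t)(uv / max_uv * 127.0f);
}

/* What the hardware would recover once sceGuTexScale() re-applies
 * max_uv; computed on the CPU here purely to verify the round trip. */
static float dequantize_uv(int8_t q, float max_uv)
{
    return (float)q / 127.0f * max_uv;
}
```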
Thanks for your help. I am running much faster now.
I was able to get down to the following format:
GU_TEXTURE_16BIT
GU_COLOR_5551
GU_VERTEX_16BIT
for a total of 12 bytes per vertex (down from 24 bytes per vertex before)
Things are MUCH faster, but I still have a ways to go.
I made the changes to stop using sceGum... so I can now try double-buffering my dlists.
One question: I got my texture scaling working just fine (using GU_TEXTURE_16BIT and scaling). With GU_VERTEX_16BIT I can see all my objects, but initially they were all clumped together at the origin, so I am working on getting the precise transformations set up.
I first figured out my maximum scale was +/- 4000, so I multiply my vertex x, y, z by 8.192f so they scale to +/- 32768 (4000 * 8.192f = 32768).
I then thought that as drawn everything would be super-tiny (confined to a 1-unit cube), so I applied a scale matrix with a scale factor of (4000, 4000, 4000) to an identity matrix and used that as my base model transform matrix. Everything exploded so the parts were way off in the distance, so I instead played around with modifying my translation matrix and got things spread out, but clearly I am missing something.
When you use GU_VERTEX_16BIT and it scales everything to [-1, +1], does that mean that something at full scale is 1 unit, or does it fill the view frustum created with the perspective matrix?
I am also interested in Holger's comment: "keep in mind that the matrix transformations before the projective matrix should be non-scaling, the lighting calculations are only correct if the normals have unit length..." My next step will be to get lighting back, and I have experienced the ill effects on lighting of applying a scale matrix to an object.
Again, thanks for your help!! If you are going to GDC, be sure to stop by our booth at the IGF Pavilion to see our game (Putt Nutz).
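The scale arithmetic above can be sketched as a round trip, assuming raw 16-bit values that a uniform model-matrix scale restores to world size (helper names are hypothetical, and 32767 is used instead of the 32768 in the post so values stay inside int16 range):

```c
#include <math.h>
#include <stdint.h>

#define WORLD_EXTENT 4000.0f  /* assumed max |x|, |y|, |z| in world units */

/* World -> s16, i.e. multiplying by 32767/4000 ~= 8.192 as above. */
static int16_t pos_to_s16(float w)
{
    return (int16_t)(w * (32767.0f / WORLD_EXTENT));
}

/* The uniform scale the GU_MODEL matrix would need to undo the
 * quantization: geometry was scaled up into the s16 range, so the
 * matrix scales it back down per unit. */
static float model_scale(void)
{
    return WORLD_EXTENT / 32767.0f;
}

/* Recovered world coordinate after the model transform is applied. */
static float s16_to_world(int16_t q)
{
    return (float)q * model_scale();
}
```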
I am still fiddling with GU_VERTEX_16BIT. All my geometry renders OK; just the scale/perspective is off.
Has anyone actually put this to use beyond rendering a sample cube?
I figured out that when using 16bit vertex format that I need to scale my geometry down, not up. My original assumption about full-scale 16 bit vertex being a unit cube seemed to be incorrect.
Applying a scale matrix first thing to my model transform gets things close, but something is out of whack. I think that the scale components in the model transform, when internally multiplied by the view transform, are causing a problem.
holger says: "keep in mind that the matrix transformations before the projective matrix should be non-scaling, the lighting calculations are only correct if the normals have unit length..." Is there another way to account for the scaling of the geometry, perhaps in the perspective matrix?
There's not a lot of info on 16-bit vertex formats, and I am wondering if anyone has gotten them to work in a real-world situation - or maybe there's a bug in the SDK? Probably it's just me. One thing I notice is that for several matrix types only 12 elements of the 4x4 get set.
After much debugging I have come to the conclusion that when using GU_TEXTURE_16BIT the texture values are treated as positive shorts in the range [0..2], rather than [-1..1] as mentioned in the forums. Does anyone out there have experience with this (when using them with GU_TRANSFORM_3D)?
-
- Posts: 43
- Joined: Wed Aug 03, 2005 6:58 pm
Yes, I found this too -- UV's are not signed, which is a real pain at times. Just make sure your models do not use negative UV's.
Using 16 bit verts is fine for models, though, not had any problems myself. I even use 8 bit verts where the models are very small. You need to scale your models verts down to a unit cube, then put the scale into the projection matrix (this is what I do). If you put the scale into the view or world matrix it will screw up the lighting.
Also, use 8-bit normals. No reason not to -- the lighting can be massively wrong and no-one will ever notice. With 8-bit normals everything (to my eye) looks perfect. Will save a few bytes, also.
Oh, and another useful one is to get rid of the vertex colour all together if you just have a constant colour over the whole model by using one of the GE colour commands. Can be a useful saving.
I think that's all the tricks I use. I tend to use 16-bit UV's over 8-bit simply because of precision. But there's no reason why you can't use 8-bit if it looks good for you.
-Jw
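A sketch of the "scale your model's verts down to a unit cube" step (the helper name is hypothetical; per Jw, the returned restore scale would then be folded into the projection matrix rather than the view or world matrix):

```c
#include <math.h>
#include <stdint.h>

/* Find the largest absolute coordinate across all verts, quantize
 * each component into the s16 range, and return the per-unit scale
 * needed to restore world size later (per Jw, applied in the
 * projection matrix so lighting is unaffected). */
static float normalize_to_s16(const float *xyz, int vert_count, int16_t *out)
{
    float extent = 0.0f;
    for (int i = 0; i < vert_count * 3; i++) {
        float a = fabsf(xyz[i]);
        if (a > extent)
            extent = a;
    }
    if (extent == 0.0f)
        extent = 1.0f;  /* degenerate model: avoid dividing by zero */
    for (int i = 0; i < vert_count * 3; i++)
        out[i] = (int16_t)(xyz[i] / extent * 32767.0f);
    return extent / 32767.0f;
}
```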
Does anyone know why scale in the perspective matrix does not affect lighting? I assumed that in the end the MODEL, VIEW, and PERSPECTIVE matrices get multiplied together and act as the final transformation matrix for the verts.
I guess as long as it works then that's ok, I'm just a little surprised that it does...
Back on this again after GDC. While it has been said in many places that the overall vertex has to align to 32 bits, after some experimentation it looks to me that the restriction is even tighter: each element (uv, color, x, y, z, etc.) has to align to 32 bits.
For instance, this works:

```c
unsigned short u, v;
unsigned short color;
short pad;
float x, y, z;
```

but this does not seem to work:

```c
unsigned short u, v;
unsigned short color;
float x, y, z;
short pad;
```

Can anyone confirm this from experience? It may be that I have some other alignment issue going on in how my structures are packed, but initial experiments indicate that each element has to be 32-bit aligned.
No, not every element has to be aligned to 32 bits; you answer this yourself in the example code you posted. They do, however, have to be aligned to their own data size: 32-bit floats need 32-bit alignment, 16-bit fixed-point values need 16-bit alignment, and so on. Think C alignment rules. Your second example won't work because the compiler aligns the floats to 32 bits, and the trailing pad value then extends the structure further, breaking the vertex size.
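Those C alignment rules can be checked statically; this sketch mirrors the two layouts posted above (the struct names are hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

/* The layout that works: every member sits at an offset that is a
 * multiple of its own size, and no tail padding is added. */
typedef struct {
    uint16_t u, v;     /* offsets 0, 2 */
    uint16_t color;    /* offset 4 */
    int16_t  pad;      /* offset 6: keeps the floats 4-byte aligned */
    float    x, y, z;  /* offsets 8, 12, 16 */
} GoodVertex;

_Static_assert(offsetof(GoodVertex, x) % 4 == 0,
               "floats need 32-bit alignment");
_Static_assert(sizeof(GoodVertex) == 20, "no hidden padding");

/* The broken layout: the compiler inserts padding before x to reach
 * 4-byte alignment, and the trailing short forces tail padding out to
 * a multiple of 4, so the stride no longer matches the declared
 * vertex format. */
typedef struct {
    uint16_t u, v;
    uint16_t color;
    float    x, y, z;
    int16_t  pad;
} BadVertex;

_Static_assert(sizeof(BadVertex) == 24, "padding breaks the stride");
```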
Any hints from any of the graphics/math masters out there on how to "put the scale into the perspective matrix" to use 16 bit verts?
I've been playing around with:
1) After creating my perspective matrix as I normally do, multiplying it by a properly formatted scale matrix - both (P x S) and (S x P) - no luck.
2) Multiplying the scale elements (0,0), (1,1), (2,2) in the perspective matrix by a scale value - no luck.
3) Just multiplying the near and far values going into the perspective matrix setup by a scale value - no luck.
There's not really any info in the forums on this and I wasn't able to find anything on google either.
I am able to multiply the MODEL_VIEW matrix by a scale matrix and get the geometry to render correct size, but then the lighting is mucked up...
Offline, a friend of mine recommended another approach (given that I was having difficulty getting scaling working in the projection matrix, and he had never done that): put the scale in the MODEL transform and then put the inverse scale on the lighting values.
This seems to work very nicely, and I have been able to nearly double my frame rate by trimming my vertex definitions. If you are working on a 3D rendering-intensive PSP app, I would HIGHLY recommend 16-bit texture, color, normal, and vertex data. It's definitely the way to go!!