memory stick defragmentation
I am currently finishing a dev that deals with partitioning and stuff like that.
I could add something to do defragmentation, but I was wondering if it is worth it.
Indeed, it is clearly useful on an HD where sector location is important, but I don't think it is useful on something like a memory stick.
Am I right?
--pspZorba--
NO to K1.5 !
Real test, a file of 13064 KB which is read in 2 MB chunks.
Firstly it was fragmented into 7 fragments, later it was defragmented to a single fragment. The results (they always give the same speed after various tests):
- 7 fragments -> 1306 millisecs
- 1 fragment -> 1296 millisecs
I guess the defragmentation isn't worth it.
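For reference, here is a minimal sketch of how a timing test like moonlight's could be written in plain C. It is not moonlight's actual code: the file path, the 2 MB chunk size and the use of gettimeofday() are assumptions, and you would run it once against a fragmented copy of the file and once against a defragmented copy, then compare the two times.

Code:
#include <stdio.h>
#include <sys/time.h>

#define CHUNK (2 * 1024 * 1024)   /* read in 2 MB chunks, as in the test above */

int main(void)
{
    static char buf[CHUNK];       /* static so the 2 MB buffer doesn't blow the stack */
    FILE *f = fopen("ms0:/testfile.bin", "rb");   /* hypothetical test file */
    struct timeval t0, t1;
    size_t n, total = 0;
    long ms;

    if (!f)
        return 1;
    gettimeofday(&t0, NULL);
    while ((n = fread(buf, 1, CHUNK, f)) > 0)     /* sequential chunked read */
        total += n;
    gettimeofday(&t1, NULL);
    fclose(f);

    ms = (t1.tv_sec - t0.tv_sec) * 1000 + (t1.tv_usec - t0.tv_usec) / 1000;
    printf("read %u bytes in %ld ms\n", (unsigned)total, ms);
    return 0;
}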
moonlight wrote: Real test, a file of 13064 KB which is read in 2 MB chunks.

Try making the fragments smaller - really small chunks are slow. Just look at the BlackFin speed test app (I think that's the name). There's a HUGE speed difference between the smallest couple of block sizes and the largest. So if the fragments are big enough, there will be little slowdown, but smaller fragments will result in a loss of speed. Here's my 4 GB stick on my Slim.
Note the big difference between 2K and 32K... that's where fragmentation will make a visible difference in speed.

[Blocksize 0.5 KBytes] [FS Write Speed 0.351 MBytes/sec]
[Blocksize 0.5 KBytes] [FS Read Speed 0.666 MBytes/sec]
[Blocksize 2.0 KBytes] [FS Write Speed 0.425 MBytes/sec]
[Blocksize 2.0 KBytes] [FS Read Speed 1.111 MBytes/sec]
[Blocksize 32.0 KBytes] [FS Write Speed 2.819 MBytes/sec]
[Blocksize 32.0 KBytes] [FS Read Speed 8.583 MBytes/sec]
[Blocksize 1024.0 KBytes] [FS Write Speed 4.739 MBytes/sec]
[Blocksize 1024.0 KBytes] [FS Read Speed 15.429 MBytes/sec]
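The sweep itself is easy to reproduce if you want numbers for your own stick. A rough sketch in plain C, read side only; the file path and the block-size list are placeholders, not the BlackFin app's code:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(void)
{
    const size_t sizes[] = { 512, 2048, 32768, 1048576 };   /* 0.5K, 2K, 32K, 1024K */
    char *buf = malloc(1048576);
    size_t i;

    if (!buf)
        return 1;
    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        FILE *f = fopen("ms0:/speedtest.bin", "rb");         /* hypothetical test file */
        struct timeval t0, t1;
        size_t n, total = 0;
        double sec;

        if (!f)
            return 1;
        gettimeofday(&t0, NULL);
        while ((n = fread(buf, 1, sizes[i], f)) > 0)         /* re-read the whole file */
            total += n;
        gettimeofday(&t1, NULL);
        fclose(f);

        sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        if (sec <= 0.0)
            sec = 1e-6;                                      /* avoid divide-by-zero on tiny files */
        printf("[Blocksize %7u bytes] [FS Read Speed %.3f MBytes/sec]\n",
               (unsigned)sizes[i], total / (1024.0 * 1024.0) / sec);
    }
    free(buf);
    return 0;
}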
J.F. wrote: Note the big difference between 2K and 32K... that's where fragmentation will make a visible difference in speed.
Torch wrote: The controller will definitely be choking with requests with 2K. However you'll never see such tiny fragments practically, and with larger sizes it makes no difference.

FAT fragments WAY too easily. Just copy a small file... say 1K. Then copy a big file, erase the small file, then copy another big file. TADA! You've got a fragment the size of the small file. FAT is NASTY that way, and that's why MS says to regularly defragment FAT partitions. NTFS isn't nearly as bad, which is one reason (among many) MS switched to it.
Defragging an SSD should never be an option. I'm not sure of MS's underlying wear-leveling mechanism, but it is never good for the memory, and as moonlight has already mentioned, the speed gained from aligning clusters in order (so seeks/jumps don't need to happen) rather than leaving them wherever they happen to be is really not much different from the speed of leaving them fragmented.
This is not like a hard disk, where seeks take a certain amount of time depending on where the heads have to move from and to, and thus can be optimized by aligning the files sequentially. Performance gains from defragging an SSD will be minimal (unless the MS driver is broken/unoptimized in some way, of course), and seeks between raw blocks of data on the underlying NAND chips will always take the same amount of time (it is the code that handles LBAs that can be the real slowdown).
The cost of defragging your SSD could well be your SSD (and at the least, using a safe/uncached method, multiple writes/rewrites to the FAT records will occur, one for every LBA realignment). If fragmentation is a concern on a MS or similar card, copy everything off your disk, format it, then copy it back.
J.F.:
The BlackFin speed results can't apply to fragmentation directly. Of course writing in a chunk smaller than the cluster size is going to take longer than writing a full cluster (by the by, just out of curiosity, what is the cluster size of the disk you tested?) - but you are never going to have anything smaller than a cluster taking less space on the disk than a cluster (thus 1k fragments will be non-existent if your disk has 32k clusters). If the memory stick is formatted properly (so the cluster size reflects the underlying NAND structure) some speed can be gained, but at a cost...
That said, the one thing I really hate about FAT on the PSP (or any SSD) is the slack space (apps like irshell that use a multitude of tiny files never seem to consider this, or the long write time to put many tiny files on there). Defragging isn't going to help that, and especially in the case of multiple small 2k files existing on the drive, it could take a really, really long time running from the PSP.
-----
I won't even touch on the fact that, even if you do defrag the LBAs, you aren't actually defragging the memory of an SSD. Just do a raw NAND dump of a PSP, then install a different firmware and dump again. Compare the LBA values in the physical chip dumps and see how it looks like a gambling house has been shuffling them. Performance on an SSD is not gained by physical location. It will definitely not be any less physically fragmented after either method (copy/format or defrag).
:: fixed ::
FAT32 defaults to a cluster size of 4K, unlike FAT16 which uses 32K. I think that's why many people report 1 or 2 GB sticks being far faster than 4 GB sticks on their PSP. 4 GB or bigger defaults to FAT32 formatting, while below that you can use FAT16.
The stick I used for the test results above was a FAT32-formatted 4 GB stick with a 4K cluster size. I should probably reformat it, changing the cluster size to 32K. That would reduce the slowdown from fragmentation, as 32K is still pretty fast, as the results above show.
FAT32 usually defaults to a smaller cluster size to reduce the amount of space taken by small files. With a 32K cluster size, a 1-byte file will still take 32K of space. For an 80 GB hard drive full of lots of files of all sizes that's been defragmented, that wastes a lot of space, but it would be a plus for a memstick.
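To make the wasted-space point concrete, here is a tiny sketch (plain C, illustrative numbers only) of the rounding that causes it: a file always occupies a whole number of clusters, so the smaller the file and the bigger the cluster, the more slack is lost.

Code:
#include <stdio.h>

/* On-disk space consumed by a file: its size rounded up to whole clusters. */
static unsigned long long allocated(unsigned long long file_size, unsigned cluster)
{
    if (file_size == 0)
        return 0;   /* FAT gives an empty file no clusters at all */
    return ((file_size + cluster - 1) / cluster) * (unsigned long long)cluster;
}

int main(void)
{
    /* the 1-byte example above: ~4 KB lost with 4K clusters, ~32 KB with 32K clusters */
    printf("1 byte   @  4K clusters: %llu bytes on disk\n", allocated(1, 4096));
    printf("1 byte   @ 32K clusters: %llu bytes on disk\n", allocated(1, 32768));
    /* a big file wastes at most one cluster minus one byte, whatever the cluster size */
    printf("13064 KB @ 32K clusters: %llu bytes on disk\n",
           allocated(13064ULL * 1024, 32768));
    return 0;
}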
Well, there's your problem. You don't need a defrag, you need to avoid using the Windows/Linux default format options. They don't know anything about the underlying memory structure, while the PSP's built-in format option does (which, MS being designed by/for Sony, makes that a big "duh"). To put it more simply, a block read/write takes less time than 32 page reads (and by spec, 32 consecutive page writes to a single block should never occur without an erase - how well the MS controller complies with this I can't know, but that could well account for some of the small-size speed hit).
My 4G card formatted by the PSP defaults to FAT32 with a 32k cluster size - the cluster size should be a multiple of the internal NAND's block size rather than a division of a block into multiple clusters (fyi: a 32M card has 16k-sized blocks and uses the same NAND as the fat PSP, 512 bytes by 32 pages per block - the PSP formats it to FAT with a 16k cluster size, go figure).
Seriously, there are very good reasons why manufacturers recommend people stay away from defrag when it comes to memory cards (wear being the main point, as access time will never change - again, unlike a HDD where seeks to fragments take physical time for the heads to reposition). It can speed up access on cards which are not formatted to spec, the flip side being that by simply not formatting to spec you are creating the problem to begin with and increasing wear on any writes to the card.
Simply do a Google search for something like "MS duo defrag safe?" There are a few people who claim performance increases, but I bet they are in the same boat with the format and will eventually lose all their data due to off-spec formatting.
Due to the different things that were said in this thread, I did a test myself.
I created a file with 16 clusters in a row, and the same file but in 16 fragments.
Reading them several times shows no real difference (<1%), which was my guess and what moonlight wrote.
So writing a defrag isn't worth the effort.
Concerning what has been said about the manufacturer telling you not to defrag, it is of course not true.
They just say not to do it __continuously__, and that is because the MS has a limited number of r/w cycles (even if this limit is very high).
So it is much worse to do a format + copy than to do a defrag that will move only what has to be moved...
They would say exactly the same about not __continuously__ writing files to the MS...
@cory1492:
:: 1) you owe me one beer ;-), because you lost your bet: I didn't lose any data, though it is true that I don't claim any performance increase ::
:: 2) whatever you want, nobody cares about your opinion of what should or shouldn't be developed, so next time keep that to yourself ::
:: 3) any judgement of newb-ness you make isn't welcome either; it is not useful, and you are probably somebody else's newb ::
:: on that matter, take example from moonlight; I've never seen him judge anybody, and I feel he has much more standing to do so than you ::
:: 4) even if the information you provide is always welcome ::
--pspZorba--
NO to K1.5 !
pspZorba wrote: Due to the different things that were said in this thread, I did a test myself.

And what was the cluster size? If it's 32 KB, we already said you would see little to no difference. I'd like to see what it is at 4 KB. :)
I will try this weekend with a cluster size of 4K, just to be completely sure.
But I cannot see any reason why the size of the cluster would have any impact on this test.
For me the only impact of the cluster size is on the probability of your files being fragmented, and it is a decreasing function (not sure of my translation); I mean the smaller the cluster, the higher the probability of having fragments.
--pspZorba--
NO to K1.5 !
Hi guys,
my first test wasn't a very good one.
So I created:
file => 16 clusters in a row
file1 => 16 clusters, fragmented by leaving one free cluster between each cluster
file2 => 16 clusters, fragmented (partition start) s0 s2 s4 s6 s8 s10 s12 s14 ......................... s15 s13 s11 s9 s7 s5 s3 s1 (partition end)
With those 3 files I ran two tests:
test1: I read the file in 16 pieces (one cluster after the other)
test2: I read the whole file at once
and the result is very surprising!
test1 : cluster size 32k - 16 reads of 1 cluster
file 77.275 msec
file1 77.688 msec
file2 88.728 msec
test2: cluster size 32k - 1 read of the 16 clusters in once
file 51.816 msec
file1 75.968 msec
file2 87.509 msec
So I guess it is worth doing a defrag, especially if:
- the files are really fragmented
- you read the files at once
--pspZorba--
NO to K1.5 !
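For anyone building fragmented test files like these, or writing the defrag logic pspZorba mentions, the fragment count of a file falls straight out of its FAT chain. Here is a minimal sketch, assuming a FAT16 volume whose FAT has already been read into memory and whose starting cluster is known (reading the boot sector and directory entry is left out):

Code:
#include <stdint.h>
#include <stdio.h>

/* Count fragments by walking a FAT16 cluster chain: every time the next
 * cluster is not the current cluster + 1, a new fragment starts.
 * fat[]  : the FAT16 table, one 16-bit entry per cluster
 * start  : the file's first cluster, taken from its directory entry
 * FAT16 marks end-of-chain with entries >= 0xFFF8. */
static int count_fragments(const uint16_t *fat, uint16_t start)
{
    int fragments = 1;
    uint16_t cur = start;

    if (start < 2)              /* an empty file owns no clusters */
        return 0;
    while (fat[cur] < 0xFFF8) { /* follow the chain until end-of-chain */
        if (fat[cur] != cur + 1)
            fragments++;        /* non-consecutive next cluster => new fragment */
        cur = fat[cur];
    }
    return fragments;
}

int main(void)
{
    /* toy FAT: a file laid out as clusters 2->3->4, then jumping to 10->11 */
    uint16_t fat[16] = { 0 };
    fat[2] = 3; fat[3] = 4; fat[4] = 10; fat[10] = 11; fat[11] = 0xFFFF;
    printf("fragments: %d\n", count_fragments(fat, 2));   /* prints 2 */
    return 0;
}

The same walk is what a defragmenter would use to decide which files are worth moving at all.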
The fact that you had such a fast speed with one read of 16 contiguous clusters is because the memory stick driver detected that they are positioned one after the other and decided to do a DMA read - which as you might expect is faster.
And, as you mentioned, "reading at once" (and writing) is a good habit, even if it can take up a bit more RAM for processing. For instance, in the M33 updater application alex put a 1 MB buffer for the downloads, which drastically sped up the process, as he was flushing the buffer to the memory stick less often and the flushing took less time.
It is always a good thing to have a file in one piece, but not enough of a necessity to defragment the memory stick :)
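The buffering Adrahil describes is easy to apply in any homebrew that streams data to the stick: collect incoming pieces in a big RAM buffer and only touch the memory stick when it fills up. A minimal sketch in plain C - not the actual M33 updater code; the 1 MB size, the function names and the output path are made up for illustration:

Code:
#include <stdio.h>
#include <string.h>

#define FLUSH_SIZE (1024 * 1024)      /* 1 MB staging buffer, as in the M33 example */

static char stage[FLUSH_SIZE];
static size_t used = 0;

/* Append a downloaded piece to the RAM buffer; hit the stick only when it is full. */
static void buffered_write(FILE *out, const void *data, size_t len)
{
    const char *p = data;
    while (len > 0) {
        size_t room = FLUSH_SIZE - used;
        size_t n = len < room ? len : room;
        memcpy(stage + used, p, n);
        used += n;
        p += n;
        len -= n;
        if (used == FLUSH_SIZE) {     /* buffer full: one big memory stick write */
            fwrite(stage, 1, used, out);
            used = 0;
        }
    }
}

static void buffered_flush(FILE *out) /* call once when the transfer is done */
{
    if (used) {
        fwrite(stage, 1, used, out);
        used = 0;
    }
}

int main(void)
{
    FILE *out = fopen("ms0:/download.bin", "wb");   /* hypothetical output file */
    char piece[4096] = { 0 };
    int i;

    if (!out)
        return 1;
    for (i = 0; i < 1000; i++)        /* pretend data arrives in 4 KB pieces */
        buffered_write(out, piece, sizeof(piece));
    buffered_flush(out);
    fclose(out);
    return 0;
}

Instead of a thousand small writes, the stick sees only a handful of 1 MB ones, which is where the speed-up comes from.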
Finally, took long enough. A real answer from a card manufacturer (who'd be the ones handling warranty replacements.)
The official reply with regards to defrag from a SanDisk CSR, in their own words (quoted verbatim):

SanDisk wrote: We do not recommend defrag on cards. This will shorten the life of the card.

They would not comment on whether defrag is considered "normal use" as specified in the warranty, and wanted me to register my cards' serial numbers (which I did, no problem with that sort of thing myself), likely to tie the tickets back to me if I ever need to RMA. If you want a link to the ticket, Zorba, feel free to PM me (funny, it doesn't say anything about __constantly__ or whatever - it has absolutely no qualifications at all).

Sony told me to phone them (it was "too complex" to be handled by email), which would give no real quotable reply, but I expect it would be similar in nature.
"Continuously" is the qualifier that Sony itself uses on its website about defragmenting the MS Duo.
You could ask your manufacturer if doing a backup on a PC, then formatting, then copying the files back from the PC is something they recommend doing continuously.
They will answer: we do not recommend this, it will shorten the MS life... and they will be right!
But anyway, you have to understand that defragmenting is just the same as moving files (and only the files that are fragmented).
So yes, defragmenting is much "less worse" than formatting and rewriting the files.
And yes, moving/removing/formatting/writing/etc-ing files is 'bad' as well for the MS (because of the way it is built), and yes, it will shorten the MS life.
This being said, the question is: is it worth it?
The answer is:
if the HB deals with 'big' files AND if it is well coded (reading files with buffers as big as possible, as Adrahil pointed out), YES, it is worth it.
But due to the limited amount of memory (and probably to laziness), most coders don't read files at once (except for very specific HB), and HBs that deal with lots of files (and files bigger than a cluster) are quite rare on a device such as a PSP.
Conclusion: I doubt that it is worth it.
--pspZorba--
NO to K1.5 !
I think most people are interested in it as concerns game backups (ISO/CSO) and PSX EBOOTs. Some people have reported "stuttering" in the game if they copy the ISO to a partially filled stick (where the ISO is almost certainly fragmented), but plays smoothly if they format the stick and immediately copy the ISO (probably not fragmented). So it all comes down to if you're seeing "issues" on your game. It may or may not help. THEORETICALLY, it can be a factor. In practice, it probably isn't.
pspZorba wrote: So yes, defragmenting is much "less worse" than formatting and rewriting the files.

That is exactly the point I've been trying to get across: defragmenting is not less write-intensive than rewriting the entire disk when fragmentation is bad enough to be a real concern - defrag also basically "shuffles" fewer of the overall memory blocks through wear leveling when compared to a fresh format (any blocks containing sector data that is not written during defrag will not be cycled through the "unused block pool" of the memory chip).
Consider the scenario, consecutively sector-wise on a disk with 32k/cluster allocation:
file 1, 32k fragment 1 of 2 // file 2, a 100 MiB file // file 1, 32k fragment 2 of 2
Either you create fragmentable space by moving the 64k file after the 100M one, or you defragment the 32k file by moving the 100M file (the sizes don't really matter; just substitute bigger files). How would that be less write-intensive than erasing the FAT and just writing those files fresh? And at what point does memory-cached-only data even become an option on a portable, battery-operated device? Even the PS2 memory card had better protection than that (the origin sector is not erased until the destination is successfully written, to prevent full card data loss on power problems).
And I've answered your question all along: not only will the benefit be very little (usually on the level of placebo, I mean: where did that 0.03 of a second go?) - but there are very good reasons beyond what some CSR or web designer has to say to not encourage it.
Come on Cory, of course defragmenting is less write-intensive than a format + re-write of all the clusters.
Imagine that a file is completely fragmented: a defragmenter will write, at worst, all of its clusters once, which is exactly equal to writing all the blocks after formatting,
and it never happens that all the files are completely fragmented,
so re-writing all the files from the PC >= defrag (in terms of clusters to be written),
and you have to add the format part, i.e. erasing the FAT (some clusters) and, if FAT16, erasing the root directory entries, etc.,
so format + re-writing all the files > defrag (in terms of clusters to be written).
Imagine the scenario most likely to exist on a MS: 50% isn't fragmented, the rest is.
Using format + rewrite-all-the-files will write the unfragmented 50% for absolutely nothing, plus the 50% for the fragmented files...
compared to only the 50% that a defragmenter will write.
It doesn't need to be completely fragmented to be a real concern; you only need the files you read heavily to be among the fragmented few percent to have a problem.
I just re-read all your posts in this thread, and NO, you never answered the question other than saying "don't do it because someone told me it was dangerous", which for me is never satisfying, especially when there is absolutely no good reason given.
Don't take a piece of a post out of its context; the 0.03s has to be compared as 0.087 vs 0.051, so it is about 30% faster - yes, it could mean something if there are lots of file IOs.
Don't misunderstand me, I'm not fighting for a defragmenter; my conclusion is that it probably isn't worth it. But I can't let you say that it is because it is dangerous, or that it is more dangerous than format + rewrite. It is just the opposite.
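A back-of-the-envelope version of that counting argument, as a plain C sketch with purely illustrative numbers (it assumes the defragmenter only rewrites clusters belonging to fragmented files and ignores free-space consolidation, which is the objection raised further down):

Code:
#include <stdio.h>

int main(void)
{
    /* hypothetical stick: 32 KB clusters, 2 GB of data, 10% of it in fragmented files */
    const unsigned long long cluster    = 32768;
    const unsigned long long used_bytes = 2ULL * 1024 * 1024 * 1024;
    const unsigned long long frag_bytes = used_bytes / 10;

    unsigned long long used_clusters = used_bytes / cluster;
    unsigned long long frag_clusters = frag_bytes / cluster;

    /* format + copy back rewrites every used cluster; defrag only touches fragmented ones */
    printf("format + copy back : %llu cluster writes\n", used_clusters);
    printf("defrag (best case) : %llu cluster writes\n", frag_clusters);
    return 0;
}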
Actually, I am just writing the code to resize the boot sector when you want to create a DDC/Pandora MS without destroying what you have on it.
=============
@JF
For an ISO I don't think it would make any difference. For me there are two levels:
1) When the system loads pieces of the ISO to execute the game - but that is not intensive enough to see a difference.
2) When the game has to read big pieces of the file at once (otherwise it doesn't make a difference, cf. previous posts).
Big pieces = several clusters = several * 32k (32k is generally the cluster size for FAT16).
Let's imagine that is the case: to read those pieces of the file, the system goes through the driver that reads the ISO, and I doubt the system can optimize the access (cf. Adrahil's post) through the driver.
Maybe someone can confirm this?
--pspZorba--
NO to K1.5 !
pspZorba wrote: Imagine that a file is completely fragmented: a defragmenter will write, at worst, all of its clusters once, which is exactly equal to writing all the blocks after formatting.

While I tend to agree that in most cases you will hit more clusters if a full format actually erases each cluster, I doubt this is what happens. More than likely the FAT (or other table of contents for other file system types) is just cleared and rebuilt, and the old dead data is still on the MS with no entry in the FAT. (Think back to the DOS days of undelete and unformat: the files and data still existed on the disk; just the table of contents was re-written at the time of format and delete.) I'm not sure if this is how the PSP actually does the format or not, but it would have been a big mistake for Sony to clear each cluster. Of course, as you mention, you get exactly one cluster write for each cluster written back while moving your data back on.
Secondly, when you defragment a volume, the clusters will more than likely be moved more than once: AT LEAST once to defragment the FILE (with the potential for more with bad free-space fragmentation) and once (with the potential for more) to defragment the volume's FREE SPACE.
My point is that without a lot of research and thought about what actually happens under the covers in each case, you can't make blanket statements like "defragmenting will ALWAYS be less cluster-intense than formatting".
Just adding my fuel to the fire :)
It's been recommended by Sandisk Memory not to defrag flash drives. Shortens life.