Suppose we have a file N bits long, and we want to compress it losslessly, so that we can recover the original file. There are 2^N possible files N bits long, and so our compression algorithm has to map each of these files to a distinct output. However, we can't express 2^N different files in fewer than N bits: there are only 2^N − 1 strings shorter than N bits. Therefore, if compression shortens some files, it has to have some files that lengthen under compression, to balance out the ones that shorten. This means that a compression algorithm can only compress certain files, and it actually has to lengthen some. It also means that, on average, compressing a random file can't shorten it, but might well lengthen it.

Practical compression algorithms work because we don't usually use random files. Most of the files we use have some sort of structure or other properties, whether they're text or program executables or meaningful images. By using a good compression algorithm, we can dramatically shorten files of the types we normally use.

However, the compressed file is not one of those types. If the compression algorithm is good, most of the structure and redundancy have been squeezed out, and what's left looks pretty much like randomness. No compression algorithm, as we've seen, can effectively compress a random file, and that applies to a random-looking file too. Therefore, trying to re-compress a compressed file won't shorten it significantly, and might well lengthen it some.

There is one common exception: if you compress a large rectangle of pixels (especially one with a lot of background color, or an animation), you can very often compress twice with good results. (The reason? The format only has so many bits to specify the lookback distance and the match length, so a single large repeated pattern is encoded in several pieces, and those pieces are highly compressible.)
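To see these claims in action, here's a minimal sketch using Python's standard zlib module (a DEFLATE implementation). The inputs are illustrative stand-ins I chose, and the exact byte counts will vary with the library and compression level, so treat the numbers as qualitative:

```python
import os
import zlib


def sizes(label: str, data: bytes) -> None:
    """Compress data once, then compress the result again, and report sizes."""
    once = zlib.compress(data)
    twice = zlib.compress(once)
    print(f"{label:>16}: original={len(data):>9,}  "
          f"compressed={len(once):>9,}  re-compressed={len(twice):>9,}")


# 1. A random file: no structure to squeeze out, so compression can't
#    shorten it -- the container overhead actually lengthens it a little.
sizes("random bytes", os.urandom(100_000))

# 2. A structured file (repetitive text): shrinks dramatically, but the
#    output looks random, so a second pass gains essentially nothing.
sizes("repetitive text", b"the quick brown fox jumps over the lazy dog\n" * 2_000)

# 3. A large flat region, like a background-colored rectangle of pixels.
#    DEFLATE matches max out at length 258 and distance 32 KiB, so one
#    huge run is encoded as thousands of near-identical little pieces --
#    and those pieces are themselves compressible on a second pass.
sizes("1 MB of zeros", b"\x00" * 1_000_000)
```

On a typical run, the random bytes come out slightly larger after one pass, the repetitive text shrinks enormously but gains nothing from the second pass, and the zero-filled buffer (standing in for a flat image region) shrinks on the first pass and usually shrinks again on the second.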