Supreme Compression
From: Michael
Newsgroups:
Sat, 16 May 2020 09:12 UTC
So I've been stressing over this compression idea I've come up with. As far as I can tell it does not break any rules of math, but it does insane things. So I copyrighted it and I'm letting anyone make it: yes, I've paid a copyright fee, which means no one can patent this, but anyone can build it.
The trick involves the very math I've covered in my other posts, with a few tricks to make it work: combinatorics.
First, you take two bits (or three, if you want to make your processor as hot as the sun) and turn all but two of the options into combinations (that's for two bits; it's a little different for three). This is p. Let's say p costs three bits this time.
Next we need a library; Windows on the hidden partition will work for that.
You then need a range to search in. This is m, and it needs to be a defined range: for example 64 to 127, which takes 6 bits to cover. The value inside that range is n. Then we compute n! / (r! (n-r)!), where r is the number of combinations we chose in step one (naturally occurring).

We check every possible variation of m until we get a ten-bit value that matches what we are trying to compress. We only need to try so many options before we are likely to hit the correct one. We are not enumerating the full set of options; we are reducing them all down to ten-bit options. For example, n=90, r=44 would normally create up to 470 bits of options. Yes, that is more than the 100 needed to be mostly assured of success, so getting 10 bits in exactly the way we need is easy.

Sometimes there will be problems. I allow for that. I call it a null when we land on a value too low to build ten bits from (or whatever we are creating). Alternatively, step one can include a skip-next function. Either way, a null will sometimes happen. We win as long as we keep compressing more than our costs.
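The binomial count at the heart of the step above, n! / (r! (n-r)!), is easy to check directly. A minimal sketch in Python using the standard-library `math.comb`, with a small sanity case plus the n=90, r=44 example (the `index_bits` value is simply the number of bits needed to name one specific combination out of that count):

```python
import math

# Number of ways to choose r items from n: n! / (r! * (n - r)!)
def combinations(n: int, r: int) -> int:
    return math.comb(n, r)

# Small sanity check: C(10, 4)
print(combinations(10, 4))  # 210

# The post's example: bits needed to index one combination out of C(90, 44)
index_bits = combinations(90, 44).bit_length()
print(index_bits)
```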
Combinations allow that; binary on its own cannot. Having a stable dictionary is also important.
This is repeated until all of the data has been compressed.
If, and I mean if, it does not work, we shift the starting point by a bit and run it again. This is part of the reason it can be a computational pain: trying each variation, checking it, then checking the next, and the next, and so forth is a big strain on processors and RAM.
Now here is the rub. This is, at minimum, an NP-Medium problem, and if you increase the size of the file or the compression ratio it can go all the way to NP-Complete.
This system will ultimately be used by every software company; the compression is worth it if they can do it, because bandwidth is a big deal and decompression is not. Unfortunately it will also be used by hackers, because viruses don't need ten gigabytes or more, so they will have a lower computational need.
It's not violating the pigeonhole principle, but don't ask me why. I just know I can make the combinations grow huge for a low overhead and provide enough variables to account for every possible outcome.
Find flaws if you can; I cannot. Nor can I turn it into software, since I don't program.
Re: Supreme Compression
From: Fibonacci Code
Newsgroups:
Thu, 11 Jun 2020 13:16 UTC
I don't understand how it works. Could you give a concrete, step-by-step example?
I do write programs, but your dictionary counts toward the compressed size as well. If it didn't, you could just give a URL pointing to a big file on the internet, which works the same way.