the first thing you have to do is hate yourself.
the second thing is, you need to come up with a justification to reinvent the wheel.
the third is deluding yourself that you did it first before everyone else.
after the hard initial setup, it just flows from there.
Math
I'd hire some people in India to do it.
I'd make a compression algorithm no better than gzip but write it in pure rust so it's revolutionary.
I'd make it depend on every software imaginable in rust, publish a hello world that takes up 4GB
You should accept that you are just a brainlet and all of that is beyond your level, or pick up a book and start learning
lossy compression with a fine-tuned and sophisticated inferencer to try to reconstruct the original data.
devilish
Keep just enough data so that AI can guess the missing parts when "uncompressing".
just watch the tv show about the geeks
Make it overly complicated, push it into some open source project, get bullied by sock puppets into giving a state-sponsored spy access rights.
You cannot compress random data. Put another way, compression is just replacing large known/non-random data chunks with smaller values
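A counting argument backs this up: there are 256^n possible n-byte files but fewer than 256^n shorter ones, so no lossless codec can shrink all of them. A quick sketch with Python's stdlib zlib shows the practical side - random bytes don't compress, repeated chunks do:

```python
import os
import zlib

random_data = os.urandom(4096)  # ~8 bits of entropy per byte: nothing to replace
repetitive = b"GET /index.html HTTP/1.1\r\n" * 160  # one known chunk, repeated

packed_random = zlib.compress(random_data, 9)
packed_rep = zlib.compress(repetitive, 9)

# Random input actually GROWS: deflate falls back to stored blocks
# plus header/checksum overhead.
print(len(random_data), "->", len(packed_random))

# The repeated chunk collapses to a tiny fraction of its size.
print(len(repetitive), "->", len(packed_rep))
```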
Also see
Why make something so revolutionary instead of just adding a backdoor and then selling the exploit to some government for millions of dollars?
zplacebo with guaranteed 0% compression
By adding MORE data and lying.
Revert the smugapu.png. Your idea already exists, and it's called Google Brotli. The story is pretty simple. Igor Pavlov was dominating benchmarks with his 7-Zip code, but his LZMA implementation fell flat on its face in terms of CPU performance. The one mitigating factor was that you could multithread the main compression and decompression. Cue Facebook and their billions of baby pictures, dinner pictures, etc, etc. They needed an algorithm to heavily compress all their data or face exponentially higher costs deploying more drives in their data centers. Thus was born zstd - the only algorithm that can compete with LZMA. It even supports threads! (...with some boring details I'm not going to get into right now). Where does Brotli come into this, you may ask? Well, Google was pissed that Facebook was stealing their spotlight, and decided they too needed a compression algorithm. Instead of taking the smart approach like Nvidia and Microsoft - forking a decent enough implementation of DEFLATE and redoing it in some areas - they decided they were going to be special. By special I mean shipping a dog-shit tier compressor to compete with zstd for web usage. By web usage I mean there's a baked-in static dictionary of stuff you'd expect to see on the western web: random snippets of HTML, common phrases, common bits of sentences; it's hard for me to do it justice. Imagine you had access to Google's search index, exported the top several thousand patterns, and let the compressor reference them arbitrarily. That's basically Google Brotli. They "added more data" and lied about the performance in order to compete with Facebook.
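The ratio-versus-CPU tradeoff the post pins on LZMA is easy to see with Python's stdlib, which ships both zlib and lzma. A rough sketch, not a serious benchmark (the sample data is an arbitrary stand-in for real text):

```python
import lzma
import time
import zlib

# Mildly redundant sample data (assumed stand-in for real logs/text).
data = b"the quick brown fox jumps over the lazy dog. " * 20_000

results = {}
for name, codec in (("zlib -9", lambda d: zlib.compress(d, 9)),
                    ("lzma", lzma.compress)):
    t0 = time.perf_counter()
    packed = codec(data)
    results[name] = (len(packed), time.perf_counter() - t0)
    print(name, len(data), "->", results[name])

# Typically LZMA wins on ratio and loses badly on wall-clock time -
# exactly the gap zstd was built to close.
```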
time for sleep
I am not seeing a problem here. If you want to compress data that is mostly western-web-related stuff, then you build a dictionary of western-web-related stuff. That is literally how data compression works
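That "hidden dictionary" trick isn't exotic - DEFLATE has supported preset dictionaries forever, and Python's zlib exposes it via zdict. A small sketch (the dictionary contents here are invented web-ish boilerplate, the same idea as Brotli's built-in one):

```python
import zlib

# Preset dictionary of strings we EXPECT to see in the traffic
# (hypothetical contents, for illustration only).
web_dict = (b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
            b"<html><head><title></title></head><body></body></html>")

msg = b"<html><head><title>hello world</title></head><body>hello world</body></html>"

plain = zlib.compress(msg, 9)

co = zlib.compressobj(level=9, zdict=web_dict)
with_dict = co.compress(msg) + co.flush()

# Matches can point into the preset dictionary, so short payloads
# that resemble it compress much better.
print(len(msg), len(plain), len(with_dict))

# Decompression needs the exact same dictionary on the other end.
do = zlib.decompressobj(zdict=web_dict)
roundtrip = do.decompress(with_dict) + do.flush()
```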
the singularity algorithm: makes the file infinitely small
echo -n > file-to-compress
Think about it really hard
Study algebraic topology and cohomologies.
That's not something you invent, that's the result of continued research and existing tools.
I wouldn't.
Is there a need for such an algorithm? What's wrong with the current ones?
zis is uhhh puhied pahiper
>guy sounded like he was either Chinese or moronic
How did they get away with it
I wanted to enjoy Stargate Atlantis but the Wraith always struck me as an antisemitic trope
gotcha jannie
Like this:
how are you? ---> h
it only works for "how are you?" tho...
So you're saying build an enormous dictionary with every single sentence, and then encode it via hashing it?
I make the enormous dictionary with every sentence in existence, then you connect to my site and compress your shit into my .galc format (Gay Ass Lazy Compression), and decompress it the same way.
Btw you have to pay a monthly subscription fee, share your personal data with me, and I also reserve the right to frick your mom every weekend.
Isn't it a marvelous idea?
very good. subscribed!
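Joke pricing aside, the scheme in this exchange is just a shared-dictionary codec taken to the extreme: both sides hold the same table of sentences and ship only a short ID. A toy sketch (table contents invented; real schemes index or hash far smarter):

```python
# Toy "sentence table" codec: both ends must share the exact same table.
SENTENCES = [
    b"how are you?",
    b"fine, thanks.",
    b"the first thing you have to do is hate yourself.",
]

# One-byte code per known sentence, so this toy tops out at 256 entries.
ENCODE = {s: bytes([i]) for i, s in enumerate(SENTENCES)}

def compress(sentence: bytes) -> bytes:
    # Only works for sentences already in the shared table -
    # which is exactly the joke in the thread.
    return ENCODE[sentence]

def decompress(code: bytes) -> bytes:
    return SENTENCES[code[0]]

packed = compress(b"how are you?")  # 12 bytes in, 1 byte out
```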
What car would a wraith drive?
Coding theory
remove half the bytes