Rocksolid Light



Subject                           Author
* Neural Image Compression        Justin Tan
+- Re: Neural Image Compression   Stephen Wolstenholme
`- Re: Neural Image Compression   Eli the Bearded

Subject: Neural Image Compression
From: Justin Tan
Newsgroups: comp.compression
Date: Mon, 21 Sep 2020 04:29 UTC
X-Received: by 2002:a05:620a:2118:: with SMTP id l24mr32740133qkl.298.1600662589502;
Sun, 20 Sep 2020 21:29:49 -0700 (PDT)
X-Received: by 2002:a25:244a:: with SMTP id k71mr62155583ybk.504.1600662589213;
Sun, 20 Sep 2020 21:29:49 -0700 (PDT)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!proxad.net!feeder1-2.proxad.net!209.85.160.216.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.compression
Date: Sun, 20 Sep 2020 21:29:48 -0700 (PDT)
Complaints-To: groups-abuse@google.com
Injection-Info: google-groups.googlegroups.com; posting-host=121.214.98.131; posting-account=VsJkRgoAAABDIYRMrOFKY_GXWB-UtSAQ
NNTP-Posting-Host: 121.214.98.131
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <23ba620b-a85c-41d8-ad91-bacc2cffe70fn@googlegroups.com>
Subject: Neural Image Compression
From: justin....@coepp.org.au (Justin Tan)
Injection-Date: Mon, 21 Sep 2020 04:29:49 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Hi,

I'd like to share a side project I worked on which generalizes transform coding to the nonlinear case. Here the transforms are represented by neural networks, which learn the appropriate form of the transform. The transformed representation is then quantized and encoded with a standard entropy coder.
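To make the pipeline concrete, here is a toy sketch of transform coding collapsed back to the linear case: a fixed 2x4 matrix stands in for the learned analysis network, followed by scalar quantization and an ideal entropy-coder bit-cost estimate. All names and numbers below are illustrative assumptions, not taken from the HiFIC code.

```python
import math
from collections import Counter

def analysis_transform(block, weights):
    # Stand-in for the learned encoder network; a real model is a deep
    # nonlinear network, here it is just a fixed linear map.
    return [sum(w * x for w, x in zip(row, block)) for row in weights]

def quantize(latents, step=1.0):
    # Scalar quantization of the transform output.
    return [round(y / step) for y in latents]

def entropy_bits(symbols):
    # Shannon-entropy estimate: the bit cost an ideal entropy coder
    # would approach for this symbol stream.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c * math.log2(c / n) for c in counts.values())

# Toy 4-pixel "image" blocks and a fixed 2x4 "learned" transform
# (rows: average feature and alternating-difference feature).
weights = [[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5]]
blocks = [[10, 12, 11, 13], [50, 52, 49, 51]]

symbols = []
for b in blocks:
    symbols.extend(quantize(analysis_transform(b, weights)))

print(symbols)                # [23, -2, 101, -2]
print(entropy_bits(symbols))  # 6.0
```

In the neural version, the linear map is replaced by a learned nonlinear network, and the entropy model itself is learned jointly with the transform.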

Github: https://github.com/Justin-Tan/high-fidelity-generative-compression
Interactive Demo: https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb

There are some obvious shortcomings to this method: it only handles image data, cannot be adjusted to a variable bitrate short of training a different model, and is too slow for practical applications.
 
I'm not a traditional compression expert, so would appreciate any insight about the deficiencies of this method from those who are. Note this is not my original idea and is a reimplementation.


Subject: Re: Neural Image Compression
From: Stephen Wolstenholme
Newsgroups: comp.compression
Organization: Neural Planner Software Ltd
Date: Mon, 21 Sep 2020 14:45 UTC
References: 1
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder.eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: ste...@easynn.com (Stephen Wolstenholme)
Newsgroups: comp.compression
Subject: Re: Neural Image Compression
Date: Mon, 21 Sep 2020 15:45:51 +0100
Organization: Neural Planner Software Ltd
Lines: 24
Message-ID: <plehmf91qo6eltn58j2fmeahmcvjmqtt6v@4ax.com>
References: <23ba620b-a85c-41d8-ad91-bacc2cffe70fn@googlegroups.com>
Reply-To: steve@easynn.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Injection-Info: reader02.eternal-september.org; posting-host="98447efa9e1244aae5398408acf4b4b1";
logging-data="19845"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18h5kvTUAKZDhy+S6wo97qD1RPM0945RGE="
User-Agent: ForteAgent/8.00.32.1272
Cancel-Lock: sha1:41wCMCz9I0N7axRWslldK7RJLfs=
On Sun, 20 Sep 2020 21:29:48 -0700 (PDT), Justin Tan
<justin.tan@coepp.org.au> wrote:

> Hi,
>
> I'd like to share a side project I worked on which generalizes
> transform coding to the nonlinear case. Here the transforms are
> represented by neural networks, which learn the appropriate form of
> the transform. The transformed representation is then quantized and
> encoded with a standard entropy coder.
>
> Github: https://github.com/Justin-Tan/high-fidelity-generative-compression
> Interactive Demo: https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb
>
> There are some obvious shortcomings to this method: it only handles
> image data, cannot be adjusted to a variable bitrate short of training
> a different model, and is too slow for practical applications.
>
> I'm not a traditional compression expert, so would appreciate any
> insight about the deficiencies of this method from those who are. Note
> this is not my original idea and is a reimplementation.

EasyNN has image mode built in. I don't know how well it compresses
images because the person who tested and validated image encoding has
retired. I wrote the code a long time ago but I forget how it works.
I'm getting old!

Steve
 
--
http://www.npsnn.com



Subject: Re: Neural Image Compression
From: Eli the Bearded
Newsgroups: comp.compression
Organization: Some absurd concept
Date: Mon, 21 Sep 2020 17:50 UTC
References: 1
Path: i2pn2.org!i2pn.org!paganini.bofh.team!goblin1!goblin3!goblin.stu.neva.ru!panix!qz!not-for-mail
From: *...@eli.users.panix.com (Eli the Bearded)
Newsgroups: comp.compression
Subject: Re: Neural Image Compression
Date: Mon, 21 Sep 2020 17:50:31 +0000 (UTC)
Organization: Some absurd concept
Lines: 32
Message-ID: <eli$2009211350@qaz.wtf>
References: <23ba620b-a85c-41d8-ad91-bacc2cffe70fn@googlegroups.com>
NNTP-Posting-Host: panix5.panix.com
X-Trace: reader1.panix.com 1600710631 26375 166.84.1.5 (21 Sep 2020 17:50:31 GMT)
X-Complaints-To: abuse@panix.com
NNTP-Posting-Date: Mon, 21 Sep 2020 17:50:31 +0000 (UTC)
X-Liz: It's actually happened, the entire Internet is a massive game of Redcode
X-Motto: "Erosion of rights never seems to reverse itself." -- kenny@panix
X-US-Congress: Moronic Fucks.
X-Attribution: EtB
XFrom: is a real address
Encrypted: double rot-13
User-Agent: Vectrex rn 2.1 (beta)
In comp.compression, Justin Tan <justin.tan@coepp.org.au> wrote:
> I'd like to share a side project I worked on which generalizes transform
> coding to the nonlinear case. Here the transforms are represented by
> neural networks, which learn the appropriate form of the transform. The
> transformed representation is then quantized and encoded with a standard
> entropy coder.
>
> Github: https://github.com/Justin-Tan/high-fidelity-generative-compression
> Interactive Demo:
> https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb
>
> There are some obvious shortcomings to this method: it only handles
> image data, cannot be adjusted to a variable bitrate short of training
> a different model, and is too slow for practical applications.

Also:

   Clone repo and grab the model checkpoint (around 2 GB).

If you need 2GB of data around to compress / decompress images, you need
a lot of images before this starts "winning".
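As a rough back-of-envelope version of that break-even point (the 50 KB per-image saving below is a purely hypothetical number, not a measurement):

```python
# How many images before keeping a 2 GB checkpoint around pays for itself,
# assuming a (hypothetical) saving of 50 KB per image over a classical codec?
checkpoint_bytes = 2 * 1024**3   # ~2 GB model checkpoint
saving_per_image = 50 * 1024     # assumed 50 KB saved per image
break_even = -(-checkpoint_bytes // saving_per_image)  # ceiling division
print(break_even)  # 41944
```

So on those assumptions you would need tens of thousands of images before the storage cost of the model is recouped, though in practice the decoder model could be distributed once, like any codec binary.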

> I'm not a traditional compression expert, so would appreciate any
> insight about the deficiencies of this method from those who are. Note
> this is not my original idea and is a reimplementation.

I'm no expert in compression; I just read this group for the occasional
insight.

Elijah
------
particularly interested in image compression

