
devel / comp.arch / Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

Subject -- Author
* Someone's Trying Again (Ascenium) -- Quadibloc
+* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
|`- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
+* Re: Someone's Trying Again (Ascenium) -- luke.l...@gmail.com
|+- Re: Someone's Trying Again (Ascenium) -- John Dallman
|`* Re: Someone's Trying Again (Ascenium) -- George Neuner
| `- Re: Someone's Trying Again (Ascenium) -- Chris M. Thomasson
+* Re: Someone's Trying Again (Ascenium) -- David Brown
|`* Re: Someone's Trying Again (Ascenium) -- Marcus
| +* Re: Someone's Trying Again (Ascenium) -- David Brown
| |+* Re: Someone's Trying Again (Ascenium) -- Theo Markettos
| ||+* Re: Someone's Trying Again (Ascenium) -- Marcus
| |||+* Re: Someone's Trying Again (Ascenium) -- David Brown
| ||||`* Re: Power efficient neural networks (was: Someone's Trying Again -- Marcus
| |||| +- Re: Power efficient neural networks (was: Someone's Trying Again -- David Brown
| |||| `* Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| ||||  +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  |+* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  ||`* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  || |`* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| ||||  || | `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |  `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |   `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |    `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |     `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      +- Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      |`* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      | +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  || |      | |`* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      | | `- Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |      | +- Re: Power efficient neural networks (was: Someone's Trying Again -- John Dallman
| ||||  || |      | +- Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      | `* Cooling (was: Power efficient neural networks) -- Anton Ertl
| ||||  || |      |  `- Re: Cooling (was: Power efficient neural networks) -- Thomas Koenig
| ||||  || |      `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |       `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |        `* Re: Power efficient neural networks -- antispam
| ||||  || |         `* Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          +* Re: Power efficient neural networks -- MitchAlsup
| ||||  || |          |`* Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          | `* Re: Power efficient neural networks -- MitchAlsup
| ||||  || |          |  `- Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          `* Re: Power efficient neural networks -- antispam
| ||||  || |           `- Re: Power efficient neural networks -- Thomas Koenig
| ||||  || `* Re: Power efficient neural networks -- Stefan Monnier
| ||||  ||  `* Re: Power efficient neural networks -- Quadibloc
| ||||  ||   `- Re: Power efficient neural networks -- Quadibloc
| ||||  |`- Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| ||||  `* Re: Power efficient neural networks (was: Someone's Trying Again -- David Brown
| ||||   `- Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| |||+- Re: Someone's Trying Again (Ascenium) -- Stefan Monnier
| |||`- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| ||`- Re: Someone's Trying Again (Ascenium) -- Marcus
| |`* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| | +- Re: Someone's Trying Again (Ascenium) -- chris
| | `* Re: Someone's Trying Again (Ascenium) -- George Neuner
| |  +- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |  `* Re: Someone's Trying Again (Ascenium) -- Anton Ertl
| |   +* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   |+* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   ||`* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   || `* Re: Someone's Trying Again (Ascenium) -- Marcus
| |   ||  `* Re: Someone's Trying Again (Ascenium) -- Stefan Monnier
| |   ||   `- Re: Someone's Trying Again (Ascenium) -- Marcus
| |   |`- Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   +* Re: Someone's Trying Again (Ascenium) -- George Neuner
| |   |+- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   |+* Re: Someone's Trying Again (Ascenium) -- Marcus
| |   ||`* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   || +* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || |`* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || | `* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   || |  `* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || |   `- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || `* Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  +* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |`* Re: Someone's Trying Again (Ascenium) -- pec...@gmail.com
| |   ||  | `* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |  +* Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  |  |`* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |  | `- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  |  +* Re: Someone's Trying Again (Ascenium) -- Bill Findlay
| |   ||  |  |`- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   ||  |  `* Re: Someone's Trying Again (Ascenium) -- pec...@gmail.com
| |   ||  |   `- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  `- Re: Someone's Trying Again (Ascenium) -- antispam
| |   |`* Parallelization (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| |   | +- Re: Parallelization -- Stefan Monnier
| |   | +* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | |+* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| |   | ||+* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Chris M. Thomasson
| |   | |||`- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | ||+- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | ||+* Re: Parallelization -- Stefan Monnier
| |   | |||`* Re: Parallelization -- Branimir Maksimovic
| |   | ||| `* Re: Parallelization -- Stefan Monnier
| |   | |||  `* Re: Parallelization -- Branimir Maksimovic
| |   | |||   `- Re: Parallelization -- Stefan Monnier
| |   | ||`- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| |   | |`* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Marcus
| |   | `* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Chris M. Thomasson
| |   `* Re: Someone's Trying Again (Ascenium) -- Tim Rentsch
| +- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| `* Re: wonderful compilers, or Someone's Trying Again (Ascenium) -- John Levine
`* Re: Someone's Trying Again (Ascenium) -- Theo Markettos

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scnsn9$63f$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18772&group=comp.arch#18772

 by: Stephen Fuld - Wed, 14 Jul 2021 23:41 UTC

On 7/14/2021 12:57 AM, Marcus wrote:
> On 2021-07-14 09:01, David Brown wrote:
>> On 14/07/2021 08:01, Marcus wrote:
>>> On 2021-07-13, Theo Markettos wrote:
>>>
>>> [snip]
>>>
>>>> The hardware is hugely worse in size and power efficiency than the
>>>> neural
>>>> network between our ears, but we pay that cost.
>>>>
>>>
>>> I personally think that the proper implementation of a power efficient
>>> neural network requires two things:
>>>
>>> 1) Memory (signals and weights) should be co-located with the ALU:s (or
>>>     distributed across the compute matrix, if you will).
>>>
>>> 2) Compute cells should only be active when activated by a signal.
>>>     Perhaps the design should not be clocked by a global clock at all?
>>>
>>
>> I agree on both accounts.  (I haven't looked much at neural networks
>> since university, but I assume the principles haven't changed.)
>>
>> A biological neuron encompasses its own memory (weights), its own
>> processing, its own IO, its own learning system.  To make really
>> powerful artificial neural networks, the component parts need that too.
>>   Then you can scale the whole thing by adding more of the same.
>
> Exactly. You'll not be bounded by memory bandwidth or similar - it's a
> truly distributed system that should scale very well.

The problem is the interconnect. In true, i.e. biological, neural nets,
a neuron receives input from thousands of other (apparently, but not
really, random) neurons. The "wires", a.k.a. axons, are each insulated
(by glial cells) to prevent cross talk. You can't replicate this at scale
in a silicon chip without thousands of layers of interconnect.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
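
A rough back-of-the-envelope illustration of why that fan-in is the hard
part (a Python sketch; the neuron count, fan-in, wire pitch and average
wire length below are assumed for illustration, not figures from the post):

# Back-of-the-envelope: dedicated wiring for brain-like fan-in on a planar chip.
# All numbers below are illustrative assumptions, not measurements.

neurons = 1_000_000          # modest network; a brain has ~1e11 neurons
fan_in = 1_000               # inputs per neuron; biology is ~1e3..1e4
wires = neurons * fan_in     # dedicated point-to-point links: 1e9

wire_pitch_m = 100e-9        # assumed routing pitch per wire track
avg_wire_len_m = 1e-3        # assumed average link length (1 mm)

area_per_layer_m2 = wires * avg_wire_len_m * wire_pitch_m   # track area needed
die_area_m2 = 1e-4                                          # a ~1 cm^2 die

layers_needed = area_per_layer_m2 / die_area_m2
print(f"{wires:.1e} wires -> {layers_needed:.0f} routing layers on a 1 cm^2 die")
# ~1e9 wires * 1 mm * 100 nm of track is about 0.1 m^2 of routing, i.e. on
# the order of a thousand metal layers, versus the ~15 real processes offer.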

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18773&group=comp.arch#18773

 by: MitchAlsup - Wed, 14 Jul 2021 23:56 UTC

On Wednesday, July 14, 2021 at 6:41:31 PM UTC-5, Stephen Fuld wrote:
> On 7/14/2021 12:57 AM, Marcus wrote:
> > On 2021-07-14 09:01, David Brown wrote:
> >> On 14/07/2021 08:01, Marcus wrote:
> >>> On 2021-07-13, Theo Markettos wrote:
> >>>
> >>> [snip]
> >>>
> >>>> The hardware is hugely worse in size and power efficiency than the
> >>>> neural
> >>>> network between our ears, but we pay that cost.
> >>>>
> >>>
> >>> I personally think that the proper implementation of a power efficient
> >>> neural network requires two things:
> >>>
> >>> 1) Memory (signals and weights) should be co-located with the ALU:s (or
> >>> distributed across the compute matrix, if you will).
> >>>
> >>> 2) Compute cells should only be active when activated by a signal.
> >>> Perhaps the design should not be clocked by a global clock at all?
> >>>
> >>
> >> I agree on both accounts. (I haven't looked much at neural networks
> >> since university, but I assume the principles haven't changed.)
> >>
> >> A biological neuron encompasses its own memory (weights), its own
> >> processing, its own IO, its own learning system. To make really
> >> powerful artificial neural networks, the component parts need that too.
> >> Then you can scale the whole thing by adding more of the same.
> >
> > Exactly. You'll not be bounded by memory bandwidth or similar - it's a
> > truly distributed system that should scale very well.
> The problem is the interconnect. In true, i.e. biological, neural nets,
> a neuron receives input from thousands of other (apparently, but not
> really, random) neurons. The "wires", AKA axons are each insulated (by
> Glial cells) to prevent cross talk. You can't replicate this at scale
> in a silicon chip without thousands of layers of interconnect.
<
Let's make that thousands of layers of transistors! You need to make both
the transistors and the interconnect 3D.
>
>
> --
> - Stephen Fuld
> (e-mail address disguised to prevent spam)

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18775&group=comp.arch#18775

 by: Quadibloc - Thu, 15 Jul 2021 01:01 UTC

On Wednesday, July 14, 2021 at 5:56:47 PM UTC-6, MitchAlsup wrote:
> On Wednesday, July 14, 2021 at 6:41:31 PM UTC-5, Stephen Fuld wrote:

> > The problem is the interconnect. In true, i.e. biological, neural nets,
> > a neuron receives input from thousands of other (apparently, but not
> > really, random) neurons. The "wires", AKA axons are each insulated (by
> > Glial cells) to prevent cross talk. You can't replicate this at scale
> > in a silicon chip without thousands of layers of interconnect.

> Lets make that thousands of layers of transistors ! You need to make both
> the transistors and the interconnect 3D.

Ideally. But with current technology, thermal issues militate against that.

And with a hundred layers of interconnect, and enough transistors on the
substrate, one could make the same circuit as one could with a hundred layers
of transistors - the interconnect runs would just be ten times longer.

John Savard
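
Where the factor of ten comes from (a sketch, assuming the flattened
version keeps the same device count and that typical interconnect runs
scale with the linear dimension of the layout):

import math

# Collapse N transistor layers onto one layer of substrate: the same
# devices now need N times the area, so linear distances (and hence
# average wire runs) grow by sqrt(N).
layers = 100
area_factor = layers                     # 100x the area
wire_length_factor = math.sqrt(layers)   # ~10x longer runs
print(area_factor, wire_length_factor)   # 100 10.0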

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18776&group=comp.arch#18776

 by: Quadibloc - Thu, 15 Jul 2021 01:11 UTC

On Wednesday, July 14, 2021 at 7:01:49 PM UTC-6, Quadibloc wrote:
> On Wednesday, July 14, 2021 at 5:56:47 PM UTC-6, MitchAlsup wrote:

> > Lets make that thousands of layers of transistors ! You need to make both
> > the transistors and the interconnect 3D.
> Ideally. But with current technology, thermal issues mitigate against that.

Of course, that might be changing soon:

https://www.extremetech.com/computing/324625-tsmc-mulls-on-chip-water-cooling-for-future-high-performance-silicon

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18777&group=comp.arch#18777

 by: MitchAlsup - Thu, 15 Jul 2021 01:45 UTC

On Wednesday, July 14, 2021 at 8:11:33 PM UTC-5, Quadibloc wrote:
> On Wednesday, July 14, 2021 at 7:01:49 PM UTC-6, Quadibloc wrote:
> > On Wednesday, July 14, 2021 at 5:56:47 PM UTC-6, MitchAlsup wrote:
>
> > > Lets make that thousands of layers of transistors ! You need to make both
> > > the transistors and the interconnect 3D.
> > Ideally. But with current technology, thermal issues mitigate against that.
> Of course, that might be changing soon:
>
> https://www.extremetech.com/computing/324625-tsmc-mulls-on-chip-water-cooling-for-future-high-performance-silicon
<
This sounds a lot like what Stanford was experimenting with in the 1992-ish
time frame. Stanford ultimately got roughly 1000 W/sq-cm.
>
> John Savard

Re: Power efficient neural networks

<jwvim1cqrge.fsf-monnier+comp.arch@gnu.org>

https://www.novabbs.com/devel/article-flat.php?id=18778&group=comp.arch#18778

 by: Stefan Monnier - Thu, 15 Jul 2021 01:53 UTC

Quadibloc [2021-07-14 18:11:32] wrote:
> On Wednesday, July 14, 2021 at 7:01:49 PM UTC-6, Quadibloc wrote:
>> On Wednesday, July 14, 2021 at 5:56:47 PM UTC-6, MitchAlsup wrote:
>> > Lets make that thousands of layers of transistors ! You need to make both
>> > the transistors and the interconnect 3D.
>> Ideally. But with current technology, thermal issues mitigate against that.
> Of course, that might be changing soon:
> https://www.extremetech.com/computing/324625-tsmc-mulls-on-chip-water-cooling-for-future-high-performance-silicon

That seems irrelevant: the real problem is not how to move power away
from the chip, but how to reduce the power per unit of work so as to
last longer on the same battery, or so as to use a smaller battery and
make the device lighter.

I'm thinking here about mobile devices, but the same is true for
data centers. The only exception seems to be desktops, where people
don't seem to care about the cost of power consumption, so they're
willing to waste money, space, and decibels on heat removal.
But I hear that the market for desktops is shrinking pretty fast.

Stefan

Re: Power efficient neural networks

<ccf02c6b-5ec0-4ff7-9b3d-fdd5c1aa87b3n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18779&group=comp.arch#18779

 by: Quadibloc - Thu, 15 Jul 2021 02:39 UTC

On Wednesday, July 14, 2021 at 7:53:27 PM UTC-6, Stefan Monnier wrote:

> That seems irrelevant: the real problem is not how to move power away
> from the chip, but how to reduce the power per unit of work so as to
> last longer on the same battery, or so as to use a smaller battery and
> make the device lighter.

Irrelevant? The real problem is how to get the work *done*. If power consumption
can be reduced, great. But failing that, removing more heat is the second-best
way to allow more transistors to switch in a tiny space in a given time.

The goal is... to solve problems. To find answers.

John Savard

Re: Power efficient neural networks

<2dbf50fc-8c14-4bab-8376-2f4e39018f63n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18780&group=comp.arch#18780

 by: Quadibloc - Thu, 15 Jul 2021 02:41 UTC

On Wednesday, July 14, 2021 at 8:39:07 PM UTC-6, Quadibloc wrote:
> On Wednesday, July 14, 2021 at 7:53:27 PM UTC-6, Stefan Monnier wrote:

> > That seems irrelevant: the real problem is not how to move power away
> > from the chip, but how to reduce the power per unit of work so as to
> > last longer on the same battery, or so as to use a smaller battery and
> > make the device lighter.

> Irrelevant? The real problem is how to get the work *done*. If power consumption
> can be reduced, great. But failing that, removing more heat is the second-best
> way to allow more transistors to switch in a tiny space in a given time.
>
> The goal is... to solve problems. To find answers.

And, *of course*, even after one has reduced the amount of power
a processor needs a hundredfold, one _still_ can benefit from some method
of removing heat faster, which would allow a hundred times as many
processors to be packed into a tiny space so that they can communicate quickly.

There is no end to the problems that need solving.

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scocnm$i2u$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18782&group=comp.arch#18782

 by: Stephen Fuld - Thu, 15 Jul 2021 04:14 UTC

On 7/14/2021 4:56 PM, MitchAlsup wrote:
> On Wednesday, July 14, 2021 at 6:41:31 PM UTC-5, Stephen Fuld wrote:
>> On 7/14/2021 12:57 AM, Marcus wrote:
>>> On 2021-07-14 09:01, David Brown wrote:
>>>> On 14/07/2021 08:01, Marcus wrote:
>>>>> On 2021-07-13, Theo Markettos wrote:
>>>>>
>>>>> [snip]
>>>>>
>>>>>> The hardware is hugely worse in size and power efficiency than the
>>>>>> neural
>>>>>> network between our ears, but we pay that cost.
>>>>>>
>>>>>
>>>>> I personally think that the proper implementation of a power efficient
>>>>> neural network requires two things:
>>>>>
>>>>> 1) Memory (signals and weights) should be co-located with the ALU:s (or
>>>>> distributed across the compute matrix, if you will).
>>>>>
>>>>> 2) Compute cells should only be active when activated by a signal.
>>>>> Perhaps the design should not be clocked by a global clock at all?
>>>>>
>>>>
>>>> I agree on both accounts. (I haven't looked much at neural networks
>>>> since university, but I assume the principles haven't changed.)
>>>>
>>>> A biological neuron encompasses its own memory (weights), its own
>>>> processing, its own IO, its own learning system. To make really
>>>> powerful artificial neural networks, the component parts need that too.
>>>> Then you can scale the whole thing by adding more of the same.
>>>
>>> Exactly. You'll not be bounded by memory bandwidth or similar - it's a
>>> truly distributed system that should scale very well.
>> The problem is the interconnect. In true, i.e. biological, neural nets,
>> a neuron receives input from thousands of other (apparently, but not
>> really, random) neurons. The "wires", AKA axons are each insulated (by
>> Glial cells) to prevent cross talk. You can't replicate this at scale
>> in a silicon chip without thousands of layers of interconnect.
> <
> Lets make that thousands of layers of transistors ! You need to make both
> the transistors and the interconnect 3D.

Perhaps. But other approaches are possible. IBM has done several,
including an analog chip that uses charge stored in a capacitor, sort of
like a real neuron does:

https://research.ibm.com/publications/unassisted-true-analog-neural-network-training-chip

(though the paper is behind a paywall),

and their TrueNorth chip:

https://research.ibm.com/articles/brain-chip.shtml

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scoq23$mch$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18783&group=comp.arch#18783

 by: David Brown - Thu, 15 Jul 2021 08:02 UTC

On 15/07/2021 01:41, Stephen Fuld wrote:
> On 7/14/2021 12:57 AM, Marcus wrote:
>> On 2021-07-14 09:01, David Brown wrote:
>>> On 14/07/2021 08:01, Marcus wrote:
>>>> On 2021-07-13, Theo Markettos wrote:
>>>>
>>>> [snip]
>>>>
>>>>> The hardware is hugely worse in size and power efficiency than the
>>>>> neural
>>>>> network between our ears, but we pay that cost.
>>>>>
>>>>
>>>> I personally think that the proper implementation of a power efficient
>>>> neural network requires two things:
>>>>
>>>> 1) Memory (signals and weights) should be co-located with the ALU:s (or
>>>>     distributed across the compute matrix, if you will).
>>>>
>>>> 2) Compute cells should only be active when activated by a signal.
>>>>     Perhaps the design should not be clocked by a global clock at all?
>>>>
>>>
>>> I agree on both accounts.  (I haven't looked much at neural networks
>>> since university, but I assume the principles haven't changed.)
>>>
>>> A biological neuron encompasses its own memory (weights), its own
>>> processing, its own IO, its own learning system.  To make really
>>> powerful artificial neural networks, the component parts need that too.
>>>   Then you can scale the whole thing by adding more of the same.
>>
>> Exactly. You'll not be bounded by memory bandwidth or similar - it's a
>> truly distributed system that should scale very well.
>
> The problem is the interconnect.  In true, i.e. biological, neural nets,
> a neuron receives input from thousands of other (apparently, but not
> really, random) neurons.  The "wires", AKA axons are each insulated (by
> Glial cells) to prevent cross talk.  You can't replicate this at scale
> in a silicon chip without thousands of layers of interconnect.
>
>

That is a problem if you are trying to fully replicate biological neural
systems. But that is not a practical aim - at least, not for a long
time yet. In particular, current neural networks are based on layers,
as a compromise between the capability of the networks and our
understanding of algorithms to teach and tune them. I think you can
come a /long/ way with layers that have a lot of interconnects within
the layer, but only connect to adjacent layers rather than having
connections throughout the system.

When you look at biological neural networks, the great majority of
connections are quite local. The number of long-distance connections
drops rapidly with the distance. After all, the scaling, spacing,
power-management and heat management challenges in biology are not much
different from those in silicon. The key difference is that details of
where these connections are made can change somewhat in a biological
system, while they are fixed in silicon.

So you could make your artificial neural networks with a small number of
fixed long-distance connections, rather than trying to support arbitrary
connections across the network.
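
A minimal sketch of that connectivity pattern (illustrative Python/NumPy;
the network size, neighbourhood radius and number of long-range links are
arbitrary assumptions): each unit connects densely to its neighbours plus
a handful of fixed long-range links, so the fan-in stays bounded as the
network grows.

import numpy as np

rng = np.random.default_rng(0)
n = 1024          # units arranged on a ring, for simplicity (assumed)
radius = 8        # dense local neighbourhood
n_long = 4        # fixed long-distance links per unit

mask = np.zeros((n, n), dtype=bool)
idx = np.arange(n)
for offset in range(-radius, radius + 1):
    mask[idx, (idx + offset) % n] = True              # local connections
for i in range(n):
    mask[i, rng.integers(0, n, size=n_long)] = True   # few fixed long-range links

print(mask.sum() / n)   # average fan-in stays ~2*radius + a few, independent of n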

Re: wonderful compilers, or Someone's Trying Again (Ascenium)

<2021Jul15.103045@mips.complang.tuwien.ac.at>

https://www.novabbs.com/devel/article-flat.php?id=18784&group=comp.arch#18784

 by: Anton Ertl - Thu, 15 Jul 2021 08:30 UTC

MitchAlsup <MitchAlsup@aol.com> writes:
>On Tuesday, July 13, 2021 at 3:25:45 PM UTC-5, John Levine wrote:
>> The compiler did what it could to schedule memory references statically,
>> but once you could do that in hardware, dynamic hardware scheduling worked a lot better.
><
>Except for that power thing............

Including that power thing.

E.g., looking at

<https://images.anandtech.com/doci/14072/Exynos9820-Perf-Eff-Estimated.png>

we see that the OoO Cortex-A75 has better Perf/W than the in-order A55
as soon as you need more than 1.5 SPEC2006 Int+FP of performance, and
even if you need less, you fare hardly better.

Intel tried in-order for their low-power line (Atom) with Bonnell,
while AMD went for OoO with Bobcat. Bobcat is twice as fast per cycle
in my testing as Bonnell. Looking at dual-core chips with integrated
graphics, we see

core     process  chip         CPU clock  TDP
Bobcat   40nm     AMD C-70     1333 MHz    9 W
Bobcat   40nm     AMD E2-2000  1750 MHz   18 W
Bonnell  45nm     Atom D525    1833 MHz   13 W
Bonnell  32nm     Atom D2700   2133 MHz   10 W

Taking the 2x IPC advantage of Bobcat into account, Bobcat in 40nm in
a 9W power bracket outperforms Bonnell in 32nm in a 10W power bracket.
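
Spelling that comparison out numerically (a sketch that takes the 2x IPC
figure above at face value, uses IPC x clock as a crude performance proxy,
and treats TDP as a rough stand-in for actual power):

# Relative throughput per watt from the table above.
# Illustration only: IPC * clock is a coarse proxy, TDP is not measured power.
chips = [
    # (core, chip, relative IPC, clock MHz, TDP W)
    ("Bobcat",  "AMD C-70",    2.0, 1333,  9),
    ("Bobcat",  "AMD E2-2000", 2.0, 1750, 18),
    ("Bonnell", "Atom D525",   1.0, 1833, 13),
    ("Bonnell", "Atom D2700",  1.0, 2133, 10),
]
for core, chip, ipc, mhz, tdp in chips:
    perf = ipc * mhz
    print(f"{chip:12s} perf~{perf:6.0f}  perf/W~{perf / tdp:5.0f}")
# The 9 W Bobcat C-70 comes out around 2666 units (~296/W), while the 10 W
# Bonnell D2700 manages about 2133 (~213/W): in this rough model the OoO
# core wins on both absolute performance and performance per watt.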

Later Intel switched their low-power line to OoO with Silvermont, and
has stayed with OoO since.

Apple uses OoO for their energy-efficient cores (Icestorm in the A14).

The only ones who still seem to believe in in-order for
energy-efficient computing are ARM with their Cortex-A510.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: wonderful compilers, or Someone's Trying Again (Ascenium)

<5778ed73-6778-4e95-8b6d-322f1e54601bn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18785&group=comp.arch#18785

 by: Quadibloc - Thu, 15 Jul 2021 09:29 UTC

On Thursday, July 15, 2021 at 3:20:50 AM UTC-6, Anton Ertl wrote:

> Later Intel switched their low-power line to OoO with Silvermont, and
> has stayed with OoO since.

Couldn't that just be a consequence of process improvements?

A laptop has a certain power budget; it's bigger than that of a
smartphone.

So, at one point, OoO was only within the power budget of a desktop
computer. When it became possible to make an OoO processor with
a power budget suitable for laptops, of course this was faster than
perhaps having more cores that were in-order.

Today, we're at the point where processors in smartphones are
usually OoO. In-order is now used only for very small embedded
processors. This isn't because OoO doesn't need more transistors
and more power. It's because transistors got smaller and better,
so it was easier and easier to come up with the power that OoO
needed.

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<2021Jul15.112634@mips.complang.tuwien.ac.at>

https://www.novabbs.com/devel/article-flat.php?id=18786&group=comp.arch#18786

 by: Anton Ertl - Thu, 15 Jul 2021 09:26 UTC

MitchAlsup <MitchAlsup@aol.com> writes:
>This sounds a lot like what Stanford was trying to do (?experimenting?) on
>in 1992-ish time frame. Stanford ultimately got 1000W/sq-cm ±

<Heavy speculation>It seems to me that efforts like your AMD K9 and
Intel Tejas were designed with that cooling capacity in mind, and in
2005 it turned out to be not practically doable, and both projects
were canceled.</>

Currently, the power density of hot chips seems to be at maybe
200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
higher power density in various spots on the die.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scp3it$8mm$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18788&group=comp.arch#18788

 by: Thomas Koenig - Thu, 15 Jul 2021 10:44 UTC

Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

> Currently, the power density of hot chips seems to be at maybe
> 200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
> higher power density in various spots on the die.

That's a high energy density, but it is within the range that
can be removed by pool boiling, certainly of water, possibly
by refrigerants (especially if you have some space between
the chips).

Mechanical stresses on the delicate chips could be another matter,
though.
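
For reference, the classical Zuber estimate of the critical heat flux for
saturated pool boiling of water at 1 atm (a sketch; the property values are
textbook approximations) comes out near 110 W/cm^2, and pressurizing or
subcooling the pool raises that limit considerably, which is what makes the
~200 W/cm^2 chip figure look reachable:

# Zuber's correlation for critical heat flux in saturated pool boiling:
#   q_CHF = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
# Property values for water at 1 atm / 100 C (textbook approximations).
h_fg = 2.257e6      # latent heat of vaporization, J/kg
rho_l = 958.0       # liquid density, kg/m^3
rho_v = 0.598       # vapor density, kg/m^3
sigma = 0.0589      # surface tension, N/m
g = 9.81            # m/s^2

q_chf = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25
print(f"CHF ~ {q_chf / 1e4:.0f} W/cm^2")   # roughly 110 W/cm^2 at 1 atm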

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scp40d$h2v$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18789&group=comp.arch#18789

 by: Ivan Godard - Thu, 15 Jul 2021 10:51 UTC

On 7/15/2021 3:44 AM, Thomas Koenig wrote:
> Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
>
>> Currently, the power density of hot chips seems to be at maybe
>> 200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
>> higher power density in various spots on the die.
>
> That's a high enery density, but it is within the range that
> can be removed by pool boiling, certainly of water, possibly
> by refrigerants (especially if you have some space between
> the chips).
>
> Mechanical stresses on the delicate chips could be another matter,
> though.
>

Did you used to work on pressurized water reactors?

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scp4o2$9ii$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18790&group=comp.arch#18790

 by: Thomas Koenig - Thu, 15 Jul 2021 11:04 UTC

Ivan Godard <ivan@millcomputing.com> schrieb:
> On 7/15/2021 3:44 AM, Thomas Koenig wrote:
>> Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
>>
>>> Currently, the power density of hot chips seems to be at maybe
>>> 200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
>>> higher power density in various spots on the die.
>>
>> That's a high enery density, but it is within the range that
>> can be removed by pool boiling, certainly of water, possibly
>> by refrigerants (especially if you have some space between
>> the chips).
>>
>> Mechanical stresses on the delicate chips could be another matter,
>> though.
>>

> Did you used to work on pressurized water reactors?

No, but my diploma thesis was in the field of pool boiling,
and the subject of how much heat you can remove by boiling
(a.k.a. the critical heat flux) is a standard topic in studying
chemical engineering.

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scp5dt$61e$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18791&group=comp.arch#18791

 by: Ivan Godard - Thu, 15 Jul 2021 11:16 UTC

On 7/15/2021 4:04 AM, Thomas Koenig wrote:
> Ivan Godard <ivan@millcomputing.com> schrieb:
>> On 7/15/2021 3:44 AM, Thomas Koenig wrote:
>>> Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
>>>
>>>> Currently, the power density of hot chips seems to be at maybe
>>>> 200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
>>>> higher power density in various spots on the die.
>>>
>>> That's a high enery density, but it is within the range that
>>> can be removed by pool boiling, certainly of water, possibly
>>> by refrigerants (especially if you have some space between
>>> the chips).
>>>
>>> Mechanical stresses on the delicate chips could be another matter,
>>> though.
>>>
>
>> Did you used to work on pressurized water reactors?
>
> No, but my diploma thesis was in the field of pool boiling,
> and the subject of how much heat you can remove by boiling
> (a.k.a. the critical heat flux) is a standard topic in studying
> chemical engineering.
>

My second guess was superheaters in steam locomotives :-)

Re: Someone's Trying Again (Ascenium)

<2cg0fgttpehhb2aeaimsc9359f5llskhcq@4ax.com>

https://www.novabbs.com/devel/article-flat.php?id=18796&group=comp.arch#18796

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: gneun...@comcast.net (George Neuner)
Newsgroups: comp.arch
Subject: Re: Someone's Trying Again (Ascenium)
Date: Thu, 15 Jul 2021 10:32:10 -0400
Organization: A noiseless patient Spider
Lines: 45
Message-ID: <2cg0fgttpehhb2aeaimsc9359f5llskhcq@4ax.com>
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com> <scjnv6$jq7$1@dont-email.me> <scjt80$mtc$1@dont-email.me> <sck00m$a4r$1@dont-email.me> <f4c3383d-f709-4662-a43d-5ec556e0df49n@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Injection-Info: reader02.eternal-september.org; posting-host="5ea153c762e852b9b4e3b54f25dc6b34";
logging-data="26922"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+SJ7+7+kGuR1tzFqQxxoEudejGqjbfeFc="
User-Agent: ForteAgent/8.00.32.1272
Cancel-Lock: sha1:etE7rc9vYmp8JVPTSbW26Q1W/O0=
 by: George Neuner - Thu, 15 Jul 2021 14:32 UTC

On Tue, 13 Jul 2021 14:58:48 -0700 (PDT), Quadibloc
<jsavard@ecn.ab.ca> wrote:

>On Tuesday, July 13, 2021 at 6:13:13 AM UTC-6, David Brown wrote:
>
>> What seems to be missing here (at least, /I/ have missed it) is a
>> discussion about what people actually want to do with computing power.
>> Is it fair to suggest that most tasks that people currently want to run
>> faster, are actually reasonably parallel? And that most tasks that
>> people want to run a /lot/ faster are /very/ parallel?
>
>Unfortunately, no, it isn't.
>
>Many of the tasks that people want to run faster are parallel, but some of
>them are not.
>
>Also, it's at least possible that *some* of the tasks people want to run
>faster... are perhaps more parallel than people realize, and some new
>programming paradigm might enable this parallelism to be brought to
>light. At least that's the hope that fuels attempts to design compilers
>that will bring out extra parallelism that's not obvious to human programmers.
>
>John Savard

The problem - at least with current hardware - is that programmers are
much better at identifying what CAN be done in parallel than what
SHOULD be done in parallel.

Starting scads of threads, many (or most) of which will end up blocked
due to lack of memory bandwidth to feed the processor(s), is not a
good idea.
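
A crude roofline-style check makes the point; the peak numbers below
are made-up round figures for illustration, not any particular machine:

# Roofline sketch: attainable throughput is capped by either peak compute
# or (memory bandwidth x arithmetic intensity), whichever is lower.
# Peak numbers are invented round figures, purely for illustration.

peak_flops = 500e9     # aggregate peak compute, FLOP/s (all cores)
peak_bw    = 50e9      # sustained DRAM bandwidth, bytes/s

def attainable(flops_per_byte):
    return min(peak_flops, peak_bw * flops_per_byte)

# daxpy-like loop y[i] += a*x[i]: 2 FLOPs per 24 bytes of traffic
ai = 2 / 24
print(f"attainable: {attainable(ai) / 1e9:.1f} GFLOP/s "
      f"out of {peak_flops / 1e9:.0f} peak")   # ~4 GFLOP/s: bandwidth-bound

Once a loop sits on the bandwidth roof, adding threads (or cores) just
adds waiting.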

I really like Mitch's VVM. I'm primarily a software guy, but from what
I've understood of it, VVM seems to address a number of the problems
that plague auto-vectorization. Waiting for actual hardware. 8-)

But vectorization is just one aspect of parallelism. There's quite a
lot of micro-thread parallelism inherent in many programs, but getting
compilers to extract it is not easy, and current hardware really is
not designed with micro-threads in mind. Watching the Mill and hoping
it succeeds.

YMMV,
George

Re: Someone's Trying Again (Ascenium)

<17c4ba0c-c855-4b60-9f65-0d0252307a74n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18798&group=comp.arch#18798

Newsgroups: comp.arch
X-Received: by 2002:ac8:7608:: with SMTP id t8mr4574368qtq.246.1626361859217;
Thu, 15 Jul 2021 08:10:59 -0700 (PDT)
X-Received: by 2002:a9d:3b0:: with SMTP id f45mr4319675otf.5.1626361858974;
Thu, 15 Jul 2021 08:10:58 -0700 (PDT)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!proxad.net!feeder1-2.proxad.net!209.85.160.216.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 15 Jul 2021 08:10:58 -0700 (PDT)
In-Reply-To: <2cg0fgttpehhb2aeaimsc9359f5llskhcq@4ax.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2001:56a:fa3c:a000:a988:4b1e:7ee4:3073;
posting-account=1nOeKQkAAABD2jxp4Pzmx9Hx5g9miO8y
NNTP-Posting-Host: 2001:56a:fa3c:a000:a988:4b1e:7ee4:3073
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com>
<scjnv6$jq7$1@dont-email.me> <scjt80$mtc$1@dont-email.me> <sck00m$a4r$1@dont-email.me>
<f4c3383d-f709-4662-a43d-5ec556e0df49n@googlegroups.com> <2cg0fgttpehhb2aeaimsc9359f5llskhcq@4ax.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <17c4ba0c-c855-4b60-9f65-0d0252307a74n@googlegroups.com>
Subject: Re: Someone's Trying Again (Ascenium)
From: jsav...@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 15 Jul 2021 15:10:59 +0000
Content-Type: text/plain; charset="UTF-8"
 by: Quadibloc - Thu, 15 Jul 2021 15:10 UTC

On Thursday, July 15, 2021 at 8:32:15 AM UTC-6, George Neuner wrote:

> Starting scads of threads, many (or most) of which will end up blocked
> due to lack of memory bandwidth to feed the processor(s), is not a
> good idea.

That's true. But you can buy bandwidth, while latency tends to be a hard
limit. So that isn't a fatal obstacle to doing things faster, whereas not
knowing any way to do things in parallel would be.
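
Little's law puts a number on that trade: concurrency = bandwidth x
latency, so the bandwidth you buy only pays off if enough requests can
be kept in flight across the latency you are stuck with. A sketch with
purely illustrative numbers:

# Little's law: outstanding requests = bandwidth * latency.
# Latency and line size below are illustrative round numbers.

latency_s = 80e-9      # ~80 ns to DRAM
line      = 64         # bytes per cache line

for bw in (25e9, 50e9, 100e9):               # bytes/s
    in_flight = bw * latency_s / line        # cache-line misses in flight
    print(f"{bw / 1e9:3.0f} GB/s needs ~{in_flight:.0f} misses in flight")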

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scpj90$ntd$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18799&group=comp.arch#18799

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Power efficient neural networks (was: Someone's Trying Again
(Ascenium))
Date: Thu, 15 Jul 2021 08:12:26 -0700
Organization: A noiseless patient Spider
Lines: 90
Message-ID: <scpj90$ntd$1@dont-email.me>
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com>
<scjnv6$jq7$1@dont-email.me> <scjt80$mtc$1@dont-email.me>
<sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk>
<scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me>
<scm5d0$49e$1@dont-email.me> <scnsn9$63f$1@dont-email.me>
<scoq23$mch$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 15 Jul 2021 15:12:32 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="f6a18f9413aa6bcbe4eaa207b146ef37";
logging-data="24493"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/9D0Pn955e2VUnOfiJMa7SGMUkp9X0iok="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
Thunderbird/78.12.0
Cancel-Lock: sha1:H/FazsQIFx8+FbadmqkufQmsCmo=
In-Reply-To: <scoq23$mch$1@dont-email.me>
Content-Language: en-US
 by: Stephen Fuld - Thu, 15 Jul 2021 15:12 UTC

On 7/15/2021 1:02 AM, David Brown wrote:
> On 15/07/2021 01:41, Stephen Fuld wrote:
>> On 7/14/2021 12:57 AM, Marcus wrote:
>>> On 2021-07-14 09:01, David Brown wrote:
>>>> On 14/07/2021 08:01, Marcus wrote:
>>>>> On 2021-07-13, Theo Markettos wrote:
>>>>>
>>>>> [snip]
>>>>>
>>>>>> The hardware is hugely worse in size and power efficiency than the
>>>>>> neural
>>>>>> network between our ears, but we pay that cost.
>>>>>>
>>>>>
>>>>> I personally think that the proper implementation of a power efficient
>>>>> neural network requires two things:
>>>>>
>>>>> 1) Memory (signals and weights) should be co-located with the ALU:s (or
>>>>>     distributed across the compute matrix, if you will).
>>>>>
>>>>> 2) Compute cells should only be active when activated by a signal.
>>>>>     Perhaps the design should not be clocked by a global clock at all?
>>>>>
>>>>
>>>> I agree on both accounts.  (I haven't looked much at neural networks
>>>> since university, but I assume the principles haven't changed.)
>>>>
>>>> A biological neuron encompasses its own memory (weights), its own
>>>> processing, its own IO, its own learning system.  To make really
>>>> powerful artificial neural networks, the component parts need that too.
>>>>   Then you can scale the whole thing by adding more of the same.
>>>
>>> Exactly. You'll not be bounded by memory bandwidth or similar - it's a
>>> truly distributed system that should scale very well.
>>
>> The problem is the interconnect.  In true, i.e. biological, neural nets,
>> a neuron receives input from thousands of other (apparently, but not
>> really, random) neurons.  The "wires", AKA axons are each insulated (by
>> Glial cells) to prevent cross talk.  You can't replicate this at scale
>> in a silicon chip without thousands of layers of interconnect.
>>
>>
>
> That is a problem if you are trying to fully replicate biological neural
> systems. But that is not a practical aim - at least, not for a long
> time yet.

Certainly true for humans. Getting closer for flies! :-)

> In particular, current neural networks are based on layers,
> as a compromise between the capability of the networks and our
> understanding of algorithms to teach and tune them. I think you can
> come a /long/ way with layers that have a lot of interconnects within
> the layer, but only connect to adjacent layers rather than having
> connections throughout the system.

"long way" toward what goal? If you want to do engineering, that is, do
useful things, then I certainly agree. If you want to do science, that
is, investigate more realistic models to better understand how brains
work, then not so much. (I don't mean to imply that science is useless
- far from it.)

> When you look at biological neural networks, the great majority of
> connections are quite local. The number of long-distance connections
> drops rapidly with the distance. After all, the scaling, spacing,
> power-management and heat management challenges in biology are not much
> different from those in silicon. The key difference is that details of
> where these connections are made can change somewhat in a biological
> system, while they are fixed in silicon.

Yes. Silicon systems emulate the changing connections by having lots of
"excess" connections, many of which have weights such that they are
essentially never used. By changing the weights, you emulate making new
connections and breaking old ones. But this means you still have to
provide more long-distance connections than are "in use" at any
particular time.
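
A toy sketch of that trade-off: wire up one layer to the next with a
connection probability that falls off with distance, and count how many
long-distance wires end up provisioned (all parameters illustrative):

import numpy as np

# Layer-to-layer connection mask where the probability of a wire falls
# off exponentially with the distance between unit positions, so most
# wiring is local and only a few long-distance links remain.

rng = np.random.default_rng(0)
n = 256                                        # units per layer
pos = np.arange(n)
dist = np.abs(pos[:, None] - pos[None, :])     # |i - j| across the layers
p_connect = np.exp(-dist / 8.0)                # locality scale ~8 units
mask = rng.random((n, n)) < p_connect          # fixed sparse wiring

local = int((mask & (dist <= 16)).sum())
far   = int((mask & (dist > 16)).sum())
print(f"{int(mask.sum())} wires: {local} local, {far} long-distance")

Most of the provisioned wires are local; the handful of long ones are
the "excess" capacity that sits idle until the weights make them matter.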

> So you could make your artificial neural networks with a small number of
> fixed long-distance connections, rather than trying to support arbitrary
> connections across the network.

Works within the limitations implied by the above.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scpk6s$l9j$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18800&group=comp.arch#18800

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!aioe.org!news.uzoreto.com!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-ffcf-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de!not-for-mail
From: tkoe...@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Power efficient neural networks (was: Someone's Trying Again
(Ascenium))
Date: Thu, 15 Jul 2021 15:28:28 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <scpk6s$l9j$1@newsreader4.netcologne.de>
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com>
<sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk>
<scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me>
<scm5d0$49e$1@dont-email.me> <scnsn9$63f$1@dont-email.me>
<ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com>
<2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com>
<eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com>
<3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com>
<2021Jul15.112634@mips.complang.tuwien.ac.at>
<scp3it$8mm$1@newsreader4.netcologne.de> <scp40d$h2v$1@dont-email.me>
<scp4o2$9ii$1@newsreader4.netcologne.de> <scp5dt$61e$1@dont-email.me>
Injection-Date: Thu, 15 Jul 2021 15:28:28 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-ffcf-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de:2001:4dd7:ffcf:0:7285:c2ff:fe6c:992d";
logging-data="21811"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Thu, 15 Jul 2021 15:28 UTC

Ivan Godard <ivan@millcomputing.com> schrieb:
> On 7/15/2021 4:04 AM, Thomas Koenig wrote:
>> Ivan Godard <ivan@millcomputing.com> schrieb:
>>> On 7/15/2021 3:44 AM, Thomas Koenig wrote:
>>>> Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
>>>>
>>>>> Currently, the power density of hot chips seems to be at maybe
>>>>> 200W/cm^2 (140W for a Ryzen 3800XT with 0.7cm^2 chiplet area), with
>>>>> higher power density in various spots on the die.
>>>>
>>>> That's a high energy density, but it is within the range that
>>>> can be removed by pool boiling, certainly of water, possibly
>>>> by refrigerants (especially if you have some space between
>>>> the chips).
>>>>
>>>> Mechanical stresses on the delicate chips could be another matter,
>>>> though.
>>>>
>>
>>> Did you used to work on pressurized water reactors?
>>
>> No, but my diploma thesis was in the field of pool boiling,
>> and the subject of how much heat you can remove by boiling
>> (a.k.a. the critical heat flux) is a standard topic in studying
>> chemical engineering.
>>
>
> My second guess was superheaters in steam locomotives :-)

My name is not, and has never been, David Wardale :-) (who, AFAIK,
was the last person to do serious steam locomotive engineering).

#ifdef PEDANTIC

Superheaters do what the name says: they superheat steam.
When steam leaves a boiler, it is in equilibrium with the liquid it
is in close contact with, i.e. it is saturated. A superheater is a
heat exchanger which raises the steam temperature further. This is
a pure gas-phase heat exchanger, with much lower heat transfer
coefficients, but also with a much lower heat load, so that is not
a big problem.

So, you could in principle use a boiling liquid for cooling
computer chips if you can solve the mechanical and other assorted
problems, but using a steam superheater would make little sense.

#endif

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<4c82f3ac-ec7b-4fbd-9444-25fd1de7c84cn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18804&group=comp.arch#18804

Newsgroups: comp.arch
X-Received: by 2002:a37:668f:: with SMTP id a137mr4922323qkc.481.1626369090677; Thu, 15 Jul 2021 10:11:30 -0700 (PDT)
X-Received: by 2002:a9d:491c:: with SMTP id e28mr4528690otf.342.1626369090454; Thu, 15 Jul 2021 10:11:30 -0700 (PDT)
Path: i2pn2.org!i2pn.org!aioe.org!feeder1.feed.usenet.farm!feed.usenet.farm!tr3.eu1.usenetexpress.com!feeder.usenetexpress.com!tr1.iad1.usenetexpress.com!border1.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 15 Jul 2021 10:11:30 -0700 (PDT)
In-Reply-To: <scpk6s$l9j$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=2001:56a:fa3c:a000:a988:4b1e:7ee4:3073; posting-account=1nOeKQkAAABD2jxp4Pzmx9Hx5g9miO8y
NNTP-Posting-Host: 2001:56a:fa3c:a000:a988:4b1e:7ee4:3073
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com> <sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk> <scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me> <scm5d0$49e$1@dont-email.me> <scnsn9$63f$1@dont-email.me> <ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com> <2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com> <eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com> <3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com> <2021Jul15.112634@mips.complang.tuwien.ac.at> <scp3it$8mm$1@newsreader4.netcologne.de> <scp40d$h2v$1@dont-email.me> <scp4o2$9ii$1@newsreader4.netcologne.de> <scp5dt$61e$1@dont-email.me> <scpk6s$l9j$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <4c82f3ac-ec7b-4fbd-9444-25fd1de7c84cn@googlegroups.com>
Subject: Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))
From: jsav...@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 15 Jul 2021 17:11:30 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 12
 by: Quadibloc - Thu, 15 Jul 2021 17:11 UTC

On Thursday, July 15, 2021 at 9:28:30 AM UTC-6, Thomas Koenig wrote:

> So, you could in principle use a boiling liquid for cooling
>> computer chips if you can solve the mechanical and other assorted
> problems, but using a steam superheater would make little sense.

Yes; even if one uses a working fluid with a sufficiently low boiling
point to be helpful, the point is that one is no longer benefiting from
the large latent heat of condensation. Just like the latent heat of
freezing makes ice so useful in cooling things (but sadly not computer
chips, as the solid phase is inconvenient to move around).

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<35b5e227-1d22-40cf-9832-00ab33d70da1n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18806&group=comp.arch#18806

Newsgroups: comp.arch
X-Received: by 2002:ac8:7f01:: with SMTP id f1mr4992670qtk.362.1626369466234;
Thu, 15 Jul 2021 10:17:46 -0700 (PDT)
X-Received: by 2002:a05:6830:3108:: with SMTP id b8mr4644621ots.182.1626369466036;
Thu, 15 Jul 2021 10:17:46 -0700 (PDT)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!proxad.net!feeder1-2.proxad.net!209.85.160.216.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 15 Jul 2021 10:17:45 -0700 (PDT)
In-Reply-To: <scpk6s$l9j$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=2001:56a:fa3c:a000:a988:4b1e:7ee4:3073;
posting-account=1nOeKQkAAABD2jxp4Pzmx9Hx5g9miO8y
NNTP-Posting-Host: 2001:56a:fa3c:a000:a988:4b1e:7ee4:3073
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com>
<sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk>
<scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me> <scm5d0$49e$1@dont-email.me>
<scnsn9$63f$1@dont-email.me> <ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com>
<2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com> <eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com>
<3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com> <2021Jul15.112634@mips.complang.tuwien.ac.at>
<scp3it$8mm$1@newsreader4.netcologne.de> <scp40d$h2v$1@dont-email.me>
<scp4o2$9ii$1@newsreader4.netcologne.de> <scp5dt$61e$1@dont-email.me> <scpk6s$l9j$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <35b5e227-1d22-40cf-9832-00ab33d70da1n@googlegroups.com>
Subject: Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))
From: jsav...@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 15 Jul 2021 17:17:46 +0000
Content-Type: text/plain; charset="UTF-8"
 by: Quadibloc - Thu, 15 Jul 2021 17:17 UTC

On Thursday, July 15, 2021 at 9:28:30 AM UTC-6, Thomas Koenig wrote:

> So, you could in principle use a boiling liquid for cooling
>> computer chips if you can solve the mechanical and other assorted
> problems,

Isn't that what heat pipes use *already*?

John Savard

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<scpstk$rm3$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18810&group=comp.arch#18810

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-ffcf-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de!not-for-mail
From: tkoe...@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Power efficient neural networks (was: Someone's Trying Again
(Ascenium))
Date: Thu, 15 Jul 2021 17:57:08 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <scpstk$rm3$1@newsreader4.netcologne.de>
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com>
<sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk>
<scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me>
<scm5d0$49e$1@dont-email.me> <scnsn9$63f$1@dont-email.me>
<ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com>
<2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com>
<eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com>
<3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com>
<2021Jul15.112634@mips.complang.tuwien.ac.at>
<scp3it$8mm$1@newsreader4.netcologne.de> <scp40d$h2v$1@dont-email.me>
<scp4o2$9ii$1@newsreader4.netcologne.de> <scp5dt$61e$1@dont-email.me>
<scpk6s$l9j$1@newsreader4.netcologne.de>
<35b5e227-1d22-40cf-9832-00ab33d70da1n@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 15 Jul 2021 17:57:08 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-ffcf-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de:2001:4dd7:ffcf:0:7285:c2ff:fe6c:992d";
logging-data="28355"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Thu, 15 Jul 2021 17:57 UTC

Quadibloc <jsavard@ecn.ab.ca> schrieb:
> On Thursday, July 15, 2021 at 9:28:30 AM UTC-6, Thomas Koenig wrote:
>
>> So, you could in principle use a boiling liquid for cooling
>> computer chips if you can solve the mechanical and other assorted
>> problems,
>
> Isn't that what heat pipes use *already*?

Yep, but I was thinking of direct contact of the chips with the
boiling liquid (with a thin insulating layer, presumably).

_Much_ more efficient. The nice thing is that your temperature
stays pretty much constant as long as there is enough liquid -
no hot edges.

Another nice property is that, if your chips run at 70°C (let's
say) and your vapor comes out of the system at 60°C, the vapor is
easy and cheap to condense - even cooling water at 45°C can do it.

Water would be ideal because of its high enthalpy of vaporization
and because you can realize the highest heat fluxes with it.
It is also non-toxic.

However, it has some unpleasant properties for electronics, such
as being rather conductive with only trace amounts of ions needed,
so that is probably out. You would also have to run it in a slight
vacuum to get below 100°C, which could be problematic.
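
To put a number on that "slight vacuum", the Antoine equation for water
(constants for the 1-100 °C range; a quick sketch, not a design
calculation):

# Saturation pressure of water via the Antoine equation; constants are
# the usual 1-100 degC set, with pressure converted from mmHg to bar.

def p_sat_water_bar(t_c):
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_c))
    return p_mmhg * 1.33322e-3      # mmHg -> bar

for t in (60, 70, 100):
    print(f"{t:3d} degC: {p_sat_water_bar(t):.2f} bar")   # ~0.20, 0.31, 1.01

So boiling water at 60°C means holding the whole bath at roughly a
fifth of an atmosphere.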

So, another liquid would be called for, preferably something legal,
moral and non-fattening, with a boiling point of around 60°C, or
maybe a bit lower.

Hmm... a bit of a search turns up a Wikipedia article on
https://en.wikipedia.org/wiki/Novec_649/1230 which cites a source
that this has already been tried by Intel and SGI, so the
idea isn't new (and frankly, I would have been surprised if it was).

https://multimedia.3m.com/mws/media/569865O/3m-novec-engineered-fluid-649.pdf
tells me that the fluid they used has a rather low enthalpy of
vaporization, only 88 kJ/kg. That's not so great: water has about
2100 kJ/kg and many other organic liquids around 300 kJ/kg.
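
Using those enthalpy-of-vaporization figures, a quick sketch of what the
difference means for the coolant you have to boil off per kW of heat
(pure latent-heat accounting; sensible heat and everything mechanical
ignored):

# Coolant that must evaporate per kW of chip heat, counting only the
# latent heat of vaporization.  h_fg values are the ones cited above.

heat_w = 1000.0                                   # 1 kW of chips

for name, h_fg in (("water", 2.1e6),              # J/kg (~2100 kJ/kg)
                   ("typical organic", 3.0e5),    # ~300 kJ/kg
                   ("Novec 649", 8.8e4)):         # ~88 kJ/kg
    kg_s = heat_w / h_fg
    print(f"{name:16s}: {kg_s * 1000:5.1f} g/s ({kg_s * 3600:6.1f} kg/h)")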

Soo... maybe use better insulation, put the chips in stacks and
build it all up like a plate heat exchanger.

(And no, I'm not 100% serious.)

Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))

<15a3b01a-24a2-4e95-bfb7-8fb106eac501n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18811&group=comp.arch#18811

Newsgroups: comp.arch
X-Received: by 2002:ac8:665a:: with SMTP id j26mr5302368qtp.254.1626372828865; Thu, 15 Jul 2021 11:13:48 -0700 (PDT)
X-Received: by 2002:a05:6808:14c8:: with SMTP id f8mr4660962oiw.7.1626372828672; Thu, 15 Jul 2021 11:13:48 -0700 (PDT)
Path: i2pn2.org!i2pn.org!aioe.org!feeder1.feed.usenet.farm!feed.usenet.farm!tr3.eu1.usenetexpress.com!feeder.usenetexpress.com!tr3.iad1.usenetexpress.com!border1.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 15 Jul 2021 11:13:48 -0700 (PDT)
In-Reply-To: <scpstk$rm3$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:291:29f0:95b1:3c6f:d12f:872c; posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 2600:1700:291:29f0:95b1:3c6f:d12f:872c
References: <8945b42b-133f-4ba7-a8a7-de165de183c4n@googlegroups.com> <sck00m$a4r$1@dont-email.me> <wXB*Pr2oy@news.chiark.greenend.org.uk> <scluiv$g9h$1@dont-email.me> <scm256$s8d$1@dont-email.me> <scm5d0$49e$1@dont-email.me> <scnsn9$63f$1@dont-email.me> <ea506cd8-f33d-40a1-982f-c982ebc65577n@googlegroups.com> <2570ff06-15f5-4490-b375-788a1f9ecce9n@googlegroups.com> <eac2ea6d-6a61-4809-9c5d-885682486a59n@googlegroups.com> <3dd634e5-0a4c-4b46-9da1-f41a01b9fba6n@googlegroups.com> <2021Jul15.112634@mips.complang.tuwien.ac.at> <scp3it$8mm$1@newsreader4.netcologne.de> <scp40d$h2v$1@dont-email.me> <scp4o2$9ii$1@newsreader4.netcologne.de> <scp5dt$61e$1@dont-email.me> <scpk6s$l9j$1@newsreader4.netcologne.de> <35b5e227-1d22-40cf-9832-00ab33d70da1n@googlegroups.com> <scpstk$rm3$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <15a3b01a-24a2-4e95-bfb7-8fb106eac501n@googlegroups.com>
Subject: Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium))
From: MitchAl...@aol.com (MitchAlsup)
Injection-Date: Thu, 15 Jul 2021 18:13:48 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 56
 by: MitchAlsup - Thu, 15 Jul 2021 18:13 UTC

On Thursday, July 15, 2021 at 12:57:10 PM UTC-5, Thomas Koenig wrote:
> Quadibloc <jsa...@ecn.ab.ca> schrieb:
> > On Thursday, July 15, 2021 at 9:28:30 AM UTC-6, Thomas Koenig wrote:
> >
> >> So, you could in principle use a boiling liquid for cooling
> >> computer chips if you can solve the mechanical and other assorted
> >> problems,
> >
> > Isn't that what heat pipes use *already*?
> Yep, but I was thinking of direct contact of the chips with the
> boiling liquid (with a thin insulating layer, presumably).
>
> _Much_ more efficient. The nice thing is that your temperature
> stays pretty much constant as long as there is enough liquid -
> no hot edges.
<
And a bit more dangerous. Boiling a liquid requires the liquid to
create bubbles (of the gas), and these bubbles invariably form at
the solid-liquid interface. These microscopic bubbles can lift the
solid surface one atom at a time, causing the surface to degrade
over time--limiting lifetime.
<
Non-boiling heat transfer has much longer actual lifetimes.
>
> Another nice property is that, if your chips run at 70°C (let's
> say) and your vapor comes out the system at 60°C, the vapor is
> easy and cheap to condense - even cooling water at 45°C can do it.
>
> Water would be ideal because of its high enthalpy of vaporization
> and because you can realize the highest heat fluxes with it.
> It is also non-toxic.
>
> However, it has some unpleasant properties for electronics, such
> as being rather conductive with only trace amount of ions needed,
> so that is probably out. You would also have to run it in a slight
> vacuum to get below 100°C, which could be problematic.
>
> So, another liquid would be called for, preferably something legal,
> moral and non-fattening, with a boiling point of around 60°C, or
> maybe a bit lower.
>
> Hmm... a bit of a search turns up a Wikipedia article on
> https://en.wikipedia.org/wiki/Novec_649/1230 which cites a source
> that this has already been tried by Intel and SGI, so the
> idea isn't new (and frankly, I would have been surprised if it was).
>
> https://multimedia.3m.com/mws/media/569865O/3m-novec-engineered-fluid-649.pdf
> tells me that the fluid they used has a rather low enthalpy of
> vaporization, only 88 kJ/kg. That's not so great, water has
> 2100 kJ/kg and many other organic liquids have around 300.
>
> Soo... maybe do a better insulation, put the chips on stacks and
> build it all up like a plate heat exchanger.
>
> (And no, I'm not 100% serious.)
