comp.arch / Neural Network Accelerators

Subject  Author
* Neural Network Accelerators  robf...@gmail.com
`* Re: Neural Network Accelerators  Stephen Fuld
 +- Re: Neural Network Accelerators  JohnG
 `* Re: Neural Network Accelerators  Terje Mathisen
  `* Re: Neural Network Accelerators  JimBrakefield
   `* Re: Neural Network Accelerators  MitchAlsup
    +* Re: Neural Network Accelerators  EricP
    |+* Re: Neural Network Accelerators  EricP
    ||`* Re: Neural Network Accelerators  Ivan Godard
    || +* Re: Neural Network Accelerators  Scott Smader
    || |`* Re: Neural Network Accelerators  Ivan Godard
    || | +- Re: Neural Network Accelerators  Scott Smader
    || | `* Re: Neural Network Accelerators  MitchAlsup
    || |  `- Re: Neural Network Accelerators  Ivan Godard
    || `* Re: Neural Network Accelerators  EricP
    ||  `* Re: Neural Network Accelerators  Scott Smader
    ||   `* Re: Neural Network Accelerators  EricP
    ||    +* Re: Neural Network Accelerators  MitchAlsup
    ||    |+* Re: Neural Network Accelerators  Terje Mathisen
    ||    ||`* Re: Neural Network Accelerators  Thomas Koenig
    ||    || +- Re: Neural Network Accelerators  MitchAlsup
    ||    || +- Re: Neural Network Accelerators  Stefan Monnier
    ||    || `- Re: Neural Network Accelerators  Terje Mathisen
    ||    |+- Re: Neural Network Accelerators  Stephen Fuld
    ||    |`* Re: Neural Network Accelerators  EricP
    ||    | `- Re: Neural Network Accelerators  BGB
    ||    `* Re: Neural Network Accelerators  EricP
    ||     +* Re: Neural Network Accelerators  Scott Smader
    ||     |`* Re: Neural Network Accelerators  EricP
    ||     | +* Re: Neural Network Accelerators  Stephen Fuld
    ||     | |`* Re: Neural Network Accelerators  EricP
    ||     | | +- Re: Neural Network Accelerators  Scott Smader
    ||     | | +- Re: Neural Network Accelerators  Scott Smader
    ||     | | `* Re: Neural Network Accelerators  Stephen Fuld
    ||     | |  `* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   +* Re: Neural Network Accelerators  EricP
    ||     | |   |`* Re: Neural Network Accelerators  EricP
    ||     | |   | +* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |+* Re: Neural Network Accelerators  Terje Mathisen
    ||     | |   | ||`- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |`* Re: Neural Network Accelerators  EricP
    ||     | |   | | `* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |  `* Re: Neural Network Accelerators  robf...@gmail.com
    ||     | |   | |   +* Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   |`* Re: Neural Network Accelerators  Ivan Godard
    ||     | |   | |   | +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   | +- Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   | +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   | `- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   +- Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   `- Re: Neural Network Accelerators  Sean O'Connor
    ||     | |   | `- Re: Neural Network Accelerators  Thomas Koenig
    ||     | |   `* Re: Neural Network Accelerators  Thomas Koenig
    ||     | |    +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |    `- Re: Neural Network Accelerators  JimBrakefield
    ||     | `* Re: Neural Network Accelerators  Scott Smader
    ||     |  `* Re: Neural Network Accelerators  EricP
    ||     |   `* Re: Neural Network Accelerators  Yoga Man
    ||     |    `- Re: Neural Network Accelerators  Scott Smader
    ||     `- Re: Neural Network Accelerators  Ivan Godard
    |+- Re: Neural Network Accelerators  EricP
    |+- Re: Neural Network Accelerators  JimBrakefield
    |`- Re: Neural Network Accelerators  Stephen Fuld
    `* Re: Neural Network Accelerators  BGB
     `* Re: Neural Network Accelerators  MitchAlsup
      `- Re: Neural Network Accelerators  BGB

Pages: 1 2 3
Neural Network Accelerators

<8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21959&group=comp.arch#21959

Newsgroups: comp.arch
 by: robf...@gmail.com - Fri, 12 Nov 2021 06:38 UTC

For the Thor 2021 version of the core, neural network accelerator
instructions are included. Most of the instructions are for loading values
into arrays, like the inputs or weights array, or variables, like the bias
value. The neural network performs computations when triggered by software.
The network runs asynchronously and has a status register used to
indicate completion of a computation cycle. Eight neurons all compute
at the same time, resulting in a sigmoid output for each neuron.
Thor’s neural network consists of eight neurons computing using 16.16
fixed-point arithmetic. Each neuron may have up to 1024 inputs.
Activation values are computed serially with a fixed-point
multiply-accumulate operation.
While there are only eight neurons in a single layer, multiple layers may
be built up by reading the output levels and using them as inputs for the
next round of calculations. The input / weights array may be partitioned
so that only part of it is used in any one calculation cycle.
I was wondering if there were any processors supporting neural networks
that I could study?
I had thought the network might be better as a memory-mapped I/O device.
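
A minimal sketch of how software might drive such a unit as a memory-mapped
I/O device. The register layout, base address, and names below are
illustrative assumptions, not Thor's actual interface:

#include <stdint.h>

/* Hypothetical register map; all offsets and names are made up. */
typedef volatile struct {
    uint32_t ctrl;        /* write 1 to trigger a computation cycle   */
    uint32_t status;      /* bit 0 reads 1 when the cycle is complete */
    uint32_t bias;        /* bias value, 16.16 fixed point            */
    uint32_t out[8];      /* sigmoid output of each neuron, 16.16     */
    uint32_t in[1024];    /* inputs, 16.16, shared by all 8 neurons   */
    uint32_t wt[8][1024]; /* weights, 16.16, one row per neuron       */
} nn_regs;

#define NN ((nn_regs *)0xFFD00000u)  /* assumed MMIO base address */

/* Run one layer: load inputs, trigger, poll the status register,
   then collect the eight sigmoid outputs. */
static void nn_layer(const uint32_t *x, int n, uint32_t y[8])
{
    for (int i = 0; i < n; i++)
        NN->in[i] = x[i];
    NN->ctrl = 1;               /* start the asynchronous cycle */
    while (!(NN->status & 1))
        ;                       /* spin until completion        */
    for (int j = 0; j < 8; j++)
        y[j] = NN->out[j];
}

Multiple layers then amount to calling nn_layer() again with the eight
previous outputs as the new inputs, which is the scheme described above.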

Re: Neural Network Accelerators

<sml4tj$img$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=21961&group=comp.arch#21961

Newsgroups: comp.arch
 by: Stephen Fuld - Fri, 12 Nov 2021 07:25 UTC

On 11/11/2021 10:38 PM, robf...@gmail.com wrote:
> For the Thor 2021 version core neural network accelerator instructions
> are included. Most of the instructions are for loading values into arrays
> like the inputs or weights array, or variables like the bias value. The
> neural network performs computations when triggered by software.
> The network runs asynchronously and has a status register used to
> indicate completion of a computation cycle. Eight neurons all compute
> at the same time resulting in a sigmoid output for each neuron.
> Thor’s neural network consists of eight neurons computing using 16.16
> fixed point arithmetic. Each neuron may have up to 1024 inputs.
> Activation values are computed serially with a fixed point multiply and
> accumulate operation.
> While there are only eight neurons in a single layer, multiple layers may
> be built up by reading the output levels and using them as inputs for the
> next round of calculations. The input / weights array may be partitioned
> so that only part of it is used in any one calculation cycle.
> I was wondering if there were any processors supporting neural networks
> that I could study?

There are some historical chips, and several current, either in use or
under development. I am not sure how much information you can get about
their internal architecture.

You might start with

https://en.wikipedia.org/wiki/AI_accelerator

and check out Google's Tensor Processing Unit, and for a different
approach, IBM's True North.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<00b10075-eff2-4db4-991a-e0e812fc4299n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21962&group=comp.arch#21962

Newsgroups: comp.arch
 by: JohnG - Fri, 12 Nov 2021 11:37 UTC

On Thursday, November 11, 2021 at 11:25:42 PM UTC-8, Stephen Fuld wrote:
> On 11/11/2021 10:38 PM, robf...@gmail.com wrote:
> > For the Thor 2021 version core neural network accelerator instructions
> > are included. Most of the instructions are for loading values into arrays
> > like the inputs or weights array, or variables like the bias value. The
> > neural network performs computations when triggered by software.
> > The network runs asynchronously and has a status register used to
> > indicate completion of a computation cycle. Eight neurons all compute
> > at the same time resulting in a sigmoid output for each neuron.
> > Thor’s neural network consists of eight neurons computing using 16.16
> > fixed point arithmetic. Each neuron may have up to 1024 inputs.
> > Activation values are computed serially with a fixed point multiply and
> > accumulate operation.
> > While there are only eight neurons in a single layer, multiple layers may
> > be built up by reading the output levels and using them as inputs for the
> > next round of calculations. The input / weights array may be partitioned
> > so that only part of it is used in any one calculation cycle.
> > I was wondering if there were any processors supporting neural networks
> > that I could study?
> There are some historical chips, and several current, either in use or
> under development. I am not sure how much information you can get about
> their internal architecture.
>
> You might start with
>
> https://en.wikipedia.org/wiki/AI_accelerator
>
> and check out Google's Tensor Processing Unit, and for a different
> approach, IBM's True North.
>
>
>
> --
> - Stephen Fuld
> (e-mail address disguised to prevent spam)

While this documents an abstract machine rather than an exact implementation, it still gives a lot of insight IMHO. https://docs.nvidia.com/deeplearning/performance/dl-performance-convolutional/index.html

Another couple papers that look interesting on first scan...
https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
https://arxiv.org/pdf/1410.0759.pdf

Another direction might be programming for Apple's M1 Neural Engine -
https://developer.apple.com/documentation/accelerate/bnns
https://developer.apple.com/documentation/metalperformanceshaders

-JohnG

Re: Neural Network Accelerators

<smlm6d$o6a$1@gioia.aioe.org>

https://www.novabbs.com/devel/article-flat.php?id=21965&group=comp.arch#21965

Newsgroups: comp.arch
 by: Terje Mathisen - Fri, 12 Nov 2021 12:20 UTC

Stephen Fuld wrote:
> On 11/11/2021 10:38 PM, robf...@gmail.com wrote:
>> For the Thor 2021 version core neural network accelerator instructions
>> are included. Most of the instructions are for loading values into arrays
>> like the inputs or weights array, or variables like the bias value. The
>> neural network performs computations when triggered by software.
>> The network runs asynchronously and has a status register used to
>> indicate completion of a computation cycle. Eight neurons all compute
>> at the same time resulting in a sigmoid output for each neuron.
>> Thor’s neural network consists of eight neurons computing using 16.16
>> fixed point arithmetic. Each neuron may have up to 1024 inputs.
>> Activation values are computed serially with a fixed point multiply and
>> accumulate operation.
>> While there are only eight neurons in a single layer, multiple layers may
>> be built up by reading the output levels and using them as inputs for the
>> next round of calculations. The input / weights array may be partitioned
>> so that only part of it is used in any one calculation cycle.
>> I was wondering if there were any processors supporting neural networks
>> that I could study?
>
> There are some historical chips, and several current, either in use or
> under development.  I am not sure how much information you can get about
> their internal architecture.
>
> You might start with
>
> https://en.wikipedia.org/wiki/AI_accelerator
>
> and check out Google's Tensor Processing Unit, and for a different
> approach, IBM's True North.

Please include Tesla's custom chips as well, both the runtime engine
which is installed in all their cars (10K 8-bit MACs, presumably with a
32-bit accumulator) and the training chip they use in Dojo, which I
believe is using a more or less standard 16-bit fp format.

Tesla had much more severe power constraints than Google, so I believe
their in-car chips are leading in performance/watt.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Re: Neural Network Accelerators

<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21969&group=comp.arch#21969

Newsgroups: comp.arch
 by: JimBrakefield - Fri, 12 Nov 2021 14:57 UTC

On Friday, November 12, 2021 at 6:20:32 AM UTC-6, Terje Mathisen wrote:
> Stephen Fuld wrote:
> > On 11/11/2021 10:38 PM, robf...@gmail.com wrote:
> >> For the Thor 2021 version core neural network accelerator instructions
> >> are included. Most of the instructions are for loading values into arrays
> >> like the inputs or weights array, or variables like the bias value. The
> >> neural network performs computations when triggered by software.
> >> The network runs asynchronously and has a status register used to
> >> indicate completion of a computation cycle. Eight neurons all compute
> >> at the same time resulting in a sigmoid output for each neuron.
> >> Thor’s neural network consists of eight neurons computing using 16.16
> >> fixed point arithmetic. Each neuron may have up to 1024 inputs.
> >> Activation values are computed serially with a fixed point multiply and
> >> accumulate operation.
> >> While there are only eight neurons in a single layer, multiple layers may
> >> be built up by reading the output levels and using them as inputs for the
> >> next round of calculations. The input / weights array may be partitioned
> >> so that only part of it is used in any one calculation cycle.
> >> I was wondering if there were any processors supporting neural networks
> >> that I could study?
> >
> > There are some historical chips, and several current, either in use or
> > under development. I am not sure how much information you can get about
> > their internal architecture.
> >
> > You might start with
> >
> > https://en.wikipedia.org/wiki/AI_accelerator
> >
> > and check out Google's Tensor Processing Unit, and for a different
> > approach, IBM's True North.
> Please include Tesla's custom chips as well, both the runtime engine
> which is installed in all their cars (10K 8-bit MACs, presumably with a
> 32-bit accumulator) and the training chip they use in Dojo, which I
> believe is using a more or less standard 16-bit fp format.
>
> Tesla had much more severe power constraint than Google, so I believe
> their in-car chips are leading in performance/watt.
>
> Terje
>
>
> --
> - <Terje.Mathisen at tmsw.no>
> "almost all programming can be viewed as an exercise in caching"

Dojo also supports 8-bit fp using a "block floating-point" exponent offset
https://cdn.motor1.com/pdf-files/535242876-tesla-dojo-technology.pdf
The Posit and NN literature have studies of shortened fp for NN weights.

Not sure if training uses larger fp and then application instances use shortened fp?
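
A toy rendering of the block floating-point idea: one shared exponent per
block of values, each value stored as an 8-bit mantissa. This shows only
the general scheme, not Tesla's actual format:

#include <math.h>
#include <stdint.h>

#define BLK 16  /* values per shared exponent */

typedef struct {
    int8_t mant[BLK];  /* per-value 8-bit mantissas        */
    int    exp;        /* one exponent for the whole block */
} bfp_block;

/* Pack floats: pick the exponent from the largest magnitude,
   then quantize every value relative to it. */
static void bfp_pack(const float *x, bfp_block *b)
{
    float maxv = 0.0f;
    for (int i = 0; i < BLK; i++)
        if (fabsf(x[i]) > maxv) maxv = fabsf(x[i]);
    int e = 0;
    frexpf(maxv, &e);          /* maxv = m * 2^e, 0.5 <= m < 1      */
    b->exp = e - 7;            /* scale largest value into int8 range */
    for (int i = 0; i < BLK; i++) {
        long m = lrintf(ldexpf(x[i], -b->exp));
        if (m >  127) m =  127;    /* saturate */
        if (m < -127) m = -127;
        b->mant[i] = (int8_t)m;
    }
}

static float bfp_unpack(const bfp_block *b, int i)
{
    return ldexpf((float)b->mant[i], b->exp);
}

Values much smaller than the block maximum lose precision, which is the
usual trade: weights within one block tend to share a scale, so 8 bits
plus a shared exponent is often enough.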

Re: Neural Network Accelerators

<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21974&group=comp.arch#21974

Newsgroups: comp.arch
 by: MitchAlsup - Fri, 12 Nov 2021 22:52 UTC

I think it is rather safe to say (at this point in time) that NN-accelerators
are in the IBM 704 days of these kinds of architectures.

A bit more than barely functional, and a long way to go.........

Re: Neural Network Accelerators

<FtTjJ.17520$hm7.7298@fx07.iad>

https://www.novabbs.com/devel/article-flat.php?id=21976&group=comp.arch#21976

Newsgroups: comp.arch
 by: EricP - Sat, 13 Nov 2021 18:28 UTC

MitchAlsup wrote:
> I think it is rather safe to say (at this point in time) the NN-accelerators
> are in the 704 days of these kinds of architectures.
>
> A bit more than barely function, and a long way to go.........

The NNs talked about mostly in the press, and which most vendors
are trying to sell you, are Convolutional Neural Networks (CNNs),
which are basically multiple layers of sums.
You can build a fancy pattern matcher out of one, but
it will never make a good decision mechanism.
It will always be a mystery why it "decided" a certain way,
because it is just calculating a great whacking polynomial.
As NNs go, I have a gut feeling that is a dead end.

Real NNs have feedback, called recurrent, and real neurons are spiky,
which introduces signal timing and phase delays as attributes.

In particular signal phase timing adds a whole new dimension for
information storage. Feedback allows resonances to enhance or
suppress different combinations of inputs.

That is the basis for my suspicion that we will eventually find
that brains, all brains, are akin to _holograms_.

Clusters of neurons can build holographic modules and form
holographic modules of modules.

Re: Neural Network Accelerators

<smp0lu$28u$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=21977&group=comp.arch#21977

Newsgroups: comp.arch
 by: BGB - Sat, 13 Nov 2021 18:36 UTC

On 11/12/2021 4:52 PM, MitchAlsup wrote:
> I think it is rather safe to say (at this point in time) the NN-accelerators
> are in the 704 days of these kinds of architectures.
>
> A bit more than barely function, and a long way to go.........
>

Yeah, it is a bit hit or miss even what they should be doing exactly...

Some of the popular options seem to be, in effect, doing large glorified
matrix multiplies using a truncated floating-point format (BF16,
essentially Binary32 with the low 16 bits cut off).

Not sure if there is a "good" reason for BF16, or if it is more a
workaround for popular / mainstream CPU architectures lacking support
for Binary16 SIMD.

It comes off like, say, if I proposed a new 32-bit floating-point format
that was effectively just Double with the low 32 bits cut off (then tried
to pass it off as "better" when it is really just a way to allow cheaper
format conversions).
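
The cheapness shows up directly in code: a BF16 value is just the top half
of a Binary32, so conversion is a 16-bit shift (truncation shown here;
real implementations typically round to nearest even):

#include <stdint.h>
#include <string.h>

static uint16_t f32_to_bf16(float f)  /* keep sign, E8, top 7 frac bits */
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);
}

static float bf16_to_f32(uint16_t h)  /* widen: low frac bits are zero */
{
    uint32_t u = (uint32_t)h << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}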

Both Binary16 and BF16 could make sense for dedicated low-precision SIMD
ops, and also are small enough to be implemented reasonably affordably
on an FPGA.

For general use, Binary16 (S.E5.F10) probably makes more sense, though
one could debate whether BF16 (S.E8.F7) has enough value in the
general-case to make it worthwhile (short of trying to do a TensorFlow
port or similar, I have doubts).

More debatably, one could argue for 8x or 16x (S.E4.F3) vectors
for Neural-Net uses. Though, I suspect the gains would be small, as the
relative cost of wrangling inputs would somewhat outweigh the possible
savings from such operators over operating on 16-bit elements (with
possible Packed FP8 <-> FP16 conversion operators).

Intermediate options are mostly those involving FP10 (S.E5.F4) or FP12
(S.E5.F6):
3x FP10 in 32 bits (Maps to 4x FP16, X/Y/Z/{0/-/1/-1});
6x FP10 in 64 bits (Maps to 8x FP16, X/Y/Z/0,P/Q/R/0);
4x FP12 in 48 bits;
...

If one expected to do a lot of FP16, this could justify the cost of having
dedicated SIMD units for these (rather than running them internally through
a slower but higher-precision FPU).

Re: Neural Network Accelerators

<oSTjJ.50938$SR4.17611@fx43.iad>

https://www.novabbs.com/devel/article-flat.php?id=21979&group=comp.arch#21979

Newsgroups: comp.arch
 by: EricP - Sat, 13 Nov 2021 18:54 UTC

EricP wrote:
>
> Real NN have feedback, called recurrent, and real neurons are spiky
> which introduces signal timing and phase delays as attributes.
>
> In particular signal phase timing adds a whole new dimension for
> information storage. Feedback allows resonances to enhance or
> suppress different combinations of inputs.
>
> That is the basis for my suspicion that we will eventually find
> that brains, all brains, are akin to _holograms_.
>
> Clusters of neurons can build holographic modules and form
> holographic modules of modules.

One mystery about this is why every brain doesn't immediately
collapse in a giant epileptic fit. Nature must have found a
way to detect and prevent it as the organism grows.
Natural selection would be a poor mechanism because there are
so many ways to fail, and so few ways to succeed,
that almost no brains would survive.

So there must be some mechanism that "drives" these self organizing
networks toward interconnections that do not have uncontrolled feedback.

I've said this before but I suspect _that_ is what nature discovered
at the Cambrian explosion 540 million years ago.

Re: Neural Network Accelerators

<e212fada-ffa9-49ab-8b08-8ecf1d572ac9n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21981&group=comp.arch#21981

Newsgroups: comp.arch
 by: MitchAlsup - Sat, 13 Nov 2021 19:30 UTC

On Saturday, November 13, 2021 at 12:37:53 PM UTC-6, BGB wrote:
> On 11/12/2021 4:52 PM, MitchAlsup wrote:
> > I think it is rather safe to say (at this point in time) the NN-accelerators
> > are in the 704 days of these kinds of architectures.
> >
> > A bit more than barely function, and a long way to go.........
> >
> Yeah, it is a bit hit or miss even what they should be doing exactly...
>
>
> Some of the popular options seem to be in-effect doing large glorified
> matrix multiplies using a truncated floating format (BF16, essentially
> Binary32 with the low 16 bits cut off).
>
>
> Not sure if there is a "good" reason for BF16, or if it is more a
> workaround for popular / mainstream CPU architectures lacking support
> for Binary16 SIMD.
>
> It comes off like, say, if I proposed a new 32-bit floating point format
> that was effectively just Double with the low 32 bits cut off (then try
> to pass it off as "better" when it is really just a way to allow cheaper
> format conversions).
>
>
> Both Binary16 and BF16 could make sense for dedicated low-precision SIMD
> ops, and also are small enough to be implemented reasonably affordably
> on an FPGA.
>
> For general use, Binary16 (S.E5.F10) probably makes more sense, though
> one could debate whether BF16 (S.E8.F7) has enough value in the
> general-case to make it worthwhile (short of trying to do a TensorFlow
> port or similar, I have doubts).
<
A lot of the self-driving NNs can use as few as 1-bit weighting matrices
and 8-bit accumulators and still achieve useful pattern recognition rates.
Indeed, much of the work here is centered on decreasing the storage BW
needed to feed the convolution engine rather than on making the
convolution engine faster or larger.
<
I am pretty sure these kinds of experiments will continue for another decade.
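
A toy scalar model of the 1-bit weighting scheme (names and layout made up
here): each weight is one bit meaning +1 or -1, so the "multiply"
degenerates to an add or subtract, and weight storage bandwidth drops 8x
versus even int8 weights:

#include <stdint.h>

/* Dot product with bit-packed +1/-1 weights and 8-bit activations.
   A 32-bit accumulator keeps the sketch overflow-free; real engines
   get away with much narrower ones. */
static int32_t bin_dot(const uint8_t *w_bits, const int8_t *x, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        int w = (w_bits[i >> 3] >> (i & 7)) & 1;
        acc += w ? x[i] : -x[i];
    }
    return acc;
}

Fully binarized variants (1-bit activations too) reduce further to XNOR
plus popcount.
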
>
>
> More debatable, but one could argue for an 8x or 16x (S.E4.F3) vectors
> for Neural-Net uses. Though, I suspect the gains would be small as the
> relative cost of wrangling inputs would somewhat outweigh the possible
> savings from such operators over operating on 16-bit elements (with
> possible Packed FP8 <-> FP16 conversion operators).
<
One big problem is how one "expresses" such a matrix multiplication,
where the size of the containers in the weighting matrix changes on a
per-weight basis.
>
> Intermediate options are mostly those involving FP10 (S.E5.F4) or FP12
> (S.E5.F6):
> 3x FP10 in 32 bits (Maps to 4x FP16, X/Y/Z/{0/-/1/-1});
> 6x FP10 in 64 bits (Maps to 8x FP16, X/Y/Z/0,P/Q/R/0);
> 4x FP12 in 48 bits;
> ...
>
> If one expected to do a lot of FP16, this could justify the cost of have
> dedicated SIMD units for these (rather than run them internally through
> a slower but higher precision FPU).

Re: Neural Network Accelerators

<2xUjJ.21482$452.13388@fx22.iad>

https://www.novabbs.com/devel/article-flat.php?id=21982&group=comp.arch#21982

Newsgroups: comp.arch
 by: EricP - Sat, 13 Nov 2021 19:40 UTC

EricP wrote:
> MitchAlsup wrote:
>> I think it is rather safe to say (at this point in time) the
>> NN-accelerators
>> are in the 704 days of these kinds of architectures.
>>
>> A bit more than barely function, and a long way to go.........
>
> The NN talked about mostly in the press and which most vendors
> are trying to sell you are Convolution Neural Networks (CNN)
> which are basically multiple layers of sums.
> You can build a fancy pattern matcher out of it but
> it will never make a good decision mechanism.
> It will always be a mystery why it "decided" a certain way
> because it is just calculating a great whacking polynomial.
> As NN go, I have a gut feeling that is a dead end.

This all reminds me of the fuzzy logic fad of the mid-1980s.
It was invented in the 1920s and the term was coined in 1965.
For some reason it became "a thing" in the tech magazines around 1985
for a while (though a quick search still finds lots of references).
Everything was going fuzzy.
Fuzzy logic sometimes was, and still is, labeled artificial intelligence too.

Re: Neural Network Accelerators

<smp6vg$c4v$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=21986&group=comp.arch#21986

Newsgroups: comp.arch
 by: Ivan Godard - Sat, 13 Nov 2021 20:25 UTC

On 11/13/2021 10:54 AM, EricP wrote:
> EricP wrote:
>>
>> Real NN have feedback, called recurrent, and real neurons are spiky
>> which introduces signal timing and phase delays as attributes.
>>
>> In particular signal phase timing adds a whole new dimension for
>> information storage. Feedback allows resonances to enhance or
>> suppress different combinations of inputs.
>>
>> That is the basis for my suspicion that we will eventually find
>> that brains, all brains, are akin to _holograms_.
>>
>> Clusters of neurons can build holographic modules and form
>> holographic modules of modules.
>
> One mystery about this is why doesn't every brain immediately
> collapse in a giant epileptic fit. Nature must have found a
> way to detect and prevent it as the organism grows.
> Natural selection would be a poor mechanism because there are
> so many ways to fail and many fewer ways to succeed
> that almost no brains would survive.
>
> So there must be some mechanism that "drives" these self organizing
> networks toward interconnections that do not have uncontrolled feedback.
>
> I've said this before but I suspect _that_ is what nature discovered
> at the Cambrian explosion 540 million years ago.
>

I thought the Cambrian invention was teeth?

Re: Neural Network Accelerators

<smpgrd$f3n$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=21989&group=comp.arch#21989

Newsgroups: comp.arch
 by: BGB - Sat, 13 Nov 2021 23:12 UTC

On 11/13/2021 1:30 PM, MitchAlsup wrote:
> On Saturday, November 13, 2021 at 12:37:53 PM UTC-6, BGB wrote:
>> On 11/12/2021 4:52 PM, MitchAlsup wrote:
>>> I think it is rather safe to say (at this point in time) the NN-accelerators
>>> are in the 704 days of these kinds of architectures.
>>>
>>> A bit more than barely function, and a long way to go.........
>>>
>> Yeah, it is a bit hit or miss even what they should be doing exactly...
>>
>>
>> Some of the popular options seem to be in-effect doing large glorified
>> matrix multiplies using a truncated floating format (BF16, essentially
>> Binary32 with the low 16 bits cut off).
>>
>>
>> Not sure if there is a "good" reason for BF16, or if it is more a
>> workaround for popular / mainstream CPU architectures lacking support
>> for Binary16 SIMD.
>>
>> It comes off like, say, if I proposed a new 32-bit floating point format
>> that was effectively just Double with the low 32 bits cut off (then try
>> to pass it off as "better" when it is really just a way to allow cheaper
>> format conversions).
>>
>>
>> Both Binary16 and BF16 could make sense for dedicated low-precision SIMD
>> ops, and also are small enough to be implemented reasonably affordably
>> on an FPGA.
>>
>> For general use, Binary16 (S.E5.F10) probably makes more sense, though
>> one could debate whether BF16 (S.E8.F7) has enough value in the
>> general-case to make it worthwhile (short of trying to do a TensorFlow
>> port or similar, I have doubts).
> <
> A lot of the self-driving NNs can use as few as 1-bit weighting matrices
> and 8-bit accumulators and achieve useful pattern recognition rates.
> Indeed, much of the work, here, is centered around decreasing the storage
> BW to feed to convolution engine than on making the convolution engine
> faster or larger.
> <
> I am pretty sure this kind of experiments will continue for another decade.

Yeah.

I suspect in my case, some of the Block-Texture and Block-Audio ops
could also conceivably be applicable to NN weights, at least in very
specialized use-cases.

It is also possible (for cases involving vectors with constant weights)
that I could define encodings for, say:
PLDCM8SH Imm32, Rn //Load 4x FP8 into 4x Binary16 in Rn.
PLDCM8UH Imm32, Rn //Load 4x FP8 into 4x Binary16 in Rn.
PLDCH Imm32, Rn //Load 2x Binary16 into 2x Binary32

These could also make sense for vector literals in C, but at the moment
this is a lower priority, as vector literals in C are a fairly niche
feature.

Went and added a few possible encodings to the listing (in Op64 space),
which mostly add alternative forms to the existing "FLDCF Imm32, Rn"
encoding.
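
For reference, a scalar sketch of the FP8-to-Binary16 widening that a
PLDCM8SH-style op would apply to four lanes at once. This assumes an
S.E4.F3 format with exponent bias 7 (Binary16's bias is 15); Inf/NaN,
if the FP8 format reserves them, get no special treatment here:

#include <stdint.h>

static uint16_t fp8_to_fp16(uint8_t v)
{
    uint16_t s = (uint16_t)(((v >> 7) & 1) << 15);  /* sign           */
    uint16_t e = (v >> 3) & 15;                     /* 4-bit exponent */
    uint16_t f = v & 7;                             /* 3-bit fraction */
    if (e == 0) {
        if (f == 0)
            return s;            /* signed zero */
        int c = 0;               /* FP8 subnormal: normalize it;  */
        while (!(f & 8)) {       /* every such value is a normal  */
            f <<= 1;             /* number in Binary16            */
            c++;
        }
        return (uint16_t)(s | ((uint16_t)(9 - c) << 10) | ((f & 7) << 7));
    }
    /* normal: rebias 7 -> 15, widen the fraction 3 -> 10 bits */
    return (uint16_t)(s | ((e + 8) << 10) | (f << 7));
}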

>>
>>
>> More debatable, but one could argue for an 8x or 16x (S.E4.F3) vectors
>> for Neural-Net uses. Though, I suspect the gains would be small as the
>> relative cost of wrangling inputs would somewhat outweigh the possible
>> savings from such operators over operating on 16-bit elements (with
>> possible Packed FP8 <-> FP16 conversion operators).
> <
> One big problem is how does one "express" such a matrix multiplication,
> where the size of the containers in the weighting matrix change on a per
> weight basis.

In many contexts, there is a trick: one can store a matrix in
pre-transposed form and turn the product into a bunch of vector
multiply-accumulate operations.

Though, as can be noted, doing a net with naive matrix multiplies does
mean one is going to be doing a whole lot of meaningless multiplies by
zero.

I guess one possible trick (if doing a large matrix multiply with a
loop) would be using an RLE scheme to skip over vectors for runs where
all of the components are zeroes (and "pre-cooking" the model to
eliminate large sections of nearly-zero values).
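
A small sketch combining both points: the weights are stored pre-transposed
so the inner loop is a plain vector multiply-accumulate over contiguous
memory, and a "pre-cooked" index table (standing in for the RLE scheme)
lists only the weight rows that are not all zero. Names here are
illustrative:

/* y = W * x with W stored transposed as wt[n][m].
   nz[] holds the nnz row indices whose weights are not all zero. */
static void matvec_t(const float *wt, const int *nz, int nnz,
                     const float *x, float *y, int m)
{
    for (int j = 0; j < m; j++)
        y[j] = 0.0f;
    for (int k = 0; k < nnz; k++) {
        int i = nz[k];
        const float *row = &wt[i * m];
        for (int j = 0; j < m; j++)   /* the vector MAC step */
            y[j] += x[i] * row[j];
    }
}

Each skipped row avoids m multiplies, which is where "pre-cooking" away
the nearly-zero sections pays off.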

Though, as can be noted:
I don't really use these sort of "huge matrix multiply" style nets in
any of my own projects.

My own uses have tended to be mostly things like using genetic
algorithms to build more specialized classifiers.

These would tend to be more things like "do some math on some vectors"
followed by comparing the results against some threshold biases, and
running the result into a decision tree. The "fancier" version would
likely be having some way to transform the vector-compare result into a
bit-mask which could be fed into a "switch()" block or similar (as-is,
this can theoretically be done for 4-element vector-compare results).

There is not currently a dedicated S-Curve operation, though when
needed, variations of the Heaviside function can be built using packed
compare and packed select (*1).

In these cases, the weight and bias vectors would typically be embedded
directly into the program logic (though, granted, this sort of thing
falls slightly outside the scope of "standard C").

*1:
const __vec4sf cv_n1={-1,-1,-1,-1}, cv_p1={1,1,1,1};
__vec4sf v, bi, wv;

... calculate v ...
wv = __vec4sf_pcsel_gt(v, bi, cv_p1, cv_n1);

Or, say, pseudo-asm:
PLDCM8SH 0x38383838, cv_p1 //possible encoding
PLDCM8SH 0xB8B8B8B8, cv_n1 //possible encoding
...
PCMPGT.H bi, v
PCSELT.W cv_p1, cv_n1, wv

Which, for each vector element, does effectively:
wv[i] = (v[i] > bi[i]) ? cv_p1[i] : cv_n1[i];
Or, alternately:
wv[i] = (v[i] > bi[i]) ? 1.0 : -1.0;

....

>>
>> Intermediate options are mostly those involving FP10 (S.E5.F4) or FP12
>> (S.E5.F6):
>> 3x FP10 in 32 bits (Maps to 4x FP16, X/Y/Z/{0/-/1/-1});
>> 6x FP10 in 64 bits (Maps to 8x FP16, X/Y/Z/0,P/Q/R/0);
>> 4x FP12 in 48 bits;
>> ...
>>
>> If one expected to do a lot of FP16, this could justify the cost of have
>> dedicated SIMD units for these (rather than run them internally through
>> a slower but higher precision FPU).

Re: Neural Network Accelerators

<af7e9761-852c-4252-b6c8-ff9873c5378an@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21990&group=comp.arch#21990

Newsgroups: comp.arch
 by: Scott Smader - Sat, 13 Nov 2021 23:56 UTC

On Saturday, November 13, 2021 at 12:25:22 PM UTC-8, Ivan Godard wrote:
> On 11/13/2021 10:54 AM, EricP wrote:
> > EricP wrote:
> >>
> >> Real NN have feedback, called recurrent, and real neurons are spiky
> >> which introduces signal timing and phase delays as attributes.
> >>
> >> In particular signal phase timing adds a whole new dimension for
> >> information storage. Feedback allows resonances to enhance or
> >> suppress different combinations of inputs.
> >>
> >> That is the basis for my suspicion that we will eventually find
> >> that brains, all brains, are akin to _holograms_.
> >>
> >> Clusters of neurons can build holographic modules and form
> >> holographic modules of modules.
> >
> > One mystery about this is why doesn't every brain immediately
> > collapse in a giant epileptic fit. Nature must have found a
> > way to detect and prevent it as the organism grows.
> > Natural selection would be a poor mechanism because there are
> > so many ways to fail and many fewer ways to succeed
> > that almost no brains would survive.
> >
> > So there must be some mechanism that "drives" these self organizing
> > networks toward interconnections that do not have uncontrolled feedback..
> >
> > I've said this before but I suspect _that_ is what nature discovered
> > at the Cambrian explosion 540 million years ago.
> >
> I thought the Cambrian invention was teeth?

The internet says an early Cambrian innovation was hard parts, like shells and plates. The oldest toothed fossil dates to 410 Mya, but the Cambrian Era was 541 - 485.4 Mya. (Btw, more recent fossils suggest that the appearance of new body plans was more gradual than once believed, so not so "explosive.")

Arguing in favor of EricP's point, the cerebellum (which has been shown to at least be able to influence seizures) was a Cambrian "invention," according to Paul Cisek in "Resynthesizing behavior through phylogenetic refinement." (Great read!) However, Cisek also says image-forming eyes were already being used in visually guided approach and reinforcement learning by telencephalons during the Pre-Cambrian Era. To me that suggests that the solution for avoiding epileptic seizures was baked in even earlier.

Re: Neural Network Accelerators

<smpjk8$uto$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=21991&group=comp.arch#21991

Newsgroups: comp.arch
 by: Ivan Godard - Sun, 14 Nov 2021 00:01 UTC

On 11/13/2021 3:56 PM, Scott Smader wrote:
> On Saturday, November 13, 2021 at 12:25:22 PM UTC-8, Ivan Godard wrote:
>> On 11/13/2021 10:54 AM, EricP wrote:
>>> EricP wrote:
>>>>
>>>> Real NN have feedback, called recurrent, and real neurons are spiky
>>>> which introduces signal timing and phase delays as attributes.
>>>>
>>>> In particular signal phase timing adds a whole new dimension for
>>>> information storage. Feedback allows resonances to enhance or
>>>> suppress different combinations of inputs.
>>>>
>>>> That is the basis for my suspicion that we will eventually find
>>>> that brains, all brains, are akin to _holograms_.
>>>>
>>>> Clusters of neurons can build holographic modules and form
>>>> holographic modules of modules.
>>>
>>> One mystery about this is why doesn't every brain immediately
>>> collapse in a giant epileptic fit. Nature must have found a
>>> way to detect and prevent it as the organism grows.
>>> Natural selection would be a poor mechanism because there are
>>> so many ways to fail and many fewer ways to succeed
>>> that almost no brains would survive.
>>>
>>> So there must be some mechanism that "drives" these self organizing
>>> networks toward interconnections that do not have uncontrolled feedback.
>>>
>>> I've said this before but I suspect _that_ is what nature discovered
>>> at the Cambrian explosion 540 million years ago.
>>>
>> I thought the Cambrian invention was teeth?
>
> The internet says an early Cambrian innovation was hard parts, like shells and plates. The oldest toothed fossil dates to 410 Mya, but the Cambrian Era was 541 - 485.4 Mya. (Btw, more recent fossils suggest that the appearance of new body plans was more gradual than once believed, so not so "explosive.")
>
> Arguing in favor of EricP's point, the cerebellum (which has been shown to at least be able to influence seizures) was a Cambrian "invention," according to Paul Cisek in "Resynthesizing behavior through phylogenetic refinement." (Great read!) However, Cisek also says image-forming eyes were already being used in visually guided approach and reinforcement learning by telencephalons during the Pre-Cambrian Era. To me that suggests that the solution for avoiding epileptic seizures was baked in even earlier.

Why would you evolve shells if nobody has teeth?

Re: Neural Network Accelerators

<a4851bed-84f7-4865-b262-62b95636c619n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=21992&group=comp.arch#21992

Newsgroups: comp.arch
 by: Scott Smader - Sun, 14 Nov 2021 00:07 UTC

On Saturday, November 13, 2021 at 4:01:16 PM UTC-8, Ivan Godard wrote:
> On 11/13/2021 3:56 PM, Scott Smader wrote:
> > On Saturday, November 13, 2021 at 12:25:22 PM UTC-8, Ivan Godard wrote:
> >> On 11/13/2021 10:54 AM, EricP wrote:
> >>> EricP wrote:
> >>>>
> >>>> Real NN have feedback, called recurrent, and real neurons are spiky
> >>>> which introduces signal timing and phase delays as attributes.
> >>>>
> >>>> In particular signal phase timing adds a whole new dimension for
> >>>> information storage. Feedback allows resonances to enhance or
> >>>> suppress different combinations of inputs.
> >>>>
> >>>> That is the basis for my suspicion that we will eventually find
> >>>> that brains, all brains, are akin to _holograms_.
> >>>>
> >>>> Clusters of neurons can build holographic modules and form
> >>>> holographic modules of modules.
> >>>
> >>> One mystery about this is why doesn't every brain immediately
> >>> collapse in a giant epileptic fit. Nature must have found a
> >>> way to detect and prevent it as the organism grows.
> >>> Natural selection would be a poor mechanism because there are
> >>> so many ways to fail and many fewer ways to succeed
> >>> that almost no brains would survive.
> >>>
> >>> So there must be some mechanism that "drives" these self organizing
> >>> networks toward interconnections that do not have uncontrolled feedback.
> >>>
> >>> I've said this before but I suspect _that_ is what nature discovered
> >>> at the Cambrian explosion 540 million years ago.
> >>>
> >> I thought the Cambrian invention was teeth?
> >
> > The internet says an early Cambrian innovation was hard parts, like shells and plates. The oldest toothed fossil dates to 410 Mya, but the Cambrian Era was 541 - 485.4 Mya. (Btw, more recent fossils suggest that the appearance of new body plans was more gradual than once believed, so not so "explosive.")
> >
> > Arguing in favor of EricP's point, the cerebellum (which has been shown to at least be able to influence seizures) was a Cambrian "invention," according to Paul Cisek in "Resynthesizing behavior through phylogenetic refinement." (Great read!) However, Cisek also says image-forming eyes were already being used in visually guided approach and reinforcement learning by telencephalons during the Pre-Cambrian Era. To me that suggests that the solution for avoiding epileptic seizures was baked in even earlier.
> Why would you evolve shells if nobody has teeth?

There are some mean suckers out there!

Re: Neural Network Accelerators

<039ed604-e088-4766-916f-014aca2aa079n@googlegroups.com>

 by: MitchAlsup - Sun, 14 Nov 2021 01:26 UTC

On Saturday, November 13, 2021 at 6:01:16 PM UTC-6, Ivan Godard wrote:
> On 11/13/2021 3:56 PM, Scott Smader wrote:
> > On Saturday, November 13, 2021 at 12:25:22 PM UTC-8, Ivan Godard wrote:
> >> On 11/13/2021 10:54 AM, EricP wrote:
> >>> EricP wrote:
> >>>>
> >>>> Real NN have feedback, called recurrent, and real neurons are spiky
> >>>> which introduces signal timing and phase delays as attributes.
> >>>>
> >>>> In particular signal phase timing adds a whole new dimension for
> >>>> information storage. Feedback allows resonances to enhance or
> >>>> suppress different combinations of inputs.
> >>>>
> >>>> That is the basis for my suspicion that we will eventually find
> >>>> that brains, all brains, are akin to _holograms_.
> >>>>
> >>>> Clusters of neurons can build holographic modules and form
> >>>> holographic modules of modules.
> >>>
> >>> One mystery about this is why doesn't every brain immediately
> >>> collapse in a giant epileptic fit. Nature must have found a
> >>> way to detect and prevent it as the organism grows.
> >>> Natural selection would be a poor mechanism because there are
> >>> so many ways to fail and many fewer ways to succeed
> >>> that almost no brains would survive.
> >>>
> >>> So there must be some mechanism that "drives" these self organizing
> >>> networks toward interconnections that do not have uncontrolled feedback.
> >>>
> >>> I've said this before but I suspect _that_ is what nature discovered
> >>> at the Cambrian explosion 540 million years ago.
> >>>
> >> I thought the Cambrian invention was teeth?
> >
> > The internet says an early Cambrian innovation was hard parts, like shells and plates. The oldest toothed fossil dates to 410 Mya, but the Cambrian Era was 541 - 485.4 Mya. (Btw, more recent fossils suggest that the appearance of new body plans was more gradual than once believed, so not so "explosive.")
> >
> > Arguing in favor of EricP's point, the cerebellum (which has been shown to at least be able to influence seizures) was a Cambrian "invention," according to Paul Cisek in "Resynthesizing behavior through phylogenetic refinement." (Great read!) However, Cisek also says image-forming eyes were already being used in visually guided approach and reinforcement learning by telencephalons during the Pre-Cambrian Era. To me that suggests that the solution for avoiding epileptic seizures was baked in even earlier.
<
> Why would you evolve shells if nobody has teeth?
<
The precursor to teeth is chitin (the stuff of fingernails and claws). This is sufficient to "bite" through non-hardened skin (and is also how the first hardened skin evolved).
<
So after the evolution of chitin, somebody had to evolve a series of materials even harder (for protection), and the arms race continues... even until today.

Re: Neural Network Accelerators

<7bcd2135-f089-4423-8922-30f10e152ef0n@googlegroups.com>

 by: JimBrakefield - Sun, 14 Nov 2021 02:04 UTC

On Saturday, November 13, 2021 at 12:28:55 PM UTC-6, EricP wrote:
> MitchAlsup wrote:
> > I think it is rather safe to say (at this point in time) the NN-accelerators
> > are in the 704 days of these kinds of architectures.
> >
> > A bit more than barely function, and a long way to go.........
> The NN talked about mostly in the press and which most vendors
> are trying to sell you are Convolution Neural Networks (CNN)
> which are basically multiple layers of sums.
> You can build a fancy pattern matcher out of it but
> it will never make a good decision mechanism.
> It will always be a mystery why it "decided" a certain way
> because it is just calculating a great whacking polynomial.
> As NN go, I have a gut feeling that is a dead end.
>
> Real NN have feedback, called recurrent, and real neurons are spiky
> which introduces signal timing and phase delays as attributes.
>
> In particular signal phase timing adds a whole new dimension for
> information storage. Feedback allows resonances to enhance or
> suppress different combinations of inputs.
>
> That is the basis for my suspicion that we will eventually find
> that brains, all brains, are akin to _holograms_.
>
> Clusters of neurons can build holographic modules and form
> holographic modules of modules.

The Long Short-Term Memory (LSTM) type of ANN has such feedback
connections and works well. Also see Recurrent Neural Networks (RNNs).
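
To make that concrete, here is a minimal sketch of one LSTM step in Python/NumPy (sizes, weights, and names are illustrative, not taken from any particular library). The carried cell state is the feedback path, and the sigmoid gates keep that feedback bounded, which is one engineering answer to the runaway-feedback worry raised upthread:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One LSTM time step. The cell state c is the recurrent feedback
    # path; the gates (values in 0..1) keep it from growing unbounded.
    n = h_prev.size
    z = W @ x + U @ h_prev + b     # pre-activations for all four gates
    f = sigmoid(z[0:n])            # forget gate
    i = sigmoid(z[n:2*n])          # input gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate cell update
    c = f * c_prev + i * g         # gated recurrent state
    h = o * np.tanh(c)             # new hidden output
    return h, c

rng = np.random.default_rng(0)
nx, nh = 4, 3                      # toy sizes: 4 inputs, 3 hidden units
W = rng.normal(scale=0.1, size=(4*nh, nx))
U = rng.normal(scale=0.1, size=(4*nh, nh))
b = np.zeros(4*nh)
h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):                 # run a short input sequence
    h, c = lstm_step(rng.normal(size=nx), h, c, W, U, b)
print(h)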

Re: Neural Network Accelerators

<smqbja$jbq$1@dont-email.me>

 by: Stephen Fuld - Sun, 14 Nov 2021 06:50 UTC

On 11/13/2021 10:28 AM, EricP wrote:
> MitchAlsup wrote:
>> I think it is rather safe to say (at this point in time) the
>> NN-accelerators
>> are in the 704 days of these kinds of architectures.
>>
>> A bit more than barely function, and a long way to go.........
>
> The NN talked about mostly in the press and which most vendors
> are trying to sell you are Convolution Neural Networks (CNN)
> which are basically multiple layers of sums.
> You can build a fancy pattern matcher out of it but
> it will never make a good decision mechanism.

I disagree with the last phrase. NNs have been used for decision
making, and I expect their use will continue, as they are better at
certain kinds of decisions than other, more "conventional" algorithms.
Of course, one may quibble about how "good" is good, but that is a
different question.

BTW, the brain seems to be a sophisticated pattern matcher, and it works
pretty well at many tasks.
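
A concrete comp.arch example of an NN making decisions is the perceptron branch predictor of Jimenez and Lin (2001), variants of which are reported to have shipped in real processors. Below is a toy software sketch in Python/NumPy; the history length and the branch pattern are illustrative choices, while the threshold formula is the one from the paper:

import numpy as np

HIST = 8                                 # global history length (toy choice)
THETA = int(1.93 * HIST + 14)            # training threshold from the paper

weights = np.zeros(HIST + 1, dtype=int)  # weights[0] is the bias
history = np.ones(HIST, dtype=int)       # +1 = taken, -1 = not taken

def predict():
    # The "decision" is one dot product against recent outcomes.
    y = weights[0] + int(weights[1:] @ history)
    return y, y >= 0                     # predict taken if y >= 0

def train(y, taken):
    # Update only on a misprediction or a weak (low-confidence) output.
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= THETA:
        weights[0] += t
        weights[1:] += t * history
    history[:] = np.roll(history, 1)     # shift the new outcome in
    history[0] = t

outcomes = ([True] * 7 + [False]) * 50   # a loop branch: taken 7x, then exit
correct = 0
for taken in outcomes:
    y, pred = predict()
    correct += (pred == taken)
    train(y, taken)
print(f"accuracy: {correct / len(outcomes):.2f}")

The decision rule is a single thresholded dot product, which is why it maps onto small, fast hardware.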

> It will always be a mystery why it "decided" a certain way
> because it is just calculating a great whacking polynomial.

There has been, and continues to be, work on this, and some progress is
being made. But, while explainability certainly would be nice, I am not
sure it is necessary for NNs to be useful.

> As NN go, I have a gut feeling that is a dead end.

Could be. Time will tell.

>
> Real NN have feedback, called recurrent, and real neurons are spiky
> which introduces signal timing and phase delays as attributes.

Sure. You have to distinguish between two different "uses" for neural
networks. One is "science", trying to figure out how the brain works.
These are usually research projects, and their emphasis is on biological
faithfulness. The other is "engineering", trying to find a better way
of solving some useful problem. For this purpose, biological
faithfulness isn't critical. That is where most of the money is, and all
of the hype.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<smqc9q$lr0$1@dont-email.me>

 by: Ivan Godard - Sun, 14 Nov 2021 07:02 UTC

On 11/13/2021 5:26 PM, MitchAlsup wrote:
> On Saturday, November 13, 2021 at 6:01:16 PM UTC-6, Ivan Godard wrote:
>> On 11/13/2021 3:56 PM, Scott Smader wrote:
>>> On Saturday, November 13, 2021 at 12:25:22 PM UTC-8, Ivan Godard wrote:
>>>> On 11/13/2021 10:54 AM, EricP wrote:
>>>>> EricP wrote:
>>>>>>
>>>>>> Real NN have feedback, called recurrent, and real neurons are spiky
>>>>>> which introduces signal timing and phase delays as attributes.
>>>>>>
>>>>>> In particular signal phase timing adds a whole new dimension for
>>>>>> information storage. Feedback allows resonances to enhance or
>>>>>> suppress different combinations of inputs.
>>>>>>
>>>>>> That is the basis for my suspicion that we will eventually find
>>>>>> that brains, all brains, are akin to _holograms_.
>>>>>>
>>>>>> Clusters of neurons can build holographic modules and form
>>>>>> holographic modules of modules.
>>>>>
>>>>> One mystery about this is why doesn't every brain immediately
>>>>> collapse in a giant epileptic fit. Nature must have found a
>>>>> way to detect and prevent it as the organism grows.
>>>>> Natural selection would be a poor mechanism because there are
>>>>> so many ways to fail and many fewer ways to succeed
>>>>> that almost no brains would survive.
>>>>>
>>>>> So there must be some mechanism that "drives" these self organizing
>>>>> networks toward interconnections that do not have uncontrolled feedback.
>>>>>
>>>>> I've said this before but I suspect _that_ is what nature discovered
>>>>> at the Cambrian explosion 540 million years ago.
>>>>>
>>>> I thought the Cambrian invention was teeth?
>>>
>>> The internet says an early Cambrian innovation was hard parts, like shells and plates. The oldest toothed fossil dates to 410 Mya, but the Cambrian Era was 541 - 485.4 Mya. (Btw, more recent fossils suggest that the appearance of new body plans was more gradual than once believed, so not so "explosive.")
>>>
>>> Arguing in favor of EricP's point, the cerebellum (which has been shown to at least be able to influence seizures) was a Cambrian "invention," according to Paul Cisek in "Resynthesizing behavior through phylogenetic refinement." (Great read!) However, Cisek also says image-forming eyes were already being used in visually guided approach and reinforcement learning by telencephalons during the Pre-Cambrian Era. To me that suggests that the solution for avoiding epileptic seizures was baked in even earlier.
> <
>> Why would you evolve shells if nobody has teeth?
> <
> The precursor to teeth is chitin (stuff of fingernails and claws). This is sufficient to "bite" through non-hardened skin (and also how the first hardened-skin evolved.
> <
> So after the evolution of chitin, somebody had to evolve a series of stuff even harder (for protection) and the arms race continues........even until today........
>

And shells had precursors too. Call any offensive hard surface a
"tooth", and any defensive hard surface a "shell". I assert that teeth
precede shells.

Re: Neural Network Accelerators

<cV8kJ.31989$_Y5.24885@fx29.iad>

 by: EricP - Sun, 14 Nov 2021 14:18 UTC

Ivan Godard wrote:
> On 11/13/2021 10:54 AM, EricP wrote:
>> EricP wrote:
>>>
>>> Real NN have feedback, called recurrent, and real neurons are spiky
>>> which introduces signal timing and phase delays as attributes.
>>>
>>> In particular signal phase timing adds a whole new dimension for
>>> information storage. Feedback allows resonances to enhance or
>>> suppress different combinations of inputs.
>>>
>>> That is the basis for my suspicion that we will eventually find
>>> that brains, all brains, are akin to _holograms_.
>>>
>>> Clusters of neurons can build holographic modules and form
>>> holographic modules of modules.
>>
>> One mystery about this is why doesn't every brain immediately
>> collapse in a giant epileptic fit. Nature must have found a
>> way to detect and prevent it as the organism grows.
>> Natural selection would be a poor mechanism because there are
>> so many ways to fail and many fewer ways to succeed
>> that almost no brains would survive.
>>
>> So there must be some mechanism that "drives" these self organizing
>> networks toward interconnections that do not have uncontrolled feedback.
>>
>> I've said this before but I suspect _that_ is what nature discovered
>> at the Cambrian explosion 540 million years ago.
>>
>
> I thought the Cambrian invention was teeth?

Eyes developed then too, but so did legs, antennae, and all manner of
complex life forms. And probably teeth too, which need muscles to work them.
And all of that requires a complex controller, particularly eyes.

In the pre-Cambrian, the most complex life was things like jellyfish:
multicellular organisms with nerves that allow them to swim and a few
neurons specialized to detect light or dark, but with no central
controller and no complex signal processing or decision-making capability.

After the Cambrian line come arthropods and all the animals of today.
Eyes developed at this time, and they require complex signal processing.

Clearly something changed at that boundary that allowed the assembly
of complex NN to control all of these new functions.

Re: Neural Network Accelerators

<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>

 by: Scott Smader - Sun, 14 Nov 2021 16:00 UTC

On Sunday, November 14, 2021 at 6:18:51 AM UTC-8, EricP wrote:
> Ivan Godard wrote:
> > On 11/13/2021 10:54 AM, EricP wrote:
> >> EricP wrote:
> >>>
> >>> Real NN have feedback, called recurrent, and real neurons are spiky
> >>> which introduces signal timing and phase delays as attributes.
> >>>
> >>> In particular signal phase timing adds a whole new dimension for
> >>> information storage. Feedback allows resonances to enhance or
> >>> suppress different combinations of inputs.
> >>>
> >>> That is the basis for my suspicion that we will eventually find
> >>> that brains, all brains, are akin to _holograms_.
> >>>
> >>> Clusters of neurons can build holographic modules and form
> >>> holographic modules of modules.
> >>
> >> One mystery about this is why doesn't every brain immediately
> >> collapse in a giant epileptic fit. Nature must have found a
> >> way to detect and prevent it as the organism grows.
> >> Natural selection would be a poor mechanism because there are
> >> so many ways to fail and many fewer ways to succeed
> >> that almost no brains would survive.
> >>
> >> So there must be some mechanism that "drives" these self organizing
> >> networks toward interconnections that do not have uncontrolled feedback.
> >>
> >> I've said this before but I suspect _that_ is what nature discovered
> >> at the Cambrian explosion 540 million years ago.
> >>
> >
> > I thought the Cambrian invention was teeth?
> Eyes developed then too, but so did legs, antenna, all complex life forms.
> And probably teeth too, which need muscles to work them.
> And all that requires a complex controller, particularly eyes.
>
> In pre-Cambrian the most complex life was things like jellyfish which
> are multicellular organisms and have nerves that allows them to swim,
> and a few neurons are specialized to detect light or dark,
> but no central controller, no complex signal processing or
> decision making capability.
>
> After the Cambrian line are arthropods and all the animals of today.
> Eyes developed at this time which needs complex signal processing.
>
> Clearly something changed at that boundary that allowed the assembly
> of complex NN to control all of these new functions.

The opinions being expressed here would do well to refer to contemporary research. A "tooth" means something specific in the fossil record. The Cambrian Era was not a "line" or "boundary." (https://www.nature.com/articles/s41559-019-0821-6) The neural tube had already yielded to archencephalon which had yielded to telencephalon as the most complex neural circuitry long before the Cambrian Era started. Tens of millions of years before the Cambrian, dopamine was already generating sophisticated foraging behavior that is today displayed by "microorganisms, insects, mollusks, reptiles, fish, birds, and even human hunter–gatherers" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/).

Please consider reading this paper by an expert in the field: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/ or at least have a good look at Figure 2 from that paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/figure/Fig2/?report=objectonly

Then maybe we should start a separate thread?

Re: Neural Network Accelerators

<N3bkJ.63875$Wkjc.36258@fx35.iad>

 by: EricP - Sun, 14 Nov 2021 16:46 UTC

Scott Smader wrote:
> On Sunday, November 14, 2021 at 6:18:51 AM UTC-8, EricP wrote:
>> Ivan Godard wrote:
>>> On 11/13/2021 10:54 AM, EricP wrote:
>>>> EricP wrote:
>>>>> Real NN have feedback, called recurrent, and real neurons are spiky
>>>>> which introduces signal timing and phase delays as attributes.
>>>>>
>>>>> In particular signal phase timing adds a whole new dimension for
>>>>> information storage. Feedback allows resonances to enhance or
>>>>> suppress different combinations of inputs.
>>>>>
>>>>> That is the basis for my suspicion that we will eventually find
>>>>> that brains, all brains, are akin to _holograms_.
>>>>>
>>>>> Clusters of neurons can build holographic modules and form
>>>>> holographic modules of modules.
>>>> One mystery about this is why doesn't every brain immediately
>>>> collapse in a giant epileptic fit. Nature must have found a
>>>> way to detect and prevent it as the organism grows.
>>>> Natural selection would be a poor mechanism because there are
>>>> so many ways to fail and many fewer ways to succeed
>>>> that almost no brains would survive.
>>>>
>>>> So there must be some mechanism that "drives" these self organizing
>>>> networks toward interconnections that do not have uncontrolled feedback.
>>>>
>>>> I've said this before but I suspect _that_ is what nature discovered
>>>> at the Cambrian explosion 540 million years ago.
>>>>
>>> I thought the Cambrian invention was teeth?
>> Eyes developed then too, but so did legs, antenna, all complex life forms.
>> And probably teeth too, which need muscles to work them.
>> And all that requires a complex controller, particularly eyes.
>>
>> In pre-Cambrian the most complex life was things like jellyfish which
>> are multicellular organisms and have nerves that allows them to swim,
>> and a few neurons are specialized to detect light or dark,
>> but no central controller, no complex signal processing or
>> decision making capability.
>>
>> After the Cambrian line are arthropods and all the animals of today.
>> Eyes developed at this time which needs complex signal processing.
>>
>> Clearly something changed at that boundary that allowed the assembly
>> of complex NN to control all of these new functions.
>
> The opinions being expressed here would do well to refer to contemporary research. A "tooth" means something specific in the fossil record. The Cambrian Era was not a "line" or "boundary." (https://www.nature.com/articles/s41559-019-0821-6) The neural tube had already yielded to archencephalon which had yielded to telencephalon as the most complex neural circuitry long before the Cambrian Era started. Tens of millions of years before the Cambrian, dopamine was already generating sophisticated foraging behavior that is today displayed by "microorganisms, insects, mollusks, reptiles, fish, birds, and even human hunter–gatherers" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/).
>
> Please consider reading this paper by an expert in the field: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/ or at least have a good look at Figure 2 from that paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/figure/Fig2/?report=objectonly

Thanks, I just found the Cisek 2019 paper online this morning.
I'll have a look at the other too.

Problem is that when I started searching on this, I came across
a whole bunch of papers that are also on-topic, so it is easy to
get sidetracked. E.g., I started looking at:

[open access]
On the Independent Origins of Complex Brains and Neurons, 2019
https://www.karger.com/Article/FullText/258665

The problem is finding research that deals specifically with the
development of neural interconnect and how it was able to scale out,
as opposed to, say, the evolution of different neurotransmitters.

> Then maybe we should start a separate thread?

Seems on-topic to me (certainly more so than some of the other
threads' discussions of the motivations of deities).

Understanding the origin of the wiring of biological NNs (BNNs)
is appropriate to a discussion of NN accelerators, as we are
endeavoring to improve such simulators.

Hardware architectures aimed at TensorFlow-style dense workloads are
likely not appropriate for recurrent spiking NNs (RSNNs), and some
researchers have been exploring new hardware accelerators in this area.
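
For readers who have not met them: a recurrent spiking network is stateful, and its activity is sparse spike events over time rather than one big dense matrix product, which is why matmul-oriented accelerators fit poorly. A minimal leaky integrate-and-fire sketch in Python/NumPy, with all sizes and constants chosen only for illustration:

import numpy as np

rng = np.random.default_rng(1)
N = 100                          # neurons
DT, TAU = 1e-3, 20e-3            # time step, membrane time constant (s)
V_TH, V_RESET = 1.0, 0.0         # firing threshold and reset potential

W = rng.normal(scale=0.08, size=(N, N))  # recurrent weights; signs can
np.fill_diagonal(W, 0.0)                 # enhance or suppress; no self-loops
v = np.zeros(N)                          # membrane potentials (persistent state)
spikes = np.zeros(N)                     # last step's spike events
total = 0.0

for step in range(1000):
    i_ext = rng.uniform(0.0, 1.5, size=N)   # noisy external drive
    i_rec = W @ spikes                      # feedback from last step's spikes
    v += (DT / TAU) * (-v + i_ext + i_rec)  # leaky integration toward input
    spikes = (v >= V_TH).astype(float)      # fire where threshold is crossed
    v[spikes > 0] = V_RESET                 # reset the neurons that fired
    total += spikes.sum()

print("mean firing rate:", total / (N * 1000 * DT), "Hz")

Note that the information lives in when each neuron crosses threshold (the timing-and-phase dimension mentioned upthread), and the persistent membrane state is exactly what dense feed-forward accelerators have no good home for.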

Re: Neural Network Accelerators

<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>

 by: MitchAlsup - Sun, 14 Nov 2021 20:12 UTC

On Sunday, November 14, 2021 at 10:46:40 AM UTC-6, EricP wrote:
> Scott Smader wrote:
> > On Sunday, November 14, 2021 at 6:18:51 AM UTC-8, EricP wrote:
> >> Ivan Godard wrote:
> >>> On 11/13/2021 10:54 AM, EricP wrote:
> >>>> EricP wrote:
> >>>>> Real NN have feedback, called recurrent, and real neurons are spiky
> >>>>> which introduces signal timing and phase delays as attributes.
> >>>>>
> >>>>> In particular signal phase timing adds a whole new dimension for
> >>>>> information storage. Feedback allows resonances to enhance or
> >>>>> suppress different combinations of inputs.
> >>>>>
> >>>>> That is the basis for my suspicion that we will eventually find
> >>>>> that brains, all brains, are akin to _holograms_.
> >>>>>
> >>>>> Clusters of neurons can build holographic modules and form
> >>>>> holographic modules of modules.
> >>>> One mystery about this is why doesn't every brain immediately
> >>>> collapse in a giant epileptic fit. Nature must have found a
> >>>> way to detect and prevent it as the organism grows.
> >>>> Natural selection would be a poor mechanism because there are
> >>>> so many ways to fail and many fewer ways to succeed
> >>>> that almost no brains would survive.
> >>>>
> >>>> So there must be some mechanism that "drives" these self organizing
> >>>> networks toward interconnections that do not have uncontrolled feedback.
> >>>>
> >>>> I've said this before but I suspect _that_ is what nature discovered
> >>>> at the Cambrian explosion 540 million years ago.
> >>>>
> >>> I thought the Cambrian invention was teeth?
> >> Eyes developed then too, but so did legs, antenna, all complex life forms.
> >> And probably teeth too, which need muscles to work them.
> >> And all that requires a complex controller, particularly eyes.
> >>
> >> In pre-Cambrian the most complex life was things like jellyfish which
> >> are multicellular organisms and have nerves that allows them to swim,
> >> and a few neurons are specialized to detect light or dark,
> >> but no central controller, no complex signal processing or
> >> decision making capability.
> >>
> >> After the Cambrian line are arthropods and all the animals of today.
> >> Eyes developed at this time which needs complex signal processing.
> >>
> >> Clearly something changed at that boundary that allowed the assembly
> >> of complex NN to control all of these new functions.
> >
> > The opinions being expressed here would do well to refer to contemporary research. A "tooth" means something specific in the fossil record. The Cambrian Era was not a "line" or "boundary." (https://www.nature.com/articles/s41559-019-0821-6) The neural tube had already yielded to archencephalon which had yielded to telencephalon as the most complex neural circuitry long before the Cambrian Era started. Tens of millions of years before the Cambrian, dopamine was already generating sophisticated foraging behavior that is today displayed by "microorganisms, insects, mollusks, reptiles, fish, birds, and even human hunter–gatherers" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/).
> >
> > Please consider reading this paper by an expert in the field: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/ or at least have a good look at Figure 2 from that paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/figure/Fig2/?report=objectonly
> Thanks, I just found the Cisek 2019 paper online this morning.
> I'll have a look at the other too.
>
> Problem is that when I started searching on this I came across
> a whole bunch of papers that are also on-topic so it is easy to
> get side tracked. E.G. I started looking at
>
> [open access]
> On the Independent Origins of Complex Brains and Neurons, 2019
> https://www.karger.com/Article/FullText/258665
>
> The problem is finding research that deals specifically with the
> development neural interconnect and how it was able to scale out,
> as opposed to say the evolution of different neural transmitters.
> > Then maybe we should start a separate thread?
> Seems on-topic to me (certainly more so than some of the other
> threads discussions on the motivations of deities).
>
> Understanding the origin of the wiring of biological NN (BNN)
> is appropriate to discussion of NN Accelerators as we are
> endeavoring to improve such simulators.
<
It is pretty clear that NNs are "pattern matchers" where one does not
necessarily know the pattern a priori.
<
The still-open question is: what kind of circuitry/algorithm is
appropriate for matching patterns one has never even dreamed up?
<
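One engineering answer is to invert the problem: model the familiar, and call anything the model cannot explain "novel". A minimal sketch of novelty detection by PCA reconstruction error in Python/NumPy (the data, sizes, and example patterns are made up for illustration):

import numpy as np

rng = np.random.default_rng(2)

# "Familiar" patterns live near a 2-D subspace of a 10-D space.
basis = rng.normal(size=(2, 10))
familiar = (rng.normal(size=(500, 2)) @ basis
            + 0.05 * rng.normal(size=(500, 10)))

mu = familiar.mean(axis=0)
_, _, Vt = np.linalg.svd(familiar - mu, full_matrices=False)
P = Vt[:2]                                # top-2 principal directions

def novelty(x):
    # Reconstruction error: small for anything resembling what was
    # seen before, large for patterns never seen (or dreamed up).
    recon = (x - mu) @ P.T @ P + mu
    return np.linalg.norm(x - recon)

seen = rng.normal(size=2) @ basis         # a new sample of the familiar kind
unseen = rng.normal(size=10)              # an arbitrary novel pattern
print("familiar-like:", novelty(seen))    # expect: small
print("novel:        ", novelty(unseen))  # expect: large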
>
> Hardware architectures like Tensorflow are likely not appropriate
> for recurrent spiking NN (RSNN) and some researchers have been
> exploring new hardware accelerators in this area.

Re: Neural Network Accelerators

<smrscg$1i8v$1@gioia.aioe.org>

 by: Terje Mathisen - Sun, 14 Nov 2021 20:42 UTC

MitchAlsup wrote:
> On Sunday, November 14, 2021 at 10:46:40 AM UTC-6, EricP wrote:
>> Scott Smader wrote:
>>> On Sunday, November 14, 2021 at 6:18:51 AM UTC-8, EricP wrote:
>>>> Ivan Godard wrote:
>>>>> On 11/13/2021 10:54 AM, EricP wrote:
>>>>>> EricP wrote:
>>>>>>> Real NN have feedback, called recurrent, and real neurons are spiky
>>>>>>> which introduces signal timing and phase delays as attributes.
>>>>>>>
>>>>>>> In particular signal phase timing adds a whole new dimension for
>>>>>>> information storage. Feedback allows resonances to enhance or
>>>>>>> suppress different combinations of inputs.
>>>>>>>
>>>>>>> That is the basis for my suspicion that we will eventually find
>>>>>>> that brains, all brains, are akin to _holograms_.
>>>>>>>
>>>>>>> Clusters of neurons can build holographic modules and form
>>>>>>> holographic modules of modules.
>>>>>> One mystery about this is why doesn't every brain immediately
>>>>>> collapse in a giant epileptic fit. Nature must have found a
>>>>>> way to detect and prevent it as the organism grows.
>>>>>> Natural selection would be a poor mechanism because there are
>>>>>> so many ways to fail and many fewer ways to succeed
>>>>>> that almost no brains would survive.
>>>>>>
>>>>>> So there must be some mechanism that "drives" these self organizing
>>>>>> networks toward interconnections that do not have uncontrolled feedback.
>>>>>>
>>>>>> I've said this before but I suspect _that_ is what nature discovered
>>>>>> at the Cambrian explosion 540 million years ago.
>>>>>>
>>>>> I thought the Cambrian invention was teeth?
>>>> Eyes developed then too, but so did legs, antenna, all complex life forms.
>>>> And probably teeth too, which need muscles to work them.
>>>> And all that requires a complex controller, particularly eyes.
>>>>
>>>> In pre-Cambrian the most complex life was things like jellyfish which
>>>> are multicellular organisms and have nerves that allows them to swim,
>>>> and a few neurons are specialized to detect light or dark,
>>>> but no central controller, no complex signal processing or
>>>> decision making capability.
>>>>
>>>> After the Cambrian line are arthropods and all the animals of today.
>>>> Eyes developed at this time which needs complex signal processing.
>>>>
>>>> Clearly something changed at that boundary that allowed the assembly
>>>> of complex NN to control all of these new functions.
>>>
>>> The opinions being expressed here would do well to refer to contemporary research. A "tooth" means something specific in the fossil record. The Cambrian Era was not a "line" or "boundary." (https://www.nature.com/articles/s41559-019-0821-6) The neural tube had already yielded to archencephalon which had yielded to telencephalon as the most complex neural circuitry long before the Cambrian Era started. Tens of millions of years before the Cambrian, dopamine was already generating sophisticated foraging behavior that is today displayed by "microorganisms, insects, mollusks, reptiles, fish, birds, and even human hunter–gatherers" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/).
>>>
>>> Please consider reading this paper by an expert in the field: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/ or at least have a good look at Figure 2 from that paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/figure/Fig2/?report=objectonly
>> Thanks, I just found the Cisek 2019 paper online this morning.
>> I'll have a look at the other too.
>>
>> Problem is that when I started searching on this I came across
>> a whole bunch of papers that are also on-topic so it is easy to
>> get side tracked. E.G. I started looking at
>>
>> [open access]
>> On the Independent Origins of Complex Brains and Neurons, 2019
>> https://www.karger.com/Article/FullText/258665
>>
>> The problem is finding research that deals specifically with the
>> development neural interconnect and how it was able to scale out,
>> as opposed to say the evolution of different neural transmitters.
>>> Then maybe we should start a separate thread?
>> Seems on-topic to me (certainly more so than some of the other
>> threads discussions on the motivations of deities).
>>
>> Understanding the origin of the wiring of biological NN (BNN)
>> is appropriate to discussion of NN Accelerators as we are
>> endeavoring to improve such simulators.
> <
> It is pretty clear that NNs are "pattern matchers" where one does not
> necessarily know the pattern a-priori.
> <
> The still open question is what kind of circuitry/algorithm is appropriate
> to match the patterns one has never even dreamed up ??

That is the pattern of "I have never seen this pattern before! I wonder
why?", which is the basis for most new research and inventions, right?

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
