Rocksolid Light



devel / comp.arch / Re: Neural Network Accelerators

Subject  Author
* Neural Network Accelerators  robf...@gmail.com
`* Re: Neural Network Accelerators  Stephen Fuld
 +- Re: Neural Network Accelerators  JohnG
 `* Re: Neural Network Accelerators  Terje Mathisen
  `* Re: Neural Network Accelerators  JimBrakefield
   `* Re: Neural Network Accelerators  MitchAlsup
    +* Re: Neural Network Accelerators  EricP
    |+* Re: Neural Network Accelerators  EricP
    ||`* Re: Neural Network Accelerators  Ivan Godard
    || +* Re: Neural Network Accelerators  Scott Smader
    || |`* Re: Neural Network Accelerators  Ivan Godard
    || | +- Re: Neural Network Accelerators  Scott Smader
    || | `* Re: Neural Network Accelerators  MitchAlsup
    || |  `- Re: Neural Network Accelerators  Ivan Godard
    || `* Re: Neural Network Accelerators  EricP
    ||  `* Re: Neural Network Accelerators  Scott Smader
    ||   `* Re: Neural Network Accelerators  EricP
    ||    +* Re: Neural Network Accelerators  MitchAlsup
    ||    |+* Re: Neural Network Accelerators  Terje Mathisen
    ||    ||`* Re: Neural Network Accelerators  Thomas Koenig
    ||    || +- Re: Neural Network Accelerators  MitchAlsup
    ||    || +- Re: Neural Network Accelerators  Stefan Monnier
    ||    || `- Re: Neural Network Accelerators  Terje Mathisen
    ||    |+- Re: Neural Network Accelerators  Stephen Fuld
    ||    |`* Re: Neural Network Accelerators  EricP
    ||    | `- Re: Neural Network Accelerators  BGB
    ||    `* Re: Neural Network Accelerators  EricP
    ||     +* Re: Neural Network Accelerators  Scott Smader
    ||     |`* Re: Neural Network Accelerators  EricP
    ||     | +* Re: Neural Network Accelerators  Stephen Fuld
    ||     | |`* Re: Neural Network Accelerators  EricP
    ||     | | +- Re: Neural Network Accelerators  Scott Smader
    ||     | | +- Re: Neural Network Accelerators  Scott Smader
    ||     | | `* Re: Neural Network Accelerators  Stephen Fuld
    ||     | |  `* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   +* Re: Neural Network Accelerators  EricP
    ||     | |   |`* Re: Neural Network Accelerators  EricP
    ||     | |   | +* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |+* Re: Neural Network Accelerators  Terje Mathisen
    ||     | |   | ||`- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |`* Re: Neural Network Accelerators  EricP
    ||     | |   | | `* Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |  `* Re: Neural Network Accelerators  robf...@gmail.com
    ||     | |   | |   +* Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   |`* Re: Neural Network Accelerators  Ivan Godard
    ||     | |   | |   | +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   | +- Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   | +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   | `- Re: Neural Network Accelerators  MitchAlsup
    ||     | |   | |   +- Re: Neural Network Accelerators  JimBrakefield
    ||     | |   | |   `- Re: Neural Network Accelerators  Sean O'Connor
    ||     | |   | `- Re: Neural Network Accelerators  Thomas Koenig
    ||     | |   `* Re: Neural Network Accelerators  Thomas Koenig
    ||     | |    +- Re: Neural Network Accelerators  MitchAlsup
    ||     | |    `- Re: Neural Network Accelerators  JimBrakefield
    ||     | `* Re: Neural Network Accelerators  Scott Smader
    ||     |  `* Re: Neural Network Accelerators  EricP
    ||     |   `* Re: Neural Network Accelerators  Yoga Man
    ||     |    `- Re: Neural Network Accelerators  Scott Smader
    ||     `- Re: Neural Network Accelerators  Ivan Godard
    |+- Re: Neural Network Accelerators  EricP
    |+- Re: Neural Network Accelerators  JimBrakefield
    |`- Re: Neural Network Accelerators  Stephen Fuld
    `* Re: Neural Network Accelerators  BGB
     `* Re: Neural Network Accelerators  MitchAlsup
      `- Re: Neural Network Accelerators  BGB

Re: Neural Network Accelerators

<smru5b$o27$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=22002&group=comp.arch#22002
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Sun, 14 Nov 2021 13:13:13 -0800
Organization: A noiseless patient Spider
Lines: 23
Message-ID: <smru5b$o27$1@dont-email.me>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 14 Nov 2021 21:13:15 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="e7de95c535b20755256ae938b7d57e2d";
logging-data="24647"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/ISwOzwTJxd7k1cJhwccP4Q8HLQfNTY2M="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.3.0
Cancel-Lock: sha1:VI1JJ+1KAvbe7gSQ/BFUVC+mrI0=
In-Reply-To: <94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
Content-Language: en-US
 by: Stephen Fuld - Sun, 14 Nov 2021 21:13 UTC

On 11/14/2021 12:12 PM, MitchAlsup wrote:

snip

> It is pretty clear that NNs are "pattern matchers" where one does not
> necessarily know the pattern a-priori.
> <
> The still open question is what kind of circuitry/algorithm is appropriate
> to match the patterns one has never even dreamed up ??

There are algorithms that extract some pattern from whatever data is
presented to them. They generally fall into the area of "unsupervised
learning". Whether the pattern extracted is of value is, of course, a
totally different question. :-)
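A minimal sketch of one such algorithm, k-means clustering, in Python
(the toy data and parameter choices below are illustrative only):

import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Unsupervised pattern extraction: find k cluster centers in data x."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of the samples assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

# two "patterns" the algorithm was never told about
data = np.vstack([np.random.randn(50, 2) + 5, np.random.randn(50, 2) - 5])
centers, labels = kmeans(data, k=2)
print(centers)  # ends up near (+5,+5) and (-5,-5); whether that is useful
                # is, as noted above, a totally different question

No labels and no a-priori pattern; it just pulls structure out of
whatever data it is handed.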

But these are probably not appropriate for mass implementation in
circuitry, as they aren't as widely used as matching a previously
known pattern.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<aCfkJ.35506$SW5.13028@fx45.iad>

https://www.novabbs.com/devel/article-flat.php?id=22003&group=comp.arch#22003
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!ecngs!feeder2.ecngs.de!178.20.174.213.MISMATCH!feeder1.feed.usenet.farm!feed.usenet.farm!peer02.ams4!peer.am4.highwinds-media.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx45.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
In-Reply-To: <N3bkJ.63875$Wkjc.36258@fx35.iad>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 112
Message-ID: <aCfkJ.35506$SW5.13028@fx45.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Sun, 14 Nov 2021 21:56:22 UTC
Date: Sun, 14 Nov 2021 16:55:06 -0500
X-Received-Bytes: 6747
 by: EricP - Sun, 14 Nov 2021 21:55 UTC

EricP wrote:
> Scott Smader wrote:
>> On Sunday, November 14, 2021 at 6:18:51 AM UTC-8, EricP wrote:
>>> Ivan Godard wrote:
>>>> On 11/13/2021 10:54 AM, EricP wrote:
>>>>> EricP wrote:
>>>>>> Real NN have feedback, called recurrent, and real neurons are
>>>>>> spiky which introduces signal timing and phase delays as attributes.
>>>>>> In particular signal phase timing adds a whole new dimension for
>>>>>> information storage. Feedback allows resonances to enhance or
>>>>>> suppress different combinations of inputs.
>>>>>> That is the basis for my suspicion that we will eventually find
>>>>>> that brains, all brains, are akin to _holograms_.
>>>>>> Clusters of neurons can build holographic modules and form
>>>>>> holographic modules of modules.
>>>>> One mystery about this is why doesn't every brain immediately
>>>>> collapse in a giant epileptic fit. Nature must have found a way to
>>>>> detect and prevent it as the organism grows. Natural selection
>>>>> would be a poor mechanism because there are so many ways to fail
>>>>> and many fewer ways to succeed that almost no brains would survive.
>>>>> So there must be some mechanism that "drives" these self organizing
>>>>> networks toward interconnections that do not have uncontrolled
>>>>> feedback.
>>>>> I've said this before but I suspect _that_ is what nature
>>>>> discovered at the Cambrian explosion 540 million years ago.
>>>> I thought the Cambrian invention was teeth?
>>> Eyes developed then too, but so did legs, antenna, all complex life
>>> forms.. And probably teeth too, which need muscles to work them. And
>>> all that requires a complex controller, particularly eyes.
>>> In pre-Cambrian the most complex life was things like jellyfish which
>>> are multicellular organisms and have nerves that allows them to swim,
>>> and a few neurons are specialized to detect light or dark, but no
>>> central controller, no complex signal processing or decision making
>>> capability.
>>> After the Cambrian line are arthropods and all the animals of today.
>>> Eyes developed at this time which needs complex signal processing.
>>> Clearly something changed at that boundary that allowed the assembly
>>> of complex NN to control all of these new functions.
>>
>> The opinions being expressed here would do well to refer to
>> contemporary research. A "tooth" means something specific in the
>> fossil record. The Cambrian Era was not a "line" or "boundary."
>> (https://www.nature.com/articles/s41559-019-0821-6) The neural tube
>> had already yielded to archencephalon which had yielded to
>> telencephalon as the most complex neural circuitry long before the
>> Cambrian Era started. Tens of millions of years before the Cambrian,
>> dopamine was already generating sophisticated foraging behavior that
>> is today displayed by "microorganisms, insects, mollusks, reptiles,
>> fish, birds, and even human hunter–gatherers"
>> (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/).
>>
>> Please consider reading this paper by an expert in the field:
>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/ or at least have
>> a good look at Figure 2 from that paper:
>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6848052/figure/Fig2/?report=objectonly
>>
>
> Thanks, I just found the Cisek 2019 paper online this morning.
> I'll have a look at the other too.

OK, I've read them quickly.
Yes, there are some burrowing animals with a central nervous system
in the Ediacaran, but both papers acknowledge the development of
limbs and vision across the Ediacaran–Cambrian boundary.

Burrowing doesn't strike me as complicated behavior;
it is perhaps just a variation on the jellyfish nerve system
and doesn't require any signal processing capability.

And jellyfish today have single-neuron light sensors.
Again, it doesn't require any signal processing capability to move
towards light, move away from dark, or eat whatever you bump into.

I saw nothing in those to contradict the idea that something important
happened to allow the construction of far more complex NNs around the
Ediacaran–Cambrian boundary as an enabling technology for all the
other changes.

> Problem is that when I started searching on this I came across
> a whole bunch of papers that are also on-topic so it is easy to
> get side tracked. E.G. I started looking at
>
> [open access]
> On the Independent Origins of Complex Brains and Neurons, 2019
> https://www.karger.com/Article/FullText/258665
>
> The problem is finding research that deals specifically with the
> development neural interconnect and how it was able to scale out,
> as opposed to say the evolution of different neural transmitters.

I came across this paper later.
It is more on target for what I'm thinking of as it addresses
the change from simple nerve nets to centralized brains.

[open access]
Of Circuits and Brains: The Origin and Diversification
of Neural Architectures, 2020
https://www.frontiersin.org/articles/10.3389/fevo.2020.00082/full

In the above paper's section "How Do These Neural Circuits Evolve?"
it references the one below, which I haven't read yet but which also
looks relevant (see what I mean about getting side-tracked), as it
has a section titled "2.1. Evolution of connectivity":

[open access]
Developmental and genetic mechanisms of neural circuit evolution, 2017
https://www.sciencedirect.com/science/article/pii/S0012160617301495

But I don't see anything yet in either one that addresses how
they construct NNs correctly (i.e. non-epileptic).

Re: Neural Network Accelerators

<sms2jf$1dg$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=22004&group=comp.arch#22004
Path: i2pn2.org!i2pn.org!usenet.goja.nl.eu.org!3.eu.feeder.erje.net!feeder.erje.net!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-10ca-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de!not-for-mail
From: tkoe...@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Sun, 14 Nov 2021 22:29:03 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <sms2jf$1dg$1@newsreader4.netcologne.de>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
<smrscg$1i8v$1@gioia.aioe.org>
Injection-Date: Sun, 14 Nov 2021 22:29:03 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-10ca-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de:2001:4dd7:10ca:0:7285:c2ff:fe6c:992d";
logging-data="1456"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Sun, 14 Nov 2021 22:29 UTC

Terje Mathisen <terje.mathisen@tmsw.no> schrieb:

> That is the pattern of "I have never seen this pattern before! I wonder
> why?" which is the basis for most new research & inventions, right?

My personal experience is that a lot of inventions are due to
transfer: recognizing a pattern in a different set of circumstances
from the one in which you originally saw it, and applying it.

I've read recently that there is a high correlation between the
inventiveness of people and their tendency to make bad jokes.
The latter requires taking things out of their usual context, just
as invention does.

Re: Neural Network Accelerators

<ea369d71-2980-4a2a-9c16-b53e0a67ad06n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22005&group=comp.arch#22005
X-Received: by 2002:a05:620a:4551:: with SMTP id u17mr9959501qkp.351.1636933146926;
Sun, 14 Nov 2021 15:39:06 -0800 (PST)
X-Received: by 2002:a9d:6358:: with SMTP id y24mr27611147otk.85.1636933146698;
Sun, 14 Nov 2021 15:39:06 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer01.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Sun, 14 Nov 2021 15:39:06 -0800 (PST)
In-Reply-To: <sms2jf$1dg$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=104.59.204.55; posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 104.59.204.55
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com> <smrscg$1i8v$1@gioia.aioe.org>
<sms2jf$1dg$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <ea369d71-2980-4a2a-9c16-b53e0a67ad06n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: MitchAl...@aol.com (MitchAlsup)
Injection-Date: Sun, 14 Nov 2021 23:39:06 +0000
Content-Type: text/plain; charset="UTF-8"
X-Received-Bytes: 2351
 by: MitchAlsup - Sun, 14 Nov 2021 23:39 UTC

On Sunday, November 14, 2021 at 4:29:05 PM UTC-6, Thomas Koenig wrote:
> Terje Mathisen <terje.m...@tmsw.no> schrieb:
> > That is the pattern of "I have never seen this pattern before! I wonder
> > why?" which is the basis for most new research & inventions, right?
<
> My personal experience is that a lot of inventions is due to
> transfer, recognizing a pattern in another set of circumstances
> than the one you originally saw them, and applying it.
>
> I've read recently that there is a high correlation between the
> inventiveness of people and their tendency to make bad jokes.
> The latter requires taking things out of their usual context, just
> as invention des.
<
Guilty as charged............

Re: Neural Network Accelerators

<779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22006&group=comp.arch#22006
X-Received: by 2002:a37:8ec6:: with SMTP id q189mr27233754qkd.145.1636942856451;
Sun, 14 Nov 2021 18:20:56 -0800 (PST)
X-Received: by 2002:a9d:5c18:: with SMTP id o24mr27879452otk.243.1636942856184;
Sun, 14 Nov 2021 18:20:56 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Sun, 14 Nov 2021 18:20:56 -0800 (PST)
In-Reply-To: <aCfkJ.35506$SW5.13028@fx45.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=Gm3E_woAAACkDRJFCvfChVjhgA24PTsb
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@yahoo.com (Scott Smader)
Injection-Date: Mon, 15 Nov 2021 02:20:56 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 90
 by: Scott Smader - Mon, 15 Nov 2021 02:20 UTC

On Sunday, November 14, 2021 at 1:56:27 PM UTC-8, EricP wrote:
> EricP wrote:

> Ok I've read them quickly.
> Yes there are some burrowing animals with a central nervous system
> in the Ediacaran but both papers acknowledge the development of
> limbs and vision across the Ediacaran–Cambrian boundary.
>
> Burrowing doesn't strike me as complicated behavior,
> perhaps just a variation on the jellyfish nerve system
> and doesn't require any signal processing capability.
>
> And jellyfish today have single neuron light sensors.
> Again it doesn't require any signal processing capability to move
> towards light, move away from dark, eat whatever you bump into.
>
> I saw nothing in those to contradict the idea that something important
> happened to allow the construction far more complex NN around the
> Ediacaran–Cambrian boundary as an enabling technology to all the
> other changes.

> > Problem is that when I started searching on this I came across
> > a whole bunch of papers that are also on-topic so it is easy to
> > get side tracked. E.G. I started looking at
> >
> > [open access]
> > On the Independent Origins of Complex Brains and Neurons, 2019
> > https://www.karger.com/Article/FullText/258665
> >
> > The problem is finding research that deals specifically with the
> > development neural interconnect and how it was able to scale out,
> > as opposed to say the evolution of different neural transmitters.
> I came across this paper later.
> It is more on target for what I'm thinking of as it addresses
> the change from simple nerve nets to centralized brains.
>
> [open access]
> Of Circuits and Brains: The Origin and Diversification
> of Neural Architectures, 2020
> https://www.frontiersin.org/articles/10.3389/fevo.2020.00082/full
>
> In the above papers' section "How Do These Neural Circuits Evolve?"
> it references to the one below which I haven't read yet but also
> looks relevant (see what I mean about getting side tracked) as it
> has a section titled "2.1. Evolution of connectivity":
>
> [open access]
> Developmental and genetic mechanisms of neural circuit evolution, 2017
> https://www.sciencedirect.com/science/article/pii/S0012160617301495
>
> But I don't see anything yet in either one that addresses how
> they construct NN correctly (non-epileptic).

Thank you for those references. I look forward to reading them.

I apologize for leading you to believe that the Cisek and Wood et al. papers denied increasing neural complexity during the Cambrian Era or explained how epileptic synchrony was prevented. I sought simply to point out that sophisticated behavior and teeth did not "explode" at a single moment in time.

At the risk of burdening you with more possibly unfruitful reading, may I recommend the work of Ramin Hasani at MIT with others from various institutions? His group has built robotic control systems based ultimately on the careful measurements they had made previously on the 302 neurons and 5000 synapses of C. elegans. I don't think they have published specifically about the epileptic problem, but their work seems like it might at least be another interesting tangent for you. They do show that the hidden state of every neuron in one of their networks is bounded, but I doubt that it directly relates to epileptic avoidance.

Liquid Time Constant Networks https://arxiv.org/abs/2006.04439
This paper describes their LTC networks in some detail and compares performance against LSTM, CT-RNN, Neural ODE and CT-GRU.

Hasani also gave a talk in March about Liquid Time Constant Networks: https://simons.berkeley.edu/talks/tbd-296
The images and sequences shown starting at about 31 minutes are quite impressive, especially considering the much smaller number of neurons required and the robustness against noise. He also mentions cascading LTCs into what he calls Neural Circuit Policies, which he then shows are Dynamic Causal Models.

I have only skimmed most of his publications: http://www.raminhasani.com/publications/
which document his journey from worms to LTCNs, but perhaps they may help you sort out whether the epileptic-avoidance solution arose earlier or later than the C. elegans brain!

Best wishes!

Re: Neural Network Accelerators

<jwvpmr22ktj.fsf-monnier+comp.arch@gnu.org>

https://www.novabbs.com/devel/article-flat.php?id=22007&group=comp.arch#22007
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: monn...@iro.umontreal.ca (Stefan Monnier)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Sun, 14 Nov 2021 21:52:00 -0500
Organization: A noiseless patient Spider
Lines: 9
Message-ID: <jwvpmr22ktj.fsf-monnier+comp.arch@gnu.org>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
<smrscg$1i8v$1@gioia.aioe.org>
<sms2jf$1dg$1@newsreader4.netcologne.de>
Mime-Version: 1.0
Content-Type: text/plain
Injection-Info: reader02.eternal-september.org; posting-host="959b153dea61a5f31c9286e4da195f3c";
logging-data="7672"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18cbjOji9mkUGpgb25Lpd4i"
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.0.50 (gnu/linux)
Cancel-Lock: sha1:3hPHa1nwFg6CmFFKgticFgTwRA0=
sha1:KralFipRZVS+rkUW+gNq+i0o9Hw=
 by: Stefan Monnier - Mon, 15 Nov 2021 02:52 UTC

> I've read recently that there is a high correlation between the
> inventiveness of people and their tendency to make bad jokes.

Sadly, it's only a correlation.
I think I'm pretty good(?) at making bad jokes, but it doesn't seem to
carry over to inventiveness.

Stefan

Re: Neural Network Accelerators

<smsi9f$8uq$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=22008&group=comp.arch#22008
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: iva...@millcomputing.com (Ivan Godard)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Sun, 14 Nov 2021 18:56:46 -0800
Organization: A noiseless patient Spider
Lines: 13
Message-ID: <smsi9f$8uq$1@dont-email.me>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 15 Nov 2021 02:56:47 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="589c4e4948368e7a054dd2209e45fa25";
logging-data="9178"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19M0dm8dlsFfEr6qIZf95x9"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.3.0
Cancel-Lock: sha1:1DHBbMDvx4jRF9XKGD1JY/9SGqU=
In-Reply-To: <aCfkJ.35506$SW5.13028@fx45.iad>
Content-Language: en-US
 by: Ivan Godard - Mon, 15 Nov 2021 02:56 UTC

On 11/14/2021 1:55 PM, EricP wrote:
> EricP wrote:

<snip>

> But I don't see anything yet in either one that addresses how
> they construct NN correctly (non-epileptic).

A feedback system can go into oscillation (i.e. epilepsy) and get eaten.
Or it can simply damp down and get eaten. Over evolutionary time it will
hunt to the chaotic boundary; that's us.
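A toy illustration of that hunt, using the logistic map rather than any
neuron model (the gain values are just the textbook ones): the same
feedback update damps out, settles, oscillates, or goes chaotic purely
as a function of its gain.

def trajectory(r, x0=0.2, steps=50):
    """Iterate the logistic map x <- r*x*(1-x) and return the last few states."""
    x = x0
    xs = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs[-4:]

for r in (0.8, 2.5, 3.2, 3.9):
    print(r, [round(v, 3) for v in trajectory(r)])
# r=0.8 : decays toward 0           (damps down and "gets eaten")
# r=2.5 : settles to a fixed point
# r=3.2 : stable oscillation between two values
# r=3.9 : chaotic, never settles    (the boundary being hunted toward)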

Re: Neural Network Accelerators

<smtnaq$f8k$1@gioia.aioe.org>

https://www.novabbs.com/devel/article-flat.php?id=22009&group=comp.arch#22009
Path: i2pn2.org!i2pn.org!aioe.org!ppYixYMWAWh/woI8emJOIQ.user.46.165.242.91.POSTED!not-for-mail
From: terje.ma...@tmsw.no (Terje Mathisen)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Mon, 15 Nov 2021 14:28:59 +0100
Organization: Aioe.org NNTP Server
Message-ID: <smtnaq$f8k$1@gioia.aioe.org>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
<smrscg$1i8v$1@gioia.aioe.org> <sms2jf$1dg$1@newsreader4.netcologne.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Info: gioia.aioe.org; logging-data="15636"; posting-host="ppYixYMWAWh/woI8emJOIQ.user.gioia.aioe.org"; mail-complaints-to="abuse@aioe.org";
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101
Firefox/60.0 SeaMonkey/2.53.9.1
X-Notice: Filtered by postfilter v. 0.9.2
 by: Terje Mathisen - Mon, 15 Nov 2021 13:28 UTC

Thomas Koenig wrote:
> Terje Mathisen <terje.mathisen@tmsw.no> schrieb:
>
>> That is the pattern of "I have never seen this pattern before! I wonder
>> why?" which is the basis for most new research & inventions, right?
>
> My personal experience is that a lot of inventions is due to
> transfer, recognizing a pattern in another set of circumstances
> than the one you originally saw them, and applying it.
>
> I've read recently that there is a high correlation between the
> inventiveness of people and their tendency to make bad jokes.
> The latter requires taking things out of their usual context, just
> as invention des.

I thought that was pretty well established?

Punning is a well-known form of this: some people supposedly hate puns;
I tend to love them, the more groan-inducing the better.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Re: Neural Network Accelerators

<HLvkJ.66682$Wkjc.57313@fx35.iad>

https://www.novabbs.com/devel/article-flat.php?id=22010&group=comp.arch#22010
Path: i2pn2.org!i2pn.org!news.uzoreto.com!newsreader4.netcologne.de!news.netcologne.de!peer01.ams1!peer.ams1.xlned.com!news.xlned.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx35.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad> <94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
In-Reply-To: <94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 24
Message-ID: <HLvkJ.66682$Wkjc.57313@fx35.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Mon, 15 Nov 2021 16:18:47 UTC
Date: Mon, 15 Nov 2021 11:15:35 -0500
X-Received-Bytes: 2113
 by: EricP - Mon, 15 Nov 2021 16:15 UTC

MitchAlsup wrote:
> On Sunday, November 14, 2021 at 10:46:40 AM UTC-6, EricP wrote:
>>
>> Understanding the origin of the wiring of biological NN (BNN)
>> is appropriate to discussion of NN Accelerators as we are
>> endeavoring to improve such simulators.
> <
> It is pretty clear that NNs are "pattern matchers" where one does not
> necessarily know the pattern a-priori.
> <
> The still open question is what kind of circuitry/algorithm is appropriate
> to match the patterns one has never even dreamed up ??

The artificial convolutional NNs are basically fancy curve-fit algorithms
that adjust a polynomial with tens or hundreds of thousands of terms
to some number of inputs after millions of examples.

Biological NNs perform associative learning after just a few examples
with just a few neurons.

Both are suitable for sorting fish, but only one
can fit inside and control a fruit fly.
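A minimal sketch of the "few examples, few neurons" side of that contrast:
a one-shot Hebbian associative memory in Python (patterns and sizes are
made-up toy values). Each stimulus/response pair is stored from a single
presentation, with no gradient descent and no millions of examples.

import numpy as np

def hebbian_store(pairs, n_in, n_out):
    """Build one weight matrix from a single pass over the training pairs."""
    w = np.zeros((n_out, n_in))
    for stimulus, response in pairs:      # one presentation per pattern
        w += np.outer(response, stimulus)
    return w

def recall(w, stimulus):
    return np.sign(w @ stimulus)          # threshold "neurons"

# two +/-1 coded patterns (hypothetical toy data)
a = np.array([ 1, -1,  1, -1]); resp_a = np.array([ 1, -1])
b = np.array([-1, -1,  1,  1]); resp_b = np.array([-1,  1])
w = hebbian_store([(a, resp_a), (b, resp_b)], n_in=4, n_out=2)
print(recall(w, a), recall(w, b))         # recovers both stored responses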

Re: Neural Network Accelerators

<ILvkJ.66683$Wkjc.15861@fx35.iad>

https://www.novabbs.com/devel/article-flat.php?id=22011&group=comp.arch#22011
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!feeder1.feed.usenet.farm!feed.usenet.farm!news-out.netnews.com!news.alt.net!fdc2.netnews.com!peer03.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx35.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
In-Reply-To: <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 113
Message-ID: <ILvkJ.66683$Wkjc.15861@fx35.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Mon, 15 Nov 2021 16:18:48 UTC
Date: Mon, 15 Nov 2021 11:18:02 -0500
X-Received-Bytes: 7514
 by: EricP - Mon, 15 Nov 2021 16:18 UTC

Scott Smader wrote:
> On Sunday, November 14, 2021 at 1:56:27 PM UTC-8, EricP wrote:
>> EricP wrote:
>
>> Ok I've read them quickly.
>> Yes there are some burrowing animals with a central nervous system
>> in the Ediacaran but both papers acknowledge the development of
>> limbs and vision across the Ediacaran–Cambrian boundary.
>>
>> Burrowing doesn't strike me as complicated behavior,
>> perhaps just a variation on the jellyfish nerve system
>> and doesn't require any signal processing capability.
>>
>> And jellyfish today have single neuron light sensors.
>> Again it doesn't require any signal processing capability to move
>> towards light, move away from dark, eat whatever you bump into.
>>
>> I saw nothing in those to contradict the idea that something important
>> happened to allow the construction far more complex NN around the
>> Ediacaran–Cambrian boundary as an enabling technology to all the
>> other changes.
>
>
>>> Problem is that when I started searching on this I came across
>>> a whole bunch of papers that are also on-topic so it is easy to
>>> get side tracked. E.G. I started looking at
>>>
>>> [open access]
>>> On the Independent Origins of Complex Brains and Neurons, 2019
>>> https://www.karger.com/Article/FullText/258665
>>>
>>> The problem is finding research that deals specifically with the
>>> development neural interconnect and how it was able to scale out,
>>> as opposed to say the evolution of different neural transmitters.
>> I came across this paper later.
>> It is more on target for what I'm thinking of as it addresses
>> the change from simple nerve nets to centralized brains.
>>
>> [open access]
>> Of Circuits and Brains: The Origin and Diversification
>> of Neural Architectures, 2020
>> https://www.frontiersin.org/articles/10.3389/fevo.2020.00082/full
>>
>> In the above papers' section "How Do These Neural Circuits Evolve?"
>> it references to the one below which I haven't read yet but also
>> looks relevant (see what I mean about getting side tracked) as it
>> has a section titled "2.1. Evolution of connectivity":
>>
>> [open access]
>> Developmental and genetic mechanisms of neural circuit evolution, 2017
>> https://www.sciencedirect.com/science/article/pii/S0012160617301495
>>
>> But I don't see anything yet in either one that addresses how
>> they construct NN correctly (non-epileptic).
>
> Thank you for those references. I look forward to reading them.
>
> I apologize for leading you to believe that the Cisek and Wood, et al, papers denied increasing neural complexity during the Cambrian Era or explained how epileptic synchrony was prevented. I sought simply to point out that sophisticated behavior and teeth did not "explode" in at a moment in time.

Not at all, they were both interesting. And I have seen others
questioning when the Cambrian Explosion started and what caused it.

> At the risk of burdening you with more possibly unfruitful reading, may I recommend the work of Ramin Hasani at MIT with others from various institutions? His group have built robotic control systems based ultimately on the careful measurements they had made previously on the 302 neurons and 5000 synapses of C. elegans. I don't think they have published specifically about the epileptic problem, but their work seems like it might at least be another interesting tangent for you. They do show that the hidden state of every neuron in one of their networks is bounded, but I doubt that that directly relates to epileptic avoidance.

Yes, this is the track I'm thinking of.
He uses the word "liquid" to mean time-varying.

Designing Worm-inspired Neural Networks for
Interpretable Robotic Control, 2019
https://publik.tuwien.ac.at/files/publik_287624.pdf

"In this paper, we design novel liquid time-constant recurrent neural
networks for robotic control, inspired by the brain of the nematode,
C. elegans. In the worm’s nervous system, neurons communicate through
nonlinear time-varying synaptic links established amongst them by their
particular wiring structure. This property enables neurons to express
liquid time-constants dynamics and therefore allows the network to
originate complex behaviors with a small number of neurons."
....
"We evaluate their performance in controlling mobile and arm robots"
....
"The C. elegans nematode, with a rather simple nervous system composed
of 302 neurons and 8000 synapses, exhibits remarkable controllability
in it’s surroundings; it expresses behaviors such as processing complex
chemical input stimulations, sleeping, realizing adaptive behavior,
performing mechano-sensation, and controlling 96 muscles.
How does C. elegans perform so much with so little?"

> Liquid Time Constant Networks https://arxiv.org/abs/2006.04439
> This paper describes their LTC networks in some detail and compares performance against LSTM, CT-RNN, Neural ODE and CT-GRU.

This sounds like what I was trying to speculate earlier, that information
in the NN is encoded not only in the location of connections and
in their weights, but also in the phase delay of the signal arrivals.

An analogy would be an asynchronous logic circuit with feedback pathways
where propagation delay on the interconnect wire encodes part of the
signal processing logic.

I see they say the networks are stable and bounded but it's not clear
to me yet why they are. I've searched for terms like "metastability".
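For reference, a rough Euler-integration sketch of the liquid
time-constant update as stated in the arXiv paper,
dx/dt = -(1/tau + f(x,I))*x + f(x,I)*A, with made-up sizes and weights.
Since f is a bounded positive nonlinearity, the effective decay rate
stays positive and the state is pulled toward A, which seems to be the
intuition behind their boundedness claim:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, I, dt, tau, W, b, A):
    """One Euler step of a liquid time-constant unit (Hasani et al.,
    arXiv:2006.04439): dx/dt = -(1/tau + f(x,I))*x + f(x,I)*A."""
    f = sigmoid(W @ np.concatenate([x, I]) + b)   # input- and state-dependent gate
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

rng = np.random.default_rng(1)
n, m = 3, 2                      # 3 state units, 2 inputs (toy sizes)
x = np.zeros(n)
W = rng.normal(size=(n, n + m))  # hypothetical weights
b = np.zeros(n)
A = np.array([1.0, -1.0, 0.5])
for t in range(200):
    I = np.array([np.sin(0.1 * t), np.cos(0.07 * t)])
    x = ltc_step(x, I, dt=0.05, tau=1.0, W=W, b=b, A=A)
print(x)  # the state stays within a bounded range set by tau and A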

> Hasani also gave this talk in March about: Liquid Time Constant Networks https://simons.berkeley.edu/talks/tbd-296
> Images and sequences shown starting at about 31 minutes are quite impressive, especially considering the much smaller number of neurons required and the robustness against noise. He also mentions cascading LTCs into what he calls Neural Circuit Policies which he then shows are Dynamic Causal Models.
>
> I have only skimmed most of his publications: http://www.raminhasani.com/publications/
> which document his journey from worms to LTCNs, but perhaps it may help you sort out whether the epileptic avoidance solution is earlier or later than the C elegans brain!
>
> Best wishes!

Thanks for the pointers.

Re: Neural Network Accelerators

<smu31d$te9$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=22013&group=comp.arch#22013
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Mon, 15 Nov 2021 08:48:43 -0800
Organization: A noiseless patient Spider
Lines: 31
Message-ID: <smu31d$te9$1@dont-email.me>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad>
<779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 15 Nov 2021 16:48:45 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="fc64b761deab47531da8153ca891c1db";
logging-data="30153"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18DCDJeFjYsCTwi7sTAacKc4Q+OQBnHa9A="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.3.0
Cancel-Lock: sha1:aJGhELS3nO0dBG9AFlnj5R/3CzQ=
In-Reply-To: <ILvkJ.66683$Wkjc.15861@fx35.iad>
Content-Language: en-US
 by: Stephen Fuld - Mon, 15 Nov 2021 16:48 UTC

On 11/15/2021 8:18 AM, EricP wrote:

snip

> This sounds like what I was trying to speculate earlier, that information
> in the NN is encoded not only in the location of connections and
> in their weights, but also in the phase delay of the signal arrivals.

The weights in Artificial Neural Networks are a stand-in for the timing
of real neurons. A weighted amount of firing doesn't really exist in real
neurons.

Real neurons are, of course, analogue. When the pre-synaptic neuron
fires, it sends a relatively fixed amount of a neurotransmitter into the
synapse. This causes the receptors in the post-synaptic neuron to open
a channel that allows ions into the cell. The cell has ion pumps that
continually try to maintain the potential across the cell membrane, so
over time the effect of the depolarization from the synapse is
dissipated. But if enough synaptic receptors open before the ion pumps
can cope with the effects, the whole neuron depolarizes (i.e. "fires").

But the amount of depolarization from any one synapse is relatively
fixed. It is the number and timing of the firings of the pre-synaptic
neurons, not their "weight", that determines when the post-synaptic
neuron fires. And, of course, there is no "clock". ANNs use weights and
defined update times to simulate this.
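A minimal leaky integrate-and-fire sketch of that behaviour (all
constants are arbitrary toy values): each presynaptic spike adds a
fixed-size bump, the "ion pumps" leak it away, and the neuron fires only
when enough bumps arrive close together in time.

def lif(spike_times, step_mv=5.0, threshold_mv=15.0, leak=0.9, t_end=50):
    """Return the times at which the neuron fires."""
    v = 0.0
    fired_at = []
    for t in range(t_end):
        v *= leak                             # depolarization dissipates over time
        v += step_mv * spike_times.count(t)   # fixed-size bump per presynaptic spike
        if v >= threshold_mv:
            fired_at.append(t)
            v = 0.0                           # reset after the action potential
    return fired_at

print(lif([5, 15, 25, 35]))   # spread out in time: leaks away, never fires -> []
print(lif([5, 6, 7, 8]))      # same four spikes bunched together -> fires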

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22014&group=comp.arch#22014
X-Received: by 2002:a05:622a:13cf:: with SMTP id p15mr1170401qtk.9.1637001488162;
Mon, 15 Nov 2021 10:38:08 -0800 (PST)
X-Received: by 2002:aca:2412:: with SMTP id n18mr707906oic.119.1637001487893;
Mon, 15 Nov 2021 10:38:07 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Mon, 15 Nov 2021 10:38:07 -0800 (PST)
In-Reply-To: <ILvkJ.66683$Wkjc.15861@fx35.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=Gm3E_woAAACkDRJFCvfChVjhgA24PTsb
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@yahoo.com (Scott Smader)
Injection-Date: Mon, 15 Nov 2021 18:38:08 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 61
 by: Scott Smader - Mon, 15 Nov 2021 18:38 UTC

On Monday, November 15, 2021 at 8:18:50 AM UTC-8, EricP wrote:

> Thanks for the pointers.
Well, quoting Bobcat Goldthwait, "Thank you for encouraging my behavior."

> This sounds like what I was trying to speculate earlier, that information
> in the NN is encoded not only in the location of connections and
> in their weights, but also in the phase delay of the signal arrivals.
>
> An analogy would be an asynchronous logic circuit with feedback pathways
> where propagation delay on the interconnect wire encodes part of the
> signal processing logic.

That is very much in harmony with the thinking in this paper, which documents the use of varying myelination of axons to produce precise timing in birdsong.
Local axonal conduction delays underlie precise timing of a neural sequence
https://www.biorxiv.org/content/10.1101/864231v1

This 2009 paper directly states your proposition: "[T]he visual detection threshold fluctuates over time along with the phase of ongoing EEG activity. The results support the notion that ongoing oscillations shape our perception, possibly by providing a temporal reference frame for neural codes that rely on precise spike timing."
The Phase of Ongoing EEG Oscillations Predicts Visual Perception
https://www.jneurosci.org/content/29/24/7869

And the criticality of phase-related information is also suggested by this:
Intracranial recordings reveal ubiquitous in-phase and in-antiphase functional connectivity between homologous brain regions in humans
https://www.biorxiv.org/content/10.1101/2020.06.19.162065v2

Possibly related, this paper claims that brain signaling is divided into frequency bands:
Causal evidence of network communication in whole-brain dynamics through a multiplexed neural code
https://doi.org/10.1101/2020.06.09.142695
I don't believe the paper addresses this, but multiple bands could be used simultaneously for individual phase-synchronization signals in separated functional networks.

In line with Ivan's insightful comment about evolution optimizing control systems to the edge of chaos, it also makes sense that given enough time, evolution would find an (approximate) implementation of almost every possible signal processing technique.

And maybe even back-propagation, too, as speculated in this very recent paper about some simulations they did:
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
https://www.nature.com/articles/s41593-021-00857-x
I'm too cheap to buy the article, but this Wired article describes it:
https://www.wired.com/story/neuron-bursts-can-mimic-a-famous-ai-learning-strategy/
Numenta forum has a discussion about it: https://discourse.numenta.org/t/burst-as-a-local-learning-rule-in-apical-but-not-basal-dendrites/9093

Fun stuff!

Re: Neural Network Accelerators

<5PxkJ.72842$g35.33193@fx11.iad>

https://www.novabbs.com/devel/article-flat.php?id=22015&group=comp.arch#22015
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx11.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com> <ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
In-Reply-To: <smu31d$te9$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 57
Message-ID: <5PxkJ.72842$g35.33193@fx11.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Mon, 15 Nov 2021 18:38:57 UTC
Date: Mon, 15 Nov 2021 13:37:59 -0500
X-Received-Bytes: 3783
X-Original-Bytes: 3732
 by: EricP - Mon, 15 Nov 2021 18:37 UTC

Stephen Fuld wrote:
> On 11/15/2021 8:18 AM, EricP wrote:
>
> snip
>
>> This sounds like what I was trying to speculate earlier, that information
>> in the NN is encoded not only in the location of connections and
>> in their weights, but also in the phase delay of the signal arrivals.
>
> The weights in Artificial Neural Networks are a stand in for the timing
> of real neurons. Weighted amount of firing doesn't really exist in real
> neurons.
>
> Real neurons are, of course, analogue. When the pre-synaptic neuron
> fires, it send a relatively fixed amount of a neurotransmitter into the
> synapse. This causes the receptors in the post synaptic neuron to open
> a channel that allows ions into the cell. The cell has ion pumps that
> continually try to maintain the potential across the cell membrane, so
> over time, the effect of the depolarization from the synapse is
> dissipated. But if enough synaptic receptors open before the ion pumps
> can cope with the effects, the whole neuron depolarizes (i.e. "fires").
>
> But the amount of depolarization from any one synapse is relatively
> fixed. It is the number and timing of the firings of the presynaptic
> neurons, not their "weight" that determines when the post synaptic
> neuron fires. And, of course there is no "clock". ANNs use weights and
> defined update times to simulate this.

I don't think this is correct, or maybe we are thinking of
different types of neurons.

It is my understanding that after a neuron fires the action potential
is constant down the axon (no-fire or fire), but there are different
numbers of synaptic vesicles that release neurotransmitters and receptors
to receive them and this controls the strength of individual connections.
The number of vesicles and/or receptors is adjusted over time
to increase or decrease the individual connection weights.

Adjusting the number of vesicles or receptors, and thereby the weights of
connections, is thought to be one of the mechanisms for long-term memory,
but I was also told it is related to addiction to some drugs like cocaine
and possibly to how LSD can cause flashbacks.

https://en.wikipedia.org/wiki/Chemical_synapse#Synaptic_strength

https://en.wikipedia.org/wiki/Synaptic_plasticity

https://en.wikipedia.org/wiki/Long-term_potentiation

Also in real neurons some connections are excitatory and others inhibitory
but that can be modeled by using signed weights.

One thing that the classic artificial "sum of multiplied weights" neuron
can't model is an XOR gate - it can only do AND and OR.

Re: Neural Network Accelerators

<65744a3f-23ff-443b-afae-e0285a1b4aa9n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22016&group=comp.arch#22016
X-Received: by 2002:ad4:5aa4:: with SMTP id u4mr39795690qvg.7.1637003449117;
Mon, 15 Nov 2021 11:10:49 -0800 (PST)
X-Received: by 2002:a05:6808:19aa:: with SMTP id bj42mr910562oib.37.1637003448982;
Mon, 15 Nov 2021 11:10:48 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Mon, 15 Nov 2021 11:10:48 -0800 (PST)
In-Reply-To: <5PxkJ.72842$g35.33193@fx11.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=Gm3E_woAAACkDRJFCvfChVjhgA24PTsb
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me> <5PxkJ.72842$g35.33193@fx11.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <65744a3f-23ff-443b-afae-e0285a1b4aa9n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@yahoo.com (Scott Smader)
Injection-Date: Mon, 15 Nov 2021 19:10:49 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 62
 by: Scott Smader - Mon, 15 Nov 2021 19:10 UTC

On Monday, November 15, 2021 at 10:39:01 AM UTC-8, EricP wrote:
> Stephen Fuld wrote:
> > On 11/15/2021 8:18 AM, EricP wrote:
> >
> > snip
> >
> >> This sounds like what I was trying to speculate earlier, that information
> >> in the NN is encoded not only in the location of connections and
> >> in their weights, but also in the phase delay of the signal arrivals.
> >
> > The weights in Artificial Neural Networks are a stand in for the timing
> > of real neurons. Weighted amount of firing doesn't really exist in real
> > neurons.
> >
> > Real neurons are, of course, analogue. When the pre-synaptic neuron
> > fires, it send a relatively fixed amount of a neurotransmitter into the
> > synapse. This causes the receptors in the post synaptic neuron to open
> > a channel that allows ions into the cell. The cell has ion pumps that
> > continually try to maintain the potential across the cell membrane, so
> > over time, the effect of the depolarization from the synapse is
> > dissipated. But if enough synaptic receptors open before the ion pumps
> > can cope with the effects, the whole neuron depolarizes (i.e. "fires").
> >
> > But the amount of depolarization from any one synapse is relatively
> > fixed. It is the number and timing of the firings of the presynaptic
> > neurons, not their "weight" that determines when the post synaptic
> > neuron fires. And, of course there is no "clock". ANNs use weights and
> > defined update times to simulate this.
> I don't think this is correct, or maybe we are thinking of
> different types of neurons.
>
> It is my understanding that after a neuron fires the action potential
> is constant down the axon (no-fire or fire), but there are different
> numbers of synaptic vesicles that release neurotransmitters and receptors
> to receive them and this controls the strength of individual connections.
> The number of vesicles and/or receptors are adjusted over time
> to increase or decrease the individual connection weights.
>
> Adjusting the number of vesicles or receptors and thereby the weights of
> connections is thought to be one of the mechanisms for long term memory,
> but also I was told to addiction to some drugs like cocaine
> and possibly how LSD can cause flashbacks.
>
> https://en.wikipedia.org/wiki/Chemical_synapse#Synaptic_strength
>
> https://en.wikipedia.org/wiki/Synaptic_plasticity
>
> https://en.wikipedia.org/wiki/Long-term_potentiation
>
> Also in real neurons some connections are excitatory and others inhibitory
> but that can be modeled by using signed weights.
>

It's not just the number of pre-synaptic neurons; there are typically multiple synapses between connected neurons, and varying numbers of vesicles on synapses.
Pretty video from Sebastian Seung's lab in 2013:
How to map neurons in 3D
https://www.youtube.com/watch?v=_iKrE2A2Vx4
The red and green neurons can be seen contacting at two separated areas.

> One thing that the classic artificial "sum of multiplied weights" neuron
> can't model is an XOR gate - it can only do AND and OR.

Um, that was true for the original Perceptron, but XOR is a long-solved problem for networks. If you've got AND, OR, and NOT with multiple levels, you can do anything!
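
As a quick illustration (my own C sketch, nothing from the thread), two
levels of AND/OR/NOT are enough: XOR(a,b) = (a OR b) AND NOT(a AND b).

/* Illustrative sketch only: building XOR from AND, OR and NOT in two
   levels, the point made above. */
#include <stdio.h>

static int AND(int a, int b) { return a & b; }
static int OR (int a, int b) { return a | b; }
static int NOT(int a)        { return !a;    }

/* level 1: a OR b and a AND b; level 2: combine them */
static int XOR(int a, int b) { return AND(OR(a, b), NOT(AND(a, b))); }

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d xor %d = %d\n", a, b, XOR(a, b));
    return 0;
}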

Re: Neural Network Accelerators

<5dd2a72a-c100-4707-8637-abd7d22d5831n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22017&group=comp.arch#22017

Newsgroups: comp.arch
X-Received: by 2002:a05:6214:4007:: with SMTP id kd7mr40310981qvb.52.1637003579537;
Mon, 15 Nov 2021 11:12:59 -0800 (PST)
X-Received: by 2002:a9d:764c:: with SMTP id o12mr1131274otl.129.1637003579361;
Mon, 15 Nov 2021 11:12:59 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Mon, 15 Nov 2021 11:12:59 -0800 (PST)
In-Reply-To: <5PxkJ.72842$g35.33193@fx11.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=Gm3E_woAAACkDRJFCvfChVjhgA24PTsb
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me> <5PxkJ.72842$g35.33193@fx11.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <5dd2a72a-c100-4707-8637-abd7d22d5831n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@yahoo.com (Scott Smader)
Injection-Date: Mon, 15 Nov 2021 19:12:59 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 61
 by: Scott Smader - Mon, 15 Nov 2021 19:12 UTC

On Monday, November 15, 2021 at 10:39:01 AM UTC-8, EricP wrote:
> Stephen Fuld wrote:
> > On 11/15/2021 8:18 AM, EricP wrote:
> >
> > snip
> >
> >> This sounds like what I was trying to speculate earlier, that information
> >> in the NN is encoded not only in the location of connections and
> >> in their weights, but also in the phase delay of the signal arrivals.
> >
> > The weights in Artificial Neural Networks are a stand in for the timing
> > of real neurons. Weighted amount of firing doesn't really exist in real
> > neurons.
> >
> > Real neurons are, of course, analogue. When the pre-synaptic neuron
> > fires, it send a relatively fixed amount of a neurotransmitter into the
> > synapse. This causes the receptors in the post synaptic neuron to open
> > a channel that allows ions into the cell. The cell has ion pumps that
> > continually try to maintain the potential across the cell membrane, so
> > over time, the effect of the depolarization from the synapse is
> > dissipated. But if enough synaptic receptors open before the ion pumps
> > can cope with the effects, the whole neuron depolarizes (i.e. "fires").
> >
> > But the amount of depolarization from any one synapse is relatively
> > fixed. It is the number and timing of the firings of the presynaptic
> > neurons, not their "weight" that determines when the post synaptic
> > neuron fires. And, of course there is no "clock". ANNs use weights and
> > defined update times to simulate this.
> I don't think this is correct, or maybe we are thinking of
> different types of neurons.
>
> It is my understanding that after a neuron fires the action potential
> is constant down the axon (no-fire or fire), but there are different
> numbers of synaptic vesicles that release neurotransmitters and receptors
> to receive them and this controls the strength of individual connections.
> The number of vesicles and/or receptors are adjusted over time
> to increase or decrease the individual connection weights.
>
> Adjusting the number of vesicles or receptors and thereby the weights of
> connections is thought to be one of the mechanisms for long term memory,
> but also I was told to addiction to some drugs like cocaine
> and possibly how LSD can cause flashbacks.
>
> https://en.wikipedia.org/wiki/Chemical_synapse#Synaptic_strength
>
> https://en.wikipedia.org/wiki/Synaptic_plasticity
>
> https://en.wikipedia.org/wiki/Long-term_potentiation
>
> Also in real neurons some connections are excitatory and others inhibitory
> but that can be modeled by using signed weights.
>
It's not just the number of pre-synaptic neurons; there are typically multiple synapses between connected neurons, and varying numbers of vesicles on synapses.
Pretty video from Sebastian Seung's lab in 2013:
How to map neurons in 3D
https://www.youtube.com/watch?v=_iKrE2A2Vx4
The red and green neurons can be seen contacting at two separated areas.

> One thing that the classic artificial "sum of multiplied weights" neuron
> can't model is an XOR gate - it can only do AND and OR.

Um, that was true for the original Perceptron, but XOR is a long-solved problem for networks. If you've got AND and NOT, or OR and NOT, with multiple levels, you can do anything!

Re: Neural Network Accelerators

<sn1dac$gdc$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=22032&group=comp.arch#22032

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Tue, 16 Nov 2021 15:02:34 -0800
Organization: A noiseless patient Spider
Lines: 60
Message-ID: <sn1dac$gdc$1@dont-email.me>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad>
<779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
<5PxkJ.72842$g35.33193@fx11.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 16 Nov 2021 23:02:36 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="cba5a7ced4da002aa3de841eda0eab68";
logging-data="16812"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19PBERCzQqlPm0kBptRrDDFQowl1O+iUxU="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.3.0
Cancel-Lock: sha1:4SaeBLwzXYwF26sEjR6SPqUaeuo=
In-Reply-To: <5PxkJ.72842$g35.33193@fx11.iad>
Content-Language: en-US
 by: Stephen Fuld - Tue, 16 Nov 2021 23:02 UTC

On 11/15/2021 10:37 AM, EricP wrote:
> Stephen Fuld wrote:
>> On 11/15/2021 8:18 AM, EricP wrote:
>>
>> snip
>>
>>> This sounds like what I was trying to speculate earlier, that
>>> information
>>> in the NN is encoded not only in the location of connections and
>>> in their weights, but also in the phase delay of the signal arrivals.
>>
>> The weights in Artificial Neural Networks are a stand in for the
>> timing of real neurons. Weighted amount of firing doesn't really exist
>> in real neurons.
>>
>> Real neurons are, of course, analogue.  When the pre-synaptic neuron
>> fires, it send a relatively fixed amount of a neurotransmitter into
>> the synapse.  This causes the receptors in the post synaptic neuron to
>> open a channel that allows ions into the cell.  The cell has ion pumps
>> that continually try to maintain the potential across the cell
>> membrane, so over time, the effect of the depolarization from the
>> synapse is dissipated.  But if enough synaptic receptors open before
>> the ion pumps can cope with the effects, the whole neuron depolarizes
>> (i.e. "fires").
>>
>> But the amount of depolarization from any one synapse is relatively
>> fixed.  It is the number and timing of the firings of the presynaptic
>> neurons, not their "weight" that determines when the post synaptic
>> neuron fires.  And, of course there is no "clock".  ANNs use weights
>> and defined update times to simulate this.
>
> I don't think this is correct, or maybe we are thinking of
> different types of neurons.

>
> It is my understanding that after a neuron fires the action potential
> is constant down the axon (no-fire or fire), but there are different
> numbers of synaptic vesicles that release neurotransmitters and receptors
> to receive them and this controls the strength of individual connections.
> The number of vesicles and/or receptors are adjusted over time
> to increase or decrease the individual connection weights.

You are right, of course. I obviously had a malfunction in my neural
network. :-( I apologize.

snip

> One thing that the classic artificial "sum of multiplied weights" neuron
> can't model is an XOR gate - it can only do AND and OR.

That was a failure of the original perceptron. Minsky and Papert showed
this for the single-layer perceptron, and that was a large part of what
led to the "dark winter" of NN research. The realization that multiple
layers with non-linear activation functions get around this problem is
what led to their "renascence".
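
A minimal sketch of that fix (my own illustration, with hand-picked
weights and a step threshold standing in for the non-linear activation;
nothing here is learned):

/* Two-layer "sum of weights" network with a non-linear (step)
   activation computing XOR, which a single layer cannot do. */
#include <stdio.h>

static int step(double x) { return x > 0.0 ? 1 : 0; }

static int xor_net(int x1, int x2)
{
    int h1 = step(1.0*x1 + 1.0*x2 - 0.5);   /* fires on "at least one"  */
    int h2 = step(1.0*x1 + 1.0*x2 - 1.5);   /* fires on "both"          */
    return step(1.0*h1 - 1.0*h2 - 0.5);     /* "at least one, not both" */
}

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d xor %d -> %d\n", a, b, xor_net(a, b));
    return 0;
}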

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22033&group=comp.arch#22033

Newsgroups: comp.arch
X-Received: by 2002:a05:622a:2cc:: with SMTP id a12mr12264054qtx.101.1637105037963;
Tue, 16 Nov 2021 15:23:57 -0800 (PST)
X-Received: by 2002:a9d:5f15:: with SMTP id f21mr9477812oti.331.1637105037698;
Tue, 16 Nov 2021 15:23:57 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Tue, 16 Nov 2021 15:23:57 -0800 (PST)
In-Reply-To: <sn1dac$gdc$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=104.59.204.55; posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 104.59.204.55
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
<5PxkJ.72842$g35.33193@fx11.iad> <sn1dac$gdc$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: MitchAl...@aol.com (MitchAlsup)
Injection-Date: Tue, 16 Nov 2021 23:23:57 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 92
 by: MitchAlsup - Tue, 16 Nov 2021 23:23 UTC

On Tuesday, November 16, 2021 at 5:02:39 PM UTC-6, Stephen Fuld wrote:
> On 11/15/2021 10:37 AM, EricP wrote:
> > Stephen Fuld wrote:
> >> On 11/15/2021 8:18 AM, EricP wrote:
> >>
> >> snip
> >>
> >>> This sounds like what I was trying to speculate earlier, that
> >>> information
> >>> in the NN is encoded not only in the location of connections and
> >>> in their weights, but also in the phase delay of the signal arrivals.
> >>
> >> The weights in Artificial Neural Networks are a stand in for the
> >> timing of real neurons. Weighted amount of firing doesn't really exist
> >> in real neurons.
> >>
> >> Real neurons are, of course, analogue. When the pre-synaptic neuron
> >> fires, it send a relatively fixed amount of a neurotransmitter into
> >> the synapse. This causes the receptors in the post synaptic neuron to
> >> open a channel that allows ions into the cell. The cell has ion pumps
> >> that continually try to maintain the potential across the cell
> >> membrane, so over time, the effect of the depolarization from the
> >> synapse is dissipated. But if enough synaptic receptors open before
> >> the ion pumps can cope with the effects, the whole neuron depolarizes
> >> (i.e. "fires").
> >>
> >> But the amount of depolarization from any one synapse is relatively
> >> fixed. It is the number and timing of the firings of the presynaptic
> >> neurons, not their "weight" that determines when the post synaptic
> >> neuron fires. And, of course there is no "clock". ANNs use weights
> >> and defined update times to simulate this.
> >
> > I don't think this is correct, or maybe we are thinking of
> > different types of neurons.
>
>
> >
> > It is my understanding that after a neuron fires the action potential
> > is constant down the axon (no-fire or fire), but there are different
> > numbers of synaptic vesicles that release neurotransmitters and receptors
> > to receive them and this controls the strength of individual connections.
> > The number of vesicles and/or receptors are adjusted over time
> > to increase or decrease the individual connection weights.
> You are right, of course. I obviously had a malfunction in my neural
> network. :-( I apologize.
>
> snip
<
> > One thing that the classic artificial "sum of multiplied weights" neuron
> > can't model is an XOR gate - it can only do AND and OR.
<
> That was a failure of the original perceptron. Minsky and Papert showed
> this (for any number of layers), and that was what led to the "dark
> winter" of NN research. The realization that non-linear activation
> functions could get around this problem is what led to their "renascence"..
<
This leads to a short story about microcode.........
<
Basically, microcode is a ROM built from a PLA--a PLA is simply 2 NOT planes
back to back. There are a lot of things a PLA cannot do easily, but the addition
of a row of XOR gates between the NOR-planes significantly increases the kinds
of things a PLA can compute (sequence...)
<
But, computer NNs are built around × and +; they could just as easily be built
around × and ± if the weights (and/or coefficients) were signed.
<
It is just arithmetic..........
<
Getting nice sigmoid functions is not that hard with look-up-tables..........
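
A rough sketch of such a look-up-table sigmoid (table size and input
range are arbitrary choices of mine, not anything specified above):

/* 256-entry sigmoid table over [-8, +8], filled once at startup.
   Link with -lm for exp(). */
#include <math.h>
#include <stdio.h>

#define LUT_SIZE 256
#define LUT_MIN  (-8.0)
#define LUT_MAX  ( 8.0)

static double lut[LUT_SIZE];

static void lut_init(void)
{
    for (int i = 0; i < LUT_SIZE; i++) {
        double x = LUT_MIN + (LUT_MAX - LUT_MIN) * i / (LUT_SIZE - 1);
        lut[i] = 1.0 / (1.0 + exp(-x));
    }
}

static double sigmoid_lut(double x)
{
    if (x <= LUT_MIN) return 0.0;           /* clamp the tails */
    if (x >= LUT_MAX) return 1.0;
    int i = (int)((x - LUT_MIN) / (LUT_MAX - LUT_MIN) * (LUT_SIZE - 1));
    return lut[i];         /* nearest entry; interpolate for more accuracy */
}

int main(void)
{
    lut_init();
    printf("sigmoid(-2) ~ %f\n", sigmoid_lut(-2.0));
    printf("sigmoid( 0) ~ %f\n", sigmoid_lut( 0.0));
    printf("sigmoid( 2) ~ %f\n", sigmoid_lut( 2.0));
    return 0;
}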
> --
> - Stephen Fuld
> (e-mail address disguised to prevent spam)

Re: Neural Network Accelerators

<rV8lJ.112170$IW4.87639@fx48.iad>

https://www.novabbs.com/devel/article-flat.php?id=22047&group=comp.arch#22047

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!paganini.bofh.team!news.dns-netz.com!news.freedyn.net!newsreader4.netcologne.de!news.netcologne.de!peer02.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx48.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com> <ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me> <5PxkJ.72842$g35.33193@fx11.iad> <sn1dac$gdc$1@dont-email.me> <b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com>
In-Reply-To: <b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 40
Message-ID: <rV8lJ.112170$IW4.87639@fx48.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Wed, 17 Nov 2021 15:08:07 UTC
Date: Wed, 17 Nov 2021 10:06:40 -0500
X-Received-Bytes: 3068
 by: EricP - Wed, 17 Nov 2021 15:06 UTC

MitchAlsup wrote:
> <
> This leads to a short story about microcode.........
> <
> Basically, microcode is a ROM built from a PLA--a PLA is simply 2 NOT planes

NOR planes of course.

> back to back. There are a lot of things a PLA cannot do easily, but the addition
> of a row of XOR gates between the NOR-planes significantly increases the kinds
> of things a PLA can compute (sequence...)

The planes are both dynamic logic because CMOS doesn't allow static
wired-OR and I saw some mention that there are some design fiddly bits
ensuring the second NOR plane doesn't discharge the first plane too soon.

Hmmm... XOR's between NOR planes... interesting, I never thought of that.

> <
> But, computer NNs are built around × and +, they could just as easily be built
> around × and ± ; if the weights (and/or coefficients) were signed..
> <
> It is just arithmetic..........

That is how it is usually represented in the equations:
as the sum of a series of synapse states multiplied by their weights,
and an unsigned compare against a trigger value.

The sum and trigger have enough bits to not overflow.
For 1024 8-bit integer synapse weights the parallel adder looks
like it requires 2048 adders varying in size from 8 to 8+10 bits
producing an 18 bit total. Does that sound correct?

Note that the synapse state is 0 or 1 so the multiply is unnecessary.
I would consider replacing the above neuron mechanism with a PHI or MUX
function to select between 0 or a signed weight for each synapse based on
the 0 or 1 state, the sum done with signed arithmetic without overflow,
and a signed compare to the trigger level.
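
A quick software sketch of that select-and-accumulate neuron (sizes
taken from the 1024 x 8-bit example above; illustrative C, not a
hardware description):

/* Each synapse state (0 or 1) selects either 0 or its signed weight,
   the selections are summed in a wide signed accumulator, and the
   result gets a signed compare against the trigger level. */
#include <stdint.h>
#include <stdio.h>

#define N_SYNAPSES 1024

static int fires(const uint8_t state[N_SYNAPSES],
                 const int8_t  weight[N_SYNAPSES],
                 int32_t trigger)
{
    int32_t sum = 0;                     /* 1024 * 127 fits easily in 32 bits */
    for (int i = 0; i < N_SYNAPSES; i++)
        sum += state[i] ? weight[i] : 0; /* MUX instead of multiply */
    return sum >= trigger;
}

int main(void)
{
    static uint8_t state[N_SYNAPSES];
    static int8_t  weight[N_SYNAPSES];
    for (int i = 0; i < N_SYNAPSES; i++) {
        state[i]  = (i % 3) == 0;        /* toy input pattern  */
        weight[i] = (i % 2) ? 3 : -1;    /* toy signed weights */
    }
    printf("neuron fires: %d\n", fires(state, weight, 100));
    return 0;
}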

Re: Neural Network Accelerators

<sn3el7$tjj$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=22049&group=comp.arch#22049

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-10ca-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de!not-for-mail
From: tkoe...@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Wed, 17 Nov 2021 17:37:43 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <sn3el7$tjj$1@newsreader4.netcologne.de>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad>
<779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
<5PxkJ.72842$g35.33193@fx11.iad> <sn1dac$gdc$1@dont-email.me>
<b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com>
Injection-Date: Wed, 17 Nov 2021 17:37:43 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-10ca-0-7285-c2ff-fe6c-992d.ipv6dyn.netcologne.de:2001:4dd7:10ca:0:7285:c2ff:fe6c:992d";
logging-data="30323"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Wed, 17 Nov 2021 17:37 UTC

MitchAlsup <MitchAlsup@aol.com> schrieb:

> Basically, microcode is a ROM built from a PLA--a PLA is simply
> 2 NOT planes back to back. There are a lot of things a PLA cannot
> do easily, but the addition of a row of XOR gates between the
> NOR-planes significantly increases the kinds of things a PLA can
> compute (sequence...)

Sounds interesting.

Do you have a reference for that, by any chance?

Re: Neural Network Accelerators

<471aacce-f048-49ce-ba4d-b2c59a759ae4n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22050&group=comp.arch#22050

Newsgroups: comp.arch
X-Received: by 2002:a05:620a:1a8d:: with SMTP id bl13mr15343656qkb.200.1637178271523;
Wed, 17 Nov 2021 11:44:31 -0800 (PST)
X-Received: by 2002:a9d:764c:: with SMTP id o12mr16371386otl.129.1637178271303;
Wed, 17 Nov 2021 11:44:31 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!news.mixmin.net!proxad.net!feeder1-2.proxad.net!209.85.160.216.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Wed, 17 Nov 2021 11:44:31 -0800 (PST)
In-Reply-To: <sn3el7$tjj$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=104.59.204.55; posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 104.59.204.55
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
<5PxkJ.72842$g35.33193@fx11.iad> <sn1dac$gdc$1@dont-email.me>
<b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com> <sn3el7$tjj$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <471aacce-f048-49ce-ba4d-b2c59a759ae4n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: MitchAl...@aol.com (MitchAlsup)
Injection-Date: Wed, 17 Nov 2021 19:44:31 +0000
Content-Type: text/plain; charset="UTF-8"
 by: MitchAlsup - Wed, 17 Nov 2021 19:44 UTC

On Wednesday, November 17, 2021 at 11:37:45 AM UTC-6, Thomas Koenig wrote:
> MitchAlsup <Mitch...@aol.com> schrieb:
> > Basically, microcode is a ROM built from a PLA--a PLA is simply
> > 2 NOT planes back to back. There are a lot of things a PLA cannot
> > do easily, but the addition of a row of XOR gates between the
> > NOR-planes significantly increases the kinds of things a PLA can
> > compute (sequence...)
> Sound interesting.
>
> Do you have a reference for that, by any chance?
<
Probably not:: we did use this trick on the 68000, 68010 and 68020 microcode
stores.
<
Use case: Say you have a term that is asserted by 98% of all microcode
"instructions". You can save power by only computing it on the 2% that
don't need it and then using the XOR to flip the polarity.
<
Another use case: Say you have a calculation and an available function
unit and, as long as "blah" does not happen, you can use it. So you set up
the microcode to assume you can do it, and then use the XORs to cancel
it on those "special" occasions when someone else consumes that function
unit.
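
A toy software model of the trick (my own sketch, modeled at the
abstract AND-OR level rather than the real NOR-NOR planes): the planes
only compute the rare 2% term, and the XOR row flips it so the common
case comes out asserted without the planes doing any work for it.

#include <stdint.h>
#include <stdio.h>

#define N_TERMS 4

/* one product term: the term fires when (inputs & mask) == match */
struct term { uint32_t mask, match; };

static uint32_t pla_output(uint32_t inputs,
                           const struct term and_plane[N_TERMS],
                           const uint32_t or_plane[N_TERMS], /* term -> output bits */
                           uint32_t xor_row)                 /* polarity flips      */
{
    uint32_t out = 0;
    for (int t = 0; t < N_TERMS; t++)
        if ((inputs & and_plane[t].mask) == and_plane[t].match)
            out |= or_plane[t];
    return out ^ xor_row;     /* the row of XOR gates after the planes */
}

int main(void)
{
    /* single term detecting the rare case "input bit 0 set" */
    struct term and_plane[N_TERMS] = { { 0x1, 0x1 } };
    uint32_t    or_plane[N_TERMS]  = { 0x1 };  /* rare case drives output bit 0 */
    uint32_t    xor_row            = 0x1;      /* ...which the XOR then inverts */

    printf("common case -> %u\n", pla_output(0x0, and_plane, or_plane, xor_row)); /* 1 */
    printf("rare case   -> %u\n", pla_output(0x1, and_plane, or_plane, xor_row)); /* 0 */
    return 0;
}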

Re: Neural Network Accelerators

<8b3acf7d-f656-4f4f-8829-c2d4853c532dn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22051&group=comp.arch#22051

Newsgroups: comp.arch
X-Received: by 2002:a05:622a:164c:: with SMTP id y12mr20286611qtj.63.1637181522735;
Wed, 17 Nov 2021 12:38:42 -0800 (PST)
X-Received: by 2002:a4a:94e4:: with SMTP id l33mr10130125ooi.7.1637181522457;
Wed, 17 Nov 2021 12:38:42 -0800 (PST)
Path: i2pn2.org!i2pn.org!aioe.org!news.uzoreto.com!newsfeed.xs4all.nl!newsfeed7.news.xs4all.nl!news-out.netnews.com!news.alt.net!fdc2.netnews.com!peer02.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Wed, 17 Nov 2021 12:38:42 -0800 (PST)
In-Reply-To: <sn3el7$tjj$1@newsreader4.netcologne.de>
Injection-Info: google-groups.googlegroups.com; posting-host=136.50.253.102; posting-account=AoizIQoAAADa7kQDpB0DAj2jwddxXUgl
NNTP-Posting-Host: 136.50.253.102
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <smu31d$te9$1@dont-email.me>
<5PxkJ.72842$g35.33193@fx11.iad> <sn1dac$gdc$1@dont-email.me>
<b3e44c57-e16e-4cd2-b824-78ad1302bd1fn@googlegroups.com> <sn3el7$tjj$1@newsreader4.netcologne.de>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <8b3acf7d-f656-4f4f-8829-c2d4853c532dn@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: jim.brak...@ieee.org (JimBrakefield)
Injection-Date: Wed, 17 Nov 2021 20:38:42 +0000
Content-Type: text/plain; charset="UTF-8"
X-Received-Bytes: 2685
 by: JimBrakefield - Wed, 17 Nov 2021 20:38 UTC

On Wednesday, November 17, 2021 at 11:37:45 AM UTC-6, Thomas Koenig wrote:
> MitchAlsup <Mitch...@aol.com> schrieb:
> > Basically, microcode is a ROM built from a PLA--a PLA is simply
> > 2 NOT planes back to back. There are a lot of things a PLA cannot
> > do easily, but the addition of a row of XOR gates between the
> > NOR-planes significantly increases the kinds of things a PLA can
> > compute (sequence...)
> Sound interesting.
>
> Do you have a reference for that, by any chance?
Wikipedia has a writeup on "CPLD"; most of them have the XOR of a product term with other sum-of-product terms.
Digikey has them in stock along with data sheets, under
"Embedded - CPLDs (Complex Programmable Logic Devices)".

I've used the XC9536X series.
Its data sheet has a schematic of the "Macrocell".
Totally obsoleted by FPGAs, which are bigger, faster and lower power.

Re: Neural Network Accelerators

<sn3poa$u8p$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=22052&group=comp.arch#22052

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: cr88...@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
Date: Wed, 17 Nov 2021 14:47:03 -0600
Organization: A noiseless patient Spider
Lines: 148
Message-ID: <sn3poa$u8p$1@dont-email.me>
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com>
<896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com>
<N3bkJ.63875$Wkjc.36258@fx35.iad>
<94d907bf-0e1a-44c2-8a90-e23ee1dde3cdn@googlegroups.com>
<HLvkJ.66682$Wkjc.57313@fx35.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Wed, 17 Nov 2021 20:47:06 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="ce339134b537c6fd9ef5d326debf4b0a";
logging-data="31001"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19dYTnAAXrVObs267O1puGs"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.3.0
Cancel-Lock: sha1:srv85Oy2sUcWiVSY9AbANUuM57g=
In-Reply-To: <HLvkJ.66682$Wkjc.57313@fx35.iad>
Content-Language: en-US
 by: BGB - Wed, 17 Nov 2021 20:47 UTC

On 11/15/2021 10:15 AM, EricP wrote:
> MitchAlsup wrote:
>> On Sunday, November 14, 2021 at 10:46:40 AM UTC-6, EricP wrote:
>>>
>>> Understanding the origin of the wiring of biological NN (BNN) is
>>> appropriate to discussion of NN Accelerators as we are endeavoring to
>>> improve such simulators.
>> <
>> It is pretty clear that NNs are "pattern matchers" where one does not
>> necessarily know the pattern a-priori.
>> <
>> The still open question is what kind of circuitry/algorithm is
>> appropriate to match the patterns one has never even dreamed up ??
>
> The artificial convolution NN are basically fancy curve fit algorithms
> that adjust a polynomial with tens or hundreds of thousands of terms
> to some number of inputs after millions of examples.
>

A lot of what people are doing with NNs could be done with
auto-correlation and FIR filters.

Though, in the overly loose definitions often used, one could also try
to classify auto-correlation and FIR filters as a type of NN.

> Biological NN perform associative learning after just a few examples
> with just a few neurons.
>
> Both are suitable for sorting fish but only one
> can fit inside and control a fruit fly.
>

There is one reason I like genetic algorithms and genetic programming
for some tasks:
While the training process is itself fairly slow, and the results are
rarely much better than something one could come up with by hand
(usually in a fraction of the time and effort), one can at least use it to
generate results that are fairly cheap to run within the constraints of
the target machine (unlike CNNs or "Deep Learning" models).

So, one can in theory set up a GP evolver to be able to use a simple set
of vector-arithmetic operators and a simplified register machine
(generally simpler register models seem to work out better, and are
simpler to implement, than a GP evolver which works in terms of ASTs).

The weighting algorithm can also impose a penalty for the number of
(Non-NOP) operations used, favoring smaller and simpler solutions (and
causing non-effective operations to tend to mutate into NOPs; which can
be removed when generating the final output).

Say, for example, for each "program":
Has between 64 and 1024 instruction words to work with;
Usually this is a fixed parameter in the tests.
Has 32 or 64 bits per instruction word;
Has 16 or 32 registers;
May or may not have control-flow operations (depending on the task);
...

An example GP-ISA design might have:
64-bit instruction words;
16 or 32 registers, encoded in a padded form (ECC style, *);
Opcode bits may or may not also have a padded encoding;
Most invalid operations are treated as NOP;
There is a way to encode things like vector loads, ...;
Most operators are 3R form, eg: "OP Rs, Rt, Rd"
...

*: Multiple encodings may map to the same logical register, and using
ECC bits makes the register ID more resistant to random bit-flips
(caused by the mutator).

So, a register may be encoded as:
4 bits (abcd): Register Number, Gray-Coded
3 bits: Parity (a^b^c, b^c^d, c^d^a)
R0: 000-0000
R1: 011-0001
R2: 100-0011
R3: 111-0010
R4: 001-0110
...

Similarly, could do a 5 bit register in 8 bits, eg:
{ a^b^c, b^c^d, c^d^e, e, a, b, c, d }

The ECC (~ Hamming(7,4)) may try to "correct" the register on decode.
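
For reference, a small C sketch that reproduces the R0..R4 codes listed
above: Gray-code the 4-bit register number, then prepend the three
parity bits (a^b^c, b^c^d, c^d^a).

#include <stdio.h>

static unsigned encode_reg(unsigned r)            /* r = 0..15 */
{
    unsigned g = r ^ (r >> 1);                    /* Gray code, bits abcd */
    unsigned a = (g >> 3) & 1, b = (g >> 2) & 1;
    unsigned c = (g >> 1) & 1, d =  g       & 1;
    unsigned p = ((a ^ b ^ c) << 2) | ((b ^ c ^ d) << 1) | (c ^ d ^ a);
    return (p << 4) | g;                          /* 7-bit code: ppp-abcd */
}

int main(void)
{
    for (unsigned r = 0; r <= 4; r++) {
        unsigned e = encode_reg(r);
        printf("R%u: %u%u%u-%u%u%u%u\n", r,
               (e >> 6) & 1, (e >> 5) & 1, (e >> 4) & 1,
               (e >> 3) & 1, (e >> 2) & 1, (e >> 1) & 1, e & 1);
    }
    return 0;
}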

Opcode may encode an 8 or 10-bit opcode number in 16 bits.

Encodings which fall into the "unrecoverable" or "disallowed" parts of
the encoding space are interpreted as NOPs.

Vector immediate values may be encoded as 48-bits, such as four 12-bit
floating-point values (S.E5.M6), which may also be stored in gray-coded
form. There might also be 2x 24-bit (truncated gray-coded single),
or 1x 48-bit (truncated gray-coded double).

It may also make sense to have integer operators available (depends on
the task).

....

The GP evolver basically consists of:
Test data, which is fed into the program in some form;
Say, the test data is presented as input registers.
An interpreter, which runs each GP program;
Output is one or more registers.
A heuristic to rank its performance;
...

For breeding the top-performing programs:
Pick instruction words randomly from each parent;
Randomly flip bits in each child produced.

The initial state would fill the programs with "random garbage" though,
using NOP encodings for the operators.

If one allows for control-flow, the interpreter will automatically
terminate after a certain number of instructions, and impose a fairly
severe penalty value.

Result (after a certain number of runs) would be dumped out in an ASM
style notation ("disassembled" from the internal format).
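
A minimal sketch of that breeding step (my own toy code, not the actual
evolver; program size and the number of bit flips are arbitrary):

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define PROG_WORDS 64            /* 64 x 64-bit instruction words */

/* each child word is picked at random from one parent, then a few
   random bit flips are applied as mutation */
static void breed(const uint64_t *pa, const uint64_t *pb,
                  uint64_t *child, int n_flips)
{
    for (int i = 0; i < PROG_WORDS; i++)
        child[i] = (rand() & 1) ? pa[i] : pb[i];   /* word-wise crossover */
    for (int i = 0; i < n_flips; i++) {
        int w = rand() % PROG_WORDS;
        child[w] ^= (uint64_t)1 << (rand() % 64);  /* mutation */
    }
}

int main(void)
{
    uint64_t a[PROG_WORDS] = {0}, b[PROG_WORDS], c[PROG_WORDS];
    for (int i = 0; i < PROG_WORDS; i++) b[i] = ~(uint64_t)0;
    srand(1);
    breed(a, b, c, 4);
    printf("child word 0 = %016llx\n", (unsigned long long)c[0]);
    return 0;
}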

Not really developed any of this into a cohesive library or tool, partly
as it tends to be fairly ad-hoc, and I am not sure if anyone besides
myself would find something like this all that useful. These sorts of
small-scale tests were usually done via "copy-pasting something together".

....

Actually, thinking of it, it isn't exactly that huge of a stretch that
someone could also run such a GP evolver on an FPGA (as opposed to
running it on the CPU on a PC). It is possible that an FPGA could be
significantly faster at this task (if one had a good way to move results
and data between the FPGA and a PC).

....

Re: Neural Network Accelerators

<84elJ.36392$_Y5.21579@fx29.iad>

https://www.novabbs.com/devel/article-flat.php?id=22053&group=comp.arch#22053

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!news.mixmin.net!npeer.as286.net!npeer-ng0.as286.net!peer03.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx29.iad.POSTED!not-for-mail
From: ThatWoul...@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Neural Network Accelerators
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com> <sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org> <bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com> <FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad> <smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad> <17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad> <aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com> <ILvkJ.66683$Wkjc.15861@fx35.iad> <fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>
In-Reply-To: <fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 98
Message-ID: <84elJ.36392$_Y5.21579@fx29.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Wed, 17 Nov 2021 21:00:52 UTC
Date: Wed, 17 Nov 2021 15:59:39 -0500
X-Received-Bytes: 6733
 by: EricP - Wed, 17 Nov 2021 20:59 UTC

Scott Smader wrote:
> On Monday, November 15, 2021 at 8:18:50 AM UTC-8, EricP wrote:
>
>> Thanks for the pointers.
> Well, quoting Bobcat Goldthwait, "Thank you for encouraging my behavior."
>
>> This sounds like what I was trying to speculate earlier, that information
>> in the NN is encoded not only in the location of connections and
>> in their weights, but also in the phase delay of the signal arrivals.
>>
>> An analogy would be an asynchronous logic circuit with feedback pathways
>> where propagation delay on the interconnect wire encodes part of the
>> signal processing logic.
>
> That is very much in haromony with the thinking in this paper which documents the use of varying myelination of axons to produce precise timing in birdsong.
> Local axonal conduction delays underlie precise timing of a neural sequence
> https://www.biorxiv.org/content/10.1101/864231v1
>
> This 2009 paper directly states your proposition: "[T]he visual detection threshold fluctuates over time along with the phase of ongoing EEG activity. The results support the notion that ongoing oscillations shape our perception, possibly by providing a temporal reference frame for neural codes that rely on precise spike timing."
> The Phase of Ongoing EEG Oscillations Predicts Visual Perception
> https://www.jneurosci.org/content/29/24/7869
>
> And the criticality of phase-related information is also suggested by this:
> Intracranial recordings reveal ubiquitous in-phase and in- antiphase functional connectivity between homologous brain regions in humans
> https://www.biorxiv.org/content/10.1101/2020.06.19.162065v2
>
> Possibly related, this paper claims that brain signaling is divided into frequency bands:
> Causal evidence of network communication in whole-brain dynamics through a multiplexed neural code
> https://doi.org/10.1101/2020.06.09.142695
> I don't believe the paper addresses this, but multiple bands could be used simultaneously for individual phase-synchronization signals in separated functional networks.
>
> In line with Ivan's insightful comment about evolution optimizing control systems to the edge of chaos, it also makes sense that given enough time, evolution would find an (approximate) implementation of almost every possible signal processing technique.
>
> And maybe even back-propagation, too, as speculated in this very recent paper about some simulations they did:
> Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
> https://www.nature.com/articles/s41593-021-00857-x
> I'm too cheap to buy the article, but this Wired article describes it:
> https://www.wired.com/story/neuron-bursts-can-mimic-a-famous-ai-learning-strategy/

Google Scholar finds

https://www.biorxiv.org/content/10.1101/2020.03.30.015511v2.full

> Numenta forum has a discussion about it: https://discourse.numenta.org/t/burst-as-a-local-learning-rule-in-apical-but-not-basal-dendrites/9093
>
> Fun stuff!
>

Thanks.

With respect to phase shifting of signals, I was thinking it might be
viewed as a dynamic mechanism.
Viewed statically, neuron A triggers if the B AND (C OR D) inputs are present.
But viewed dynamically, signal C arrives early and D arrives later,
so they can be seen as causing a phase shift in A's output:
C advances A's output, D retards it.

Networks of such dynamic circuits connected together with feedback
reminded me of a hologram where the associative memory is distributed
across the whole of the circuit. That would be a big advantage as it
means that no single neuron is responsible for an individual memory.
Also the storage capacity is much higher than just the number
of synapses implies. Also no back propagation is required.

https://en.wikipedia.org/wiki/Holographic_associative_memory

I finally found the links I was looking for and it seems I am not
the first to note a similarity between neural networks and holograms
as it was proposed by Pribram in 1969. Apparently these are called
"Holographic Recurrent Networks" or "Holographic Reduced Representations"

Holographic Recurrent Networks, Plate, 1992
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.6991&rep=rep1&type=pdf

Holographic Reduced Representations, Plate, 1995
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.4546&rep=rep1&type=pdf

Plugging those titles into Google Scholar leads to
a great many papers on holographic neural memory, like

Encoding Structure in Holographic Reduced Representations, 2013
https://www.researchgate.net/profile/Douglas-Mewhort/publication/233836706_Encoding_Structure_in_Holographic_Reduced_Representations/links/57adb76608ae0932c976b72e/Encoding-Structure-in-Holographic-Reduced-Representations.pdf

Dynamically Structured Holographic Memory 2014
https://psyarxiv.com/pw93e/download/?format=pdf

Towards holographic brain memory based on randomization and
Walsh-Hadamard transformation 2016
https://www.researchgate.net/profile/Shlomi-Dolev/publication/289487007_Holographic_Brain_Memory_and_Computation/links/5d23fd2e92851cf4407280d0/Holographic-Brain-Memory-and-Computation.pdf

My gut tells me that somehow all of the above ties together
and the path forward to really useful artificial NN's is in
systems that combine all of the above ideas.
Which is why I thought convolution NNs look like a dead end.
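
For a concrete feel of Plate's scheme, here is a toy sketch of HRR
binding: circular convolution binds two random vectors, and circular
correlation with one of them pulls a noisy copy of the other back out
(dimension and the use of rand() are arbitrary choices of mine).

#include <math.h>
#include <stdlib.h>
#include <stdio.h>

#define N 512

static void randvec(double *v)                /* components ~ variance 1/N */
{
    for (int i = 0; i < N; i++)
        v[i] = ((double)rand() / RAND_MAX - 0.5) * sqrt(12.0 / N);
}

static void bind(const double *a, const double *b, double *c)
{
    for (int k = 0; k < N; k++) {             /* circular convolution */
        c[k] = 0.0;
        for (int j = 0; j < N; j++)
            c[k] += a[j] * b[(k - j + N) % N];
    }
}

static void unbind(const double *a, const double *c, double *out)
{
    for (int k = 0; k < N; k++) {             /* circular correlation */
        out[k] = 0.0;
        for (int j = 0; j < N; j++)
            out[k] += a[j] * c[(k + j) % N];
    }
}

static double cosine(const double *x, const double *y)
{
    double xy = 0, xx = 0, yy = 0;
    for (int i = 0; i < N; i++) { xy += x[i]*y[i]; xx += x[i]*x[i]; yy += y[i]*y[i]; }
    return xy / sqrt(xx * yy);
}

int main(void)
{
    static double a[N], b[N], c[N], b2[N];
    srand(7);
    randvec(a); randvec(b);
    bind(a, b, c);                            /* store the bound pair    */
    unbind(a, c, b2);                         /* cue with a, get ~b back */
    /* well above chance; a clean-up memory would snap it back to b */
    printf("cosine(b, recovered) = %f\n", cosine(b, b2));
    return 0;
}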

Re: Neural Network Accelerators

<546d3140-a52b-43f7-8b32-a8bff40e5694n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22054&group=comp.arch#22054

Newsgroups: comp.arch
X-Received: by 2002:a05:622a:13cf:: with SMTP id p15mr22016797qtk.9.1637199137421;
Wed, 17 Nov 2021 17:32:17 -0800 (PST)
X-Received: by 2002:a9d:6d01:: with SMTP id o1mr17513947otp.227.1637199137157;
Wed, 17 Nov 2021 17:32:17 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!proxad.net!feeder1-2.proxad.net!209.85.160.216.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Wed, 17 Nov 2021 17:32:17 -0800 (PST)
In-Reply-To: <84elJ.36392$_Y5.21579@fx29.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=7PRLigoAAACFPwZVkHN-LZAq4J2-eUVQ
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>
<84elJ.36392$_Y5.21579@fx29.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <546d3140-a52b-43f7-8b32-a8bff40e5694n@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@gmail.com (Yoga Man)
Injection-Date: Thu, 18 Nov 2021 01:32:17 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Yoga Man - Thu, 18 Nov 2021 01:32 UTC

On Wednesday, November 17, 2021 at 1:00:55 PM UTC-8, EricP wrote:
<snip>
> With respect to phase shifting of signals I was thinking it might be
> viewed as a dynamic mechanism.
> Viewed statically neuron A triggers if B AND (C OR D) inputs are present.
> But viewed dynamically signal C arrives early, D arrives later,
> so they can be seen as causing a phase shift in A's output,
> C advances A's output, D retards A's output.
>

Subutai Ahmad at Numenta has proposed that sub-threshold dendritic pre-charge can prime a neuron to fire enough sooner than competitive neurons in a voting circuit to allow it to inhibit their firing. It's certainly possible that there are asynchronous races between portions of neural circuits as they compete for their own axonal discharge behavior (fire/not fire/burst/partial depolarization) to get a reward from the system for that behavior (eg, nourishment that allows new synapses to be made; maybe some other hygienic activity by astrocytes). But if the system is phase-locked (with some controllable variability) to a reference, then isn't it easier to pick winners and losers? (Presumably, neurons that don't meet the next-cycle deadline don't get rewarded and gradually disconnect from that circuit.)

And it's possible that networks of asynchronous circuits might fire synchronously at chaotically stable frequencies without being influenced to do so, but that seems pretty unlikely to me, especially given the ubiquity of alpha, theta, etc. waves.

> Networks of such dynamic circuits connected together with feedback
> reminded me of a hologram where the associative memory is distributed
> across the whole of the circuit. That would be a big advantage as it
> means that no single neuron is responsible for an individual memory.
> Also the storage capacity is much higher than just the number
> of synapses implies. Also no back propagation is required.
>
> https://en.wikipedia.org/wiki/Holographic_associative_memory
>
> I finally found the links I was looking for and it seems I am not
> the first to note a similarity between neural networks and holograms
> as it was proposed by Pribram in 1969. Apparently these are called
> "Holographic Recurrent Networks" or "Holographic Reduced Representations"
>
> Holographic Recurrent Networks, Plate, 1992
> https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.6991&rep=rep1&type=pdf
>
> Holographic Reduced Representations, Plate, 1995
> https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.4546&rep=rep1&type=pdf
>
> Plugging those titles into Google Scholar leads to
> great many papers on holographic neural memory like
>
> Encoding Structure in Holographic Reduced Representations, 2013
> https://www.researchgate.net/profile/Douglas-Mewhort/publication/233836706_Encoding_Structure_in_Holographic_Reduced_Representations/links/57adb76608ae0932c976b72e/Encoding-Structure-in-Holographic-Reduced-Representations.pdf
>
> Dynamically Structured Holographic Memory 2014
> https://psyarxiv.com/pw93e/download/?format=pdf
>
> Towards holographic brain memory based on randomization and
> Walsh-Hadamard transformation 2016
> https://www.researchgate.net/profile/Shlomi-Dolev/publication/289487007_Holographic_Brain_Memory_and_Computation/links/5d23fd2e92851cf4407280d0/Holographic-Brain-Memory-and-Computation.pdf
>
> My gut tells me that somehow all of the above ties together
> and the path forward to really useful artificial NN's is in
> systems that combine all of the above ideas.
> Which is why I thought convolution NN look like a dead end.

I agree that the original CNNs are inferior to however networks are configured in human brains, and I mean no disrespect to your gut, but there are other ways to survive noise and component failure in an associative memory. One such is the sparse distributed representation (SDR), also described by Numenta. SDRs have other neat characteristics, like the ability to store multiple entries in one group of neurons while retaining the ability to access them individually, and the correspondence between the sparsity of neurons required for a representation in an SDR and the fact that only about 2% of neurons are active at any given moment. Numenta offers a suite of tools to explore SDRs.
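
A toy sketch of that store-several, recognize-each property (my own
illustration, not Numenta's code): OR a few sparse patterns into one
union vector, then compare overlaps.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS   2048              /* cells */
#define WORDS  (BITS / 64)
#define ACTIVE 40                /* about 2% of cells active per pattern */

typedef struct { uint64_t w[WORDS]; } sdr;

static void sdr_random(sdr *s)   /* random sparse pattern */
{
    for (int i = 0; i < WORDS; i++) s->w[i] = 0;
    for (int i = 0; i < ACTIVE; i++) {
        int bit = rand() % BITS;
        s->w[bit / 64] |= (uint64_t)1 << (bit % 64);
    }
}

static int popcount64(uint64_t x) { int n = 0; while (x) { x &= x - 1; n++; } return n; }

static int overlap(const sdr *a, const sdr *b)   /* shared active bits */
{
    int n = 0;
    for (int i = 0; i < WORDS; i++)
        n += popcount64(a->w[i] & b->w[i]);
    return n;
}

int main(void)
{
    sdr members[5], outsider, u = {{0}};
    srand(3);
    for (int i = 0; i < 5; i++) {              /* store 5 patterns in one union */
        sdr_random(&members[i]);
        for (int j = 0; j < WORDS; j++) u.w[j] |= members[i].w[j];
    }
    sdr_random(&outsider);
    printf("member 0 overlap with union: %d of ~%d\n", overlap(&members[0], &u), ACTIVE);
    printf("outsider overlap with union: %d of ~%d\n", overlap(&outsider, &u), ACTIVE);
    return 0;
}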

Subutai Ahmad has also shown how a single neuron can learn hundreds of contexts or sequences of its inputs. The Numenta group has done some really innovative and useful work.

One thing I don't like about Numenta's approach is their assumption that intelligence is neocortical. That seems kinda chauvinist coming from the species with the most blown-out neocortex, and a whole lot of animals survived and are surviving without neocortices. Besides, humans had bigger brains before 3,000 years ago, so were people smarter then?
https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full

And now I have some learning to do about the biological basis for holographic memory. I had dismissed it years ago, as reports of hippocampal localization of memory, place and grid cells, visual cortex specificity - esp. layer 1, etc., grew more frequent. I had not realized it's actively being pursued.

Thank you.

Re: Neural Network Accelerators

<ad3affb7-8897-476c-ab42-40ea671b583cn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=22055&group=comp.arch#22055

Newsgroups: comp.arch
X-Received: by 2002:a0c:f992:: with SMTP id t18mr60143567qvn.37.1637199307900;
Wed, 17 Nov 2021 17:35:07 -0800 (PST)
X-Received: by 2002:a05:6808:211c:: with SMTP id r28mr4292638oiw.155.1637199307663;
Wed, 17 Nov 2021 17:35:07 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border2.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Wed, 17 Nov 2021 17:35:07 -0800 (PST)
In-Reply-To: <546d3140-a52b-43f7-8b32-a8bff40e5694n@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=162.229.185.59; posting-account=Gm3E_woAAACkDRJFCvfChVjhgA24PTsb
NNTP-Posting-Host: 162.229.185.59
References: <8ef0724a-811b-47ff-ad20-709c8c211a37n@googlegroups.com>
<sml4tj$img$1@dont-email.me> <smlm6d$o6a$1@gioia.aioe.org>
<bfbea040-8ef0-4020-aa45-56ae1531f4f8n@googlegroups.com> <896c6088-0cdf-4ad3-b432-544b228f7924n@googlegroups.com>
<FtTjJ.17520$hm7.7298@fx07.iad> <oSTjJ.50938$SR4.17611@fx43.iad>
<smp6vg$c4v$1@dont-email.me> <cV8kJ.31989$_Y5.24885@fx29.iad>
<17882f4b-ad6f-4603-85c2-165ca9fc84f9n@googlegroups.com> <N3bkJ.63875$Wkjc.36258@fx35.iad>
<aCfkJ.35506$SW5.13028@fx45.iad> <779e5c29-0576-4dc1-8c8e-79e57387c7b9n@googlegroups.com>
<ILvkJ.66683$Wkjc.15861@fx35.iad> <fae39d16-2b37-4edb-bb3f-00bb4941a7ccn@googlegroups.com>
<84elJ.36392$_Y5.21579@fx29.iad> <546d3140-a52b-43f7-8b32-a8bff40e5694n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <ad3affb7-8897-476c-ab42-40ea671b583cn@googlegroups.com>
Subject: Re: Neural Network Accelerators
From: yogaman...@yahoo.com (Scott Smader)
Injection-Date: Thu, 18 Nov 2021 01:35:07 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 102
 by: Scott Smader - Thu, 18 Nov 2021 01:35 UTC

On Wednesday, November 17, 2021 at 5:32:18 PM UTC-8, Yoga Man wrote:
> On Wednesday, November 17, 2021 at 1:00:55 PM UTC-8, EricP wrote:
> <snip>
> > With respect to phase shifting of signals I was thinking it might be
> > viewed as a dynamic mechanism.
> > Viewed statically neuron A triggers if B AND (C OR D) inputs are present.
> > But viewed dynamically signal C arrives early, D arrives later,
> > so they can be seen as causing a phase shift in A's output,
> > C advances A's output, D retards A's output.
> >
> Subutai Ahmad at Numenta has proposed that sub-threshold dendritic pre-charge can prime a neuron to fire enough sooner than competitive neurons in a voting circuit to allow it to inhibit their firing. It's certainly possible that there are asynchronous races between portions of neural circuits as they compete for their own axonal discharge behavior (fire/not fire/burst/partial depolarization) to get a reward from the system for that behavior (eg, nourishment that allows new synapses to be made; maybe some other hygienic activity by astrocytes). But if the system is phase-locked (with some controllable variability) to a reference, then isn't it easier to pick winners and losers? (Presumably, neurons that don't meet the next-cycle deadline don't get rewarded and gradually disconnect from that circuit.)
>
> And it's possible that networks of asynchronous circuits might fire synchronously at chaotically stable frequencies without being influenced to do so, but that seems pretty unlikely to me, especially given the ubiquity of alpha, theta, etc. waves.
> > Networks of such dynamic circuits connected together with feedback
> > reminded me of a hologram where the associative memory is distributed
> > across the whole of the circuit. That would be a big advantage as it
> > means that no single neuron is responsible for an individual memory.
> > Also the storage capacity is much higher than just the number
> > of synapses implies. Also no back propagation is required.
> >
> > https://en.wikipedia.org/wiki/Holographic_associative_memory
> >
> > I finally found the links I was looking for and it seems I am not
> > the first to note a similarity between neural networks and holograms
> > as it was proposed by Pribram in 1969. Apparently these are called
> > "Holographic Recurrent Networks" or "Holographic Reduced Representations"
> >
> > Holographic Recurrent Networks, Plate, 1992
> > https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.6991&rep=rep1&type=pdf
> >
> > Holographic Reduced Representations, Plate, 1995
> > https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.4546&rep=rep1&type=pdf
> >
> > Plugging those titles into Google Scholar leads to
> > great many papers on holographic neural memory like
> >
> > Encoding Structure in Holographic Reduced Representations, 2013
> > https://www.researchgate.net/profile/Douglas-Mewhort/publication/233836706_Encoding_Structure_in_Holographic_Reduced_Representations/links/57adb76608ae0932c976b72e/Encoding-Structure-in-Holographic-Reduced-Representations..pdf
> >
> > Dynamically Structured Holographic Memory 2014
> > https://psyarxiv.com/pw93e/download/?format=pdf
> >
> > Towards holographic brain memory based on randomization and
> > Walsh-Hadamard transformation 2016
> > https://www.researchgate.net/profile/Shlomi-Dolev/publication/289487007_Holographic_Brain_Memory_and_Computation/links/5d23fd2e92851cf4407280d0/Holographic-Brain-Memory-and-Computation.pdf
> >
> > My gut tells me that somehow all of the above ties together
> > and the path forward to really useful artificial NN's is in
> > systems that combine all of the above ideas.
> > Which is why I thought convolution NN look like a dead end.
> I agree that the original CNNs are inferior to however networks are configured in human brains, and I mean no disrespect to your gut, but there are other ways to survive noise and component failure in an associative memory. One such is sparse data representation, also described by Numenta. SDRs have other neat characteristics, like the ability to store multiple entries in one group of neurons while retaining the ability to access them individually, and the correspondence between the sparsity of neurons required for a representation in an SDR and the fact that only about 2% of neurons are active at any given moment. Numenta offer a suite of tools to explore SDRs.
>
> Subutai Ahmad has also shown how a single neuron can learn hundreds of contexts or sequence of its inputs. The Numenta group has done some really innovative and useful work.
>
> One thing I don't like about Numenta's approach is their assumption that intelligence is neocortical. That seems kinda chauvinist coming from the species with the most blown-out neocortexx, and a whole lot of animals survived and are surviving without neocortices. Besides, humans had bigger brains before 3,000 years ago, so were people smarter then?
> https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full
>
> And now I have some learning to do about the biological basis for holographic memory. I had dismissed it years ago, as reports of hippocampal localization of memory, place and grid cells, visual cortex specificity - esp. layer 1, etc., grew more frequent. I had not realized it's actively being pursued.
>
> Thank you.
Oops. I was logged in with my other Google ID. I'm usually Yoga Man in other contexts. I'll try to keep it Scott Smader here.
