devel / comp.arch / Re: Someone's Trying Again (Ascenium)

Subject -- Author
* Someone's Trying Again (Ascenium) -- Quadibloc
+* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
|`- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
+* Re: Someone's Trying Again (Ascenium) -- luke.l...@gmail.com
|+- Re: Someone's Trying Again (Ascenium) -- John Dallman
|`* Re: Someone's Trying Again (Ascenium) -- George Neuner
| `- Re: Someone's Trying Again (Ascenium) -- Chris M. Thomasson
+* Re: Someone's Trying Again (Ascenium) -- David Brown
|`* Re: Someone's Trying Again (Ascenium) -- Marcus
| +* Re: Someone's Trying Again (Ascenium) -- David Brown
| |+* Re: Someone's Trying Again (Ascenium) -- Theo Markettos
| ||+* Re: Someone's Trying Again (Ascenium) -- Marcus
| |||+* Re: Someone's Trying Again (Ascenium) -- David Brown
| ||||`* Re: Power efficient neural networks (was: Someone's Trying Again -- Marcus
| |||| +- Re: Power efficient neural networks (was: Someone's Trying Again -- David Brown
| |||| `* Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| ||||  +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  |+* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  ||`* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  || |`* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| ||||  || | `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |  `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |   `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |    `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |     `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      +- Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      |`* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      | +* Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- MitchAlsup
| ||||  || |      | |`* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |      | | `- Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |      | +- Re: Power efficient neural networks (was: Someone's Trying Again -- John Dallman
| ||||  || |      | +- Re: Power efficient neural networks (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| ||||  || |      | `* Cooling (was: Power efficient neural networks) -- Anton Ertl
| ||||  || |      |  `- Re: Cooling (was: Power efficient neural networks) -- Thomas Koenig
| ||||  || |      `* Re: Power efficient neural networks (was: Someone's Trying Again -- Ivan Godard
| ||||  || |       `* Re: Power efficient neural networks (was: Someone's Trying Again -- Thomas Koenig
| ||||  || |        `* Re: Power efficient neural networks -- antispam
| ||||  || |         `* Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          +* Re: Power efficient neural networks -- MitchAlsup
| ||||  || |          |`* Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          | `* Re: Power efficient neural networks -- MitchAlsup
| ||||  || |          |  `- Re: Power efficient neural networks -- Thomas Koenig
| ||||  || |          `* Re: Power efficient neural networks -- antispam
| ||||  || |           `- Re: Power efficient neural networks -- Thomas Koenig
| ||||  || `* Re: Power efficient neural networks -- Stefan Monnier
| ||||  ||  `* Re: Power efficient neural networks -- Quadibloc
| ||||  ||   `- Re: Power efficient neural networks -- Quadibloc
| ||||  |`- Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| ||||  `* Re: Power efficient neural networks (was: Someone's Trying Again -- David Brown
| ||||   `- Re: Power efficient neural networks (was: Someone's Trying Again -- Stephen Fuld
| |||+- Re: Someone's Trying Again (Ascenium) -- Stefan Monnier
| |||`- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| ||`- Re: Someone's Trying Again (Ascenium) -- Marcus
| |`* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| | +- Re: Someone's Trying Again (Ascenium) -- chris
| | `* Re: Someone's Trying Again (Ascenium) -- George Neuner
| |  +- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |  `* Re: Someone's Trying Again (Ascenium) -- Anton Ertl
| |   +* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   |+* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   ||`* Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   || `* Re: Someone's Trying Again (Ascenium) -- Marcus
| |   ||  `* Re: Someone's Trying Again (Ascenium) -- Stefan Monnier
| |   ||   `- Re: Someone's Trying Again (Ascenium) -- Marcus
| |   |`- Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   +* Re: Someone's Trying Again (Ascenium) -- George Neuner
| |   |+- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| |   |+* Re: Someone's Trying Again (Ascenium) -- Marcus
| |   ||`* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   || +* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || |`* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || | `* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   || |  `* Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || |   `- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   || `* Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  +* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |`* Re: Someone's Trying Again (Ascenium) -- pec...@gmail.com
| |   ||  | `* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |  +* Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  |  |`* Re: Someone's Trying Again (Ascenium) -- Thomas Koenig
| |   ||  |  | `- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  |  +* Re: Someone's Trying Again (Ascenium) -- Bill Findlay
| |   ||  |  |`- Re: Someone's Trying Again (Ascenium) -- Quadibloc
| |   ||  |  `* Re: Someone's Trying Again (Ascenium) -- pec...@gmail.com
| |   ||  |   `- Re: Someone's Trying Again (Ascenium) -- Terje Mathisen
| |   ||  `- Re: Someone's Trying Again (Ascenium) -- antispam
| |   |`* Parallelization (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| |   | +- Re: Parallelization -- Stefan Monnier
| |   | +* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | |+* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Quadibloc
| |   | ||+* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Chris M. Thomasson
| |   | |||`- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | ||+- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Branimir Maksimovic
| |   | ||+* Re: Parallelization -- Stefan Monnier
| |   | |||`* Re: Parallelization -- Branimir Maksimovic
| |   | ||| `* Re: Parallelization -- Stefan Monnier
| |   | |||  `* Re: Parallelization -- Branimir Maksimovic
| |   | |||   `- Re: Parallelization -- Stefan Monnier
| |   | ||`- Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Anton Ertl
| |   | |`* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Marcus
| |   | `* Re: Parallelization (was: Someone's Trying Again (Ascenium)) -- Chris M. Thomasson
| |   `* Re: Someone's Trying Again (Ascenium) -- Tim Rentsch
| +- Re: Someone's Trying Again (Ascenium) -- MitchAlsup
| `* Re: wonderful compilers, or Someone's Trying Again (Ascenium) -- John Levine
`* Re: Someone's Trying Again (Ascenium) -- Theo Markettos

Re: Someone's Trying Again (Ascenium)

<sd3u8q$kk0$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18928&group=comp.arch#18928

 by: Marcus - Mon, 19 Jul 2021 13:21 UTC

On 2021-07-19, Stefan Monnier wrote:
>> This is the reason that when you need instant threads, you set up a
>> thread pool of dormant threads instead of spawning a new OS thread
>> every time you need one. I personally consider that an ugly work-around
>> for inefficient thread creation.
>
> Part of the problem is linked to the precise meaning of "thread" (as in
> whether or not you care about all the different features offered for
> threads by the OS) and the devil in the details.

Yes, and short of language support for "thin" threads (i.e. without
proper OS support), most solutions are based on full OS threads.

>
> Also "instant threads" is still a lie for thread created from a thread
> pool of dormant threads, so it would be valuable to have benchmark
> numbers to see whether those are closer to "0 cycles" or to "1000
> cycle".

True. Starting up a thread usually requires things like passing a
function pointer in a message queue and sending a message.

/Marcus
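
A quick way to put numbers on Stefan's question is to time both paths. The Python sketch below is an editorial illustration, not from the thread; the N and max_workers values are arbitrary. It measures dispatching an empty task to a freshly spawned OS thread versus to a pre-started pool. On typical hardware both come out in the microsecond range, i.e. far closer to "1000 cycles" than to "0 cycles", with the pool usually well ahead.

import threading
import time
from concurrent.futures import ThreadPoolExecutor

def task():
    pass  # empty payload: we only measure dispatch overhead

N = 1000

# Cost of spawning (and reaping) a fresh OS thread per task.
t0 = time.perf_counter()
for _ in range(N):
    th = threading.Thread(target=task)
    th.start()
    th.join()
spawn = (time.perf_counter() - t0) / N

# Cost of handing the same task to a pool of already-running threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    t0 = time.perf_counter()
    for _ in range(N):
        pool.submit(task).result()
    pooled = (time.perf_counter() - t0) / N

print(f"fresh thread: {spawn * 1e6:.1f} us/task")
print(f"pool dispatch: {pooled * 1e6:.1f} us/task")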

Re: Someone's Trying Again (Ascenium)

<08f01775-114b-466d-90aa-15f731721511n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18940&group=comp.arch#18940

 by: Quadibloc - Tue, 20 Jul 2021 06:53 UTC

On Monday, July 19, 2021 at 5:09:49 AM UTC-6, Thomas Koenig wrote:

> In the computer magazine I read this puzzle ("Chip"), it was
> actually 7,47 DM. And the issue must have been around October or
> November of that year.

There were issues of that magazine on the Internet Archive...

John Savard

Re: Someone's Trying Again (Ascenium)

<c1375b93-60d5-4981-b5df-fd6b49b3b71fn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18941&group=comp.arch#18941

 by: Quadibloc - Tue, 20 Jul 2021 07:11 UTC

On Tuesday, July 20, 2021 at 12:53:12 AM UTC-6, Quadibloc wrote:
> On Monday, July 19, 2021 at 5:09:49 AM UTC-6, Thomas Koenig wrote:

> > In the computer magazine I read this puzzle ("Chip"), it was
> > actually 7,47 DM. And the issue must have been around October or
> > November of that year.

> There were issues of that magazine on the Internet Archive...

Or so I thought. Actually, only the very first issue, plus some of their
CHIP Specials, including one on computerizing one's Märklin model
railroad, are there.

Plus, there is another computer magazine of the same name which is
published in Hungary, Romania, and Russia... and yet another one which
is in English and published in Malaysia.

John Savard

Re: Someone's Trying Again (Ascenium)

<0bcd77d9-30db-4cfa-92ba-44099644c419n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18942&group=comp.arch#18942

 by: pec...@gmail.com - Tue, 20 Jul 2021 08:30 UTC

On Monday, 19 July 2021 at 14:35:40 UTC+2, Thomas Koenig wrote:
> That's a pretty good guess, going through three variables i <=
> j <= k and setting m = 7-i-j-k, while removing duplicates setting
> the bounds so that k <= m, gives around 2.9e6 iterations.

You are not serious...
# Roots of x**2 - c*x + c = 0: a pair whose sum and product are both c.
def brahmagupta(c):
    return (c - (c**2 - 4*c)**0.5)/2, (c + (c**2 - 4*c)**0.5)/2
brahmagupta(7.47) = (1.1893713939382278, 6.2806286060617715)
brahmagupta(6.2806286060617715) = (1.2480169133948371, 5.031983086605163)
brahmagupta(5.031983086605163)=(1.3764817684693538, 3.656171118366921)
"real" solution:
1.1893713939382278,1.2480169133948371,1.3764817684693538,3.656171118366921
starting point:
1.19+1.25+1.37+3.66=7.470000000000001
1.19*1.25*1.37*3.66 = 7.458622500000001
few manual iterations:
1.20*1.25*1.37*3.65=7.50075
1.19*1.25*1.38*3.65=7.492537499999998
1.18*1.25*1.39*3.65=7.483412499999998
1.17*1.25*1.40*3.65=7.473374999999999 - "upper bound"
1.16*1.25*1.41*3.65=7.462424999999999
1.17*1.24*1.41*3.65=7.466542199999999 - "lower bound"

Re: Someone's Trying Again (Ascenium)

<sd62ts$19o$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18943&group=comp.arch#18943

 by: Thomas Koenig - Tue, 20 Jul 2021 08:53 UTC

pec...@gmail.com <peceed@gmail.com> wrote:
> On Monday, 19 July 2021 at 14:35:40 UTC+2, Thomas Koenig wrote:
>> That's a pretty good guess, going through three variables i <=
>> j <= k and setting m = 7-i-j-k, while removing duplicates setting
>> the bounds so that k <= m, gives around 2.9e6 iterations.
>
> You are not serious...
> def brahmagupta(c):
>     return (c - (c**2 - 4*c)**0.5)/2, (c + (c**2 - 4*c)**0.5)/2
> brahmagupta(7.47) = (1.1893713939382278, 6.2806286060617715)
> brahmagupta(6.2806286060617715) = (1.2480169133948371, 5.031983086605163)
> brahmagupta(5.031983086605163)=(1.3764817684693538, 3.656171118366921)
> "real" solution:
> 1.1893713939382278,1.2480169133948371,1.3764817684693538,3.656171118366921
> starting point:
> 1.19+1.25+1.37+3.66=7.470000000000001
> 1.19*1.25*1.37*3.66 = 7.458622500000001
> few manual iterations:
> 1.20*1.25*1.37*3.65=7.50075
> 1.19*1.25*1.38*3.65=7.492537499999998
> 1.18*1.25*1.39*3.65=7.483412499999998
> 1.17*1.25*1.40*3.65=7.473374999999999 - "upper bound"
> 1.16*1.25*1.41*3.65=7.462424999999999
> 1.17*1.24*1.41*3.65=7.466542199999999 - "lower bound"

I'm not sure what you calculated here.

However, as stated, this is a Diophantine equation (integer solutions
only), so approximate solutions are not valid.

Diophantine equations are generally much harder to solve than
equations over the reals, which can be solved approximately using
floating-point values.

Re: Someone's Trying Again (Ascenium)

<sd6d5m$6e3$1@gioia.aioe.org>

https://www.novabbs.com/devel/article-flat.php?id=18945&group=comp.arch#18945

 by: Terje Mathisen - Tue, 20 Jul 2021 11:48 UTC

Thomas Koenig wrote:
> pec...@gmail.com <peceed@gmail.com> wrote:
>> On Monday, 19 July 2021 at 14:35:40 UTC+2, Thomas Koenig wrote:
>>> That's a pretty good guess, going through three variables i <=
>>> j <= k and setting m = 7-i-j-k, while removing duplicates setting
>>> the bounds so that k <= m, gives around 2.9e6 iterations.
>>
>> You are not serious...
>> def brahmagupta(c):
>>     return (c - (c**2 - 4*c)**0.5)/2, (c + (c**2 - 4*c)**0.5)/2
>> brahmagupta(7.47) = (1.1893713939382278, 6.2806286060617715)
>> brahmagupta(6.2806286060617715) = (1.2480169133948371, 5.031983086605163)
>> brahmagupta(5.031983086605163)=(1.3764817684693538, 3.656171118366921)
>> "real" solution:
>> 1.1893713939382278,1.2480169133948371,1.3764817684693538,3.656171118366921
>> starting point:
>> 1.19+1.25+1.37+3.66=7.470000000000001
>> 1.19*1.25*1.37*3.66 = 7.458622500000001
>> few manual iterations:
>> 1.20*1.25*1.37*3.65=7.50075
>> 1.19*1.25*1.38*3.65=7.492537499999998
>> 1.18*1.25*1.39*3.65=7.483412499999998
>> 1.17*1.25*1.40*3.65=7.473374999999999 - "upper bound"
>> 1.16*1.25*1.41*3.65=7.462424999999999
>> 1.17*1.24*1.41*3.65=7.466542199999999 - "lower bound"
>
> I'm not sure what you calculated here.
>
> However, as stated, this is a Diophantine equation (integer solutions
> only), so approximate solutions are not valid.
>
> Diophantine equations are generally much harder to solve than
> equations that involve real numbers that can be solved approximately
> using floating point values.
>
I broke down and wrote a Perl solver for this question.

Even with Perl's slow interpreter, and no attempt to extract prime
factors from 747000000, just a brute-force scan, it took about 0.2
seconds to find the solutions for either 7.47 or 7.11 as the total sum.
(Verifying that they were in fact unique took another 40 ms.)

C(++) would almost certainly run this at least an order of magnitude
faster, but using the prime factors as the starting point, noting that
there is a single large prime factor (747/9 = 83), and then using that
as the first term, would help even more:
....
Yes indeed!

Starting the search with n*0.83 as one of the item prices reduced the
search time from 200 ms to less than 9 ms, and full verification took
just 11 ms.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

Re: Someone's Trying Again (Ascenium)

<sd71b1$mtp$1@newsreader4.netcologne.de>

https://www.novabbs.com/devel/article-flat.php?id=18947&group=comp.arch#18947

 by: Thomas Koenig - Tue, 20 Jul 2021 17:32 UTC

Terje Mathisen <terje.mathisen@tmsw.no> wrote:

> Even with perl's slow interpreter, and no attempt to extract prime
> factors from 747000000, just a brute force scan, it took about 0.2
> seconds to find the solutions for either 7.47 or 7.11 as the total sum.
> (Verifying that they were in fact unique took another 40 ms.)

Which shows that an interpreter on a (I assume) relatively modern
machine in 2021 is _much_ faster than an interpreter on a machine
introduced in 1982: an 8-bit processor running at ~0.3% of the clock
speed of today's machines, doing floating point in a 40-bit format
with a 32-bit mantissa, without even an instruction for an 8-bit
integer multiply.

It was _all_ shift and add for multiplication.

>
> C(++) would almost certainly run this at least an order of magnitude
> faster, but using the prime roots as the starting point, noting that
> there is a single large factor (of 747/9=83), and then use that as the
> first term would help even more:
> ...
> Yes indeed!

Ah, I don't think we noticed that at the time. Good catch!

> Starting the search with n*0.83 as one of the item prices reduced the
> search time from 200ms to less than 9, and full verification took just
> 11 ms.

There is actually a bit more to the story. It was one of the
first days after I had started studying, which is why I remember
the approximate date so well. The people I shared a flat with had
looked at the problem for a short time without even hitting on the
rather obvious fact that, for a+b+c+d=s, you only need three loops.

We ran a few benchmarks and concluded that a run would take a
few months on the C-64, and gave up for a time.

One of us, a first-semester computer science student, then left.
The rest of us looked at the problem again, noticed d=s-a-b-c, and
used a few more simplifications, which brought down the calculation
time to around half an hour.

One of us (not me) then had an idea. He sat down and wrote down
random formulas from "Bronstein" (the standard handbook of
mathematics). The formulas looked impressive, but had absolutely no
bearing on the problem. When the C.S. student returned late in the
evening, we gave him the sheets of paper and told him this was the
analytical solution. The unsuspecting C.S. student believed us for a
few weeks, because he still thought that a calculation would have
taken months, and was quite impressed.

Parallelization (was: Someone's Trying Again (Ascenium))

<2021Jul20.185454@mips.complang.tuwien.ac.at>

https://www.novabbs.com/devel/article-flat.php?id=18948&group=comp.arch#18948

 by: Anton Ertl - Tue, 20 Jul 2021 16:54 UTC

George Neuner <gneuner2@comcast.net> writes:
>On Sun, 18 Jul 2021 15:55:24 GMT, anton@mips.complang.tuwien.ac.at
>(Anton Ertl) wrote:
>
>>George Neuner <gneuner2@comcast.net> writes:
>>
>>>The problem - at least with current hardware - is that programmers are
>>>much better at identifying what CAN be done in parallel than what
>>>SHOULD be done in parallel.
>>
>>You make it sound as if that's a problem with the programmers, not
>>with the hardware. But it's fundamental to programming (at least in
>>areas affected by the software crisis, i.e., not supercomputers), so
>>it has to be solved at the system level (i.e., hardware, compiler,
>>etc.).
>
>It /IS/ a problem with the programmers. The average "developer" now
>has no CS, engineering, or (advanced) mathematics education, and their
>programming skills are pitiful - only slightly above "script kiddie".
>This is an unfortunate fact of life that I think too often is lost on
>some denizens of this group.

One could have an interesting discussion about that, but that's
beside the point wrt the parallelization problem. Even if the
developers have all the education one could wish for, if they have to
produce a maintainable program for a big problem (resulting in a big
program) with the minimal development effort, they will divide the
problem into subproblems and divide the program into parts for dealing
with the subproblems etc. But parallelization with the current cost
structure has to be done for the whole program and cannot be
subdivided in the same way.

>Given the ability to create "parallel" tasks, an /average/ programmer
>is very likely to naively create large numbers of new tasks regardless
>of resources being available to actually execute them.

Yes, if you tell them to create parallel tasks. And good programmers
will do so, too, unless you tell them that efficient parallelization
is more important than maintainability.

>Which maybe is fine if the number of tasks (relatively) is small, or
>if many of them are I/O bound and the use is for /concurrency/. But
>most programmers do not understand the difference between "parallel"
>and "concurrent", and too many don't understand why spawning large
>numbers of tasks can slow down the program.

Sure. That's the way to write parallel programs that is in line with
the divide-and-conquer approach we have established for writing
programs for big problems. So if it slows down programs, the solution
is not to tell the programmers not to do that, but to make systems
that run such programs efficiently. E.g., have hardware where having
many more tasks than hardware threads does not slow down the program.
Or have a compiler and run-time system that combines the many tasks
written by the programmer into so few intermediate tasks that the
overheads of having more tasks than threads play little role. Or
both.

>>Why is it fundamental? Because we build maintainable software by
>>splitting it into mostly-independent parts. Deciding how much to
>>parallelize on current hardware needs a global view of the program,
>>which programmers usually do not have; and even when they have it,
>>their decisions will probably be outdated after a while of maintaining
>>the program.
>>
>>We have similar problems with explicitly managed fast memory, which is
>>why we don't see that in general-purpose computers; instead, we see
>>caches (a software-crisis-compatible variant of fast memory).
>
>We have similar problems with programmer managed dynamic allocation.
>All the modern languages use GC /because/ repeated studies have shown
>that average programmers largely are incapable of writing leak-proof
>code without it.

Good example. Garbage collection is a good solution to the dynamic
memory allocation problem, including for good programmers. Now we
need such a solution for the parallelization problem.

>>Yet another problem of this kind is fixed-point scaling. That's why
>>we have floating-point.
>
>And the same people who, in the past, would not have understood the
>issues of using fixed-point now don't understand the issues of using
>floating point.

Sure, FP has its pitfalls, but it's possible to write, say, a general
FP matrix multiplication subroutine (and the pitfalls of FP typically
don't play much role in that), while for fixed-point you would have to
write one with the right scaling for every application it is used in;
or maybe these days have a templated C++ library, and instantiate it
with the appropriate scalings for each use.

>>So what do we need of the system? Ideally having more parallel parts
>>than needed should not cause a slowdown. This has two aspects:
>>
>>1) Thread creation and destruction should be cheap.
>>
>>2) The harder part is memory locality: Sequential code often works
>>very well on caches because it has a lot of temporal and spatial
>>locality. If the code is split into more tasks than necessary, how do
>>we avoid losing locality and thus losing some of the benefits of
>>caching?
>
>Agreed! But this has little to do with any of my points.

But it has to do with the parallelization problem, and more realistic
solutions for it than perfect programmers with unlimited time on their
hands.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>
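
As a concrete illustration of the run-time-combining idea above (an editorial sketch; the function names and the chunk size are invented, not Anton's): the programmer writes one logical task per element, and a scheduler coalesces them into a few chunk-sized tasks so per-task overhead stays negligible no matter how finely the problem was divided. (In CPython the GIL limits actual CPU parallelism; the point here is the task structure, which a compiler or run-time system would apply automatically.)

from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x              # the programmer's fine-grained logical task

def run_coalesced(items, chunk=50_000):
    # One submitted task stands in for many logical tasks: this is the
    # "combine into few intermediate tasks" step, done here by hand.
    def run_chunk(lo, hi):
        return sum(work(x) for x in items[lo:hi])
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_chunk, lo, min(lo + chunk, len(items)))
                   for lo in range(0, len(items), chunk)]
        return sum(f.result() for f in futures)

print(run_coalesced(list(range(1_000_000))))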

Re: Parallelization

<jwva6mg50j9.fsf-monnier+comp.arch@gnu.org>

https://www.novabbs.com/devel/article-flat.php?id=18949&group=comp.arch#18949

 by: Stefan Monnier - Tue, 20 Jul 2021 18:05 UTC

Anton Ertl [2021-07-20 16:54:54] wrote:
> programs for big problems. So if it slows down programs, the solution
> is not to tell the programmers not to do that, but to make systems
> that run such programs efficiently. E.g., have hardware where having

Indeed, I think the only viable "solution" is to ask the programmers to
write code in a way that exposes as much parallelism as possible, and
then have the compiler "auto-sequentialize" the code.

It should be a bit easier for the compiler: at least it's easy to
sequentialize correctly, so the main difficulty is to sequentialize in
a way that maximizes the performance.

Stefan

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<AjFJI.30937$r21.16512@fx38.iad>

https://www.novabbs.com/devel/article-flat.php?id=18952&group=comp.arch#18952

 by: Branimir Maksimovic - Tue, 20 Jul 2021 19:17 UTC

On 2021-07-20, Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
> George Neuner <gneuner2@comcast.net> writes:
>>On Sun, 18 Jul 2021 15:55:24 GMT, anton@mips.complang.tuwien.ac.at
>>(Anton Ertl) wrote:
>>
>>>George Neuner <gneuner2@comcast.net> writes:
>>>
>>>>The problem - at least with current hardware - is that programmers are
>>>>much better at identifying what CAN be done in parallel than what
>>>>SHOULD be done in parallel.
>>>
>>>You make it sound as if that's a problem with the programmers, not
>>>with the hardware. But it's fundamental to programming (at least in
>>>areas affected by the software crisis, i.e., not supercomputers), so
>>>it has to be solved at the system level (i.e., hardware, compiler,
>>>etc.).
>>
>>It /IS/ a problem with the programmers. The average "developer" now
>>has no CS, engineering, or (advanced) mathematics education, and their
>>programming skills are pitiful - only slightly above "script kiddie".
>>This is an unfortunate fact of life that I think too often is lost on
>>some denizens of this group.
>
> One could have an interesting discussion about that, but that's
> besides the point wrt the parallelization problem. Even if the
> developers have all the education one could wish for, if they have to
> produce a maintainable program for a big problem (resulting in a big
> program) with the minimal development effort, they will divide the
> problem into subproblems and divide the program into parts for dealing
> with the subproblems etc. But parallelization with the current cost
> structure has to be done for the whole program and cannot be
> subdivided in the same way.

Well, almost all new languages have an async/await mechanism, which
makes concurrent programming trivial...
Swift got it in 5.5, which is still in beta...

>
>>Given the ability to create "parallel" tasks, an /average/ programmer
>>is very likely to naively create large numbers of new tasks regardless
>>of resources being available to actually execute them.
>
> Yes, if you tell them to create parallel tasks. And good programmers
> will do so, too, unless you tell them that efficient parallelization
> is more important than maintainability.

Not an issue any more...

>
>>Which maybe is fine if the number of tasks (relatively) is small, or
>>if many of them are I/O bound and the use is for /concurrency/. But
>>most programmers do not understand the difference between "parallel"
>>and "concurrent", and too many don't understand why spawning large
>>numbers of tasks can slow down the program.
>
> Sure. That's the way to write parallel programs that is in line with
> the divide-and-conquer approach we have established for writing
> programs for big problems. So if it slows down programs, the solution
> is not to tell the programmers not to do that, but to make systems
> that run such programs efficiently. E.g., have hardware where having
> many more tasks than hardware threads does not slow down the program.
> Or have a compiler and run-time system that combines the many tasks
> written by the programmer into so few intermediate tasks that the
> overheads of having more tasks than threads play little role. Or
> both.

Parallel and concurrent are the same thing :P
Someone told me that it depends on the definition, but the distinction is dim :P

>
>>>Why is it fundamental? Because we build maintainable software by
>>>splitting it into mostly-independent parts. Deciding how much to
>>>parallelize on current hardware needs a global view of the program,
>>>which programmers usually do not have; and even when they have it,
>>>their decisions will probably be outdated after a while of maintaining
>>>the program.
>>>
>>>We have similar problems with explicitly managed fast memory, which is
>>>why we don't see that in general-purpose computers; instead, we see
>>>caches (a software-crisis-compatible variant of fast memory).
>>
>>We have similar problems with programmer managed dynamic allocation.
>>All the modern languages use GC /because/ repeated studies have shown
>>that average programmers largely are incapable of writing leak-proof
>>code without it.
>
> Good example. Garbage collection is a good solution to the dynamic
> memory allocation problem, including for good programmers. Now we
> need such a solution for the parallelization problem.

C++, Rust, and Swift all do not have GC...
They use RAII and reference counting (shared_ptr and such).

>
>>>Yet another problem of this kind is fixed-point scaling. That's why
>>>we have floating-point.
>>
>>And the same people who, in the past, would not have understood the
>>issues of using fixed-point now don't understand the issues of using
>>floating point.
>
> Sure, FP has its pitfalls, but it's possible to write, say, a general
> FP matrix multiplication subroutine (and the pitfalls of FP typically
> don't play much role in that), while for fixed-point you would have to
> write one with the right scaling for every application it is used in;
> or maybe these days have a templated C++ library, and instantiate it
> with the appropriate scalings for each use.
>
>>>So what do we need of the system? Ideally having more parallel parts
>>>than needed should not cause a slowdown. This has two aspects:
>>>
>>>1) Thread creation and destruction should be cheap.
>>>
>>>2) The harder part is memory locality: Sequential code often works
>>>very well on caches because it has a lot of temporal and spatial
>>>locality. If the code is split into more tasks than necessary, how do
>>>we avoid losing locality and thus losing some of the benefits of
>>>caching?
>>
>>Agreed! But this has little to do with any of my points.
>
> But it has to do with the parallelization problem, and more realistic
> solutions for it than perfect programmers with unlimited time on their
> hands.
Well, Rust and Swift made efforts to produce fewer bugs with concurrent
tasks, that is, with shared state :P

>
> - anton

--
bmaxa now listens Arguments by Robots in disguise from Robots in disguise
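
The async/await style Branimir mentions does give the "thin" tasks discussed earlier in the thread. A minimal Python asyncio sketch (illustrative, not from the post): ten thousand concurrent tasks are cheap because they are coroutines scheduled by the runtime, not OS threads.

import asyncio

async def fetch(i):
    await asyncio.sleep(0.01)   # stand-in for an I/O-bound wait
    return i

async def main():
    # Spawning 10,000 concurrent tasks this way is cheap; each is a
    # coroutine object, not an OS thread.
    results = await asyncio.gather(*(fetch(i) for i in range(10_000)))
    print(len(results), "tasks completed")

asyncio.run(main())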

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<sd7b7b$187u$1@gioia.aioe.org>

https://www.novabbs.com/devel/article-flat.php?id=18954&group=comp.arch#18954

 by: Chris M. Thomasson - Tue, 20 Jul 2021 20:20 UTC

On 7/20/2021 9:54 AM, Anton Ertl wrote:
[...]
>> We have similar problems with programmer managed dynamic allocation.
>> All the modern languages use GC /because/ repeated studies have shown
>> that average programmers largely are incapable of writing leak-proof
>> code without it.
>
> Good example. Garbage collection is a good solution to the dynamic
> memory allocation problem, including for good programmers. Now we
> need such a solution for the parallelization problem.
[...]

I remember way back when somebody was having a performance issue in a
GC'ed environment when the system was under a great deal of sustained
load. Memory would grow and grow, and the GC was spending a lot of time
trying to handle all of it. So I suggested using distributed lock-free
node caches, and that sped things up by orders of magnitude. However,
pushing/popping from the node cache is a form of manual memory
management. They did not like it because of that, but ended up using it
anyway. So, manual memory management can "help" a GC under a large
amount of stress.
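
Here is a single-threaded Python sketch of the node-cache idea Chris describes (his version was distributed and lock-free; the class and method names here are invented): freed nodes go onto a free list and are reused, so a steady-state workload stops feeding the allocator/GC entirely.

class Node:
    __slots__ = ("value", "next")

class NodeCache:
    def __init__(self):
        self._free = None            # intrusive free list of Nodes

    def alloc(self, value):
        node = self._free
        if node is None:
            node = Node()            # cache miss: really allocate
        else:
            self._free = node.next   # cache hit: pop the free list
        node.value, node.next = value, None
        return node

    def free(self, node):
        node.value = None            # drop the payload reference
        node.next = self._free       # push back onto the free list
        self._free = node

cache = NodeCache()
n = cache.alloc(42)
cache.free(n)
assert cache.alloc(99) is n          # reused, not reallocated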

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<wJGJI.13618$Ei1.6401@fx07.iad>

https://www.novabbs.com/devel/article-flat.php?id=18955&group=comp.arch#18955

 by: Branimir Maksimovic - Tue, 20 Jul 2021 20:53 UTC

On 2021-07-20, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
> [...]
>
> I remember way back when somebody was having a performance issue in a
> GC'ed environment when the system was under a great deal of sustained
> load. Memory would grow and grow, and the GC was spending a lot of time
> trying to handle all of it. So, I suggested using distributed lock-free
> node caches, and it sped things up by orders of magnitude. However,
> pushing/popping from the node cache is a form of manual memory
> management. They did not like it because of that, but ended up using it
> anyway. So, manual memory management can "help" a GC under a large
> amount of stress.
Rust had GC in its early versions. They ditched it in favor of reference
counting because of performance problems. Swift, which drew heavily on Rust,
never had GC :P
Simply put, GC is a hassle under concurrent loads...
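
The node-cache trick quoted above is easy to sketch. The following C++ is
only an illustration of the technique (the NodeCache/Node names and layout
are invented here, not taken from that system): each thread keeps its own
free list, so push/pop need no locks, and a loaded steady state stops
touching the allocator (or GC) entirely.

// Illustrative sketch only -- NodeCache/Node are invented names, not code
// from the system described above. One free list per thread: push/pop are
// plain pointer operations, no locks, no allocation in the steady state.
struct Node {
    Node* next = nullptr;
    // ... payload would live here ...
};

class NodeCache {
    Node* head_ = nullptr;              // thread-local list, so no locking
public:
    Node* pop() {                       // reuse a cached node if possible
        if (Node* n = head_) { head_ = n->next; return n; }
        return new Node;                // otherwise hit the allocator
    }
    void push(Node* n) {                // recycle instead of freeing
        n->next = head_;
        head_ = n;
    }
    ~NodeCache() {                      // the "manual" part: we must decide
        while (Node* n = head_) { head_ = n->next; delete n; }
    }
};

thread_local NodeCache node_cache;      // per-thread, hence "distributed"

int main() {
    Node* n = node_cache.pop();         // first pop allocates
    node_cache.push(n);                 // returns it to the cache
    Node* m = node_cache.pop();         // same node back, no allocation
    node_cache.push(m);
}

The manual-management catch mentioned above is visible in the destructor:
something has to decide when cached nodes really die.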

--
bmaxa now listens Boys by Robots in disguise from Robots in disguise

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<da787257-e095-4c2a-a76f-46d17fd6d1d0n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18960&group=comp.arch#18960

 by: Quadibloc - Tue, 20 Jul 2021 22:07 UTC

On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:

> Parallel and concurrent are the same thing :P
> Someone told me that it depends on the definition, but the distinction is dim :P

"Concurrent" refers to tasks which _could_ be done in parallel, but because
only one serial processor is available, they are done one at a time,
switching back and forth between them.

So now the distinction is no longer dim, but glowing as brightly as the
noonday sun!
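
That reading is easy to demonstrate. Below is a toy C++ sketch (purely
illustrative; the task bodies are arbitrary inventions) of one serial
processor switching back and forth between runnable tasks until all finish:

#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

int main() {
    int a = 0, b = 0;
    // Each "task" runs one slice of work per call and says if it has more.
    std::vector<std::function<bool()>> tasks = {
        [&] { return ++a < 3; },        // task A: three slices
        [&] { return ++b < 2; },        // task B: two slices
    };
    while (!tasks.empty()) {            // one serial processor...
        for (std::size_t i = 0; i < tasks.size(); ) {
            if (tasks[i]())             // ...switching back and forth
                ++i;
            else
                tasks.erase(tasks.begin() + i);  // task finished
        }
    }
    std::printf("a=%d b=%d\n", a, b);   // prints a=3 b=2
}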

John Savard

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<sd7hm7$1ran$1@gioia.aioe.org>

https://www.novabbs.com/devel/article-flat.php?id=18961&group=comp.arch#18961

 by: Chris M. Thomasson - Tue, 20 Jul 2021 22:11 UTC

On 7/20/2021 3:07 PM, Quadibloc wrote:
> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>
>> Parallel and concurrent are the same thing :P
>> Someone told me that it depends on the definition, but the distinction is dim :P
>
> "Concurrent" refers to tasks which _could_ be done in parallel, but because
> only one serial processor is available, they are done one at a time,
> switching back and forth between them.
>
> So now the distinction is no longer dim, but glowing as brightly as the
> noonday sun!

Now, one "difference" can be that several parallel computations can
never possibly interfere with each other as they run at the same time.
Several concurrent computations might mean that they can interfere with
one other as they run at the same time.
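
A small C++ sketch of that distinction, assuming "interfere" means touching
shared state: the first pair of threads updates disjoint counters and cannot
interfere; the second pair shares one counter and is only correct because
std::atomic synchronizes the increments:

#include <atomic>
#include <cstdio>
#include <thread>

int main() {
    // Parallel flavor: disjoint data, interference impossible.
    int left = 0, right = 0;
    std::thread p1([&] { for (int i = 0; i < 100000; ++i) ++left; });
    std::thread p2([&] { for (int i = 0; i < 100000; ++i) ++right; });
    p1.join(); p2.join();

    // Concurrent flavor: shared data, correct only because of the atomic.
    std::atomic<int> shared{0};
    std::thread c1([&] { for (int i = 0; i < 100000; ++i) ++shared; });
    std::thread c2([&] { for (int i = 0; i < 100000; ++i) ++shared; });
    c1.join(); c2.join();

    // prints 100000 100000 200000
    std::printf("%d %d %d\n", left, right, shared.load());
}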

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<hTHJI.27410$Yv3.14786@fx41.iad>

https://www.novabbs.com/devel/article-flat.php?id=18962&group=comp.arch#18962

 by: Branimir Maksimovic - Tue, 20 Jul 2021 22:12 UTC

On 2021-07-20, Quadibloc <jsavard@ecn.ab.ca> wrote:
> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>
>> Parallel and concurrent are the same thing :P
>> Someone told me that it depends on the definition, but the distinction is dim :P
>
> "Concurrent" refers to tasks which _could_ be done in parallel, but because
> only one serial processor is available, they are done one at a time,
> switching back and forth between them.
>
> So now the distinction is no longer dim, but glowing as brightly as the
> noonday sun!

That's your definition. Such tasks are in no way concurrent.
What you are describing is coroutines that must yield for other
tasks to work :p

>
> John Savard

--
bmaxa now listens Woman In Disguise by Angelic Upstarts from The Independent Punk Singles Collection

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<IZHJI.24918$0Z7.9563@fx39.iad>

https://www.novabbs.com/devel/article-flat.php?id=18963&group=comp.arch#18963

 by: Branimir Maksimovic - Tue, 20 Jul 2021 22:19 UTC

On 2021-07-20, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
> On 7/20/2021 3:07 PM, Quadibloc wrote:
>> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>>
>>> Parallel and concurrent are the same thing :P
>>> Someone told me that it depends on the definition, but the distinction is dim :P
>>
>> "Concurrent" refers to tasks which _could_ be done in parallel, but because
>> only one serial processor is available, they are done one at a time,
>> switching back and forth between them.
>>
>> So now the distinction is no longer dim, but glowing as brightly as the
>> noonday sun!
>
> Now, one "difference" can be that several parallel computations can
> never possibly interfere with each other as they run at the same time.
> Several concurrent computations might mean that they can interfere with
> one other as they run at the same time.
That is another definition, which is another interpretation. In reality
nobody has made a clear definition that all will follow :P
My attempt is this: parallel execution can be done on several different
machines, as with MPI, and concurrent means on a single computer. I think that
this was the original definition and distinction.

--
bmaxa now listens Woman In Disguise by Angelic Upstarts from The Independent Punk Singles Collection

Re: Parallelization

<jwvmtqg39iw.fsf-monnier+comp.arch@gnu.org>

https://www.novabbs.com/devel/article-flat.php?id=18964&group=comp.arch#18964

 by: Stefan Monnier - Tue, 20 Jul 2021 22:32 UTC

Quadibloc [2021-07-20 15:07:31] wrote:
> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>> Parallel and concurrent are the same thing :P
>> Someone told me that it depends on the definition, but the distinction is dim :P
> "Concurrent" refers to tasks which _could_ be done in parallel, but
> because only one serial processor is available, they are done
> one at a time, switching back and forth between them.

That's definitely not my understanding of the term.

To me the difference between the two is that parallelism is concerned
with dividing a task into subtasks that can be performed concurrently so
as to reduce the latency of its execution. Concurrency OTOH starts with
concurrent tasks and is concerned with how to schedule and synchronize
them such that the result is correct and various other timing
constraints, like fairness, are obeyed.
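
A hedged C++ sketch of the first half of that (nobody's quoted code; the
50/50 split is an arbitrary choice): one task divided into two independent
subtasks purely to reduce the latency of computing the sum:

#include <cstddef>
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Sum one half of the vector; the two halves are independent subtasks.
long long sum(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
}

int main() {
    std::vector<int> v(1000000, 1);
    std::size_t half = v.size() / 2;
    auto lower = std::async(std::launch::async, sum, std::cref(v),
                            std::size_t{0}, half);   // subtask on another thread
    long long upper = sum(v, half, v.size());        // this thread does the rest
    std::printf("%lld\n", lower.get() + upper);      // prints 1000000
}

The same additions happen either way; the split only changes when the
answer arrives.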

Stefan

Re: Parallelization

<beIJI.69548$Vv6.41255@fx45.iad>

https://www.novabbs.com/devel/article-flat.php?id=18965&group=comp.arch#18965

 by: Branimir Maksimovic - Tue, 20 Jul 2021 22:36 UTC

On 2021-07-20, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> Quadibloc [2021-07-20 15:07:31] wrote:
>> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>>> Parallel and concurrent are the same thing :P
>>> Someone told me that it depends on the definition, but the distinction is dim :P
>> "Concurrent" refers to tasks which _could_ be done in parallel, but
>> because only one serial processor is available, they are done
>> one at a time, switching back and forth between them.
>
> That's definitely not my understanding of the term.
>
> To me the difference between the two is that parallelism is concerned
> with dividing a task into subtasks that can be performed concurrently so
> as to reduce the latency of its execution. Concurrency OTOH starts with
> concurrent tasks and is concerned with how to schedule and synchronize
> them such that the result is correct and various other timing
> constraints, like fairness, are obeyed.
>
>
> Stefan
Well, parallel computation (several computers), concurrent execution (single computer).

--
bmaxa now listens Solidarity by Angelic Upstarts from The Independent Punk Singles Collection

Re: Someone's Trying Again (Ascenium)

<0001HW.26A78E0E02AE68F6700002F9A38F@news.individual.net>

https://www.novabbs.com/devel/article-flat.php?id=18966&group=comp.arch#18966

 by: Bill Findlay - Tue, 20 Jul 2021 23:04 UTC

On 20 Jul 2021, Thomas Koenig wrote
(in article <sd62ts$19o$1@newsreader4.netcologne.de>):

> Diophantine equations are generally much harder to solve than
> equations that involve real numbers that can be solved approximately
> using floating point values.

It's worse than that, Jim ...
Whether a Diophantine equation has roots is not decidable
(in the same sense as the TM halting problem).

--
Bill Findlay

Re: Someone's Trying Again (Ascenium)

<99056e77-7bb9-4ea9-afd9-be55008630f3n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18967&group=comp.arch#18967

 by: pec...@gmail.com - Tue, 20 Jul 2021 23:07 UTC

On Tuesday, 20 July 2021 at 10:53:19 UTC+2, Thomas Koenig wrote:

> I'm not sure what you calculated here.
The point is to have a good starting point.
> However, as stated, this is a Diophantine equation (integer solutions
> only), so approximate solutions are not valid.
> Diophantine equations are generally much harder to solve than
> equations that involve real numbers that can be solved approximately
> using floating point values.
It is not a Diophantine equation.
The numbers are decimals with fixed-point arithmetic.
Numerical analysis guarantees that a fixed-point product cannot be too far
from the real one, so you can dramatically restrict the search space:
approximately +-0.03 around every value. But we need to consider all
permutations of the values (multiplication with rounding is not associative).
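
That last point is easy to check. A self-contained C++ toy (two decimal
digits, scale 100, round-to-nearest; the values 1.01, 1.50, 1.50 are chosen
purely for illustration) shows the grouping changing the rounded product:

#include <cstdio>

// Two-decimal fixed point: values are scaled by 100, and each product is
// rounded back to scale 100 with round-to-nearest.
long fxmul(long a, long b) {
    return (a * b + 50) / 100;
}

int main() {
    long a = 101, b = 150, c = 150;        // 1.01, 1.50, 1.50
    long ab_c = fxmul(fxmul(a, b), c);     // 1.01*1.50 -> 1.52; 1.52*1.50 = 2.28
    long a_bc = fxmul(a, fxmul(b, c));     // 1.50*1.50 = 2.25; 1.01*2.25 -> 2.27
    std::printf("%ld vs %ld\n", ab_c, a_bc);  // prints 228 vs 227
}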

Re: Parallelization

<jwvzgug1m94.fsf-monnier+comp.arch@gnu.org>

https://www.novabbs.com/devel/article-flat.php?id=18968&group=comp.arch#18968

 by: Stefan Monnier - Wed, 21 Jul 2021 01:44 UTC

Branimir Maksimovic [2021-07-20 22:36:55] wrote:
> On 2021-07-20, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>> Quadibloc [2021-07-20 15:07:31] wrote:
>>> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>>>> Parallel and concurrent are the same thing :P
>>>> Someone told me that it depends on the definition, but the distinction is dim :P
>>> "Concurrent" refers to tasks which _could_ be done in parallel, but
>>> because only one serial processor is available, they are done
>>> one at a time, switching back and forth between them.
>>
>> That's definitely not my understanding of the term.
>>
>> To me the difference between the two is that parallelism is concerned
>> with dividing a task into subtasks that can be performed concurrently so
>> as to reduce the latency of its execution. Concurrency OTOH starts with
>> concurrent tasks and is concerned with how to schedule and synchronize
>> them such that the result is correct and various other timing
>> constraints, like fairness, are obeyed.
> Well, parallel computation (several computers), concurrent execution
> (single computer).

I don't see where my description made you think concurrency was limited
to the "single computer" case. But of course, parallel computation is
generally a losing proposition if you only have a single compute element
(tho a "single computer" can contain several CPUs, and a single CPU can
also have several concurrent compute elements, e.g. via superscalar
execution, SIMD, you name it).

Stefan

Re: Parallelization

<AaMJI.64784$VU3.40237@fx46.iad>

https://www.novabbs.com/devel/article-flat.php?id=18969&group=comp.arch#18969

 by: Branimir Maksimovic - Wed, 21 Jul 2021 03:06 UTC

On 2021-07-21, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> Branimir Maksimovic [2021-07-20 22:36:55] wrote:
>> On 2021-07-20, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> Quadibloc [2021-07-20 15:07:31] wrote:
>>>> On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>>>>> Parallel and concurrent are the same thing :P
>>>>> Someone told me that it depends on the definition, but the distinction is dim :P
>>>> "Concurrent" refers to tasks which _could_ be done in parallel, but
>>>> because only one serial processor is available, they are done
>>>> one at a time, switching back and forth between them.
>>>
>>> That's definitely not my understanding of the term.
>>>
>>> To me the difference between the two is that parallelism is concerned
>>> with dividing a task into subtasks that can be performed concurrently so
>>> as to reduce the latency of its execution. Concurrency OTOH starts with
>>> concurrent tasks and is concerned with how to schedule and synchronize
>>> them such that the result is correct and various other timing
>>> constraints, like fairness, are obeyed.
>> Well, parallel computation (several computers), concurrent execution
>> (single computer).
>
> I don't see where my description made you think concurrency was limited
> to the "single computer" case. But of course, parallel computation is
> generally a losing proposition if you only have a single compute element
> (tho a "single computer" can contain several CPUs, and a single CPU can
> also have several concurrent compute elements, e.g. via superscalar
> execution, SIMD, you name it).

That was the original definition. Parallel meant distributed computing, via
MPI e.g., while concurrent meant via threads on a single computer.
Just sayin' :P

>
>
> Stefan

--
bmaxa now listens MARTHA & THE MUFFINS - ECHO BEACH.ogg

Re: Someone's Trying Again (Ascenium)

<cbbb3d51-afed-4d1b-b571-a5947cdc375an@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=18970&group=comp.arch#18970

 by: Quadibloc - Wed, 21 Jul 2021 06:24 UTC

On Tuesday, July 20, 2021 at 5:04:49 PM UTC-6, Bill Findlay wrote:
> On 20 Jul 2021, Thomas Koenig wrote
> (in article <sd62ts$19o$1...@newsreader4.netcologne.de>):
> > Diophantine equations are generally much harder to solve than
> > equations that involve real numbers that can be solved approximately
> > using floating point values.

> It's worse than that, Jim ...
> The roots of Diophantine equations are not computable
> (in the same sense as the TM halting problem).

But that only means there is no general algorithm that solves _every_
Diophantine equation.

Many of them can still be solved, but with difficulty.
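
For a concrete easy case (the Pell equation below is a textbook example
chosen for illustration, not anything from the thread), a brute-force C++
search over a finite box finds solutions immediately:

#include <cstdio>

int main() {
    // Exhaustive search of a finite box for x^2 - 2y^2 = 1.
    for (long x = 1; x <= 1000; ++x)
        for (long y = 1; y <= 1000; ++y)
            if (x * x - 2 * y * y == 1)
                std::printf("x=%ld y=%ld\n", x, y);
    // prints (3,2), (17,12), (99,70), (577,408)
}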

John Savard

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<sd8j0q$odj$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=18973&group=comp.arch#18973

 by: Marcus - Wed, 21 Jul 2021 07:40 UTC

On 2021-07-20 21:17, Branimir Maksimovic wrote:
> On 2021-07-20, Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
>> George Neuner <gneuner2@comcast.net> writes:
>>> On Sun, 18 Jul 2021 15:55:24 GMT, anton@mips.complang.tuwien.ac.at
>>> (Anton Ertl) wrote:
>>>

[snip]

>>>> We have similar problems with explicitly managed fast memory, which is
>>>> why we don't see that in general-purpose computers; instead, we see
>>>> caches (a software-crisis-compatible variant of fast memory).
>>>
>>> We have similar problems with programmer managed dynamic allocation.
>>> All the modern languages use GC /because/ repeated studies have shown
>>> that average programmers largely are incapable of writing leak-proof
>>> code without it.
>>
>> Good example. Garbage collection is a good solution to the dynamic
>> memory allocation problem, including for good programmers. Now we
>> need such a solution for the parallelization problem.
>
> C++, Rust, Swift all do not have GC...
> They use RAII and reference counting (shared_ptr and such)
>

I've never liked GC. I find that when languages try to hide things from
the programmer (e.g. when, how and what memory gets freed) it gets
harder for me to reason about the code and its behavior. I also believe
that it generally lures programmers into poorer SW architectures.

RAII and reference counting are much more controlled methods with
predictable performance / overhead, so me likes.

It is /possible/ to write C++ code that never leaks, /if/ you use the
right constructs. Rust took this to the next level and simply excluded
the bad constructs from the language.
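
A minimal sketch of such "right constructs", using only standard <memory>
facilities (the Buffer type is invented for illustration): every allocation
has an owner, and every release point is deterministic:

#include <cstdio>
#include <memory>

struct Buffer {
    explicit Buffer(int n) : data(std::make_unique<int[]>(n)) {}
    std::unique_ptr<int[]> data;   // sole owner; freed in the destructor
};

int main() {
    auto buf = std::make_shared<Buffer>(1024);       // refcount = 1
    {
        auto alias = buf;                            // refcount = 2
        std::printf("refs: %ld\n", buf.use_count());
    }                                                // alias dies: refcount = 1
    std::printf("refs: %ld\n", buf.use_count());
}   // last reference dies here; Buffer and its array are freed, no GC involved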

...but I agree that we need language support for parallelization. For
instance, order-independent loops and similar constructs should be the
go-to solution (no pun intended) rather than explicit iteration logic.
Likewise, pure functions should be the norm (object orientation really
screwed that up). And of course good support for async primitives.
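
For what an order-independent loop can look like today, here is a hedged
C++17 sketch (the parallel execution policies are standard since C++17,
though some toolchains need TBB to actually run them in parallel): the
std::execution::par_unseq tag is exactly a promise that iteration order
does not matter:

#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    std::vector<double> x(1 << 20, 2.0), y(x.size());
    // par_unseq declares the iterations order-independent; the runtime may
    // split them across threads and SIMD lanes, or run them serially.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                   [](double v) { return v * v + 1.0; });   // pure function
    std::printf("%f\n", y.front());   // prints 5.000000
}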

Then we can start designing hardware that can spawn lightweight threads
as easily as it can call subroutines. But as long as everyone uses
old-school C and stdc functionality there's little use in trying.

/Marcus

Re: Parallelization (was: Someone's Trying Again (Ascenium))

<2021Jul21.105314@mips.complang.tuwien.ac.at>

https://www.novabbs.com/devel/article-flat.php?id=18975&group=comp.arch#18975

 by: Anton Ertl - Wed, 21 Jul 2021 08:53 UTC

Quadibloc <jsavard@ecn.ab.ca> writes:
>On Tuesday, July 20, 2021 at 1:17:57 PM UTC-6, Branimir Maksimovic wrote:
>
>> Parallel and concurrent are the same thing :P
>> Someone told me that it depends on the definition, but the distinction is dim :P
>
>"Concurrent" refers to tasks which _could_ be done in parallel, but because
>only one serial processor is available, they are done one at a time,
>switching back and forth between them.

Reference needed.

en.wikipedia.org says:

|Concurrent computing is a form of computing in which several
|computations are executed concurrently, during overlapping time
|periods, instead of sequentially, with one completing before the next
|starts.

My impression is that "concurrent" is used when discussing concurrent
access to a data structure common between the tasks, and the
synchronization mechanisms for that, while "parallel" is used when
discussing parallel execution where such issues play no role or are
outside or at the fringes of the scope of the discussion.

So these are basically complementary concepts.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>
