devel / comp.arch.embedded / Re: 64-bit embedded computing is here and now

Subject / Author
* 64-bit embedded computing is here and now -- James Brakefield
+- Re: 64-bit embedded computing is here and now -- Don Y
`* Re: 64-bit embedded computing is here and now -- Paul Rubin
 `* Re: 64-bit embedded computing is here and now -- David Brown
  +* Re: 64-bit embedded computing is here and now -- Don Y
  |+* Re: 64-bit embedded computing is here and now -- David Brown
  ||`* Re: 64-bit embedded computing is here and now -- Don Y
  || `* Re: 64-bit embedded computing is here and now -- David Brown
  ||  +* Re: 64-bit embedded computing is here and now -- Don Y
  ||  |`* Re: 64-bit embedded computing is here and now -- George Neuner
  ||  | `- Re: 64-bit embedded computing is here and now -- Don Y
  ||  `* Re: 64-bit embedded computing is here and now -- Paul Rubin
  ||   +- Re: 64-bit embedded computing is here and now -- Don Y
  ||   +- Re: 64-bit embedded computing is here and now -- DJ Delorie
  ||   `* Re: 64-bit embedded computing is here and now -- Phil Hobbs
  ||    `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  ||     `* Re: 64-bit embedded computing is here and now -- Phil Hobbs
  ||      +* Re: 64-bit embedded computing is here and now -- Paul Rubin
  ||      |`- Re: 64-bit embedded computing is here and now -- Don Y
  ||      `- Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  |`* Re: 64-bit embedded computing is here and now -- James Brakefield
  | +* Re: 64-bit embedded computing is here and now -- David Brown
  | |+- Re: 64-bit embedded computing is here and now -- James Brakefield
  | |`* Re: 64-bit embedded computing is here and now -- George Neuner
  | | `* Re: 64-bit embedded computing is here and now -- David Brown
  | |  `* Re: 64-bit embedded computing is here and now -- Hans-Bernhard Bröker
  | |   `- Re: 64-bit embedded computing is here and now -- David Brown
  | +* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | |`* Re: 64-bit embedded computing is here and now -- Don Y
  | | +* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |`* Re: 64-bit embedded computing is here and now -- Don Y
  | | | `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |  `* Re: 64-bit embedded computing is here and now -- Don Y
  | | |   `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |    `* Re: 64-bit embedded computing is here and now -- Don Y
  | | |     `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |      `* Re: 64-bit embedded computing is here and now -- Don Y
  | | |       `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |        `* Re: 64-bit embedded computing is here and now -- Don Y
  | | |         `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
  | | |          `- Re: 64-bit embedded computing is here and now -- Don Y
  | | `- Re: 64-bit embedded computing is here and now -- George Neuner
  | +- Re: 64-bit embedded computing is here and now -- Don Y
  | `* Re: 64-bit embedded computing is here and now -- Paul Rubin
  |  `* Re: 64-bit embedded computing is here and now -- Theo
  |   +* Re: 64-bit embedded computing is here and now -- Paul Rubin
  |   |`- Re: 64-bit embedded computing is here and now -- Don Y
  |   `- Re: 64-bit embedded computing is here and now -- Don Y
  `* Re: 64-bit embedded computing is here and now -- Theo
   +* Re: 64-bit embedded computing is here and now -- David Brown
   |`* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
   | +- Re: 64-bit embedded computing is here and now -- Don Y
   | `* Re: 64-bit embedded computing is here and now -- David Brown
   |  `* Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
   |   `* Re: 64-bit embedded computing is here and now -- David Brown
   |    `- Re: 64-bit embedded computing is here and now -- Dimiter_Popoff
   `* Re: 64-bit embedded computing is here and now -- Don Y
    `* Re: 64-bit embedded computing is here and now -- Theo
     `- Re: 64-bit embedded computing is here and now -- Don Y

64-bit embedded computing is here and now

<7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=490&group=comp.arch.embedded#490

Newsgroups: comp.arch.embedded
X-Received: by 2002:a05:6214:1c0d:: with SMTP id u13mr15775869qvc.49.1623077271783; Mon, 07 Jun 2021 07:47:51 -0700 (PDT)
X-Received: by 2002:a25:6088:: with SMTP id u130mr24894655ybb.257.1623077271569; Mon, 07 Jun 2021 07:47:51 -0700 (PDT)
Path: i2pn2.org!i2pn.org!aioe.org!news.uzoreto.com!tr2.eu1.usenetexpress.com!feeder.usenetexpress.com!tr1.iad1.usenetexpress.com!border1.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch.embedded
Date: Mon, 7 Jun 2021 07:47:51 -0700 (PDT)
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:a0d0:9f90:f509:60d4:4fba:dcc4; posting-account=AoizIQoAAADa7kQDpB0DAj2jwddxXUgl
NNTP-Posting-Host: 2600:1700:a0d0:9f90:f509:60d4:4fba:dcc4
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
Subject: 64-bit embedded computing is here and now
From: jim.brak...@ieee.org (James Brakefield)
Injection-Date: Mon, 07 Jun 2021 14:47:51 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 13
 by: James Brakefield - Mon, 7 Jun 2021 14:47 UTC

Sometimes things move faster than expected.
As someone with an embedded background this caught me by surprise:

Tera-Byte microSD cards are readily available and getting cheaper.
Heck, you can carry ten of them in a credit card pouch.
Likely to move to the same price range as hard disks ($20/TB).

That means that a 2+ square inch PCB can hold a 64-bit processor and enough storage for memory mapped files larger than 4GB.

Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices as the FABs mature? Will video data move to the IOT edge? Will AI move to the edge? Will every embedded CPU have a built-in radio?

Wait a few years and find out.

Re: 64-bit embedded computing is here and now

<s9m26e$rhu$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=491&group=comp.arch.embedded#491

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Mon, 7 Jun 2021 14:13:26 -0700
Organization: A noiseless patient Spider
Lines: 57
Message-ID: <s9m26e$rhu$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 7 Jun 2021 21:13:50 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="1bfc24091825c55c9ee1f484af250c47";
logging-data="28222"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/UwFbiX/3Ckb8BXpuCPqXf"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:eC6/i8vywsqF7Ww/zQwbYtbl6DA=
In-Reply-To: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
Content-Language: en-US
 by: Don Y - Mon, 7 Jun 2021 21:13 UTC

On 6/7/2021 7:47 AM, James Brakefield wrote:
>
> Sometimes things move faster than expected. As someone with an embedded
> background this caught me by surprise:
>
> Tera-Byte microSD cards are readily available and getting cheaper. Heck, you
> can carry ten of them in a credit card pouch. Likely to move to the same
> price range as hard disks ($20/TB).
>
> That means that a 2+ square inch PCB can hold a 64-bit processor and enough
> storage for memory mapped files larger than 4GB.

Kind of old news. I've been developing on a SAMA5D36 platform with 256M of
FLASH and 256M of DDR2 for 5 or 6 years, now. PCB is just over 2 sq in
(but most of that being off-board connectors). Granted, it's a 32b processor
but I'll be upgrading that to something "wider" before release (software and
OS have been written for a 64b world -- previously waiting for costs to fall
to make it as economical as the 32b was years ago; now waiting to see if I
can leverage even MORE hardware-per-dollar!).

Once you have any sort of connectivity, it becomes practical to support
files larger than your physical memory -- just fault the appropriate
page in over whatever interface(s) you have available (assuming you
have other boxes that you can talk to/with)
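
(A minimal sketch of that idea in C -- link_read() and map_page() are
hypothetical platform hooks standing in for whatever interface and MMU
glue a given design actually has:)

    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096u

    /* hypothetical hooks, not a real API */
    extern void link_read(uint64_t offset, void *dst, size_t len);
    extern void map_page(uintptr_t vaddr, const void *frame, size_t len);

    /* called from the fault path: fetch the missing page from a peer
       over the link, map it at the faulting address, then retry */
    void demand_fault(uintptr_t fault_addr)
    {
        static uint8_t frame[PAGE_SIZE];
        uintptr_t page = fault_addr & ~(uintptr_t)(PAGE_SIZE - 1);

        link_read(page, frame, PAGE_SIZE);
        map_page(page, frame, PAGE_SIZE);
    }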

> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices as
> the FABs mature? Will video data move to the IOT edge? Will AI move to the
> edge? Will every embedded CPU have a built-in radio?

In my case, video is already *at* the edge. The idea of needing a
"bigger host" or "the cloud" is already obsolescent. Even the need
for bulk storage -- whether on-board (removable flash, as you suggest)
or remotely served -- is dubious. How much persistent store do you
really need, beyond your executables, in a typical application?

I've decided that RAM is the bottleneck as you can't XIP out of
an SD card...

Radios? <shrug> Possibly as wireless is *so* much easier to
interconnect than wired. But, you're still left with the power
problem; even at a couple of watts, wall warts are unsightly
and low voltage DC isn't readily available *everywhere* that
you may want to site a device. (how many devices do you
want tethered to a USB host before it starts to look a mess?)

The bigger challenge is moving developers to think in terms of
the capabilities that the hardware will afford. E.g., can
you exploit *true* concurrency in your application? Or, will
you "waste" a second core/thread context on some largely
decoupled activity? How much capability will you be willing
to sacrifice to your hosting OS -- and what NEW capabilities
will it provide you?

> Wait a few years and find out.

The wait won't even be *that* long...

Re: 64-bit embedded computing is here and now

<87lf7kexbp.fsf@nightsong.com>

https://www.novabbs.com/devel/article-flat.php?id=492&group=comp.arch.embedded#492

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: no.em...@nospam.invalid (Paul Rubin)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Mon, 07 Jun 2021 22:31:54 -0700
Organization: A noiseless patient Spider
Lines: 7
Message-ID: <87lf7kexbp.fsf@nightsong.com>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain
Injection-Info: reader02.eternal-september.org; posting-host="f052e59b0403915fe8a8b7931989deb4";
logging-data="21992"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19+V+JQPdDWSIXPY/AtkKFh"
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
Cancel-Lock: sha1:qxQKQ6m7DzNcWC1b90X8t/LWvqA=
sha1:8DGO3hXBxLb1SIOwdYNUMVX0yOk=
 by: Paul Rubin - Tue, 8 Jun 2021 05:31 UTC

James Brakefield <jim.brakefield@ieee.org> writes:
> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices
> as the FABs mature? Will video data move to the IOT edge? Will AI move
> to the edge? Will every embedded CPU have a built-in radio?

I don't care what the people say--
32 bits are here to stay.

Re: 64-bit embedded computing is here and now

<s9n10p$t4i$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=493&group=comp.arch.embedded#493

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: david.br...@hesbynett.no (David Brown)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 07:59:53 +0200
Organization: A noiseless patient Spider
Lines: 32
Message-ID: <s9n10p$t4i$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 05:59:53 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="80bc4134b12705f24fd88068270160a0";
logging-data="29842"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+Pmp1zNAzuco5wyv4q+I9KU33Vgg1BuAs="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.10.0
Cancel-Lock: sha1:txekoyJs1vfA+QZYnqU++zOkht8=
In-Reply-To: <87lf7kexbp.fsf@nightsong.com>
Content-Language: en-GB
 by: David Brown - Tue, 8 Jun 2021 05:59 UTC

On 08/06/2021 07:31, Paul Rubin wrote:
> James Brakefield <jim.brakefield@ieee.org> writes:
>> Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices
>> as the FABs mature? Will video data move to the IOT edge? Will AI move
>> to the edge? Will every embedded CPU have a built-in radio?
>
> I don't care what the people say--
> 32 bits are here to stay.
>

8-bit microcontrollers are still far more common than 32-bit devices in
the embedded world (and 4-bit devices are not gone yet). At the other
end, 64-bit devices have been used for a decade or two in some kinds of
embedded systems.

We'll see 64-bit take a greater proportion of the embedded systems that
demand high throughput or processing power (network devices, hard cores
in expensive FPGAs, etc.) where the extra cost in dollars, power,
complexity, board design are not a problem. They will probably become
more common in embedded Linux systems as the core itself is not usually
the biggest part of the cost. And such systems are definitely on the
increase.

But for microcontrollers - which dominate embedded systems - there has
been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
cost. There is almost nothing to gain from a move to 64-bit, but the
cost would be a good deal higher. So it is not going to happen - at
least not more than a very small and very gradual change.

The OP sounds more like a salesman than someone who actually works with
embedded development in reality.

Re: 64-bit embedded computing is here and now

<s9n6rb$t19$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=494&group=comp.arch.embedded#494

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 00:39:01 -0700
Organization: A noiseless patient Spider
Lines: 98
Message-ID: <s9n6rb$t19$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 07:39:24 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="b4d1ccce944af5439bc3847708824f15";
logging-data="29737"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1++e5+XCv3XIWD1Rdqx4nm8"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:owta4TckKTQXn/xAPHSnAgB9wRg=
In-Reply-To: <s9n10p$t4i$1@dont-email.me>
Content-Language: en-US
 by: Don Y - Tue, 8 Jun 2021 07:39 UTC

On 6/7/2021 10:59 PM, David Brown wrote:
> 8-bit microcontrollers are still far more common than 32-bit devices in
> the embedded world (and 4-bit devices are not gone yet). At the other
> end, 64-bit devices have been used for a decade or two in some kinds of
> embedded systems.

I contend that a good many "32b" implementations are really glorified
8/16b applications that exhausted their memory space. I still see lots
of designs built on a small platform (8/16b) that augment it -- either
with some "memory enhancement" technology or additional "slave"
processors to split the binaries. Code increases in complexity but
there doesn't seem to be a need for the "work-per-unit-time" to.

[This has actually been the case for a long time. The appeal of
newer CPUs is often in the set of peripherals that accompany the
processor, not the processor itself.]

> We'll see 64-bit take a greater proportion of the embedded systems that
> demand high throughput or processing power (network devices, hard cores
> in expensive FPGAs, etc.) where the extra cost in dollars, power,
> complexity, board design are not a problem. They will probably become
> more common in embedded Linux systems as the core itself is not usually
> the biggest part of the cost. And such systems are definitely on the
> increase.
>
> But for microcontrollers - which dominate embedded systems - there has
> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little

I disagree. The "cost" (barrier) that I see clients facing is the
added complexity of a 32b platform and how it often implies (or even
*requires*) a more formal OS underpinning the application. Where you
could hack together something on bare metal in the 8/16b worlds,
moving to 32 often requires additional complexity in managing
mechanisms that aren't usually present in smaller CPUs (caches,
MMU/MPU, DMA, etc.) Developers (and their organizations) can't just
play "coder cowboy" and coerce the hardware to behaving as they
would like. Existing staff (hired with the "bare metal" mindset)
are often not equipped to move into a more structured environment.

[I can hack together a device to meet some particular purpose
much easier on "development hardware" than I can on a "PC" -- simply
because there's too much I have to "work around" on a PC that isn't
present on development hardware.]

Not every product needs a filesystem, network stack, protected
execution domains, etc. Those come with additional costs -- often
in the form of a lack of understanding as to what the ACTUAL
code in your product is doing at any given time. (this isn't the
case in the smaller MCU world; it's possible for a developer to
have written EVERY line of code in a smaller platform)

> cost. There is almost nothing to gain from a move to 64-bit, but the
> cost would be a good deal higher.

Why is the cost "a good deal higher"? Code/data footprints don't
uniformly "double" in size. The CPU doesn't slow down to handle
bigger data.

The cost is driven by where the market goes. Note how many 68Ks found
design-ins vs. the T11, F11, 16032, etc. My first 32b design was
physically large, consumed a boatload of power and ran at only a modest
improvement (in terms of system clock) over 8b processors of its day.
Now, I can buy two orders of magnitude more horsepower PLUS a
bunch of built-in peripherals for two cups of coffee (at QTY 1)

> So it is not going to happen - at
> least not more than a very small and very gradual change.

We got 32b processors NOT because the embedded world cried out for
them but, rather, because of the influence of the 32b desktop world.
We've had 32b processors since the early 80's. But, we've only had
PCs since about the same timeframe! One assumes ubiquity in the
desktop world would need to happen before any real spillover to embedded.
(When the "desktop" was an '11 sitting in a back room, it wasn't seen
as ubiquitous.)

In the future, we'll see the 64b *phone* world drive the evolution
of embedded designs, similarly. (do you really need 32b/64b to
make a phone? how much code is actually executing at any given
time and in how many different containers?)

[The OP suggests MCUs with radios -- maybe they'll be cell phone
radios and *not* wifi/BLE as I assume he's thinking! Why add the
need for some sort of access point to a product's deployment if
the product *itself* can make a direct connection??]

My current design can't fill a 32b address space (but, that's because
I've decomposed apps to the point that they can be relatively small).
OTOH, designing a system with a 32b limitation seems like an invitation
to do it over when 64b is "cost effective". The extra "baggage" has
proven to be relatively insignificant (I have ports of my codebase
to SPARC as well as Atom running alongside a 32b ARM)

> The OP sounds more like a salesman than someone who actually works with
> embedded development in reality.

Possibly. Or, just someone that wanted to stir up discussion...

Re: 64-bit embedded computing is here and now

<s9nir0$k47$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=495&group=comp.arch.embedded#495

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: david.br...@hesbynett.no (David Brown)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 13:04:00 +0200
Organization: A noiseless patient Spider
Lines: 131
Message-ID: <s9nir0$k47$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 8 Jun 2021 11:04:00 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="80bc4134b12705f24fd88068270160a0";
logging-data="20615"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+B7Ek85tbOUo5jFf7uZkZ6S3RY1OI/jgA="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.10.0
Cancel-Lock: sha1:TaxNX1UFGtXssJyggiw27U9lWyA=
In-Reply-To: <s9n6rb$t19$1@dont-email.me>
Content-Language: en-GB
 by: David Brown - Tue, 8 Jun 2021 11:04 UTC

On 08/06/2021 09:39, Don Y wrote:
> On 6/7/2021 10:59 PM, David Brown wrote:
>> 8-bit microcontrollers are still far more common than 32-bit devices in
>> the embedded world (and 4-bit devices are not gone yet).  At the other
>> end, 64-bit devices have been used for a decade or two in some kinds of
>> embedded systems.
>
> I contend that a good many "32b" implementations are really glorified
> 8/16b applications that exhausted their memory space. 

Sure. Previously you might have used 32 kB flash on an 8-bit device,
now you can use 64 kB flash on a 32-bit device. The point is, you are
/not/ going to find yourself hitting GB limits any time soon. The step
from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
system - the step from 32-bit to 64-bit is totally pointless for 99.99%
of embedded systems. (Even for most embedded Linux systems, you usually
only have a 64-bit cpu because you want bigger and faster, not because
of memory limitations. It is only when you have a big gui with fast
graphics that 32-bit address space becomes a limitation.)

A 32-bit microcontroller is simply much easier to work with than an
8-bit or 16-bit with "extended" or banked memory to get beyond 64 K
address space limits.
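
(To make the contrast concrete -- a hypothetical banked-access helper of
the kind small-part toolchains force on you, versus the same read on a
flat 32-bit part; BANK_SELECT and the 0x8000 window are made-up names:)

    /* 8/16-bit part: 16 KB window at 0x8000, bank chosen via a register */
    #define BANK_SELECT  (*(volatile unsigned char *)0xFF00u)
    #define BANK_WINDOW  ((volatile unsigned char *)0x8000u)
    #define BANK_SIZE    0x4000u

    unsigned char read_banked(unsigned long addr)
    {
        BANK_SELECT = (unsigned char)(addr / BANK_SIZE);  /* switch bank */
        return BANK_WINDOW[addr % BANK_SIZE];             /* read via window */
    }

    /* 32-bit part: the whole object is simply addressable */
    unsigned char read_flat(const unsigned char *base, unsigned long addr)
    {
        return base[addr];
    }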

>
>> We'll see 64-bit take a greater proportion of the embedded systems that
>> demand high throughput or processing power (network devices, hard cores
>> in expensive FPGAs, etc.) where the extra cost in dollars, power,
>> complexity, board design are not a problem.  They will probably become
>> more common in embedded Linux systems as the core itself is not usually
>> the biggest part of the cost.  And such systems are definitely on the
>> increase.
>>
>> But for microcontrollers - which dominate embedded systems - there has
>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>
> I disagree.  The "cost" (barrier) that I see clients facing is the
> added complexity of a 32b platform and how it often implies (or even
> *requires*) a more formal OS underpinning the application.

Yes, that is definitely a cost in some cases - 32-bit microcontrollers
are usually noticeably more complicated than 8-bit ones. How
significant the cost is depends on the balances of the project between
development costs and production costs, and how beneficial the extra
functionality can be (like moving from bare metal to RTOS, or supporting
networking).

>
>> cost.  There is almost nothing to gain from a move to 64-bit, but the
>> cost would be a good deal higher.
>
> Why is the cost "a good deal higher"?  Code/data footprints don't
> uniformly "double" in size.  The CPU doesn't slow down to handle
> bigger data.

Some parts of code and data /do/ double in size - but not uniformly, of
course. But your chip is bigger, faster, requires more power, has wider
buses, needs more advanced memories, has more balls on the package,
requires finer pitched pcb layouts, etc.
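
(A trivial illustration of the non-uniform growth: the same struct on an
ILP32 target versus an LP64 one --)

    #include <stdio.h>
    #include <stdint.h>

    struct node {
        struct node *next;  /* 4 bytes on ILP32, 8 on LP64 */
        uint32_t     key;
        uint8_t      flags; /* trailing padding grows with pointer alignment */
    };

    int main(void)
    {
        /* typically 12 bytes on a 32-bit target, 16 on a 64-bit one */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }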

In theory, you /could/ make a microcontroller in a 64-pin LQFP and
replace the 72 MHz Cortex-M4 with a 64-bit ARM core at the same clock
speed. The die would only cost two or three times more, and take
perhaps less than 10 times the power for the core. But it would be so
utterly pointless that no manufacturer would make such a device.

So a move to 64-bit in practice means moving from a small, cheap,
self-contained microcontroller to an embedded PC. Lots of new
possibilities, lots of new costs of all kinds.

Oh, and the cpu /could/ be slower for some tasks - bigger cpus that are
optimised for throughput often have poorer latency and more jitter for
interrupts and other time-critical features.

>
>>  So it is not going to happen - at
>> least not more than a very small and very gradual change.
>
> We got 32b processors NOT because the embedded world cried out for
> them but, rather, because of the influence of the 32b desktop world.
> We've had 32b processors since the early 80's.  But, we've only had
> PCs since about the same timeframe!  One assumes ubiquity in the
> desktop world would need to happen before any real spillover to embedded.
> (When the "desktop" was an '11 sitting in a back room, it wasn't seen
> as ubiquitous.)

I don't assume there is any direct connection between the desktop world
and the embedded world - the needs are usually very different. There is
a small overlap in the area of embedded devices with good networking and
a gui, where similarity to the desktop world is useful.

We have had 32-bit microcontrollers for decades. I used a 16-bit
Windows system when working with my first 32-bit microcontroller. But
at that time, 32-bit microcontrollers cost a lot more and required more
from the board (external memories, more power, etc.) than 8-bit or
16-bit devices. That has gradually changed with an almost total
disregard for what has happened in the desktop world.

Yes, the embedded world /did/ cry out for 32-bit microcontrollers for an
increasing proportion of tasks. We cried many tears when the
microcontroller manufacturers offered to give more flash space to their
8-bit devices by having different memory models, banking, far jumps, and
all the other shit that goes with not having a big enough address space.
We cried out when we wanted to have Ethernet and the microcontroller
only had a few KB of ram. I have used maybe 6 or 8 different 32-bit
microcontroller processor architectures, and I used them because I
needed them for the task. It's only in the past 5+ years that I have
been using 32-bit microcontrollers for tasks that could be done fine
with 8-bit devices, but the 32-bit devices are smaller, cheaper and
easier to work with than the corresponding 8-bit parts.

>
> In the future, we'll see the 64b *phone* world drive the evolution
> of embedded designs, similarly.  (do you really need 32b/64b to
> make a phone?  how much code is actually executing at any given
> time and in how many different containers?)
>

We will see that on devices that are, roughly speaking, tablets -
embedded systems with a good gui, a touchscreen, networking. And that's
fine. But these are a tiny proportion of the embedded devices made.

>
>> The OP sounds more like a salesman than someone who actually works with
>> embedded development in reality.
>
> Possibly.  Or, just someone that wanted to stir up discussion...
>

Could be. And there's no harm in that!

Re: 64-bit embedded computing is here and now

<Iyz*Vc+ly@news.chiark.greenend.org.uk>

https://www.novabbs.com/devel/article-flat.php?id=496&group=comp.arch.embedded#496

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!news.nntp4.net!nntp.terraraq.uk!nntp-feed.chiark.greenend.org.uk!ewrotcd!.POSTED!not-for-mail
From: theom+n...@chiark.greenend.org.uk (Theo)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: 08 Jun 2021 15:46:22 +0100 (BST)
Organization: University of Cambridge, England
Lines: 38
Message-ID: <Iyz*Vc+ly@news.chiark.greenend.org.uk>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com> <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
NNTP-Posting-Host: chiark.greenend.org.uk
X-Trace: chiark.greenend.org.uk 1623163585 20017 212.13.197.229 (8 Jun 2021 14:46:25 GMT)
X-Complaints-To: abuse@chiark.greenend.org.uk
NNTP-Posting-Date: Tue, 8 Jun 2021 14:46:25 +0000 (UTC)
User-Agent: tin/1.8.3-20070201 ("Scotasay") (UNIX) (Linux/3.16.0-7-amd64 (x86_64))
Originator: theom@chiark.greenend.org.uk ([212.13.197.229])
 by: Theo - Tue, 8 Jun 2021 14:46 UTC

David Brown <david.brown@hesbynett.no> wrote:
> But for microcontrollers - which dominate embedded systems - there has
> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
> cost. There is almost nothing to gain from a move to 64-bit, but the
> cost would be a good deal higher. So it is not going to happen - at
> least not more than a very small and very gradual change.

I think there will be divergence about what people mean by an N-bit system:

Register size
Unit of logical/arithmetical processing
Memory address/pointer size
Memory bus/cache width

I think we will increasingly see parts which have different sizes on one
area but not the other.

For example, for doing some kinds of logical operations (eg crypto), having
64-bit registers and ALU makes sense, but you might only need kilobytes of
memory so only have <32 address bits.
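
(For a flavour of that: a SipHash-style mixing round is nothing but
64-bit adds, rotates and xors -- a handful of instructions with 64-bit
registers, but a pile of multi-word operations on a 32-bit ALU:)

    #include <stdint.h>

    #define ROTL64(x, b) ((uint64_t)(((x) << (b)) | ((x) >> (64 - (b)))))

    static void sip_round(uint64_t v[4])
    {
        v[0] += v[1]; v[1] = ROTL64(v[1], 13); v[1] ^= v[0]; v[0] = ROTL64(v[0], 32);
        v[2] += v[3]; v[3] = ROTL64(v[3], 16); v[3] ^= v[2];
        v[0] += v[3]; v[3] = ROTL64(v[3], 21); v[3] ^= v[0];
        v[2] += v[1]; v[1] = ROTL64(v[1], 17); v[1] ^= v[2]; v[2] = ROTL64(v[2], 32);
    }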

For something else, like a microcontroller that's hung off the side of a
bigger system (eg the MCU on a PCIe card) you might want the ability to
handle 64 bit addresses but don't need to pay the price for 64-bit
registers.

Or you might operate with 16 or 32 bit wide external RAM chip, but your
cache could extend that to a wider word width.

There are many permutations, and I think people will pay the cost where it
benefits them and not where it doesn't.

This is not a new phenomenon, of course. But for a time all these numbers
were in the range between 16 and 32 bits, which made 32 simplest all round.
Just like we previously had various 8/16 hybrids (eg 8 bit datapath, 16 bit
address) I think we're going to see more 32/64 hybrids.

Theo

Re: 64-bit embedded computing is here and now

<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=497&group=comp.arch.embedded#497

Newsgroups: comp.arch.embedded
X-Received: by 2002:a05:620a:1f7:: with SMTP id x23mr3034904qkn.160.1623181124756; Tue, 08 Jun 2021 12:38:44 -0700 (PDT)
X-Received: by 2002:a25:42d4:: with SMTP id p203mr33315560yba.97.1623181124509; Tue, 08 Jun 2021 12:38:44 -0700 (PDT)
Path: i2pn2.org!i2pn.org!paganini.bofh.team!news.dns-netz.com!news.freedyn.net!newsfeed.xs4all.nl!newsfeed8.news.xs4all.nl!tr1.eu1.usenetexpress.com!feeder.usenetexpress.com!tr2.iad1.usenetexpress.com!border1.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch.embedded
Date: Tue, 8 Jun 2021 12:38:44 -0700 (PDT)
In-Reply-To: <s9n6rb$t19$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:a0d0:9f90:adc5:cbde:e815:5766; posting-account=AoizIQoAAADa7kQDpB0DAj2jwddxXUgl
NNTP-Posting-Host: 2600:1700:a0d0:9f90:adc5:cbde:e815:5766
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com> <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me> <s9n6rb$t19$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
Subject: Re: 64-bit embedded computing is here and now
From: jim.brak...@ieee.org (James Brakefield)
Injection-Date: Tue, 08 Jun 2021 19:38:44 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 100
 by: James Brakefield - Tue, 8 Jun 2021 19:38 UTC

On Tuesday, June 8, 2021 at 2:39:29 AM UTC-5, Don Y wrote:
> On 6/7/2021 10:59 PM, David Brown wrote:
> > 8-bit microcontrollers are still far more common than 32-bit devices in
> > the embedded world (and 4-bit devices are not gone yet). At the other
> > end, 64-bit devices have been used for a decade or two in some kinds of
> > embedded systems.
> I contend that a good many "32b" implementations are really glorified
> 8/16b applications that exhausted their memory space. I still see lots
> of designs build on a small platform (8/16b) and augment it -- either
> with some "memory enhancement" technology or additional "slave"
> processors to split the binaries. Code increases in complexity but
> there doesn't seem to be a need for the "work-per-unit-time" to.
>
> [This has actually been the case for a long time. The appeal of
> newer CPUs is often in the set of peripherals that accompany the
> processor, not the processor itself.]
> > We'll see 64-bit take a greater proportion of the embedded systems that
> > demand high throughput or processing power (network devices, hard cores
> > in expensive FPGAs, etc.) where the extra cost in dollars, power,
> > complexity, board design are not a problem. They will probably become
> > more common in embedded Linux systems as the core itself is not usually
> > the biggest part of the cost. And such systems are definitely on the
> > increase.
> >
> > But for microcontrollers - which dominate embedded systems - there has
> > been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
> I disagree. The "cost" (barrier) that I see clients facing is the
> added complexity of a 32b platform and how it often implies (or even
> *requires*) a more formal OS underpinning the application. Where you
> could hack together something on bare metal in the 8/16b worlds,
> moving to 32 often requires additional complexity in managing
> mechanisms that aren't usually present in smaller CPUs (caches,
> MMU/MPU, DMA, etc.) Developers (and their organizations) can't just
> play "coder cowboy" and coerce the hardware to behaving as they
> would like. Existing staff (hired with the "bare metal" mindset)
> are often not equipped to move into a more structured environment.
>
> [I can hack together a device to meet some particular purpose
> much easier on "development hardware" than I can on a "PC" -- simply
> because there's too much I have to "work around" on a PC that isn't
> present on development hardware.]
>
> Not every product needs a filesystem, network stack, protected
> execution domains, etc. Those come with additional costs -- often
> in the form of a lack of understanding as to what the ACTUAL
> code in your product is doing at any given time. (this isn't the
> case in the smaller MCU world; it's possible for a developer to
> have written EVERY line of code in a smaller platform)
> > cost. There is almost nothing to gain from a move to 64-bit, but the
> > cost would be a good deal higher.
> Why is the cost "a good deal higher"? Code/data footprints don't
> uniformly "double" in size. The CPU doesn't slow down to handle
> bigger data.
>
> The cost is driven by where the market goes. Note how many 68Ks found
> design-ins vs. the T11, F11, 16032, etc. My first 32b design was
> physically large, consumed a boatload of power and ran at only a modest
> improvement (in terms of system clock) over 8b processors of its day.
> Now, I can buy two orders of magnitude more horsepower PLUS a
> bunch of built-in peripherals for two cups of coffee (at QTY 1)
> > So it is not going to happen - at
> > least not more than a very small and very gradual change.
> We got 32b processors NOT because the embedded world cried out for
> them but, rather, because of the influence of the 32b desktop world.
> We've had 32b processors since the early 80's. But, we've only had
> PCs since about the same timeframe! One assumes ubiquity in the
> desktop world would need to happen before any real spillover to embedded.
> (When the "desktop" was an '11 sitting in a back room, it wasn't seen
> as ubiquitous.)
>
> In the future, we'll see the 64b *phone* world drive the evolution
> of embedded designs, similarly. (do you really need 32b/64b to
> make a phone? how much code is actually executing at any given
> time and in how many different containers?)
>
> [The OP suggests MCus with radios -- maybe they'll be cell phone
> radios and *not* wifi/BLE as I assume he's thinking! Why add the
> need for some sort of access point to a product's deployment if
> the product *itself* can make a direct connection??]
>
> My current design can't fill a 32b address space (but, that's because
> I've decomposed apps to the point that they can be relatively small).
> OTOH, designing a system with a 32b limitation seems like an invitation
> to do it over when 64b is "cost effective". The extra "baggage" has
> proven to be relatively insignificant (I have ports of my codebase
> to SPARC as well as Atom running alongside a 32b ARM)
> > The OP sounds more like a salesman than someone who actually works with
> > embedded development in reality.
> Possibly. Or, just someone that wanted to stir up discussion...

|> I contend that a good many "32b" implementations are really glorified
|> 8/16b applications that exhausted their memory space.

The only thing that will take more than 4GB is video or a day's worth of photos.
So there are likely to be some embedded apps that need a > 32-bit address space.
Cost, size or storage capacity are no longer limiting factors.

Am trying to puzzle out what a 64-bit embedded processor should look like.
At the low end, yeah, a simple RISC processor. And support for complex arithmetic
using 32-bit floats? And support for pixel alpha blending using quad 16-bit numbers?
32-bit pointers into the software?

Re: 64-bit embedded computing is here and now

<s9oit7$1mi$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=498&group=comp.arch.embedded#498

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: david.br...@hesbynett.no (David Brown)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 22:11:18 +0200
Organization: A noiseless patient Spider
Lines: 52
Message-ID: <s9oit7$1mi$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 20:11:19 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="84073352f0212cfea752c7534e2f3d93";
logging-data="1746"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/AWl71HNBkm+o9t9YxNAa02pinJAAqXE0="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.10.0
Cancel-Lock: sha1:TcscywRdUabWvbfyyxHcsdga/sg=
In-Reply-To: <5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
Content-Language: en-GB
 by: David Brown - Tue, 8 Jun 2021 20:11 UTC

On 08/06/2021 21:38, James Brakefield wrote:

Could you explain your background here, and what you are trying to get
at? That would make it easier to give you better answers.

> The only thing that will take more than 4GB is video or a day's worth of photos.

No, video is not the only thing that takes 4GB or more. But it is,
perhaps, one of the more common cases. Most embedded systems don't need
anything remotely like that much memory - to the nearest percent, 100%
of embedded devices don't even need close to 4MB of memory (ram and
flash put together).

> So there is likely to be some embedded aps that need a > 32-bit address space.

Some, yes. Many, no.

> Cost, size or storage capacity are no longer limiting factors.

Cost and size (and power) are /always/ limiting factors in embedded systems.

>
> Am trying to puzzle out what a 64-bit embedded processor should look like.

There are plenty to look at. There are ARMs, PowerPC, MIPS, RISC-V.
And of course there are some x86 processors used in embedded systems.

> At the low end, yeah, a simple RISC processor.

Pretty much all processors except x86 and brain-dead old-fashioned 8-bit
CISC devices are RISC. Not all are simple.

> And support for complex arithmetic
> using 32-bit floats?

A 64-bit processor will certainly support 64-bit doubles as well as
32-bit floats. Complex arithmetic is rarely needed, except perhaps for
FFT's, but is easily done using real arithmetic. You can happily do
32-bit complex arithmetic on an 8-bit AVR, albeit taking significant
code space and run time. I believe the latest gcc for the AVR will do
64-bit doubles as well - using exactly the same C code you would on any
other processor.
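
(For instance, a complex multiply is just four real multiplies and two
adds/subtracts, whatever the word size of the core:)

    typedef struct { float re, im; } cfloat;

    /* (a.re + i*a.im) * (b.re + i*b.im) using only real float ops */
    static cfloat cmul(cfloat a, cfloat b)
    {
        cfloat r;
        r.re = a.re * b.re - a.im * b.im;
        r.im = a.re * b.im + a.im * b.re;
        return r;
    }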

> And support for pixel alpha blending using quad 16-bit numbers?

You would use a hardware 2D graphics accelerator for that, not the
processor.

> 32-bit pointers into the software?
>

With 64-bit processors you usually use 64-bit pointers.
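
(If the size of pointers ever really mattered, one common workaround --
sketched here with illustrative names, not tied to any toolchain -- is to
keep data in one region and store 32-bit offsets instead of full 64-bit
pointers:)

    #include <stdint.h>

    static uint8_t arena[1u << 20];   /* all linked data lives in here */

    typedef uint32_t ref32;           /* offset from arena start, not a pointer */

    static inline void  *ref_to_ptr(ref32 r) { return &arena[r]; }
    static inline ref32  ptr_to_ref(void *p)
    {
        return (ref32)((uint8_t *)p - arena);
    }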

Re: 64-bit embedded computing is here and now

<s9ojag$4ce$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=499&group=comp.arch.embedded#499

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: david.br...@hesbynett.no (David Brown)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 22:18:23 +0200
Organization: A noiseless patient Spider
Lines: 82
Message-ID: <s9ojag$4ce$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<Iyz*Vc+ly@news.chiark.greenend.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 20:18:24 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="84073352f0212cfea752c7534e2f3d93";
logging-data="4494"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18b0dawMSdcTV36gq09Z5UM+SyQ+zMtNDw="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.10.0
Cancel-Lock: sha1:EWtLtJ24grA1XjUw7++50qGhVzc=
In-Reply-To: <Iyz*Vc+ly@news.chiark.greenend.org.uk>
Content-Language: en-GB
 by: David Brown - Tue, 8 Jun 2021 20:18 UTC

On 08/06/2021 16:46, Theo wrote:
> David Brown <david.brown@hesbynett.no> wrote:
>> But for microcontrollers - which dominate embedded systems - there has
>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>> cost. There is almost nothing to gain from a move to 64-bit, but the
>> cost would be a good deal higher. So it is not going to happen - at
>> least not more than a very small and very gradual change.
>
> I think there will be divergence about what people mean by an N-bit system:

There has always been different ways to measure the width of a cpu, and
different people have different preferences.

>
> Register size

Yes, that is common.

> Unit of logical/arithmetical processing

As is that. Sometimes the width supported by general instructions
differs from the ALU width, however, resulting in classifications like
8/16-bit for the Z80 and 16/32-bit for the 68000.

> Memory address/pointer size

Yes, also common.

> Memory bus/cache width

No, that is not a common way to measure cpu "width", for many reasons.
A chip is likely to have many buses outside the cpu core itself (and the
cache(s) may or may not be considered part of the core). It's common to
have 64-bit wide buses on 32-bit processors, it's also common to have
16-bit external databuses on a microcontroller. And the cache might be
128 bits wide.

>
> I think we will increasingly see parts which have different sizes on one
> area but not the other.
>

That has always been the case.

> For example, for doing some kinds of logical operations (eg crypto), having
> 64-bit registers and ALU makes sense, but you might only need kilobytes of
> memory so only have <32 address bits.

You need quite a few KB of ram for more serious cryptography. But it
sounds more like you are talking about SIMD or vector operations here,
which are not considered part of the "normal" width of the cpu. Modern
x86 cpus might have 512 bit SIMD registers - but they are still 64-bit
processors.

But you are right that you might want some parts of the system to be
wider and other parts thinner.

>
> For something else, like a microcontroller that's hung off the side of a
> bigger system (eg the MCU on a PCIe card) you might want the ability to
> handle 64 bit addresses but don't need to pay the price for 64-bit
> registers.
>
> Or you might operate with 16 or 32 bit wide external RAM chip, but your
> cache could extend that to a wider word width.
>
> There are many permutations, and I think people will pay the cost where it
> benefits them and not where it doesn't.
>

Agreed.

> This is not a new phenomenon, of course. But for a time all these numbers
> were in the range between 16 and 32 bits, which made 32 simplest all round.
> Just like we previously had various 8/16 hybrids (eg 8 bit datapath, 16 bit
> address) I think we're going to see more 32/64 hybrids.
>

32-bit processors have often had 64-bit registers for floating point,
and 64-bit operations of various sorts. It is not new.

Re: 64-bit embedded computing is here and now

<s9okhu$e58$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=500&group=comp.arch.embedded#500

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: dp...@tgi-sci.com (Dimiter_Popoff)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 23:39:24 +0300
Organization: TGI
Lines: 31
Message-ID: <s9okhu$e58$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<Iyz*Vc+ly@news.chiark.greenend.org.uk> <s9ojag$4ce$1@dont-email.me>
Reply-To: dp@tgi-sci.com
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 20:39:26 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="32c9020f0ce3d5222317f9f5fb9b6d38";
logging-data="14504"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/1R69Qwb4d0C6qSDg4E0r4FS/yM45GR6k="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
Thunderbird/78.11.0
Cancel-Lock: sha1:+nhMp8mZH2q00WNqYcJPj3mnHW4=
In-Reply-To: <s9ojag$4ce$1@dont-email.me>
Content-Language: en-US
 by: Dimiter_Popoff - Tue, 8 Jun 2021 20:39 UTC

On 6/8/2021 23:18, David Brown wrote:
> On 08/06/2021 16:46, Theo wrote:
>> ......
>
>> Memory bus/cache width
>
> No, that is not a common way to measure cpu "width", for many reasons.
> A chip is likely to have many buses outside the cpu core itself (and the
> cache(s) may or may not be considered part of the core). It's common to
> have 64-bit wide buses on 32-bit processors, it's also common to have
> 16-bit external databuses on a microcontroller. And the cache might be
> 128 bits wide.

I agree with your points and those of Theo, but isn't the cache basically
as wide as the registers, logically speaking? A cache line is several
times that; probably you are referring to that.
Not that it makes much of a difference to the fact that 64 bit data
buses/registers in an MCU (apart from FPU registers, 32 bit FPUs are
useless to me) are unlikely to attract much interest, nothing of
significance to be gained as you said.
To me 64 bit CPUs are of interest of course and thankfully there are
some available, but this goes somewhat past what we call "embedded".
Not long ago in a chat with a guy who knew some of ARM 64 bit I gathered
there is some real mess with their out of order execution, one needs to
do... hmmmm.. "sync", whatever they call it, all the time and there is
a huge performance cost because of that. Anybody heard anything about
it? (I only know what I was told).
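
(Presumably that refers to explicit memory barriers on a weakly-ordered
core. Sketched with standard C11 atomics -- on AArch64 the release/acquire
pair below compiles to store-release/load-acquire (or DMB) instructions,
which is where the cost shows up:)

    #include <stdatomic.h>
    #include <stdint.h>

    static uint32_t   payload;   /* plain shared data */
    static atomic_int ready;     /* zero-initialised flag */

    void publish(uint32_t v)                 /* producer side */
    {
        payload = v;
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int try_consume(uint32_t *out)           /* consumer side */
    {
        if (atomic_load_explicit(&ready, memory_order_acquire)) {
            *out = payload;   /* ordering guarantees we see the new value */
            return 1;
        }
        return 0;
    }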

Dimiter

Re: 64-bit embedded computing is here and now

<61db88d3-3842-4a76-921f-cbbcaf54c25cn@googlegroups.com>

https://www.novabbs.com/devel/article-flat.php?id=501&group=comp.arch.embedded#501

Newsgroups: comp.arch.embedded
X-Received: by 2002:ac8:75c3:: with SMTP id z3mr23090707qtq.308.1623187521783; Tue, 08 Jun 2021 14:25:21 -0700 (PDT)
X-Received: by 2002:a25:dbc4:: with SMTP id g187mr34926093ybf.142.1623187521571; Tue, 08 Jun 2021 14:25:21 -0700 (PDT)
Path: i2pn2.org!i2pn.org!news.neodome.net!feeder5.feed.usenet.farm!feeder1.feed.usenet.farm!feed.usenet.farm!tr2.eu1.usenetexpress.com!feeder.usenetexpress.com!tr3.iad1.usenetexpress.com!border1.nntp.dca1.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch.embedded
Date: Tue, 8 Jun 2021 14:25:21 -0700 (PDT)
In-Reply-To: <s9oit7$1mi$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:a0d0:9f90:adc5:cbde:e815:5766; posting-account=AoizIQoAAADa7kQDpB0DAj2jwddxXUgl
NNTP-Posting-Host: 2600:1700:a0d0:9f90:adc5:cbde:e815:5766
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com> <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me> <s9n6rb$t19$1@dont-email.me> <5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com> <s9oit7$1mi$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <61db88d3-3842-4a76-921f-cbbcaf54c25cn@googlegroups.com>
Subject: Re: 64-bit embedded computing is here and now
From: jim.brak...@ieee.org (James Brakefield)
Injection-Date: Tue, 08 Jun 2021 21:25:21 +0000
Content-Type: text/plain; charset="UTF-8"
Lines: 46
 by: James Brakefield - Tue, 8 Jun 2021 21:25 UTC

On Tuesday, June 8, 2021 at 3:11:24 PM UTC-5, David Brown wrote:
> On 08/06/2021 21:38, James Brakefield wrote:
>
> Could you explain your background here, and what you are trying to get
> at? That would make it easier to give you better answers.
> > The only thing that will take more than 4GB is video or a day's worth of photos.
> No, video is not the only thing that takes 4GB or more. But it is,
> perhaps, one of the more common cases. Most embedded systems don't need
> anything remotely like that much memory - to the nearest percent, 100%
> of embedded devices don't even need close to 4MB of memory (ram and
> flash put together).
> > So there is likely to be some embedded aps that need a > 32-bit address space.
> Some, yes. Many, no.
> > Cost, size or storage capacity are no longer limiting factors.
> Cost and size (and power) are /always/ limiting factors in embedded systems.
> >
> > Am trying to puzzle out what a 64-bit embedded processor should look like.
> There are plenty to look at. There are ARMs, PowerPC, MIPS, RISC-V.
> And of course there are some x86 processors used in embedded systems.
> > At the low end, yeah, a simple RISC processor.
> Pretty much all processors except x86 and brain-dead old-fashioned 8-bit
> CISC devices are RISC. Not all are simple.
> > And support for complex arithmetic
> > using 32-bit floats?
> A 64-bit processor will certainly support 64-bit doubles as well as
> 32-bit floats. Complex arithmetic is rarely needed, except perhaps for
> FFT's, but is easily done using real arithmetic. You can happily do
> 32-bit complex arithmetic on an 8-bit AVR, albeit taking significant
> code space and run time. I believe the latest gcc for the AVR will do
> 64-bit doubles as well - using exactly the same C code you would on any
> other processor.
> > And support for pixel alpha blending using quad 16-bit numbers?
> You would use a hardware 2D graphics accelerator for that, not the
> processor.
> > 32-bit pointers into the software?
> >
> With 64-bit processors you usually use 64-bit pointers.

|> Could you explain your background here, and what you are trying to get at?

Am familiar with embedded systems, image processing and scientific applications.
Have used a number of 8, 16, 32 and ~64bit processors. Have also done work in
FPGAs. Am semi-retired and when working was always trying to stay ahead of
new opportunities and challenges.

Some of my questions/comments belong over at comp.arch

Re: 64-bit embedded computing is here and now

<s9opbi$gtu$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=502&group=comp.arch.embedded#502

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: dp...@tgi-sci.com (Dimiter_Popoff)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 01:01:21 +0300
Organization: TGI
Lines: 121
Message-ID: <s9opbi$gtu$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
Reply-To: dp@tgi-sci.com
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 8 Jun 2021 22:01:22 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="bc8b6acb1416ddd781bfd8aa807a1e3a";
logging-data="17342"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19xYmJ1PWZgyIZK643bX1WYpxVt5XAfm+Q="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
Thunderbird/78.11.0
Cancel-Lock: sha1:41qeeqXzEst7ywhbmUJYMnZ9YbE=
In-Reply-To: <5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
Content-Language: en-US
 by: Dimiter_Popoff - Tue, 8 Jun 2021 22:01 UTC

On 6/8/2021 22:38, James Brakefield wrote:
> On Tuesday, June 8, 2021 at 2:39:29 AM UTC-5, Don Y wrote:
>> On 6/7/2021 10:59 PM, David Brown wrote:
>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>> the embedded world (and 4-bit devices are not gone yet). At the other
>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>> embedded systems.
>> I contend that a good many "32b" implementations are really glorified
>> 8/16b applications that exhausted their memory space. I still see lots
>> of designs build on a small platform (8/16b) and augment it -- either
>> with some "memory enhancement" technology or additional "slave"
>> processors to split the binaries. Code increases in complexity but
>> there doesn't seem to be a need for the "work-per-unit-time" to.
>>
>> [This has actually been the case for a long time. The appeal of
>> newer CPUs is often in the set of peripherals that accompany the
>> processor, not the processor itself.]
>>> We'll see 64-bit take a greater proportion of the embedded systems that
>>> demand high throughput or processing power (network devices, hard cores
>>> in expensive FPGAs, etc.) where the extra cost in dollars, power,
>>> complexity, board design are not a problem. They will probably become
>>> more common in embedded Linux systems as the core itself is not usually
>>> the biggest part of the cost. And such systems are definitely on the
>>> increase.
>>>
>>> But for microcontrollers - which dominate embedded systems - there has
>>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>> I disagree. The "cost" (barrier) that I see clients facing is the
>> added complexity of a 32b platform and how it often implies (or even
>> *requires*) a more formal OS underpinning the application. Where you
>> could hack together something on bare metal in the 8/16b worlds,
>> moving to 32 often requires additional complexity in managing
>> mechanisms that aren't usually present in smaller CPUs (caches,
>> MMU/MPU, DMA, etc.) Developers (and their organizations) can't just
>> play "coder cowboy" and coerce the hardware to behaving as they
>> would like. Existing staff (hired with the "bare metal" mindset)
>> are often not equipped to move into a more structured environment.
>>
>> [I can hack together a device to meet some particular purpose
>> much easier on "development hardware" than I can on a "PC" -- simply
>> because there's too much I have to "work around" on a PC that isn't
>> present on development hardware.]
>>
>> Not every product needs a filesystem, network stack, protected
>> execution domains, etc. Those come with additional costs -- often
>> in the form of a lack of understanding as to what the ACTUAL
>> code in your product is doing at any given time. (this isn't the
>> case in the smaller MCU world; it's possible for a developer to
>> have written EVERY line of code in a smaller platform)
>>> cost. There is almost nothing to gain from a move to 64-bit, but the
>>> cost would be a good deal higher.
>> Why is the cost "a good deal higher"? Code/data footprints don't
>> uniformly "double" in size. The CPU doesn't slow down to handle
>> bigger data.
>>
>> The cost is driven by where the market goes. Note how many 68Ks found
>> design-ins vs. the T11, F11, 16032, etc. My first 32b design was
>> physically large, consumed a boatload of power and ran at only a modest
>> improvement (in terms of system clock) over 8b processors of its day.
>> Now, I can buy two orders of magnitude more horsepower PLUS a
>> bunch of built-in peripherals for two cups of coffee (at QTY 1)
>>> So it is not going to happen - at
>>> least not more than a very small and very gradual change.
>> We got 32b processors NOT because the embedded world cried out for
>> them but, rather, because of the influence of the 32b desktop world.
>> We've had 32b processors since the early 80's. But, we've only had
>> PCs since about the same timeframe! One assumes ubiquity in the
>> desktop world would need to happen before any real spillover to embedded.
>> (When the "desktop" was an '11 sitting in a back room, it wasn't seen
>> as ubiquitous.)
>>
>> In the future, we'll see the 64b *phone* world drive the evolution
>> of embedded designs, similarly. (do you really need 32b/64b to
>> make a phone? how much code is actually executing at any given
>> time and in how many different containers?)
>>
>> [The OP suggests MCus with radios -- maybe they'll be cell phone
>> radios and *not* wifi/BLE as I assume he's thinking! Why add the
>> need for some sort of access point to a product's deployment if
>> the product *itself* can make a direct connection??]
>>
>> My current design can't fill a 32b address space (but, that's because
>> I've decomposed apps to the point that they can be relatively small).
>> OTOH, designing a system with a 32b limitation seems like an invitation
>> to do it over when 64b is "cost effective". The extra "baggage" has
>> proven to be relatively insignificant (I have ports of my codebase
>> to SPARC as well as Atom running alongside a 32b ARM)
>>> The OP sounds more like a salesman than someone who actually works with
>>> embedded development in reality.
>> Possibly. Or, just someone that wanted to stir up discussion...
>
> |> I contend that a good many "32b" implementations are really glorified
> |> 8/16b applications that exhausted their memory space.
>
> The only thing that will take more than 4GB is video or a day's worth of photos.
> So there is likely to be some embedded aps that need a > 32-bit address space.
> Cost, size or storage capacity are no longer limiting factors.
>
> Am trying to puzzle out what a 64-bit embedded processor should look like.
> At the low end, yeah, a simple RISC processor. And support for complex arithmetic
> using 32-bit floats? And support for pixel alpha blending using quad 16-bit numbers?
> 32-bit pointers into the software?
>

The real value in 64 bit integer registers and 64 bit address space is
just that, having an orthogonal "endless" space (well I remember some
30 years ago 32 bits seemed sort of "endless" to me...).

Not needing to assign overlapping logical addresses to anything
can make a big difference to how the OS is done.

A 32 bit FPU seems useless to me, 64 bit is OK. Although 32 bit FP
*numbers* can be quite useful for storing/passing data.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

Re: 64-bit embedded computing is here and now

<s9osrt$5fa$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=503&group=comp.arch.embedded#503

Newsgroups: comp.arch.embedded
From: blockedo...@foo.invalid (Don Y)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 16:00:54 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<Iyz*Vc+ly@news.chiark.greenend.org.uk>
 by: Don Y - Tue, 8 Jun 2021 23:00 UTC

On 6/8/2021 7:46 AM, Theo wrote:
> David Brown <david.brown@hesbynett.no> wrote:
>> But for microcontrollers - which dominate embedded systems - there has
>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>> cost. There is almost nothing to gain from a move to 64-bit, but the
>> cost would be a good deal higher. So it is not going to happen - at
>> least not more than a very small and very gradual change.
>
> I think there will be divergence about what people mean by an N-bit system:
>
> Register size
> Unit of logical/arithmetical processing
> Memory address/pointer size
> Memory bus/cache width

(General) Register size is the primary driver.

A processor can have very different "size" subcomponents.
E.g., a Z80 is an 8b processor -- registers are nominally 8b.
However, it supports 16b operations -- on register PAIRs
(an implicit acknowledgement that the REGISTER is smaller
than the register pair). This is common on many smaller
processors. The address space is 16b -- with a separate 16b
address space for I/Os. The Z180 extends the PHYSICAL
address space to 20b but the logical address space
remains unchanged at 16b (if you want to specify a physical
address, you must use 20+ bits to represent it -- and invoke
a separate mechanism to access it!). The ALU is *4* bits.
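
(For the curious, the Z180 translation works roughly like this -- a C
sketch written from memory of the CBAR/BBR/CBR scheme, so treat the
details as approximate rather than as a reference implementation:)

#include <stdint.h>

/* Z180-style MMU: 16b logical address -> 20b physical address.
   CBAR splits the 64K logical space into Common Area 0, the Bank
   Area and Common Area 1; BBR/CBR hold 4K-page base offsets. */
uint32_t z180_physical(uint16_t logical,
                       uint8_t cbar, uint8_t bbr, uint8_t cbr)
{
    uint8_t page    = logical >> 12;   /* 4K logical page number    */
    uint8_t bank_lo = cbar & 0x0F;     /* start of Bank Area        */
    uint8_t ca1_lo  = cbar >> 4;       /* start of Common Area 1    */

    if (page >= ca1_lo)                /* Common Area 1             */
        return (uint32_t)logical + ((uint32_t)cbr << 12);
    if (page >= bank_lo)               /* Bank Area                 */
        return (uint32_t)logical + ((uint32_t)bbr << 12);
    return logical;                    /* Common Area 0, mapped 1:1 */
}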

Cache? Which one? I or D? L1/2/3/?

What about the oddballs -- 12b? 1b?

> I think we will increasingly see parts which have different sizes on one
> area but not the other.
>
> For example, for doing some kinds of logical operations (eg crypto), having
> 64-bit registers and ALU makes sense, but you might only need kilobytes of
> memory so only have <32 address bits.

That depends on the algorithm chosen and the hardware support available.

> For something else, like a microcontroller that's hung off the side of a
> bigger system (eg the MCU on a PCIe card) you might want the ability to
> handle 64 bit addresses but don't need to pay the price for 64-bit
> registers.
>
> Or you might operate with 16 or 32 bit wide external RAM chip, but your
> cache could extend that to a wider word width.
>
> There are many permutations, and I think people will pay the cost where it
> benefits them and not where it doesn't.

But you don't buy MCUs with a-la-carte pricing. How much does an extra
timer cost me? What if I want it to also serve as a *counter*? What
cost for 100K of internal ROM? 200K?

[It would be an interesting exercise to try to do a linear analysis of
product prices with an idea of trying to tease out the "costs" (to
the developer) for each feature in EXISTING products!]

Instead, you see a *price* that is reflective of how widely used the
device happens to be, today. You are reliant on the preferences of others
to determine which is the most cost effective product -- for *you*.

E.g., most of my devices have no "display" -- yet, the MCU I've chosen
has hardware support for same. It would obviously cost me more to
select a device WITHOUT that added capability -- because most
purchasers *want* a display (and *they* drive the production economies).

I could, potentially, use a 2A03 for some applications. But, the "TCO"
of such an approach would exceed that of a 32b (or larger) processor!

[What a crazy world!]

> This is not a new phenomenon, of course. But for a time all these numbers
> were in the range between 16 and 32 bits, which made 32 simplest all round.
> Just like we previously had various 8/16 hybrids (eg 8 bit datapath, 16 bit
> address) I think we're going to see more 32/64 hybrids.
>
> Theo
>

Re: 64-bit embedded computing is here and now

<s9ovrk$k0b$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=504&group=comp.arch.embedded#504

Newsgroups: comp.arch.embedded
From: blockedo...@foo.invalid (Don Y)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 16:51:56 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
 by: Don Y - Tue, 8 Jun 2021 23:51 UTC

On 6/8/2021 12:38 PM, James Brakefield wrote:

> |> I contend that a good many "32b" implementations are really glorified
> |> 8/16b applications that exhausted their memory space.
>
> The only thing that will take more than 4GB is video or a day's worth of photos.

That's not true. For example, I rely on a "PC" in my current design
to support the RDBMS. Otherwise, I would have to design a "special
node" (I have a distributed system) that had the resources necessary
to process multiple concurrent queries in a timely fashion; I can
put 100GB of RAM in a PC (whereas my current nodes only have 256MB).

The alternative is to rely on secondary (disk) storage -- which is
even worse!

And "video" is incredibly nondescript. It conjures ideas of STBs.
Instead, I see a wider range of applications in terms of *vision*.

E.g., let your doorbell camera "notice motion", recognize that
motion as indicative of someone/thing approaching it (e.g.,
a visitor), recognize the face/features of the visitor and
alert you to its presence (if desired). No need to involve a
cloud service to do this.

[My "doorbell" is a camera/microphone/speaker. *If* I want to
know that you are present, *it* will tell me. Or, if told to
do so, will grant you access to the house (even in my absence).
For "undesirables", I'm mounting a coin mechanism adjacent to
the entryway (our front door is protected by a gated porch area):
"Deposit 25c to ring bell. If we want to talk to you, your
deposit will be refunded. If *not*, consider that the *cost* of
pestering us!"]

There are surveillance cameras discreetly placed around the exterior
of the house (don't want the place to look like a frigging *bank*!).
One of them has a clear view of the mailbox (our mail is delivered
via letter carriers riding in mail trucks). Same front door camera
hardware. But, now: detect motion; detect motion STOPPING
proximate to mailbox (for a few seconds or more); detect motion
resuming; signal "mail available". Again, no need to involve a
cloud service to accomplish this. And, when not watching for mail
delivery, it's performing "general" surveillance -- mail detection
is a "free bonus"!

Imagine designing a vision-based inspection system where you "train"
the CAMERA -- instead of some box that the camera connects to. And,
the CAMERA signals accept/reject directly.

[I use a boatload of cameras, here; they are cheap sensors -- the
"cost" lies in the signal processing!]

> So there is likely to be some embedded aps that need a > 32-bit address space.
> Cost, size or storage capacity are no longer limiting factors.

No, cost, size and storage are ALWAYS limiting factors!

E.g., each of my nodes derive power from the wired network connection.
That puts a practical limit of ~12W on what a node can dissipate.
That has to support the processing core plus any local I/Os! Note
that dissipated power == heat. So, one also has to be conscious of
how that heat will affect the devices' environs.

(Yes, there are schemes to increase this to ~100W but now the cost
of providing power -- and BACKUP power -- to a remote device starts
to be a sizeable portion of the product's cost and complexity).

My devices are intended to be "invisible" to the user -- so, they
have to hide *inside* something (most commonly, the walls or
ceiling -- in standard Jboxes for accessibility and Code compliance).
So, that limits their size/volume (mine are about the volume of a
standard duplex receptacle -- 3 cu in -- so fit in even the smallest
of 1G boxes... even pancake boxes!)

They have to be inexpensive so I can justify using LOTS of them
(I will have 240 deployed, here; my industrial beta site will have
over 1000; commercial beta site almost a similar number). Not only
is the cost of initial acquisition of concern, but also the *perceived*
cost of maintaining the hardware in a functional state (customer
doesn't want to have $10K of spares on hand for rapid incident response
and staff to be able to diagnose and repair/replace "on demand")

In my case, I sidestep the PERSISTENT storage issue by relegating that
to the RDBMS. In *that* domain, I can freely add spinning rust or
an SSD without complicating the design of the rest of the nodes.
So, "storage" becomes:
- how much do I need for a secure bootstrap
- how much do I need to contain a downloaded (from the RDBMS!) binary
- how much do I need to keep "local runtime resources"
- how much can I exploit surplus capacity *elsewhere* in the system
to address transient needs

Imagine what it would be like having to replace "worn" SD cards
at some frequency in hundreds of devices scattered around hundreds
of "invisible" places! Almost as bad as replacing *batteries* in
those devices!

[Have you ever had an SD card suddenly write protect itself?]

> Am trying to puzzle out what a 64-bit embedded processor should look like.

"Should"? That depends on what you expect it to do for you.
The nonrecurring cost of development will become an ever-increasing
portion of the device's "cost". If you sell 10K units but spend
500K on development (over its lifetime), you have justification for
spending a few more dollars on recurring costs *if* you can realize
a reduction in development/maintenance costs (because the development
is easier, bugs are fewer/easier to find, etc.)
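
(For scale: 500K of development spread over 10K units is 50 per unit
of NRE -- a few extra dollars of recurring cost is small against that.)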

Developers (and silicon vendors, as Good Business Practice)
will look at their code and see what's "hard" to do, efficiently.
Then, consider mechanisms that could make that easier or
more effective.

I see the addition of hardware features that enhance the robustness
of the software development *process*. E.g., allowing for compartmentalizing
applications and subsystems more effectively and *efficiently*.

[I put individual objects into their own address space containers
to ensure Object A can't be mangled by Client B (or Object C). As
a result, talking to an object is expensive because I have to hop
back and forth across that protection boundary. It's even worse
when the targeted object is located on some other physical node
(as now I have the transport cost to contend with).]

Similarly, making communications more robust. We already see that
with crypto accelerators. The idea of device "islands" is
obsolescent. Increasingly, devices will interact with other
devices to solve problems. More processing will move to the
edge simply because of scaling issues (I can add more CPUs
far more effectively than I can increase the performance of
a "centralized" CPU; add another sense/control point? let *it*
bring some processing abilities along with it!).

And, securing the product from tampering/counterfeiting; it seems
like most approaches, to date, have some hidden weakness. It's hard
to believe hardware can't ameliorate that. The fact that "obscurity"
is still relied upon by silicon vendors suggests an acknowledgement
of their weaknesses.

Beyond that? Likely more DSP-related support in the "native"
instruction set (so you can blend operations between conventional
computing needs and signal processing related issues).

And, graphics acceleration as many applications implement user
interfaces in the appliance.

There may be some other optimizations that help with hashing
or managing large "datasets" (without them being considered
formal datasets).

Power management (and measurement) will become increasingly
important (I spend almost as much on the "power supply"
as I do on the compute engine). Developers will want to be
able to easily ascertain what they are consuming as well
as why -- so they can (dynamically) alter their strategies.
In addition to varying CPU clock frequency, there may be
mechanisms to automatically (!) power down sections of
the die based on observed instruction sequences (instead
of me having to explicitly do so).

[E.g., I shed load when I'm running off backup power.
This involves powering down nodes as well as the "fields"
on selective nodes. How do I decide *which* load to shed to
gain the greatest benefit?]

Memory management (in the conventional sense) will likely
see more innovation. Instead of just "settling" for a couple
of page sizes, we might see "adjustable" page sizes.
Or, the ability to specify some PORTION of a *particular*
page as being "valid" -- instead of treating the entire
page as such.

Scheduling algorithms will hopefully get additional
hardware support. E.g., everything is deadline driven
in my design ("real-time"). So, schedulers are concerned
with evaluating the deadlines of "ready" tasks -- which
can vary, over time, as well as may need further qualification
based on other criteria (e.g., least-slack-time scheduling)
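
(For concreteness, a bare-bones least-slack-time pick looks something
like the C below -- the types and names are illustrative only, not
from any particular scheduler:)

#include <stdint.h>
#include <stddef.h>

typedef int64_t usec_t;                /* notional time base           */

struct task {
    usec_t deadline;                   /* absolute completion deadline */
    usec_t remaining;                  /* estimated work still to do   */
};

/* Pick the ready task with the least slack: (deadline - now - work
   remaining).  Assumes at least one task is ready. */
size_t pick_least_slack(const struct task *ready, size_t n, usec_t now)
{
    size_t best = 0;
    usec_t best_slack = ready[0].deadline - now - ready[0].remaining;

    for (size_t i = 1; i < n; i++) {
        usec_t slack = ready[i].deadline - now - ready[i].remaining;
        if (slack < best_slack) {
            best_slack = slack;
            best = i;
        }
    }
    return best;
}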

Everything in my system is an *opaque* object on which a
set of POSSIBLE methods can be invoked. But, each *Client*
of that object (an Actor may be multiple Clients if it possesses
multiple different Handles to the Object) is constrained as to
which methods can be invoked via a particular Handle.
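
(A toy C rendering of that constraint -- nothing like the actual
implementation, just the shape of the idea:)

#include <stdint.h>
#include <stdbool.h>

typedef uint32_t method_mask_t;        /* one bit per possible method    */

struct handle {
    uint32_t      object_id;           /* opaque reference to the Object */
    method_mask_t allowed;             /* methods THIS Handle may invoke */
};

/* Checked on the kernel side before an invocation is forwarded to
   the Object: the same Object may be reachable through different
   Handles carrying different "allowed" masks. */
bool may_invoke(const struct handle *h, unsigned method)
{
    return method < 32 && ((h->allowed >> method) & 1u);
}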


Click here to read the complete article
Re: 64-bit embedded computing is here and now

<s9p24n$tp2$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=505&group=comp.arch.embedded#505

Newsgroups: comp.arch.embedded
From: blockedo...@foo.invalid (Don Y)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 17:30:53 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
 by: Don Y - Wed, 9 Jun 2021 00:30 UTC

On 6/8/2021 4:04 AM, David Brown wrote:
> On 08/06/2021 09:39, Don Y wrote:
>> On 6/7/2021 10:59 PM, David Brown wrote:
>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>> the embedded world (and 4-bit devices are not gone yet). At the other
>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>> embedded systems.
>>
>> I contend that a good many "32b" implementations are really glorified
>> 8/16b applications that exhausted their memory space.
>
> Sure. Previously you might have used 32 kB flash on an 8-bit device,
> now you can use 64 kB flash on a 32-bit device. The point is, you are
> /not/ going to find yourself hitting GB limits any time soon. The step

I don't see the "problem" with 32b devices as one of address space limits
(except devices utilizing VMM with insanely large page sizes). As I said,
in my application, task address spaces are really just a handful of pages.

I *do* see (flat) address spaces that find themselves filling up with
stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
*partial* address decoding for offboard I/Os, etc. (i.e., you're
not likely going to fully decode a single address to access a set
of DIP switches as the decode logic is disproportionately high
relative to the functionality it adds)

How often do you see a high-order address line used for kernel/user?
(gee, now your "user" space has been halved)

> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
> of embedded systems. (Even for most embedded Linux systems, you usually
> only have a 64-bit cpu because you want bigger and faster, not because
> of memory limitations. It is only when you have a big gui with fast
> graphics that 32-bit address space becomes a limitation.)

You're assuming there has to be some "capacity" value to the 64b move.

You might discover that the ultralow power devices (for phones!)
are being offered in the process geometries targeted for the 64b
devices. Or, that some integrated peripheral "makes sense" for
phones (but not MCUs targeting motor control applications). Or,
that there are additional power management strategies supported
in the hardware.

In my mind, the distinction brought about by "32b" was more advanced
memory protection/management -- even if not used in a particular
application. You simply didn't see these sorts of mechanisms
in 8/16b offerings. Likewise, floating point accelerators. Working
in smaller processors meant you had to spend extra effort to
bullet-proof your code, economize on math operators, etc.

So, if you wanted the advantages of those (hardware) mechanisms,
you "upgraded" your design to 32b -- even if it didn't need
gobs of address space or generic MIPS. It just wasn't economical
to bolt on an AM9511 or practical to build a homebrew MMU.

> A 32-bit microcontroller is simply much easier to work with than an
> 8-bit or 16-bit with "extended" or banked memory to get beyond 64 K
> address space limits.

There have been some 8b processors that could seamlessly (in HLL)
handle extended address spaces. The Z180s were delightfully easy
to use, thusly. You just had to keep in mind that a "call" to
a different bank was more expensive than a "local" call (though
there were no syntactic differences; the linkage editor and runtime
package made this invisible to the developer).

We were selling products with 128K of DRAM on Z80's back in 1981.
Because it was easier to design THAT hardware than to step up to
a 68K, for example. (as well as leveraging our existing codebase)
The "video game era" was built on hybridized 8b systems -- even though
you could buy 32b hardware, at the time. You would be surprised at
the ingenuity of many of those systems in offloading the processor
of costly (time consuming) operations to make the device appear more
powerful than it actually was.

>>> We'll see 64-bit take a greater proportion of the embedded systems that
>>> demand high throughput or processing power (network devices, hard cores
>>> in expensive FPGAs, etc.) where the extra cost in dollars, power,
>>> complexity, board design are not a problem. They will probably become
>>> more common in embedded Linux systems as the core itself is not usually
>>> the biggest part of the cost. And such systems are definitely on the
>>> increase.
>>>
>>> But for microcontrollers - which dominate embedded systems - there has
>>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>>
>> I disagree. The "cost" (barrier) that I see clients facing is the
>> added complexity of a 32b platform and how it often implies (or even
>> *requires*) a more formal OS underpinning the application.
>
> Yes, that is definitely a cost in some cases - 32-bit microcontrollers
> are usually noticeably more complicated than 8-bit ones. How
> significant the cost is depends on the balances of the project between
> development costs and production costs, and how beneficial the extra
> functionality can be (like moving from bare metal to RTOS, or supporting
> networking).

I see most 32b designs operating without the benefits that a VMM system
can apply (even if you discount demand paging). They just want to have
a big address space and not have to dick with "segment registers", etc.
They plow through the learning effort required to configure the device
to move the "extra capabilities" out of the way. Then, just treat it
like a bigger 8/16 processor.

You can "bolt on" a simple network stack even with a rudimentary RTOS/MTOS.
Likewise, a web server. Now, you remove the need for graphics and other UI
activities hosted *in* the device. And, you likely don't need to support
multiple concurrent clients. If you want to provide those capabilities, do
that *outside* the device (let it be someone else's problem). And, you gain
"remote access" for free.

Few such devices *need* (or even WANT!) ARP caches, inetd, high performance
stack, file systems, etc.

Given the obvious (coming) push for enhanced security in devices, anything
running on your box that you don't need (or UNDERSTAND!) is likely going to
be pruned off as a way to reduce the attack surface. "Why is this port open?
What is this process doing? How robust is the XXX subsystem implementation
to hostile actors in an *unsupervised* setting?"

>>> cost. There is almost nothing to gain from a move to 64-bit, but the
>>> cost would be a good deal higher.
>>
>> Why is the cost "a good deal higher"? Code/data footprints don't
>> uniformly "double" in size. The CPU doesn't slow down to handle
>> bigger data.
>
> Some parts of code and data /do/ double in size - but not uniformly, of
> course. But your chip is bigger, faster, requires more power, has wider
> buses, needs more advanced memories, has more balls on the package,
> requires finer pitched pcb layouts, etc.

And has been targeted to a market that is EXTREMELY power sensitive
(phones!).

It is increasingly common for manufacturing technologies to be moving away
from "casual development". The days of owning your own wave and doing
in-house manufacturing at a small startup are gone. If you want to
limit yourself to the kinds of products that you CAN (easily) assemble, you
will find yourself operating with a much poorer selection of components
available. I could fab a PCB in-house and build small runs of prototypes
using the wave and shake-and-bake facilities that we had on hand. Harder
to do so, nowadays.

This has always been the case. When thru-hole met SMT, folks had to
either retool to support SMT, or limit themselves to components that
were available in thru-hole packages. As the trend has always been
for MORE devices to move to newer packaging technologies, anyone
who spent any time thinking about it could read the writing on the wall!
(I bought my Leister in 1988? Now, I prefer begging favors from
colleagues to get my prototypes assembled!)

I suspect this is why we now see designs built on COTS "modules"
increasingly. Just like designs using wall warts (so they don't
have to do the testing on their own, internally designed supplies).
It's one of the reasons FOSH is hampered (unlike FOSS, you can't roll
your own copy of a hardware design!)

> In theory, you /could/ make a microcontroller in a 64-pin LQFP and
> replace the 72 MHz Cortex-M4 with a 64-bit ARM core at the same clock
> speed. The die would only cost two or three times more, and take
> perhaps less than 10 times the power for the core. But it would be so
> utterly pointless that no manufacturer would make such a device.

This is specious reasoning: "You could take the die out of a 68K and
replace it with a 64 bit ARM." Would THAT core cost two or three times more
(do you recall how BIG 68K die were?) and consume 10 times the power?
(it would consume considerably LESS).


Click here to read the complete article
Re: 64-bit embedded computing is here and now

<87czsvesvs.fsf@nightsong.com>

https://www.novabbs.com/devel/article-flat.php?id=506&group=comp.arch.embedded#506

Newsgroups: comp.arch.embedded
From: no.em...@nospam.invalid (Paul Rubin)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 08 Jun 2021 18:20:07 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
 by: Paul Rubin - Wed, 9 Jun 2021 01:20 UTC

James Brakefield <jim.brakefield@ieee.org> writes:
> Am trying to puzzle out what a 64-bit embedded processor should look like.

Buy yourself a Raspberry Pi 4 and set it up to run your fish tank via a
remote web browser. There's your 64 bit embedded system.

Re: 64-bit embedded computing is here and now

<s9p5ie$fsu$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=507&group=comp.arch.embedded#507

Newsgroups: comp.arch.embedded
From: blockedo...@foo.invalid (Don Y)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 18:29:24 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
<s9opbi$gtu$1@dont-email.me>
 by: Don Y - Wed, 9 Jun 2021 01:29 UTC

On 6/8/2021 3:01 PM, Dimiter_Popoff wrote:

>> Am trying to puzzle out what a 64-bit embedded processor should look like.
>> At the low end, yeah, a simple RISC processor. And support for complex
>> arithmetic
>> using 32-bit floats? And support for pixel alpha blending using quad 16-bit
>> numbers?
>> 32-bit pointers into the software?
>
> The real value in 64 bit integer registers and 64 bit address space is
> just that, having an orthogonal "endless" space (well I remember some
> 30 years ago 32 bits seemed sort of "endless" to me...).
>
> Not needing to assign overlapping logical addresses to anything
> can make a big difference to how the OS is done.

That depends on what you expect from the OS. If you are
comfortable with the possibility of bugs propagating between
different subsystems, then you can live with a logical address
space that exactly coincides with a physical address space.

But, consider how life was before Windows used compartmentalized
applications (and OS). How easy it was for one "application"
(or subsystem) to cause a reboot -- unceremoniously.

The general direction (in software development, and, by
association, hardware) seems to be to move away from unrestrained
access to the underlying hardware in an attempt to limit the
amount of damage that a "misbehaving" application can cause.

You see this in languages designed to eliminate dereferencing
pointers, pointer arithmetic, etc. Languages that claim to
ensure your code can't misbehave because it can only do
exactly what the language allows (no more injecting ASM
into your HLL code).

I think that because you are the sole developer in your
application, you see a distorted vision of what the rest
of the development world encounters. Imagine handing your
codebase to a third party. And, *then* having to come
back to it and fix the things that "got broken".

Or, in my case, allowing a developer to install software
that I have to "tolerate" (for some definition of "tolerate")
without impacting the software that I've already got running.
(i.e., its ok to kill off his application if it is broken; but
he can't cause *my* portion of the system to misbehave!)

> 32 bit FPU seems useless to me, 64 bit is OK. Although 32 FP
> *numbers* can be quite useful for storing/passing data.

32 bit numbers have appeal if your registers are 32b;
they "fit nicely". Ditto 64b in 64b registers.

Re: 64-bit embedded computing is here and now

<s9p5p7$a5g$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=508&group=comp.arch.embedded#508

Newsgroups: comp.arch.embedded
From: blockedo...@foo.invalid (Don Y)
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 18:33:03 -0700
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<Iyz*Vc+ly@news.chiark.greenend.org.uk> <s9ojag$4ce$1@dont-email.me>
<s9okhu$e58$1@dont-email.me>
 by: Don Y - Wed, 9 Jun 2021 01:33 UTC

On 6/8/2021 1:39 PM, Dimiter_Popoff wrote:

> Not long ago in a chat with a guy who knew some of ARM 64 bit I gathered
> there is some real mess with their out of order execution, one needs to
> do... hmmmm.. "sync", whatever they call it, all the time and there is
> a huge performance cost because of that. Anybody heard anything about
> it? (I only know what I was told).

Many processors support instruction reordering (and many compilers
will reorder the code they generate). In each case, the reordering
is supposed to preserve semantics.

If the code "just runs" (and is never interrupted nor synchronized
with something else), the result should be the same.

If you want to be able to arbitrarily interrupt an instruction
sequence, then you need to take special measures. This is why
we have barriers, the ability to flush caches, etc.
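
(A minimal C11 sketch of the sort of thing I mean -- the names are
invented; an ISR handing data to the main loop wants a release/acquire
pair, or the reordering can bite you:)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t    sample;             /* payload produced by the ISR  */
static atomic_bool sample_ready;       /* publication flag             */

void adc_isr(uint32_t value)           /* producer (interrupt context) */
{
    sample = value;
    /* release: the write to 'sample' is visible before the flag is */
    atomic_store_explicit(&sample_ready, true, memory_order_release);
}

bool poll_sample(uint32_t *out)        /* consumer (main loop)         */
{
    if (!atomic_load_explicit(&sample_ready, memory_order_acquire))
        return false;
    *out = sample;                     /* acquire pairs with the release */
    return true;
}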

For "generic" code, the developer isn't involved with any of this.
Inside the kernel (or device drivers), it's often a different
story...

Re: 64-bit embedded computing is here and now

<doc0cgdq9hclq231h0fpiks5nka58iee16@4ax.com>

https://www.novabbs.com/devel/article-flat.php?id=509&group=comp.arch.embedded#509

Newsgroups: comp.arch.embedded
From: gneun...@comcast.net (George Neuner)
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 09 Jun 2021 00:16:35 -0400
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com> <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me> <s9n6rb$t19$1@dont-email.me> <5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com> <s9oit7$1mi$1@dont-email.me>
 by: George Neuner - Wed, 9 Jun 2021 04:16 UTC

On Tue, 8 Jun 2021 22:11:18 +0200, David Brown
<david.brown@hesbynett.no> wrote:

>Pretty much all processors except x86 and brain-dead old-fashioned 8-bit
>CISC devices are RISC...

It certainly is correct to say of the x86 that its legacy, programmer
visible, instruction set is CISC ... but it is no longer correct to
say that the chip design is CISC.

Since (at least) the Pentium 4, x86 really is a CISC decoder bolted
onto the front of what essentially is a load/store RISC.

"Complex" x86 instructions (in RAM and/or $I cache) are dynamically
translated into equivalent short sequences[*] of RISC-like wide format
instructions which are what actually is executed. Those sequences
also are stored into a special trace cache in case they will be used
again soon - e.g., in a loop - so they (hopefully) will not have to be
translated again.

[*] Actually, a great many x86 instructions map 1:1 to internal RISC
instructions - only a small percentage of complex x86 instructions
require "emulation" via a sequence of RISC instructions.

>... Not all [RISC] are simple.

Correct. Every successful RISC CPU has supported a suite of complex
instructions.

Of course, YMMV.
George

Re: 64-bit embedded computing is here and now

<s9ppv5$vn3$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=510&group=comp.arch.embedded#510

Newsgroups: comp.arch.embedded
From: david.br...@hesbynett.no (David Brown)
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 09:17:57 +0200
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
<s9p24n$tp2$1@dont-email.me>
 by: David Brown - Wed, 9 Jun 2021 07:17 UTC

On 09/06/2021 02:30, Don Y wrote:
> On 6/8/2021 4:04 AM, David Brown wrote:
>> On 08/06/2021 09:39, Don Y wrote:
>>> On 6/7/2021 10:59 PM, David Brown wrote:
>>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>>> the embedded world (and 4-bit devices are not gone yet).  At the other
>>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>>> embedded systems.
>>>
>>> I contend that a good many "32b" implementations are really glorified
>>> 8/16b applications that exhausted their memory space.
>>
>> Sure.  Previously you might have used 32 kB flash on an 8-bit device,
>> now you can use 64 kB flash on a 32-bit device.  The point is, you are
>> /not/ going to find yourself hitting GB limits any time soon.  The step
>
> I don't see the "problem" with 32b devices as one of address space limits
> (except devices utilizing VMM with insanely large page sizes).  As I said,
> in my application, task address spaces are really just a handful of pages.
>

32 bit address space is not typically a problem or limitation.

(One other use of 64-bit address space is for debug tools like valgrind
or "sanitizers" that use large address spaces along with MMU protection
and specialised memory allocation to help catch memory errors. But
these also need sophisticated MMU's and a lot of other resources not
often found on small embedded systems.)

> I *do* see (flat) address spaces that find themselves filling up with
> stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
> *partial* address decoding for offboard I/Os, etc.  (i.e., you're
> not likely going to fully decode a single address to access a set
> of DIP switches as the decode logic is disproportionately high
> relative to the functionality it adds)
>
> How often do you see a high-order address line used for kernel/user?
> (gee, now your "user" space has been halved)

Unless you are talking about embedded Linux and particularly demanding
(or inefficient!) tasks, halving your address space is not going to be a
problem.

>
>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>> of embedded systems.  (Even for most embedded Linux systems, you usually
>> only have a 64-bit cpu because you want bigger and faster, not because
>> of memory limitations.  It is only when you have a big gui with fast
>> graphics that 32-bit address space becomes a limitation.)
>
> You're assuming there has to be some "capacity" value to the 64b move.
>

I'm trying to establish if there is any value at all in moving to
64-bit. And I have no doubt that for the /great/ majority of embedded
systems, it would not.

I don't even see it as having noticeable added value in the solid
majority of embedded Linux systems produced. But in those systems, the
cost is minor or irrelevant once you have a big enough processor.

> You might discover that the ultralow power devices (for phones!)
> are being offered in the process geometries targeted for the 64b
> devices.

Process geometries are not targeted at 64-bit. They are targeted at
smaller, faster and lower dynamic power. In order to produce such a big
design as a 64-bit cpu, you'll aim for a minimum level of process
sophistication - but that same process can be used for twice as many
32-bit cores, or bigger sram, or graphics accelerators, or whatever else
suits the needs of the device.

A major reason you see 64-bit cores in big SOC's is that the die space
is primarily taken up by caches, graphics units, on-board ram,
networking, interfaces, and everything else. Moving the cpu core from
32-bit to 64-bit only increases the die size by a few percent, and for
some tasks it will also increase the performance of the code by a
small but helpful amount. So it is not uncommon, even if you don't need
the additional address space.

(The other major reason is that for some systems, you want to work with
more than about 2 GB ram, and then life is much easier with 64-bit cores.)

On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
a 64-bit core will increase the die by maybe 30% and give roughly /zero/
performance increase. You don't use 64-bit unless you really need it.

>  Or, that some integrated peripheral "makes sense" for
> phones (but not MCUs targeting motor control applications).  Or,
> that there are additional power management strategies supported
> in the hardware.
>
> In my mind, the distinction brought about by "32b" was more advanced
> memory protection/management -- even if not used in a particular
> application.  You simply didn't see these sorts of mechanisms
> in 8/16b offerings.  Likewise, floating point accelerators.  Working
> in smaller processors meant you had to spend extra effort to
> bullet-proof your code, economize on math operators, etc.

You need to write correct code regardless of the size of the device. I
disagree entirely about memory protection being useful there. This is
comp.arch.embedded, not comp.programs.windows (or whatever). An MPU
might make it easier to catch and fix bugs while developing and testing,
but code that hits MPU traps should not leave your workbench.

But you are absolutely right about maths (floating point or integer) -
having 32-bit gives you a lot more freedom and less messing around with
scaling back and forth to make things fit and work efficiently in 8-bit
or 16-bit. And if you have floating point hardware (and know how to use
it properly), that opens up new possibilities.

64-bit cores will extend that, but the step is almost negligible in
comparison. It would be wrong to say "int32_t is enough for anyone",
but it is /almost/ true. It is certainly true enough that it is not a
problem that using "int64_t" takes two instructions instead of one.
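
For instance (purely illustrative -- the function and its numbers are
invented, not taken from a real project), a 64-bit accumulation on a
32-bit Cortex-M is just an add/add-with-carry pair:

#include <stdint.h>

/* Sum of squared 16-bit ADC samples.  The products fit comfortably
   in 64 bits with no manual scaling; on a Cortex-M3/M4 the 64-bit
   add is roughly two instructions (ADDS + ADC), not a library call. */
int64_t signal_energy(const int16_t *s, int n)
{
    int64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)s[i] * s[i];
    return acc;
}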

>> Some parts of code and data /do/ double in size - but not uniformly, of
>> course.  But your chip is bigger, faster, requires more power, has wider
>> buses, needs more advanced memories, has more balls on the package,
>> requires finer pitched pcb layouts, etc.
>
> And has been targeted to a market that is EXTREMELY power sensitive
> (phones!).

A phone cpu takes orders of magnitude more power to do the kinds of
tasks that might be typical for a microcontroller cpu - reading sensors,
controlling outputs, handling UARTs, SPI and I²C buses, etc. Phone cpus
are optimised for doing the "big phone stuff" efficiently - because
that's what takes the time, and therefore the power.

(I'm snipping because there is far too much here - I have read your
comments, but I'm trying to limit the ones I reply to.)

>>
>> We will see that on devices that are, roughly speaking, tablets -
>> embedded systems with a good gui, a touchscreen, networking.  And that's
>> fine.  But these are a tiny proportion of the embedded devices made.
>
> Again, I disagree.

I assume you are disagreeing about seeing 64-bit cpus only on devices
that need a lot of memory or processing power, rather than disagreeing
that such devices are only a tiny proportion of embedded devices.

> You've already admitted to using 32b processors
> where 8b could suffice.  What makes you think you won't be using 64b
> processors when 32b could suffice?

As I have said, I think there will be an increase in the proportion of
64-bit embedded devices - but I think it will be very slow and gradual.
Perhaps in 20 years' time 64-bit will be in the place that 32-bit is
now. But it won't happen for a long time.

Why do I use 32-bit microcontrollers where an 8-bit one could do the
job? Well, we mentioned above that you can be freer with the maths.
You can, in general, be freer in the code - and you can use better tools
and languages. With ARM microcontrollers I can use the latest gcc and
C++ standards - I don't have to program in a weird almost-C dialect
using extensions to get data in flash, or pay thousands for a limited
C++ compiler with last century's standards. I don't have to try and
squeeze things into 8-bit scaled integers, or limit my use of pointers
due to cpu limitations.
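
As a small illustration of the "dialect" point (two different targets
sketched side by side -- the string and accessors are just an example):

/* avr-gcc: constant data must be pushed into flash with an attribute
   and read back through helper macros from <avr/pgmspace.h>. */
#include <avr/pgmspace.h>
const char msg_avr[] PROGMEM = "hello";
char get_avr(unsigned i) { return pgm_read_byte(&msg_avr[i]); }

/* 32-bit ARM (or any flat-address-space MCU): plain standard C. */
const char msg_arm[] = "hello";
char get_arm(unsigned i) { return msg_arm[i]; }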

And manufacturers make the devices smaller, cheaper, lower power and
faster than 8-bit devices in many cases.

If manufacturers made 64-bit devices that are smaller, cheaper and lower
power than the 32-bit ones today, I'd use them. But they would not be
better for the job, or better to work with and better for development in
the way 32-bit devices are better than 8-bit and 16-bit.

>
> It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
> The boards are essentially the same size.  "System" power consumption
> is almost identical.  Cost is the sole differentiating factor, today.


Click here to read the complete article
Re: 64-bit embedded computing is here and now

<s9puq6$oce$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=511&group=comp.arch.embedded#511

Newsgroups: comp.arch.embedded
From: david.br...@hesbynett.no (David Brown)
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 10:40:37 +0200
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me>
<5e77ea0a-ef41-4f72-a538-4edc9bfff075n@googlegroups.com>
<s9oit7$1mi$1@dont-email.me> <doc0cgdq9hclq231h0fpiks5nka58iee16@4ax.com>
 by: David Brown - Wed, 9 Jun 2021 08:40 UTC

On 09/06/2021 06:16, George Neuner wrote:
> On Tue, 8 Jun 2021 22:11:18 +0200, David Brown
> <david.brown@hesbynett.no> wrote:
>
>
>> Pretty much all processors except x86 and brain-dead old-fashioned 8-bit
>> CISC devices are RISC...
>
> It certainly is correct to say of the x86 that its legacy, programmer
> visible, instruction set is CISC ... but it is no longer correct to
> say that the chip design is CISC.
>
> Since (at least) the Pentium 4 x86 really are a CISC decoder bolted
> onto the front of what essentially is a load/store RISC.
>

Absolutely. But from the user viewpoint, it is the ISA that matters -
it is a CISC ISA. The implementation details are mostly hidden (though
sometimes it is useful to know about timings).

> "Complex" x86 instructions (in RAM and/or $I cache) are dynamically
> translated into equivalent short sequences[*] of RISC-like wide format
> instructions which are what actually is executed. Those sequences
> also are stored into a special trace cache in case they will be used
> again soon - e.g., in a loop - so they (hopefully) will not have to be
> translated again.
>
>
> [*] Actually, a great many x86 instructions map 1:1 to internal RISC
> instructions - only a small percentage of complex x86 instructions
> require "emulation" via a sequence of RISC instructions.
>

And also, some sequences of several x86 instructions map to single RISC
instructions, or to no instructions at all.

It is, of course, a horrendously complex mess - and is a major reason
for x86 cores taking more power and costing more than RISC cores for the
same performance.

>
>> ... Not all [RISC] are simple.
>
> Correct. Every successful RISC CPU has supported a suite of complex
> instructions.
>

Yes. People often parse RISC as R(IS)C - i.e., they think it means the
ISA has a small instruction set. It should be parsed (RI)SC - the
instructions are limited compared to those on a (CI)SC cpu.

>
> Of course, YMMV.
> George
>

Re: 64-bit embedded computing is here and now

<s9pvth$b1c$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=512&group=comp.arch.embedded#512

Newsgroups: comp.arch.embedded
From: david.br...@hesbynett.no (David Brown)
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 10:59:29 +0200
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<Iyz*Vc+ly@news.chiark.greenend.org.uk> <s9ojag$4ce$1@dont-email.me>
<s9okhu$e58$1@dont-email.me>
 by: David Brown - Wed, 9 Jun 2021 08:59 UTC

On 08/06/2021 22:39, Dimiter_Popoff wrote:
> On 6/8/2021 23:18, David Brown wrote:
>> On 08/06/2021 16:46, Theo wrote:
>>> ......
>>
>>> Memory bus/cache width
>>
>> No, that is not a common way to measure cpu "width", for many reasons.
>> A chip is likely to have many buses outside the cpu core itself (and the
>> cache(s) may or may not be considered part of the core).  It's common to
>> have 64-bit wide buses on 32-bit processors, it's also common to have
>> 16-bit external databuses on a microcontroller.  And the cache might be
>> 128 bits wide.
>
> I agree with your points and those of Theo, but the cache is basically
> as wide as the registers? Logically, that is; a cacheline is several
> times that, probably you refer to that.
> Not that it makes much of a difference to the fact that 64 bit data
> buses/registers in an MCU (apart from FPU registers, 32 bit FPUs are
> useless to me) are unlikely to attract much interest, nothing of
> significance to be gained as you said.
> To me 64 bit CPUs are of interest of course and thankfully there are
> some available, but this goes somewhat past what we call  "embedded".
> Not long ago in a chat with a guy who knew some of ARM 64 bit I gathered
> there is some real mess with their out of order execution, one needs to
> do... hmmmm.. "sync", whatever they call it, all the time and there is
> a huge performance cost because of that. Anybody heard anything about
> it? (I only know what I was told).
>

sync instructions of various types can be needed to handle
thread/process synchronisation, atomic accesses, and coordination
between software and hardware registers. Software normally runs with
the idea that it is the only thing running, and the cpu can re-order and
re-arrange the instructions and execution as long as it maintains the
illusion that the assembly instructions in the current thread are
executed one after the other. These re-arrangements and parallel
execution can give very large performance benefits.

But it also means that when you need to coordinate with other things,
you need syncs, perhaps cache flushes, etc. Full syncs can take
hundreds of cycles to execute on large processors. So you need to
distinguish between reads and writes, acquires and releases, syncs on
single addresses or general memory syncs. Big processors are optimised
for throughput, not latency or quick reaction to hardware events.
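
To make that distinction concrete, here is a minimal C11 sketch (an
editorial illustration, not from the thread), assuming a simple
producer/consumer flag; the AArch64 instructions named in the comments
are the typical mappings, not a guarantee for any particular compiler:

#include <stdatomic.h>
#include <stdint.h>

static uint32_t payload;
static atomic_bool ready;

void producer(void)
{
    payload = 42;                         /* plain store */
    /* release store: orders the payload write before the flag write;
       typically a single STLR on AArch64, no full barrier needed     */
    atomic_store_explicit(&ready, true, memory_order_release);
}

void consumer(void)
{
    /* acquire load: typically a single LDAR on AArch64 */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                 /* spin until the flag is set */
    (void)payload;                        /* payload is now visible here */
    /* by contrast, atomic_thread_fence(memory_order_seq_cst) is the
       "full sync" (DMB SY class) that can cost hundreds of cycles    */
}

The acquire/release pair only orders traffic around that one flag; a
full fence orders everything, which is the expensive case described
above.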

There are good reasons why big cpus are often paired with a Cortex-M
core in SOCs.

Re: 64-bit embedded computing is here and now

<s9q46j$qbj$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=513&group=comp.arch.embedded#513

From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 03:12:12 -0700
Organization: A noiseless patient Spider
Lines: 387
Message-ID: <s9q46j$qbj$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
<s9p24n$tp2$1@dont-email.me> <s9ppv5$vn3$1@dont-email.me>
In-Reply-To: <s9ppv5$vn3$1@dont-email.me>
Content-Language: en-US
 by: Don Y - Wed, 9 Jun 2021 10:12 UTC

On 6/9/2021 12:17 AM, David Brown wrote:

>>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>>> of embedded systems. (Even for most embedded Linux systems, you usually
>>> only have a 64-bit cpu because you want bigger and faster, not because
>>> of memory limitations. It is only when you have a big gui with fast
>>> graphics that 32-bit address space becomes a limitation.)
>>
>> You're assuming there has to be some "capacity" value to the 64b move.
>
> I'm trying to establish if there is any value at all in moving to
> 64-bit. And I have no doubt that for the /great/ majority of embedded
> systems, it would not.

That's a no-brainer -- most embedded systems are small MCUs.
Consider that the PC I'm sitting at has an MCU in the keyboard;
another in the mouse; one in the optical disk drive; one in
the rust disk drive; one in the printer; two in the UPS;
one in the wireless "modem"; one in the router; one in
the thumb drive; etc. All offsetting the "big" CPU in
the computer itself.

> I don't even see it as having noticeable added value in the solid
> majority of embedded Linux systems produced. But in those systems, the
> cost is minor or irrelevant once you have a big enough processor.

My point is that the market can distort the "price/value"
relationship in ways that might not, otherwise, make sense.
A "better" device may end up costing less than a "worse"
device -- simply because of the volumes that the population
of customers favor.

>> You might discover that the ultralow power devices (for phones!)
>> are being offered in the process geometries targeted for the 64b
>> devices.
>
> Process geometries are not targeted at 64-bit. They are targeted at
> smaller, faster and lower dynamic power. In order to produce such a big
> design as a 64-bit cpu, you'll aim for a minimum level of process
> sophistication - but that same process can be used for twice as many
> 32-bit cores, or bigger sram, or graphics accelerators, or whatever else
> suits the needs of the device.

They will apply newer process geometries to newer devices.
No one is going to retool an existing design -- unless doing
so will result in a significant market enhancement.

Why don't we have 100MHz MC6800's?

> A major reason you see 64-bit cores in big SOC's is that the die space
> is primarily taken up by caches, graphics units, on-board ram,
> networking, interfaces, and everything else. Moving the cpu core from
> 32-bit to 64-bit only increases the die size by a few percent, and for
> some tasks it will also increase the performance of the code by a
> small but helpful amount. So it is not uncommon, even if you don't need
> the additional address space.
>
> (The other major reason is that for some systems, you want to work with
> more than about 2 GB ram, and then life is much easier with 64-bit cores.)
>
> On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
> a 64-bit core will increase the die by maybe 30% and give roughly /zero/
> performance increase. You don't use 64-bit unless you really need it.

Again, "... unless the market has made those devices cheaper than
their previous choices" People don't necessarily "fit" their
applications to the devices they choose; they consider other
factors (cost, package type, availability, etc.) in deciding
what to actual design into the product.

You might "need" X MB of RAM but will "tolerate" 4X -- if the
price is better than for the X MB *or* the X MB devices are
not available. If the PCB layout can directly accommodate
such a solution, then great! But, even if not, a PCB
revision is a cheap expenditure if it lets you take advantage of
a different component.

I've made very deliberate efforts NOT to use many of the
"I/Os" on the MCUs that I'm designing around, so I can
have more leeway in making that selection when the design is
released to production (every capability used represents a
constraint that OTHER selections must satisfy).

>> Or, that some integrated peripheral "makes sense" for
>> phones (but not MCUs targeting motor control applications). Or,
>> that there are additional power management strategies supported
>> in the hardware.
>>
>> In my mind, the distinction brought about by "32b" was more advanced
>> memory protection/management -- even if not used in a particular
>> application. You simply didn't see these sorts of mechanisms
>> in 8/16b offerings. Likewise, floating point accelerators. Working
>> in smaller processors meant you had to spend extra effort to
>> bullet-proof your code, economize on math operators, etc.
>
> You need to write correct code regardless of the size of the device. I
> disagree entirely about memory protection being useful there. This is
> comp.arch.embedded, not comp.programs.windows (or whatever). An MPU
> might make it easier to catch and fix bugs while developing and testing,
> but code that hits MPU traps should not leave your workbench.

You're assuming you (or I) have control over all of the code that
executes on a product/platform. And, that every potential bug
manifests *in* testing. (If that were the case, we'd never
see bugs in the wild!)

In my case, "third parties" (who the hell is the SECOND party??)
can install code that I've no control over. That code could
be buggy -- or malevolent. Being able to isolate "actors"
from each other means the OS can detect "can't happens"
at run time and shut down the offender -- instead of letting
it corrupt some part of the system.
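
A hedged sketch of that kind of run-time containment, assuming a
Cortex-M-class MPU; task_from_context(), log_fault() and task_kill()
are hypothetical OS hooks standing in for whatever the real OS provides:

#include <stdint.h>

typedef struct task task_t;

/* Hypothetical OS hooks - placeholders, not any particular RTOS's API */
task_t  *task_from_context(void);
void     log_fault(task_t *t, uint32_t addr);
void     task_kill(task_t *t);

/* Cortex-M MemManage fault vector: an MPU violation by one "actor"
   lands here, and the offender is shut down instead of being allowed
   to corrupt the rest of the system.                                  */
void MemManage_Handler(void)
{
    /* SCB->MMFAR holds the faulting address (when flagged valid in CFSR) */
    uint32_t mmfar    = *(volatile uint32_t *)0xE000ED34u;
    task_t  *offender = task_from_context();

    log_fault(offender, mmfar);
    task_kill(offender);   /* on return, the scheduler runs something else */
}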

> But you are absolutely right about maths (floating point or integer) -
> having 32-bit gives you a lot more freedom and less messing around with
> scaling back and forth to make things fit and work efficiently in 8-bit
> or 16-bit. And if you have floating point hardware (and know how to use
> it properly), that opens up new possibilities.
>
> 64-bit cores will extend that, but the step is almost negligible in
> comparison. It would be wrong to say "int32_t is enough for anyone",
> but it is /almost/ true. It is certainly true enough that it is not a
> problem that using "int64_t" takes two instructions instead of one.

Except that an int64_t operation can take *four* instructions instead of
one (add/sub/mul of two int64_t's on 32b hardware).
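
For a feel of the expansion being discussed (an editorial illustration,
not from the post): on a 32-bit core a compiler typically lowers 64-bit
arithmetic roughly as the comments below suggest - the exact sequences
vary by compiler and target:

#include <stdint.h>

/* Typically an ADDS/ADC pair on a 32-bit ARM core: low words added
   with carry out, high words added with the carry in.               */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}

/* A 64x64 -> 64 multiply typically needs one UMULL plus two MLAs
   (three 32-bit multiplies and the adds that combine them).      */
uint64_t mul64(uint64_t a, uint64_t b)
{
    return a * b;
}

On a 64-bit core each of these is a single instruction.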

>>> Some parts of code and data /do/ double in size - but not uniformly, of
>>> course. But your chip is bigger, faster, requires more power, has wider
>>> buses, needs more advanced memories, has more balls on the package,
>>> requires finer pitched pcb layouts, etc.
>>
>> And has been targeted to a market that is EXTREMELY power sensitive
>> (phones!).
>
> A phone cpu takes orders of magnitude more power to do the kinds of
> tasks that might be typical for a microcontroller cpu - reading sensors,
> controlling outputs, handling UARTs, SPI and I²C buses, etc. Phone cpus
> are optimised for doing the "big phone stuff" efficiently - because
> that's what takes the time, and therefore the power.

But you're making assumptions about what the "embedded microcontroller"
will actually be called upon to do!

Most of my embedded devices have "done more" than the PCs on which
they were designed -- despite the fact that the PC can defrost bagels!

> (I'm snipping because there is far too much here - I have read your
> comments, but I'm trying to limit the ones I reply to.)
>
>>>
>>> We will see that on devices that are, roughly speaking, tablets -
>>> embedded systems with a good gui, a touchscreen, networking. And that's
>>> fine. But these are a tiny proportion of the embedded devices made.
>>
>> Again, I disagree.
>
> I assume you are disagreeing about seeing 64-bit cpus only on devices
> that need a lot of memory or processing power, rather than disagreeing
> that such devices are only a tiny proportion of embedded devices.

I'm disagreeing with the assumption that 64bit CPUs are solely used
on "tablets, devices with good GUIs, touchscreens, networking"
(in the embedded domain).

>> You've already admitted to using 32b processors
>> where 8b could suffice. What makes you think you won't be using 64b
>> processors when 32b could suffice?
>
> As I have said, I think there will be an increase in the proportion of
> 64-bit embedded devices - but I think it will be very slow and gradual.
> Perhaps in 20 years time 64-bit will be in the place that 32-bit is
> now. But it won't happen for a long time.


Re: 64-bit embedded computing is here and now

<vXB*SVcmy@news.chiark.greenend.org.uk>

https://www.novabbs.com/devel/article-flat.php?id=514&group=comp.arch.embedded#514

From: theom+n...@chiark.greenend.org.uk (Theo)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: 09 Jun 2021 13:10:25 +0100 (BST)
Organization: University of Cambridge, England
Lines: 61
Message-ID: <vXB*SVcmy@news.chiark.greenend.org.uk>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com> <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me> <Iyz*Vc+ly@news.chiark.greenend.org.uk> <s9osrt$5fa$1@dont-email.me>
 by: Theo - Wed, 9 Jun 2021 12:10 UTC

Don Y <blockedofcourse@foo.invalid> wrote:
> On 6/8/2021 7:46 AM, Theo wrote:
> > I think there will be divergence about what people mean by an N-bit system:
> >
> > Register size
> > Unit of logical/arithmetical processing
> > Memory address/pointer size
> > Memory bus/cache width
>
> (General) Register size is the primary driver.

Is it, though? What's driving that?
Why do you want larger registers without a larger ALU width?

I don't think register size is of itself a primary pressure. On larger CPUs
with lots of rename or vector registers, they have kilobytes of SRAM to hold
the registers, and increasing the size is a cost. On a basic in-order MCU
with 16 or 32 registers, is the register width an issue? We aren't
designing them on 10 micron technology any more.

I would expect datapath width to be more critical, but again that's
relatively small on an in-order CPU, especially compared with on-chip SRAM.

> However, it supports 16b operations -- on register PAIRs
> (an implicit acknowledgement that the REGISTER is smaller
> than the register pair). This is common on many smaller
> processors. The address space is 16b -- with a separate 16b
> address space for I/Os. The Z180 extends the PHYSICAL
> address space to 20b but the logical address space
> remains unchanged at 16b (if you want to specify a physical
> address, you must use 20+ bits to represent it -- and invoke
> a separate mechanism to access it!). The ALU is *4* bits.
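
As an editorial aside on the register-pair point above: even a plain
16-bit addition in C is carried out pair-wide on such a part - roughly
as sketched in the comments (the exact code depends on the compiler and
calling convention):

#include <stdint.h>

/* On a Z80-class CPU this typically becomes pair-wide operations,
   something like:
       LD  HL, (a)     ; 16-bit value in the H/L register pair
       LD  DE, (b)
       ADD HL, DE      ; one pair-wide add
   even though the registers themselves are 8 bits wide.           */
uint16_t add16(uint16_t a, uint16_t b)
{
    return a + b;
}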

This is not really the world of a current 32-bit MCU, which has a 32 bit
datapath and 32 bit registers. Maybe it does 64 bit arithmetic in 32 bit
chunks, which then leads to the question of which MCU workloads require 64
bit arithmetic?

> But you don't buy MCUs with a-la-carte pricing. How much does an extra
> timer cost me? What if I want it to also serve as a *counter*? What
> cost for 100K of internal ROM? 200K?
>
> [It would be an interesting exercise to try to do a linear analysis of
> product prices with an idea of trying to tease out the "costs" (to
> the developer) for each feature in EXISTING products!]
>
> Instead, you see a *price* that is reflective of how widely used the
> device happens to be, today. You are reliant on the preferences of others
> to determine which is the most cost effective product -- for *you*.

Sure, what you buy is a 'highest common denominator' - you get things you
don't use, but that other people do. But it still depends on a significant
chunk of the market demanding those features. It's then a cost function of
how much the market wants a feature against how much it'll cost to implement
(and at runtime). If the cost is tiny, it may well get implemented even if
almost nobody asked for it.

If there's a use case, people will pay for it.
(although maybe not enough)

Theo
