Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: david.br...@hesbynett.no (David Brown)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 09:17:57 +0200
Organization: A noiseless patient Spider
Lines: 212
Message-ID: <s9ppv5$vn3$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
<s9p24n$tp2$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 9 Jun 2021 07:17:57 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="f24e378582fc45cdc1822be00fd58621";
logging-data="32483"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/kS4q7iL1V3n9R9tVMChkPVDpSJrycSRw="
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.10.0
Cancel-Lock: sha1:osFZRnIoo7N8iN0GeIfxK5mZsE0=
In-Reply-To: <s9p24n$tp2$1@dont-email.me>
Content-Language: en-GB

On 09/06/2021 02:30, Don Y wrote:
> On 6/8/2021 4:04 AM, David Brown wrote:
>> On 08/06/2021 09:39, Don Y wrote:
>>> On 6/7/2021 10:59 PM, David Brown wrote:
>>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>>> the embedded world (and 4-bit devices are not gone yet).  At the other
>>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>>> embedded systems.
>>>
>>> I contend that a good many "32b" implementations are really glorified
>>> 8/16b applications that exhausted their memory space.
>>
>> Sure.  Previously you might have used 32 kB flash on an 8-bit device,
>> now you can use 64 kB flash on a 32-bit device.  The point is, you are
>> /not/ going to find yourself hitting GB limits any time soon.  The step
>
> I don't see the "problem" with 32b devices as one of address space limits
> (except devices utilizing VMM with insanely large page sizes).  As I said,
> in my application, task address spaces are really just a handful of pages.
>

A 32-bit address space is not typically a problem or limitation.

(One other use of 64-bit address space is for debug tools like valgrind
or "sanitizers" that use large address spaces along with MMU protection
and specialised memory allocation to help catch memory errors. But
these also need sophisticated MMUs and a lot of other resources not
often found on small embedded systems.)
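
As a quick illustration of the sanitizer point (my own sketch, nothing
target-specific): the classic off-by-one below is caught at run time if
you build it on a hosted system with "gcc -g -fsanitize=address".

#include <stdlib.h>

int main(void)
{
    int *buf = malloc(4 * sizeof *buf);
    buf[4] = 42;   /* heap-buffer-overflow: writes one element past the end */
    free(buf);
    return 0;
}

ASan works by keeping a large shadow mapping of the address space to
track which bytes are valid - cheap when you have a 64-bit address
space, much less practical on a small 32-bit target.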

> I *do* see (flat) address spaces that find themselves filling up with
> stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
> *partial* address decoding for offboard I/Os, etc.  (i.e., you're
> not likely going to fully decode a single address to access a set
> of DIP switches as the decode logic is disproportionately high
> relative to the functionality it adds)
>
> How often do you see a high-order address line used for kernel/user?
> (gee, now your "user" space has been halved)

Unless you are talking about embedded Linux and particularly demanding
(or inefficient!) tasks, halving your address space is not going to be a
problem.

>
>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>> of embedded systems.  (Even for most embedded Linux systems, you usually
>> only have a 64-bit cpu because you want bigger and faster, not because
>> of memory limitations.  It is only when you have a big gui with fast
>> graphics that 32-bit address space becomes a limitation.)
>
> You're assuming there has to be some "capacity" value to the 64b move.
>

I'm trying to establish whether there is any value at all in moving to
64-bit. And I have no doubt that for the /great/ majority of embedded
systems, there is not.

I don't even see it as having noticeable added value in the solid
majority of embedded Linux systems produced. But in those systems, the
cost is minor or irrelevant once you have a big enough processor.

> You might discover that the ultralow power devices (for phones!)
> are being offered in the process geometries targeted for the 64b
> devices.

Process geometries are not targeted at 64-bit. They are targeted at
smaller, faster and lower dynamic power. In order to produce such a big
design as a 64-bit cpu, you'll aim for a minimum level of process
sophistication - but that same process can be used for twice as many
32-bit cores, or bigger sram, or graphics accelerators, or whatever else
suits the needs of the device.

A major reason you see 64-bit cores in big SOCs is that the die space
is primarily taken up by caches, graphics units, on-board ram,
networking, interfaces, and everything else. Moving the cpu core from
32-bit to 64-bit only increases the die size by a few percent, and for
some tasks it will also increase the performance of the code by a
small but helpful amount. So it is not uncommon, even if you don't need
the additional address space.

(The other major reason is that for some systems, you want to work with
more than about 2 GB ram, and then life is much easier with 64-bit cores.)

On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
a 64-bit core will increase the die by maybe 30% and give roughly /zero/
performance increase. You don't use 64-bit unless you really need it.

>  Or, that some integrated peripheral "makes sense" for
> phones (but not MCUs targeting motor control applications).  Or,
> that there are additional power management strategies supported
> in the hardware.
>
> In my mind, the distinction brought about by "32b" was more advanced
> memory protection/management -- even if not used in a particular
> application.  You simply didn't see these sorts of mechanisms
> in 8/16b offerings.  Likewise, floating point accelerators.  Working
> in smaller processors meant you had to spend extra effort to
> bullet-proof your code, economize on math operators, etc.

You need to write correct code regardless of the size of the device. I
disagree entirely about memory protection being useful there. This is
comp.arch.embedded, not comp.programs.windows (or whatever). An MPU
might make it easier to catch and fix bugs while developing and testing,
but code that hits MPU traps should not leave your workbench.
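
To be concrete about the "catch and fix bugs while developing" part,
here is a minimal sketch, assuming a Cortex-M4 with the usual CMSIS
headers (the device header name is just a placeholder):

#include "stm32f4xx.h"             /* assumption: any CMSIS device header */
#include <stdint.h>

volatile uint32_t mpu_fault_addr;  /* inspect this in the debugger */

void MemManage_Handler(void)
{
    if (SCB->CFSR & (1u << 7))     /* MMARVALID: MMFAR holds the address */
        mpu_fault_addr = SCB->MMFAR;
    __BKPT(0);                     /* halt so the offending access is seen */
    for (;;) { }
}

That sort of handler earns its keep on the workbench; by the time the
code ships, it should never fire.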

But you are absolutely right about maths (floating point or integer) -
having 32-bit gives you a lot more freedom and less messing around with
scaling back and forth to make things fit and work efficiently in 8-bit
or 16-bit. And if you have floating point hardware (and know how to use
it properly), that opens up new possibilities.

64-bit cores will extend that, but the step is almost negligible in
comparison. It would be wrong to say "int32_t is enough for anyone",
but it is /almost/ true. It is certainly true enough that it is not a
problem that using "int64_t" takes two instructions instead of one.
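
A small sketch of my own (the ranges and scale factors are made up)
shows what I mean.  The first function is the manual Q15 juggling you
end up doing on an 8/16-bit part, the second is the same job when
int32_t is the natural size, and the third shows that 64-bit addition
on a 32-bit Cortex-M costs roughly an ADDS/ADC pair rather than a
single instruction - hardly a disaster.

#include <stdint.h>

/* 8/16-bit style: Q15 fixed point - widen, multiply, shift back. */
int16_t scale_q15(int16_t sample, int16_t gain_q15)
{
    int32_t wide = (int32_t)sample * gain_q15;   /* widen before multiplying */
    return (int16_t)(wide >> 15);                /* renormalise to Q15 */
}

/* 32-bit style: plain arithmetic covers typical sensor ranges. */
int32_t scale_i32(int32_t sample, int32_t gain_milli)
{
    return (sample * gain_milli) / 1000;         /* gain in thousandths */
}

/* 64-bit arithmetic on a 32-bit core: two instructions instead of one. */
int64_t add64(int64_t a, int64_t b)
{
    return a + b;
}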

>> Some parts of code and data /do/ double in size - but not uniformly, of
>> course.  But your chip is bigger, faster, requires more power, has wider
>> buses, needs more advanced memories, has more balls on the package,
>> requires finer pitched pcb layouts, etc.
>
> And has been targeted to a market that is EXTREMELY power sensitive
> (phones!).

A phone cpu takes orders of magnitude more power to do the kinds of
tasks that might be typical for a microcontroller cpu - reading sensors,
controlling outputs, handling UARTs, SPI and I²C buses, etc. Phone cpus
are optimised for doing the "big phone stuff" efficiently - because
that's what takes the time, and therefore the power.

(I'm snipping because there is far too much here - I have read your
comments, but I'm trying to limit the ones I reply to.)

>>
>> We will see that on devices that are, roughly speaking, tablets -
>> embedded systems with a good gui, a touchscreen, networking.  And that's
>> fine.  But these are a tiny proportion of the embedded devices made.
>
> Again, I disagree.

I assume you are disagreeing about seeing 64-bit cpus only on devices
that need a lot of memory or processing power, rather than disagreeing
that such devices are only a tiny proportion of embedded devices.

> You've already admitted to using 32b processors
> where 8b could suffice.  What makes you think you won't be using 64b
> processors when 32b could suffice?

As I have said, I think there will be an increase in the proportion of
64-bit embedded devices - but I think it will be very slow and gradual.
Perhaps in 20 years' time, 64-bit will be in the place that 32-bit is
now. But it won't happen for a long time.

Why do I use 32-bit microcontrollers where an 8-bit one could do the
job? Well, we mentioned above that you can be freer with the maths.
You can, in general, be freer in the code - and you can use better tools
and languages. With ARM microcontrollers I can use the latest gcc and
C++ standards - I don't have to program in a weird almost-C dialect
using extensions to get data in flash, or pay thousands for a limited
C++ compiler with last century's standards. I don't have to try and
squeeze things into 8-bit scaled integers, or limit my use of pointers
due to cpu limitations.
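
For anyone who hasn't suffered it, this is the sort of "extensions to
get data in flash" I mean - a sketch of the same lookup table on the
two targets (two separate source files; the table contents are just an
example):

/* 8-bit AVR, avr-gcc + avr-libc: flash data needs PROGMEM and special
   accessor macros. */
#include <avr/pgmspace.h>
#include <stdint.h>

const uint8_t table[4] PROGMEM = { 1, 2, 3, 4 };

uint8_t read_entry(uint8_t i)
{
    return pgm_read_byte(&table[i]);   /* cannot simply dereference it */
}

/* ARM Cortex-M, ordinary C with any recent gcc: const data is placed in
   flash by the linker script, and plain array access just works. */
#include <stdint.h>

const uint8_t table[4] = { 1, 2, 3, 4 };

uint8_t read_entry(uint8_t i)
{
    return table[i];
}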

And in many cases, manufacturers make these 32-bit devices smaller,
cheaper, lower power and faster than 8-bit devices.

If manufacturers made 64-bit devices that were smaller, cheaper and
lower power than today's 32-bit ones, I'd use them. But they would not
be better for the job, or better to work with and develop for, in the
way that 32-bit devices are better than 8-bit and 16-bit ones.

>
> It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
> The boards are essentially the same size.  "System" power consumption
> is almost identical.  Cost is the sole differentiating factor, today.

For you, perhaps. Not necessarily for others.

We design, program and manufacture electronics. Production and testing
of simpler cards is cheaper. The pcbs are cheaper. The chips are
cheaper. The mounting is faster. The programming and testing is
faster. You don't mix big, thick tracks and high power on the same
board as tight-packed BGA with blind/buried vias - but you /can/ happily
work with less dense packages on the same board.

If you are talking about replacing one 400-ball SOC with another
400-ball SOC with a 64-bit core instead of a 32-bit core, then it will
make no difference in manufacturing. But if you are talking about
replacing a Cortex-M4 microcontroller with a Cortex-A53 SOC, it /will/
be a lot more expensive in most volumes.

I can't really tell what kinds of designs you are discussing here. When
I talk about embedded systems in general, I mean microcontrollers
running specific programs - not general-purpose computers in embedded
formats (such as phones).

(For very small volumes, the actual physical production costs are a
small proportion of the price, and for very large volumes you have
dedicated machines for the particular board.)

>>> Possibly.  Or, just someone that wanted to stir up discussion...
>>
>> Could be.  And there's no harm in that!
>
> On that, we agree.
>
> Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!

I've not heard of that as a dieting method, but I shall give it a try :-)
