Re: 64-bit embedded computing is here and now

From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Tue, 8 Jun 2021 17:30:53 -0700
Message-ID: <s9p24n$tp2$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
 <87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
 <s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
In-Reply-To: <s9nir0$k47$1@dont-email.me>

On 6/8/2021 4:04 AM, David Brown wrote:
> On 08/06/2021 09:39, Don Y wrote:
>> On 6/7/2021 10:59 PM, David Brown wrote:
>>> 8-bit microcontrollers are still far more common than 32-bit devices in
>>> the embedded world (and 4-bit devices are not gone yet). At the other
>>> end, 64-bit devices have been used for a decade or two in some kinds of
>>> embedded systems.
>>
>> I contend that a good many "32b" implementations are really glorified
>> 8/16b applications that exhausted their memory space.
>
> Sure. Previously you might have used 32 kB flash on an 8-bit device,
> now you can use 64 kB flash on a 32-bit device. The point is, you are
> /not/ going to find yourself hitting GB limits any time soon. The step

I don't see the "problem" with 32b devices as one of address space limits
(except devices utilizing VMM with insanely large page sizes). As I said,
in my application, task address spaces are really just a handful of pages.

I *do* see (flat) address spaces that find themselves filling up with
stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
*partial* address decoding for offboard I/Os, etc. (i.e., you're
not likely going to fully decode a single address to access a set
of DIP switches as the decode logic is disproportionately high
relative to the functionality it adds)

How often do you see a high-order address line used for kernel/user?
(gee, now your "user" space has been halved)
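To put a number on it, here's a sketch (my own illustration, not any
particular part's memory map) of what spending A31 on the kernel/user
split costs you:

    /* Hypothetical 32b map: A31 selects kernel vs. user.
       One address line spent -- "user" space drops to 2GB. */
    #define KERNEL_BASE 0x80000000UL    /* A31 = 1 */
    #define USER_BASE   0x00000000UL    /* A31 = 0 */
    #define USER_SIZE   0x80000000UL    /* half of the 4GB map */

    static inline int is_kernel_addr(unsigned long a)
    {
        return (a & KERNEL_BASE) != 0;
    }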

> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
> of embedded systems. (Even for most embedded Linux systems, you usually
> only have a 64-bit cpu because you want bigger and faster, not because
> of memory limitations. It is only when you have a big gui with fast
> graphics that 32-bit address space becomes a limitation.)

You're assuming there has to be some "capacity" value to the 64b move.

You might discover that the ultralow power devices (for phones!)
are being offered in the process geometries targeted for the 64b
devices. Or, that some integrated peripheral "makes sense" for
phones (but not MCUs targeting motor control applications). Or,
that there are additional power management strategies supported
in the hardware.

In my mind, the distinction brought about by "32b" was more advanced
memory protection/management -- even if not used in a particular
application. You simply didn't see these sorts of mechanisms
in 8/16b offerings. Likewise, floating point accelerators. Working
in smaller processors meant you had to spend extra effort to
bullet-proof your code, economize on math operators, etc.
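For instance, "economizing" typically meant fixed point instead of
dragging in a software float library. A minimal sketch of the usual
trick:

    #include <stdint.h>

    /* Q8.8 fixed point: 8 integer bits, 8 fraction bits.
       The 32b intermediate keeps the product from overflowing. */
    typedef int16_t q8_8;

    #define TO_Q8_8(x)  ((q8_8)((x) * 256))

    static q8_8 q8_8_mul(q8_8 a, q8_8 b)
    {
        return (q8_8)(((int32_t)a * (int32_t)b) >> 8);
    }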

So, if you wanted the advantages of those (hardware) mechanisms,
you "upgraded" your design to 32b -- even if it didn't need
gobs of address space or generic MIPS. It just wasn't economical
to bolt on an AM9511 or practical to build a homebrew MMU.

> A 32-bit microcontroller is simply much easier to work with than an
> 8-bit or 16-bit with "extended" or banked memory to get beyond 64 K
> address space limits.

There have been some 8b processors that could seamlessly (in an HLL)
handle extended address spaces. The Z180s were delightfully easy
to use in this regard. You just had to keep in mind that a "call" to
a different bank was more expensive than a "local" call (though
there were no syntactic differences; the linkage editor and runtime
package made this invisible to the developer).
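The runtime glue was tiny, too. Roughly, in C -- io_in()/io_out()
stand in for the compiler's port intrinsics, and 0x39 assumes the
internal I/O block at its default base (check your ICR setting):

    /* Sketch of a Z180 "far call" thunk. The real thing was a few
       bytes of assembler emitted by the linkage editor. */
    #define BBR 0x39    /* bank base register, default I/O base */

    void far_call(unsigned char bank, void (*fn)(void))
    {
        unsigned char old = io_in(BBR);  /* remember caller's bank */
        io_out(BBR, bank);               /* map the callee's bank  */
        fn();                            /* now it's a local call  */
        io_out(BBR, old);                /* restore on the way out */
    }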

We were selling products with 128K of DRAM on Z80's back in 1981.
Because it was easier to design THAT hardware than to step up to
a 68K, for example. (as well as leveraging our existing codebase)
The "video game era" was built on hybridized 8b systems -- even though
you could buy 32b hardware, at the time. You would be surprised at
the ingenuity of many of those systems in offloading the processor
of costly (time consuming) operations to make the device appear more
powerful than it actually was.

>>> We'll see 64-bit take a greater proportion of the embedded systems that
>>> demand high throughput or processing power (network devices, hard cores
>>> in expensive FPGAs, etc.) where the extra cost in dollars, power,
>>> complexity, board design are not a problem. They will probably become
>>> more common in embedded Linux systems as the core itself is not usually
>>> the biggest part of the cost. And such systems are definitely on the
>>> increase.
>>>
>>> But for microcontrollers - which dominate embedded systems - there has
>>> been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
>>
>> I disagree. The "cost" (barrier) that I see clients facing is the
>> added complexity of a 32b platform and how it often implies (or even
>> *requires*) a more formal OS underpinning the application.
>
> Yes, that is definitely a cost in some cases - 32-bit microcontrollers
> are usually noticeably more complicated than 8-bit ones. How
> significant the cost is depends on the balances of the project between
> development costs and production costs, and how beneficial the extra
> functionality can be (like moving from bare metal to RTOS, or supporting
> networking).

I see most 32b designs operating without the benefits that a VMM system
can apply (even if you discount demand paging). They just want to have
a big address space and not have to dick with "segment registers", etc.
They plow through the learning effort required to configure the device
to move the "extra capabilities" out of the way. Then, just treat it
like a bigger 8/16 processor.
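Concretely, the "move it out of the way" step often looks like this
(a sketch assuming a CMSIS-style Cortex-M part; "device.h" stands in
for the vendor's header):

    #include "device.h"  /* placeholder for the vendor CMSIS header */

    /* Leave the (optional) MPU disabled and run the default flat
       map -- i.e., use the 32b part as a big 8/16. */
    void run_flat(void)
    {
        MPU->CTRL = 0;   /* MPU off: one flat, unprotected space */
        __DSB();         /* ensure the change is seen...         */
        __ISB();         /* ...before fetching further code      */
    }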

You can "bolt on" a simple network stack even with a rudimentary RTOS/MTOS.
Likewise, a web server. Now, you remove the need for graphics and other UI
activities hosted *in* the device. And, you likely don't need to support
multiple concurrent clients. If you want to provide those capabilities, do
that *outside* the device (let it be someone else's problem). And, you gain
"remote access" for free.

Few such devices *need* (or even WANT!) ARP caches, inetd, high performance
stack, file systems, etc.

Given the obvious (coming) push for enhanced security in devices, anything
running on your box that you don't need (or UNDERSTAND!) is likely going to
be pruned off as a way to reduce the attack surface. "Why is this port open?
What is this process doing? How robust is the XXX subsystem implementation
to hostile actors in an *unsupervised* setting?"

>>> cost. There is almost nothing to gain from a move to 64-bit, but the
>>> cost would be a good deal higher.
>>
>> Why is the cost "a good deal higher"? Code/data footprints don't
>> uniformly "double" in size. The CPU doesn't slow down to handle
>> bigger data.
>
> Some parts of code and data /do/ double in size - but not uniformly, of
> course. But your chip is bigger, faster, requires more power, has wider
> buses, needs more advanced memories, has more balls on the package,
> requires finer pitched pcb layouts, etc.

And has been targeted to a market that is EXTREMELY power sensitive
(phones!).

It is increasingly common for manufacturing technologies to be moving away
from "casual development". The days of owning your own wave and doing
in-house manufacturing at a small startup are gone. If you want to
limit yourself to the kinds of products that you CAN (easily) assemble, you
will find yourself operating with a much poorer selection of components
available. I could fab a PCB in-house and build small runs of prototypes
using the wave and shake-and-bake facilities that we had on hand. Harder
to do so, nowadays.

This has always been the case. When thru-hole met SMT, folks had to
either retool to support SMT, or limit themselves to components that
were available in thru-hole packages. As the trend has always been
for MORE devices to move to newer packaging technologies, anyone
who spent any time thinking about it could read the writing on the wall!
(I bought my Leister in 1988? Now, I prefer begging favors from
colleagues to get my prototypes assembled!)

I suspect this is why we increasingly see designs built on COTS
"modules". Just like designs using wall warts (so they don't
have to do the testing on their own, internally designed supplies).
It's one of the reasons FOSH is hampered (unlike FOSS, you can't roll
your own copy of a hardware design!)

> In theory, you /could/ make a microcontroller in a 64-pin LQFP and
> replace the 72 MHz Cortex-M4 with a 64-bit ARM core at the same clock
> speed. The die would only cost two or three times more, and take
> perhaps less than 10 times the power for the core. But it would be so
> utterly pointless that no manufacturer would make such a device.

This is specious reasoning: "You could take the die out of a 68K and
replace it with a 64 bit ARM." Would THAT core cost two or three times more
(do you recall how BIG 68K die were?) and consume 10 times the power?
(it would consume considerably LESS).

The market will drive the cost (power, size, $$$, etc.) of 64b cores
down as they will find increasing use in devices that are size and
power constrained. There's far more incentive to make a cheap,
low power 64b ARM than there is to make a cheap, low power i686
(or 68K) -- you don't see x86 devices in phones (laptops have bigger
power budgets so less pressure on efficiency).

There's no incentive to make thru-hole versions of any "serious"
processor, today. Just like you can't find any fabs for DTL devices.
Or 10 & 12" vinyl. (yeah, you can buy vinyl, today -- at a premium.
And, I suspect you can find someone to package an ARM on a DIP
carrier. But, each of those are niche markets, not where the
"money lies")

> So a move to 64-bit in practice means moving from a small, cheap,
> self-contained microcontroller to an embedded PC. Lots of new
> possibilities, lots of new costs of all kinds.

How do you come to that conclusion? I have a 32b MCU on a board.
And some FLASH and DRAM. How is that going to change when I
move to a 64b processor? The 64b devices are also SoCs so
it's not like you suddenly have to add address decoding logic,
a clock generator, interrupt controller, etc.

Will phones suddenly become FATTER to accommodate the extra
hardware needed? Will they all need bolt on battery boosters?

> Oh, and the cpu /could/ be slower for some tasks - bigger cpus that are
> optimised for throughput often have poorer latency and more jitter for
> interrupts and other time-critical features.

You're cherry picking. They can also be FASTER for other tasks
and likely will be optimized to justify/exploit those added abilities;
a vendor isn't going to offer a product that is LESS desirable than
his existing products. An IPv6 stack on a 64b processor is a bit
easier to implement than on 32b.

(remember, ARM is in a LOT of fabs! That speaks to how ubiquitous
it is!)
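The IPv6 point is easy to illustrate: addresses are 128b, so
compares, prefix masks, etc. take half as many operations with 64b
registers. A sketch:

    #include <stdint.h>
    #include <string.h>

    /* 128b address equality: two compares on a 64b machine,
       four on a 32b one. memcpy() sidesteps alignment issues. */
    static int ip6_equal(const uint8_t a[16], const uint8_t b[16])
    {
        uint64_t a0, a1, b0, b1;
        memcpy(&a0, a, 8); memcpy(&a1, a + 8, 8);
        memcpy(&b0, b, 8); memcpy(&b1, b + 8, 8);
        return (a0 == b0) && (a1 == b1);
    }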

>>> So it is not going to happen - at
>>> least not more than a very small and very gradual change.
>>
>> We got 32b processors NOT because the embedded world cried out for
>> them but, rather, because of the influence of the 32b desktop world.
>> We've had 32b processors since the early 80's. But, we've only had
>> PCs since about the same timeframe! One assumes ubiquity in the
>> desktop world would need to happen before any real spillover to embedded.
>> (When the "desktop" was an '11 sitting in a back room, it wasn't seen
>> as ubiquitous.)
>
> I don't assume there is any direct connection between the desktop world
> and the embedded world - the needs are usually very different. There is
> a small overlap in the area of embedded devices with good networking and
> a gui, where similarity to the desktop world is useful.

The desktop world inspires the embedded world. You see what CAN be done
for "reasonable money".

In the 70's, we put i4004's into products because we knew the processing
that was required was "affordable" (at several kilobucks) -- because
we had our own '11 on site. We leveraged the in-house '11 to compute
"initialization constants" for the needs of specific users (operating
the i4004-based products). We didn't hesitate to migrate to i8080/85
when they became available -- because the price point was largely
unchanged (from where it had been with the i4004) AND we could skip the
involvement of the '11 in computing those initialization constants!

I watch the prices of the original 32b ARM I chose fall and see that
as an opportunity -- to UPGRADE the capabilities (and future-safeness
of the design). If I'd assumed $X was a tolerable price, before,
then it likely still is!

> We have had 32-bit microcontrollers for decades. I used a 16-bit
> Windows system when working with my first 32-bit microcontroller. But
> at that time, 32-bit microcontrollers cost a lot more and required more
> from the board (external memories, more power, etc.) than 8-bit or
> 16-bit devices. That has gradually changed with an almost total
> disregard for what has happened in the desktop world.

I disagree. I recall having to put lots of "peripherals" into
an 8/16b system: external address decoding logic, clock generators,
DRAM controllers, etc.

And, the cost of entry was considerably higher. Development systems
used to cost tens of kilodollars (Intellec MDS, Zilog ZRDS, Moto
EXORmacs, etc.) I shared a development system with several other
developers in the 70's -- because the idea of giving each of us our
own was anathema, at the time.

For 35+ years, you could put one on YOUR desk for a few kilobucks.
Now, it's considerably less than that.

You'd have to be blind not to see that the components that
are "embedded" in products have seen -- and will continue to
see -- similar reductions in price and increases in performance.

Do you think the folks making the components didn't anticipate
the potential demand for smaller/faster/cheaper chips?

We've had TCP/IP for decades. Why is it "suddenly" more ubiquitous
in product offerings? People *see* what they can do with a technology
in one application domain (e.g., desktop) and extrapolate that to
other, similar application domains (embedded).

I did my first full custom 30+ years ago. Now, I can buy an off-the-shelf
component and "program" it to get similar functionality (without
involving a service bureau). Ideas that previously were "gee, if only..."
are now commonplace.

> Yes, the embedded world /did/ cry out for 32-bit microcontrollers for an
> increasing proportion of tasks. We cried many tears when then
> microcontroller manufacturers offered to give more flash space to their
> 8-bit devices by having different memory models, banking, far jumps, and
> all the other shit that goes with not having a big enough address space.
> We cried out when we wanted to have Ethernet and the microcontroller
> only had a few KB of ram. I have used maybe 6 or 8 different 32-bit
> microcontroller processor architectures, and I used them because I
> needed them for the task. It's only in the past 5+ years that I have
> been using 32-bit microcontrollers for tasks that could be done fine
> with 8-bit devices, but the 32-bit devices are smaller, cheaper and
> easier to work with than the corresponding 8-bit parts.

But that's because your needs evolve and the tools you choose to
use have, as well.

I wanted to build a little line frequency clock to see how well it
could discipline my NTPd. I've got all these PCs, single board PCs,
etc. lying around. It was *easier* to hack together a small 8b
processor to do the job -- less hardware to understand, no OS
to get in the way, really simple to put a number on the interrupt
latency that I could expect, no uncertainties about the hardware
that's on the PC, etc.
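The whole thing reduces to one tight ISR. A sketch (avr-gcc flavor;
INT0 configuration is omitted, and the opto input is assumed to be
wired to that pin):

    #include <avr/interrupt.h>
    #include <stdint.h>

    /* Count mains zero crossings (via an opto) -- 50/60 per second,
       exact over the long term -- and let the host read the count
       to discipline ntpd. Interrupt latency is small and *bounded*:
       no OS, no other hardware contending for the bus. */
    volatile uint32_t crossings;

    ISR(INT0_vect)
    {
        crossings++;
    }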

OTOH, I have a network stack that I wrote for the Z180 decades
ago. Despite being written in a HLL, it is a bear to deploy and
maintain owing to the tools and resources available in that
platform. My 32b stack was a piece of cake to write, by comparison!

>> In the future, we'll see the 64b *phone* world drive the evolution
>> of embedded designs, similarly. (do you really need 32b/64b to
>> make a phone? how much code is actually executing at any given
>> time and in how many different containers?)
>
> We will see that on devices that are, roughly speaking, tablets -
> embedded systems with a good gui, a touchscreen, networking. And that's
> fine. But these are a tiny proportion of the embedded devices made.

Again, I disagree. You've already admitted to using 32b processors
where 8b could suffice. What makes you think you won't be using 64b
processors when 32b could suffice?

It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
The boards are essentially the same size. "System" power consumption
is almost identical. Cost is the sole differentiating factor, today.
History tells us it will be less so, tomorrow. And, the innovations
that will likely come in that offering will likely exceed the
capabilities (or perceived market needs) of smaller processors.
To say nothing of the *imagined* uses that future developers will
envision!

I can make a camera that "reports to google/amazon" to do motion detection,
remote access, etc. Or, for virtually the same (customer) dollars, I
can provide that functionality locally. Would a customer want to add
an "unnecessary" dependency to a solution? "Tired of being dependant
on Big Brother for your home security needs? ..." Imagine a 64b SoC
with a cellular radio: "I'll *call* you when someone comes to the door..."
(or SMS)

I have cameras INSIDE my garage that assist with my parking and
tell me if I've forgotten to close the garage door. Should I have
google/amazon perform those value-added tasks for me? Will they
tell me if I've left something in the car's path before I run over it?
Will they turn on the light to make it easier for me to see?
Should I, instead, tether all of those cameras to some "big box"
that does all of that signal processing? What happens to those
resources when the garage is "empty"??

The "electric eye" (interrupter) that guards against closing the
garage door on a toddler/pet/item in it's path does nothing to
protect me if I leave some portion of the vehicle in the path of
the door (but ABOVE the detection range of the interrupter).
Locating a *camera* on teh side of the doorway lets me detect
if ANYTHING is in the path of the door, regardless of how high
above the old interrupter's position it may be located.
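And the processing needn't be exotic: differencing the live frame
against a stored "empty doorway" reference flags anything in the
door's sweep. A crude sketch (thresholds are made-up placeholders):

    #include <stdint.h>
    #include <stdlib.h>

    #define PIX_DELTA 24     /* per-pixel change threshold         */
    #define MIN_BLOB  500    /* this many changed pixels = blocked */

    int door_blocked(const uint8_t *live, const uint8_t *ref, size_t n)
    {
        size_t changed = 0;
        for (size_t i = 0; i < n; i++)
            if (abs((int)live[i] - (int)ref[i]) > PIX_DELTA)
                changed++;
        return changed > MIN_BLOB;
    }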

How *many* camera interfaces should the SoC *directly* support?

The number (and type) of applications that can be addressed with
ADDITIONAL *local* smarts/resources is almost boundless. And, folks
don't have to wait for a cloud supplier (off-site processing) to
decide to offer them.

"Build it and they will come."

[Does your thermostat REALLY need all of that horsepower -- two
processors! -- AND google's server in order to control the HVAC
in your home? My god, how did that simple bimetallic strip
ever do it??!]

If you move into the commercial/industrial domains, the opportunities
are even more diverse! (e.g., build a camera that does component inspection
*in* the camera and interfaces to a go/nogo gate or labeller)

Note that none of these applications need a display, touch panel, etc.
What they likely need is low power, small size, connectivity, MIPS and
memory. The same sorts of things that are common in phones.

>>> The OP sounds more like a salesman than someone who actually works with
>>> embedded development in reality.
>>
>> Possibly. Or, just someone that wanted to stir up discussion...
>
> Could be. And there's no harm in that!

On that, we agree.

Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!
