Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: 64-bit embedded computing is here and now
Date: Wed, 9 Jun 2021 03:12:12 -0700
Organization: A noiseless patient Spider
Lines: 387
Message-ID: <s9q46j$qbj$1@dont-email.me>
References: <7eefb5db-b155-44f8-9aad-7ce25d06c602n@googlegroups.com>
<87lf7kexbp.fsf@nightsong.com> <s9n10p$t4i$1@dont-email.me>
<s9n6rb$t19$1@dont-email.me> <s9nir0$k47$1@dont-email.me>
<s9p24n$tp2$1@dont-email.me> <s9ppv5$vn3$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 9 Jun 2021 10:12:36 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="67e6bc2a74bcfde539b04c3da972b629";
logging-data="26995"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+AanPiws6N9S1DvrMmK0ZW"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:80k5Bya3Fn2KazV3smzI1Jml7jo=
In-Reply-To: <s9ppv5$vn3$1@dont-email.me>
Content-Language: en-US

On 6/9/2021 12:17 AM, David Brown wrote:

>>> from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
>>> system - the step from 32-bit to 64-bit is totally pointless for 99.99%
>>> of embedded systems. (Even for most embedded Linux systems, you usually
>>> only have a 64-bit cpu because you want bigger and faster, not because
>>> of memory limitations. It is only when you have a big gui with fast
>>> graphics that 32-bit address space becomes a limitation.)
>>
>> You're assuming there has to be some "capacity" value to the 64b move.
>
> I'm trying to establish if there is any value at all in moving to
> 64-bit. And I have no doubt that for the /great/ majority of embedded
> systems, it would not.

That's a no-brainer -- most embedded systems are small MCUs.
Consider that the PC I'm sitting at has an MCU in the keyboard;
another in the mouse; one in the optical disk drive; one in
the rust disk drive; one in the printer; two in the UPS;
one in the wireless "modem"; one in the router; one in
the thumb drive; etc. All offsetting the "big" CPU in
the computer, itself.

> I don't even see it as having noticeable added value in the solid
> majority of embedded Linux systems produced. But in those systems, the
> cost is minor or irrelevant once you have a big enough processor.

My point is that the market can distort the "price/value"
relationship in ways that might not, otherwise, make sense.
A "better" device may end up costing less than a "worse"
device -- simply because of the volumes that the population
of customers favor.

>> You might discover that the ultralow power devices (for phones!)
>> are being offered in the process geometries targeted for the 64b
>> devices.
>
> Process geometries are not targeted at 64-bit. They are targeted at
> smaller, faster and lower dynamic power. In order to produce such a big
> design as a 64-bit cpu, you'll aim for a minimum level of process
> sophistication - but that same process can be used for twice as many
> 32-bit cores, or bigger sram, or graphics accelerators, or whatever else
> suits the needs of the device.

They will apply newer process geometries to newer devices.
No one is going to retool an existing design -- unless doing
so will result in a significant market enhancement.

Why don't we have 100MHz MC6800's?

> A major reason you see 64-bit cores in big SOC's is that the die space
> is primarily taken up by caches, graphics units, on-board ram,
> networking, interfaces, and everything else. Moving the cpu core from
> 32-bit to 64-bit only increases the die size by a few percent, and for
> some tasks it will also increase the performance of the code by a
> small but helpful amount. So it is not uncommon, even if you don't need
> the additional address space.
>
> (The other major reason is that for some systems, you want to work with
> more than about 2 GB ram, and then life is much easier with 64-bit cores.)
>
> On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
> a 64-bit core will increase the die by maybe 30% and give roughly /zero/
> performance increase. You don't use 64-bit unless you really need it.

Again, "... unless the market has made those devices cheaper than
their previous choices" People don't necessarily "fit" their
applications to the devices they choose; they consider other
factors (cost, package type, availability, etc.) in deciding
what to actual design into the product.

You might "need" X MB of RAM but will "tolerate" 4X -- if the
price is better than for the X MB *or* the X MB devices are
not available. If the PCB layout can directly accommodate
such a solution, then great! But, even if not, a PCB
revision is a cheap expenditure if it lets you take advantage of
a different component.

I've made very deliberate efforts NOT to use many of the
"I/Os" on the MCUs that I'm designing around so I can
have more leeway in making that selection when the design
is released to production (every capability used represents
a constraint that OTHER selections must satisfy).

>> Or, that some integrated peripheral "makes sense" for
>> phones (but not MCUs targeting motor control applications). Or,
>> that there are additional power management strategies supported
>> in the hardware.
>>
>> In my mind, the distinction brought about by "32b" was more advanced
>> memory protection/management -- even if not used in a particular
>> application. You simply didn't see these sorts of mechanisms
>> in 8/16b offerings. Likewise, floating point accelerators. Working
>> in smaller processors meant you had to spend extra effort to
>> bullet-proof your code, economize on math operators, etc.
>
> You need to write correct code regardless of the size of the device. I
> disagree entirely about memory protection being useful there. This is
> comp.arch.embedded, not comp.programs.windows (or whatever). An MPU
> might make it easier to catch and fix bugs while developing and testing,
> but code that hits MPU traps should not leave your workbench.

You're assuming you (or I) have control over all of the code that
executes on a product/platform. And, that every potential bug
manifests *in* testing. (If that were the case, we'd never
see bugs in the wild!)

In my case, "third parties" (who the hell is the SECOND party??)
can install code that I've no control over. That code could
be buggy -- or malevolent. Being able to isolate "actors"
from each other means the OS can detect "can't happens"
at run time and shut down the offender -- instead of letting
it corrupt some part of the system.
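
To make that concrete: on a Cortex-M class part with an MPU, "shut down
the offender" can be as little as a fault handler that kills the faulting
task instead of resetting the box. This is only a sketch -- task_current(),
task_kill() and scheduler_request_switch() are hypothetical RTOS hooks,
not any particular kernel's API:

#include <stdint.h>

/* Hypothetical RTOS hooks -- placeholder names, not a real kernel API. */
extern void *task_current(void);           /* task running when the fault hit */
extern void  task_kill(void *task);        /* mark it dead, reclaim resources */
extern void  scheduler_request_switch(void);

/* ARMv7-M MemManage handler: the MPU caught a task touching memory it
   doesn't own.  Terminate just that task; everything else keeps running. */
void MemManage_Handler(void)
{
    void *offender = task_current();

    /* MMFSR is the low byte of the CFSR (0xE000ED28); its bits are
       write-1-to-clear, so writing the set bits back clears the fault. */
    volatile uint32_t *cfsr = (volatile uint32_t *)0xE000ED28u;
    *cfsr = *cfsr & 0xFFu;

    task_kill(offender);          /* the "can't happen" happened -- stop it */
    scheduler_request_switch();   /* run someone else on exception return   */
}

A real handler has more to do (discard the faulted context, log the MMFAR
address, notify whoever installed the offending code), but the point stands:
the fault lands in the OS, not in some other actor's corrupted data.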

> But you are absolutely right about maths (floating point or integer) -
> having 32-bit gives you a lot more freedom and less messing around with
> scaling back and forth to make things fit and work efficiently in 8-bit
> or 16-bit. And if you have floating point hardware (and know how to use
> it properly), that opens up new possibilities.
>
> 64-bit cores will extend that, but the step is almost negligible in
> comparison. It would be wrong to say "int32_t is enough for anyone",
> but it is /almost/ true. It is certainly true enough that it is not a
> problem that using "int64_t" takes two instructions instead of one.

Except that an int64_t operation can take *four* instructions instead
of one (add/sub/mul of two int64_t's with 32b hardware).
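
A trivial C fragment makes that cost visible. The instruction counts in
the comments are what gcc typically emits for an ARMv7-M (Cortex-M)
target -- illustrative, not guaranteed output:

#include <stdint.h>

/* On a 32b core the compiler synthesizes 64-bit math from 32-bit halves. */

uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;   /* typically 2 instructions: ADDS (low) + ADC (high) */
}

uint64_t mul64(uint64_t a, uint64_t b)
{
    return a * b;   /* typically ~4: UMULL + two MLAs (+ register moves) */
}

uint32_t add32(uint32_t a, uint32_t b)
{
    return a + b;   /* 1 instruction: ADD */
}

On an 8b or 16b part the same source lines expand into library calls or
much longer sequences -- which is exactly the "economize on math operators"
effort mentioned above.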

>>> Some parts of code and data /do/ double in size - but not uniformly, of
>>> course. But your chip is bigger, faster, requires more power, has wider
>>> buses, needs more advanced memories, has more balls on the package,
>>> requires finer pitched pcb layouts, etc.
>>
>> And has been targeted to a market that is EXTREMELY power sensitive
>> (phones!).
>
> A phone cpu takes orders of magnitude more power to do the kinds of
> tasks that might be typical for a microcontroller cpu - reading sensors,
> controlling outputs, handling UARTs, SPI and I²C buses, etc. Phone cpus
> are optimised for doing the "big phone stuff" efficiently - because
> that's what takes the time, and therefore the power.

But you're making assumptions about what the "embedded microcontroller"
will actually be called upon to do!

Most of my embedded devices have "done more" than the PCs on which
they were designed -- despite the fact that the PC can defrost bagels!

> (I'm snipping because there is far too much here - I have read your
> comments, but I'm trying to limit the ones I reply to.)
>
>>>
>>> We will see that on devices that are, roughly speaking, tablets -
>>> embedded systems with a good gui, a touchscreen, networking. And that's
>>> fine. But these are a tiny proportion of the embedded devices made.
>>
>> Again, I disagree.
>
> I assume you are disagreeing about seeing 64-bit cpus only on devices
> that need a lot of memory or processing power, rather than disagreeing
> that such devices are only a tiny proportion of embedded devices.

I'm disagreeing with the assumption that 64bit CPUs are solely used
on "tablets, devices with good GUIs, touchscreens, networking"
(in the embedded domain).

>> You've already admitted to using 32b processors
>> where 8b could suffice. What makes you think you won't be using 64b
>> processors when 32b could suffice?
>
> As I have said, I think there will be an increase in the proportion of
> 64-bit embedded devices - but I think it will be very slow and gradual.
> Perhaps in 20 years time 64-bit will be in the place that 32-bit is
> now. But it won't happen for a long time.

And how is that any different from 32b processors introduced in 1980
only NOW seeing any sort of "widespread" use?

The adoption of new technologies accelerates, over time. People
(not "everyone") are more willing to try new things -- esp if
it is relatively easy to do so. I can buy a 64b evaluation kit
for a few hundred dollars -- I paid more than that for my first
8" floppy drive. I can run/install some demo software and
get a feel for the level of performance, how much power
is consumed, etc. I don't need to convince my employer to
make that investment (so *I* can explore).

In a group environment, if such a solution is *suggested*,
I can then lend my support -- instead of shying away out of
fear of the unknown risks.

> Why do I use 32-bit microcontrollers where an 8-bit one could do the
> job? Well, we mentioned above that you can be freer with the maths.
> You can, in general, be freer in the code - and you can use better tools
> and languages.

Exactly. It's "easier" and you're less concerned with sorting
out (later) what might not fit or be fast enough, etc.

I could have done my current project with a bunch of PICs
talking to a "big machine" over EIA485 links (I'd done an
industrial automation project like that, before). But,
unless you can predict how many sensors/actuators ("motes")
there will EVER be, it's hard to determine how "big" that
computer needs to be!

Given that the cost of the PIC is only partially reflective
of the cost of the DEPLOYED mote (run cable, attach and
calibrate sensors/actuators, etc.), the added cost of
moving to a bigger device on that mote disappears.
Especially when you consider the flexibility it affords
(in terms of scaling).

> With ARM microcontrollers I can use the latest gcc and
> C++ standards - I don't have to program in a weird almost-C dialect
> using extensions to get data in flash, or pay thousands for a limited
> C++ compiler with last century's standards. I don't have to try and
> squeeze things into 8-bit scaled integers, or limit my use of pointers
> due to cpu limitations.
>
> And manufacturers make the devices smaller, cheaper, lower power and
> faster than 8-bit devices in many cases.
>
> If manufactures made 64-bit devices that are smaller, cheaper and lower
> power than the 32-bit ones today, I'd use them. But they would not be
> better for the job, or better to work with and better for development in
> the way 32-bit devices are better than 8-bit and 16-bit.

Again, you're making predictions about what those devices will be.

Imagine 64b devices ARE equipped with radios. You can ADD a radio
to your "better suited" 32b design. Or, *buy* the radio already
integrated into the 64b solution. Are you going to stick with
32b devices because they are "better suited" to the application?
Or, will you "suffer" the pains of embracing the 64b device?

It's not *just* a CPU core that you're dealing with. Just like
the 8/16 vs 32b decision isn't JUST about the width of the registers
in the device or size of the address space.

I mentioned my little experimental LFC device to discipline my
NTPd. It would have been *nice* if it had an 8P8C onboard
so I could talk to it "over the wire". But, that's not the
appropriate sort of connectivity for an 8b device -- a serial
port is. If I didn't have a means of connecting to it thusly,
the 8b solution -- despite being a TINY development effort -- would
have been impractical; bolting on a network stack and NIC would
greatly magnify the cost (development time) of that platform.

>> It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
>> The boards are essentially the same size. "System" power consumption
>> is almost identical. Cost is the sole differentiating factor, today.
>
> For you, perhaps. Not necessarily for others.
>
> We design, program and manufacture electronics. Production and testing
> of simpler cards is cheaper. The pcbs are cheaper. The chips are
> cheaper. The mounting is faster. The programming and testing is
> faster. You don't mix big, thick tracks and high power on the same
> board as tight-packed BGA with blind/buried vias - but you /can/ happily
> work with less dense packages on the same board.
>
> If you are talking about replacing one 400-ball SOC with another
> 400-ball SOC with a 64-bit core instead of a 32-bit core, then it will
> make no difference in manufacturing. But if you are talking about
> replacing a Cortex-M4 microcontroller with a Cortex-A53 SOC, it /will/
> be a lot more expensive in most volumes.
>
> I can't really tell what kinds of designs you are discussing here. When
> I talk about embedded systems in general, I mean microcontrollers
> running specific programs - not general-purpose computers in embedded
> formats (such as phones).

I cite phones as an example of a "big market" that will severely
impact the devices (MCUs) that are actually manufactured and sold.

I increasingly see "applications" growing in complexity -- beyond
"single use" devices in the past. Devices talk to more things
(devices) than they had, previously. Interfaces grow in
complexity (markets often want to exercise some sort of control
or configuration over a device -- remotely -- instead of just
letting it do its ONE thing).

In the past, additional functionality was an infrequent upgrade.
Now, designs accommodate it "in the field" -- because they
are expected to (no one wants to mail a device back to the factory
for a software upgrade -- or have a visit from a service tech
for that purpose).

Rarely does a product become LESS complex, with updates. I've
often found myself updating a design only to discover I've
run out of some resource ("ROM", RAM, real-time, etc.). This
never causes the update to be aborted; rather, it forces
an unexpected diversion into shoehorning the "new REQUIREMENTS"
into the old "5 pound sack".

In *my* case, there are fixed applications (MANY) running on
the hardware. But, the system is designed to allow for
new applications to be added, old ones replaced (or retired),
augmented with additional hardware, etc. It's not the "closed
unless updated" systems previously common.

We made LORAN-C position plotters, ages ago. Conceptually,
cut a portion of a commercially available map and adhere it
to the plotter bed. Position the pen at your current location
on the map. Turn on. Start driving ("sailing"). The pen
will move to indicate your NEW current position as well as
a track indicating your path TO that (from wherever you
were a moment ago).

[This uses 100% of an 8b processor's real-time to keep up
with the updates from the navigation receiver.]

"Gee, what if the user doesn't have a commercial map,
handy? Can't we *draw* one for him?"

[Hmmm... if we concentrate on JUST drawing a map, then
we can spend 100% of the CPU on THAT activity! We'll just
need to find some extra space to store the code required
and RAM to hold the variables we'll need...]

"Gee, when the fisherman drops a lobster pot over the
side, he has to run over to the plotter to mark the
current location -- so he can return to it at some later
date. Why can't we give him a button (on a long cable)
that automatically draws an 'X' on the plot each time
he depresses it?"

You can see where this is going...

Devices grow in features and complexity. If that plotter
was designed today, it would likely have a graphic display
(instead of pen and ink). And the 'X' would want to be
displayed in RED (or, some user-configured color). And
another color for the map to distinguish it from the "track".
And updates would want to be distributed via a phone
or thumbdrive or other "user accessible" medium.

This is because the needs of such a device will undoubtedly
evolve. How often have you updated the firmware in
your disk drives? Optical drives? Mice? Keyboard?
Microwave oven? TV?

We designed medical instruments where the firmware resided
in a big, bulky "module" that could easily be removed
(expensive ZIF connector!) -- so that medtechs could
perform the updates in minutes (instead of taking the device
out of service). But, as long as we didn't overly tax the
real-time demands of the "base hardware", we were free
(subject to pricing issues) to enhance that "module" to
accommodate whatever new features were required. The product
could "remain current".

Like adding RAM to a PC to extend its utility (why can't I add
RAM to my SmartTVs? Why can't I update their codecs?)

The upgradeable products are designed for longer service lives
than the non-upgradeable examples, here. So, they have to be
able to accommodate (in their "base designs") a wider variety
of unforeseeable changes.

If you expect a short service life, then you can rationalize NOT
upgrading/updating and simply expecting the user to REPLACE the
device at some interval that your marketeers consider appropriate.

> (For very small volumes, the actual physical production costs are a
> small proportion of the price, and for very large volumes you have
> dedicated machines for the particular board.)
>
>>>> Possibly. Or, just someone that wanted to stir up discussion...
>>>
>>> Could be. And there's no harm in that!
>>
>> On that, we agree.
>>
>> Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!
>
> I've not heard of that as a dieting method, but I shall give it a try :-)

It's not recommended. I suspect it is evidence of some sort of
food allergy that causes my body not to process calories properly
(a tablespoon is 200+ calories; an enviable "scoop" is well over a
thousand!). It annoys my other half to no end cuz she gains weight
just by LOOKING at the stuff! :> So, it's best for me to "sneak"
it when she can't set eyes on it. Or, for me to make flavors
that she's not keen on (this was butter pecan so she is REALLY
annoyed!)
