Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<PFWUM.1252$8hV9.734@fx44.iad>


https://www.novabbs.com/computers/article-flat.php?id=427&group=comp.sys.unisys#427

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!news.1d4.us!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx44.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: sco...@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Newsgroups: comp.sys.unisys
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <646745b0-5c25-44b9-a9a2-06b25fa9c240n@googlegroups.com>
Lines: 209
Message-ID: <PFWUM.1252$8hV9.734@fx44.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Mon, 09 Oct 2023 17:22:55 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Mon, 09 Oct 2023 17:22:55 GMT
X-Received-Bytes: 10274
 by: Scott Lurndal - Mon, 9 Oct 2023 17:22 UTC

Lewis Cole <l_cole@juno.com> writes:

>> Think of it this way. The highest hit rate is
>> obtained when the number of most likely to be used blocks are exactly
>> evenly split between the two caches.
>
>Ummm, no. I guess we are going to have an argument over caching after all ....
>
>The highest hit rate is obtained when a cache manages to successfully anticipate, load up into its local storage, and then provide that which the processor needs *BEFORE* the processor actually makes a request to get it from memory. Period.

I think that's a pretty simplistic characterization. Certainly
the hit-rate of a cache is an important performance indicator. That's
distinct, however, from prefetching cache lines in anticipation of
future need. Hit rate can be affected by cache organization (number
of sets, number of ways, associated metadata (virtual machine identifiers,
address space identifiers, et alia).

Prefetching falls into two buckets - explicit and implicit. Explicit
prefetching (i.e. via specialized load instructions) initiated by the
software can improve (or if done incorrectly, degrade) the cache hit
rate.

Implicit anticipatory prefetching by the cache subsystem can also have
a positive effect _on specific workloads_, but if not done carefully,
can have a negative effect on other workloads. A stride-based
prefetcher helps in the case of sequential memory accesses, for example,
but will degrade the performance of random access patterns (e.g. walking a
linked list or traversing a tree-structured object).
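
As a minimal sketch of the stride-based case (invented addresses, not any
real prefetcher design): a predictor that assumes the last stride repeats
is right almost every time on a sequential array walk and wrong almost
every time on a pointer chase.

# Minimal sketch of a simple stride prefetcher (illustrative only; not any
# real machine's design).  After each access it predicts "last address plus
# the last stride"; sequential scans are predicted almost perfectly, while a
# pointer chase produces mostly wrong (wasted) prefetches.
import random

def stride_prefetch_sim(addresses):
    last_addr = None
    predicted = None
    useful = useless = 0
    for a in addresses:
        if predicted is not None:
            if a == predicted:
                useful += 1     # the prefetched line was the very next demand access
            else:
                useless += 1    # misprediction: the prefetch would have been wasted
        if last_addr is not None:
            predicted = a + (a - last_addr)   # assume the last stride repeats
        last_addr = a
    return useful, useless

sequential = list(range(0, 64 * 100, 64))                     # array walk, 64-byte lines
random.seed(1)
pointer_chase = [random.randrange(0, 1 << 20) for _ in range(100)]

print("sequential scan:", stride_prefetch_sim(sequential))    # (98, 0)
print("pointer chase  :", stride_prefetch_sim(pointer_chase)) # almost all wasted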

<snip>

>Whether or not, say, an I-cache happens to have the same number of likely-to-be-used blocks as the D-cache is irrelevant.

Is it?

It's also useful to characterize the cache hierarchy. Modern systems have
three or more levels of cache, each level larger but slower to access than
the one before it, yet all of them faster than accessing DRAM directly.
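
For concreteness, the usual way to reason about such a hierarchy is average
memory access time; the latencies and miss rates below are invented for
illustration, not measurements of any particular machine.

# Average memory access time (AMAT) for a three-level hierarchy.
# All latencies (in cycles) and miss rates below are invented for illustration.
l1_hit, l2_hit, l3_hit, dram = 4, 12, 40, 200      # access latencies, cycles
l1_miss, l2_miss, l3_miss = 0.05, 0.30, 0.50       # miss rates at each level

amat = l1_hit + l1_miss * (l2_hit + l2_miss * (l3_hit + l3_miss * dram))
print(f"AMAT = {amat:.1f} cycles")   # 4 + 0.05*(12 + 0.30*(40 + 0.50*200)) = 6.7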

>They may have the same number. They may not have the same number. I suspect for reasons that I'll wave my arms at shortly that they usually aren't.
>What matters is whether they have what's needed and can deliver it before the processor actually requests it.
>
>Now if an I-cache is getting lots and lots of hits, then presumably it is likely filled with code loops that are being executed frequently.

Actually, Icache hit rates are dominated by how the compiler and linker
generate the executable code. To the extent that the compiler
(or linker - see LTO) can optimize the code layout for the most common
cases (taken/not-taken branches, etc.), the Icache hit rate is dependent
upon how well the code is laid out in memory. An anticipatory prefetcher
can be trained to be very effective in these cases (or the compiler can
explicitly generate prefetch instructions).

>The longer that the processor can continue to execute these loops, the more it will execute them at speeds that approach that which it would if main memory were as fast as the cache memory.

The loops will almost surely be dominated by accesses to the data cache
rather than the instruction cache.

<snip>

>> That would make the contents of
>> the two half sized caches exactly the same as those of the full sized
>> cache.
>
>No, it wouldn't. See above.
>
>> Conversely, if one of the caches has a different, (which means
>> lesser used) block, then its hit rate would be lower.
>
>No, it wouldn't. See above.
>
>> There is no way
>> that splitting the caches would lead to a higher hit rate.
>
>As I waved my arms at before, it is possible if more changes are made than just to its size.
>
>For example, if a cache happens to be a direct mapped cache, then there's only one spot in the cache for a piece of data with a particular index.
>If another piece of data with the same index is requested, then the old piece of data is lost/replaced with the new one.
>This is basic direct mapped cache behavior 101.

And the last modern system built with a direct mapped cache was over two
decades ago.

>
>OTOH, if a cache happens to be a set associative cache of any way greater than one (i.e. not direct mapped), then the new piece of data can end up in a different spot within the same set for the given index from which it can be returned if it is not lost/replaced for some other reason.
>This is basic set associativity cache behavior 101.
>
>The result is that if the processor has a direct mapped cache and just happens to make alternating accesses to two pieces of data that have the same index, the direct mapped cache will *ALWAYS* take a miss on every access (i.e. have a hit rate of 0%), while the same processor with a set associative cache of any way greater than one will *ALWAYS* take a hit (i.e. have a hit rate of 100%).

Which is _why_ direct-mapped cache implementations are considered obsolete.
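
A toy simulation of the alternating-access case (a sketch with made-up
parameters, not any real cache) makes the 0% versus ~100% contrast concrete:

# Toy simulation of the alternating-access case above: two lines whose
# addresses map to the same set.  16 sets, LRU replacement within a set.
NUM_SETS = 16

def hit_rate(ways, trace):
    sets = [[] for _ in range(NUM_SETS)]     # each set holds up to `ways` tags, LRU order
    hits = 0
    for addr in trace:
        idx, tag = addr % NUM_SETS, addr // NUM_SETS
        s = sets[idx]
        if tag in s:
            hits += 1
            s.remove(tag)                    # refresh: move to most-recently-used position
        elif len(s) == ways:
            s.pop(0)                         # evict the least recently used tag
        s.append(tag)
    return hits / len(trace)

A, B = 0x100, 0x200                          # both map to set 0, different tags
trace = [A, B] * 50                          # alternate between them, 100 accesses
print("direct mapped (1-way) :", hit_rate(1, trace))   # 0.0  -- every access misses
print("2-way set associative :", hit_rate(2, trace))   # 0.98 -- only two cold misses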

<snip>

>Having a separate I-cache and D-cache may well have other advantages besides increased hit rate.
>And increased concurrency may well be one of them.
>However, my point by mentioning the existence of separate I-caches and D-caches was to point out that given a sufficiently Good Reason, splitting/replacing a single cache with smaller caches may be A Good Idea.

A singleton example which may not apply generally. Splitting a Dcache
(at the same latency level) seems to have no arguments in its favor and
a plenitude of arguments against it.

>I will point out, however, that I think that increased concurrency seems like a pretty weak justification.
>Yes, separate caches might well allow for increased concurrency, but you have to come up with finding those things that can be done during instruction execution that can be done in parallel.

So, what characteristic of an access defines which of the separate caches
will be used by that access? Is there a potential for the same cache
line to appear in both caches? Does software need to manage the cache
fills (e.g. the MIPS software table walker)?

<snip>

>But generating a "select" signal that will access instructions or data in either a user mode instruction cache or data in a user mode data cache is trivially easy as well, at least conceptually, especially if one is willing to make use of/exploit that which is common practice in OSs these days.
>In particular, since even before the Toy OSs grew up, there has been a fixation with dividing the logical address space into two parts, one part for user code and data and the other part for supervisor code and data.

Fixation? WTF?

Leaving aside your "Toy OS" comment, whether you're referring to MSDOS or
any of the various single-job Mainframe operating systems that grew into
multiprogramming operating systems, there's always been a need to distinguish
between privileged and non-privileged code in some fashion.

The mechanisms varied, but the need is the same. Burroughs did it with
segmentation, IBM did it with partitions (physical and eventually logical),
Univac, well, that one was just weird.

>When the logical space is divided exactly in half (as was the case for much of the time for 32-bit machines),

That's not actually the case. Many of the early unix systems, when ported
to the 386, chose a 3-1 split (3GB user, 1GB system); others used a 2-2 split.

There was no other choice that would provide acceptable performance.

>the result was that the high order bit of the address indicates (and therefore could be used as a select line for) user space versus supervisor space cache access.

It wasn't necessarily the high-order bit(s).
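
A sketch of the select-bit arithmetic (illustrative only; real kernels mark
the split through page-table attributes rather than an address compare inside
the cache): with a 2-2 split a single high-order bit distinguishes the halves,
while a 3-1 split needs a compare against the 0xC0000000 boundary.

# Which half (or third) of a 32-bit address space does an address fall in?
# Purely illustrative arithmetic, with invented sample addresses.
KERNEL_BASE_2_2 = 0x8000_0000     # 2G user / 2G kernel split
KERNEL_BASE_3_1 = 0xC000_0000     # 3G user / 1G kernel split

def is_kernel_2_2(va):
    return bool((va >> 31) & 1)   # a single "select" bit; same as va >= KERNEL_BASE_2_2

def is_kernel_3_1(va):
    return va >= KERNEL_BASE_3_1  # needs a magnitude compare, not one bit

for va in (0x0040_1000, 0xBFFF_F000, 0xC010_2000):
    print(hex(va), " 2/2 kernel:", is_kernel_2_2(va), " 3/1 kernel:", is_kernel_3_1(va))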

>While things have changed a bit since 64-bit machines have become dominant, it is still at least conceptually possible to treat some part of the high order part of a logical address as such an indicator.

ARM64, which is about a decade old now, uses bit 55 of the virtual address
to determine whether the address should be translated using supervisor
translation tables or user translation tables. On Intel and AMD systems,
it's the highest supported virtual address bit (40, 48 or 52) and the
higher bits are required to be 'canonical' (i.e. match the supported high-order
bit in value).
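
For reference, a sketch of both conventions, assuming the common 48-bit x86-64
virtual address width and taking the bit-55 statement above at face value:

# Sketches of the two conventions mentioned above.  For x86-64 a 48-bit
# virtual address width is assumed (it differs with 5-level paging); the
# ARM64 check simply mirrors the post's statement about VA bit 55.
def x86_is_canonical(va, va_bits=48):
    top = va >> (va_bits - 1)                 # bit 47 and everything above it
    return top == 0 or top == (1 << (64 - va_bits + 1)) - 1   # all zeros or all ones

def arm64_kernel_half(va):
    return bool((va >> 55) & 1)               # bit 55 selects the translation tables

for va in (0x0000_7FFF_FFFF_F000, 0xFFFF_8000_0000_0000, 0x0000_8000_0000_0000):
    print(hex(va), "canonical:", x86_is_canonical(va),
          "kernel-half (bit 55):", arm64_kernel_half(va))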

<snip>

>What I find ... "interesting" ... here, however, is that you would try to make an argument at all about the possible lack of concurrency WRT a possible supervisor cache.
>As I have indicated before, I assume that any such cache would be basically at the same level as current L3 caches and it is my understanding that for the most part, they're not doing any sort of concurrent operations today.

In the system I'm currently working on, L1I and L1D are part of the
processor core. L2 is part of the processor "cluster" (one or more
cores sharing that L2) and L3 is shared (and accessed concurrently on
L2 misses).


Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<lPWUM.1253$8hV9.529@fx44.iad>


https://www.novabbs.com/computers/article-flat.php?id=428&group=comp.sys.unisys#428

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx44.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: sco...@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Newsgroups: comp.sys.unisys
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
Lines: 64
Message-ID: <lPWUM.1253$8hV9.529@fx44.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Mon, 09 Oct 2023 17:33:05 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Mon, 09 Oct 2023 17:33:05 GMT
X-Received-Bytes: 3719
 by: Scott Lurndal - Mon, 9 Oct 2023 17:33 UTC

Lewis Cole <l_cole@juno.com> writes:
>
>So here's the second part of my reply to Mr. Fuld's last response to me.
>Considering how quickly this reply has grown, I may end up breaking it up into a third part as well.

>Just for giggles, though, let's say that that was then and this is now, meaning that the amount of time spent in the Exec is (and has been for some time) roughly the same as the figure that Mr. Lurndal cited ... so what?
>Mr. Lurndal apparently wants to argue that since the *AVERAGE* amount of time that some systems (presumably those whose OSs' names end in the letters "ix") spend in the supervisor is "only" 20%, that means that it isn't worth having a separate supervisor cache.

Ok, another 'toy os' comment from a dinosaur. Note that I spent a decade
working on the Burroughs MCP for the medium systems line. Modern unix
is 1000 times better than the MCP in almost every respect. Likewise,
it's better than OS2200.

>After all, his reasoning goes, if the entire time spent in the supervisor were eliminated, that would mean an increase of only 20% more time to user programs.

I simply pointed out that modern day systems spend very little time in
"supervisor" mode. Please don't try to guess my "reasoning".

Even the Dorado systems are running on standard intel hardware these days.

>
>Just on principle, this is a silly thing to say.

I never said that.

>
>It obviously incorrectly equates time with useful work and then compounds that by treating time spent in user code as important while time spent in supervisor mode as waste.

No. Time spent in supervisor mode is time spent not doing application
processing. Take a look at DPDK or ODP, for example, where much of the
work is moved from the OS to the application specifically to eliminate
the overhead of moving between user and supervisor/kernel modes.
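
A crude way to see the cost being avoided (a rough, platform-dependent
micro-measurement; Python interpreter overhead inflates both numbers, so only
the qualitative difference matters):

# Very rough illustration of user/kernel transition overhead.  Numbers vary
# widely by platform; treat the comparison as qualitative only.
import os, time

def ns_per_call(fn, n=200_000):
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1e9

print("user-space no-op call:", round(ns_per_call(lambda: None)), "ns")
print("getpid() system call :", round(ns_per_call(os.getpid)), "ns")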

>It shouldn't take much to realize that this is nonsense.
>Imagine a user program making a request to the OS to send a message somewhere that can't be delivered for some reason (e.g. an error or some programmatic limits being exceeded), the OS should return a bit more quickly than if it could send the message.
>So the user program should get a bit more time and the OS should get a bit less time.
>But no one in their right mind should automatically presume the user program should be able to do something more "useful" with the extra time it has.

Operating systems have been running more than one program at a time for
fifty years. There is always something productive that the processor can
be doing while one process is waiting, like scheduling another user process.

>
>So I'm going to end this part and go on to a new Part 3.

Don't bother on my account.

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<%0XUM.17483$xTV9.6644@fx39.iad>


https://www.novabbs.com/computers/article-flat.php?id=429&group=comp.sys.unisys#429

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!newsfeed.endofthelinebbs.com!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx39.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: sco...@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Newsgroups: comp.sys.unisys
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <106bf172-c240-42f7-b756-7de5a299e218n@googlegroups.com>
Lines: 29
Message-ID: <%0XUM.17483$xTV9.6644@fx39.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Mon, 09 Oct 2023 17:47:39 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Mon, 09 Oct 2023 17:47:39 GMT
X-Received-Bytes: 2169
 by: Scott Lurndal - Mon, 9 Oct 2023 17:47 UTC

Lewis Cole <l_cole@juno.com> writes:
>So here's the third part of my reply to Mr. Fuld's last response to me.
>

>So about 10 years ago, the boys and girls at ETH Zurich along with the boys and girls at Microsoft decided to try to come up with an OS based on a new model which became known as a "multi-kernel".
>The new OS they created, called Barrelfish, treated all CPUs as if they were networked even if they were on the same chip, sharing the same common memory.

Sorry, you're way behind. Unisys did this 1989-1997. The
system was called OPUS: a Chorus microkernel and a Unisys-developed
Unix subsystem distributed across 64 nodes (dual Pentium Pro processors,
each node with ethernet and scsi controllers) using the Intel Paragon
supercomputer wormhole routing backplane.

It even ran Mapper.

A decade later, some ex-Unisys folks and I started a company called 3Leaf
Systems, which built an ASIC to extend the cache coherency domain
for AMD/Intel processors across a fabric (Infiniband QDR at the time),
creating large, shared-memory NUMA systems. We were just a few
years too early.

Today, CXL has become an industry standard.

The processors I work on today have huge bandwidth
requirements (more than 80 gigabytes/second).

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<ug6gnr$1t5hb$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=430&group=comp.sys.unisys#430

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Date: Wed, 11 Oct 2023 08:58:51 -0700
Organization: A noiseless patient Spider
Lines: 133
Message-ID: <ug6gnr$1t5hb$1@dont-email.me>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com>
<ubdjto$2d085$1@dont-email.me>
<4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com>
<ubf86b$2nnjm$1@dont-email.me>
<f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com>
<ubo3b6$9hr9$1@dont-email.me>
<e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Wed, 11 Oct 2023 15:58:51 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="86f1718ad469f97e7f9c0f29065bea7b";
logging-data="2004523"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19VgXcWLGTENIzyElyMpJ0T7jay3GkpUAc="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:mdggqT/ftlq60c6KrcXkIo1oVoQ=
In-Reply-To: <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
Content-Language: en-US
 by: Stephen Fuld - Wed, 11 Oct 2023 15:58 UTC

On 10/8/2023 7:30 PM, Lewis Cole wrote:
>
> So here's the second part of my reply to Mr. Fuld's last response to me.
> Considering how quickly this reply has grown, I may end up breaking it up into a third part as well.
>
> On 8/15/2023 11:48 AM, Lewis Cole wrote:
>>> On Tuesday, August 15, 2023 at 12:06:53AM UTC-7, Stephen Fuld wrote:
>>> <snip>
>>>>> And even since the beginning of time
>>>>> (well ... since real live multi-tasking
>>>>> OS appeared), it has been obvious that
>>>>> processors tend to spend most of their
>>>>> time in supervisor mode (OS) code
>>>>> rather than in user (program) code.
>>>>
>>>> I don't want to get into an argument about caching with you, [...]
>>>
>>> I'm not sure what sort of argument you
>>> think I'm trying get into WRT caching,
>>> but I assume that we both are familiar
>>> enough with it so that there's really
>>> no argument to be had so your
>>> comment makes no sense to me.
>>>
>>>> [...] but I am sure that the percentage of time spent in supervisor mode is very
>>>> workload dependent.
>>>
>>> Agreed.
>>> But to the extent that the results of
>>> the SS keyin were ... useful .. in The
>>> Good Old Days at Roseville, I recall
>>> seeing something in excess of 80+
>>> percent of the time was spent in the
>>> Exec on regular basis.
>>
>> It took me a while to respond to this, as I had a memory, but had to
>> find the manual to check. You might have had some non-released code
>> running in Roseville, but the standard SS keyin doesn't show what
>> percentage of time is spent in Exec. To me, and as supported by the
>> evidence Scott gave, 80% seems way too high.
>
> So let me get this straight: You don't believe the 80% figure I cite because it seems too high to you and it didn't come a "standard" SS keyin of the time.
> Meanwhile, you believe the figure cited by Mr. Lurndal because it seems more believable even though it comes from a system that's almost certainly running a different workload than the one I'm referring to which was from decades ago.
> Did I get this right?
> Seriously?

Basically right. Add to that my belief that if OS
utilization were frequently 80%, then no customer would buy such a
system, as they would be losing 80% of it to the OS. And add the fact that
I saw lots of customer systems when I was active in the community and
never saw anything like it.

But see below for a possible resolution to this issue.

> What happened to the bit where *YOU* were saying about the amount of time spent in an OS was probably workload dependent?

I believe that. But 80% is way above any experience that I have had.

> And since when does the credibility of local code written in Roseville (by people who are likely responsible for the care and feeding of the Exec that the local code is being written for) some how become suspect just because said code didn't make it into a release ... whether it's output is consistent with what you believe or not?

Wow! I never doubted the credibility of the Roseville Exec programmers.
They were exceptionally good. But you presented no evidence that such
code ever existed. I posited it as a possibility to explain what you
recalled seeing. I actually doubt such code existed.

>
> FWIW, I stand by the statement about seeing CPU utilization in excess of 80+% on a regular basis because that is what I recall seeing.

Ahhh! Here is the key. In this sentence, you say *CPU utilization*,
not *OS utilization*. The CPU utilization includes everything but idle
time, specifically including Exec plus all user (Batch, Demand, TIP,
RT) time. BTW, this is readily calculated from the numbers in a
standard SS keyin. I certainly agree that this could be, and frequently
was, at 80% or higher. If you claim that you frequently saw CPU utilization
at 80%, I will readily believe you, and I suspect that Scott will too.
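
With invented percentages, the arithmetic looks like this (nothing below
comes from a real SS keyin):

# Invented percentages, purely to illustrate the arithmetic described above:
# CPU utilization counts everything except idle time, Exec included.
exec_pct, batch, demand, tip, rt, idle = 25.0, 20.0, 15.0, 20.0, 5.0, 15.0

cpu_utilization = 100.0 - idle          # = Exec + all user categories
assert cpu_utilization == exec_pct + batch + demand + tip + rt
print(f"CPU utilization = {cpu_utilization}%")   # 85%, even though Exec is only 25%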

> You can choose to believe me or not.
> (And I would like to point out that I don't appreciate being called a liar no matter how politely it is done.)

Again, Wow! I never called you a liar. To be pedantic, a lie is
something that the originator knows is incorrect. I never said you were
lying. At worst, I accused you of having a bad recollection, not bad
intentions, something which, as I get older, I suffer from more and more. :-(

>
> I cannot provide direct evidence to support my statement.
> I don't have any console listings or demand terminal session listings where I entered an "@@cons ss", for example.
> However, I can point to an old (~1981) video that clearly suggests that the 20% figure cited by Mr. Lurndal almost certainly doesn't apply to the Exec at least in some environments from way back when.
> And I can wave my arms at why it is most certainly possible for a much higher figure to show up, at least theoretically, even today.
>
> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>
> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >

Interesting video, thank you. BTW, the excessive time spent in memory
allocation (searching for the best fit, figuring out what to swap, and
minimizing fragmentation) was probably a motivating factor for going to a
paging system.

But note that the allocation time getting up to 33% (as he said, due to
larger memories being available and changing workload patterns) was such
a problem that they convened a task force to fix it, and it seems they put
in patches pretty quickly. Assuming their changes were successful, they
should have substantially reduced memory allocation time.

But all of this about utilization and workload changing is not relevant
to the original question of whether having two caches, one of size X
dedicated to Exec (supervisor) and one of size Y, dedicated to user use
is better than a single cache of size X+Y available to both.

Since, when in Exec mode, the effective cache size is smaller (it does not
include Y), and similarly for user code (it does not include X), performance
will be worse for both workloads. This is different from a separate I-cache
vs. D-cache, as all programs use both simultaneously.
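
A toy LRU comparison (invented sizes and working sets; it illustrates one
unbalanced-working-set scenario, not a general law about partitioned caches)
shows the effective-size penalty:

# Toy LRU comparison of one unified cache against a statically split pair.
# All sizes and working sets below are invented.
from collections import OrderedDict

def lru_hit_rate(capacity, trace):
    cache, hits = OrderedDict(), 0
    for line in trace:
        if line in cache:
            hits += 1
            cache.move_to_end(line)            # mark most recently used
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)      # evict least recently used
            cache[line] = True
    return hits / len(trace)

exec_lines = [("exec", i) for i in range(8)]   # small Exec working set
user_lines = [("user", i) for i in range(24)]  # larger user working set
trace = (exec_lines + user_lines) * 200        # interleave the two streams

unified = lru_hit_rate(32, trace)
split_exec = lru_hit_rate(16, [x for x in trace if x[0] == "exec"])
split_user = lru_hit_rate(16, [x for x in trace if x[0] == "user"])
n_exec = 200 * len(exec_lines)
split = (split_exec * n_exec + split_user * (len(trace) - n_exec)) / len(trace)

print(f"unified 32-line cache : {unified:.2f} hit rate")   # ~0.99: both working sets fit
print(f"split 16 + 16 caches  : {split:.2f} hit rate")     # ~0.25: the user side thrashes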

Feel free to respond, but ISTM that this thread has wandered so far from
its original topic that I am losing interest, and probably won't respond.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=431&group=comp.sys.unisys#431

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!1.us.feeder.erje.net!feeder.erje.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx01.iad.POSTED!not-for-mail
From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Message-ID: <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com> <ug6gnr$1t5hb$1@dont-email.me>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 194
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Wed, 11 Oct 2023 23:38:13 -0500
X-Received-Bytes: 10933
 by: David W Schroth - Thu, 12 Oct 2023 04:38 UTC

On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
<sfuld@alumni.cmu.edu.invalid> wrote:

I know I'm going to regret responding to all of this...

>On 10/8/2023 7:30 PM, Lewis Cole wrote:
>>
>> So here's the second part of my reply to Mr. Fuld's last response to me.
>> Considering how quickly this reply has grown, I may end up breaking it up into a third part as well.
>>
>> On 8/15/2023 11:48 AM, Lewis Cole wrote:
>>>> On Tuesday, August 15, 2023 at 12:06:53AM UTC-7, Stephen Fuld wrote:
>>>> <snip>
>>>>>> And even since the beginning of time
>>>>>> (well ... since real live multi-tasking
>>>>>> OS appeared), it has been obvious that
>>>>>> processors tend to spend most of their
>>>>>> time in supervisor mode (OS) code
>>>>>> rather than in user (program) code.
>>>>>
>>>>> I don't want to get into an argument about caching with you, [...]
>>>>
>>>> I'm not sure what sort of argument you
>>>> think I'm trying get into WRT caching,
>>>> but I assume that we both are familiar
>>>> enough with it so that there's really
>>>> no argument to be had so your
>>>> comment makes no sense to me.
>>>>
>>>>> [...] but I am sure that the percentage of time spent in supervisor mode is very
>>>>> workload dependent.
>>>>
>>>> Agreed.
>>>> But to the extent that the results of
>>>> the SS keyin were ... useful .. in The
>>>> Good Old Days at Roseville, I recall
>>>> seeing something in excess of 80+
>>>> percent of the time was spent in the
>>>> Exec on regular basis.
>>>

I have to believe your memory is conflating two different things. Not
surprising, given the timespan involved.

FWIW, the output from the SS keyin does not tell anyone how much time
was spent in the Exec. It tells the operator what percentage of the
possible Standard Units of Processing were consumed by Batch programs,
Demand programs, and TIP transactions. SUPs are *not* accumulated by
the Exec. Note that the amount of possible SUPs in a measuring
interval is not particularly well-defined.

I have a vague memory from when I first worked at Univac facilities in
Minnesota of seeing a sign describing how the system in the "Fishbowl"
had been instrumented to display performance numbers in real time (as
opposed to Real Time performance numbers). I don't recall ever seeing
the system/display, so it's possible that the forty-some odd years
misspent in the employment of Univac and Unisys have left me with a
false memory.

Otherwise, the only way to see how much time is spent in the Exec
involves the use of SIP/OSAM, which you were almost certainly not
using from the operator's console.
>>> It took me a while to respond to this, as I had a memory, but had to
>>> find the manual to check. You might have had some non-released code
>>> running in Roseville, but the standard SS keyin doesn't show what
>>> percentage of time is spent in Exec. To me, and as supported by the
>>> evidence Scott gave, 80% seems way too high.
>>

Well, 80% at a customer site *is* way too high. However, Mr. Cole was
at the development center, where torturing the hardware and software
was de rigueur. If he says he saw 80%, I tend to believe him, while
not agreeing that this was something typically seen at customer sites.
>> So let me get this straight: You don't believe the 80% figure I cite because it seems too high to you and it didn't come a "standard" SS keyin of the time.
>> Meanwhile, you believe the figure cited by Mr. Lurndal because it seems more believable even though it comes from a system that's almost certainly running a different workload than the one I'm referring to which was from decades ago.
>> Did I get this right?
>> Seriously?
>
>Basically right. Add to that the belief that I have that if OS
>utilization were frequently 80%, then no customer would buy such a
>system, as they would be losing 80% of it to the OS. And the fact that
>I saw lots of customer systems when I was active in the community and
>never saw anything like it.
>
>But see below for a possible resolution to this issue.
>
>
>> What happened to the bit where *YOU* were saying about the amount of time spent in an OS was probably workload dependent?
>
>I believe that. But 80% is way above any experience that I have had.
>

My personal recollection is there were benchmarks where the Exec
portion of TIP utilized 50% of the processor cycles. Given that
recollection dates back (starts counting on fingers, ends up counting
on toes) over twenty years ago, I wouldn't put a lot of stock in this
recollection.

>
>> And since when does the credibility of local code written in Roseville (by people who are likely responsible for the care and feeding of the Exec that the local code is being written for) some how become suspect just because said code didn't make it into a release ... whether it's output is consistent with what you believe or not?
>
>Wow! I never doubted the credibility of the Roseville Exec programmers.
> They were exceptionally good. But you presented no evidence that such
>code ever existed. I posited it as a possibility to explain what you
>recalled seeing. I actually doubt such code existed.
>

If I have not created the aforementioned "fishbowl" system out of whole
cloth, there were certainly modifications to both hardware and software
involved. Any such instrumentation probably got folded into the Internal
Performance Monitors and External Performance Monitors on later systems.

>
>
>>
>> FWIW, I stand by the statement about seeing CPU utilization in excess of 80+% on a regular basis because that is what I recall seeing.
>
>Ahhh! Here is the key. In this sentence, you say *CPU utilization*,
>not *OS utilization*. The CPU utilization includes everything but idle
>time, specifically including Exec plus all user (Batch, Demand, TIP,
>RT,) time. BTW, this is readily calculated from the numbers in a
>standard SS keyin. I certainly agree that this could, and frequently was
>at 80% or higher. If you claim that you frequently saw CPU utilization
>at 80%, I will readily believe you, and I suspect that Scott will too.
>

I am extremely skeptical of the assertion that one can calculate CPU
utilization from the output of the SS keyin. Possibly because I've
spent too much time digging around in the guts of the Continuous
Display/SS keyin output. I will acknowledge the possibility that
calculating the CPU utilization from the output of the SS keyin
*might* have been possible for EON (1108) and TON (1110) systems, but
I'm pretty sure it wouldn't work for any systems from the 1100/80
onward.

The take from the Performance Analysis folk in the Development Center
is that while one can calculate MIPS from SUPs, the result of such
calculations is not particularly accurate.
>
>
>> You can choose to believe me or not.
>> (And I would like to point out that I don't appreciate being called a liar no matter how politely it is done.)
>
>Again, Wow! I never called you a liar. To be pedantic, a lie is
>something that the originator knows is incorrect. I never said you were
>lying. At worst, I accused you of having bad recollection, not
>intention, which, as I get older, I suffer from more and more. :-(
>
>
>
>
>>
>> I cannot provide direct evidence to support my statement.
>> I don't have any console listings or demand terminal session listings where I entered an "@@cons ss", for example.
>> However, I can point to an old (~1981) video that clearly suggests that the 20% figure cited by Mr. Lurndal almost certainly doesn't apply to the Exec at least in some environments from way back when.
>> And I can wave my arms at why it is most certainly possible for a much higher figure to show up, at least theoretically, even today.
>>
>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>
>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>
>Interesting video, thank you. BTW, the excessive time spent in memory
>allocation searching for the best fit, figuring out what to swap and
>minimizing fragmentation were probably motivating factors for going to a
>paging system.

Probably not so much. While I wasn't there when paging was
architected, I was there to design and implement it. The motivating
factor is almost certainly called out in the following quote - "There
is only one mistake in computer design that is difficult to recover
from - not having enough address bits for memory addressing and memory
management."
>
>But note that the allocation times getting up to 33% (as he said due to
>larger memories being available and changing workload patterns) was such
>a problem that they convened a task force to fix it, and it seems put in
>patches pretty quickly. Assuming their changes were successful, it
>should have substantially reduced memory allocation time.
>
>
>But all of this about utilization and workload changing is not relevant
>to the original question of whether having two caches, one of size X
>dedicated to Exec (supervisor) and one of size Y, dedicated to user use
>is better than a single cache of size X+Y available to both.
>
>Since when in Exec mode, the effective cache size is smaller (does not
>include Y), and similarly for user, i.e. not including X, performance
>will be worse for both workloads. This is different from a separate I
>cache vs. D cache, as all programs use both simultaneously.
>
>Feel free to respond, but ISTM that this thread has wandered so far from
>its original topic that I am losing interest, and probably won't respond.


Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<a7a48a83-030b-4445-86ec-fb26dccc8bd9n@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=432&group=comp.sys.unisys#432

Newsgroups: comp.sys.unisys
X-Received: by 2002:adf:a347:0:b0:32d:640a:d127 with SMTP id d7-20020adfa347000000b0032d640ad127mr61782wrb.7.1697091934316;
Wed, 11 Oct 2023 23:25:34 -0700 (PDT)
X-Received: by 2002:a05:6870:8c32:b0:1e1:394:52a8 with SMTP id
ec50-20020a0568708c3200b001e1039452a8mr8358038oab.3.1697091933508; Wed, 11
Oct 2023 23:25:33 -0700 (PDT)
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!proxad.net!feeder1-2.proxad.net!209.85.128.87.MISMATCH!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.sys.unisys
Date: Wed, 11 Oct 2023 23:25:32 -0700 (PDT)
In-Reply-To: <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2601:602:c080:3f60:c92:9014:f547:710e;
posting-account=DycLBQoAAACVeYHALMkZoo5C926pUXDC
NNTP-Posting-Host: 2601:602:c080:3f60:c92:9014:f547:710e
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com>
<ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com>
<ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com>
<ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
<ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <a7a48a83-030b-4445-86ec-fb26dccc8bd9n@googlegroups.com>
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
From: l_c...@juno.com (Lewis Cole)
Injection-Date: Thu, 12 Oct 2023 06:25:34 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Lewis Cole - Thu, 12 Oct 2023 06:25 UTC

On Wednesday, October 11, 2023 at 9:36:53 PM UTC-7, David W Schroth wrote:
> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
> <sf...@alumni.cmu.edu.invalid> wrote:
>
> I know I'm going to regret responding to all of this...
> >On 10/8/2023 7:30 PM, Lewis Cole wrote:
> >>
> >> So here's the second part of my reply to Mr. Fuld's last response to me.
> >> Considering how quickly this reply has grown, I may end up breaking it up into a third part as well.
> >>
> >> On 8/15/2023 11:48 AM, Lewis Cole wrote:
> >>>> On Tuesday, August 15, 2023 at 12:06:53AM UTC-7, Stephen Fuld wrote:
> >>>> <snip>
> >>>>>> And even since the beginning of time
> >>>>>> (well ... since real live multi-tasking
> >>>>>> OS appeared), it has been obvious that
> >>>>>> processors tend to spend most of their
> >>>>>> time in supervisor mode (OS) code
> >>>>>> rather than in user (program) code.
> >>>>>
> >>>>> I don't want to get into an argument about caching with you, [...]
> >>>>
> >>>> I'm not sure what sort of argument you
> >>>> think I'm trying get into WRT caching,
> >>>> but I assume that we both are familiar
> >>>> enough with it so that there's really
> >>>> no argument to be had so your
> >>>> comment makes no sense to me.
> >>>>
> >>>>> [...] but I am sure that the percentage of time spent in supervisor mode is very
> >>>>> workload dependent.
> >>>>
> >>>> Agreed.
> >>>> But to the extent that the results of
> >>>> the SS keyin were ... useful .. in The
> >>>> Good Old Days at Roseville, I recall
> >>>> seeing something in excess of 80+
> >>>> percent of the time was spent in the
> >>>> Exec on regular basis.
> >>>
> I have to believe your memory is conflating two different things. Not
> surprising, given the timespan involved.
>
> FWIW, the output from the SS keyin does not tell anyone how much time
> was spent in the Exec. It tells the operator what percentage of the
> possible Standard Units of Processing were consumed by Batch programs,
> Demand programs, and TIP transactions. SUPs are *not* accumulated by
> the Exec. Note that the amount of possible SUPs in a measuring
> interval is not particularly well-defined.
>
> I have a vague memory from when I first worked at Univac facilities in
> Minnesota of seeing a sign describing how the system in the "Fishbowl"
> had been instrumented to display performance numbers in real time (as
> opposed to Real Time performance numbers). I don't recall ever seeing
> the system/display, so it's possible that the forty-some odd years
> misspent in the employment of Univac and Unisys have left me with a
> false memory.
>
> Otherwise, the only way to see how much time is spent in the Exec
> involves the use of SIP/OSAM, which you were almost certainly not
> using from the operator's console.

If Mr. Schroth says that I'm full of shit WRT being able to determine the amount of time spent in the Exec via an SS keyin, then I accept that I am full of shit.
I am/was wrong.
Thank you for the correction, Mr. Schroth.

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<2hdhii1bts1p8cqlee9vsbmhsiiihbjc8e@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=433&group=comp.sys.unisys#433

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx36.iad.POSTED!not-for-mail
From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Message-ID: <2hdhii1bts1p8cqlee9vsbmhsiiihbjc8e@4ax.com>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com> <ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com> <a7a48a83-030b-4445-86ec-fb26dccc8bd9n@googlegroups.com>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 88
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Thu, 12 Oct 2023 22:26:37 -0500
X-Received-Bytes: 5534
 by: David W Schroth - Fri, 13 Oct 2023 03:26 UTC

On Wed, 11 Oct 2023 23:25:32 -0700 (PDT), Lewis Cole <l_cole@juno.com>
wrote:

>On Wednesday, October 11, 2023 at 9:36:53 PM UTC-7, David W Schroth wrote:
>> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
>> <sf...@alumni.cmu.edu.invalid> wrote:
>>
>> I know I'm going to regret responding to all of this...
>> >On 10/8/2023 7:30 PM, Lewis Cole wrote:
>> >>
>> >> So here's the second part of my reply to Mr. Fuld's last response to me.
>> >> Considering how quickly this reply has grown, I may end up breaking it up into a third part as well.
>> >>
>> >> On 8/15/2023 11:48 AM, Lewis Cole wrote:
>> >>>> On Tuesday, August 15, 2023 at 12:06:53AM UTC-7, Stephen Fuld wrote:
>> >>>> <snip>
>> >>>>>> And even since the beginning of time
>> >>>>>> (well ... since real live multi-tasking
>> >>>>>> OS appeared), it has been obvious that
>> >>>>>> processors tend to spend most of their
>> >>>>>> time in supervisor mode (OS) code
>> >>>>>> rather than in user (program) code.
>> >>>>>
>> >>>>> I don't want to get into an argument about caching with you, [...]
>> >>>>
>> >>>> I'm not sure what sort of argument you
>> >>>> think I'm trying get into WRT caching,
>> >>>> but I assume that we both are familiar
>> >>>> enough with it so that there's really
>> >>>> no argument to be had so your
>> >>>> comment makes no sense to me.
>> >>>>
>> >>>>> [...] but I am sure that the percentage of time spent in supervisor mode is very
>> >>>>> workload dependent.
>> >>>>
>> >>>> Agreed.
>> >>>> But to the extent that the results of
>> >>>> the SS keyin were ... useful .. in The
>> >>>> Good Old Days at Roseville, I recall
>> >>>> seeing something in excess of 80+
>> >>>> percent of the time was spent in the
>> >>>> Exec on regular basis.
>> >>>
>> I have to believe your memory is conflating two different things. Not
>> surprising, given the timespan involved.
>>
>> FWIW, the output from the SS keyin does not tell anyone how much time
>> was spent in the Exec. It tells the operator what percentage of the
>> possible Standard Units of Processing were consumed by Batch programs,
>> Demand programs, and TIP transactions. SUPs are *not* accumulated by
>> the Exec. Note that the amount of possible SUPs in a measuring
>> interval is not particularly well-defined.
>>
>> I have a vague memory from when I first worked at Univac facilities in
>> Minnesota of seeing a sign describing how the system in the "Fishbowl"
>> had been instrumented to display performance numbers in real time (as
>> opposed to Real Time performance numbers). I don't recall ever seeing
>> the system/display, so it's possible that the forty-some odd years
>> misspent in the employment of Univac and Unisys have left me with a
>> false memory.
>>
>> Otherwise, the only way to see how much time is spent in the Exec
>> involves the use of SIP/OSAM, which you were almost certainly not
>> using from the operator's console.
>
>If Mr. Schroth says that I'm full of shit WRT being able to determine the amount of time spent in the Exec via an SS keyin, then I accept that I am full of shit.
>I am/was wrong.
>Thank you for the correction, Mr. Schroth.

I'm pretty sure I didn't say you were full of shit.

I thought some more about this, and I suspect I'm giving too much
weight to my experience with more recent flavors of the architecture.
I think the key is "how closely does the 2200's accounting measure
(SUPs) match up with the amount of wall clock time?"
For those systems that use Quantum Timer ticks to generate SUPs, the
answer is "Not very closely at all." A load instruction will cost one
Quantum Timer tick, regardless of whether the load instruction gets
the data from the Level 1 cache or from remote memory. The wall clock
time of the instruction will be greatly affected by where the data is
retrieved from.
At the other end of the spectrum, I suspect that accounting numbers
were generated by subtracting start times from end times. Since all
memory references cost roughly the same amount of wall clock time, I
would expect that the output from the SS keyin could actually provide
a reasonable estimate of how much time was spent in the Exec. Since
neither you nor I is old enough to remember how Exec 8 did accounting,
this will remain somewhat speculative.
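
A tiny worked example of that divergence (latencies invented): charging one
tick per load makes two very different programs look identical to the
accounting.

# Invented latencies: the accounting charges one Quantum Timer tick per load
# no matter what, but the wall-clock cost depends on where the data comes from.
L1_HIT_NS, REMOTE_NS = 2, 300

def run(loads, l1_hit_fraction):
    sups = loads                                        # 1 tick per load instruction
    wall_ns = loads * (l1_hit_fraction * L1_HIT_NS +
                       (1 - l1_hit_fraction) * REMOTE_NS)
    return sups, wall_ns

for name, hit in (("cache-friendly", 0.99), ("cache-hostile ", 0.50)):
    sups, wall = run(1_000_000, hit)
    print(f"{name}: {sups:,} SUPs, {wall / 1e6:.1f} ms wall-clock")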

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<ujitv6$uifo$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=466&group=comp.sys.unisys#466

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Date: Tue, 21 Nov 2023 10:47:02 -0800
Organization: A noiseless patient Spider
Lines: 70
Message-ID: <ujitv6$uifo$1@dont-email.me>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com>
<ubdjto$2d085$1@dont-email.me>
<4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com>
<ubf86b$2nnjm$1@dont-email.me>
<f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com>
<ubo3b6$9hr9$1@dont-email.me>
<e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
<ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 21 Nov 2023 18:47:02 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="e6b6de7b86e3c5993f9751efc26adf2f";
logging-data="1001976"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+geOI+SNfRkAR6QNfEPNqYiz0kSM5gvVM="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:s+DHLbtkropove+9XEwVBN2cMrs=
Content-Language: en-US
In-Reply-To: <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
 by: Stephen Fuld - Tue, 21 Nov 2023 18:47 UTC

On 10/11/2023 9:38 PM, David W Schroth wrote:
> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
> <sfuld@alumni.cmu.edu.invalid> wrote:
>
> I know I'm going to regret responding to all of this...

I hope not. I am sure I am not alone in valuing your contributions here.

big snip

>>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>>
>>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>>
>> Interesting video, thank you. BTW, the excessive time spent in memory
>> allocation searching for the best fit, figuring out what to swap and
>> minimizing fragmentation were probably motivating factors for going to a
>> paging system.
>
> Probably not so much. While I wasn't there when paging was
> architected, I was there to design and implement it. The motivating
> factor is almost certainly called out in the following quote - "There
> is only one mistake in computer design that is difficult to recover
> from - not having enough address bits for memory addressing and memory
> management."

While I absolutely agree with the quotation, with all due respect, I
disagree that it was the motivation for implementing paging. A caveat: I
was not involved at all in either the architecture or the
implementation. My argument is based primarily on logical analysis.

The reason that the ability for a program to address lots of memory
(i.e. more address bits) wasn't a factor in the decision is that Univac
already had that problem solved!

I remember a conversation I had with Ron Smith at a USE conference,
sometime probably in the late 1970s or early 1980s, when IBM had
implemented virtual memory/paging in the S/370 line. I can't remember
the exact quotation, but it was essentially that paging was sort of like
multibanking, but turned "inside out".

That is, with virtual memory, multiple different, potentially large,
user program addresses get mapped to the same physical memory at
different times, whereas with multibanking, multiple smaller user
program addresses (i.e. bank relative addresses) get mapped, at
different times (i.e. when the bank is pointed at), to the same physical
memory. In other words, both virtual memory/paging and multibanking
break the identity of program relative addresses with physical memory
addresses.

Since you can have a large number (hundreds or thousands) of banks
defined within a program, by pointing different banks at different
times, you can address a huge amount of memory (far larger than any
contemplated physical memory), and the limitation expressed in that
quotation doesn't apply.
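
To make the "inside out" comparison concrete, a schematic sketch with
invented sizes (not actual 2200 or S/370 parameters) of the two kinds of
indirection:

# Schematic only: invented sizes, not actual 1100/2200 or S/370 parameters.
# Both mechanisms interpose an OS-controlled table between a program-relative
# address and a physical address; they differ in the unit of mapping.

# Multibanking: a based bank is (physical base, length); address = base + offset.
banks = {3: (0o4000000, 0o200000), 7: (0o1200000, 0o100000)}   # bank# -> (base, length)
def bank_translate(bank_num, offset):
    base, length = banks[bank_num]
    if offset >= length:
        raise MemoryError("offset outside the bank")
    return base + offset

# Paging: fixed-size pages; address = frame * page_size + offset within the page.
PAGE = 4096
page_table = {0: 17, 1: 3, 2: 812}          # virtual page number -> physical frame
def page_translate(va):
    vpn, offset = divmod(va, PAGE)
    return page_table[vpn] * PAGE + offset

print(oct(bank_translate(3, 0o1234)))       # 0o4001234
print(hex(page_translate(0x1A2B)))          # page 1 -> frame 3: 0x3a2b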

Each solution (paging and multibanking) has advantages and
disadvantages, and one can argue the relative merits of the two
solutions (we can discuss that further if anyone cares), but they both
solve the problem, so solving that problem shouldn't/couldn't have been
the motivation for Unisys implementing paging in the 2200s.

Obviously, I invite comments/questions/arguments, etc.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=468&group=comp.sys.unisys#468

Newsgroups: comp.sys.unisys
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx47.iad.POSTED!not-for-mail
From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Message-ID: <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com> <ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com> <ujitv6$uifo$1@dont-email.me>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 97
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Tue, 21 Nov 2023 19:36:08 -0600
X-Received-Bytes: 5872
 by: David W Schroth - Wed, 22 Nov 2023 01:36 UTC

On Tue, 21 Nov 2023 10:47:02 -0800, Stephen Fuld
<sfuld@alumni.cmu.edu.invalid> wrote:

>On 10/11/2023 9:38 PM, David W Schroth wrote:
>> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>
>> I know I'm going to regret responding to all of this...
>
>I hope not. I am sure I am not alone in valuing your contributions here.
>
>big snip
>
>
>>>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>>>
>>>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>>>
>>> Interesting video, thank you. BTW, the excessive time spent in memory
>>> allocation searching for the best fit, figuring out what to swap and
>>> minimizing fragmentation were probably motivating factors for going to a
>>> paging system.
>>
>> Probably not so much. While I wasn't there when paging was
>> architected, I was there to design and implement it. The motivating
>> factor is almost certainly called out in the following quote - "There
>> is only one mistake in computer design that is difficult to recover
>> from - not having enough address bits for memory addressing and memory
>> management."
>
>While I absolutely agree with the quotation, with all due respect, I
>disagree that it was the motivation for implementing paging. A caveat, I
>was not involved at all in either the architecture nor the
>implementation. My argument is based primarily on logical analysis.
>
>The reason that the ability for a program to address lots of memory
>(i.e. more address bits) wasn't a factor in the decision is that Univac
>already that problem solved!
>
>I remember a conversation I had with Ron Smith at a Use conference
>sometime probably in the late 1970s or early 1980s, when IBM had
>implemented virtual memory/paging in the S/370 line. I can't remember
>the exact quotation, but it was essentially that paging was sort of like
>multibanking, but turned "inside out".
>
>That is, with virtual memory, multiple different, potentially large,
>user program addresses get mapped to the same physical memory at
>different times, whereas with multibanking, multiple smaller user
>program addresses (i.e. bank relative addresses), get mapped at
>different times (i.e. when the bank was pointed), to the same physical
>memory. In other words, both virtual memory/paging and multibanking
>break the identity of program relative addresses with physical memory
>addresses.
>
>Since you can have a large number (hundreds or thousands) of banks
>defined within a program, by pointing different banks at different
>times, you can address a huge amount of memory (far larger than any
>contemplated physical memory), and the limitation expressed in that
>quotation doesn't apply.

I believe there are a couple of problems with that view.

The Exec depended very much on absolute addressing when managing
memory, which limited the systems to 2 ** 24 words of physical memory,
which was Not Enough.

And the amount of virtual space available to the system was limited by
the amount of swapfile space which, if I recall correctly, was limited
to 03400000000 words (less than half a GiW, although I am too lazy to
figure out the exact amount).
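
Taking the octal figure above at face value, a quick check gives the
exact amount - about 470 million words, i.e. 0.4375 GiW:

#include <stdio.h>

int main(void)
{
    unsigned long words = 03400000000UL;           /* octal literal      */
    printf("%lu words = %.4f GiW\n", words,        /* 469,762,048 words  */
           (double)words / (1UL << 30));           /* = 0.4375 GiW       */
    return 0;
}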

I think both of those problems could have been addressed in a swapping
system by applying some of the paging design (2200 paging supplied one
or more Working Set file(s) for each subsystem), but swapping Large
Banks (2**24 words max) or Very Large Banks (2**30 words max) would
take too long and consume too much I/O bandwidth. I grant it would be
interesting to follow up on Nick McLaren's idea of using base and
bounds with swapping on systems with a lot of memory, but my
experiences fixing 2200 paging bugs suggest (to me) that the end
result would not be as satisfactory as Nick thought (even though he's
probably much smarter than me).
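
Rough numbers make the bandwidth point; the transfer rate below is an
assumed round figure, not a real 2200 channel spec, and a 36-bit word is
counted as a nominal 4.5 bytes:

#include <stdio.h>

int main(void)
{
    const double bytes_per_word = 4.5;                 /* nominal 36-bit word */
    const double mb_per_sec     = 10.0;                /* assumed I/O rate    */
    const unsigned long sizes[] = { 1UL << 24, 1UL << 30 };  /* LB, VLB max   */

    for (int i = 0; i < 2; i++) {
        double mbytes = sizes[i] * bytes_per_word / 1e6;
        printf("%lu words ~ %.0f MB ~ %.0f s at %.0f MB/s\n",
               sizes[i], mbytes, mbytes / mb_per_sec, mb_per_sec);
    }
    return 0;
}
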
>
>Each solution (paging and multi banking) has advantages and
>disadvantages, and one can argue the relative merits of the two
>solutions (we can discuss that further if anyone cares), they both solve
>the problem, so solving that problem shouldn't/couldn't be the
>motivation for Unisys implementing paging in 2200s.
>
>Obviously, I invite comments/questions/arguments, etc.

My views are colored by my suspicion that I am one of the very few
people still working who has worked down in the bowels of memory
management by swapping and memory management by paging.

Regards,

David W. Schroth

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<zyO7N.128776$Ee89.126555@fx17.iad>


https://www.novabbs.com/computers/article-flat.php?id=469&group=comp.sys.unisys#469

Path: i2pn2.org!i2pn.org!newsfeed.endofthelinebbs.com!panix!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx17.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: sco...@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Newsgroups: comp.sys.unisys
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com> <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com> <ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com> <ujitv6$uifo$1@dont-email.me> <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>
Lines: 101
Message-ID: <zyO7N.128776$Ee89.126555@fx17.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Thu, 23 Nov 2023 20:25:03 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Thu, 23 Nov 2023 20:25:03 GMT
X-Received-Bytes: 5993
 by: Scott Lurndal - Thu, 23 Nov 2023 20:25 UTC

David W Schroth <davidschroth@harrietmanor.com> writes:
>On Tue, 21 Nov 2023 10:47:02 -0800, Stephen Fuld
><sfuld@alumni.cmu.edu.invalid> wrote:
>
>>On 10/11/2023 9:38 PM, David W Schroth wrote:
>>> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
>>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>>
>>> I know I'm going to regret responding to all of this...
>>
>>I hope not. I am sure I am not alone in valuing your contributions here.
>>
>>big snip
>>
>>
>>>>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>>>>
>>>>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>>>>
>>>> Interesting video, thank you. BTW, the excessive time spent in memory
>>>> allocation searching for the best fit, figuring out what to swap and
>>>> minimizing fragmentation were probably motivating factors for going to a
>>>> paging system.
>>>
>>> Probably not so much. While I wasn't there when paging was
>>> architected, I was there to design and implement it. The motivating
>>> factor is almost certainly called out in the following quote - "There
>>> is only one mistake in computer design that is difficult to recover
>>> from - not having enough address bits for memory addressing and memory
>>> management."
>>
>>While I absolutely agree with the quotation, with all due respect, I
>>disagree that it was the motivation for implementing paging. A caveat, I
>>was not involved at all in either the architecture nor the
>>implementation. My argument is based primarily on logical analysis.
>>
>>The reason that the ability for a program to address lots of memory
>>(i.e. more address bits) wasn't a factor in the decision is that Univac
>>already that problem solved!
>>
>>I remember a conversation I had with Ron Smith at a Use conference
>>sometime probably in the late 1970s or early 1980s, when IBM had
>>implemented virtual memory/paging in the S/370 line. I can't remember
>>the exact quotation, but it was essentially that paging was sort of like
>>multibanking, but turned "inside out".
>>
>>That is, with virtual memory, multiple different, potentially large,
>>user program addresses get mapped to the same physical memory at
>>different times, whereas with multibanking, multiple smaller user
>>program addresses (i.e. bank relative addresses), get mapped at
>>different times (i.e. when the bank was pointed), to the same physical
>>memory. In other words, both virtual memory/paging and multibanking
>>break the identity of program relative addresses with physical memory
>>addresses.
>>
>>Since you can have a large number (hundreds or thousands) of banks
>>defined within a program, by pointing different banks at different
>>times, you can address a huge amount of memory (far larger than any
>>contemplated physical memory), and the limitation expressed in that
>>quotation doesn't apply.
>
>I believe there are a couple of problems with that view.
>
>The Exec depended very much on absolute addressing when managing
>mamory, which limited the systems to 2 ** 24 words of physical memory,
>which was Not Enough.
>
>And the amount of virtual space available to the system was limited by
>the amount of swapfile space which, if I recall correctly, was limited
>to 03400000000 words (less tha half a GiW, although I am too lazy to
>figure out the exact amount).
>
>I think both of those problems could have been addressed in a swapping
>system by applying some of the paging design (2200 paging supplied one
>or more Working Set file(s) for each subsystem), but swapping Large
>Banks (2**24 words max) or Very Large Banks (2**30 words max) would
>take too long and consume too much I/O bandwidth.

> I grant it would be
>interesting to follow up on Nick McLaren's idea of using base and
>bounds with swapping on systems with a lot of memory, but my
>experiences fixing 2200 paging bugs suggests (to me) that the end
>result would not be as satisfactory as Nick thought (even though he's
>probably much smarter than me).

My experiences with MCP/VS on the Burroughs side, which
supported swapping (rollout/rollin) contiguous regions
of 1000-digit "pages" showed that checkerboarding of
memory was inevitable, leading to OS defragmentation
overhead and/or excessive swapping in any kind of
multiprogramming environment.

Swapping to solid state disk ameliorated the performance
overhead somewhat, but at a price.
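
A toy illustration of the checkerboarding effect - nothing here reflects
MCP/VS internals, it just shows contiguous first-fit allocation leaving
free space split into holes too small for the next request:

#include <stdio.h>
#include <string.h>

#define MEM 64                          /* memory in arbitrary units     */
static char mem[MEM];                   /* '.' = free, letter = owner    */

static int alloc(char id, int n)        /* first-fit contiguous allocate */
{
    for (int i = 0; i + n <= MEM; i++) {
        int j = 0;
        while (j < n && mem[i + j] == '.') j++;
        if (j == n) { memset(mem + i, id, n); return i; }
    }
    return -1;                          /* no single hole big enough     */
}

static void release(char id)
{
    for (int i = 0; i < MEM; i++)
        if (mem[i] == id) mem[i] = '.';
}

int main(void)
{
    memset(mem, '.', MEM);
    alloc('A', 12); alloc('B', 8); alloc('C', 20); alloc('D', 14); alloc('E', 10);
    release('B'); release('D');         /* two separated holes: 8 and 14 */
    printf("%.*s\n", MEM, mem);
    printf("alloc F(16) -> %d  (22 units free, largest hole only 14)\n",
           alloc('F', 16));
    return 0;
}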

>My views are colored by my suspicion that I am one of the very few
>people still working who has worked down in the bowels of memory
>management by swapping and memory management by paging.

I'll take paging over any segmentation scheme any day.

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<hjv1mih96pt310ttob6idjl1jrvp1ivr3d@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=470&group=comp.sys.unisys#470

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx12.iad.POSTED!not-for-mail
From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Message-ID: <hjv1mih96pt310ttob6idjl1jrvp1ivr3d@4ax.com>
References: <ubdjto$2d085$1@dont-email.me> <4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com> <ubf86b$2nnjm$1@dont-email.me> <f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com> <ubo3b6$9hr9$1@dont-email.me> <e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com> <ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com> <ujitv6$uifo$1@dont-email.me> <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com> <zyO7N.128776$Ee89.126555@fx17.iad>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 114
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Fri, 24 Nov 2023 13:54:07 -0600
X-Received-Bytes: 6661
 by: David W Schroth - Fri, 24 Nov 2023 19:54 UTC

On Thu, 23 Nov 2023 20:25:03 GMT, scott@slp53.sl.home (Scott Lurndal)
wrote:

>David W Schroth <davidschroth@harrietmanor.com> writes:
>>On Tue, 21 Nov 2023 10:47:02 -0800, Stephen Fuld
>><sfuld@alumni.cmu.edu.invalid> wrote:
>>
>>>On 10/11/2023 9:38 PM, David W Schroth wrote:
>>>> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
>>>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>>>
>>>> I know I'm going to regret responding to all of this...
>>>
>>>I hope not. I am sure I am not alone in valuing your contributions here.
>>>
>>>big snip
>>>
>>>
>>>>>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>>>>>
>>>>>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>>>>>
>>>>> Interesting video, thank you. BTW, the excessive time spent in memory
>>>>> allocation searching for the best fit, figuring out what to swap and
>>>>> minimizing fragmentation were probably motivating factors for going to a
>>>>> paging system.
>>>>
>>>> Probably not so much. While I wasn't there when paging was
>>>> architected, I was there to design and implement it. The motivating
>>>> factor is almost certainly called out in the following quote - "There
>>>> is only one mistake in computer design that is difficult to recover
>>>> from - not having enough address bits for memory addressing and memory
>>>> management."
>>>
>>>While I absolutely agree with the quotation, with all due respect, I
>>>disagree that it was the motivation for implementing paging. A caveat, I
>>>was not involved at all in either the architecture nor the
>>>implementation. My argument is based primarily on logical analysis.
>>>
>>>The reason that the ability for a program to address lots of memory
>>>(i.e. more address bits) wasn't a factor in the decision is that Univac
>>>already that problem solved!
>>>
>>>I remember a conversation I had with Ron Smith at a Use conference
>>>sometime probably in the late 1970s or early 1980s, when IBM had
>>>implemented virtual memory/paging in the S/370 line. I can't remember
>>>the exact quotation, but it was essentially that paging was sort of like
>>>multibanking, but turned "inside out".
>>>
>>>That is, with virtual memory, multiple different, potentially large,
>>>user program addresses get mapped to the same physical memory at
>>>different times, whereas with multibanking, multiple smaller user
>>>program addresses (i.e. bank relative addresses), get mapped at
>>>different times (i.e. when the bank was pointed), to the same physical
>>>memory. In other words, both virtual memory/paging and multibanking
>>>break the identity of program relative addresses with physical memory
>>>addresses.
>>>
>>>Since you can have a large number (hundreds or thousands) of banks
>>>defined within a program, by pointing different banks at different
>>>times, you can address a huge amount of memory (far larger than any
>>>contemplated physical memory), and the limitation expressed in that
>>>quotation doesn't apply.
>>
>>I believe there are a couple of problems with that view.
>>
>>The Exec depended very much on absolute addressing when managing
>>mamory, which limited the systems to 2 ** 24 words of physical memory,
>>which was Not Enough.
>>
>>And the amount of virtual space available to the system was limited by
>>the amount of swapfile space which, if I recall correctly, was limited
>>to 03400000000 words (less tha half a GiW, although I am too lazy to
>>figure out the exact amount).
>>
>>I think both of those problems could have been addressed in a swapping
>>system by applying some of the paging design (2200 paging supplied one
>>or more Working Set file(s) for each subsystem), but swapping Large
>>Banks (2**24 words max) or Very Large Banks (2**30 words max) would
>>take too long and consume too much I/O bandwidth.
>
>> I grant it would be
>>interesting to follow up on Nick McLaren's idea of using base and
>>bounds with swapping on systems with a lot of memory, but my
>>experiences fixing 2200 paging bugs suggests (to me) that the end
>>result would not be as satisfactory as Nick thought (even though he's
>>probably much smarter than me).
>
>My experiences with MCP/VS on the Burroughs side, which
>supported swapping (rollout/rollin) contiguous regions
>of 1000 digit "pages" showed that checkerboarding of
>memory was inevitable, leading to OS defragmentation
>overhead and/or excessive swapping in any kind of
>multiprogramming environment.
>
>Swapping to solid state disk ameliorated the performance
>overhead somewhat, but at a price.
>
>
>>My views are colored by my suspicion that I am one of the very few
>>people still working who has worked down in the bowels of memory
>>management by swapping and memory management by paging.
>
>I'll take paging over any segmentation scheme anyday.

And my experience with OS2200 memory management leaves me preferring
both paging and segmentation, each doing what it does best.

Paging for mapping virtual to physical and getting chunks of virtual
space into physical memory and out to backing store.

And segmentation for process/thread isolation and access control.

Which is how they have been used in OS2200 since the early '90s...
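
A schematic of that division of labor, with made-up structures (this is
not the 2200 bank descriptor or page table format): the segment level
does isolation and access control, the page level does the
virtual-to-physical mapping and talks to backing store.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

enum access { READ = 1, WRITE = 2, EXEC = 4 };

typedef struct {                     /* segment: protection only          */
    uint32_t limit;                  /* largest legal offset              */
    unsigned perms;                  /* READ/WRITE/EXEC allowed           */
    uint64_t va_base;                /* where it sits in virtual space    */
} segment_t;

typedef struct { uint64_t frame; bool present; } pte_t;

/* Stage 1: segmentation -- reject the access or yield a virtual address. */
static bool seg_check(const segment_t *s, uint32_t off, unsigned want,
                      uint64_t *va)
{
    if (off > s->limit || (want & ~s->perms))
        return false;                /* isolation / access-control fault  */
    *va = s->va_base + off;
    return true;
}

/* Stage 2: paging -- map virtual to physical; a missing page would be
   read in from its working-set file (fault handling elided). */
static uint64_t page_walk(const pte_t *pt, uint64_t va)
{
    uint64_t vpn = va >> 12, off = va & 0xfff;
    if (!pt[vpn].present) { /* page fault */ }
    return (pt[vpn].frame << 12) | off;
}

int main(void)
{
    segment_t code = { .limit = 0x1fff, .perms = READ | EXEC, .va_base = 0x40000 };
    static pte_t pt[256];            /* covers 1 MiW of toy virtual space */
    pt[0x40] = (pte_t){ .frame = 7, .present = true };

    uint64_t va, pa = 0;
    if (seg_check(&code, 0x10, EXEC, &va))      /* passes protection      */
        pa = page_walk(pt, va);                 /* then paging maps it    */
    printf("physical: %#llx\n", (unsigned long long)pa);   /* 0x7010 */
    return 0;
}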

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<uk3fjt$3vur2$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=473&group=comp.sys.unisys#473

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Date: Mon, 27 Nov 2023 17:26:21 -0800
Organization: A noiseless patient Spider
Lines: 126
Message-ID: <uk3fjt$3vur2$1@dont-email.me>
References: <08eedc84-339f-47d8-a2c1-65a076a53ed7n@googlegroups.com>
<ubdjto$2d085$1@dont-email.me>
<4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com>
<ubf86b$2nnjm$1@dont-email.me>
<f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com>
<ubo3b6$9hr9$1@dont-email.me>
<e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
<ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
<ujitv6$uifo$1@dont-email.me> <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 28 Nov 2023 01:26:21 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="6ab5e1bccaefdaf17b623068b07b8318";
logging-data="4193122"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19eTFHAmT/pbB3BYguizLFKUyPECp9ebng="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:YcPvwOk9hweP++0TfLjhK/NQY4E=
In-Reply-To: <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>
Content-Language: en-US
 by: Stephen Fuld - Tue, 28 Nov 2023 01:26 UTC

On 11/21/2023 5:36 PM, David W Schroth wrote:
> On Tue, 21 Nov 2023 10:47:02 -0800, Stephen Fuld
> <sfuld@alumni.cmu.edu.invalid> wrote:
>
>> On 10/11/2023 9:38 PM, David W Schroth wrote:
>>> On Wed, 11 Oct 2023 08:58:51 -0700, Stephen Fuld
>>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>>
>>> I know I'm going to regret responding to all of this...
>>
>> I hope not. I am sure I am not alone in valuing your contributions here.
>>
>> big snip
>>
>>
>>>>> So the video I want to draw your attention to is entitled, "19th Annual Sperry Univac Spring Technical Symposium - 'Proposed Memory Management Techniques for Sperry Univac 1100 Series Systems'", and can be found here:
>>>>>
>>>>> < https://digital.hagley.org/VID_1985261_B110_ID05?solr_nav%5Bid%5D=88d187d912cfce1a5ad1&solr_nav%5Bpage%5D=0&solr_nav%5Boffset%5D=2 >
>>>>
>>>> Interesting video, thank you. BTW, the excessive time spent in memory
>>>> allocation searching for the best fit, figuring out what to swap and
>>>> minimizing fragmentation were probably motivating factors for going to a
>>>> paging system.
>>>
>>> Probably not so much. While I wasn't there when paging was
>>> architected, I was there to design and implement it. The motivating
>>> factor is almost certainly called out in the following quote - "There
>>> is only one mistake in computer design that is difficult to recover
>>> from - not having enough address bits for memory addressing and memory
>>> management."
>>
>> While I absolutely agree with the quotation, with all due respect, I
>> disagree that it was the motivation for implementing paging. A caveat, I
>> was not involved at all in either the architecture nor the
>> implementation. My argument is based primarily on logical analysis.
>>
>> The reason that the ability for a program to address lots of memory
>> (i.e. more address bits) wasn't a factor in the decision is that Univac
>> already that problem solved!
>>
>> I remember a conversation I had with Ron Smith at a Use conference
>> sometime probably in the late 1970s or early 1980s, when IBM had
>> implemented virtual memory/paging in the S/370 line. I can't remember
>> the exact quotation, but it was essentially that paging was sort of like
>> multibanking, but turned "inside out".
>>
>> That is, with virtual memory, multiple different, potentially large,
>> user program addresses get mapped to the same physical memory at
>> different times, whereas with multibanking, multiple smaller user
>> program addresses (i.e. bank relative addresses), get mapped at
>> different times (i.e. when the bank was pointed), to the same physical
>> memory. In other words, both virtual memory/paging and multibanking
>> break the identity of program relative addresses with physical memory
>> addresses.
>>
>> Since you can have a large number (hundreds or thousands) of banks
>> defined within a program, by pointing different banks at different
>> times, you can address a huge amount of memory (far larger than any
>> contemplated physical memory), and the limitation expressed in that
>> quotation doesn't apply.
>
> I believe there are a couple of problems with that view.
>
> The Exec depended very much on absolute addressing when managing
> mamory, which limited the systems to 2 ** 24 words of physical memory,
> which was Not Enough.
>
> And the amount of virtual space available to the system was limited by
> the amount of swapfile space which, if I recall correctly, was limited
> to 03400000000 words (less tha half a GiW, although I am too lazy to
> figure out the exact amount).
>
> I think both of those problems could have been addressed in a swapping
> system by applying some of the paging design (2200 paging supplied one
> or more Working Set file(s) for each subsystem),

Agreed. If those were the only problems, fixing them would have been a
much easier task than implementing paging.

> but swapping Large
> Banks (2**24 words max) or Very Large Banks (2**30 words max) would
> take too long and consume too much I/O bandwidth.

Absolutely agree! In fact, this was one of the issues mentioned in the
video, compounded by people not making banks dynamic when they probably
should have, thus increasing executable sizes and hence swap sizes. However,
this problem, while important, has nothing to do with running out of
memory addressing bits.

> I grant it would be
> interesting to follow up on Nick McLaren's idea of using base and
> bounds with swapping on systems with a lot of memory, but my
> experiences fixing 2200 paging bugs suggests (to me) that the end
> result would not be as satisfactory as Nick thought (even though he's
> probably much smarter than me).

I think Nick's thinking was overly influenced by the environment he was
used to, that is, scientific computing, where there is much less swapping
than in, say, a general-purpose system such as a time-shared program
development system. If you don't do much swapping, it matters less how
long it takes.

>>
>> Each solution (paging and multi banking) has advantages and
>> disadvantages, and one can argue the relative merits of the two
>> solutions (we can discuss that further if anyone cares), they both solve
>> the problem, so solving that problem shouldn't/couldn't be the
>> motivation for Unisys implementing paging in 2200s.
>>
>> Obviously, I invite comments/questions/arguments, etc.
>
> My views are colored by my suspicion that I am one of the very few
> people still working who has worked down in the bowels of memory
> management by swapping and memory management by paging.

Probably true. A unicorn :-)

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...

<uk3gdr$3vur2$2@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=474&group=comp.sys.unisys#474

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: Speaking of the 1110 AKA 1100/40 and its two types of memory ...
Date: Mon, 27 Nov 2023 17:40:11 -0800
Organization: A noiseless patient Spider
Lines: 63
Message-ID: <uk3gdr$3vur2$2@dont-email.me>
References: <ubdjto$2d085$1@dont-email.me>
<4a2bbd4c-8e8b-4136-ae74-89b9a9ea1edcn@googlegroups.com>
<ubf86b$2nnjm$1@dont-email.me>
<f56ecc39-a1ec-4e67-8fc4-696114f6d99cn@googlegroups.com>
<ubo3b6$9hr9$1@dont-email.me>
<e8862a35-38bb-4f36-a512-f44b552c2e65n@googlegroups.com>
<ug6gnr$1t5hb$1@dont-email.me> <sakeiite7fsjdum88n3obf3ql8d3un3r52@4ax.com>
<ujitv6$uifo$1@dont-email.me> <fglqlipg3ngqnk30mtmbertuacl0tmk58e@4ax.com>
<zyO7N.128776$Ee89.126555@fx17.iad>
<hjv1mih96pt310ttob6idjl1jrvp1ivr3d@4ax.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 28 Nov 2023 01:40:12 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="6ab5e1bccaefdaf17b623068b07b8318";
logging-data="4193122"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX188PT5WPzO1RuOjGspoY6JxH+YhYFcCctQ="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:/xWafEwc2fSIdGW+6U4PfhhncV4=
In-Reply-To: <hjv1mih96pt310ttob6idjl1jrvp1ivr3d@4ax.com>
Content-Language: en-US
 by: Stephen Fuld - Tue, 28 Nov 2023 01:40 UTC

On 11/24/2023 11:54 AM, David W Schroth wrote:
> On Thu, 23 Nov 2023 20:25:03 GMT, scott@slp53.sl.home (Scott Lurndal)
> wrote:

big snip

>> My experiences with MCP/VS on the Burroughs side, which
>> supported swapping (rollout/rollin) contiguous regions
>> of 1000 digit "pages" showed that checkerboarding of
>> memory was inevitable, leading to OS defragmentation
>> overhead and/or excessive swapping in any kind of
>> multiprogramming environment.

That agrees with what the video said, and my experience, with the
possible exception of Nick McLaren's comments on a scientific workload
(obviously not V-series).

>> Swapping to solid state disk ameliorated the performance
>> overhead somewhat, but at a price.

Sure. In the early 1970s, we used UCS (Unified Channel Storage), which
was essentially 1106 core memory used as a peripheral, for swapping. By
the late 70s, it was replaced by Amperif SSDs.

>>
>>
>>> My views are colored by my suspicion that I am one of the very few
>>> people still working who has worked down in the bowels of memory
>>> management by swapping and memory management by paging.
>>
>> I'll take paging over any segmentation scheme anyday.
>
> And my experience with OS2200 memory management leaves me preferring
> both paging and segmentation, each doing what it does best.
>
> Paging for mapping virtual to physical and getting chunks of virtual
> space into physical memory and out to backing store.
>
> And segmentation for process/thread isolation and access control.

Yup. I dislike "overloading" the paging mechanism with the protection
mechanism, which has nothing to do with 4K boundaries. I sort of like
what the Mill is proposing - a separate data structure with base and
limits for protection, cached in the CPU by a "PLB" (Protection Look
Aside Buffer) working along side a traditional page table/TLB mechanism.
It accomplishes the separation without the issues of multi-banking e.g.
the user having to point different banks.

> Which is how they have been used in OS2200 since the early '90s...

Yup.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
