Rocksolid Light



computers / comp.sys.unisys / Re: scale-out of OS2200

Subject / Author
* scale-out of OS2200  (Kurt Duncan)
+* Re: scale-out of OS2200  (Stephen Fuld)
|+- Re: scale-out of OS2200  (Kurt Duncan)
|`* Re: scale-out of OS2200  (David W Schroth)
| `* Re: scale-out of OS2200  (Kurt Duncan)
|  +* Re: scale-out of OS2200  (Lewis Cole)
|  |`* Re: scale-out of OS2200  (Kurt Duncan)
|  | `- Re: scale-out of OS2200  (Lewis Cole)
|  `* Re: scale-out of OS2200  (David W Schroth)
|   +- Re: scale-out of OS2200  (Scott Lurndal)
|   `* Re: scale-out of OS2200  (Kurt Duncan)
|    +- Re: scale-out of OS2200  (mpe...@gmail.com)
|    `* Re: scale-out of OS2200  (Stephen Fuld)
|     +* Re: scale-out of OS2200  (Lewis Cole)
|     |`- Re: scale-out of OS2200  (Scott Lurndal)
|     +- Re: scale-out of OS2200  (David W Schroth)
|     `- Re: scale-out of OS2200  (Scott Lurndal)
`* Re: scale-out of OS2200  (Stephen Fuld)
 `* Re: scale-out of OS2200  (Kurt Duncan)
  `- Re: scale-out of OS2200  (Stephen Fuld)

scale-out of OS2200

<4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=341&group=comp.sys.unisys#341

Newsgroups: comp.sys.unisys
From: kurtadun...@gmail.com (Kurt Duncan)
Subject: scale-out of OS2200
Date: Thu, 29 Jun 2023 18:33:43 -0700 (PDT)
Message-ID: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
 by: Kurt Duncan - Fri, 30 Jun 2023 01:33 UTC

So, just spit-balling ideas here.
Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

Spin up one or more database instances as needed.

Spin up two or three batch services for nightly processing.

Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.

Same thing with BIS.

Then you would truly have cloud-native...ish OS2200. Thoughts?
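As a purely illustrative sketch (the `Service` class and the threshold below are hypothetical, not anything that exists in OS2200), the "more than, say, 100 users on that one service, spin up another one" rule amounts to:

```python
# Hypothetical illustration of the scale-out rule described above:
# grow a service's instance count so that no instance serves more
# than a fixed number of users (100 in the post's DEMAND example).
from dataclasses import dataclass
from math import ceil

@dataclass
class Service:
    name: str
    users_per_instance: int  # scale-out threshold per instance
    instances: int = 1

    def rebalance(self, total_users: int) -> int:
        """Scale out (or back in, to a floor of one instance) so that
        no instance exceeds users_per_instance users."""
        self.instances = max(1, ceil(total_users / self.users_per_instance))
        return self.instances

demand = Service("DEMAND", users_per_instance=100)
demand.rebalance(250)   # three instances needed for 250 users
demand.rebalance(80)    # back down to a single instance
```

The same policy could apply per service type (TIP, batch, BIS), each with its own threshold.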

Re: scale-out of OS2200

<u7motv$2iibt$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=343&group=comp.sys.unisys#343

From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
Date: Fri, 30 Jun 2023 07:30:21 -0700
Message-ID: <u7motv$2iibt$1@dont-email.me>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
 by: Stephen Fuld - Fri, 30 Jun 2023 14:30 UTC

On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> So, just spit-balling ideas here.
> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

I am not sure exactly what you are proposing here, nor what the
advantages of it are. See below for specifics.

> One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

Does this mean that a file name has to specify somehow which MFD
instance it is in, or does a run specify which instance all files that
run references belong to? What if a run wants to access a file in a
different instance? Or do you mean multiple copies of the same
information? In that case, updates have to be propagated to multiple
instances. Both are messy, and what do you gain?

> TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each
managing its own set of transactions. It is costly performance-wise, as
you are doing CPU dispatching twice, once in the OS and once in the CICS
instance. There is a reason why TIP tended to outperform CICS.

> Spin up one or more database instances as needed.

What is an instance here? The common code to handle the database? Why
waste the space? The user code? Already done. Again, I am not sure
what you mean here.

> Spin up two or three batch services for nightly processing.
>
> Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.

One of the really nice things about OS/2200 is that batch and demand
share so much of the same facilities, both within the OS and in things
like ECL. What is the advantage of separating and duplicating them?
And what does adding another instance of a demand service if one gets
over 100 users buy you?

> Same thing with BIS.

IIRC BIS is what used to be called MAPPER. If so, I think you can have
multiple Mapper runs open simultaneously. I don't know if anyone does
this, or why they would.

> Then you would truly have cloud-native...ish OS2200. Thoughts?

My initial thoughts are that it seems like a lot of work for minimal
benefit. But I freely admit, I don't fully understand your proposal. :-(

One further note. Your talk of "cloudish" could be sort of like
multiple 2200s in a shared disk environment. I don't know if such
configurations are still supported.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: scale-out of OS2200

<67aed343-f4ec-42ff-ab3d-ddbdb69ff80en@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=345&group=comp.sys.unisys#345

Newsgroups: comp.sys.unisys
From: kurtadun...@gmail.com (Kurt Duncan)
Subject: Re: scale-out of OS2200
Date: Fri, 30 Jun 2023 09:18:16 -0700 (PDT)
Message-ID: <67aed343-f4ec-42ff-ab3d-ddbdb69ff80en@googlegroups.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me>
 by: Kurt Duncan - Fri, 30 Jun 2023 16:18 UTC

On Friday, June 30, 2023 at 8:30:25 AM UTC-6, Stephen Fuld wrote:
< snip >

Each instance boots from/into its local MFD (think STD#Q*F...).
For TIP, the registered files would be in {something}#Q*F which is, yes, a shared MFD.
But for the transaction world, if you can live in different application contexts, then you can live in different shared directories for the TIP/database files.
One TIP instance runs from the Acct shared MFD, five TIP instances from the ShoppingCart shared MFD, two TIP instances from... you get the idea.
When you have a sale on Mary-Lou Retton action figures, you might scale the shopping cart microservices up to eight or ten TIP instances, until the sale is over.

Developers would have their own sandboxes in the local MFD, possibly snapshotted each night or whatever/whenever, with pushes to production going to
one common shared MFD.

The point of cloud-native is that you break up your monolithic... whatever.... into micro-services, which you can then scale out... into multiple instances of the service.
If one service crashes, the others continue, *and* your spend is completely dependent upon your usage. So.... you don't pay for all the cycles you don't use during the evening, or whatever.
And... if you suddenly have a huge spike in usage, you can push a button and have enough capacity to deal with it.

Batch/Demand using the same facilities is really just a matter of using a lot of the same OS code, which can be broken down into modules and linked into both the batch executable and the demand executable.

You already have something of a concept of instances in terms of recovery applications (and the whole concept of IRU is really the main hurdle... I think...).

The "why" is to allow *some* existing applications to migrate into the micro-services cloud world with only a small rewrite and, hopefully, little to no re-architecting, and to relieve the monolithic systems of the load of lots of developers by moving those apps into scheduled-as-needed u-services.

Unisys does metering, but you still have to have the whole system on-prem. Unless you do the cloud thing, but then you are still lugging around a whole virtual mainframe...
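The layout described above (pools of TIP instances keyed by shared MFD, each pool scaled independently) can be sketched as follows. This is hypothetical Python; the `TipPool` class and the Acct/ShoppingCart names simply mirror the post's example and are not real OS2200 interfaces:

```python
# Hypothetical sketch: each shared MFD owns an independently scalable
# pool of TIP instances, per the example in the post above.

class TipPool:
    """A pool of TIP instances running from one shared MFD."""

    def __init__(self, shared_mfd: str, instances: int = 1):
        self.shared_mfd = shared_mfd
        self.instances = instances

    def scale_to(self, n: int) -> None:
        # keep at least one instance per shared directory
        self.instances = max(1, n)

pools = {
    "Acct": TipPool("Acct", instances=1),
    "ShoppingCart": TipPool("ShoppingCart", instances=5),
}

# Sale on Mary-Lou Retton action figures: scale only the cart pool.
pools["ShoppingCart"].scale_to(10)
```

Scaling one pool leaves the others untouched, which is the point of keying the pools by application context rather than running everything under one directory.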

Re: scale-out of OS2200

<lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=346&group=comp.sys.unisys#346

From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
Date: Fri, 30 Jun 2023 22:16:05 -0500
Message-ID: <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me>
 by: David W Schroth - Sat, 1 Jul 2023 03:16 UTC

On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
<sfuld@alumni.cmu.edu.invalid> wrote:

>On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>> So, just spit-balling ideas here.
>> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>
>I am not sure exactly what you are proposing here, nor what the
>advantages of it are. See below for specifics.
>
My take is someone has a shaky grasp of how Operating Systems
(including OS2200) are structured.
>
>
>> One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
>
>Does this mean that a file name has to specify somehow which MFD
>instance it is in, or does a run specify which instance all files that
>run references belong to? What if a run wants to access a file in a
>different instance? Or do you mean multiple copies of the same
>information? In that case, updates have to be propagated to multiple
>instances. Both are messy, and what do you gain?
>
>
>
>> TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
>
>
>Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each
>managing its own set of transactions. It is costly performance-wise, as
>you are doing CPU dispatching twice, once in the OS and once in the CICS
>instance. There is a reason why TIP tended to outperform CICS.

I tend to believe the per-Application-Group approach does a better job
of partitioning transactions.
>
>
>> Spin up one or more database instances as needed.
>
>What is an instance here? The common code to handle the database? Why
>waste the space? The user code? Already done. Again, I am not sure
>what you mean here.

Each Application Group is a different DMS instance. I forget just how
many Application Groups are currently supported; I'm pretty sure it is
more than 10.
>
>
>
>> Spin up two or three batch services for nightly processing.
>>
>> Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.
>
>One of the really nice things about OS/2200 is that batch and demand
>share so much of the same facilities, both within the OS and in things
>like ECL. What is the advantage of separating and duplicating them?
>And what does adding another instance of a demand service if one gets
>over 100 users buy you?
>
>
>> Same thing with BIS.
>
>IIRC BIS is what used to be called MAPPER. If so, I think you can have
>multiple Mapper runs open simultaneously. I don't know if anyone does
>this, or why they would.
>
>
>> Then you would truly have cloud-native...ish OS2200. Thoughts?
>
>My initial thoughts are that it seems like a lot of work for minimal
>benefit. But I freely admit, I don't fully understand your proposal. :-(
>
>One further note. Your talk of "cloudish" could be sort of like
>multiple 2200s in a shared disk environment. I don't know if such
>configurations are still supported.

As far as I know, we still support multiple 2200s in a shared disk
environment. I typically have demand sessions open on RS06, RS08,
RS15, and RS36 when I'm working. My Exec builds run from shared file
sets, RS06 PRIMUS (a DMS application) runs on all systems in the RS06
complex. And I know there are other 2200 systems in the RS06 complex.

Regards,

David W. Schroth

Re: scale-out of OS2200

<7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=347&group=comp.sys.unisys#347

Newsgroups: comp.sys.unisys
From: kurtadun...@gmail.com (Kurt Duncan)
Subject: Re: scale-out of OS2200
Date: Fri, 30 Jun 2023 21:19:53 -0700 (PDT)
Message-ID: <7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
 by: Kurt Duncan - Sat, 1 Jul 2023 04:19 UTC

On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
> < snip >
> My take is someone has a shaky grasp of how Operating Systems
> (including OS2200) are structured.

I do have *some* small insight into OS2200 - not great, but some.
Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.

It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.

< snip >

Re: scale-out of OS2200

<80243eb1-cfee-4522-9bd8-bba432aab9ccn@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=349&group=comp.sys.unisys#349

Newsgroups: comp.sys.unisys
From: l_c...@juno.com (Lewis Cole)
Subject: Re: scale-out of OS2200
Date: Fri, 30 Jun 2023 22:30:05 -0700 (PDT)
Message-ID: <80243eb1-cfee-4522-9bd8-bba432aab9ccn@googlegroups.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com> <7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com>
 by: Lewis Cole - Sat, 1 Jul 2023 05:30 UTC

On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
< snip >
> Let me clarify: I am not suggesting
> breaking up OS2200 into pieces, nor
> deploying lots of copies of it.
> I'm suggesting completely rewriting
> enough of the various parts of OS2200
> to allow what had been OS2200
> applications to begin looking more
> like cloud-native solutions...
> containers, all that fun stuff; and
> doing it in such a way as to require
> a minimum amount of re-coding and/or
> re-architecting.

Okay, so you're trying to come up with some way or ways to effectively keep OS2200 alive and perhaps even growing. That's nice.
(I don't understand why you think that "containers" should be a good selling point, since a 2200 system VM should be able to do just about anything you might want to do with a container, and such a VM already exists [AKA PS2200] without requiring much, if any, OS2200 modification to work.
Maybe I've missed something, but I wasn't aware of a significant up-tick in OS2200 acceptance and use because of PS2200.)
> It is certainly a mind-bender for a
> person steeped in monolithic OS's
> and hardware (such as I am), to
> understand and adapt to the
> cloud/container world.
> But in doing so - as almost the
> entire rest of the non-mainframe
> world is doing - I'd hate to see
> the OS2200 architecture lose even
> more ground.
> I'm trying to think of unique ways
> in which the 2200 eco-sphere
> -- from a customer point of view --
> might be altered such that it can
> compete in this new goofy
> frustrating world.
< snip >

I was tempted to ask what thing(s) you think is preventing OS2200 and its applications from being more prevalent -- something that doesn't involve buzzwords like "containers" and "cloud based", but I think I'd like to try a thought experiment and see where it leads.
Suppose that OS2200 was completely re-written to do something "magical" -- something that no other OS can do right now -- being able to potentially scale to billions and billions of processors efficiently like Barrelfish was hoping to show the way toward (i.e. the Multi-Kernel approach), for example.
In your humble opinion, why would anyone in their right mind jump on to using the new and improved OS2200 rather than wait for someone to hopefully come up with a way to get Linux to be able to do the same thing(s) or actually take an active part to make it do the same thing(s)?

Re: scale-out of OS2200

<191cf458-65bb-4e8a-8b31-1ee86bf0cbb4n@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=350&group=comp.sys.unisys#350

Newsgroups: comp.sys.unisys
From: kurtadun...@gmail.com (Kurt Duncan)
Subject: Re: scale-out of OS2200
Date: Sat, 1 Jul 2023 08:56:53 -0700 (PDT)
Message-ID: <191cf458-65bb-4e8a-8b31-1ee86bf0cbb4n@googlegroups.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com> <7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com> <80243eb1-cfee-4522-9bd8-bba432aab9ccn@googlegroups.com>
 by: Kurt Duncan - Sat, 1 Jul 2023 15:56 UTC

On Friday, June 30, 2023 at 11:30:08 PM UTC-6, Lewis Cole wrote:
> On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
> < snip >
> Suppose that OS2200 was completely re-written to do something "magical" -- something that no other OS can do right now -- being able to potentially scale to billions and billions of processors efficiently like Barrelfish was hoping to show the way toward (i.e. the Multi-Kernel approach), for example.
> In your humble opinion, why would anyone in their right mind jump on to using the new and improved OS2200 rather than wait for someone to hopefully come up with a way to get Linux to be able to do the same thing(s) or actually take an active part to make it do the same thing(s)?

I'm suggesting that someone might take a more active role in making the architectures and facilities which, buzzwords or not, actually do define the state of general enterprise computing, available to those people who are still in the world of TIP/HVTIP and RDMS, and batch processing. That way they can leverage at least some major portion of their existing code while moving toward a computing paradigm which is in use today, has been for a number of years, and (for better or worse) will be for some number of years ahead.

I am not suggesting billions of processors. I am unsure that, if massive processing or I/O is in a customer's application mix, anything other than a 2200 would satisfy them. But I don't think the entire world of OS2200 users requires unobtainium. And I am not suggesting scaling an OS. I am suggesting providing the minimal environment necessary for a particular mix of (e.g., TIP) applications to function, and making that environment operate in exactly the same world in which so many non-OS2200 applications operate.

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=351&group=comp.sys.unisys#351

Newsgroups: comp.sys.unisys
Newsgroups: comp.sys.unisys
Date: Sat, 1 Jul 2023 09:59:33 -0700 (PDT)
In-Reply-To: <191cf458-65bb-4e8a-8b31-1ee86bf0cbb4n@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2601:602:c080:3f60:9970:2eac:3d22:8730;
posting-account=DycLBQoAAACVeYHALMkZoo5C926pUXDC
NNTP-Posting-Host: 2601:602:c080:3f60:9970:2eac:3d22:8730
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
<u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
<7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com> <80243eb1-cfee-4522-9bd8-bba432aab9ccn@googlegroups.com>
<191cf458-65bb-4e8a-8b31-1ee86bf0cbb4n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <4f4a1fbc-1597-4917-b025-1109ffdfd72fn@googlegroups.com>
Subject: Re: scale-out of OS2200
From: l_c...@juno.com (Lewis Cole)
Injection-Date: Sat, 01 Jul 2023 16:59:34 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 5250
 by: Lewis Cole - Sat, 1 Jul 2023 16:59 UTC

On Saturday, July 1, 2023 at 8:56:55 AM UTC-7, Kurt Duncan wrote:
> On Friday, June 30, 2023 at 11:30:08 PM UTC-6, Lewis Cole wrote:
> > On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
< snip >
> I'm suggesting that someone might
> take a more active role in making
> the architectures and facilities
> which, buzzwords or not, actually
> do define the state of general
> enterprise computing, available
> to those people who are still in
> the world of TIP/HVTIP and RDMS,
> and batch processing. So that
> they can leverage at least some
> major portion of their existing
> code, while moving toward a
> computing paradigm which is in
> use today, has been for a number
> of years and (for better or worse)
> will be for some number of years
> ahead.

I think we may be talking past each other.

While you seem to be looking for ways to make things "better"/"easier" for software developers out in the field, ISTM that "better"/"easier" doesn't mean spit unless you can make an economic argument to convince people who are "in charge" (who might well not be software developers) that the changes you are suggesting are A Good Thing (meaning worth money).

> I am not suggesting billions of
> processors. I am unsure that, if
> massive processing or IO is in a
> customers application mix, that
> anything other than a 2200 would
> satisfy them. [...]

Supporting billions and billions (so to speak) of processors is something that (IMHO) is coming "Real Soon Now" because It Has To.
The shared-memory paradigm depends on the underlying hardware keeping cached, shared views of memory consistent quickly and efficiently, and the hardware's ability to do so has been getting ever closer to breaking down, especially as the number of processors goes up.
Barrelfish was/is an attempt to address this problem, and while it has inspired other OS developers to see what they can do with the multi-kernel paradigm, neither Barrelfish nor any of the things it has inspired has come anywhere close to being "mainstream" yet despite 10+ years of effort.
Given that it was a research project, this is to be expected, but I think it clearly shows that there can be a LONG lead time before any sort of pay off shows up even for something that Has To be coming.

ISTM that the same is likely true with respect to the suggestions you are making.
Even if you are right about the things you are waving your arms at, it might take a long time before anyone, not least of all the Company, sees any benefit.
The Company doesn't have the bodies that it used to, so I can't really see why they (or anyone else) would bother doing anything along the lines of what you're suggesting unless it's going to almost certainly add to the bottom line "Real Soon Now".

ISTM that you're asking for changes to make things "better"/"easier" for someone, but not someone who can/will cough up money in exchange, and I'm asking you to wave your arms at an economic argument that rebuts this.

> [...] But I don't think the entire
> world of OS2200 users requires
> un-obtanium. And I am not suggesting
> scaling an OS. I am suggesting
> providing the minimal environment
> necessary for a particular mix of
> (e.g., TIP) applications to function,
> and making that environment operate
> in exactly the same world which so
> many non-OS2200 applications operate.

Once Upon a Time, many moons ago, a customer sent in a SUR (Software User Report) asking for the Company to make some change.
The response to the SUR was basically, "Send Money".

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=355&group=comp.sys.unisys#355

Newsgroups: comp.sys.unisys
From: davidsch...@harrietmanor.com (David W Schroth)
Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
Message-ID: <po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com> <7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com>
User-Agent: ForteAgent/8.00.32.1272
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Lines: 44
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sat, 08 Jul 2023 13:04:37 -0500
X-Received-Bytes: 2992
 by: David W Schroth - Sat, 8 Jul 2023 18:04 UTC

On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
<kurtaduncan@gmail.com> wrote:

>On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
>> On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
>> <sf...@alumni.cmu.edu.invalid> wrote:
>>
>> >On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>> >> So, just spit-balling ideas here.
>> >> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>> >
>> >I am not sure exactly what you are proposing here, nor what the
>> >advantages of it are. See below for specifics.
>> >
>> My take is someone has a shaky grasp of how Operating Systems
>> (including OS2200) are structured.
>
>I do have *some* small insight into OS2200 - not great, but some.

I regard myself as reasonably cognizant of at least *some* of your
exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
in 1978.

>Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
>I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.
>

Which leads to the question I can never really find the answer for -
what is a good reason for containers?

As best I can tell, containers appear to be a response to Open Source
Software shortcomings with a heavy overlay of "This isn't a bug, it's
a feature" marketing.

>It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
>But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
>I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.
>
>
<snip>

Regards,

David W. Schroth

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=356&group=comp.sys.unisys#356

Newsgroups: comp.sys.unisys
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: sco...@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: scale-out of OS2200
Newsgroups: comp.sys.unisys
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com> <u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com> <7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com> <po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
Lines: 27
Message-ID: <wHlqM.22329$3qt4.21905@fx08.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Sat, 08 Jul 2023 22:46:52 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Sat, 08 Jul 2023 22:46:52 GMT
X-Received-Bytes: 2164
 by: Scott Lurndal - Sat, 8 Jul 2023 22:46 UTC

David W Schroth <davidschroth@harrietmanor.com> writes:
>On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
><kurtaduncan@gmail.com> wrote:

>Which leads to the question I can never really find the answer for -
>what is a good reason for containers?

In one word, devops.

Makes for simple software deployment (and isolation) in large
data centers.

>>It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
>>But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
>>I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.

Managing one mainframe (or four) running batch production or on-line
transaction processing is not a huge job with much dynamic management
activity.

Managing a large data center with applications intended to scale with
usage is where containers come into play. Simply deploy the container
to any host in the data center and it's up and running. All done
either automatically based on demand, or as commanded.

The application is isolated within the container both from the
system as well as the other applications. A security benefit.
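Scott's description — deploy a self-contained unit to any host, scale the number of replicas with demand, tear them down when done — can be caricatured in a few lines. This is a hedged, stdlib-only sketch of the *elasticity* idea alone (real containers add image packaging and kernel-level isolation); `handle_txn` and `scale_out` are invented names, not any real API:

```python
# Stdlib-only analogy for container-style scale-out: each "replica"
# is a worker spun up on demand and torn down when the work drains.
from concurrent.futures import ThreadPoolExecutor

def handle_txn(txn_id: int) -> str:
    """Stand-in for one self-contained transaction-processing unit."""
    return f"txn-{txn_id}: ok"

def scale_out(pending: list[int], replicas: int) -> list[str]:
    """Start 'replicas' workers, drain the pending work, shut them down."""
    with ThreadPoolExecutor(max_workers=replicas) as pool:
        # map preserves input order; the pool is destroyed on exit,
        # which is the "ephemeral" part of the analogy.
        return list(pool.map(handle_txn, pending))

print(scale_out([1, 2, 3], replicas=2))
```

The point is only the shape of the operation: capacity (`replicas`) is a deployment-time knob, not something baked into the application code.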

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=358&group=comp.sys.unisys#358

Newsgroups: comp.sys.unisys
Newsgroups: comp.sys.unisys
Date: Mon, 10 Jul 2023 07:25:59 -0700 (PDT)
In-Reply-To: <po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
Injection-Info: google-groups.googlegroups.com; posting-host=75.71.150.4; posting-account=mEuitgoAAADY9v63PvAUFdA9nsy_ToEE
NNTP-Posting-Host: 75.71.150.4
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
<u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
<7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com> <po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <d48d34ad-b068-459b-8950-74d8303959edn@googlegroups.com>
Subject: Re: scale-out of OS2200
From: kurtadun...@gmail.com (Kurt Duncan)
Injection-Date: Mon, 10 Jul 2023 14:26:00 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Kurt Duncan - Mon, 10 Jul 2023 14:25 UTC

On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
> On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
> <kurta...@gmail.com> wrote:
> >On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
> >> On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
> >> <sf...@alumni.cmu.edu.invalid> wrote:
> >>
> >> >On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> >> >> So, just spit-balling ideas here.
> >> >> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
> >> >
> >> >I am not sure exactly what you are proposing here, nor what the
> >> >advantages of it are. See below for specifics.
> >> >
> >> My take is someone has a shaky grasp of how Operating Systems
> >> (including OS2200) are structured.
> >
> >I do have *some* small insight into OS2200 - not great, but some.
> I regard myself as reasonably cognizant of at least *some* of your
> exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
> in 1978.
> >Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
> >I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.
> >
> Which leads to the question I can never really find the answer for -
> what is a good reason for containers?
>
> As best I can tell, containers appear to be a response to Open Source
> Software shortcomings with a heavy overlay of "This isn't a bug, it's
> a feature" marketing.
> >It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
> >But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
> >I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.
> >
> >
> <snip>
>
> Regards,
>
> David W. Schroth

Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS... are evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/whatever.

WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those instances were on Amazon EKS or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.

For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely coupled (or entirely uncoupled) systems.
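The 8/5-versus-24/7 argument above is easy to put rough numbers on. A minimal sketch with a made-up hourly rate (nothing here reflects actual cloud or mainframe pricing):

```python
# Back-of-envelope version of the dev-environment argument: capacity
# used 8 hours a day, 5 days a week, rented only while in use, versus
# owned capacity that is (in effect) paid for around the clock.
HOURS_PER_WEEK = 24 * 7   # 168
DEV_HOURS = 8 * 5         # 40

def weekly_cost(rate_per_hour: float, hours: int) -> float:
    """Cost of holding capacity for 'hours' at a flat hourly rate."""
    return rate_per_hour * hours

rate = 3.00  # hypothetical dollars/hour, purely illustrative
on_demand = weekly_cost(rate, DEV_HOURS)       # pay only while developers work
always_on = weekly_cost(rate, HOURS_PER_WEEK)  # capacity idles nights/weekends

print(f"utilization: {DEV_HOURS / HOURS_PER_WEEK:.0%}")
print(f"on-demand ${on_demand:.0f}/wk vs always-on ${always_on:.0f}/wk")
```

At 40 of 168 hours, the owned box sits below 25% utilization, which is the whole force of the "spin it up only when needed" pitch — whatever the real rates turn out to be.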

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=359&group=comp.sys.unisys#359

Newsgroups: comp.sys.unisys
Newsgroups: comp.sys.unisys
Date: Mon, 10 Jul 2023 10:51:22 -0700 (PDT)
In-Reply-To: <d48d34ad-b068-459b-8950-74d8303959edn@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=76.94.30.146; posting-account=V-JxhAoAAAA7K1REWiT1YEYM1aal3G4q
NNTP-Posting-Host: 76.94.30.146
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
<u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
<7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com> <po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
<d48d34ad-b068-459b-8950-74d8303959edn@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <3e168dc6-97a4-4ef1-b351-392a225cceedn@googlegroups.com>
Subject: Re: scale-out of OS2200
From: mpe...@gmail.com (mpe...@gmail.com)
Injection-Date: Mon, 10 Jul 2023 17:51:24 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 6725
 by: mpe...@gmail.com - Mon, 10 Jul 2023 17:51 UTC

On Monday, July 10, 2023 at 7:26:01 AM UTC-7, Kurt Duncan wrote:
> On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
> > On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
> > <kurta...@gmail.com> wrote:
> > >On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
> > >> On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
> > >> <sf...@alumni.cmu.edu.invalid> wrote:
> > >>
> > >> >On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> > >> >> So, just spit-balling ideas here.
> > >> >> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
> > >> >
> > >> >I am not sure exactly what you are proposing here, nor what the
> > >> >advantages of it are. See below for specifics.
> > >> >
> > >> My take is someone has a shaky grasp of how Operating Systems
> > >> (including OS2200) are structured.
> > >
> > >I do have *some* small insight into OS2200 - not great, but some.
> > I regard myself as reasonably cognizant of at least *some* of your
> > exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
> > in 1978.
> > >Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
> > >I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.
> > >
> > Which leads to the question I can never really find the answer for -
> > what is a good reason for containers?
> >
> > As best I can tell, containers appear to be a response to Open Source
> > Software shortcomings with a heavy overlay of "This isn't a bug, it's
> > a feature" marketing.
> > >It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
> > >But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
> > >I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.
> > >
> > >
> > <snip>
> >
> > Regards,
> >
> > David W. Schroth
> Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS... are evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/whatever.
>
> WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those instances were on Amazon eks or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.
>
> For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely and nonely coupled systems.

There is a somewhat new acronym being tossed around for server security.

DIE: Distributed, Immutable, and Ephemeral

Distributed: Work can and should be spread across multiple devices for <reasons>. Here in the mainframe community we can argue about that, but we all know the notion is out there and dominant.

Immutable: You can't change settings, code, databases, etc. You install a known secure configuration and set of software (to the degree that vulnerabilities are known and remediated) and nothing can cause configuration drift on that device. Malware can't be installed. Configuration settings can't be changed to open up security holes. And so on.

Ephemeral: When it is time to make a controlled change, new VMs are spun up and the immutable image is installed there. The old images are then destroyed.

Containers are a way of deploying those software defined images.
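The "Immutable" leg has a concrete mechanism behind it in the container world: images are content-addressed, so any change to the bits produces a different digest and must ship as a new image rather than an in-place edit. A minimal sketch of that idea, where the "image" is just a byte string and the digest plays the role a registry's sha256 pin plays:

```python
import hashlib

def digest(image: bytes) -> str:
    """Content address of an image: changing any bit changes this value."""
    return "sha256:" + hashlib.sha256(image).hexdigest()

def ok_to_run(image: bytes, pinned: str) -> bool:
    """Deploy only bits that match the pinned digest -- no drift allowed."""
    return digest(image) == pinned

v1 = b"app code + config, version 1"
pinned = digest(v1)

print(ok_to_run(v1, pinned))                  # unchanged image may run
print(ok_to_run(v1 + b" + malware", pinned))  # any mutation is rejected
```

"Ephemeral" then falls out naturally: since a changed image has a new identity, the controlled-change path is to start fresh instances of the new digest and destroy the old ones, exactly as described above.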

Re: scale-out of OS2200

https://www.novabbs.com/computers/article-flat.php?id=360&group=comp.sys.unisys#360

Newsgroups: comp.sys.unisys
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
Date: Mon, 10 Jul 2023 16:49:34 -0700
Organization: A noiseless patient Spider
Lines: 100
Message-ID: <u8i5ef$2lqmh$1@dont-email.me>
References: <4100d4f8-7630-444c-90c3-907126a7d273n@googlegroups.com>
<u7motv$2iibt$1@dont-email.me> <lv5v9itj7ujulr6brn20tdlit0vv957ha5@4ax.com>
<7eac11c5-af6a-4616-bd83-555f26072815n@googlegroups.com>
<po8jaihu0eiiu6s05pcn8o56n95b2l5nrc@4ax.com>
<d48d34ad-b068-459b-8950-74d8303959edn@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 10 Jul 2023 23:49:35 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="e4ac219b86608d35e866751f5969e679";
logging-data="2812625"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19gTTABClkmKqhDZPyGr+oZxN2utf+m5qE="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.13.0
Cancel-Lock: sha1:g8olfcLvWGAW7CBPh0+KRQeiWxQ=
Content-Language: en-US
In-Reply-To: <d48d34ad-b068-459b-8950-74d8303959edn@googlegroups.com>
 by: Stephen Fuld - Mon, 10 Jul 2023 23:49 UTC

On 7/10/2023 7:25 AM, Kurt Duncan wrote:
> On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
>> On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
>> <kurta...@gmail.com> wrote:
>>> On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
>>>> On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
>>>> <sf...@alumni.cmu.edu.invalid> wrote:
>>>>
>>>>> On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>>>>>> So, just spit-balling ideas here.
>>>>>> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>>>>>
>>>>> I am not sure exactly what you are proposing here, nor what the
>>>>> advantages of it are. See below for specifics.
>>>>>
>>>> My take is someone has a shaky grasp of how Operating Systems
>>>> (including OS2200) are structured.
>>>
>>> I do have *some* small insight into OS2200 - not great, but some.
>> I regard myself as reasonably cognizant of at least *some* of your
>> exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
>> in 1978.
>>> Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
>>> I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.
>>>
>> Which leads to the question I can never really find the answer for -
>> what is a good reason for containers?
>>
>> As best I can tell, containers appear to be a response to Open Source
>> Software shortcomings with a heavy overlay of "This isn't a bug, it's
>> a feature" marketing.
>>> It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
>>> But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
>>> I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.
>>>
>>>
>> <snip>
>>
>> Regards,
>>
>> David W. Schroth
>
> Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS... are evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/whatever.
>
> WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those instances were on Amazon eks or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.
>
> For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely and nonely coupled systems.

ISTM that there is some conflation of two things here. One is the
ability to provide greater isolation between applications than a single
OS can provide (though the single-OS side could argue that a well
designed OS could provide all the isolation you need); the other is the
ability to adjust compute capacity on the fly to meet varying demand.

If you really wanted more separation among instances of OS 2200, I don't
think there are any technical barriers preventing you from running
multiple instances of Linux, each with its own copy of the emulator and
OS/2200, to gain that separation. I know that Unisys modified Linux in
some way to make OS/2200 run better on it, but I don't know whether it
is technically possible to run multiple copies of the emulator on a
single Linux, or multiple copies of OS/2200 on a single copy of the
emulator. That would achieve the separation. But none of that is
related to varying the amount of CPU power available.

No matter what OS you have, if you need more compute capacity, you need
additional hardware. A single customer seems to me unlikely to have
additional hardware available "just in case", so that requirement is
met by renting capacity as required from some company that has lots of
extra capacity to make available to different customers as needed,
i.e. a "cloud computing facility". Having that seems to me to be
independent of what software you run on that cloud. For example,
assuming a business case could be made for it, I don't see any
substantial technical problem in getting extra x86 compute power from a
cloud and running Linux on that new CPU with OS2200 on top of that
Linux.

Of course, there are business issues that I am not competent to discuss.

On a related note, regarding the separation issue (again, not the
varying capacity issue), how is the solution different from something
like IBM's VM, which has been around since the 1970s?

And I may be misremembering, but I vaguely recall an aborted attempt by
Univac/Sperry to add a VM like capability in the form of a specialized
instruction to aid that, perhaps in the 1110? Am I just having a fever
dream??

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: scale-out of OS2200

<69fdde22-2870-4753-a0bb-9ca17a5a92dcn@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=361&group=comp.sys.unisys#361

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: l_c...@juno.com (Lewis Cole)
Date: Mon, 10 Jul 2023 18:28:53 -0700 (PDT)
 by: Lewis Cole - Tue, 11 Jul 2023 01:28 UTC

On Monday, July 10, 2023 at 4:49:37 PM UTC-7, Stephen Fuld wrote:
> ISTM that there is some conflation of two things here. One is the
> ability to provide greater isolation between applications than a single
> OS can provide (though the single-OS side could argue that a well-designed
> OS could provide all that you need); the other is the ability to adjust
> compute capacity on the fly to meet varying demand.

While containers supposedly provide greater isolation, I don't see that as something that Mr. Duncan is really pushing.
Instead, ISTM that it's just another selling point to supposedly make the case that implementing something container-like on OS2200 is A Good Thing.
What I see Mr. Duncan trying to do is to come up with something that will increase OS2200's desirability.

I can't fault him for that, but I don't see the value of what he's proposing economically.
I keep remembering when some of the Blue Bell suits came to talk to the troops in the Building 2 cafeteria and one of them basically said, "We can't cost-cut our way to profitability ... we have to grow our markets."
I took that to mean that if the Company wanted to do more than just barely survive, it had to sell our products (in particular, 1100/2200 hardware) to lots of people who weren't already our customers.

I still think that's the "correct" strategy, but at this point, I just don't know if there's anyone else to sell to and adding containers to OS2200 doesn't change that.

> If you really wanted more separation among instances of OS 2200, I don't
> think there are any technical barriers preventing you from running
> multiple instances of Linux, each with its own copy of the emulator and
> OS/2200, to gain that separation. [...]

I don't think there's any obstacle (aside from cost) to running 2200 emulator software directly on Intel hardware with no hypervisor, or on a type-1 (bare-metal) hypervisor, rather than on a KVM Linux hypervisor.
But while doing so might make an emulated 2200 system more secure, I just don't see that as being economically worthwhile either.
(I've spent some time wondering how difficult it would be to get BOOTBOOT to load up a 2200 emulator on bare metal directly, but that's something that a hobbyist can do, not a for-profit company with a limited body count.)

> [...] I know that Unisys modified Linux in
> some way to make OS/2200 run better on it, but I don't know whether it is
> technically possible to run multiple copies of the emulator on a
> single Linux, or multiple copies of OS/2200 on a single copy of the
> emulator. That would achieve the separation. But all of that is
> unrelated to varying the amount of CPU power available.

It is my understanding that the Company has a modified Linux kernel as the hypervisor (which it calls SAIL if you want to look up the Unisys documentation for it) upon which its emulated 2200 software runs.
Since the Company tweaked it, I would kinda suspect that it's not just a bog-standard KVM Linux.

> No matter what OS you have, if you need more compute capacity, you need
> additional hardware. A single customer seems to me to be unlikely to
> have additional hardware available "just in case", so that requirement
> is met by some sort of "rent capacity as required" from some company
> that has lots of extra capacity to make available to different customers
> as needed, i.e. a "cloud computing facility". Having that seems to me
> to be independent of what software you run on that cloud. For example,
> assuming that a business case could be made for it, I don't think there is
> any substantial technical problem with getting extra X86 compute power
> from a cloud, running Linux on that new CPU with OS2200 on top of that
> Linux.

I agree, which is why trying to modify OS2200 to support something container-like doesn't make much sense to me.
ISTM that it might if you had a server farm of 2200s (emulated or real) with a lot of excess compute capacity that you wanted to be able to re-purpose on the fly, but I doubt that's what current Unisys customers have lying around.

> Of course, there are business issues that I am not competent to discuss.

I disagree. I think that people who are actually familiar with the good or service that a company sells are as competent as those who "run" a company but don't have a clue what that company really sells (because everything is a "widget").

> On a related note, regarding the separation issue (again, not the
> varying capacity issue), how is the solution different from something
> like IBM's VM, which has been around since the 1970s?

If you're referring to how a container is different from a VM, then I would say my understanding is that they are both functionally equivalent, but that containers are supposedly "lighter weight" when it comes to the underlying resources, quicker to spin up, and (usually?) faster.
Meanwhile, VMs are supposedly more secure.
Keep in mind that I'm just a poor dumb former Bootstrap programmer though who has no experience with containers and so anything I say on the subject should probably be taken with a large salt lick.

> And I may be misremembering, but I vaguely recall an aborted attempt by
> Univac/Sperry to add a VM like capability in the form of a specialized
> instruction to aid that, perhaps in the 1110? Am I just having a fever
> dream??

I would think that it would take more than a single instruction to add VM support to the 1100 architecture, although I suppose that one could do it with a VERY modified "execute" instruction.
FWIW, once upon a time I recall hearing that some processor (not a Univac/Sperry one) was going to have an "execute alternate architecture" instruction.
I don't know who was supposedly going to make it, or whether anything ever became of it.
I gather that IBM's "Future System" CPUs could be reprogrammed to be function specific CPUs.
Of course, at least some Burroughs machines could be loaded up with different instruction sets for programs written in different HLLs.
The only thing that kinda-sorta-vaguely strikes me as the same thing was Roanoake.

If you come up with more details, I would be interested in hearing them.

Re: scale-out of OS2200

<miopaih7tl8mtcmdso53g8fo28ubkobdpp@4ax.com>


https://www.novabbs.com/computers/article-flat.php?id=362&group=comp.sys.unisys#362

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: davidsch...@harrietmanor.com (David W Schroth)
Date: Tue, 11 Jul 2023 00:28:10 -0500
 by: David W Schroth - Tue, 11 Jul 2023 05:28 UTC

On Mon, 10 Jul 2023 16:49:34 -0700, Stephen Fuld
<sfuld@alumni.cmu.edu.invalid> wrote:

>On 7/10/2023 7:25 AM, Kurt Duncan wrote:
>> On Saturday, July 8, 2023 at 11:59:07?AM UTC-6, David W Schroth wrote:
>>> On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
>>> <kurta...@gmail.com> wrote:
>>>> On Friday, June 30, 2023 at 9:10:45?PM UTC-6, David W Schroth wrote:
>>>>> On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
>>>>> <sf...@alumni.cmu.edu.invalid> wrote:
>>>>>
>>>>>> On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>>>>>>> So, just spit-balling ideas here.
>>>>>>> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>>>>>>
>>>>>> I am not sure exactly what you are proposing here, nor what the
>>>>>> advantages of it are. See below for specifics.
>>>>>>
>>>>> My take is someone has a shaky grasp of how Operating Systems
>>>>> (including OS2200) are structured.
>>>>
>>>> I do have *some* small insight into OS2200 - not great, but some.
>>> I regard myself as reasonably cognizant of at least *some* of your
>>> exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
>>> in 1978.
>>>> Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
>>>> I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum amount of re-coding and/or re-architecting.
>>>>
>>> Which leads to the question I can never really find the answer for -
>>> what is a good reason for containers?
>>>
>>> As best I can tell, containers appear to be a response to Open Source
>>> Software shortcomings with a heavy overlay of "This isn't a bug, it's
>>> a feature" marketing.
>>>> It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
>>>> But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
>>>> I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.
>>>>
>>>>
>>> <snip>
>>>
>>> Regards,
>>>
>>> David W. Schroth
>>
>> Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS... are evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/whatever.
>>

OS2200 currently supports more than 8 or 10 Application Groups, and has
for some time. I am unaware of any customer using anywhere near the
maximum number of Application Groups.
>> WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those instances were on Amazon eks or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.
>>

Back in the dawn of history, *everybody* had performance/capacity
issues, including OS1100. This does not generally seem to be the case
these days.

>> For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely coupled and uncoupled systems.
>

>ISTM that there is some conflation of two things here. One is the
>ability to provide greater isolation between applications than a single
>OS can provide (though the single-OS side could argue that a well-designed
>OS could provide all that you need); the other is the ability to adjust
>compute capacity on the fly to meet varying demand.

Not to pick on you (I had to respond somewhere), but I'd *really* like
to see some evidence that containers are more secure than running
applications on a 2200 or an MCP ayatwm. Or more secure than running
applications on IBM kit, for that matter.

Historically, one could adjust capacity on the fly by using a fully
configured (as regards processors) system and moving processors
between "partitions". Most of the time, people didn't bother.

Much later in the game, the systems had more capacity than sites
needed, so sites paid for what they used (or, more likely, what they
thought they would use), with the ability to register a new
performance key on (relatively) short notice if they underestimated
how much performance they needed.

>
>If you really wanted more separation among instances of OS 2200, I don't
>think there are any technical barriers preventing you from running
>multiple instances of Linux, each with its own copy of the emulator and
>OS/2200, to gain that separation. I know that Unisys modified Linux in
>some way to make OS/2200 run better on it, but I don't know whether it is
>technically possible to run multiple copies of the emulator on a
>single Linux, or multiple copies of OS/2200 on a single copy of the
>emulator. That would achieve the separation. But all of that is
>unrelated to varying the amount of CPU power available.
>
>No matter what OS you have, if you need more compute capacity, you need
>additional hardware. A single customer seems to me to be unlikely to
>have additional hardware available "just in case", so that requirement
>is met by some sort of "rent capacity as required" from some company
>that has lots of extra capacity to make available to different customers
>as needed, i.e. a "cloud computing facility". Having that seems to me
>to be independent of what software you run on that cloud. For example,
>assuming that a business case could be made for it, I don't think there is
>any substantial technical problem with getting extra X86 compute power
>from a cloud, running Linux on that new CPU with OS2200 on top of that
>Linux.
>
>Of course, there are business issues that I am not competent to discuss.
>
>On a related note, regarding the separation issue (again, not the
>varying capacity issue), how is the solution different from something
>like IBM's VM, which has been around since the 1970s?
>
>And I may be misremembering, but I vaguely recall an aborted attempt by
>Univac/Sperry to add a VM like capability in the form of a specialized
>instruction to aid that, perhaps in the 1110? Am I just having a fever
>dream??

IIRC, the capability was designed into a system post 1100/80 and pre
2200/900, but never saw the light of day.

Regards,

David W. Schroth

Re: scale-out of OS2200

<3yyrM.286721$65y6.283943@fx17.iad>


https://www.novabbs.com/computers/article-flat.php?id=363&group=comp.sys.unisys#363

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: sco...@slp53.sl.home (Scott Lurndal)
Date: Wed, 12 Jul 2023 14:13:19 GMT
 by: Scott Lurndal - Wed, 12 Jul 2023 14:13 UTC

Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
>On 7/10/2023 7:25 AM, Kurt Duncan wrote:

>On a related note, regarding the separation issue (again, not the
>varying capacity issue), how is the solution different from something
>like IBM's VM, which has been around since the 1970s?

A docker container (as an example) is much lighter weight than
a VM, yet still provides most of the isolation and security features.

Re: scale-out of OS2200

<%AyrM.286722$65y6.8155@fx17.iad>


https://www.novabbs.com/computers/article-flat.php?id=364&group=comp.sys.unisys#364

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: sco...@slp53.sl.home (Scott Lurndal)
Date: Wed, 12 Jul 2023 14:16:27 GMT
 by: Scott Lurndal - Wed, 12 Jul 2023 14:16 UTC

Lewis Cole <l_cole@juno.com> writes:
>On Monday, July 10, 2023 at 4:49:37 PM UTC-7, Stephen Fuld wrote:

>I don't think there's any obstacle (aside from cost) to running 2200 emulator
>software directly on Intel hardware with no hypervisor, or on a type-1
>(bare-metal) hypervisor, rather than on a KVM Linux hypervisor.
>But while doing so might make an emulated 2200 system more secure, I just
>don't see that as being economically worthwhile either.
>(I've spent some time wondering how difficult it would be to get BOOTBOOT
>to load up a 2200 emulator on bare metal directly, but that's something that
>a hobbyist can do, not a for-profit company with a limited body count.)

Actually, it should be quite straightforward to put the 2200 emulator
in a container and simply deploy as many containers as needed (cf. Docker).
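As a sketch of that idea only: the image name, container names, and port mapping below are hypothetical, not actual Unisys packaging. This just assembles the `docker run` command lines for N emulator containers rather than executing them.

```python
# Hedged sketch: generate the `docker run` argv for N copies of a
# (hypothetical) containerized 2200 emulator image. Each container gets a
# unique name and its own host port; nothing here is real Unisys tooling.

def docker_run_cmd(n: int, image: str = "os2200-emu:latest") -> list[str]:
    """argv for one detached emulator container."""
    return [
        "docker", "run", "-d",
        "--name", f"os2200-emu-{n}",
        "-p", f"{9000 + n}:1984",   # host port -> assumed emulator console port
        image,
    ]

def scale_out(count: int) -> list[list[str]]:
    """One `docker run` per container; deploy as many as demand requires."""
    return [docker_run_cmd(n) for n in range(count)]
```

A real deployment would run each argv via subprocess (or hand the equivalent spec to an orchestrator), tearing containers down again when demand drops.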

Re: scale-out of OS2200

<u9brvj$2qj03$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=384&group=comp.sys.unisys#384

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Date: Thu, 20 Jul 2023 10:47:29 -0700
 by: Stephen Fuld - Thu, 20 Jul 2023 17:47 UTC

On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> So, just spit-balling ideas here.
> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>
> One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
>
> TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
>
> Spin up one or more database instances as needed.
>
> Spin up two or three batch services for nightly processing.
>
> Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.
>
> Same thing with BIS.
>
> Then you would truly have cloud-native...ish OS2200. Thoughts?

After thinking about all the comments, ISTM that, without using the
names, you can do most, if not all of what you are suggesting using
already existing capabilities, without the need for any modifications to
the OS. The possible exception (and I am not sure about this; I think
it is a business issue, not so much a technical one) is if a customer
wants to avoid any direct hardware and hardware-management costs by
using a commercial cloud service such as AWS.

But the claimed reason for doing this, to attract more 2200 customers,
seems dubious to me. While it might be attractive to an existing
customer, and thus prevent losing that customer, I just don't see it
attracting any new name customers. There are just not enough
advantages, and too high a cost for a new name customer to adopt the
2200 environment.

Don't get me wrong. I think the 2200 environment has a lot of very nice
features.

I have often said that the problem is with Univac/Sperry/Unisys
marketing's utter failure to convince the world that 36 is an integral
power of 2. The world has settled on 8-bit byte addressability. That
the Unisys development and marketing teams have kept the systems viable
for so long (I believe the 2200 is by far the oldest, and last surviving
and actively marketed, 36-bit system) is a tribute to their abilities
and determination. But it is ultimately doomed, and their job now is to
stave off that doom for as long as possible, and I think they are doing
that well.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: scale-out of OS2200

<3891fcd0-6043-48b7-8f8d-8a1ac259b003n@googlegroups.com>


https://www.novabbs.com/computers/article-flat.php?id=385&group=comp.sys.unisys#385

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: kurtadun...@gmail.com (Kurt Duncan)
Date: Thu, 20 Jul 2023 14:10:11 -0700 (PDT)
 by: Kurt Duncan - Thu, 20 Jul 2023 21:10 UTC

On Thursday, July 20, 2023 at 11:47:34 AM UTC-6, Stephen Fuld wrote:
> On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> > So, just spit-balling ideas here.
> > Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
> >
> > One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
> >
> > TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
> >
> > Spin up one or more database instances as needed.
> >
> > Spin up two or three batch services for nightly processing.
> >
> > Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.
> >
> > Same thing with BIS.
> >
> > Then you would truly have cloud-native...ish OS2200. Thoughts?
> After thinking about all the comments, ISTM that, without using the
> names, you can do most, if not all of what you are suggesting using
> already existing capabilities, without the need for any modifications to
> the OS. The possible exception (and I am not sure about this and I
> think this is a business issue, not so much a technical one), is if a
> customer wants to avoid any direct hardware and hardware management
> costs by using a commercial cloud service such as AWS.
>
> But the claimed reason for doing this, to attract more 2200 customers,
> seems dubious to me. While it might be attractive to an existing
> customer, and thus prevent losing that customer, I just don't see it
> attracting any new name customers. There are just not enough
> advantages, and too high a cost for a new name customer to adopt the
> 2200 environment.
>
> Don't get me wrong. I think the 2200 environment has a lot of very nice
> features.
>
> I have often said that the problem is with Univac/Sperry/Unisys
> marketing's utter failure to convince the world that 36 is an integral
> power of 2. The world has settled on 8-bit byte addressability. That
> the Unisys development and marketing teams have kept the systems viable
> for so long (I believe the 2200 is by far the oldest, and last surviving
> and actively marketed, 36-bit system) is a tribute to their abilities
> and determination. But it is ultimately doomed, and their job now is to
> stave off that doom for as long as possible, and I think they are doing
> that well.
> --
> - Stephen Fuld
> (e-mail address disguised to prevent spam)

Thank you for an amazing discussion. I have much to consider, but I appreciate the knowledge in this forum, and your willingness to share.

Re: scale-out of OS2200

<ublmcf$3rqu0$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=401&group=comp.sys.unisys#401

Newsgroups: comp.sys.unisys
Subject: Re: scale-out of OS2200
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Date: Thu, 17 Aug 2023 10:45:51 -0700
 by: Stephen Fuld - Thu, 17 Aug 2023 17:45 UTC

On 7/20/2023 2:10 PM, Kurt Duncan wrote:
> On Thursday, July 20, 2023 at 11:47:34 AM UTC-6, Stephen Fuld wrote:
>> On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>>> So, just spit-balling ideas here.
>>> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>>>
>>> One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
>>>
>>> TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
>>>
>>> Spin up one or more database instances as needed.
>>>
>>> Spin up two or three batch services for nightly processing.
>>>
>>> Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.
>>>
>>> Same thing with BIS.
>>>
>>> Then you would truly have cloud-native...ish OS2200. Thoughts?
>> After thinking about all the comments, ISTM that, without using the
>> names, you can do most, if not all of what you are suggesting using
>> already existing capabilities, without the need for any modifications to
>> the OS. The possible exception (and I am not sure about this and I
>> think this is a business issue, not so much a technical one), is if a
>> customer wants to avoid any direct hardware and hardware management
>> costs by using a commercial cloud service such as AWS.
>>
>> But the claimed reason for doing this, to attract more 2200 customers,
>> seems dubious to me. While it might be attractive to an existing
>> customer, and thus prevent losing that customer, I just don't see it
>> attracting any new name customers. There are just not enough
>> advantages, and too high a cost for a new name customer to adopt the
>> 2200 environment.
>>
>> Don't get me wrong. I think the 2200 environment has a lot of very nice
>> features.
>>
>> I have often said that the problem is with Univac/Sperry/Unisys
>> marketing's utter failure to convince the world that 36 is an integral
>> power of 2. The world has settled on 8-bit byte addressability. That
>> the Unisys development and marketing teams have kept the systems viable
>> for so long (I believe the 2200 is by far the oldest, and last surviving
>> and actively marketed, 36-bit system) is a tribute to their abilities
>> and determination. But it is ultimately doomed, and their job now is to
>> stave off that doom for as long as possible, and I think they are doing
>> that well.
>> --
>> - Stephen Fuld
>> (e-mail address disguised to prevent spam)
>
> Thank you for an amazing discussion. I have much to consider, but I appreciate the knowledge in this forum, and your willingness to share.

Speaking of 2200 "in the cloud", look at what Unisys just announced.

https://www.unisys.com/announcements-and-updates/ecs/new-capability-released-allowing-clearpath-os-2200/

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
