computers / comp.os.vms / Re: OS implementation languages

Subject -- Author
* OS implementation languages -- Simon Clubley
+* Re: OS implementation languages -- Dennis Boone
|`* Re: OS implementation languages -- Simon Clubley
| `* Re: OS implementation languages -- Johnny Billquist
|  +* Re: OS implementation languages -- Simon Clubley
|  |+* Re: OS implementation languages -- Arne Vajhøj
|  ||+- Re: OS implementation languages -- terry-...@glaver.org
|  ||`* Re: OS implementation languages -- chrisq
|  || `* Re: OS implementation languages -- Simon Clubley
|  ||  +* Re: OS implementation languages -- Single Stage to Orbit
|  ||  |+* Re: OS implementation languages -- Simon Clubley
|  ||  ||+* Re: OS implementation languages -- Johnny Billquist
|  ||  |||+- Re: OS implementation languages -- Rich Alderson
|  ||  |||+* Re: OS implementation languages -- bill
|  ||  ||||+* Re: OS implementation languages -- Craig A. Berry
|  ||  |||||+* Re: OS implementation languages -- Arne Vajhøj
|  ||  ||||||`- Re: OS implementation languages -- Craig A. Berry
|  ||  |||||+* Re: OS implementation languages -- Arne Vajhøj
|  ||  ||||||`* Re: OS implementation languages -- Arne Vajhøj
|  ||  |||||| `- Re: OS implementation languages -- Dan Cross
|  ||  |||||+* Re: OS implementation languages -- bill
|  ||  ||||||`- Re: OS implementation languages -- Arne Vajhøj
|  ||  |||||`* Re: OS implementation languages -- Bob Gezelter
|  ||  ||||| +* Re: OS implementation languages -- Ian Miller
|  ||  ||||| |+- Re: OS implementation languages -- Bob Gezelter
|  ||  ||||| |+* Re: OS implementation languages -- Bob Gezelter
|  ||  ||||| ||`* Re: OS implementation languages -- Jan-Erik Söderholm
|  ||  ||||| || `- Re: OS implementation languages -- Ian Miller
|  ||  ||||| |`- Re: OS implementation languages -- Simon Clubley
|  ||  ||||| `- Re: OS implementation languages -- David Jones
|  ||  ||||+* Re: OS implementation languages -- Johnny Billquist
|  ||  |||||+* Re: OS implementation languages -- terry-...@glaver.org
|  ||  ||||||`* Re: OS implementation languages -- Johnny Billquist
|  ||  |||||| +- Re: OS implementation languages -- Ian Miller
|  ||  |||||| +- Re: OS implementation languages -- Jan-Erik Söderholm
|  ||  |||||| `* Re: OS implementation languages -- Arne Vajhøj
|  ||  ||||||  `* Re: OS implementation languages -- Bob Gezelter
|  ||  ||||||   +* Re: OS implementation languages -- Simon Clubley
|  ||  ||||||   |+- Re: OS implementation languages -- Single Stage to Orbit
|  ||  ||||||   |`- Re: OS implementation languages -- Johnny Billquist
|  ||  ||||||   `* Re: OS implementation languages -- Johnny Billquist
|  ||  ||||||    `* Re: OS implementation languages -- Dave Froble
|  ||  ||||||     `* Re: OS implementation languages -- Robert A. Brooks
|  ||  ||||||      +* Re: OS implementation languages -- Bob Gezelter
|  ||  ||||||      |`- Re: OS implementation languages -- Dave Froble
|  ||  ||||||      `- Re: OS implementation languages -- Dave Froble
|  ||  |||||`* Re: OS implementation languages -- Simon Clubley
|  ||  ||||| +- Re: OS implementation languages -- Dan Cross
|  ||  ||||| +- Re: OS implementation languages -- Dave Froble
|  ||  ||||| +- Re: OS implementation languages -- Arne Vajhøj
|  ||  ||||| `- Re: OS implementation languages -- Johnny Billquist
|  ||  ||||`* Re: OS implementation languages -- Simon Clubley
|  ||  |||| `* Re: OS implementation languages -- Bob Gezelter
|  ||  ||||  `- Re: OS implementation languages -- terry-...@glaver.org
|  ||  |||+* Re: OS implementation languages -- Simon Clubley
|  ||  ||||`* Re: OS implementation languages -- Johnny Billquist
|  ||  |||| `* Re: OS implementation languages -- Simon Clubley
|  ||  ||||  `* Re: OS implementation languages -- Johnny Billquist
|  ||  ||||   `- Re: OS implementation languages -- Simon Clubley
|  ||  |||`* Re: OS implementation languages -- Dan Cross
|  ||  ||| `- Re: OS implementation languages -- Johnny Billquist
|  ||  ||`* Re: OS implementation languages -- gah4
|  ||  || +* Re: OS implementation languages -- Bob Gezelter
|  ||  || |`* Re: OS implementation languages -- Johnny Billquist
|  ||  || | +* Re: OS implementation languages -- Bob Gezelter
|  ||  || | |`* Re: OS implementation languages -- Johnny Billquist
|  ||  || | | +* Re: OS implementation languages -- Bob Gezelter
|  ||  || | | |`* Re: OS implementation languages -- Johnny Billquist
|  ||  || | | | `* Re: OS implementation languages -- Johnny Billquist
|  ||  || | | |  `* Re: OS implementation languages -- gah4
|  ||  || | | |   `- Re: OS implementation languages -- Johnny Billquist
|  ||  || | | `* Re: OS implementation languages -- Bob Gezelter
|  ||  || | |  `- Re: OS implementation languages -- Johnny Billquist
|  ||  || | `* Re: OS implementation languages -- Bob Gezelter
|  ||  || |  +- Re: OS implementation languages -- gah4
|  ||  || |  `- Re: OS implementation languages -- Johnny Billquist
|  ||  || +- Re: OS implementation languages -- Simon Clubley
|  ||  || `* Re: OS implementation languages -- Dan Cross
|  ||  ||  `- Re: OS implementation languages -- Johnny Billquist
|  ||  |`* Re: OS implementation languages -- Arne Vajhøj
|  ||  | +- Re: OS implementation languages -- Single Stage to Orbit
|  ||  | `* Re: OS implementation languages -- chrisq
|  ||  |  +- Re: OS implementation languages -- plugh
|  ||  |  +- Re: OS implementation languages -- Arne Vajhøj
|  ||  |  +- Re: OS implementation languages -- plugh
|  ||  |  `* Re: OS implementation languages -- Scott Dorsey
|  ||  |   `* Re: OS implementation languages -- Chris Townley
|  ||  |    +* Re: OS implementation languages -- Simon Clubley
|  ||  |    |+* Re: OS implementation languages -- Dave Froble
|  ||  |    ||+- Re: OS implementation languages -- Single Stage to Orbit
|  ||  |    ||+- Re: OS implementation languages -- Arne Vajhøj
|  ||  |    ||`* Re: OS implementation languages -- bill
|  ||  |    || `* Re: OS implementation languages -- Dan Cross
|  ||  |    ||  +* Re: OS implementation languages -- bill
|  ||  |    ||  |+* Re: OS implementation languages -- Simon Clubley
|  ||  |    ||  ||+* Re: OS implementation languages -- bill
|  ||  |    ||  |||+* Re: OS implementation languages -- Scott Dorsey
|  ||  |    ||  ||||`* Re: OS implementation languages -- bill
|  ||  |    ||  |||| `- Re: OS implementation languages -- Scott Dorsey
|  ||  |    ||  |||`* Re: OS implementation languages -- Arne Vajhøj
|  ||  |    ||  ||| `* Re: OS implementation languages -- bill
|  ||  |    ||  ||`* Re: OS implementation languages -- Arne Vajhøj
|  ||  |    ||  |`* Re: OS implementation languages -- Arne Vajhøj
|  ||  |    ||  `- Re: OS implementation languages -- Arne Vajhøj
|  ||  |    |+- Re: OS implementation languages -- Chris Townley
|  ||  |    |`* Re: OS implementation languages -- Bob Gezelter
|  ||  |    `- Re: OS implementation languages -- Scott Dorsey
|  ||  `* Re: OS implementation languages -- Arne Vajhøj
|  |`- Re: OS implementation languages -- Alexander Schreiber
|  `* Re: OS implementation languages -- Rich Alderson
`* Re: OS implementation languages -- Bob Eager

Re: OS implementation languages

<ucq14l$39e89$3@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29553&group=comp.os.vms#29553

From: club...@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 12:30:13 -0000 (UTC)
Organization: A noiseless patient Spider

On 2023-08-30, Johnny Billquist <bqt@softjar.se> wrote:
> On 2023-08-30 14:52, Simon Clubley wrote:
>>
>> Why do you say that ? There will always be OS overheads. The only question
>> is how large are those overheads ?
>
> Yes. And that was not the question. Maybe you should go back and check
> what question you actually wrote.
>

Actually, that's _exactly_ what the question was. Read it again.

Given that Linux, on the same hardware, ran slower than FreeBSD,
I was wondering how much slower VMS would be than Linux on the
same hardware.

Those differences, on the same hardware, are due to different OS overheads,
so yes, that's _exactly_ what the question was.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Re: OS implementation languages

<ucq4mg$ll1$1@reader2.panix.com>

https://www.novabbs.com/computers/article-flat.php?id=29554&group=comp.os.vms#29554

From: cro...@spitfire.i.gajendra.net (Dan Cross)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 13:30:56 -0000 (UTC)
Organization: PANIX Public Access Internet and UNIX, NYC

In article <215e5a5a-d9b6-40fb-ad94-3ee8e8ad92e8n@googlegroups.com>,
gah4 <gah4@u.washington.edu> wrote:
>On Tuesday, August 29, 2023 at 10:25:31 AM UTC-7, Simon Clubley wrote:
>
> (snip)
>
>> 400GB/s ??? Is that all ??? Amateurs!!! :-)
>
>> On a more serious note, I wonder what the maximum rate VMS is capable
>> of emitting data at if it was using the fastest network hardware
>> available.
>
>I am not sure what hardware can do now.
>
>Traditionally, Ethernet was much faster than processors, such that the
>shared media could handle the load.
>
>That is less obvious now, but a 400Gb/s network doesn't mean that one host
>can go that fast.

400Gbps is at the high-end of what one can deliver to a single
system at this point; one or two infiniband cards into a PCIe
gen4 backplane will get you there.
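
For scale: 400 Gb/s is about 50 GB/s of payload, while a single PCIe
4.0 x16 slot tops out at roughly 31 GB/s, so it really does take more
than one card (or a PCIe 5.0 slot); figures are approximate.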

This will overwhelm just about any general purpose CPU currently
on the market, so a lot of overhead is offloaded to accelerator
hardware on the NIC, but making effective use of _that_ requires
specialized drivers and cooperation with the host. As a simple
example, the NIC may support offloading layer 3 checksum
calculations, but in order to use that effectively the host
software has to know about it, configure the hardware to do it,
and configure itself to avoid repeating the calculations higher
up in the stack (otherwise, what's the point of offloading?).
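
As a concrete illustration of the kind of work being offloaded, here is
the classic RFC 1071 ones-complement checksum computed in software; a
minimal, standalone sketch, not taken from any particular driver or
stack:

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* RFC 1071 Internet checksum: sum 16-bit words in ones-complement
       arithmetic, fold the carries back in, and invert the result.
       This is the per-packet work a NIC with checksum offload absorbs. */
    static uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                /* odd trailing byte, zero-padded */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)            /* fold 32-bit sum into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;
    }

    int main(void)
    {
        /* An arbitrary 20-byte IPv4-style header with a zeroed
           checksum field, just to show the call. */
        uint8_t hdr[] = { 0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00,
                          0x40, 0x06, 0x00, 0x00, 0xac, 0x10, 0x0a, 0x63,
                          0xac, 0x10, 0x0a, 0x0c };
        printf("checksum = 0x%04x\n", inet_checksum(hdr, sizeof hdr));
        return 0;
    }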

This also implies that "throw more hardware at it!" is only
part of a possible solution to a performance problem: if the
software isn't similarly modified to take advantage of the
capabilities of that hardware, you may not see much in terms of
actual gains.

All of this is to say that the OS can have large effects on
realized, real-world performance on high-end hardware, and it
can be useful to quantify that so as to better understand
performance in context.

- Dan C.

Re: OS implementation languages

<c6d8bc75865f5b3573ab556d380fe1c5a34010d6.camel@munted.eu>

https://www.novabbs.com/computers/article-flat.php?id=29555&group=comp.os.vms#29555

From: alex.bu...@munted.eu (Single Stage to Orbit)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 14:26:56 +0100
Organization: One very high maintenance cat

On Thu, 2023-08-31 at 12:15 +0000, Simon Clubley wrote:
> BTW, did you know you can send dd a signal to tell you how far along
> in the copy process it is and to tell you how fast the copy is going
> ?

Newer versions of dd have 'status=progress', which you can use to tell
dd to show a running total of bytes copied and the speed at which they
are being copied.
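
For example, something like "dd if=/dev/sdX of=backup.img bs=1M
status=progress" prints a running byte count, the elapsed time and the
current transfer rate while it copies (GNU dd; the device name here is
just a placeholder).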
--
Tactical Nuclear Kittens

Re: OS implementation languages

<ucqhjr$svv$2@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29556&group=comp.os.vms#29556

From: bqt...@softjar.se (Johnny Billquist)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 19:11:23 +0200
Organization: MGT Consulting

On 2023-08-31 04:16, Bob Gezelter wrote:
> On Wednesday, August 30, 2023 at 4:58:45 PM UTC-4, Arne Vajhøj wrote:
>> On 8/30/2023 4:53 AM, Johnny Billquist wrote:
>>> Not sure how easy it is to dodge RMS under VMS. In RSX, you can just do
>>> the QIOs to the ACP yourself and go around the whole thing, which makes
>>> I/O way faster. Of course, since files still have this structure thing,
>>> most of the time you are still going to have to pay for it somewhere.
>>> But if you are happy with just raw disk blocks, the basic I/O do not
>>> have near as much penalty. Admitted, the ODS-1 (as well as ODS-2)
>>> structure have some inherent limitations that carry some cost as well.
>>> So you could improve things some by doing some other implementation on
>>> the file system level.
>>> But mainly, no matter what the file system design is, you are still
>>> going to have the pain of RMS, which is the majority of the cost. And
>>> you'll never get away from this as long as you use VMS.
>> SYS$QIO(W) for files works fine on VMS too.
>>
>> But a bit of a hassle to use.
>>
>> There are two alternative ways to to bypass RMS:
>> * SYS$IO_PERFORM(W) - the "fast I/O" thingy
>> * SYS$CRMPSC - mapping the file to memory
>>
>> Arne
> Arne,
>
> One can bypass RMS, but it is not RMS that is the inherent problem. In my experience, it is not so much using RMS, but using RMS poorly that is the source of most problems.
>
> As I noted in another post in this thread, increasing buffer factors and block sizes often virtually eliminates "RMS" performance problems. File extensions are costly, extending files by large increments also reduces overhead, increasing performance.

I would agree that you can certainly make RMS give better performance
than it does by default. Caching data, getting it to do fewer copying
operations when possible... Extend files by larger increments.
Definitely helps...
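
To make that concrete, those knobs are reachable from a C program
through the FAB/RAB fields. A minimal, untested sketch follows, using
the RMS field names as I recall them from the documentation, so treat
the details as assumptions rather than a recipe:

    #include <rms.h>
    #include <starlet.h>
    #include <stdio.h>

    int main(void)
    {
        struct FAB fab = cc$rms_fab;     /* prototype FAB/RAB initializers */
        struct RAB rab = cc$rms_rab;

        fab.fab$l_fna = "BIGFILE.DAT";
        fab.fab$b_fns = sizeof "BIGFILE.DAT" - 1;
        fab.fab$w_deq = 5000;            /* extend 5000 blocks at a time */

        rab.rab$l_fab = &fab;
        rab.rab$b_mbc = 127;             /* multiblock count: big transfers */
        rab.rab$b_mbf = 4;               /* multibuffer count: more buffers */
        rab.rab$l_rop = RAB$M_RAH | RAB$M_WBH;  /* read-ahead, write-behind */

        if ((sys$open(&fab) & 1) && (sys$connect(&rab) & 1)) {
            /* sys$get()/sys$put() calls now run with the tuned settings */
            sys$close(&fab);
        } else {
            printf("RMS open/connect failed\n");
        }
        return 0;
    }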

But depending on how far you want to push I/O, at some point skipping
RMS is obviously going to give you more I/O performance. The cost is
either having to replicate parts of what RMS does yourself, or really
just dealing with raw disk blocks.

But it is also true that the ODS-2 (or ODS-5, I guess) filesystem could
be improved on. That's a bit more work, though, and it's not horribly
bad most of the time.

The worst part is that for large files, you need to walk the retrieval
pointers in order to find which logical block to fetch when you request
a virtual block in a file. The retrieval pointer table is not that great
when you have a large file, as you cannot skip parts of it, but always
have to scan it from the start up to the virtual block you want. That
might require reading additional blocks of the file header to get to the
extension headers.

Compared to, for example, ffs in Unix, this is not as fast. In ffs, you
can directly compute where to find the mapping from a virtual block to
its logical block without traversing a list of unknown size, even for
very large files. It might require reading up to 3 additional disk
blocks in order to find the mapping, when we talk about really, really
big files. But that is still probably cheap compared to ODS-2 for
equally large files. Not to mention that caching makes it much cheaper
on the ffs side than on the ODS-2 side. Other Unix file systems have
improved on ffs as well, so they do even better. ODS-2 is old, and has
its weaker sides.
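
To make the contrast concrete, here is a toy sketch of the two lookup
shapes: scanning a list of extents to map a virtual block, versus
computing the indirection level directly from the block number,
ffs-style. The structures and constants are illustrative only, not the
real ODS-2 or ffs on-disk formats:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Toy extent list: each entry maps 'count' consecutive virtual
       blocks starting at logical block 'start_lbn'. Finding a VBN means
       walking the list from the beginning, so the cost grows with the
       number of extents. */
    struct extent { uint32_t count; uint32_t start_lbn; };

    static int64_t map_by_extent_scan(const struct extent *ext, size_t n,
                                      uint32_t vbn)
    {
        uint32_t base = 0;
        for (size_t i = 0; i < n; i++) {
            if (vbn < base + ext[i].count)
                return (int64_t)ext[i].start_lbn + (vbn - base);
            base += ext[i].count;
        }
        return -1;                     /* beyond end of file */
    }

    /* ffs-style lookup: a few direct pointers, then single, double and
       triple indirect blocks. The indirection level (and the index at
       each level) follows from arithmetic on the block number alone;
       there is nothing to scan. */
    #define NDIRECT 12
    #define PER_IND 2048               /* 4-byte pointers in an 8 KB block */

    static int indirection_level(uint64_t bn)
    {
        if (bn < NDIRECT) return 0;
        bn -= NDIRECT;
        if (bn < PER_IND) return 1;
        bn -= PER_IND;
        if (bn < (uint64_t)PER_IND * PER_IND) return 2;
        return 3;
    }

    int main(void)
    {
        struct extent ext[] = { {100, 5000}, {50, 9000}, {200, 12000} };
        printf("VBN 120 -> LBN %lld\n",
               (long long)map_by_extent_scan(ext, 3, 120));
        printf("block 5000000 needs %d level(s) of indirection\n",
               indirection_level(5000000));
        return 0;
    }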

Then you have all the complexity of the directory files, which makes it
costly to add and remove information when the directory is large.

So sure. A new file system could help improve the performance of disk
I/O in VMS, but it's generally not at all as bad as some people try to
make it out to be.

Another thing is memory mapped files, which is a big deal in Unix. The
reason for that is actually because "normal" I/O always goes through
intermediary buffers in Unix, so you have some significant overhead and
additional copying going on all the time. Using memory mapped I/O
circumvents all that bad cruft in Unix. In VMS, if you use QIO and talk
directly to the ACP (or XQP), you are already in that good place that
Unix people achieve with memory mapped I/O. Which is basically that
reads and writes go directly from disk to the user process memory. You
can't do any better than that. Memory-mapped I/O does not let you get
around the fact that the data still needs to be accessed on the disk and
DMAed into memory somewhere; that is the absolute minimum that always
has to happen. And by talking directly to the ACP, you are already
there.
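
For reference, the memory-mapped path described above looks roughly like
this on a Unix system; a minimal sketch with error handling kept short:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }

        /* Map the file's pages into the address space; touching them
           faults the data in directly, with no read() copy through an
           intermediate user buffer. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long lines = 0;                /* count newlines as stand-in work */
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n')
                lines++;
        printf("%ld lines\n", lines);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }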

Johnny

Re: OS implementation languages

<ucqhr3$svv$3@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29557&group=comp.os.vms#29557

From: bqt...@softjar.se (Johnny Billquist)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 19:15:15 +0200
Organization: MGT Consulting

On 2023-08-31 14:15, Simon Clubley wrote:
> On 2023-08-30, Bob Gezelter <gezelter@rlgsc.com> wrote:
>>
>> The other day I needed to copy a bare partition on a linux system. When refreshing my recollection of dd, noted that dd, for historical reasons, copies one block at a time. Increased performance by reading/writing a megabyte at a time.
>>
>
> :-)
>
> Adjusting dd command line options for efficiency is one of the first things
> you learn to do when you start using dd for "serious" things. :-)

Yes. That can make a *huge* difference.

> I learnt that back in the late 1990s when it comes to dd...
>
> BTW, did you know you can send dd a signal to tell you how far along
> in the copy process it is and to tell you how fast the copy is going ?

Yes. But which signal to use differs between systems. In Linux you use SIGUSR1
(for lack of any better choice).
In BSD you send it SIGINFO, which you can configure your terminal to
send when you hit something like ^T (now, where have I seen ^T for
printing out some information before...? ;-) ).
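
A bare-bones version of that pattern in C, combining a large block size
with a progress signal (SIGINFO where it exists, SIGUSR1 otherwise); a
sketch for illustration, not dd itself:

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #ifdef SIGINFO
    #define PROGRESS_SIG SIGINFO      /* BSD: terminal can send it on ^T */
    #else
    #define PROGRESS_SIG SIGUSR1      /* Linux and friends */
    #endif

    static volatile sig_atomic_t report = 0;
    static void on_signal(int sig) { (void)sig; report = 1; }

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: %s in out\n", argv[0]); return 1; }

        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sigemptyset(&sa.sa_mask);
        sa.sa_handler = on_signal;
        sa.sa_flags = SA_RESTART;     /* don't abort the read() in flight */
        sigaction(PROGRESS_SIG, &sa, NULL);

        static char buf[1024 * 1024]; /* 1 MiB at a time, not 512 bytes */
        unsigned long long total = 0;
        ssize_t n;

        while ((n = read(in, buf, sizeof buf)) > 0) {
            /* short writes treated as errors to keep the sketch small */
            if (write(out, buf, (size_t)n) != n) { perror("write"); break; }
            total += (unsigned long long)n;
            if (report) {
                fprintf(stderr, "%llu bytes copied\n", total);
                report = 0;
            }
        }
        if (n < 0) perror("read");
        fprintf(stderr, "%llu bytes copied\n", total);
        close(in); close(out);
        return 0;
    }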

Johnny

Re: OS implementation languages

<ucqi23$svv$4@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29558&group=comp.os.vms#29558

From: bqt...@softjar.se (Johnny Billquist)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 19:18:59 +0200
Organization: MGT Consulting

On 2023-08-31 14:30, Simon Clubley wrote:
> On 2023-08-30, Johnny Billquist <bqt@softjar.se> wrote:
>> On 2023-08-30 14:52, Simon Clubley wrote:
>>>
>>> Why do you say that ? There will always be OS overheads. The only question
>>> is how large are those overheads ?
>>
>> Yes. And that was not the question. Maybe you should go back and check
>> what question you actually wrote.
>>
>
> Actually, that's _exactly_ what the question was. Read it again.
>
> Given that Linux, on the same hardware, ran slower than FreeBSD,
> I was wondering how much slower VMS would be than Linux on the
> same hardware.
>
> Those differences, on the same hardware, are due to different OS overheads,
> so yes, that's _exactly_ what the question was.

*Sigh*. Since you obviously are incapable of checking what your
question was, or of remembering it, I guess I'll have to quote it for you:

"On a more serious note, I wonder what the maximum rate VMS is capable
of emitting data at if it was using the fastest network hardware
available. "

Now, where was there any talk about "the same hardware running another OS"?

All you said was "the fastest network hardware available". How hard can
this be to read?

Johnny

Re: OS implementation languages

<ucqjhr$3cqvj$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29559&group=comp.os.vms#29559

From: club...@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 17:44:27 -0000 (UTC)
Organization: A noiseless patient Spider

On 2023-08-31, Johnny Billquist <bqt@softjar.se> wrote:
> On 2023-08-31 14:30, Simon Clubley wrote:
>> On 2023-08-30, Johnny Billquist <bqt@softjar.se> wrote:
>>> On 2023-08-30 14:52, Simon Clubley wrote:
>>>>
>>>> Why do you say that ? There will always be OS overheads. The only question
>>>> is how large are those overheads ?
>>>
>>> Yes. And that was not the question. Maybe you should go back and check
>>> what question you actually wrote.
>>>
>>
>> Actually, that's _exactly_ what the question was. Read it again.
>>
>> Given that Linux, on the same hardware, ran slower than FreeBSD,
>> I was wondering how much slower VMS would be than Linux on the
>> same hardware.
>>
>> Those differences, on the same hardware, are due to different OS overheads,
>> so yes, that's _exactly_ what the question was.
>
> *Sigh*. Since you obviously then are incapable of checking what your
> question was, or remember it, I guess I'll have to quote it for you:
>
> "On a more serious note, I wonder what the maximum rate VMS is capable
> of emitting data at if it was using the fastest network hardware
> available. "
>
> Now, where was there any talk about "the same hardware running another OS"?
>

It's in the context of the discussion, Johnny.

FreeBSD is being used in this case with what is pretty much the fastest
networking hardware available today in a server.

I merely extended that observation to ask how VMS would perform in
those circumstances.

I suspect that you are probably the only one around here who doesn't
understand that (or doesn't _want_ to understand that for whatever
reason). Wrong side of the bed or crappy weather in Zurich ? :-)

> All you said was "the fastest network hardware available". How hard can
> this be to read?
>

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Re: OS implementation languages

<ucqjkg$svv$5@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29560&group=comp.os.vms#29560

From: bqt...@softjar.se (Johnny Billquist)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 19:45:52 +0200
Organization: MGT Consulting

On 2023-08-31 11:23, Bob Gezelter wrote:
> On Thursday, August 31, 2023 at 12:35:05 AM UTC-4, gah4 wrote:
>> On Tuesday, August 29, 2023 at 10:25:31 AM UTC-7, Simon Clubley wrote:
>>
>> (snip)
>>> 400GB/s ??? Is that all ??? Amateurs!!! :-)
>>> On a more serious note, I wonder what the maximum rate VMS is capable
>>> of emitting data at if it was using the fastest network hardware
>>> available.
>> I am not sure what hardware can do now.
>>
>> Traditionally, Ethernet was much faster than processors, such that the
>> shared media could handle the load.
>>
>> That is less obvious now, but a 400Gb/s network doesn't mean that one host
>> can go that fast.
>>
>> Otherwise, there are many stories of running old OS in emulation on modern
>> hardware, running into problems that never would have occurred years ago.
>>
>> One is that emulators often do synchronous I/O. The I/O interrupt occurs almost
>> immediately, as seen by the OS. Some OS assume that there is time in between.
>>
>> It is, then, possible that surprises will be found when running I/O at higher speed.
>>
>> It might be useful to say which host architecture you were asking about.
>> I am sure no-one thought about 400Gb/s Ethernet for VAX.
> gah4,
>
> Ethernet (and other CSMA/CD networking approaches) in a configuration with more than a single, full duplex connection connecting two adapters are essentially limited to a maximum effective utilization of 30% before contention backoff becomes unacceptable.

Actually, that was/is a common misconception that still appears to be
around. For Ethernet, it has been shown that it performs just fine up to
about 70-80% utilization.

The 30% is basically what you see with Aloha-based communication.
However, Ethernet is not really like Aloha. The point is that once the
initial 64 bits have been transmitted, no one else is going to start
transmitting, so no collision will happen after that, and the full
packet will go through. Basically, Ethernet is based on listening before
starting to transmit, while Aloha does no such thing.
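
(For reference, the classical Aloha figures: pure Aloha throughput is
S = G * e^(-2G), which peaks at 1/(2e), about 18% of capacity, and
slotted Aloha peaks at 1/e, about 37%; the often-quoted ~30% ceiling
sits between those two numbers.)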

You can find some of the math behind this at
https://intronetworks.cs.luc.edu/1/html/ethernet.html, for example. But
you should be able to find more articles about this if you search around.

> Active switches, as opposed to hubs, can increase this threshold as each cable is physically connected to an interface and a switch port.

Yes. With TP and switches on Ethernet, every connection is
contention-free, and the only problem is that switches have finite
resources: if many ports want to talk to one specific port, that port
will be saturated and packets will have to be dropped. But you will at
least be able to get to the nominal speed of the port.

> Instant I/O completion has uncovered all manner of bugs and deficiencies over the years. Most such problems are at the driver level, where a driver fails to ensure that data structures are correctly prepared for an immediate interrupt when the "Go" bit is triggered. On occasion, an application has been written to run very I/O bound with the presumption that I/O delay will slow it down, e.g., OpenVMS COPY, RSX-11 PIP, linux dd. Combine a greedy application with a high dispatch priority and voila, monopolized machine for the duration. On OpenVMS, put a process at a base priority above most users, say 6-7, and run a large COPY from one instantaneous mass storage device to another. Other users see an effectively dead machine.

True. Most of the time I don't think there are many problems with I/O
completing immediately (there is one known such problem in RSX: the
console terminal device driver gets the data reversed if the interrupt
happens immediately). But starving other processes of resources when
I/O completes immediately is definitely a thing in almost any OS.

> Virtual machine-level synchronous I/O is a gargantuan performance impediment on many levels. My aforementioned Ph.D. dissertation on I/O performance, one of the main conclusions was that unnecessary serialization at any point in the I/O stack was a performance cliff. Serialization obstructs lower-level optimization and workload management. The mathematics could not be more definitive.

Yes. And I'll just reflect from an RSX point of view here, since I'm so
intimately aware of the innards there. RSX actually has very little
serialization in most places. The system is very asynchronous in its
core design, which is used by drivers and software to let hardware
reorder operations when it wants to, since hardware can do this better
than software most of the time. But reordering in software is also
something RSX can do. For the MSCP device driver, for example, software
reordering is disabled, because it just lets the hardware do it instead.
And requests from programs can easily be asynchronous as well. In fact
they always are, but programs can request to wait for the completion if
they have nothing else to do meanwhile. But lots of programs do make use
of this possibility, since the CPU wasn't that fast to begin with, so
you were often very interested in ways to get more performance out of
your software, and having I/O going on in the background was the most
obvious way to improve.
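
The issue-now, wait-later pattern described above for RSX QIOs has a
rough Unix-side analogue in POSIX AIO; a minimal sketch (on older glibc,
link with -lrt), offered only as an illustration of the shape of it:

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[65536];
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

        /* ... do other useful work here while the read is in flight ... */

        while (aio_error(&cb) == EINPROGRESS)
            ;                          /* or aio_suspend() to sleep instead */

        ssize_t n = aio_return(&cb);
        printf("read %zd bytes asynchronously\n", n);
        close(fd);
        return 0;
    }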

VMS inherited much of the design around this from RSX, so much is the
same. But of course, with more resources, less effort was put into
making use of such constructs. After all, it takes more work to write
and ensure it works right.

Johnny

Re: OS implementation languages

<ucqjtc$svv$6@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29561&group=comp.os.vms#29561

From: bqt...@softjar.se (Johnny Billquist)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 19:50:36 +0200
Organization: MGT Consulting

On 2023-08-31 15:30, Dan Cross wrote:
> In article <215e5a5a-d9b6-40fb-ad94-3ee8e8ad92e8n@googlegroups.com>,
> gah4 <gah4@u.washington.edu> wrote:
>> On Tuesday, August 29, 2023 at 10:25:31 AM UTC-7, Simon Clubley wrote:
>>
>> (snip)
>>
>>> 400GB/s ??? Is that all ??? Amateurs!!! :-)
>>
>>> On a more serious note, I wonder what the maximum rate VMS is capable
>>> of emitting data at if it was using the fastest network hardware
>>> available.
>>
>> I am not sure what hardware can do now.
>>
>> Traditionally, Ethernet was much faster than processors, such that the
>> shared media could handle the load.
>>
>> That is less obvious now, but a 400Gb/s network doesn't mean that one host
>> can go that fast.
>
> 400Gbps is at the high-end of what one can deliver to a single
> system at this point; one or two infiniband cards into a PCIe
> gen4 backplane will get you there.
>
> This will overwhelm just about any general purpose CPU currently
> on the market, so a lot of overhead is offloaded to accelerator
> hardware on the NIC, but making effective use of _that_ requires
> specialized drivers and cooperation with the host. As a simple
> example, the NIC may support offloading layer 3 checksum
> calculations, but in order to use that effectively the host
> software has to know about it, configure the hardware to do it,
> and configure itself to avoid repeating the calculations higher
> up in the stack (otherwise, what's the point of offloading?).

Well. It's not so much that you are overloading the CPU as the bus.
It's a huge bottleneck to move data anywhere. So yes, offloading as much
as possible to where the data is already passing by anyway is a big win.

I haven't checked, but I would hope that VMS can also make use of things
like checksum offloading. Pretty much any other OS can these days.

> This also implies that, "throw more hardware at it!" is only
> part of a possible solution to a performance problem: if the
> software isn't similarly modified to take advantage of the
> capabilities of that hardware, you may not see much in terms of
> actual gains.

Certainly. If you throw hardware at a problem, and then don't use the
hardware, you didn't gain anything.

Johnny

Re: OS implementation languages

<75e465e0-5eef-49e2-b7ea-aa845d526ec4n@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29562&group=comp.os.vms#29562

From: gezel...@rlgsc.com (Bob Gezelter)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 12:12:00 -0700 (PDT)

On Thursday, August 31, 2023 at 1:45:56 PM UTC-4, Johnny Billquist wrote:
> On 2023-08-31 11:23, Bob Gezelter wrote:
> > On Thursday, August 31, 2023 at 12:35:05 AM UTC-4, gah4 wrote:
> >> On Tuesday, August 29, 2023 at 10:25:31 AM UTC-7, Simon Clubley wrote:
> >>
> >> (snip)
> >>> 400GB/s ??? Is that all ??? Amateurs!!! :-)
> >>> On a more serious note, I wonder what the maximum rate VMS is capable
> >>> of emitting data at if it was using the fastest network hardware
> >>> available.
> >> I am not sure what hardware can do now.
> >>
> >> Traditionally, Ethernet was much faster than processors, such that the
> >> shared media could handle the load.
> >>
> >> That is less obvious now, but a 400Gb/s network doesn't mean that one host
> >> can go that fast.
> >>
> >> Otherwise, there are many stories of running old OS in emulation on modern
> >> hardware, running into problems that never would have occurred years ago.
> >>
> >> One is that emulators often do synchronous I/O. The I/O interrupt occurs almost
> >> immediately, as seen by the OS. Some OS assume that there is time in between.
> >>
> >> It is, then, possible that surprises will be found when running I/O at higher speed.
> >>
> >> It might be useful to say which host architecture you were asking about.
> >> I am sure no-one thought about 400Gb/s Ethernet for VAX.
> > gah4,
> >
> > Ethernet (and other CSMA/CD networking approaches) in a configuration with more than a single, full duplex connection connecting two adapters are essentially limited to a maximum effective utilization of 30% before contention backoff becomes unacceptable.
> Actually, that was/is a common misconception that still appears to be
> around. With ethernet it was shown that it performs just fine up to
> about 70-80% utilization.
>
> The 30% is basically what you see with Aloha based communication.
> However, Ethernet is not really like Aloha. The point being that once
> the initial 64 bits have been transmitted, noone else is going to start
> transmitting, so no collision will happen after that, and the full
> packet will go through. Basically, ethernet is based on listen before
> starting transmit, while Aloha do no such thing.
>
> You can find some math behid this on
> https://intronetworks.cs.luc.edu/1/html/ethernet.html for example. But
> you should be able to find more articles about this if you search around.
> > Active switches, as opposed to hubs, can increase this threshold as each cable is physically connected to an interface and a switch port.
> Yes. With TP and switches on ethernet, every connection is contention
> less, and the only problem is that switches have finite resources, and
> if many ports want to talk towards a specific port, that will be
> saturated, and packets will have to be dropped. But you will be able to
> get to the nominal speed of the port at least.
> > Instant I/O completion has uncovered all manner of bugs and deficiencies over the years. Most such problems are at the driver level, where a driver fails to ensure that data structures are correctly prepared for an immediate interrupt when the "Go" bit is triggered. On occasion, an application has been written to run very I/O bound with the presumption that I/O delay will slow it down, e.g., OpenVMS COPY, RSX-11 PIP, linux dd. Combine a greedy application with a high dispatch priority and voila, monopolized machine for the duration. On OpenVMS, put a process at a base priority above most users, say 6-7, and run a large COPY from one instantaneous mass storage device to another. Other users see an effectively dead machine.
> True. Most of the time I don't think there are many problems with I/O
> completing immediately (there is one known such problem in RSX, and that
> is the console terminal device driver that gets the data reversed if
> interrupt happen immediately). But starving other processes for
> resources when I/O completes immediately is definitely a thing in almost
> any OS.
> > Virtual machine-level synchronous I/O is a gargantuan performance impediment on many levels. My aforementioned Ph.D. dissertation on I/O performance, one of the main conclusions was that unnecessary serialization at any point in the I/O stack was a performance cliff. Serialization obstructs lower-level optimization and workload management. The mathematics could not be more definitive.
> Yes. And I'll just reflect from an RSX point of view here, since I'm so
> intimately aware of the innards there. RSX actually have very little
> serialization in most places. The system is very asynchronous in its
> core design. Which is utilized by drivers and software to allow hardware
> to reorder operations when it want to, since hardware can do this better
> than software most of the time. But reordering also in software is
> something that RSX can do. But for the MSCP device driver, for example,
> software reordering is disabled, because it just lets the hardware do it
> instead. And requests from programs can easily be asynchronous as well.
> In fact they always are, but programs can request to wait for the
> completion if they don't want to do something else meanwhile. But lots
> of programs do make use of this possibility, since the CPU wasn't that
> fast to begin with, so you were often very interested in ways to get
> more performance out of your software, and having I/O going on in the
> background was the most obvious way to improve.
>
> VMS inherited much of the design around this from RSX, so much is the
> same. But of course, with more resources, less effort was put into
> making use of such contructs. After all, it takes more work to write and
> ensure it works right.
>
> Johnny
Johnny,

You are correct that much of the I/O plumbing is highly similar in RSX and VMS, and for that matter WNT. A case of common authorship.

To understand the issues fully, one has to think not just linearly, but also carefully consider the timeline and other time-related issues. There are many conceptual traps.

However, file I/O has an explicit serialization: virtual to logical block mapping. The original I/O packet is reused iteratively to process the potentially discontiguous logical blocks required. This imposes a serialization that is not conducive to performance. There is an extended description and analysis in my dissertation/monograph; a somewhat abbreviated version appears in the previously referenced IEEE conference paper.

If one looks for correlations of logical block requests, the most common correlations are requests to the same file on the same access stream. Ibid. Sequential reuse of the same IO packet prevents optimization within a request stream.

Hardware reordering can only reorder requests that are simultaneously outstanding. Ibid.

The RSX design was obligatory due to the lack of system pool at the time; back then, even individual I/O packets were precious. Correctness beats efficiency. For more than two decades, increases in memory capacity and the expansion of non-paged pool have rendered the memory-minimization design goal quaintly anachronistic.

The preceding is all in the reprint. A thorough reading of the referenced paper is highly recommended. When I can find the time to finish copy-editing the monograph and get it printed, I will note its general availability in a posting.

- Bob Gezelter, http://www.rlgsc.com

Re: OS implementation languages

<ucqpsg$3dpgm$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29563&group=comp.os.vms#29563

From: dav...@tsoft-inc.com (Dave Froble)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 15:32:50 -0400
Organization: A noiseless patient Spider

On 8/31/2023 1:11 PM, Johnny Billquist wrote:
> On 2023-08-31 04:16, Bob Gezelter wrote:
>> On Wednesday, August 30, 2023 at 4:58:45 PM UTC-4, Arne Vajhøj wrote:
>>> On 8/30/2023 4:53 AM, Johnny Billquist wrote:
>>>> Not sure how easy it is to dodge RMS under VMS. In RSX, you can just do
>>>> the QIOs to the ACP yourself and go around the whole thing, which makes
>>>> I/O way faster. Of course, since files still have this structure thing,
>>>> most of the time you are still going to have to pay for it somewhere.
>>>> But if you are happy with just raw disk blocks, the basic I/O do not
>>>> have near as much penalty. Admitted, the ODS-1 (as well as ODS-2)
>>>> structure have some inherent limitations that carry some cost as well.
>>>> So you could improve things some by doing some other implementation on
>>>> the file system level.
>>>> But mainly, no matter what the file system design is, you are still
>>>> going to have the pain of RMS, which is the majority of the cost. And
>>>> you'll never get away from this as long as you use VMS.
>>> SYS$QIO(W) for files works fine on VMS too.
>>>
>>> But a bit of a hassle to use.
>>>
>>> There are two alternative ways to to bypass RMS:
>>> * SYS$IO_PERFORM(W) - the "fast I/O" thingy
>>> * SYS$CRMPSC - mapping the file to memory
>>>
>>> Arne
>> Arne,
>>
>> One can bypass RMS, but it is not RMS that is the inherent problem. In my
>> experience, it is not so much using RMS, but using RMS poorly that is the
>> source of most problems.
>>
>> As I noted in another post in this thread, increasing buffer factors and block
>> sizes often virtually eliminates "RMS" performance problems. File extensions
>> are costly, extending files by large increments also reduces overhead,
>> increasing performance.
>
> I would agree that you can certainly make RMS give better performance than it
> does by default. Caching data, getting it to do fewer copying operations when
> possible... Extend files by larger increments. Definitely helps...

Making your database files contiguous seems to be a winner. Larger clustersize
for database files also helps, a lot.

Moving the data directly from disk to I/O buffers, and back, avoids intermediate
copying of the data. Not talking about RMS here.

> But depending on how far you want to push I/O, at some point, skipping RMS is
> obviously always going to give you more I/O performance. But at the cost of
> either have to replicate parts of what RMS do yourself, or really just deal with
> raw disk blocks.

Breaking out the data fields is costly. But what else would work? If one needs
access to the data fields, then that is part of the task.

> But it is also true that the ODS-2 (or -5 I guess) filesystem could also be
> improved on. But that's a bit more work, and it's not horribly bad most of the
> time.

Depends on how it is used.

> The worst part is that for large files, you need to walk the retrieval pointers
> in order to find which logical block to fetch when you request a virtual block
> in a file. The retrieval pointer table is not that great when you have a large
> file, as you cannot skip parts of it, but need to always scan it from start up
> to the virtual block you want. Which might require reading additional blocks for
> the file header, getting to the extension headers.

Contiguous files ....

> Compared to, for example ffs in Unix, this is not as fast. In ffs, you can
> directly compute where to find the mapping for a virtual block to it's logical
> block without traversing a list of unknown size. Even for very large files. It
> might require reading up to 3 additional disk blocks in order to find the
> mapping, when we talk about really, really big files. But that is still probably
> cheap compared to ODS-2 for equally large files. Not to mention that caching
> makes it much cheaper on the ffs side than on the ODS-2 side. Other Unix file
> systems have improved some over ffs as well, so they do even better. ODS-2 is
> old, and have it's less good sides.
>
> Then you have all the complexity of the directory files that makes it costly do
> add and remove information when the directory is large.

Don't have large directory files ....

> So sure. A new file system could help to improve performance of disk I/O in VMS,
> but it's generally not at all as bad as some people try to make it out as.
>
> Another thing is memory mapped files, which is a big deal in Unix. The reason
> for that is actually because "normal" I/O always goes through intermediary
> buffers in Unix, so you have some significant overhead and additional copying
> going on all the time. Using memory mapped I/O circumvents all that bad cruft in
> Unix. In VMS, if you use QIO and talk directly to the ACP (or XQP), you are
> already in that good place that Unix people achieve with memory mapped I/O.
> Which is basically that reads and writes go directly from disk to the user
> process memory. You can't do any better than that. Memory mapped I/O do not
> allow you to go around the fact that the data still need to be accessed on the
> disk, and DMAed into memory somewhere. That is the absolute minimum that always
> have to happen. And with the direct talking to the ACP, you are already there.

Intelligent usage is a big winner ....

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Re: OS implementation languages

<ucqut1$3eejq$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29564&group=comp.os.vms#29564

From: FIRST.L...@vmssoftware.com (Robert A. Brooks)
Newsgroups: comp.os.vms
Subject: Re: OS implementation languages
Date: Thu, 31 Aug 2023 16:58:10 -0400
Organization: A noiseless patient Spider

On 8/31/2023 3:32 PM, Dave Froble wrote:

> Making your database files contiguous seems to be a winner.  Larger clustersize
> for database files also helps, a lot.

Cluster size is not used for I/O; it's only used for storage allocation, where
a single bit in the storage allocation bitmap covers the number of blocks
in that volume's clustersize.

I/O is not done in clustersize chunks.
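
A minimal sketch of that distinction (simplified, not the actual ODS-2
code): the clustersize only decides which bit in the storage bitmap
stands for a given block; the size of a transfer is whatever the caller
asked for.

    /* Illustration only: clustersize sets allocation granularity, nothing
       about transfer size.  One bitmap bit covers clustersize blocks. */
    #include <stdint.h>

    static uint32_t bitmap_bit_for_lbn(uint32_t lbn, uint32_t clustersize)
    {
        return lbn / clustersize;
    }

    static int cluster_is_allocated(const uint8_t *bitmap, uint32_t lbn,
                                    uint32_t clustersize)
    {
        uint32_t bit = bitmap_bit_for_lbn(lbn, clustersize);
        return (bitmap[bit >> 3] >> (bit & 7)) & 1;
    }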

--

--- Rob

Re: OS implementation languages

<9e5dc8f0-9304-45f1-8a42-dab90e6c062en@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29565&group=comp.os.vms#29565

 by: Bob Gezelter - Fri, 1 Sep 2023 00:08 UTC

On Thursday, August 31, 2023 at 4:58:13 PM UTC-4, Robert A. Brooks wrote:
> On 8/31/2023 3:32 PM, Dave Froble wrote:
>
> > Making your database files contiguous seems to be a winner. Larger clustersize
> > for database files also helps, a lot.
> Cluster size is not used for I/O; it's only used for storage allocation, where
> a single bit in the storage allocation bitmap covers the number of blocks
> in that volume's clustersize.
>
> I/O is not done in clustersize chunks.
>
> --
>
> --- Rob
Rob,

WADR to Dave, he omitted some steps. When Hein was giving DECUS and Bootcamp presentations on optimizing files, he took note of the fact that newer storage arrays worked with 16 block chunks internally. If I recall correctly, he recommended matching the cluster factor and other I/O-related parameters, e.g., bucket size and buffer sizes to the same granularity.

While it does not guarantee a lack of split I/Os it dramatically reduces their frequency, much as aligning quadwords and longwords improves program performance.

I have often encountered logical volumes that originated on RP and RM-class disk drives originally on VAX systems, with cluster sizes set to track and integral divisors of tracks. In the same way, they reduce splits. Of course, when these volumes are migrated to present technologies, the parameters require reconsideration and adjustment for optimal performance.

You are correct that there is no direct connection between cluster size and I/O request size. However, the lack of a direct connection does not prevent one from arranging it to be the most frequent case.
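
A rough illustration of the alignment point (the 16-block chunk size is
the figure cited above; everything else is invented): a transfer that
fits within one chunk is never split, and choosing the cluster factor
and bucket size as divisors of the chunk size makes that the common case.

    /* Does a transfer of 'count' blocks starting at block 'vbn' straddle a
       storage-array chunk boundary?  Illustration only. */
    #include <stdint.h>

    #define ARRAY_CHUNK_BLOCKS 16u    /* internal chunk size cited above */

    static int is_split_io(uint32_t vbn, uint32_t count)
    {
        uint32_t first_chunk = vbn / ARRAY_CHUNK_BLOCKS;
        uint32_t last_chunk  = (vbn + count - 1) / ARRAY_CHUNK_BLOCKS;
        return first_chunk != last_chunk;
    }

    /* An 8-block bucket starting at block 0, 8, 16, ... never splits;
       the same bucket starting at block 12 spans two chunks. */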

- Bob Gezelter, http://www.rlgsc.com

Re: OS implementation languages

<1920c266-a799-4715-a694-49e81e5fea27n@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29567&group=comp.os.vms#29567

 by: Bob Gezelter - Fri, 1 Sep 2023 10:06 UTC

On Thursday, August 31, 2023 at 1:45:56 PM UTC-4, Johnny Billquist wrote:
> On 2023-08-31 11:23, Bob Gezelter wrote:
> > On Thursday, August 31, 2023 at 12:35:05 AM UTC-4, gah4 wrote:
> >> On Tuesday, August 29, 2023 at 10:25:31 AM UTC-7, Simon Clubley wrote:
> >>
> >> (snip)
> >>> 400GB/s ??? Is that all ??? Amateurs!!! :-)
> >>> On a more serious note, I wonder what the maximum rate VMS is capable
> >>> of emitting data at if it was using the fastest network hardware
> >>> available.
> >> I am not sure what hardware can do now.
> >>
> >> Traditionally, Ethernet was much faster than processors, such that the
> >> shared media could handle the load.
> >>
> >> That is less obvious now, but a 400Gb/s network doesn't mean that one host
> >> can go that fast.
> >>
> >> Otherwise, there are many stories of running old OS in emulation on modern
> >> hardware, running into problems that never would have occurred years ago.
> >>
> >> One is that emulators often do synchronous I/O. The I/O interrupt occurs almost
> >> immediately, as seen by the OS. Some OS assume that there is time in between.
> >>
> >> It is, then, possible that surprises will be found when running I/O at higher speed.
> >>
> >> It might be useful to say which host architecture you were asking about.
> >> I am sure no-one thought about 400Gb/s Ethernet for VAX.
> > gah4,
> >
> > Ethernet (and other CSMA/CD networking approaches) in a configuration with more than a single, full duplex connection connecting two adapters are essentially limited to a maximum effective utilization of 30% before contention backoff becomes unacceptable.
> Actually, that was/is a common misconception that still appears to be
> around. With ethernet it was shown that it performs just fine up to
> about 70-80% utilization.
>
> The 30% is basically what you see with Aloha based communication.
> However, Ethernet is not really like Aloha. The point being that once
> the initial 64 bits have been transmitted, no one else is going to start
> transmitting, so no collision will happen after that, and the full
> packet will go through. Basically, ethernet is based on listen before
> starting transmit, while Aloha do no such thing.
>
> You can find some math behind this on
> https://intronetworks.cs.luc.edu/1/html/ethernet.html for example. But
> you should be able to find more articles about this if you search around.
> > Active switches, as opposed to hubs, can increase this threshold as each cable is physically connected to an interface and a switch port.
> Yes. With TP and switches on ethernet, every connection is contention
> less, and the only problem is that switches have finite resources, and
> if many ports want to talk towards a specific port, that will be
> saturated, and packets will have to be dropped. But you will be able to
> get to the nominal speed of the port at least.
> > Instant I/O completion has uncovered all manner of bugs and deficiencies over the years. Most such problems are at the driver level, where a driver fails to ensure that data structures are correctly prepared for an immediate interrupt when the "Go" bit is triggered. On occasion, an application has been written to run very I/O bound with the presumption that I/O delay will slow it down, e.g., OpenVMS COPY, RSX-11 PIP, linux dd. Combine a greedy application with a high dispatch priority and voila, monopolized machine for the duration. On OpenVMS, put a process at a base priority above most users, say 6-7, and run a large COPY from one instantaneous mass storage device to another. Other users see an effectively dead machine.
> True. Most of the time I don't think there are many problems with I/O
> completing immediately (there is one known such problem in RSX, and that
> is the console terminal device driver that gets the data reversed if
> interrupt happen immediately). But starving other processes for
> resources when I/O completes immediately is definitely a thing in almost
> any OS.
> > Virtual machine-level synchronous I/O is a gargantuan performance impediment on many levels. My aforementioned Ph.D. dissertation on I/O performance, one of the main conclusions was that unnecessary serialization at any point in the I/O stack was a performance cliff. Serialization obstructs lower-level optimization and workload management. The mathematics could not be more definitive.
> Yes. And I'll just reflect from an RSX point of view here, since I'm so
> intimately aware of the innards there. RSX actually have very little
> serialization in most places. The system is very asynchronous in its
> core design. Which is utilized by drivers and software to allow hardware
> to reorder operations when it want to, since hardware can do this better
> than software most of the time. But reordering also in software is
> something that RSX can do. But for the MSCP device driver, for example,
> software reordering is disabled, because it just lets the hardware do it
> instead. And requests from programs can easily be asynchronous as well.
> In fact they always are, but programs can request to wait for the
> completion if they don't want to do something else meanwhile. But lots
> of programs do make use of this possibility, since the CPU wasn't that
> fast to begin with, so you were often very interested in ways to get
> more performance out of your software, and having I/O going on in the
> background was the most obvious way to improve.
>
> VMS inherited much of the design around this from RSX, so much is the
> same. But of course, with more resources, less effort was put into
> making use of such contructs. After all, it takes more work to write and
> ensure it works right.
>
> Johnny
Johnny,

I am somewhat busy today, but the "With ethernet it was shown that it performs just fine up to
about 70-80% utilization." is somewhat apples/oranges.

You are correct that Aloha is the first, or at least the first well-known discussion of the phenomenon. However, the cited lecture notes miss several points.

- Aloha network nodes do not have guaranteed mutual visibility (a side effect of being radio-based on islands).
- The comparable Ethernet analysis presumes 10Base5 or 10Base2 coax with possible repeaters, not switches, with a maximum network diameter computed to guarantee that the signal of a transmitting node is detectable within the minimum packet size.

The presence of switches, which are inherently store-and-forward at a packet level, change the analysis dramatically. For the record, I recall that there were 10BaseT hubs, but the cost saving was not significant, so they are rare.

With care and careful design, one can get higher utilization, but that is a different story. For reference, look up Digital's CI, a higher performance CSMA/CD scheme, but with some twists to run at higher utilization percentages.

IEEE 802.11 (aka WiFi) is a different conversation altogether.

- Bob Gezelter, http://www.rlgsc.com

Re: OS implementation languages

<624ab7b8-1c19-473a-a7df-0f49150c3b5dn@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29568&group=comp.os.vms#29568

 by: gah4 - Fri, 1 Sep 2023 10:15 UTC

On Friday, September 1, 2023 at 3:06:56 AM UTC-7, Bob Gezelter wrote:

(snip)

> The presence of switches, which are inherently store-and-forward at a packet
> level, change the analysis dramatically. For the record, I recall that there were
> 10BaseT hubs, but the cost saving was not significant, so they are rare.

Early 10baseT devices were Ethernet repeaters. In the hub and spoke system,
they are the hubs, but technically they should be called repeaters.

But yes, they are commonly called hubs.

> With care and careful design, one can get higher utilization, but that is a
> different story. For reference, look up Digital's CI, a higher performance
> CSMA/CD scheme, but with some twists to run at higher utilization percentages.

Ethernet does resolve collisions pretty fast, and often does get pretty high
utilization with usual traffic flows.

Re: OS implementation languages

<ucsr3k$3q0ci$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29570&group=comp.os.vms#29570

 by: Dave Froble - Fri, 1 Sep 2023 14:05 UTC

On 8/31/2023 4:58 PM, Robert A. Brooks wrote:
> On 8/31/2023 3:32 PM, Dave Froble wrote:
>
>> Making your database files contiguous seems to be a winner. Larger
>> clustersize for database files also helps, a lot.
>
> Cluster size is not used for I/O; it's only used for storage allocation, where
> a single bit in the storage allocation bitmap covers the number of blocks
> in that volume's clustersize.
>
> I/O is not done in clustersize chunks.
>

That is correct, but a larger clustersize still helps, since it gives you
contiguous chunks even when the entire file isn't contiguous.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Re: OS implementation languages

<ucsr8k$3q0ci$2@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29571&group=comp.os.vms#29571

 by: Dave Froble - Fri, 1 Sep 2023 14:08 UTC

On 8/31/2023 8:08 PM, Bob Gezelter wrote:
> On Thursday, August 31, 2023 at 4:58:13 PM UTC-4, Robert A. Brooks wrote:
>> On 8/31/2023 3:32 PM, Dave Froble wrote:
>>
>>> Making your database files contiguous seems to be a winner. Larger clustersize
>>> for database files also helps, a lot.
>> Cluster size is not used for I/O; it's only used for storage allocation, where
>> a single bit in the storage allocation bitmap covers the number of blocks
>> in that volume's clustersize.
>>
>> I/O is not done in clustersize chunks.
>>
>> --
>>
>> --- Rob
> Rob,
>
> WADR to Dave, he omitted some steps. When Hein was giving DECUS and Bootcamp presentations on optimizing files, he took note of the fact that newer storage arrays worked with 16 block chunks internally. If I recall correctly, he recommended matching the cluster factor and other I/O-related parameters, e.g., bucket size and buffer sizes to the same granularity.
>
> While it does not guarantee a lack of split I/Os it dramatically reduces their frequency, much as aligning quadwords and longwords improves program performance.
>
> I have often encountered logical volumes that originated on RP and RM-class disk drives originally on VAX systems, with cluster sizes set to track and integral divisors of tracks. In the same way, they reduce splits. Of course, when these volumes are migrated to present technologies, the parameters require reconsideration and adjustment for optimal performance.
>
> You are correct that there is no direct connection between cluster size and I/O request size. However, the lack of a direct connection does not prevent one from arranging it to be the most frequent case.
>
> - Bob Gezelter, http://www.rlgsc.com
>

I think my point was:

Intelligent usage is a big winner ....

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Re: OS implementation languages

<uctm21$mre$4@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29576&group=comp.os.vms#29576

 by: Johnny Billquist - Fri, 1 Sep 2023 21:45 UTC

On 2023-08-31 21:12, Bob Gezelter wrote:

> Johnny,
>
> You are correct that much of the I/O plumbing is highly similar in RSX and VMS, and for that matter WNT. A case of common authorship.

Yeah... :-)

> To understand the issues fully, one has to think not just linearly, but also carefully consider the timeline and other time-related issues. There are many conceptual traps.

True.

> However, file I/O has an explicit serialization: virtual to logical block mapping. The original I/O packet is reused iteratively to process the potentially discontiguous logical blocks required. This imposes a serialization that is not conducive to performance. There is an extended description and analysis in my dissertation/monograph; a somewhat abbreviated version of which is in the previously referenced IEEE conference paper.

In RSX at least, this isn't necessarily true. Both with FCS and RMS, they
offer the capability of read-ahead in a completely transparent way for
the application. And they do this by issuing multiple QIO in parallel,
so there are multiple I/O packets in flight.
But the mapping between virtual and logical blocks are happening in
F11ACP, which is a chokepoint. But because of this, for an open file,
the mapping from virtual to logical blocks can actually happen in the
kernel without involving the ACP, when the mapping is already in memory.
So the first read will stall until the mapping have been read in, and
then continue finding the translation and issuing the read of the proper
logical block. The next I/O that comes right on the heels of the first
one (as well as the third) will hopefully do the translation without any
I/O involved, and the read for the correct logical block will be queued
up to the driver before anything have been completed. And then the
driver immediately places the requests on the queue of the controller,
who then can both work on doing them in the optimal order, and the
completion is going to happen in some random order. However, as soon as
the read for the first virtual block is completed, FCS (or RMS) will
return operation to the program that is reading the file. Meanwhile in
the background, while the program is operating, the other I/Os will also
eventually complete, so by the time the program's reading progress to
those blocks, they are already in memory. And in the background, FCS (or
RMS) will already have issued additional reads for more blocks while the
program was doing its work.

But this is RSX. Details on how much of this also is done in VMS I don't
know.
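
For the flavour of this in code, a rough analogue using POSIX AIO on a
Unix-ish system (not FCS, RMS, or any DEC interface; the file name,
queue depth, and block size are made up): several reads are kept in
flight, and the consumer waits only for the block it needs right now.

    #include <aio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define DEPTH 4             /* requests kept in flight */
    #define BLKSZ 8192          /* bytes per request       */

    int main(void)
    {
        int fd = open("bigfile.dat", O_RDONLY);   /* made-up file name */
        if (fd < 0)
            return 1;

        struct aiocb req[DEPTH];
        char buf[DEPTH][BLKSZ];
        off_t next = 0;

        /* Prime the pipeline: issue DEPTH reads before consuming anything. */
        for (int i = 0; i < DEPTH; i++) {
            memset(&req[i], 0, sizeof req[i]);
            req[i].aio_fildes = fd;
            req[i].aio_buf    = buf[i];
            req[i].aio_nbytes = BLKSZ;
            req[i].aio_offset = next;
            next += BLKSZ;
            aio_read(&req[i]);
        }

        for (int slot = 0; ; slot = (slot + 1) % DEPTH) {
            const struct aiocb *list[1] = { &req[slot] };
            aio_suspend(list, 1, NULL);   /* wait only for the block needed now */

            ssize_t got = aio_return(&req[slot]);
            if (got <= 0)
                break;                    /* EOF or error: stop */

            /* ... consume buf[slot] here; the other reads stay in flight ... */

            req[slot].aio_offset = next;  /* reuse the slot for a further read */
            next += BLKSZ;
            aio_read(&req[slot]);
        }
        close(fd);
        return 0;
    }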

> If one looks for correlations of logical block requests, the most common correlations are requests to the same file on the same access stream. Ibid. Sequential reuse of the same IO packet prevents optimization within a request stream.
>
> Hardware reordering can only reorder requests that are simultaneously outstanding. Ibid.
>
> The RSX design was obligatory due to the then lack of system pool. Then, even individual IO Packets were precious. Correctness beats efficiency. For more than two decades, increases in memory capacity and non-paged pool expansion has rendered the memory minimization design goal quaintly anachronistic.

Well. Like I said above, RSX actually is somewhat eager to do this.
Especially on 11M+, which do have a bit more resources available. In
plain 11M, you didn't see this done as much, since resources were more
scarce.

> The preceding is all in the reprint. A thorough reading of the referenced paper is highly recommended. When I can find the time to finish copy-editing the monograph and get it printed. I will note the general availability in a posting.

And if you want more deep diving into RSX, just let me know. I'll
happily guide you through it. But it will be lots of MACRO-11. ;-)

Johnny

Re: OS implementation languages

<uctmhc$mre$5@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29577&group=comp.os.vms#29577

 by: Johnny Billquist - Fri, 1 Sep 2023 21:53 UTC

On 2023-09-01 12:06, Bob Gezelter wrote:

> Johnny,
>
> I am somewhat busy today, but the "With ethernet it was shown that it performs just fine up to
> about 70-80% utilization." is somewhat apples/oranges.

Well. That might be. But with more recent developments, that number just
goes up. It doesn't go down. But we're going into very fine details here.

> You are correct that Aloha is the first, or at least the first well-known discussion of the phenomenon. However, the cited lecture notes miss several points.
>
> - Aloha network nodes do not have guaranteed mutual visibility (a side effect of being radio-based on islands).

There are multiple reasons why Aloha basically drops in performance when
you get above around 30% saturation. But it's an aspect of Aloha, and is
not applicable to ethernet.
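
For reference, the standard idealized Aloha throughput formulas behind
figures in this range (a textbook sketch; real networks add further
losses): pure Aloha peaks near 18% of channel capacity and slotted Aloha
near 37%, which is roughly where ballpark numbers like "about 30%" come
from.

    /* Textbook Aloha throughput versus offered load G (frames per frame time):
       pure Aloha   S = G * exp(-2G), peak 1/(2e) ~ 0.184 at G = 0.5
       slotted      S = G * exp(-G),  peak 1/e    ~ 0.368 at G = 1.0  */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        for (double g = 0.1; g <= 2.0001; g += 0.1)
            printf("G=%.1f  pure=%.3f  slotted=%.3f\n",
                   g, g * exp(-2.0 * g), g * exp(-g));
        return 0;
    }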

> - The comparable Ethernet analysis presumes 10Base5 or 10Base2 coax with possible repeaters, not switches, with a maximum network diameter computed to guarantee that the signal of a transmitting node is detectable within the minimum packet size.

Yes. And Ethernet has a defined maximum diameter for that specific
reason. (And I misspoke earlier: it's 64 bytes, not 64 bits.) But the
point is, with a defined maximum diameter and a defined minimum packet
size, you are guaranteed to know whether there is a collision by that
time, and beyond that moment you will not be getting collisions. And
this allows the throughput to get much higher than the 30% you might
naively think.
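
A back-of-the-envelope version of that argument, using nominal 10 Mb/s
figures (the real 802.3 timing budget also charges for repeaters and
encoding delays, so treat these numbers as rough):

    #include <stdio.h>

    int main(void)
    {
        const double bit_rate   = 10e6;     /* bits per second           */
        const double min_frame  = 64 * 8;   /* minimum frame, bits       */
        const double diameter_m = 2500.0;   /* rough maximum diameter, m */
        const double prop_speed = 2.0e8;    /* ~2/3 c in cable, m/s      */

        double tx  = min_frame / bit_rate;          /* time to send min frame */
        double rtt = 2.0 * diameter_m / prop_speed; /* worst-case round trip  */

        printf("minimum frame transmit time: %.1f us\n", tx * 1e6);
        printf("worst-case round trip:       %.1f us\n", rtt * 1e6);
        printf("collision seen before frame ends: %s\n",
               rtt < tx ? "yes" : "no");
        return 0;
    }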

> The presence of switches, which are inherently store-and-forward at a packet level, change the analysis dramatically. For the record, I recall that there were 10BaseT hubs, but the cost saving was not significant, so they are rare.

True. With switches it becomes a different ballgame. But it's a
ballgame that allows you to reach higher utilization, since the
collisions go away, except if you are dealing with half duplex, where
you can collide if both sides start talking at the same time.
With switches and full duplex, you no longer get any collisions at all.

With 10Base (even -T), hubs were not uncommon. I think I still have some
lying around somewhere. I think there were still a few by 100Base, but they
basically went away around that point, and beyond that, hubs were no
longer even available, or allowed.

> With care and careful design, one can get higher utilization, but that is a different story. For reference, look up Digital's CI, a higher performance CSMA/CD scheme, but with some twists to run at higher utilization percentages.
>
> IEEE 802.11 (aka WiFi) is a different conversation altogether.

I don't even want to go there... :-P

Johnny

Re: OS implementation languages

<8d9dd98b-3339-4442-993d-bc81e8daa89fn@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29578&group=comp.os.vms#29578

 by: Bob Gezelter - Sat, 2 Sep 2023 00:51 UTC

On Friday, September 1, 2023 at 5:45:41 PM UTC-4, Johnny Billquist wrote:
> On 2023-08-31 21:12, Bob Gezelter wrote:
>
> > Johnny,
> >
> > You are correct that much of the I/O plumbing is highly similar in RSX and VMS, and for that matter WNT. A case of common authorship.
> Yeah... :-)
> > To understand the issues fully, one has to think not just linearly, but also carefully consider the timeline and other time-related issues. There are many conceptual traps.
> True.
> > However, file I/O has an explicit serialization: virtual to logical block mapping. The original I/O packet is reused iteratively to process the potentially discontiguous logical blocks required. This imposes a serialization that is not conducive to performance. There is an extended description and analysis in my dissertation/monograph; a somewhat abbreviated version of which is in the previously referenced IEEE conference paper.
> In RSX at least, this isn't necessarily true. Both with FCS and RMS, they
> offer the capability of read-ahead in a completely transparent way for
> the application. And they do this by issuing multiple QIO in parallel,
> so there are multiple I/O packets in flight.
> But the mapping between virtual and logical blocks are happening in
> F11ACP, which is a chokepoint. But because of this, for an open file,
> the mapping from virtual to logical blocks can actually happen in the
> kernel without involving the ACP, when the mapping is already in memory.
> So the first read will stall until the mapping have been read in, and
> then continue finding the translation and issuing the read of the proper
> logical block. The next I/O that comes right on the heels of the first
> one (as well as the third) will hopefully do the translation without any
> I/O involved, and the read for the correct logical block will be queued
> up to the driver before anything have been completed. And then the
> driver immediately places the requests on the queue of the controller,
> who then can both work on doing them in the optimal order, and the
> completion is going to happen in some random order. However, as soon as
> the read for the first virtual block is completed, FCS (or RMS) will
> return operation to the program that is reading the file. Meanwhile in
> the background, while the program is operating, the other I/Os will also
> eventually complete, so by the time the program's reading progress to
> those blocks, they are already in memory. And in the background, FCS (or
> RMS) will already have issued additional reads for more blocks while the
> program was doing its work.
>
> But this is RSX. Details on how much of this also is done in VMS I don't
> know.
> > If one looks for correlations of logical block requests, the most common correlations are requests to the same file on the same access stream. Ibid. Sequential reuse of the same IO packet prevents optimization within a request stream.
> >
> > Hardware reordering can only reorder requests that are simultaneously outstanding. Ibid.
> >
> > The RSX design was obligatory due to the then lack of system pool. Then, even individual IO Packets were precious. Correctness beats efficiency. For more than two decades, increases in memory capacity and non-paged pool expansion has rendered the memory minimization design goal quaintly anachronistic.
> Well. Like I said above, RSX actually is somewhat eager to do this.
> Especially on 11M+, which do have a bit more resources available. In
> plain 11M, you didn't see this done as much, since resources were more
> scarce.
> > The preceding is all in the reprint. A thorough reading of the referenced paper is highly recommended. When I can find the time to finish copy-editing the monograph and get it printed. I will note the general availability in a posting.
> And if you want more deep diving into RSX, just let me know. I'll
> happily guide you through it. But it will be lots of MACRO-11. ;-)
>
> Johnny
Johnny,

I almost fell into that trap. There are multiple IO Packets in use when one does read-ahead/write-behind. If you look at my tuning presentation from the late 1980s, you will find precisely that advice.

However, there is a serialization bottleneck when an individual IO Request covers a logically discontiguous sequence of logical blocks on a volume. Once again, it is in the previously referenced IEEE conference paper. The serialization in IO Packet mapping is described in the OpenVMS Internals and Data Structures manual.

By the way, the previous comment about file mapping is not quite accurate. If one were using the index file entries without hints, it would be correct. However, index file entries are converted into mapping blocks; once again, see the IDSM. The tree structure is helpful if starting from scratch. Window turns are common, but not catastrophically so in general. As always, the devil is in the details.
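
A toy version of the mapping-window idea mentioned above (structures and
numbers are invented for the sketch, not the VMS ones): translation stays
cheap while the requested virtual block falls inside the cached window,
and a block outside it forces a window turn before the transfer can be
queued.

    /* Invented illustration of a mapping window and window turns. */
    #include <stdint.h>

    struct window {
        uint32_t first_vbn;    /* first virtual block the window covers */
        uint32_t nblocks;      /* virtual blocks covered                */
        uint32_t base_lbn;     /* pretend the covered run is contiguous */
        uint32_t turns;        /* how many window turns happened        */
    };

    /* Stand-in for re-reading header/extension-header mapping data. */
    static void window_turn(struct window *w, uint32_t vbn)
    {
        w->first_vbn = vbn & ~63u;            /* pretend windows span 64 VBNs */
        w->nblocks   = 64;
        w->base_lbn  = 1000 + w->first_vbn;   /* fabricated placement         */
        w->turns++;
    }

    static uint32_t vbn_to_lbn(struct window *w, uint32_t vbn)
    {
        if (vbn < w->first_vbn || vbn >= w->first_vbn + w->nblocks)
            window_turn(w, vbn);              /* the expensive step */
        return w->base_lbn + (vbn - w->first_vbn);
    }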

- Bob Gezelter, http://www.rlgsc.com
P.S. I took Business Law I/II during my undergraduate years. The professor reminded us that a very important part of research is reading the footnotes.

Re: OS implementation languages

<1df423b5-7749-4705-95b5-76b72b377ed5n@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=29579&group=comp.os.vms#29579

 by: Bob Gezelter - Sat, 2 Sep 2023 00:56 UTC

On Friday, September 1, 2023 at 5:45:41 PM UTC-4, Johnny Billquist wrote:
> On 2023-08-31 21:12, Bob Gezelter wrote:
>
> > Johnny,
> >
> > You are correct that much of the I/O plumbing is highly similar in RSX and VMS, and for that matter WNT. A case of common authorship.
> Yeah... :-)
> > To understand the issues fully, one has to think not just linearly, but also carefully consider the timeline and other time-related issues. There are many conceptual traps.
> True.
> > However, file I/O has an explicit serialization: virtual to logical block mapping. The original I/O packet is reused iteratively to process the potentially discontiguous logical blocks required. This imposes a serialization that is not conducive to performance. There is an extended description and analysis in my dissertation/monograph; a somewhat abbreviated version of which is in the previously referenced IEEE conference paper.
> In RSX at least, this isn't necessarily true. Both with FCS and RMS, they
> offer the capability of read-ahead in a completely transparent way for
> the application. And they do this by issuing multiple QIO in parallel,
> so there are multiple I/O packets in flight.
> But the mapping between virtual and logical blocks are happening in
> F11ACP, which is a chokepoint. But because of this, for an open file,
> the mapping from virtual to logical blocks can actually happen in the
> kernel without involving the ACP, when the mapping is already in memory.
> So the first read will stall until the mapping have been read in, and
> then continue finding the translation and issuing the read of the proper
> logical block. The next I/O that comes right on the heels of the first
> one (as well as the third) will hopefully do the translation without any
> I/O involved, and the read for the correct logical block will be queued
> up to the driver before anything have been completed. And then the
> driver immediately places the requests on the queue of the controller,
> who then can both work on doing them in the optimal order, and the
> completion is going to happen in some random order. However, as soon as
> the read for the first virtual block is completed, FCS (or RMS) will
> return operation to the program that is reading the file. Meanwhile in
> the background, while the program is operating, the other I/Os will also
> eventually complete, so by the time the program's reading progress to
> those blocks, they are already in memory. And in the background, FCS (or
> RMS) will already have issued additional reads for more blocks while the
> program was doing its work.
>
> But this is RSX. Details on how much of this also is done in VMS I don't
> know.
> > If one looks for correlations of logical block requests, the most common correlations are requests to the same file on the same access stream. Ibid. Sequential reuse of the same IO packet prevents optimization within a request stream.
> >
> > Hardware reordering can only reorder requests that are simultaneously outstanding. Ibid.
> >
> > The RSX design was obligatory due to the then lack of system pool. Then, even individual IO Packets were precious. Correctness beats efficiency. For more than two decades, increases in memory capacity and non-paged pool expansion has rendered the memory minimization design goal quaintly anachronistic.
> Well. Like I said above, RSX actually is somewhat eager to do this.
> Especially on 11M+, which do have a bit more resources available. In
> plain 11M, you didn't see this done as much, since resources were more
> scarce.
> > The preceding is all in the reprint. A thorough reading of the referenced paper is highly recommended. When I can find the time to finish copy-editing the monograph and get it printed. I will note the general availability in a posting.
> And if you want more deep diving into RSX, just let me know. I'll
> happily guide you through it. But it will be lots of MACRO-11. ;-)
>
> Johnny
Johnny,

I still have my RSX-11M listings from Version 3.2. I was one of the systems programmers on our PDP-11/34, did quite a bit of driver architecting and writing. Also the first person outside RSX Engineering to write a queue manager symbiont (uncovered a nasty synchronization design flaw).

MACRO-11 was my fourth assembler language.

- Bob Gezelter, http://www.rlgsc.com

Re: OS implementation languages

<ud08hd$spi$3@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29589&group=comp.os.vms#29589

 by: Johnny Billquist - Sat, 2 Sep 2023 21:13 UTC

On 2023-09-02 02:51, Bob Gezelter wrote:
> On Friday, September 1, 2023 at 5:45:41 PM UTC-4, Johnny Billquist wrote:
>> On 2023-08-31 21:12, Bob Gezelter wrote:
>>
>>> The RSX design was obligatory due to the then lack of system pool. Then, even individual IO Packets were precious. Correctness beats efficiency. For more than two decades, increases in memory capacity and non-paged pool expansion has rendered the memory minimization design goal quaintly anachronistic.
>> Well. Like I said above, RSX actually is somewhat eager to do this.
>> Especially on 11M+, which do have a bit more resources available. In
>> plain 11M, you didn't see this done as much, since resources were more
>> scarce.
>>> The preceding is all in the reprint. A thorough reading of the referenced paper is highly recommended. When I can find the time to finish copy-editing the monograph and get it printed. I will note the general availability in a posting.
>> And if you want more deep diving into RSX, just let me know. I'll
>> happily guide you through it. But it will be lots of MACRO-11. ;-)
>>
>> Johnny
> Johnny,
>
> I almost fell into that trap. There are multiple IO Packets in use when one does read-ahead/write-behind. If you look at my tuning presentation from the late 1980s, you will find precisely that advice.
>
> However, there is a serialization bottleneck when an individual IO Request covers a logically discontiguous sequence of logical blocks on a volume. Once again, it is in the previously referenced IEEE conference paper. The serialization in IO Packet mapping is described in the OpenVMS Internals and Data Structures manual.

Well, yes, one individual I/O request is in itself inherently
serialized. But like I said, at least with RSX, the read ahead/write
behind functionality is not done with just one I/O request, because then
you would not actually get much read ahead benefit, since the I/O don't
complete until all of it is complete.

It's done with multiple smaller reads. And they can complete in any
order. But software will get data when the request that directly access
the data requested right now have completed. The other I/O requests are
hopefully finished by the time the software gets that far. And new ones
can be issued while the software is still working on data it previously
received.

Johnny

Re: OS implementation languages

<ud08ka$spi$4@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29590&group=comp.os.vms#29590

 by: Johnny Billquist - Sat, 2 Sep 2023 21:14 UTC

On 2023-09-02 02:56, Bob Gezelter wrote:
> On Friday, September 1, 2023 at 5:45:41 PM UTC-4, Johnny Billquist wrote:
>> And if you want more deep diving into RSX, just let me know. I'll
>> happily guide you through it. But it will be lots of MACRO-11. ;-)
>>
>> Johnny
> Johnny,
>
> I still have my RSX-11M listings from Version 3.2. I was one of the systems programmers on our PDP-11/34, did quite a bit of driver architecting and writing. Also the first person outside RSX Engineering to write a queue manager symbiont (uncovered a nasty synchronization design flaw).
>
> MACRO-11 was my fourth assembler language.

Cool. But just as a little warning, a lot was changed and improved in
11M+ compared to 11M. And a lot was changed when V4 came around. V3 is
outright ancient. :-D

But it's really nice to hear you have a solid grounding.

Johnny

Re: OS implementation languages

<ud1tf9$tju$1@news.misty.com>

https://www.novabbs.com/computers/article-flat.php?id=29595&group=comp.os.vms#29595

 by: Johnny Billquist - Sun, 3 Sep 2023 12:16 UTC

On 2023-09-03 12:39, Jan-Erik Söderholm wrote:
> Den 2023-09-02 kl. 23:13, skrev Johnny Billquist:
>> On 2023-09-02 02:51, Bob Gezelter wrote:
>>> On Friday, September 1, 2023 at 5:45:41 PM UTC-4, Johnny Billquist
>>> wrote:
>>>> On 2023-08-31 21:12, Bob Gezelter wrote:
>>>>
>>>>> The RSX design was obligatory due to the then lack of system pool.
>>>>> Then, even individual IO Packets were precious. Correctness beats
>>>>> efficiency. For more than two decades, increases in memory capacity
>>>>> and non-paged pool expansion has rendered the memory minimization
>>>>> design goal quaintly anachronistic.
>>>> Well. Like I said above, RSX actually is somewhat eager to do this.
>>>> Especially on 11M+, which do have a bit more resources available. In
>>>> plain 11M, you didn't see this done as much, since resources were more
>>>> scarce.
>>>>> The preceding is all in the reprint. A thorough reading of the
>>>>> referenced paper is highly recommended. When I can find the time to
>>>>> finish copy-editing the monograph and get it printed, I will note
>>>>> the general availability in a posting.
>>>> And if you want more deep diving into RSX, just let me know. I'll
>>>> happily guide you through it. But it will be lots of MACRO-11. ;-)
>>>>
>>>> Johnny
>>> Johnny,
>>>
>>> I almost fell into that trap. There are multiple IO Packets in use
>>> when one does read-ahead/write-behind. If you look at my tuning
>>> presentation from the late 1980s, you will find precisely that advice.
>>>
>>> However, there is a serialization bottleneck when an individual IO
>>> Request covers a logically discontiguous sequence of logical blocks
>>> on a volume. Once again, it is in the previously referenced IEEE
>>> conference paper. The serialization in IO Packet mapping is described
>>> in the OpenVMS Internals and Data Structures manual.
>>
>> Well, yes, one individual I/O request is in itself inherently
>> serialized. But like I said, at least with RSX, the read ahead/write
>> behind functionality is not done with just one I/O request, because
>> then you would not actually get much read ahead benefit, since the I/O
>> doesn't complete until all of it is done.
>>
>> It's done with multiple smaller reads. And they can complete in any
>> order. But the software gets its data when the request that directly
>> covers the data it needs right now has completed. The other I/O
>> requests are hopefully finished by the time the software gets that
>> far. And new ones can be issued while the software is still working on
>> data it previously received.
>>
>>    Johnny
>>
>
> Since Rdb was mentioned up-thread...
>
> This is in a way similar to how Rdb uses "Asynchronous Prefetch"
> and "Asynchronous Batch-Write" to get less stalls while waiting
> for disk reads and writes.
>
> Reads:
> "When the asynchronous prefetch feature is enabled, Oracle Rdb
> examines each process and attempts to predict the process’ future
> access patterns. When Oracle Rdb can predict a sequential scan
> from a process’ successive page requests, it fetches the pages
> that it anticipates will be included in the sequential scan.
> This prefetching of pages (fetching a page before it has been
> requested) is asynchronous. That is, Oracle Rdb does not wait
> for the prefetched pages to be read into memory from disk; it
> continues with its current processing activities."
>
> So, Rdb doesn't always "prefetch", only when the access pattern
> indicates that it would be a performance win.
>
> Writes:
> "Oracle Rdb reads and writes pages while executing transactions.
> By default, it supports asynchronous batch-write operations, which
> reduce the number of stalls experienced by database processes while
> waiting for writes to disk to complete.
> The goal of asynchronous batch-write operations is to increase database
> performance by making it possible for a certain number of buffers in
> each process’ allocate set to have write operations in progress at
> any time without causing the process to stall."
>
> It is worth noting that the data is safe, the transaction journal
> file (AIJ file) has already been updated for these "dirty" buffers,
> so the update can always be recreated, if there is an issue with
> the later write-ack of the data to the tables.
>
> And again, RMS is not involved in this at all, it's just QIOs.

QIO is such a wonderful solution to the question of I/O, and it makes it
possible to easily do some nice tricks to get a lot of performance.
Which is why I sometimes wonder when people claim I/O is so bad in VMS.
QIO is way better than most I/O under Unix systems; memory-mapped I/O,
select()/poll() and signals exist under Unix largely because nothing
quite like QIO is available there.
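
For anyone who hasn't used it, here is a rough, untested sketch of what
an asynchronous read with $QIO looks like from C. The device name, block
number and buffer size are placeholders (for a disk file, the file would
first have to be accessed on the channel, which is omitted here), and
real code would check the status values properly. The point is only the
shape: issue the request, keep working, synchronize later.

/* Sketch only: an asynchronous virtual-block read with SYS$QIO.
   Device name, VBN, buffer size and error handling are placeholders. */
#include <starlet.h>
#include <iodef.h>
#include <ssdef.h>
#include <descrip.h>
#include <stdio.h>

static $DESCRIPTOR(dev, "SYS$DISK");    /* placeholder channel target */

int main(void)
{
    unsigned short chan;
    struct { unsigned short status, count; unsigned int dev; } iosb = {0};
    static char buf[512];
    unsigned int st;

    st = sys$assign(&dev, &chan, 0, 0);
    if (!(st & 1)) return st;

    /* Issue the read and return immediately: event flag 1 is set and
       the IOSB is filled in when the transfer completes. */
    st = sys$qio(1, chan, IO$_READVBLK, &iosb, 0, 0,
                 buf, sizeof buf, 1 /* starting VBN */, 0, 0, 0);
    if (!(st & 1)) return st;

    /* ... do other useful work while the I/O is in flight ... */

    /* Only now wait for completion (event flag plus IOSB). */
    st = sys$synch(1, &iosb);
    if ((st & 1) && (iosb.status & 1))
        printf("read %u bytes\n", (unsigned) iosb.count);

    return SS$_NORMAL;
}

The difference from a plain read() under Unix is that the request itself
never blocks; waiting is a separate and entirely optional step.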

There are still some issues with ODS that certainly could be improved.
But then it's usually actually RMS that is the bottleneck. But as Bob
Gezelter observed and commented, RMS can be tweaked to give much better
performance than it does by default.

But also, of course, if programs don't make use of the capabilities QIO
gives them, then things will suffer. And I guess a lot of people
aren't that comfortable writing programs with rather asynchronous
behavior. I can understand that, and it's usually more difficult to
exploit from high-level languages. So improving libraries and services
like RMS to do this for you is probably the best way to make things
move faster.
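
To make the read-ahead idea above concrete, a very rough sketch of the
double-buffered pattern with $QIO might look like the following. The
channel, the consumer routine, the 512-byte buffers and event flags 1
and 2 are all assumptions, and error handling is omitted; the point is
that one buffer is being processed while the request for the next block
is already outstanding.

/* Sketch: double-buffered read-ahead with SYS$QIO (details assumed). */
#include <starlet.h>
#include <iodef.h>

#define NBUF 2

extern unsigned short chan;                 /* channel already assigned */
extern void process(char *data, int len);   /* hypothetical consumer */

struct iosb { unsigned short status, count; unsigned int dev; };

void copy_all(unsigned int nblocks)
{
    static char buf[NBUF][512];
    struct iosb iosb[NBUF];
    unsigned int vbn, i;

    /* Prime the pipeline: one outstanding request per buffer. */
    for (i = 0; i < NBUF && i < nblocks; i++)
        sys$qio(i + 1, chan, IO$_READVBLK, &iosb[i], 0, 0,
                buf[i], sizeof buf[i], i + 1, 0, 0, 0);

    for (vbn = 1; vbn <= nblocks; vbn++) {
        i = (vbn - 1) % NBUF;

        /* Wait only for the buffer needed right now; the other request
           keeps the device busy in the meantime. */
        sys$synch(i + 1, &iosb[i]);
        process(buf[i], iosb[i].count);

        /* Reissue this buffer for the block NBUF ahead. */
        if (vbn + NBUF <= nblocks)
            sys$qio(i + 1, chan, IO$_READVBLK, &iosb[i], 0, 0,
                    buf[i], sizeof buf[i], vbn + NBUF, 0, 0, 0);
    }
}

This is essentially the kind of work one would like a library such as
RMS to do on the program's behalf.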

I think I remember from maybe a year ago that RMS under VMS doesn't
allow you to use locate mode for writes, which means that any writing
of data via RMS involves at least one extra data copy. That's sad, and
it would be nice if it were improved. RMS under RSX does allow locate
mode for writes.

But I wonder how much code isn't using locate mode for reads either,
meaning yet another copy of the data there.
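
For reads, locate mode is just an option bit in the RAB, so asking for
it costs very little. A minimal, untested sketch (file name and record
handling are placeholders, and RMS may quietly fall back to move mode in
some situations):

/* Sketch: sequential RMS reads in locate mode, so records are handed
   back by pointer into RMS's own buffer instead of being copied into
   a user buffer. File name and record handling are placeholders. */
#include <rms.h>
#include <stdio.h>

int main(void)
{
    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;

    fab.fab$l_fna = "EXAMPLE.DAT";              /* placeholder file */
    fab.fab$b_fns = sizeof "EXAMPLE.DAT" - 1;
    fab.fab$b_fac = FAB$M_GET;

    if (!(sys$open(&fab) & 1)) return 0;

    rab.rab$l_fab = &fab;
    rab.rab$l_rop = RAB$M_LOC;                  /* request locate mode */
    if (!(sys$connect(&rab) & 1)) return 0;

    /* After a successful $GET in locate mode, rab$l_rbf points into the
       I/O buffer and rab$w_rsz gives the record length. */
    while (sys$get(&rab) & 1)
        printf("%.*s\n", rab.rab$w_rsz, rab.rab$l_rbf);

    sys$close(&fab);
    return 0;
}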

Johnny

Re: OS implementation languages

<ud4mim$1g95n$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=29604&group=comp.os.vms#29604

 by: chrisq - Mon, 4 Sep 2023 13:37 UTC

On 8/29/23 19:54, Arne Vajhøj wrote:
> On 8/29/2023 8:13 AM, Simon Clubley wrote:
>> On 2023-08-28, chrisq <devzero@nospam.com> wrote:
>>> Very much FreeBSD here for some years, after decades first with dec,
>>> then Sun. Forms the basis of at least some proprietary offerings, as
>>> well as millions of embedded devices. Linux is still a unix,
>>> and runs the majority of web sites of the world, so if anything,
>>> unix has won the os wars...
>>
>> Yes, very much so. (And I can't believe Arne thinks the *BSDs have no
>> serious users... :-) ).
>
> It definitely has some but not as many as it once had.
>
> 20 years ago FreeBSD was sort of the free "high end" OS
> and used by places where Windows and Linux were not considered
> good enough.
>
> The world has changed since then.
>
> Linux has also squeezed FreeBSD market share.
>
> Primarily for non-technical reasons:
> - Linux got backing from IBM, Oracle etc.
> - Easier to hire Linux expertise
> - Many companies standardize on a Linux only strategy for applications
>   (exception for the stuff supporting PC's)
> - Cloud vendors have pushed Linux
> - Many companies are moving applications to Kubernetes on Linux (*)
>
> *) I believe that FreeBSD got jails before Linux got containers and
>    jails should be just as good, but FreeBSD jails do not have
>    the ecosystem that Linux containers have (Kubernetes, OpenShift etc.)
>
> Arne
>
>

What FreeBSD has managed to do is maintain the elegance and
simplicity of traditional Unix, while including advanced system
options like ZFS in the out-of-the-box distribution. Fully
preemptive / real time without a compiler rebuild. That makes it a
worthy successor to Solaris, which was noted for its robustness.
I run a public NTP server here, hundreds of hits a minute at times,
with a current uptime of over two years. On a UPS, of course, but
seriously reliable, at least partly due to a very conservative
design process and a software-engineering attitude. Thousands of
packages, including most Linux packages, and those that are not
packaged can be built from source. All the usual desktop choices,
with xfce4 being the best compromise between light weight and
features. It just gets the job done with a minimum of fuss.

Compare that with Linux: earlier versions are still in use here, but
it becomes ever more complex and opaque. I had to give up on it after
the systemd trainwreck. A valid Windows substitute, nice decor, but
not for serious work here. The most professional distros at present
are arguably SuSE and Debian, imho...

Chris

