Rocksolid Light



computers / comp.os.vms / Some I/O results

Subject                                  Author
* Some I/O results                       Mark Berryman
+* Re: Some I/O results                  Simon Clubley
|`- Re: Some I/O results                 Mark Berryman
+* Re: Some I/O results                  Stephen Hoffman
|`- Re: Some I/O results                 Mark Berryman
+- Re: Some I/O results                  Robert A. Brooks
`- Re: Some I/O results                  Jake Hamby (Solid State Jake)

Some I/O results

<uilsso$2v37e$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31066&group=comp.os.vms#31066
Path: i2pn2.org!i2pn.org!news.hispagatos.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mar...@theberrymans.com (Mark Berryman)
Newsgroups: comp.os.vms
Subject: Some I/O results
Date: Fri, 10 Nov 2023 11:30:46 -0700
Organization: A noiseless patient Spider
Lines: 58
Message-ID: <uilsso$2v37e$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Fri, 10 Nov 2023 18:30:48 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="85d049814f10285f6217f9dbe7247d7b";
logging-data="3116270"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19VK4iKE7npplpLjO7Jl3tP6ZIs/vNCtN8="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:PyqtEFHpXAwAI+x75NS9gJDbe9I=
Content-Language: en-US
 by: Mark Berryman - Fri, 10 Nov 2023 18:30 UTC

I have a Mac with ~24TB of data (consisting of fairly large files) that
needs to be rolled out to tape. Unfortunately, my tape drives are fibre
channel (SAN-based) and the only systems that currently have access to
them are my VMS systems.

No problem, says I. I'll just create a 24TB volume set, copy the data
up, and roll it out to tape using VMS backup.

Aside:
The maximum volume size currently supported by VMS is closer to 2TiB
than 2TB. If we assume 2TB = 2000GB then 2TiB is somewhere between 2199
and 2200GB. Creating a volume at 2199GB caused VMS to complain that it
was too big and some space would be unused. Same at 2190GB. I
eventually settled on 12 volumes of 2181GB each.
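The aside's arithmetic checks out; here is a quick sketch (the 12 x 2181GB split is from the post, the rest is plain unit conversion):

```python
TIB = 2**40   # one binary terabyte in bytes
GB = 10**9    # one decimal gigabyte in bytes

print(2 * TIB / GB)   # 2 TiB in decimal GB -> ~2199.02, i.e. between 2199 and 2200
total_gb = 12 * 2181  # twelve volumes of 2181GB each
print(total_gb)       # 26172 GB, comfortably over the ~24TB needed
```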

Now, how to get the data onto my VMS cluster? The Alpha and Integrity
systems only have 1G Ethernet interfaces but the x86 system, as an ESXi
guest, has both 1G and 10G interfaces.

SCP does not send file size information so the destination file is
constantly extended until the copy completes. However, I discovered
that if I pre-allocate the file and specify the version number of the
destination file, SCP would do the equivalent of a copy/overlay. With a
one line change to the source code, I was able to do the same with
Hunter Goatley's FTP server.

So, with SCP, the best I could manage was a copy rate of around 5 MB/sec. On
the other hand, a Mac to Mac copy - over the same 1G LAN - achieved over
90 MB/sec, which is close to the maximum possible.
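As a cross-check on "close to the maximum possible," a back-of-the-envelope estimate of the TCP payload ceiling on a 1G link, assuming the standard 1500-byte MTU (jumbo frames would push it a little higher):

```python
line_rate = 125e6                      # 1 Gbit/s expressed in bytes/sec
wire_bytes = 8 + 14 + 1500 + 4 + 12    # preamble + Ethernet header + payload + FCS + inter-frame gap
tcp_payload = 1500 - 40                # payload minus IPv4 and TCP headers (no options)

ceiling = line_rate * tcp_payload / wire_bytes / 1e6
print(round(ceiling, 1))               # ~118.7 MB/sec ceiling; 90 MB/sec is ~76% of it
```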

With FTP, I could achieve around 14-15 MB/sec.

My last option was NFS. It managed better than 20 MB/sec. It was also
much easier, since I could NFS mount the source and then just use a copy
command to get the data; COPY pre-allocates the destination file on
its own.

Now here is the part that was surprising to me. As mentioned, the Alpha
and Integrity systems were copying over a 1G Ethernet path whereas the
x86 system was using a 10G path (the Mac has both 1G and 10G
interfaces). Using the exact same source and exact same I/O sizes, the
Alpha averaged 335 I/Os per second to its target disk, the Integrity
system managed just over 400 I/Os per second, but the x86 system
couldn't do any better than 279 I/Os per second - even on a link that
was 10 times faster. (It also has a faster HBA than the other systems).
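Those IOPS figures line up with the throughput numbers above once you assume an I/O size; the post says the sizes were identical across systems but doesn't state the size itself, so the 64 KB used here is purely illustrative:

```python
io_kb = 64  # assumed transfer size per I/O -- not stated in the post
for name, iops in (("Alpha", 335), ("Integrity", 400), ("x86", 279)):
    mb_s = iops * io_kb / 1024  # KB/sec -> MB/sec
    print(f"{name:>9}: {mb_s:5.1f} MB/sec")
```

With that assumption, the Alpha's 335 I/Os per second works out to about 21 MB/sec, matching the "better than 20 MB/sec" NFS figure.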

I did check kernel mode usage and there wasn't much difference between
the 3 systems. It simply appears that I/O on x86 is somewhat slower
than I would have expected. I'll check again to see if there is any
difference once the system is built with optimizing compilers.

Mark Berryman

P.S. If anyone was wondering, it took a week to copy all of the data up
and over 2 days to roll it out to tape. If I ever have to restore it,
it will still take a *lot* less time than originally creating the data did.
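A rough sanity check on that duration, sketched under the assumption of a sustained single-stream rate and no per-file overhead:

```python
def days_to_copy(tb, mb_per_s):
    """Days to move `tb` terabytes at a sustained rate of `mb_per_s` MB/sec."""
    return tb * 1e12 / (mb_per_s * 1e6) / 86400

# Single-stream rates observed in the tests above
for rate in (5, 15, 20, 90):
    print(f"{rate:>2} MB/sec -> {days_to_copy(24, rate):5.1f} days")
```

At ~20 MB/sec a single stream would need about two weeks for 24TB, so finishing in a week presumably means more than one copy ran at a time (an assumption on my part; the post doesn't say).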

I really hope support for SAN-based tape drives is coming soon to x86.

Re: Some I/O results

<uiluln$2vg5l$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31067&group=comp.os.vms#31067
Path: i2pn2.org!i2pn.org!news.nntp4.net!news.hispagatos.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: club...@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley)
Newsgroups: comp.os.vms
Subject: Re: Some I/O results
Date: Fri, 10 Nov 2023 19:01:11 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 48
Message-ID: <uiluln$2vg5l$1@dont-email.me>
References: <uilsso$2v37e$1@dont-email.me>
Injection-Date: Fri, 10 Nov 2023 19:01:11 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="5e56e5749f2f551fa3b3bd218ffa2e66";
logging-data="3129525"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/GXs2A6GceiPpbbcgv51/lMEjOHvXzBmU="
User-Agent: slrn/0.9.8.1 (VMS/Multinet)
Cancel-Lock: sha1:CYHfDi7Er8hHmU5Ohd6o3Aw7nFs=
 by: Simon Clubley - Fri, 10 Nov 2023 19:01 UTC

On 2023-11-10, Mark Berryman <mark@theberrymans.com> wrote:
>
> Now, how to get the data onto my VMS cluster? The Alpha and Integrity
> systems only have 1G Ethernet interfaces but the x86 system, as an ESXi
> guest, has both 1G and 10G interfaces.
>

[snip]

>
> So, with SCP, the best I could manage was a copy around 5 MB/sec. On
> the other hand, a Mac to Mac copy - over the same 1G LAN - achieved over
> 90 MB/sec, which is close to the maximum possible.
>

I assume the Mac to Mac copy is also using SCP ? It's not completely clear.

> With FTP, I could achieve around 14-15 MB/sec.
>
> My last option was NFS. It managed better than 20 MB/sec. It was also
> much easier since I could NFS mount the source and then just use a copy
> command to get the data and copy pre-allocates the destination file on
> its own.
>

Are delayed ACKs turned on or off on the VMS systems ?

> Now here is the part that was surprising to me. As mentioned, the Alpha
> and Integrity systems were copying over a 1G Ethernet path whereas the
> x86 system was using a 10G path (the Mac has both 1G and 10G
> interfaces). Using the exact same source and exact same I/O sizes, the
> Alpha averaged 335 I/Os per second to its target disk, the Integrity
> system managed just over 400 I/Os per second, but the x86 system
> couldn't do any better than 279 I/Os per second - even on a link that
> was 10 times faster. (It also has a faster HBA than the other systems).
>

What type of disk ? Hard drive or SSD ?

Is it possible the VM is causing some additional overhead ?

Anything in the Ethernet error and retry statistics from the usual tools ?

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Re: Some I/O results

<uiluu4$2vho1$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31068&group=comp.os.vms#31068
Path: i2pn2.org!i2pn.org!news.hispagatos.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: seaoh...@hoffmanlabs.invalid (Stephen Hoffman)
Newsgroups: comp.os.vms
Subject: Re: Some I/O results
Date: Fri, 10 Nov 2023 14:05:40 -0500
Organization: HoffmanLabs LLC
Lines: 20
Message-ID: <uiluu4$2vho1$1@dont-email.me>
References: <uilsso$2v37e$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: dont-email.me; posting-host="26d5256f03879bb2c04d6702514f6012";
logging-data="3131137"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18Ruv57xmuOOK2+LRXps7U7DTPcED6LvlY="
User-Agent: Unison/2.2
Cancel-Lock: sha1:eep2qeEIH9RUdO8uXPf4qAVIVmk=
 by: Stephen Hoffman - Fri, 10 Nov 2023 19:05 UTC

On 2023-11-10 18:30:46 +0000, Mark Berryman said:

> I have a Mac with ~24TB of data (consisting of fairly large files) that
> needs to be rolled out to tape. Unfortunately, my tape drives are
> fibre channel (SAN-based) and the only systems that currently have
> access to them are my VMS systems.
> ...
> I really hope support for SAN-based tape drives is coming soon to x86.

ATTO has available 32 Gb and 16 Gb Thunderbolt to Fibre Channel HBAs
for recent Mac models, and also offers Thunderbolt 25 GbE NICs.

What you'd use to write the tape from macOS is fodder for another
discussion or three. LTFS, maybe?

--
Pure Personal Opinion | HoffmanLabs LLC

Re: Some I/O results

<uimkcp$33hcr$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31069&group=comp.os.vms#31069
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mar...@theberrymans.com (Mark Berryman)
Newsgroups: comp.os.vms
Subject: Re: Some I/O results
Date: Fri, 10 Nov 2023 18:11:51 -0700
Organization: A noiseless patient Spider
Lines: 63
Message-ID: <uimkcp$33hcr$1@dont-email.me>
References: <uilsso$2v37e$1@dont-email.me> <uiluln$2vg5l$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 11 Nov 2023 01:11:53 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="358c2e031f3ee123862201a00e1d9930";
logging-data="3261851"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18pmYnAC91lI05TRHCc2Xe8ivWy/3UHRBk="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:+wjlCnWC6kywE0X+CaqPS4kOrc8=
Content-Language: en-US
In-Reply-To: <uiluln$2vg5l$1@dont-email.me>
 by: Mark Berryman - Sat, 11 Nov 2023 01:11 UTC

On 11/10/23 12:01 PM, Simon Clubley wrote:
> On 2023-11-10, Mark Berryman <mark@theberrymans.com> wrote:
>>
>> Now, how to get the data onto my VMS cluster? The Alpha and Integrity
>> systems only have 1G Ethernet interfaces but the x86 system, as an ESXi
>> guest, has both 1G and 10G interfaces.
>>
>
> [snip]
>
>>
>> So, with SCP, the best I could manage was a copy around 5 MB/sec. On
>> the other hand, a Mac to Mac copy - over the same 1G LAN - achieved over
>> 90 MB/sec, which is close to the maximum possible.
>>
>
> I assume the Mac to Mac copy is also using SCP ? It's not completely clear.

Yes, it was SCP.

>
>> With FTP, I could achieve around 14-15 MB/sec.
>>
>> My last option was NFS. It managed better than 20 MB/sec. It was also
>> much easier since I could NFS mount the source and then just use a copy
>> command to get the data and copy pre-allocates the destination file on
>> its own.
>>
>
> Are delayed ACKs turned on or off on the VMS systems ?

I tried it both ways. For this, there was little measurable difference.
Off hand, I can't remember which was slightly better.

>
>> Now here is the part that was surprising to me. As mentioned, the Alpha
>> and Integrity systems were copying over a 1G Ethernet path whereas the
>> x86 system was using a 10G path (the Mac has both 1G and 10G
>> interfaces). Using the exact same source and exact same I/O sizes, the
>> Alpha averaged 335 I/Os per second to its target disk, the Integrity
>> system managed just over 400 I/Os per second, but the x86 system
>> couldn't do any better than 279 I/Os per second - even on a link that
>> was 10 times faster. (It also has a faster HBA than the other systems).
>>
>
> What type of disk ? Hard drive or SSD ?

SAS Hard drive. All storage is located on the SAN.

>
> Is it possible the VM is causing some additional overhead ?

Possible. But if I boot the system bare metal then I don't have access
to the 10G interface (no support in VMS for it). At 1G, the transfer
rates are in the KB/sec range.

>
> Anything in the Ethernet error and retry statistics from the usual tools ?

The Ethernet is completely clean. No errors or retries. Everything is
full duplex with jumbo frames enabled.

Mark

Re: Some I/O results

<uimkq2$33jgs$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31070&group=comp.os.vms#31070
Path: i2pn2.org!i2pn.org!news.hispagatos.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mar...@theberrymans.com (Mark Berryman)
Newsgroups: comp.os.vms
Subject: Re: Some I/O results
Date: Fri, 10 Nov 2023 18:18:56 -0700
Organization: A noiseless patient Spider
Lines: 28
Message-ID: <uimkq2$33jgs$1@dont-email.me>
References: <uilsso$2v37e$1@dont-email.me> <uiluu4$2vho1$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 11 Nov 2023 01:18:58 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="358c2e031f3ee123862201a00e1d9930";
logging-data="3264028"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19Dm7KPtByobr36pVKX6xJiJv/ai5Q+7Jo="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:qrygSlb8V5CvxD2oVqyL97pQWKY=
Content-Language: en-US
In-Reply-To: <uiluu4$2vho1$1@dont-email.me>
 by: Mark Berryman - Sat, 11 Nov 2023 01:18 UTC

On 11/10/23 12:05 PM, Stephen Hoffman wrote:
> On 2023-11-10 18:30:46 +0000, Mark Berryman said:
>
>> I have a Mac with ~24TB of data (consisting of fairly large files)
>> that needs to be rolled out to tape.  Unfortunately, my tape drives
>> are fibre channel (SAN-based) and the only systems that currently have
>> access to them are my VMS systems.
>> ...
>> I really hope support for SAN-based tape drives is coming soon to x86.
>
> ATTO has available 32 Gb and 16 Gb Thunderbolt to Fibre Channel HBAs for
> recent Mac models, and also offers Thunderbolt 25 GbE NICs.
>
> What you'd use to write the tape from macOS is fodder for another
> discussion or three. LTFS, maybe?

The Mac has an ATTO HBA. All of its storage is located on the SAN.
What it lacks is a tape driver or any software to write to tape. Such
packages are available but I haven't found one I can afford as of yet.
The closest I could come was BRU, but BRU doesn't work unless all of the
drives in your tape library can read the same tapes, and mine can't (LTO4
and LTO7 drives).

I have 10G ports but no 25G (or 40 or 100) so I haven't looked at faster
NICs.

Mark

Re: Some I/O results

<uimm2d$33l8p$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=31072&group=comp.os.vms#31072
Path: i2pn2.org!i2pn.org!news.hispagatos.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: FIRST.L...@vmssoftware.com (Robert A. Brooks)
Newsgroups: comp.os.vms
Subject: Re: Some I/O results
Date: Fri, 10 Nov 2023 20:40:28 -0500
Organization: A noiseless patient Spider
Lines: 11
Message-ID: <uimm2d$33l8p$1@dont-email.me>
References: <uilsso$2v37e$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 11 Nov 2023 01:40:29 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="0e8fd2bf74eb3f6f14517e0ee8a9288c";
logging-data="3265817"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19aFriy1UFvNeBQv+EahCw0UrxjheZsZXFxZydiI8JigQ=="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:zEH8xyP45oMlMbOUrCngtZAGfts=
In-Reply-To: <uilsso$2v37e$1@dont-email.me>
Content-Language: en-US
X-Antivirus-Status: Clean
X-Antivirus: Avast (VPS 231110-8, 11/10/2023), Outbound message
 by: Robert A. Brooks - Sat, 11 Nov 2023 01:40 UTC

On 11/10/2023 1:30 PM, Mark Berryman wrote:

> I really hope support for SAN-based tape drives is coming soon to x86.

I'll add that to the list of stuff to do for testing ESXi fibre channel passthrough.

My guess is that it'll just work, but until we smoke it out, all bets are off.

--
-- Rob

Re: Some I/O results

<d34c3a5b-b620-4dac-ae77-13b919a2d40an@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=31074&group=comp.os.vms#31074
X-Received: by 2002:a05:620a:21cd:b0:770:fad0:153 with SMTP id h13-20020a05620a21cd00b00770fad00153mr29265qka.15.1699677538892;
Fri, 10 Nov 2023 20:38:58 -0800 (PST)
X-Received: by 2002:a17:90b:3d4:b0:26d:2b05:4926 with SMTP id
go20-20020a17090b03d400b0026d2b054926mr280565pjb.1.1699677538590; Fri, 10 Nov
2023 20:38:58 -0800 (PST)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!border-2.nntp.ord.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.os.vms
Date: Fri, 10 Nov 2023 20:38:57 -0800 (PST)
In-Reply-To: <uilsso$2v37e$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:46b0:abc0:1290:397f:45b6:f859;
posting-account=OGFVHQoAAAASiNAamRQec8BtkuXxYFnQ
NNTP-Posting-Host: 2600:1700:46b0:abc0:1290:397f:45b6:f859
References: <uilsso$2v37e$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <d34c3a5b-b620-4dac-ae77-13b919a2d40an@googlegroups.com>
Subject: Re: Some I/O results
From: jake.ha...@gmail.com (Jake Hamby (Solid State Jake))
Injection-Date: Sat, 11 Nov 2023 04:38:58 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 83
 by: Jake Hamby (Solid State Jake) - Sat, 11 Nov 2023 04:38 UTC

On Friday, November 10, 2023 at 10:30:52 AM UTC-8, Mark Berryman wrote:
> Now here is the part that was surprising to me. As mentioned, the Alpha
> and Integrity systems were copying over a 1G Ethernet path whereas the
> x86 system was using a 10G path (the Mac has both 1G and 10G
> interfaces). Using the exact same source and exact same I/O sizes, the
> Alpha averaged 335 I/Os per second to its target disk, the Integrity
> system managed just over 400 I/Os per second, but the x86 system
> couldn't do any better than 279 I/Os per second - even on a link that
> was 10 times faster. (It also has a faster HBA than the other systems).
>
> I did check kernel mode usage and there wasn't much difference between
> the 3 systems. It simply appears that I/O on x86 is somewhat slower
> than I would have expected. I'll check again to see if there is any
> difference once the system is built with optimizing compilers.

I'd be curious to hear the results of turning on system profiling with the PCS SDA execlet, as was mentioned in a thread about WASD performance that someone here pointed me to. Volker Halle posted the instructions:

https://groups.google.com/g/comp.os.vms/c/Eu2eP4Yid7Y

"Here are the correct commands for the PCS$SDA PC sampling SDA extension - sorry about that ;-(

$ ANA/SYS
SDA> READ/EXEC ! get better symbolization (routine names)
SDA> PCS LOAD ! PC sampling SDA Execlet
SDA> PCS START TRACE
....
SDA> PCS STOP TRACE
SDA> PCS SHOW TRACE/STAT/MODE=KERNEL
....
SDA> PCS UNLOAD
SDA> EXIT

SDA> PCS SHOW TRACE/STAT/MODE=KERNEL will show the PC values seen most often during the trace period (sorted by decreasing no. of occurrences) and it will symbolize the PC values as routine names or execlet names. This might give you an idea of which kind of kernel mode code is running how often."

It's also interesting to see the results for interrupt, user, supervisor, executive, or all modes, as well as for kernel mode. Since you have access to all three architectures, you're perfectly positioned to find out what's the same and what's different.

BTW, I couldn't save traces to analyze later. "pcs dump filename" only gave me empty files. It looks like the fastest sampling interval you can use is 1 tick (1 ms). The default /TICKDELAY is 10 ticks. What I suspect you'll find on x86 is it's spending proportionally more time in kernel mode in AST handling and SWIS routines related to interrupts.

I'm guessing the SWIS code is new and written in C, and could benefit from a more optimizing compiler, while the AST handling code is older, likely mostly BLISS and MACRO-32, and could also benefit from being compiled with more LLVM optimizations enabled. It's only the x86 assembly routines that *wouldn't* benefit from more optimized compilers, and that's likely a tiny amount of code.

The really scary speedups (but also the most difficult to enable, even on Linux, without generating buggy code due to pointer aliasing or who-knows-what) come when you enable LTO (link-time optimization) and the entire module you're building is compiled as a unit from LLVM bitcode from the individual .o files, enabling interprocedural inlining and other optimization across the entire module.

That's unlikely to arrive on VMS any time soon considering the amount of work required in the linker to enable running the LTO phase, and the object analysis code to be able to deal with both x86-64 code and intermediate LLVM code (depending on flags and different defaults for GCC and LLVM, you may get just the bitcode or you may get the bitcode plus native code in the same file).

Apart from turning on more LLVM optimization phases, I suspect there are hot paths where the code is doing some VAX or Alpha-compatibility thing as a consequence of being written in BLISS or MACRO-32 and assuming it has the Alpha register file or that VAX atomic queue operations are cheap, or some other assumption that holds for Itanium but not x86.

I'm also wondering about code size and whether kernel, driver, executive, and user-mode code working sets aren't fitting in 32 KB of L1I cache because the code isn't being optimized for size. That would be an additional penalty of being compiled without optimization.
