comp.os.vms / Linux vs VMS x86 Performance Part 2: IPC and ZeroMQ

Subject -- Author
* Linux vs VMS x86 Performance Part 2: IPC and ZeroMQ -- Jake Hamby (Solid State Jake)
`* Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- Jake Hamby (Solid State Jake)
 +- Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- John Reagan
 `* Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- Ian Miller
  `* Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- Arne Vajhøj
   `* Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- Jake Hamby (Solid State Jake)
    `- Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ -- Arne Vajhøj

Linux vs VMS x86 Performance Part 2: IPC and ZeroMQ

<490dc67a-7ddf-4be2-9d3a-7b92af4ea2f2n@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=30550&group=comp.os.vms#30550

 by: Jake Hamby (Solid State Jake) - Mon, 16 Oct 2023 05:43 UTC

Last batch of comparisons of OpenVMS to Ubuntu Server 22.04 LTS running on identically configured VirtualBox 7.0.10 VMs (6 CPUs, 16GB RAM) on an Ubuntu Linux host, with an Intel Xeon W-1290P CPU @ 3.70GHz.

I was hoping to compare ZeroMQ speeds, since VSI has a package for it and the release notes mention the benchmarks. The local (inproc) benchmarks are, well, embarrassingly slow.

inproc_lat (128 B): Linux: 9.909 us, VMS: 482995.15 us (48743x)
inproc_lat (512 B): Linux: 9.931 us, VMS: 482995.15 us (48635x)
inproc_lat (1024 B): Linux: 9.966 us, VMS: 482995.20 us (48464x)
inproc_lat (4096 B): Linux: 9.966 us, VMS: 482995.15 us (48464x)

inproc_thr (128 B): Linux: 3353294.667 msg/s, VMS: 2325 msg/s (1442x)
inproc_thr (128 B): Linux: 3433.774 Mb/s, VMS: 2.381 Mb/s (1442x)
inproc_thr (512 B): Linux: 2996375.333 msg/s, VMS: 2325 msg/s (1289x)
inproc_thr (512 B): Linux: 12273.153 Mb/s, VMS: 9.523 Mb/s (1289x)
inproc_thr (1024 B): Linux: 2923643.333 msg/s, VMS: 2325 msg/s (1257x)
inproc_thr (1024 B): Linux: 23950.486 Mb/s, VMS: 19.046 Mb/s (1257x)
inproc_thr (4096 B): Linux: 2540187.667 msg/s, VMS: 2325 msg/s (1093x)
inproc_thr (4096 B): Linux: 83236.869 Mb/s, VMS: 76.186 Mb/s (1093x)

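For reference, the numbers above come from the standard ZeroMQ perf programs shipped with libzmq. A single-process round trip over an inproc endpoint, which is roughly what inproc_lat measures, looks something like the sketch below; this is my own illustration rather than the actual perf tool, and the "inproc://lat" endpoint name, message size, and iteration count are arbitrary.

/* Minimal sketch of a ZeroMQ inproc round-trip latency measurement.
 * Not the libzmq perf tool itself; endpoint name and sizes are arbitrary.
 * Build (Linux): cc zmq_lat.c -lzmq (on VMS, link against the VSI ZeroMQ kit).
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <zmq.h>

#define MSG_SIZE   128
#define ROUNDTRIPS 10000

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    void *req = zmq_socket(ctx, ZMQ_REQ);
    char buf[MSG_SIZE];
    struct timespec t0, t1;
    double usec;
    int i;

    memset(buf, 'x', sizeof buf);
    zmq_bind(rep, "inproc://lat");        /* for inproc, bind before connect */
    zmq_connect(req, "inproc://lat");

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ROUNDTRIPS; i++) {
        zmq_send(req, buf, sizeof buf, 0);    /* request */
        zmq_recv(rep, buf, sizeof buf, 0);
        zmq_send(rep, buf, sizeof buf, 0);    /* reply */
        zmq_recv(req, buf, sizeof buf, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("average one-way latency: %.3f us\n", usec / ROUNDTRIPS / 2.0);

    zmq_close(req);
    zmq_close(rep);
    zmq_ctx_term(ctx);
    return 0;
}
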
I'd like to report the results for ZeroMQ over TCP, but the local_ and remote_ test apps both crash inside the ZeroMQ library trying to connect or bind, respectively.

The more interesting IPC benchmark suite is one that I forked a year or so ago and ported to VMS to test on Itanium:

https://github.com/jhamby/vms-ipc_benchmark/

The tests that worked on VMS are pipe, socketpair, TCP, and UDP. Here are the results with different write sizes. Note that DECC$STREAM_PIPE must be enabled or the pipe() benchmark is 2x-3x slower than these results (closer to the VMS UDP results, actually).

pipe (128 B): Linux: 305 MB/s, VMS: 17 MB/s (17.9x)
pipe (128 B): Linux: 2500748 msg/s, VMS: 139019 msg/s (18.0x)
pipe (256 B): Linux: 574 MB/s, VMS: 34 MB/s (16.9x)
pipe (256 B): Linux: 2350066 msg/s, VMS: 138256 msg/s (17.0x)
pipe (512 B): Linux: 1164 MB/s, VMS: 67 MB/s (17.4x)
pipe (512 B): Linux: 2383168 msg/s, VMS: 136845 msg/s (17.4x)
pipe (1024 B): Linux: 2003 MB/s, VMS: 134 MB/s (14.9x)
pipe (1024 B): Linux: 2051325 msg/s, VMS: 137307 msg/s (14.9x)
pipe (2048 B): Linux: 3609 MB/s, VMS: 263 MB/s (13.7x)
pipe (2048 B): Linux: 1847980 msg/s, VMS: 134540 msg/s (13.7x)
pipe (4096 B): Linux: 5890 MB/s, VMS: 339 MB/s (17.4x)
pipe (4096 B): Linux: 1507907 msg/s, VMS: 86663 msg/s (17.4x)

socketpair (128 B): Linux: 161 MB/s, VMS: 11 MB/s (14.6x)
socketpair (128 B): Linux: 1317178 msg/s, VMS: 88159 msg/s (14.9x)
socketpair (256 B): Linux: 317 MB/s, VMS: 20 MB/s (15.9x)
socketpair (256 B): Linux: 1299375 msg/s, VMS: 79961 msg/s (16.2x)
socketpair (512 B): Linux: 627 MB/s, VMS: 34 MB/s (18.4x)
socketpair (512 B): Linux: 1283596 msg/s, VMS: 69000 msg/s (18.6x)
socketpair (1024 B): Linux: 1186 MB/s, VMS: 51 MB/s (23.3x)
socketpair (1024 B): Linux: 1214167 msg/s, VMS: 52067 msg/s (23.3x)
socketpair (2048 B): Linux: 2147 MB/s, VMS: 96 MB/s (22.4x)
socketpair (2048 B): Linux: 1099505 msg/s, VMS: 49435 msg/s (22.2x)
socketpair (4096 B): Linux: 3755 MB/s, VMS: 141 MB/s (26.6x)
socketpair (4096 B): Linux: 961280 msg/s, VMS: 36015 msg/s (26.7x)

tcp (128 B): Linux: 255 MB/s, VMS: 11 MB/s (23.1x)
tcp (128 B): Linux: 2091401 msg/s, VMS: 88029 msg/s (23.8x)
tcp (256 B): Linux: 502 MB/s, VMS: 20 MB/s (25.1x)
tcp (256 B): Linux: 2055034 msg/s, VMS: 80885 msg/s (25.4x)
tcp (512 B): Linux: 966 MB/s, VMS: 34 MB/s (28.4x)
tcp (512 B): Linux: 1977886 msg/s, VMS: 70080 msg/s (28.2x)
tcp (1024 B): Linux: 1750 MB/s, VMS: 50 MB/s (35.0x)
tcp (1024 B): Linux: 1791894 msg/s, VMS: 50536 msg/s (35.5x)
tcp (2048 B): Linux: 2944 MB/s, VMS: 92 MB/s (32.0x)
tcp (2048 B): Linux: 1507203 msg/s, VMS: 47104 msg/s (32.0x)
tcp (4096 B): Linux: 4635 MB/s, VMS: 173 MB/s (26.8x)
tcp (4096 B): Linux: 1186551 msg/s, VMS: 44168 msg/s (26.9x)

udp (128 B): Linux: 88 MB/s, VMS: 5 MB/s (17.6x)
udp (128 B): Linux: 722460 msg/s, VMS: 40199 msg/s (18.0x)
udp (256 B): Linux: 173 MB/s, VMS: 9 MB/s (19.2x)
udp (256 B): Linux: 708782 msg/s, VMS: 38315 msg/s (18.5x)
udp (512 B): Linux: 357 MB/s, VMS: 19 MB/s (18.8x)
udp (512 B): Linux: 731181 msg/s, VMS: 38281 msg/s (19.1x)
udp (1024 B): Linux: 675 MB/s, VMS: 37 MB/s (18.2x)
udp (1024 B): Linux: 690771 msg/s, VMS: 38146 msg/s (18.1x)
udp (2048 B): Linux: 1267 MB/s, VMS: 72 MB/s (17.6x)
udp (2048 B): Linux: 648895 msg/s, VMS: 37103 msg/s (17.5x)
udp (4096 B): Linux: 2225 MB/s, VMS: 126 MB/s (17.7x)
udp (4096 B): Linux: 569686 msg/s, VMS: 32201 msg/s (17.7x)

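All four transports above are measured the same way: a writer pushes fixed-size buffers as fast as it can, a reader drains them, and the elapsed time gives MB/s and msg/s. A minimal pipe() version of that loop, using two threads rather than the two processes a real harness would use, might look like the following sketch (my own illustration, not the code from the repo linked above):

/* Minimal sketch of a pipe() throughput test: one thread writes fixed-size
 * buffers, another drains them, and the elapsed time gives MB/s and msg/s.
 * Illustration only -- not the code from the vms-ipc_benchmark repo.
 * On OpenVMS, enable the CRTL feature first or pipe() is much slower:
 *   $ DEFINE DECC$STREAM_PIPE ENABLE
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <time.h>

#define BUF_SIZE 1024
#define NUM_MSGS 200000L

static int fds[2];

static void *reader(void *arg)
{
    char buf[BUF_SIZE];
    (void)arg;
    while (read(fds[0], buf, sizeof buf) > 0)
        ;                               /* drain until the writer closes */
    return NULL;
}

int main(void)
{
    char buf[BUF_SIZE];
    pthread_t tid;
    struct timespec t0, t1;
    double secs;
    long i;

    if (pipe(fds) != 0) { perror("pipe"); return 1; }
    memset(buf, 'x', sizeof buf);
    pthread_create(&tid, NULL, reader, NULL);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NUM_MSGS; i++)
        write(fds[1], buf, sizeof buf);
    close(fds[1]);                      /* EOF for the reader */
    pthread_join(tid, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s, %.0f msg/s\n",
           NUM_MSGS * (double)BUF_SIZE / (1024.0 * 1024.0) / secs,
           NUM_MSGS / secs);
    return 0;
}
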
There's a lot of room for improvement, to say the least.

Cheers,
Jake Hamby

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<8a294dd4-0ad0-474d-9461-1c943f5d9c5fn@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=30561&group=comp.os.vms#30561

 by: Jake Hamby (Solid State Jake) - Mon, 16 Oct 2023 18:28 UTC

On Sunday, October 15, 2023 at 10:43:56 PM UTC-7, Jake Hamby wrote:
> Last batch of comparisons of OpenVMS to Ubuntu Server 22.04 LTS running on identically configured VirtualBox 7.0.10 VMs (6 CPUs, 16GB RAM) on an Ubuntu Linux host, with an Intel Xeon W-1290P CPU @ 3.70GHz.

Oops, I meant of course "Part 3". I'm still pondering what might be causing a 17-18x slowdown for VMS relative to Linux in the same VM for pipe() and UDP sockets, with up to 35x slowdown for TCP sockets in the worst case (1K buffer sizes). It's also curious that TCP sockets are faster than socketpair() across the board on Linux, but the same speed on VMS, except for 4K buffers, where TCP pulls ahead for some reason.

This is within the same system, so there's no network or any other device overhead. My first guess as to what might be going on is an excessive number of data copies. My second guess would be an excessive number of context switches, or maybe not enough context switches? My third guess would be low-level IPC code that hasn't been optimized yet for x86, and my fourth guess would be VMS may be doing a lot of cache flushing.

Perhaps I'll add a new IPC benchmark that uses a VMS mailbox and $QIO calls. I'm curious how many can be batched together and what an optimal number of outstanding requests would be for processing a linear stream of bytes and/or fixed-size records.
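
Roughly the shape I have in mind for the writer side, as an untested sketch (the mailbox name, message size, and counts are placeholders; a real test would pair this with a reader doing IO$_READVBLK on the same mailbox and time the loop):

/* Rough, untested sketch of the writer side of a mailbox + $QIO test.
 * Creates a temporary mailbox and pushes fixed-size messages with $QIOW.
 * Name, sizes, and counts are placeholders; without a reader attached the
 * writes will stall once the buffer quota fills, and a real benchmark
 * would also check the IOSB status and time the loop.
 */
#include <string.h>
#include <descrip.h>
#include <iodef.h>
#include <iosbdef.h>
#include <starlet.h>
#include <lib$routines.h>

#define MSG_SIZE 512
#define NUM_MSGS 100000

int main(void)
{
    $DESCRIPTOR(mbxname, "JAKE_IPC_TEST_MBX");   /* placeholder logical name */
    unsigned short chan;
    IOSB iosb;
    char buf[MSG_SIZE];
    unsigned int status;
    int i;

    /* Temporary mailbox: 512-byte max message, 64 KB buffer quota. */
    status = sys$crembx(0, &chan, MSG_SIZE, 65536, 0, 0, &mbxname, 0);
    if (!(status & 1)) lib$signal(status);

    memset(buf, 'x', sizeof buf);
    for (i = 0; i < NUM_MSGS; i++) {
        status = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                          buf, MSG_SIZE, 0, 0, 0, 0);
        if (!(status & 1)) lib$signal(status);
    }

    sys$dassgn(chan);
    return 0;
}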

Regards,
Jake Hamby

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<4ee929a6-dfd5-49ca-8321-9224218f6cfcn@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=30564&group=comp.os.vms#30564

 by: John Reagan - Tue, 17 Oct 2023 00:56 UTC

On Monday, October 16, 2023 at 2:28:48 PM UTC-4, Jake Hamby (Solid State Jake) wrote:
> Oops, I meant of course "Part 3". I'm still pondering what might be causing a 17-18x slowdown for VMS relative to Linux in the same VM for pipe() and UDP sockets, with up to 35x slowdown for TCP sockets in the worst case (1K buffer sizes). It's also curious that TCP sockets are faster than socketpair() across the board on Linux, but the same speed on VMS, except for 4K buffers, where TCP pulls ahead for some reason.
pipe() has historically been abysmally slow. That isn't a shock to me.

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<6f8bc584-4cb2-4dfe-a8b4-2ff645eb228cn@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=30565&group=comp.os.vms#30565

 by: Ian Miller - Tue, 17 Oct 2023 09:30 UTC

On Monday, October 16, 2023 at 7:28:48 PM UTC+1, Jake Hamby (Solid State Jake) wrote:
> Perhaps I'll add a new IPC benchmark that uses a VMS mailbox and $QIO calls. I'm curious how many can be batched together and what an optimal number of outstanding requests would be for processing a linear stream of bytes and/or fixed-size records.

VMS mailboxes are slow.

You could try ICC$ system services.

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<ugn4cf$37n93$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=30566&group=comp.os.vms#30566

 by: Arne Vajhøj - Tue, 17 Oct 2023 23:12 UTC

On 10/17/2023 5:30 AM, Ian Miller wrote:
> On Monday, October 16, 2023 at 7:28:48 PM UTC+1, Jake Hamby (Solid
> State Jake) wrote:
>> Perhaps I'll add a new IPC benchmark that uses a VMS mailbox and
>> $QIO calls. I'm curious how many can be batched together and what
>> an optimal number of outstanding requests would be for processing a
>> linear stream of bytes and/or fixed-size records.
>
> VMS mailboxes are slow.
>
> You could try ICC$ system services.

That is also my thinking.

Mailboxes are a solution for sending maybe 100-byte messages at a
rate of maybe 10 per minute.

Trying to stream megabytes through a mailbox will not perform well.

SYS$ICC_* system services may potentially perform well as they are
intended for that kind of traffic. I have never tested their
performance, so I don't know how well they actually perform.

https://www.vajhoej.dk/arne/articles/vmsipc.html#icc has some simple
examples of usage.

Arne

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<5ce48cc7-e9ef-4e73-ae31-8368008959c0n@googlegroups.com>

https://www.novabbs.com/computers/article-flat.php?id=30572&group=comp.os.vms#30572

 by: Jake Hamby (Solid State Jake) - Wed, 18 Oct 2023 03:20 UTC

On Tuesday, October 17, 2023 at 4:12:20 PM UTC-7, Arne Vajhøj wrote:
> That is also my thoughts.
>
> Mailboxes are a solution for sending maybe 100 byte messages at a
> rate of maybe 10 per minute.
>
> Trying to stream megabytes through a mailbox will not perform well.
>
> SYS$ICC_* system services may potentially perform well as they are
> intended for that kind of traffic. I have never tested their
> performance, so I don't know how well they actually perform.
>
> https://www.vajhoej.dk/arne/articles/vmsipc.html#icc has some simple
> examples of usage.

The examples on your site are very useful, thanks! Since I had no luck getting the SysV shared memory + SysV semaphore IPC benchmark to work (it appears to deadlock on the read/write semaphores), I think I'll try adding a new IPC benchmark by modifying the SysV IPC code to use SYS$ICC_*. It's unfortunate that the feature requires the SYSNAM privilege, and also that the maximum message size is 1000 bytes.

POSIX pipe() is fast enough for my purposes (updating bash and the GNV apps). For a distributed server, I'm thinking that JVM-based frameworks would be the best way to optimize message passing within a machine, because you're working with inherently shared memory and kernel threads.

I've been curious about Erlang and the Elixir language, but they're just a bit too far off the beaten path for me. After discovering that the JVM on OpenVMS works best with the system and user WSMAX values set as large as possible (I ended up using 4000000 pages, since AUTOGEN apparently doesn't allow working sets larger than 2 GB), I'm fairly impressed with the JVM for running Scala code and something like Akka Actors:

https://doc.akka.io/docs/akka/current/typed/guide/actors-intro.html

What I like even better is that the Akka framework has its own clustering facility, Akka Cluster, with all the features you'd expect from a fault-tolerant distributed service framework:

https://doc.akka.io/docs/akka/current/typed/cluster.html

There's a web application framework called Play (terrible name for SEO!) that's built on Akka and that I'm definitely interested in learning more about:

https://www.playframework.com/

Regards,
Jake

Re: Linux vs VMS x86 Performance Part 3: IPC and ZeroMQ

<ugoodq$3mdn2$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=30579&group=comp.os.vms#30579

 by: Arne Vajhøj - Wed, 18 Oct 2023 14:00 UTC

On 10/17/2023 11:20 PM, Jake Hamby (Solid State Jake) wrote:
> I'm fairly impressed with the JVM for running Scala code and something like Akka Actors:
>
> https://doc.akka.io/docs/akka/current/typed/guide/actors-intro.html

Note that using Akka does not require programming in Scala - you
can use it from Java code.

Admittedly, using it from Scala results in way more elegant code than
using it from Java.

Akka was pretty hot some years ago. Not so much today.

The switch from Apache license to BSL may not have helped.

> There's a Web application framework called Play (terrible name for SEO!) that's built on Akka and I'm definitely interested in learning about:
>
> https://www.playframework.com/

Play is one of the big users of Scala and Akka.

Play uses the Ruby on Rails (RoR) model, like many other modern MVC
frameworks (including Grails, which I mentioned previously).

Arne
