comp.arch.embedded / Multithreaded disk access

Subject -- Author
* Multithreaded disk access -- Don Y
+* Re: Multithreaded disk access -- Richard Damon
|`* Re: Multithreaded disk access -- Don Y
| `* Re: Multithreaded disk access -- Richard Damon
|  `- Re: Multithreaded disk access -- Don Y
+* Re: Multithreaded disk access -- Dimiter_Popoff
|`* Re: Multithreaded disk access -- Don Y
| `- Re: Multithreaded disk access -- Dimiter_Popoff
`* Re: Multithreaded disk access -- Brett
 `* Re: Multithreaded disk access -- Don Y
  +- Re: Multithreaded disk access -- Don Y
  +* Re: Multithreaded disk access -- Dimiter_Popoff
  |`* Re: Multithreaded disk access -- Don Y
  | `* Re: Multithreaded disk access -- Dimiter_Popoff
  |  `* Re: Multithreaded disk access -- Don Y
  |   +* Re: Multithreaded disk access -- Dimiter_Popoff
  |   |+- Re: Multithreaded disk access -- Don Y
  |   |`* Re: Multithreaded disk access -- antispam
  |   | `- Re: Multithreaded disk access -- Dimiter_Popoff
  |   `* Re: Multithreaded disk access -- Don Y
  |    `* Re: Multithreaded disk access -- Richard Damon
  |     `- Re: Multithreaded disk access -- Don Y
  `- Re: Multithreaded disk access -- Clifford Heath

Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Newsgroups: comp.arch.embedded
Date: Fri, 15 Oct 2021 07:08:23 -0700
Message-ID: <skc214$tm7$1@dont-email.me>

As a *rough* figure, what would you expect the bandwidth of
a disk drive (spinning rust) to do as a function of number of
discrete files being accessed, concurrently?

E.g., if you can monitor the rough throughput of each
stream and sum them, will they sum to 100% of the drive's
bandwidth? 90%? 110? etc.

[Note that drives have read-ahead and write caches so
the speed of the media might not bleed through to the
application layer. And, filesystem code also throws
a wrench in the works. Assume caching in the system
is disabled/ineffective.]

Said another way, what's a reasonably reliable way of
determining when you are I/O bound by the hardware
and when more threads won't result in more performance?
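
[A minimal way to measure that, sketched below assuming POSIX (pthreads,
read(2)); the file names on the command line stand in for whatever
streams you want to model: spawn one reader thread per file, let them
run for a fixed window, and sum the per-thread byte counts. Re-running
with 1, 2, 3, ... streams shows whether the aggregate keeps climbing or
flattens at the drive's limit. The stop flags and counters are
deliberately unsynchronized -- good enough for a probe, not production.]

/* Throughput probe: one reader thread per file named on the command
   line; sum per-stream byte counts over a fixed window. Sketch only:
   assumes POSIX, big files, and caching defeated elsewhere. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFSZ  (1 << 20)        /* 1 MiB per read() */
#define WINDOW 10               /* measurement window, seconds */

struct stream { const char *path; long long bytes; int stop; };

static void *reader(void *arg)
{
    struct stream *s = arg;
    char *buf = malloc(BUFSZ);
    int fd = open(s->path, O_RDONLY);

    if (fd < 0 || !buf) return NULL;
    while (!s->stop) {
        ssize_t n = read(fd, buf, BUFSZ);
        if (n > 0) s->bytes += n;
        else lseek(fd, 0, SEEK_SET);    /* wrap at EOF; keep streaming */
    }
    close(fd);
    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    int n = argc - 1;                   /* one stream per file argument */
    struct stream *s = calloc(n, sizeof *s);
    pthread_t *t = malloc(n * sizeof *t);
    long long total = 0;

    for (int i = 0; i < n; i++) {
        s[i].path = argv[i + 1];
        pthread_create(&t[i], NULL, reader, &s[i]);
    }
    sleep(WINDOW);
    for (int i = 0; i < n; i++) s[i].stop = 1;
    for (int i = 0; i < n; i++) {
        pthread_join(t[i], NULL);
        total += s[i].bytes;
    }
    printf("%d stream(s): %.1f MB/s aggregate\n", n, total / 1e6 / WINDOW);
    return 0;
}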

Re: Multithreaded disk access

From: Richard Damon <Rich...@Damon-Family.org>
Date: Fri, 15 Oct 2021 11:38:58 -0400
Message-ID: <mghaJ.90933$jm6.81072@fx07.iad>
In-Reply-To: <skc214$tm7$1@dont-email.me>

On 10/15/21 10:08 AM, Don Y wrote:
> As a *rough* figure, what would you expect the bandwidth of
> a disk drive (spinning rust) to do as a function of number of
> discrete files being accessed, concurrently?
>
> E.g., if you can monitor the rough throughput of each
> stream and sum them, will they sum to 100% of the drive's
> bandwidth?  90%?  110?  etc.
>
> [Note that drives have read-ahead and write caches so
> the speed of the media might not bleed through to the
> application layer.  And, filesystem code also throws
> a wrench in the works.  Assume caching in the system
> is disabled/ineffective.]
>
> Said another way, what's a reasonably reliable way of
> determining when you are I/O bound by the hardware
> and when more threads won't result in more performance?

You know that you can't actually get data off the media faster than the
fundamental data rate of the media.

As you mention, cache can give an apparent rate faster than the media,
but you seem to be willing to assume that caching doesn't affect your
rate, and each chunk will only be returned once.

Pathological access patterns can reduce this rate dramatically; the
worst case can result in rates of only a few percent of the media rate,
if you force significant seeks between each sector read (and overload
the buffering so it can't hold larger reads for a given stream).

Non-pathological access can often result in near 100% of the access rate.

The best test of whether you are I/O bound: if the I/O system is
constantly in use, and every I/O request has another pending when it
finishes, then you are totally I/O bound.
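
[One concrete way to watch for "constantly in use", under a Linux
assumption that is mine, not Richard's: field 13 of /proc/diskstats
(io_ticks) counts milliseconds the device had at least one request in
flight, and field 12 is the number of requests currently queued.
Sampling twice over a window gives a utilization figure; near 100%,
with a standing queue, is the condition described above.]

/* Crude "is the disk constantly in use?" check; assumes Linux. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int disk_sample(const char *dev, long *io_ms, long *inflight)
{
    FILE *f = fopen("/proc/diskstats", "r");
    char line[512], name[32];
    int found = 0;

    while (f && fgets(line, sizeof line, f)) {
        long v[10];
        if (sscanf(line, "%*d %*d %31s %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld",
                   name, &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]) == 11
            && strcmp(name, dev) == 0) {
            *inflight = v[8];           /* field 12: I/Os in progress */
            *io_ms = v[9];              /* field 13: io_ticks, ms busy */
            found = 1;
            break;
        }
    }
    if (f) fclose(f);
    return found;
}

int main(void)
{
    const char *dev = "sda";            /* hypothetical device name */
    long t0, t1, q;

    if (!disk_sample(dev, &t0, &q)) return 1;
    sleep(5);                           /* 5000 ms window */
    if (!disk_sample(dev, &t1, &q)) return 1;
    printf("%s: %.0f%% busy, %ld request(s) in flight\n",
           dev, (t1 - t0) / 5000.0 * 100.0, q);
    return 0;
}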

Re: Multithreaded disk access

From: Dimiter_Popoff <dp...@tgi-sci.com>
Date: Fri, 15 Oct 2021 18:46:53 +0300
Message-ID: <skc7pf$fnv$1@dont-email.me>
In-Reply-To: <skc214$tm7$1@dont-email.me>

On 10/15/2021 17:08, Don Y wrote:
> As a *rough* figure, what would you expect the bandwidth of
> a disk drive (spinning rust) to do as a function of number of
> discrete files being accessed, concurrently?
>
> E.g., if you can monitor the rough throughput of each
> stream and sum them, will they sum to 100% of the drive's
> bandwidth?  90%?  110?  etc.
>
> [Note that drives have read-ahead and write caches so
> the speed of the media might not bleed through to the
> application layer.  And, filesystem code also throws
> a wrench in the works.  Assume caching in the system
> is disabled/ineffective.]

If caching is disabled things can get really bad quite quickly:
think of updating directory entries to reflect modification/access
dates, file sizes, scattering etc., and of allocation table accesses.
E.g., in dps on a larger disk partition (say >100 gigabytes) the
first CAT (cluster allocation table) access after boot takes some
noticeable time, a second maybe; then it stops being noticeable at
all, as the CAT is updated rarely and on a modified-area basis only
(this on a processor capable of 20 Mbytes/second; dps needs the
entire CAT to allocate new space in order to do its (enhanced)
worst-fit scheme). IOW, if you torture the disk with constant seeks
and scattered accesses you can slow it down from somewhat to a lot,
depending on way too many factors to be worth wondering about.

>
> Said another way, what's a reasonably reliable way of
> determining when you are I/O bound by the hardware
> and when more threads won't result in more performance?

Just try it out for some time and make your pick. Recently I did
that dfs (distributed file system, over tcp) for dps and had to
watch much of this going on; at some point you reach something
between 50 and 100% of the hardware limit, depending on the file
sizes you copy and whatever other overhead you can think of.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Fri, 15 Oct 2021 09:00:11 -0700
Message-ID: <skc8iq$vdn$1@dont-email.me>
In-Reply-To: <mghaJ.90933$jm6.81072@fx07.iad>

On 10/15/2021 8:38 AM, Richard Damon wrote:
> You know that you can't actually get data off the media faster than the
> fundamental data rate of the media.

Yes, but you don't know that rate *and* that rate varies based on
"where" your accesses land on the physical medium (e.g., ZDR,
shingled drives, etc.).

> As you mention, cache can give an apparent rate faster than the media, but you
> seem to be willing to assume that caching doesn't affect your rate, and each
> chunk will only be returned once.

Cache in the filesystem code will be counterproductive. Cache in
the drive may be a win for some accesses and a loss for others
(e.g., if the drive read ahead thinking the next read was going to
be sequential with the last -- and that proves to be wrong -- the
drive may have missed an opportunity to respond more quickly to the
ACTUAL access that follows).

[I'm avoiding talking about reads AND writes just to keep the
discussion complexity manageable -- to avoid having to introduce
caveats with every statement]

> Pathological access patterns can reduce this rate dramatically, and worse case
> can result in rates of only a few percent of this factor if you force
> significant seeks between each sector read (and overload the buffering so it
> can't hold larger reads for a given stream).

Exactly. But, you don't necessarily know where your next access will
take you. This variation in throughput is what makes defining
"i/o bound" tricky; if the access patterns at some instant (instant
being a period over which you base your decision) make the drive
look slow, then you would opt NOT to spawn a new thread to take
advantage of excess throughput. Similarly, if the drive "looks"
serendipitously fast, you may spawn another thread and its
accesses will eventually conflict with those of the first thread
to lower overall throughput.

> Non-Pathological access can often result in near 100% of the access rate.
>
> The best test of if you are I/O bound is if the I/O system is constantly in
> use, and every I/O request has another pending when it finishes, then you are
> totally I/O bound.

But, if you make that assessment when the access pattern is "unfortunate",
you erroneously conclude the disk is at its capacity. And, vice versa.

Without control over the access patterns, it seems like there is no
reliable strategy for determining when another thread can be
advantageous (?)

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Fri, 15 Oct 2021 09:19:25 -0700
Message-ID: <skc9mu$lp5$1@dont-email.me>
In-Reply-To: <skc7pf$fnv$1@dont-email.me>

On 10/15/2021 8:46 AM, Dimiter_Popoff wrote:
> On 10/15/2021 17:08, Don Y wrote:
>
> If caching is disabled things can get really bad quite quickly,
> think on updating directory entries to reflect modification/access
> dates, file sizes, scattering etc., think also allocation
> table accesses etc.

My point re: filesystem cache (not the on-board disk cache) was that
the user objects accessed will only be visited once. So, no value
to caching *them* in the filesystem's buffers.

> E.g. in dps on a larger disk partition
> (say >100 gigabytes) the first CAT (cluster allocation table)
> access after boot takes some noticeable time, a second maybe;
> then it stops being noticeable at all as the CAT is updated
> rarely and on a modified area basis only (this on a a processor
> capable of 20 Mbytes/second) (dps needs the entire CAT to allocate
> new space in order to do its (enhanced) worst fit scheme).
> IOW if you torture the disk with constant seeks and scattered
> accesses you can slow it down from somewhat to a lot, depends
> on way too many factors to be worth wondering about.

I'm trying NOT to be aware of any particulars of the specific
filesystem *type* (FAT*, NTFS, *BSD, etc.) and make decisions
just from high level observations of disk performance.

>> Said another way, what's a reasonably reliable way of
>> determining when you are I/O bound by the hardware
>> and when more threads won't result in more performance?
>
> Just try it out for some time and make your pick. Recently
> I did that dfs (distributed file system, over tcp) for dps
> and had to watch much of this going on, at some point you
> reach something between 50 and 100% of the hardware limit,
> depending on file sizes you copy and who knows what else
> overhead you can think of.

I think the cost of any extra complexity in the algorithm
(to dynamically try to optimize number of threads) is
hard to justify -- given no control over the actual
media. I.e., it seems like it's best to just aim for
"simple" and live with whatever throughput you get...

Re: Multithreaded disk access

From: Dimiter_Popoff <dp...@tgi-sci.com>
Date: Fri, 15 Oct 2021 19:28:49 +0300
Message-ID: <skca82$t04$1@dont-email.me>
In-Reply-To: <skc9mu$lp5$1@dont-email.me>

On 10/15/2021 19:19, Don Y wrote:
> On 10/15/2021 8:46 AM, Dimiter_Popoff wrote:
> ....
>> Just try it out for some time and make your pick. Recently
>> I did that dfs (distributed file system, over tcp) for dps
>> and had to watch much of this going on, at some point you
>> reach something between 50 and 100% of the hardware limit,
>> depending on file sizes you copy and who knows what else
>> overhead you can think of.
>
> I think the cost of any extra complexity in the algorithm
> (to dynamically try to optimize number of threads) is
> hard to justify -- given no control over the actual
> media.  I.e., it seems like it's best to just aim for
> "simple" and live with whatever throughput you get...

I meant going the simplest way, not adding algorithms.
Just leave it for now and have a few systems running,
look at what is going on and pick some sane figure,
perhaps try it out either way before you settle.

Re: Multithreaded disk access

From: Richard Damon <Rich...@Damon-Family.org>
Date: Fri, 15 Oct 2021 12:48:48 -0400
Message-ID: <QhiaJ.29364$nh7.20636@fx22.iad>
In-Reply-To: <skc8iq$vdn$1@dont-email.me>

On 10/15/21 12:00 PM, Don Y wrote:
> On 10/15/2021 8:38 AM, Richard Damon wrote:
>> You know that you can't actually get data off the media faster than
>> the fundamental data rate of the media.
>
> Yes, but you don't know that rate *and* that rate varies based on
> "where" you're accesses land on the physical medium (e.g., ZDR,
> shingled drives, etc.)

But all of these still have a 'maximum' rate, so you can still define a
maximum. It does mean that the 'expected' rate you can get is more
variable.

>
>> As you mention, cache can give an apparent rate faster than the media,
>> but you seem to be willing to assume that caching doesn't affect your
>> rate, and each chunk will only be returned once.
>
> Cache in the filesystem code will be counterproductive.  Cache in
> the drive may be a win for some accesses and a loss for others
> (e.g., if the drive read ahead thinking the next read was going to
> be sequential with the last -- and that proves to be wrong -- the
> drive may have missed an opportunity to respond more quickly to the
> ACTUAL access that follows).
>
> [I'm avoiding talking about reads AND writes just to keep the
> discussion complexity manageable -- to avoid having to introduce
> caveats with every statement]
>

Yes, the drive might try to read ahead and hurt itself, or it might not.
That is mostly out of your control.

>> Pathological access patterns can reduce this rate dramatically, and
>> worse case can result in rates of only a few percent of this factor if
>> you force significant seeks between each sector read (and overload the
>> buffering so it can't hold larger reads for a given stream).
>
> Exactly.  But, you don't necessarily know where your next access will
> take you.  This variation in throughput is what makes defining
> "i/o bound" tricky;  if the access patterns at some instant (instant
> being a period over which you base your decision) make the drive
> look slow, then you would opt NOT to spawn a new thread to take
> advantage of excess throughput.  Similarly, if the drive "looks"
> serendipitously fast, you may spawn another thread and its
> accesses will eventually conflict with those of the first thread
> to lower overall throughput.

>
>> Non-Pathological access can often result in near 100% of the access rate.
>>
>> The best test of if you are I/O bound is if the I/O system is
>> constantly in use, and every I/O request has another pending when it
>> finishes, then you are totally I/O bound.
>
> But, if you make that assessment when the access pattern is "unfortunate",
> you erroneously conclude the disk is at its capacity.  And, vice versa.
>
> Without control over the access patterns, it seems like there is no
> reliable strategy for determining when another thread can be
> advantageous (?)

Yes, adding more threads might change the access pattern. It will TEND
to make the pattern less sequential, and thus more towards that
pathological case (and thus more threads actually decrease the rate at
which you can do I/O, and so slow down your I/O-bound rate). It is
possible that more threads just happen to make things more sequential:
if the system can see that one thread wants sector N and another wants
sector N+1, something can schedule the reads together and drop a seek.
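
[A toy illustration of that reordering, with made-up block numbers:
sort the pending queue by block address and coalesce adjacent runs, so
the N and N+1 requests from two different threads reach the drive as
one transfer. A real elevator also weighs the current head position;
this shows only the flavor of it.]

/* Sort pending requests by LBA and merge adjacent ones. Sketch. */
#include <stdio.h>
#include <stdlib.h>

struct req { long lba; int len; };          /* start block, block count */

static int by_lba(const void *a, const void *b)
{
    long d = ((const struct req *)a)->lba - ((const struct req *)b)->lba;
    return (d > 0) - (d < 0);
}

int main(void)
{
    struct req q[] = { {900, 8}, {100, 8}, {108, 8}, {500, 8} };
    int n = sizeof q / sizeof q[0], m = 0;

    qsort(q, n, sizeof q[0], by_lba);
    for (int i = 1; i < n; i++) {           /* merge runs of adjacent blocks */
        if (q[i].lba == q[m].lba + q[m].len)
            q[m].len += q[i].len;
        else
            q[++m] = q[i];
    }
    for (int i = 0; i <= m; i++)
        printf("issue: lba %ld, %d blocks\n", q[i].lba, q[i].len);
    return 0;
}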

Predicting that sort of behavior can't be done 'in the abstract'. You
need to think about the details of the system.

As a general principle, if the I/O system is saturated, the job is I/O
bound. Adding more threads will only help if you have the resources to
queue up more requests and can optimize the order of servicing them to
be more efficient with I/O. Predicting that means you need to know and
have some control over the access pattern.

Note, part of this is being able to trade memory to improve I/O speed.
If you know that EVENTUALLY you will want the next sector after the one
you are reading, reading that now and caching it will be a win, but only
if you will be able to use that data before you need to claim that
memory for other uses. This sort of improvement really does require
knowing the very details you are trying to assume you don't know, so
you are limiting your ability to make accurate decisions.

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Fri, 15 Oct 2021 11:40:25 -0700
Message-ID: <skchv8$ot0$1@dont-email.me>
In-Reply-To: <QhiaJ.29364$nh7.20636@fx22.iad>

On 10/15/2021 9:48 AM, Richard Damon wrote:
> On 10/15/21 12:00 PM, Don Y wrote:
>> On 10/15/2021 8:38 AM, Richard Damon wrote:
>>> You know that you can't actually get data off the media faster than the
>>> fundamental data rate of the media.
>>
>> Yes, but you don't know that rate *and* that rate varies based on
>> "where" you're accesses land on the physical medium (e.g., ZDR,
>> shingled drives, etc.)
>
> But all of these still have a 'maximum' rate, so you can still define a
> maximum. It does say that the 'expected' rate you can get gets more variable.

But only if you have control over the hardware.

How long will a "backup" take on your PC? Today?
Tomorrow? Last week?

If you removed the disk and put it in another PC,
how would those figures change?

If you restore (using file access and not sector access),
and then backup again, how will the numbers change?

>>> As you mention, cache can give an apparent rate faster than the media, but
>>> you seem to be willing to assume that caching doesn't affect your rate, and
>>> each chunk will only be returned once.
>>
>> Cache in the filesystem code will be counterproductive. Cache in
>> the drive may be a win for some accesses and a loss for others
>> (e.g., if the drive read ahead thinking the next read was going to
>> be sequential with the last -- and that proves to be wrong -- the
>> drive may have missed an opportunity to respond more quickly to the
>> ACTUAL access that follows).
>>
>> [I'm avoiding talking about reads AND writes just to keep the
>> discussion complexity manageable -- to avoid having to introduce
>> caveats with every statement]
>
> Yes, the drive might try to read ahead and hurt itself, or it might not. That
> is mostly out of your control.

Exactly. So, I can't do anything other than OBSERVE the performance
I am getting.

>>> Non-Pathological access can often result in near 100% of the access rate.
>>>
>>> The best test of if you are I/O bound is if the I/O system is constantly in
>>> use, and every I/O request has another pending when it finishes, then you
>>> are totally I/O bound.
>>
>> But, if you make that assessment when the access pattern is "unfortunate",
>> you erroneously conclude the disk is at its capacity. And, vice versa.
>>
>> Without control over the access patterns, it seems like there is no
>> reliable strategy for determining when another thread can be
>> advantageous (?)
>
> Yes, adding more threads might change the access pattern. it will TEND to make
> the pattern less sequential, and thus more towards that pathological case (and
> thus more threads actually decrease the rate you can do I/O and thus slow down
> your I//O bound rate). It is possible that it just happens to be fortunate to
> make things more sequential, if the system can see that one thread wants sector
> N and another wants sector N+1, something can schedule the reads together and
> drop a seek.

The point of additional threads is that another thread can schedule
the next access while the processor is busy processing the previous
one. So, the I/O is always kept busy instead of letting it idle
between accesses.
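
[The same overlap can be had from a single thread with POSIX AIO; a
sketch, with process() as a placeholder for the per-chunk work: queue
the read for chunk k+1 before processing chunk k, so the drive always
has the next request waiting when it finishes the current one. Older
glibc needs -lrt.]

/* Keep a request always pending with POSIX AIO: while buf[cur] is
   being processed, the read into buf[nxt] is already queued with
   the drive. Sketch; error handling pared to the bone. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BUFSZ (1 << 20)

static char buf[2][BUFSZ];

static void process(const char *p, ssize_t n) { (void)p; (void)n; }

int main(int argc, char **argv)
{
    int fd = open(argc > 1 ? argv[1] : "data.bin", O_RDONLY);
    if (fd < 0) return 1;

    struct aiocb cb[2];
    off_t off = 0;
    int cur = 0;

    /* prime the pipeline with the first read */
    cb[cur] = (struct aiocb){ .aio_fildes = fd, .aio_buf = buf[cur],
                              .aio_nbytes = BUFSZ, .aio_offset = off };
    aio_read(&cb[cur]);

    for (;;) {
        int nxt = 1 - cur;
        const struct aiocb *list[1] = { &cb[cur] };

        aio_suspend(list, 1, NULL);         /* wait for current read */
        ssize_t n = aio_return(&cb[cur]);
        if (n <= 0) break;                  /* EOF or error */

        off += n;                           /* queue the NEXT read... */
        cb[nxt] = (struct aiocb){ .aio_fildes = fd, .aio_buf = buf[nxt],
                                  .aio_nbytes = BUFSZ, .aio_offset = off };
        aio_read(&cb[nxt]);

        process(buf[cur], n);               /* ...then do the work */
        cur = nxt;
    }
    close(fd);
    return 0;
}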

> Predicting that sort of behavior can't be done 'in the abstract'. You need to
> think about the details of the system.
>
> As a general principle, if the I/O system is saturated, the job is I/O bound.

The goal is to *ensure* the I/O system is completely saturated.

> Adding more threads will only help if you have the resources to queue up more
> requests and can optimize the order of servicing them to be more efficient with

Ordering them is an optimization that requires knowledge of how they
will interact *in* the drive. However, simply having ANOTHER request
ready as soon as the previous one is completed (neglecting the
potential for the drive to queue requests internally) is an
enhancement to throughput.

> I/O. Predicting that means you need to know and have some control over the
> access pattern.
>
> Note, part of this is being able to trade memory to improve I/O speed. If you
> know that EVENTUALLY you will want the next sector after the one you are
> reading, reading that now and caching it will be a win, but only if you will be
> able to use that data before you need to claim that memory for other uses. This
> sort of improvement really does require knowing details you want to try to
> assume you don't want to know, so you are limiting your ability to make
> accurate decisions.

Moving the code to another platform (something that the user can do
in a heartbeat) will invalidate any assumptions I have made about the
performance on my original platform. Hence the desire to have the
*code* sort out what it *can* do to increase performance by observing
its actual performance on TODAY'S actual hardware.
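
[A sketch of that "let the code sort it out" idea: a hill-climbing
tuner that adds threads while the OBSERVED aggregate rate keeps
improving and stops when it doesn't. measure_mbps() is a hypothetical
stand-in -- on real hardware, something like the probe shown earlier
in the thread would supply it.]

/* Adaptive thread-count tuner driven only by observed throughput. */
#include <stdio.h>

/* stub: pretend the drive saturates around four streams */
static double measure_mbps(int nthreads)
{
    static const double tbl[] = { 0, 60, 110, 150, 170, 171, 168, 160 };
    return tbl[nthreads < 7 ? nthreads : 7];
}

static int tune_threads(int max_threads)
{
    int n = 1;
    double best = measure_mbps(n);

    while (n < max_threads) {
        double next = measure_mbps(n + 1);
        if (next < best * 1.05)     /* <5% gain: not worth another thread */
            break;
        best = next;
        n++;
    }
    return n;
}

int main(void)
{
    printf("settled on %d thread(s)\n", tune_threads(16));
    return 0;
}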

Re: Multithreaded disk access

From: Brett <ggt...@yahoo.com>
Date: Sun, 17 Oct 2021 20:27:17 -0000 (UTC)
Message-ID: <ski0v5$7iu$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me>

Don Y <blockedofcourse@foo.invalid> wrote:
> As a *rough* figure, what would you expect the bandwidth of
> a disk drive (spinning rust) to do as a function of number of
> discrete files being accessed, concurrently?
>
> E.g., if you can monitor the rough throughput of each
> stream and sum them, will they sum to 100% of the drive's
> bandwidth? 90%? 110? etc.
>
> [Note that drives have read-ahead and write caches so
> the speed of the media might not bleed through to the
> application layer. And, filesystem code also throws
> a wrench in the works. Assume caching in the system
> is disabled/ineffective.]
>
> Said another way, what's a reasonably reliable way of
> determining when you are I/O bound by the hardware
> and when more threads won't result in more performance?

Roughly speaking, a drive spinning at 7500 rpm divided by 60 is 125
revolutions a second; a seek takes half a revolution, and the next file
is another half a revolution away on average, which gets you 125 files
a second, roughly, depending on the performance of the drive, if my
numbers are not too far off.

This is plenty to support a dozen Windows VMs on average, if it were not
for Windows updates that saturate the disks with hundreds of little file
updates at once, causing Microsoft SQL timeouts for the VMs.

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Sun, 17 Oct 2021 14:49:52 -0700
Message-ID: <ski5q2$77e$1@dont-email.me>
In-Reply-To: <ski0v5$7iu$1@dont-email.me>

On 10/17/2021 1:27 PM, Brett wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>> As a *rough* figure, what would you expect the bandwidth of
>> a disk drive (spinning rust) to do as a function of number of
>> discrete files being accessed, concurrently?
>>
>> E.g., if you can monitor the rough throughput of each
>> stream and sum them, will they sum to 100% of the drive's
>> bandwidth? 90%? 110? etc.
>>
>> [Note that drives have read-ahead and write caches so
>> the speed of the media might not bleed through to the
>> application layer. And, filesystem code also throws
>> a wrench in the works. Assume caching in the system
>> is disabled/ineffective.]
>>
>> Said another way, what's a reasonably reliable way of
>> determining when you are I/O bound by the hardware
>> and when more threads won't result in more performance?
>
> Roughly speaking a drive spinning at 7500 rpm divided by 60 Is 125
> revolutions a second and a seek takes half a revolution and the next file
> is another half a revolution away on average, which gets you 125 files a
> second roughly speaking depending on the performance of the drive if my
> numbers are not too far off.

You're assuming files are laid out contiguously -- that no seeks are needed
"between sectors".

You're also assuming moving to another track (seek time) is instantaneous
(or, within the half-cylinder rotational delay).

For a 7200 rpm (some are as slow as 5400, some as fast as 15K) drive,
AVERAGE rotational delay is 8.3+ ms / 2 = ~4 ms.

But, seek time can be 10, 15, or more ms. (On my enterprise drives, it's
4; but, average rotational delay is 2.) And, if the desired sector lies
on a "distant" cylinder, you can scale that almost linearly.

I.e., looking at the disk's specs is largely useless unless you know how
the data on it is laid out. The only way to know that is to *look* at it.
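
[As a sanity check on the arithmetic above, a sketch that runs both
sets of numbers through the same model -- access time = average seek
plus half a revolution. Only figures quoted in this thread are used;
real drives add command overhead, settle time, and cache effects.]

/* Back-of-envelope random-access rates from rpm and seek time. */
#include <stdio.h>

static double accesses_per_sec(double rpm, double avg_seek_ms)
{
    double rev_ms = 60000.0 / rpm;          /* one revolution, in ms */
    return 1000.0 / (avg_seek_ms + rev_ms / 2.0);
}

int main(void)
{
    /* Brett's model: the "seek" is itself only half a revolution */
    printf("7500 rpm, half-rev seek: %5.0f accesses/s\n",
           accesses_per_sec(7500.0, 60000.0 / 7500.0 / 2.0));
    /* with the seek times cited here for a 7200 rpm drive */
    printf("7200 rpm, 10 ms seek:    %5.0f accesses/s\n",
           accesses_per_sec(7200.0, 10.0));
    printf("7200 rpm, 15 ms seek:    %5.0f accesses/s\n",
           accesses_per_sec(7200.0, 15.0));
    return 0;
}

[Brett's 125/s falls out of the degenerate case where the "seek" is
itself only half a revolution; with the 10-15 ms seeks cited above,
the rate drops to roughly 50-70 accesses per second.]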

But, looking at part of the data doesn't mean you can extrapolate to ALL
of the data. So, I'm back to my assumption that you can't really alter
your approach -- with any degree of predictable success -- beforehand.
E.g., I can keep spawning threads until I find them queuing (more than
one deep) on the disk driver. But, even then, a moment from now, the
backlog can clear. Or, it can get worse (which means I've wasted
the resources that the threads consume AND added complexity to the
algorithm with no direct benefit).

Below, you're also assuming Windows.

And, for writes, shingled drives throw all of that down the toilet.

> This is plenty to support a dozen Windows VM’s on average if it were not
> for Windows updates that saturate the disks with hundreds of little file
> updates at once, causing Microsoft SQL timeouts for the VM’s.

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Sun, 17 Oct 2021 15:01:08 -0700
Message-ID: <ski6f9$eg5$1@dont-email.me>
In-Reply-To: <ski5q2$77e$1@dont-email.me>

On 10/17/2021 2:49 PM, Don Y wrote:
> But, seek time can be 10, 15, + ms. (on my enterprise drives, its 4; but,
> average rotational delay is 2.) And, if the desired sector lies on a
> "distant" cylinder, you can scale that almost linearly.

Sorry, that should be .4 (I should discipline myself to add leading zeroes!)

Re: Multithreaded disk access

From: Dimiter_Popoff <dp...@tgi-sci.com>
Date: Mon, 18 Oct 2021 01:09:48 +0300
Message-ID: <ski6vd$hf1$1@dont-email.me>
In-Reply-To: <ski5q2$77e$1@dont-email.me>

On 10/18/2021 0:49, Don Y wrote:
> On 10/17/2021 1:27 PM, Brett wrote:
>> Don Y <blockedofcourse@foo.invalid> wrote:
>>> As a *rough* figure, what would you expect the bandwidth of
>>> a disk drive (spinning rust) to do as a function of number of
>>> discrete files being accessed, concurrently?
>>>
>>> E.g., if you can monitor the rough throughput of each
>>> stream and sum them, will they sum to 100% of the drive's
>>> bandwidth?  90%?  110?  etc.
>>>
>>> [Note that drives have read-ahead and write caches so
>>> the speed of the media might not bleed through to the
>>> application layer.  And, filesystem code also throws
>>> a wrench in the works.  Assume caching in the system
>>> is disabled/ineffective.]
>>>
>>> Said another way, what's a reasonably reliable way of
>>> determining when you are I/O bound by the hardware
>>> and when more threads won't result in more performance?
>>
>> Roughly speaking a drive spinning at 7500 rpm divided by 60 Is 125
>> revolutions a second and a seek takes half a revolution and the next file
>> is another half a revolution away on average, which gets you 125 files a
>> second roughly speaking depending on the performance of the drive if my
>> numbers are not too far off.
>
> You're assuming files are laid out contiguously -- that no seeks are needed
> "between sectors".

This is the typical case anyway; most files are contiguously allocated.
Even on popular filesystems which have long forgotten how to do worst
fit allocation and have to defragment their disks not so infrequently.
But I think they have to access at least 3 locations to get to a file:
the directory entry, some kind of FAT-like thing, then the file.
Unlike dps, where 2 accesses are enough. And of course dps does
worst fit allocation, so defragmenting is just unnecessary.
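
[For readers who haven't met the policy: a worst-fit allocator always
carves new space from the largest free extent, which tends to preserve
big contiguous runs. A minimal sketch over a hypothetical free-extent
list -- not dps code:]

/* Worst-fit pick: the largest free extent that can hold the request. */
#include <stdio.h>

struct extent { long start, len; };

static int worst_fit(const struct extent *fr, int n, long want)
{
    int best = -1;
    for (int i = 0; i < n; i++)             /* largest qualifying extent */
        if (fr[i].len >= want && (best < 0 || fr[i].len > fr[best].len))
            best = i;
    return best;
}

int main(void)
{
    struct extent fr[] = { {1000, 64}, {5000, 512}, {9000, 128} };
    long want = 100;
    int i = worst_fit(fr, 3, want);

    if (i >= 0) {
        printf("allocate %ld blocks at %ld\n", want, fr[i].start);
        fr[i].start += want;                /* shrink the chosen extent */
        fr[i].len -= want;
    }
    return 0;
}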

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

Re: Multithreaded disk access

From: Clifford Heath <no.s...@please.net>
Date: Mon, 18 Oct 2021 11:58:49 +1100
Message-ID: <16aefa205d03a2f0$1$3155485$e8dde262@news.thecubenet.com>
In-Reply-To: <ski5q2$77e$1@dont-email.me>

On 18/10/21 8:49 am, Don Y wrote:
> On 10/17/2021 1:27 PM, Brett wrote:
>> Don Y <blockedofcourse@foo.invalid> wrote:
>>> As a *rough* figure, what would you expect the bandwidth of
>>> a disk drive (spinning rust) to do as a function of number of
>>> discrete files being accessed, concurrently?
>>>
>>> E.g., if you can monitor the rough throughput of each
>>> stream and sum them, will they sum to 100% of the drive's
>>> bandwidth?  90%?  110?  etc.
>>>
>>> [Note that drives have read-ahead and write caches so
>>> the speed of the media might not bleed through to the
>>> application layer.  And, filesystem code also throws
>>> a wrench in the works.  Assume caching in the system
>>> is disabled/ineffective.]
>>>
>>> Said another way, what's a reasonably reliable way of
>>> determining when you are I/O bound by the hardware
>>> and when more threads won't result in more performance?
>>
>> Roughly speaking a drive spinning at 7500 rpm divided by 60 Is 125
>> revolutions a second and a seek takes half a revolution and the next file
>> is another half a revolution away on average

> For a 7200 rpm (some are as slow as 5400, some as fast as 15K) drive,
> AVERAGE rotational delay is 8.3+ ms/2 = ~4ms.
> I.e., looking at the disk's specs is largely useless unless you know how
> the data on it is laid out.

> And, for writes, shingled drives throw all of that down the toilet.

Modern drives do so much caching and bad-sector reassignment that your
physical intuition isn't likely to be of much help in any case.
Typically there will be many full-track caches and a full RTOS running
I/O scheduling using head kinematics. It is incredibly sophisticated and
unlikely to yield to any trivial rotational analysis.

Clifford Heath

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Mon, 18 Oct 2021 08:05:28 -0700
Message-ID: <skk2fr$ooe$1@dont-email.me>
In-Reply-To: <ski6vd$hf1$1@dont-email.me>

On 10/17/2021 3:09 PM, Dimiter_Popoff wrote:
>> You're assuming files are laid out contiguously -- that no seeks are needed
>> "between sectors".
>
> This is the typical case anyway, most files are contiguously allocated.

I'm not sure that is the case for files that have been modified on a medium.
Write once, read multiple MAY support that sort of approach (assuming there
is enough contiguous space for that first write). But, if you are appending
to a file or overwriting it, I don't know what guarantees you can expect;
there are a LOT of different file systems out there!

I would assume CD9660 would be the friendliest, in this regard. And, it
is typically immutable so once the tracks are laid, they can persist.

[IIRC, CD9660 also has a "summary VToC", of sorts, so you don't have to
seek to individual subdirectories to find things from the medium's root.]

> Even on popular filesystems which have long forgotten how to do worst
> fit allocation and have to defragment their disks not so infrequently.
> But I think they have to access at least 3 locations to get to a file;
> the directory entry, some kind of FAT-like thing, then the file.
> Unlike dps, where 2 accesses are enough. And of course dps does
> worst fit allocation so defragmentating is just unnecessary.

I think directories are cached. And, possibly entire drive structures
(depending on how much physical RAM you have available).

I still can't see an easy threading strategy that can be applied
without more details of the target hardware/OS, filesystem layout,
specifics of the drives/volumes, etc.

E.g., my disk sanitizer times each (fixed size) access to profile the
drive's performance as well as looking for trouble spots on the media.
But, things like recal cycles or remapping bad sectors introduce
completely unpredictable blips in the throughput. So much so that
I've had to implement a fair bit of logic to identify whether a
"delay" was part of normal operation *or* a sign of an exceptional
event.
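
[The flavor of that discrimination logic, as a sketch with made-up
timings and an arbitrary threshold -- the real criteria are specific
to the sanitizer: fold normal per-chunk access times into a smoothed
baseline, and flag anything many times slower as an exceptional event
instead of letting it pollute the baseline.]

/* Flag access-time "blips" against a smoothed baseline. Sketch. */
#include <stdio.h>

#define BLIP_FACTOR 8.0         /* "exceptional" = 8x the baseline */

int main(void)
{
    /* per-chunk access times, ms; two blips planted in the stream */
    double t_ms[] = { 4.1, 4.0, 4.2, 4.1, 39.7, 4.0, 4.3, 120.5, 4.1 };
    int n = sizeof t_ms / sizeof t_ms[0];
    double base = t_ms[0];

    for (int i = 0; i < n; i++) {
        if (t_ms[i] > BLIP_FACTOR * base)
            printf("chunk %d: %.1f ms vs baseline %.1f ms -- exceptional\n",
                   i, t_ms[i], base);
        else    /* only normal samples update the baseline */
            base = 0.9 * base + 0.1 * t_ms[i];
    }
    return 0;
}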

[But, the sanitizer has a very predictable access pattern, so
there are no filesystem/content-specific issues involved; just
process sectors as fast as possible. (Also, there is no
need to have multiple threads per spindle; just a thread *per*
spindle -- plus some overhead threads.)

And, the sanitizer isn't as concerned with throughput as the
human operator is the bottleneck (I can crank out a 500+GB drive
every few minutes).]

I'll mock up some synthetic loads and try various thread-spawning
strategies to see the sorts of performance I *might* be able
to get -- with different "preexisting" media (to minimize my
impact on that).

I'm sure I can round up a dozen or more platforms to try -- just
from stuff I have lying around here! :>

Re: Multithreaded disk access

From: Dimiter_Popoff <dp...@tgi-sci.com>
Date: Mon, 18 Oct 2021 19:25:05 +0300
Message-ID: <skk753$tmp$1@dont-email.me>
In-Reply-To: <skk2fr$ooe$1@dont-email.me>

On 10/18/2021 18:05, Don Y wrote:
> On 10/17/2021 3:09 PM, Dimiter_Popoff wrote:
>>> You're assuming files are laid out contiguously -- that no seeks are
>>> needed
>>> "between sectors".
>>
>> This is the typical case anyway, most files are contiguously allocated.
>
> I'm not sure that is the case for files that have been modified on a
> medium.

It is not the case for files which are appended to, say logs etc.
But files like that do not make up such a high percentage. I looked
at a log file which logs some activities several times a day; it has
grown to 52 megabytes (ASCII text) over something like 15 months
(first entry is June 1 2020). It is spread over 32 segments, as one
would expect.

Then I looked at the directory where I archive emails, 311 files
(one of them being Don.txt :-). Just one or two were in two segments;
the rest were all contiguous.

The devil is not that black (literally translating a Bulgarian
saying), as you see. Worst fit allocation is of course crucial to
getting such figures; the mainstream OS-s don't do it, and things
there must be much worse.

> ....
>> Even on popular filesystems which have long forgotten how to do worst
>> fit allocation and have to defragment their disks not so infrequently.
>> But I think they have to access at least 3 locations to get to a file;
>> the directory entry, some kind of FAT-like thing, then the file.
>> Unlike dps, where 2 accesses are enough. And of course dps does
>> worst fit allocation so defragmentating is just unnecessary.
>
> I think directories are cached.  And, possibly entire drive structures
> (depending on how much physical RAM you have available).

Well, of course they must be caching them, especially since there
are gigabytes of RAM available. I know what dps does: it caches
longnamed directories, which coexist with the old 8.4 ones in the
same filesystem and work faster than the 8.4 ones, which typically
don't get cached (these were done to work well even on floppies; a
directory entry update writes back only the sector(s) it occupies,
etc.). Then in dps the CAT (cluster allocation tables) are cached
all the time (try that without the cache on a 500G partition and
enjoy reading all 4 megabytes each time the CAT is needed to
allocate new space... it can be done; in fact the caches are enabled
upon boot explicitly, on a per LUN/partition basis).

> ...
>
> E.g., my disk sanitizer times each (fixed size) access to profile the
> drive's performance as well as looking for trouble spots on the media.
> But, things like recal cycles or remapping bad sectors introduce
> completely unpredictable blips in the throughput.  So much so that
> I've had to implement a fair bit of logic to identify whether a
> "delay" was part of normal operation *or* a sign of an exceptional
> event.
>
> [But, the sanitizer has a very predictable access pattern so
> there's no filesystem/content -specific issues involved; just
> process sectors as fast as possible.  (also, there is no
> need to have multiple threads per spindle; just a thread *per*
> spindle -- plus some overhead threads)
>
> And, the sanitizer isn't as concerned with throughput as the
> human operator is the bottleneck (I can crank out a 500+GB drive
> every few minutes).]

I did something similar many years ago, when the largest drive
a nukeman had was 200 (230 IIRC) megabytes, i.e. before
magnetoresistive heads came to the world. It did develop
bad sectors and did not do much internally about it (1993).
So I wrote the "lockout" command (still available; I see I have
recompiled it for power, last change 2016 - can't remember if it
did anything useful, nor why I did that). It accessed, sector by
sector, the LUN it was told to and built the lockout CAT on
its filesystem (the LCAT being ORed into the CAT prior to the LUN
becoming usable for the OS). Took quite some time on that drive but
did the job back then.

>
> I'll mock up some synthetic loads and try various thread-spawning
> strategies to see the sorts of performance I *might* be able
> to get -- with different "preexisting" media (to minimize my
> impact on that).
>
> I'm sure I can round up a dozen or more platforms to try -- just
> from stuff I have lying around here!  :>

I think this will give you plenty of an idea how to go about it.
Once you know the limit you can run at some reasonable figure
below it and be happy. Getting more precise figures about all
that is neither easy nor will it buy you anything.

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

Re: Multithreaded disk access

From: Don Y <blockedo...@foo.invalid>
Date: Mon, 18 Oct 2021 13:46:59 -0700
Message-ID: <skkmg9$hl3$1@dont-email.me>
In-Reply-To: <skk753$tmp$1@dont-email.me>

On 10/18/2021 9:25 AM, Dimiter_Popoff wrote:
> The devil is not that black (literally translating a Bulgarian saying)
> as you see. Worst fit allocation is of course crucial to get to
> such figures, the mainstream OS-s don't do it and things there
> must be much worse.

I think a lot depends on the amount of "churn" the filesystem
experiences, in normal operation. E.g., the "system" disk on
the workstation I'm using today has about 800G in use of 1T total.
But, the vast majority of it is immutable -- binaries, libraries,
etc. So, there's very low fragmentation (because I "build" the
disk in one shot, instead of incrementally revising and
"updating" its contents).

By contrast, the other disks in the machine all see a fair bit of
turnover as things get created, revised and deleted.

>>> Even on popular filesystems which have long forgotten how to do worst
>>> fit allocation and have to defragment their disks not so infrequently.
>>> But I think they have to access at least 3 locations to get to a file;
>>> the directory entry, some kind of FAT-like thing, then the file.
>>> Unlike dps, where 2 accesses are enough. And of course dps does
>>> worst fit allocation so defragmentating is just unnecessary.
>>
>> I think directories are cached. And, possibly entire drive structures
>> (depending on how much physical RAM you have available).
>
> Well of course they must be caching them, especially since there are
> gigabytes of RAM available.

Yes, but *which*? And how many (how "much")? I would assume memory
set aside for caching files and file system structures is dynamically
managed -- if a process looks at lots of directories but few files
vs. a process that looks at few directories but many files...

> I know what dps does: it caches longnamed
> directories, which coexist with the old 8.4 ones in the same filesystem
> and work faster than the 8.4 ones, which typically don't get cached
> (these were done to work well even on floppies; a directory entry
> update writes back only the sector(s) it occupies -- more than one
> only when the entry crosses a sector boundary -- etc.). Then in dps
> the CAT (cluster allocation table) is cached all the time (do that
> for a 500G partition and enjoy reading all 4 megabytes each time the
> CAT is needed to allocate new space... it can be done; in fact the
> caches are enabled upon boot explicitly on a per LUN/partition basis).
>
>> E.g., my disk sanitizer times each (fixed size) access to profile the
>> drive's performance as well as looking for trouble spots on the media.
>> But, things like recal cycles or remapping bad sectors introduce
>> completely unpredictable blips in the throughput. So much so that
>> I've had to implement a fair bit of logic to identify whether a
>> "delay" was part of normal operation *or* a sign of an exceptional
>> event.
>>
>> [But, the sanitizer has a very predictable access pattern, so
>> there are no filesystem/content-specific issues involved; just
>> process sectors as fast as possible. (Also, there is no
>> need to have multiple threads per spindle; just a thread *per*
>> spindle -- plus some overhead threads.)
>>
>> And, the sanitizer isn't as concerned with throughput as the
>> human operator is the bottleneck (I can crank out a 500+GB drive
>> every few minutes).]
>
> I did something similar many years ago, when the largest drive
> a nukeman had was 200 (230 IIRC) megabytes, i.e. before
> magnetoresistive heads came to the world. It did develop
> bad sectors and did not do much internally about it (1993).
> So I wrote the "lockout" command (still available, I see I have
> recompiled it for power, last change 2016 - can't remember if it
> did anything useful, nor why I did that). It accessed, sector by
> sector, the LUN it was told to and built the lockout CAT on
> its filesystem (the LCAT being ORed into the CAT before the LUN
> becomes usable to the OS). It took quite some time on that drive
> but did the job back then.

Yes, support for "bad block management" can be done outside the
drive. In the sanitizer's case, it has to report on whether or not
it was able to successfully "scrub" every PHYSICAL sector that
might contain user data (for some "fussy" users).

So, if it appears that a sector may have been remapped (visible
as a drop in the instantaneous access rate), I query the drive's
bad sector statistics to see if I should just abort the process
now and mark the drive to be (physically) shredded -- *if*
it belongs to a "fussy" user. Regardless (for fussy users),
I will query those stats at the end of the operation to see
if they have changed during the process.

But, again, that's a very specific application with different
prospects for optimization. E.g., there's no file system
support required as the disk is just a bunch of data blocks
(sectors) having no particular structure or meaning. (So,
no need for filesystem code at all! One can scrub a SPARC
disk just as easily as a Mac!)
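
To give a flavor of it, the core of that timing logic can be sketched
in a few lines of C -- illustrative only, not the sanitizer itself;
the device path, chunk size and outlier thresholds are all placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (64 * 1024)        /* fixed-size access, placeholder value */

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(int argc, char **argv)
{
    /* raw device node, e.g. /dev/sdb -- placeholder */
    int fd = open(argc > 1 ? argv[1] : "/dev/sdX", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    static char buf[CHUNK];
    double ewma = 0.0;           /* running "normal" latency estimate */

    for (;;) {
        double t0 = now_ms();
        ssize_t n = read(fd, buf, sizeof buf);
        double dt = now_ms() - t0;
        if (n <= 0)
            break;               /* end of media (or error) */
        if (ewma == 0.0)
            ewma = dt;           /* seed the estimate */
        if (dt > 10.0 * ewma && dt > 50.0)
            /* way off "normal": recal? remap? -- worth a stats query */
            fprintf(stderr, "blip: %.1f ms (normal ~%.2f ms)\n", dt, ewma);
        else
            ewma = 0.9 * ewma + 0.1 * dt;   /* track only unexceptional reads */
    }
    close(fd);
    return 0;
}

The follow-up query of the drive's bad-sector stats is transport
specific (an ATA/SCSI passthrough, or a tool like smartctl), so it
isn't shown.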

>> I'll mock up some synthetic loads and try various thread-spawning
>> strategies to see the sorts of performance I *might* be able
>> to get -- with different "preexisting" media (to minimize my
>> impact on that).
>>
>> I'm sure I can round up a dozen or more platforms to try -- just
>> from stuff I have lying around here! :>
>
> I think this will give you a good enough idea of how to go about it.
> Once you know the limit you can run at some reasonable figure
> below it and be happy. Getting more precise figures on all
> that is not easy, nor will it buy you anything.

I suspect "1" is going to end up as the "best compromise". So,
I'm treating this as an exercise in *validating* that assumption.
I'll see if I can salvage some of the performance monitoring code
from the sanitizer to give me details from which I might be able
to ferret out "opportunities". If I start by restricting my
observations to non-destructive synthetic loads, then I can
pull a drive and see how it fares in a different host while
running the same code, etc.
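
The load generator needn't be fancy. A minimal sketch (POSIX threads;
the file names are placeholders, and a real run would also have to
defeat the OS cache, e.g. with O_DIRECT or by dropping caches between
runs):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 2
#define CHUNK (64 * 1024)

static long long bytes[NTHREADS];       /* per-thread byte counts */

static void *reader(void *arg)
{
    int id = (int)(long)arg;
    char path[64];
    snprintf(path, sizeof path, "load%d.dat", id);  /* pre-made test file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return NULL; }
    char buf[CHUNK];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)     /* stream the file */
        bytes[id] += n;
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, reader, (void *)i);
    long long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += bytes[i];
    }
    clock_gettime(CLOCK_MONOTONIC, &b);
    double s = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    printf("%d threads: %.1f MB/s aggregate\n", NTHREADS, total / s / 1e6);
    return 0;
}

Run it with NTHREADS = 1, 2, 3, ... against the same set of files and
compare the aggregate figures.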

Re: Multithreaded disk access

<skkpv8$8cc$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=640&group=comp.arch.embedded#640

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: dp...@tgi-sci.com (Dimiter_Popoff)
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Tue, 19 Oct 2021 00:46:15 +0300
Organization: TGI
Lines: 37
Message-ID: <skkpv8$8cc$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me>
Reply-To: dp@tgi-sci.com
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 18 Oct 2021 21:46:16 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="618f32fbfcd8f98ab5166a58644956bd";
logging-data="8588"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/YtgwdtyUwYoYcV/XUr7O/ATQ8PIPmhTU="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
Thunderbird/78.14.0
Cancel-Lock: sha1:7UGBckcyEqmdEtYb/T+Hwpmhpys=
In-Reply-To: <skkmg9$hl3$1@dont-email.me>
Content-Language: en-US
 by: Dimiter_Popoff - Mon, 18 Oct 2021 21:46 UTC

On 10/18/2021 23:46, Don Y wrote:
> On 10/18/2021 9:25 AM, Dimiter_Popoff wrote:
>> The devil is not that black (literally translating a Bulgarian saying)
>> as you see. Worst fit allocation is of course crucial to get to
>> such figures; the mainstream OS-s don't do it and things there
>> must be much worse.
>
> I think a lot depends on the amount of "churn" the filesystem
> experiences, in normal operation.  E.g., the "system" disk on
> the workstation I'm using today has about 800G in use of 1T total.
> But, the vast majority of it is immutable -- binaries, libraries,
> etc.  So, there's very low fragmentation (because I "build" the
> disk in one shot, instead of incrementally revising and
> "updating" its contents).

Typically so on most systems, I expect.

>
> By contrast, the other disks in the machine all see a fair bit of
> turnover as things get created, revised and deleted.

Now this is where the worst fit allocation strategy becomes the game
changer. A newly created file is almost certainly contiguously
allocated; fragmentation occurs when it is appended to (and the
space past its last block has already been allocated in the meantime).
I think I saw somewhere (never really got interested) that mainstream
operating systems of today do just first fit - which means once you
delete a file, no matter how small, its space will be allocated as
part of the next request, etc. No idea why they do it (if they do so;
my memory on that is not very certain) in such a primitive manner,
but here they are.

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

Re: Multithreaded disk access

<skkva3$u6f$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=641&group=comp.arch.embedded#641

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Mon, 18 Oct 2021 16:17:13 -0700
Organization: A noiseless patient Spider
Lines: 37
Message-ID: <skkva3$u6f$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me> <skkpv8$8cc$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 18 Oct 2021 23:17:24 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="a5124b5e01176a8b5504146fc032eed7";
logging-data="30927"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/rL4cJhavleJHbuO8Hj+Ez"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:JunGD0q0/4vBj2IvtwmtP2WaMmw=
In-Reply-To: <skkpv8$8cc$1@dont-email.me>
Content-Language: en-US
 by: Don Y - Mon, 18 Oct 2021 23:17 UTC

On 10/18/2021 2:46 PM, Dimiter_Popoff wrote:
>> By contrast, the other disks in the machine all see a fair bit of
>> turnover as things get created, revised and deleted.
>
> Now this is where the worst fit allocation strategy becomes the game
> changer. A newly created file is almost certainly contiguously
> allocated; fragmentation occurs when it is appended to (and the
> space past its last block has already been allocated in the meantime).
> I think I saw somewhere (never really got interested) that mainstream
> operating systems of today do just first fit

Dunno. There are *lots* of different "file systems". And, as most
systems use the filesystem for its global namespace, the term is
sadly overloaded (in ways that have nothing to do with storage media).

I don't use large secondary stores in my designs. Where "big storage"
is required, it is accessed remotely (network) so the problem of
dealing with it is not mine. E.g., in my current project, there is
no file system exposed; any persistent storage is done via "tables"
that are created, as needed (by the system and its apps) in a remote
database server.

[This lets me delegate the issue of data retention to a specific box.
And, also lets me impart structure to the data that is stored. So,
for example, I can find the most recent firmware image for a particular
hardware module by just issuing a query and waiting on the result
instead of having to parse a bunch of names, somewhere in a
filesystem/namespace, looking for the one that has the highest
rev level IN its name: firmware_1.0.1, firmware_1.0.2, firmware_5.3,
firmware 6.1 (typo! the '_' was omitted -- will the parse algorithm
choke on that omission??)]
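
[For illustration, the query ends up shaped something like this --
SQLite standing in here for the remote database server, with the table
and column names invented for the example:

#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("persist.db", &db) != SQLITE_OK) return 1;

    const char *sql =
        "SELECT image FROM firmware "
        "WHERE module = 'frobnicator' "           /* hypothetical module */
        "ORDER BY major DESC, minor DESC, patch DESC "
        "LIMIT 1;";                               /* version = 3 integers */

    sqlite3_stmt *st;
    if (sqlite3_prepare_v2(db, sql, -1, &st, NULL) == SQLITE_OK) {
        if (sqlite3_step(st) == SQLITE_ROW)
            printf("image blob is %d bytes\n", sqlite3_column_bytes(st, 0));
        sqlite3_finalize(st);
    }
    sqlite3_close(db);
    return 0;
}

Storing the version as three integers means a malformed name like
"firmware 6.1" can't even exist; the typo problem disappears at the
schema level.]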

> - which means once you
> delete a file, no matter how small, its space will be allocated as
> part of the next request, etc. No idea why they do it (if they do so;
> my memory on that is not very certain) in such a primitive manner,
> but here they are.

Re: Multithreaded disk access

<sl63sc$olb$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=672&group=comp.arch.embedded#672

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Mon, 25 Oct 2021 04:19:34 -0700
Organization: A noiseless patient Spider
Lines: 27
Message-ID: <sl63sc$olb$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 25 Oct 2021 11:19:41 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="0a41b6a329df076c2a27c48b9f62cde0";
logging-data="25259"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+4wN7USVUU9SqkdhHTcVFU"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:hBxvuthzHR6rzl3XC2A6Cm1r3v8=
In-Reply-To: <skkmg9$hl3$1@dont-email.me>
Content-Language: en-US
 by: Don Y - Mon, 25 Oct 2021 11:19 UTC

On 10/18/2021 1:46 PM, Don Y wrote:
>> I think this will give you a good enough idea of how to go about it.
>> Once you know the limit you can run at some reasonable figure
>> below it and be happy. Getting more precise figures on all
>> that is not easy, nor will it buy you anything.
>
> I suspect "1" is going to end up as the "best compromise". So,
> I'm treating this as an exercise in *validating* that assumption.
> I'll see if I can salvage some of the performance monitoring code
> from the sanitizer to give me details from which I might be able
> to ferret out "opportunities". If I start by restricting my
> observations to non-destructive synthetic loads, then I can
> pull a drive and see how it fares in a different host while
> running the same code, etc.

Actually, '2' turns out to be marginally better than '1'.
Beyond that, it's hard to generalize without controlling some
of the other variables.

'2' wins because there is always the potential to make the
disk busy, again, just after it satisfies the access for
the 1st thread (which is now busy using the data, etc.)

But, if the first thread finishes up before the second thread's
request has been satisfied, then the presence of a THIRD thread
would just be clutter. (i.e., the work performed correlates
with the number of threads that can have value)

Re: Multithreaded disk access

<2UHdJ.2$452.1@fx22.iad>


https://www.novabbs.com/devel/article-flat.php?id=694&group=comp.arch.embedded#694

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!aioe.org!feeder1.feed.usenet.farm!feed.usenet.farm!peer02.ams4!peer.am4.highwinds-media.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx22.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
Gecko/20100101 Thunderbird/91.2.1
Subject: Re: Multithreaded disk access
Content-Language: en-US
Newsgroups: comp.arch.embedded
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me> <sl63sc$olb$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <sl63sc$olb$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 47
Message-ID: <2UHdJ.2$452.1@fx22.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Mon, 25 Oct 2021 20:23:25 -0400
X-Received-Bytes: 3399
 by: Richard Damon - Tue, 26 Oct 2021 00:23 UTC

On 10/25/21 7:19 AM, Don Y wrote:
> On 10/18/2021 1:46 PM, Don Y wrote:
>>> I think this will give you a good enough idea of how to go about it.
>>> Once you know the limit you can run at some reasonable figure
>>> below it and be happy. Getting more precise figures on all
>>> that is not easy, nor will it buy you anything.
>>
>> I suspect "1" is going to end up as the "best compromise".  So,
>> I'm treating this as an exercise in *validating* that assumption.
>> I'll see if I can salvage some of the performance monitoring code
>> from the sanitizer to give me details from which I might be able
>> to ferret out "opportunities".  If I start by restricting my
>> observations to non-destructive synthetic loads, then I can
>> pull a drive and see how it fares in a different host while
>> running the same code, etc.
>
> Actually, '2' turns out to be marginally better than '1'.
> Beyond that, it's hard to generalize without controlling some
> of the other variables.
>
> '2' wins because there is always the potential to make the
> disk busy, again, just after it satisfies the access for
> the 1st thread (which is now busy using the data, etc.)
>
> But, if the first thread finishes up before the second thread's
> request has been satisfied, then the presence of a THIRD thread
> would just be clutter.  (i.e., the work performed correlates
> with the number of threads that can have value)

Actually, 2 might be slower than 1, because the new request from the
second thread is apt to need a seek, while a single thread making all
the calls is more apt to sequentially read much more of the disk.

The controller, if not given a new chained command, might choose to
automatically start reading the next sector of the cylinder, which
could well be the next one asked for.

The real optimum is likely a single process doing asynchronous I/O,
queuing up a series of requests and then distributing the data as it
comes in to processing threads to do whatever crunching needs to be done.

These threads then send the data to a single thread that does
asynchronous writes of the results.

You can easily tell if the input or output processes are I/O bound or
not and use that to adjust the number of crunching threads in the middle.
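
A minimal sketch of that front end (POSIX AIO with a queue depth of
four; the input name is a placeholder, error checks are elided, and
the hand-off to the crunching threads is elided too -- as written,
each buffer is reused on reissue, so a real program would hand off a
copy or a buffer from a pool):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QD    4                    /* requests kept in flight */
#define CHUNK (64 * 1024)

int main(void)
{
    int fd = open("input.dat", O_RDONLY);     /* placeholder input */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb[QD];
    char (*buf)[CHUNK] = malloc(sizeof(char[QD][CHUNK]));
    off_t off = 0;

    memset(cb, 0, sizeof cb);
    for (int i = 0; i < QD; i++) {            /* prime the queue */
        cb[i] = (struct aiocb){ .aio_fildes = fd, .aio_buf = buf[i],
                                .aio_nbytes = CHUNK, .aio_offset = off };
        off += CHUNK;
        aio_read(&cb[i]);
    }

    for (;;) {
        for (int i = 0; i < QD; i++) {
            if (aio_error(&cb[i]) == EINPROGRESS)
                continue;                     /* still in flight */
            ssize_t n = aio_return(&cb[i]);
            if (n <= 0) goto done;            /* EOF or error */
            /* ...hand buf[i] (n bytes) to a cruncher here... */
            cb[i].aio_offset = off;           /* reissue: next chunk */
            off += CHUNK;
            aio_read(&cb[i]);
        }
        usleep(1000);   /* crude; aio_suspend() would block properly */
    }
done:
    close(fd);
    free(buf);
    return 0;
}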

Re: Multithreaded disk access

<sl83op$per$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=696&group=comp.arch.embedded#696

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: blockedo...@foo.invalid (Don Y)
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Mon, 25 Oct 2021 22:29:58 -0700
Organization: A noiseless patient Spider
Lines: 66
Message-ID: <sl83op$per$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me> <sl63sc$olb$1@dont-email.me>
<2UHdJ.2$452.1@fx22.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 26 Oct 2021 05:30:04 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="aad27a47c67af399cd185ebd20909dc7";
logging-data="26075"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+NVBlPjGftmWTgt19s6Qpv"
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
Thunderbird/52.1.1
Cancel-Lock: sha1:KrUi694I6+Oe5LidNx8kjKqaYA8=
In-Reply-To: <2UHdJ.2$452.1@fx22.iad>
Content-Language: en-US
 by: Don Y - Tue, 26 Oct 2021 05:29 UTC

On 10/25/2021 5:23 PM, Richard Damon wrote:
> On 10/25/21 7:19 AM, Don Y wrote:
>> On 10/18/2021 1:46 PM, Don Y wrote:
>>>> I think this will give you a good enough idea of how to go about it.
>>>> Once you know the limit you can run at some reasonable figure
>>>> below it and be happy. Getting more precise figures on all
>>>> that is not easy, nor will it buy you anything.
>>>
>>> I suspect "1" is going to end up as the "best compromise". So,
>>> I'm treating this as an exercise in *validating* that assumption.
>>> I'll see if I can salvage some of the performance monitoring code
>>> from the sanitizer to give me details from which I might be able
>>> to ferret out "opportunities". If I start by restricting my
>>> observations to non-destructive synthetic loads, then I can
>>> pull a drive and see how it fares in a different host while
>>> running the same code, etc.
>>
>> Actually, '2' turns out to be marginally better than '1'.
>> Beyond that, it's hard to generalize without controlling some
>> of the other variables.
>>
>> '2' wins because there is always the potential to make the
>> disk busy, again, just after it satisfies the access for
>> the 1st thread (which is now busy using the data, etc.)
>>
>> But, if the first thread finishes up before the second thread's
>> request has been satisfied, then the presence of a THIRD thread
>> would just be clutter. (i.e., the work performed correlates
>> with the number of threads that can have value)
>
> Actually, 2 might be slower than 1, because the new request from the second
> thread is apt to need a seek, while a single thread making all the calls is
> more apt to sequentially read much more of the disk.

Second thread is another instance of first thread; same code, same data.
So, it is just "looking ahead" -- i.e., it WILL do what the first thread
WOULD do if the second thread hadn't gotten to it first! The strategy
is predicated on tightly coupled actors so they each can see what
the other has done/is doing.

If the layout of the object that thread 1 is processing calls for
a seek, then thread 2 will perform that seek as if thread 1 were to
do it "when done chewing on the past data".

If the disk caches data, then thread 2 will reap the benefits of
that cache AS IF it was thread 1 acting later.

Etc.

A second thread just hides the cost of the data processing.
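
In sketch form, the whole trick is a shared cursor -- both threads run
the identical loop (the read/crunch routines below are stand-in stubs,
not real code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define UNITS 1024               /* size of the whole job, placeholder */

static atomic_long cursor;       /* next unit of work to claim */

/* Stand-ins for "fetch one unit from disk" and "chew on it". */
static int  read_unit(long i, char *buf) { (void)i; (void)buf; return 0; }
static void crunch(char *buf)            { (void)buf; }

static void *worker(void *arg)
{
    (void)arg;
    char buf[16 * 1024];
    for (;;) {
        long i = atomic_fetch_add(&cursor, 1);  /* claim the next unit */
        if (i >= UNITS)
            break;
        if (read_unit(i, buf) < 0)     /* I/O: the disk serializes these */
            break;
        crunch(buf);                   /* overlaps the peer thread's I/O */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);   /* "thread 1" */
    pthread_create(&b, NULL, worker, NULL);   /* same code: "thread 2" */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("done");
    return 0;
}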

> The controller, if not given a new chained command, might choose to
> automatically start reading the next sector of the cylinder, which
> could well be the next one asked for.
>
> The real optimum is likely a single process doing asynchronous I/O,
> queuing up a series of requests and then distributing the data as it
> comes in to processing threads to do whatever crunching needs to be done.
>
> These threads then send the data to a single thread that does asynchronous
> writes of the results.
>
> You can easily tell if the input or output processes are I/O bound or not and
> use that to adjust the number of crunching threads in the middle.
>

Re: Multithreaded disk access

<slmqba$b12$2@z-news.wcss.wroc.pl>


https://www.novabbs.com/devel/article-flat.php?id=708&group=comp.arch.embedded#708

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!news.swapon.de!fu-berlin.de!newsfeed.pionier.net.pl!pwr.wroc.pl!news.wcss.wroc.pl!not-for-mail
From: antis...@math.uni.wroc.pl
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Sun, 31 Oct 2021 19:21:14 +0000 (UTC)
Organization: Politechnika Wroclawska
Lines: 24
Message-ID: <slmqba$b12$2@z-news.wcss.wroc.pl>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me> <ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me> <skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me> <skkmg9$hl3$1@dont-email.me> <skkpv8$8cc$1@dont-email.me>
NNTP-Posting-Host: hera.math.uni.wroc.pl
X-Trace: z-news.wcss.wroc.pl 1635708074 11298 156.17.86.1 (31 Oct 2021 19:21:14 GMT)
X-Complaints-To: abuse@news.pwr.wroc.pl
NNTP-Posting-Date: Sun, 31 Oct 2021 19:21:14 +0000 (UTC)
Cancel-Lock: sha1:zU/BoEak6grYMSBn92W2+e6GFms=
User-Agent: tin/2.4.3-20181224 ("Glen Mhor") (UNIX) (Linux/4.19.0-10-amd64 (x86_64))
 by: antis...@math.uni.wroc.pl - Sun, 31 Oct 2021 19:21 UTC

Dimiter_Popoff <dp@tgi-sci.com> wrote:
>
> Now this is where the worst fit allocation strategy becomes the game
> changer. A newly created file is almost certainly contiguously
> allocated; fragmentation occurs when it is appended to (and the
> space past its last block has already been allocated in the meantime).
> I think I saw somewhere (never really got interested) that mainstream
> operating systems of today do just first fit - which means once you
> delete a file, no matter how small, its space will be allocated as
> part of the next request, etc. No idea why they do it (if they do so;
> my memory on that is not very certain) in such a primitive manner,
> but here they are.

There was some research, and first fit turned out to be pretty good.
But it is implemented slightly differently: there is a moving pointer,
and the search advances it. So, after a deallocation you wait until
the moving pointer arrives at the hole. That is way better than
immediately refilling the hole: there is a reasonable chance that
holes will coalesce into a bigger free area. Another point is that
you refuse allocation when the disc is too full (say at 95%
utilization). The two things together mean that normally
fragmentation is not a problem.
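
In code, the scheme is roughly this (an illustrative sketch only; the
bitmap layout and the 95% cutoff are arbitrary here):

#include <stdbool.h>

#define NCLUSTERS  (1L << 20)
#define MAX_USED   (NCLUSTERS * 95 / 100)     /* refuse above ~95% full */

static unsigned char map[NCLUSTERS / 8];      /* 1 = allocated */
static long rover;                            /* resumes where we left off */
static long used;

static bool busy(long c) { return map[c >> 3] &  (1 << (c & 7)); }
static void take(long c) {        map[c >> 3] |= (1 << (c & 7)); used++; }

/* Returns the first free cluster at/after the rover, or -1. */
long alloc_cluster(void)
{
    if (used >= MAX_USED)
        return -1;                            /* "disc too full" */
    for (long i = 0; i < NCLUSTERS; i++) {
        long c = (rover + i) % NCLUSTERS;     /* wrap past the end */
        if (!busy(c)) {
            take(c);
            rover = (c + 1) % NCLUSTERS;      /* holes behind us get time
                                                 to coalesce before reuse */
            return c;
        }
    }
    return -1;
}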

--
Waldek Hebisch

Re: Multithreaded disk access

<slmsbf$v3v$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=709&group=comp.arch.embedded#709

Newsgroups: comp.arch.embedded
Path: i2pn2.org!i2pn.org!eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: dp...@tgi-sci.com (Dimiter_Popoff)
Newsgroups: comp.arch.embedded
Subject: Re: Multithreaded disk access
Date: Sun, 31 Oct 2021 21:55:25 +0200
Organization: TGI
Lines: 38
Message-ID: <slmsbf$v3v$1@dont-email.me>
References: <skc214$tm7$1@dont-email.me> <ski0v5$7iu$1@dont-email.me>
<ski5q2$77e$1@dont-email.me> <ski6vd$hf1$1@dont-email.me>
<skk2fr$ooe$1@dont-email.me> <skk753$tmp$1@dont-email.me>
<skkmg9$hl3$1@dont-email.me> <skkpv8$8cc$1@dont-email.me>
<slmqba$b12$2@z-news.wcss.wroc.pl>
Reply-To: dp@tgi-sci.com
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 31 Oct 2021 19:55:27 -0000 (UTC)
Injection-Info: reader02.eternal-september.org; posting-host="2a78a08469ec6c22fb100a6862b867ce";
logging-data="31871"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19o+3R3d6zTAW8kSyrhCp1uPEKZc/ZFC4M="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.2.1
Cancel-Lock: sha1:staK+ZmZH5G0MrLdf1NGKqj3StI=
In-Reply-To: <slmqba$b12$2@z-news.wcss.wroc.pl>
Content-Language: en-US
 by: Dimiter_Popoff - Sun, 31 Oct 2021 19:55 UTC

On 10/31/2021 21:21, antispam@math.uni.wroc.pl wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>>
>> Now this is where the worst fit allocation strategy becomes the game
>> changer. A newly created file is almost certainly contiguously
>> allocated; fragmentation occurs when it is appended to (and the
>> space past its last block has already been allocated in the meantime).
>> I think I saw somewhere (never really got interested) that mainstream
>> operating systems of today do just first fit - which means once you
>> delete a file, no matter how small, its space will be allocated as
>> part of the next request, etc. No idea why they do it (if they do so;
>> my memory on that is not very certain) in such a primitive manner,
>> but here they are.
>
> There was some research, and first fit turned out to be pretty good.
> But it is implemented slightly differently: there is a moving pointer,
> and the search advances it. So, after a deallocation you wait until
> the moving pointer arrives at the hole. That is way better than
> immediately refilling the hole: there is a reasonable chance that
> holes will coalesce into a bigger free area. Another point is that
> you refuse allocation when the disc is too full (say at 95%
> utilization). The two things together mean that normally
> fragmentation is not a problem.
>

This might be somewhat better than plain first fit, but it is no
match for worst fit. They probably don't do worst fit because they
would have huge amounts of memory to dig through due to poor
filesystem design (they have to go through 32 bits or more for
each cluster).
In DPS the CAT is one bit per cluster, which makes worst fit doable
without any problems like that. I have enhanced it a little
(many years ago), noticing that the partition could come to a
point where it has two almost equally sized empty parts; now you append
to a file and it gets some clusters from one of these; next time you
append to it, the *other* one is the largest and it takes from it... :).
So now dps first checks whether there are empty clusters to allocate
immediately following those already allocated to the file; only if
there are none does it do worst fit.
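
Roughly, in code (a sketch of the idea only -- the real dps structures
are different, and the names here are invented):

#include <stdbool.h>

#define NCLUSTERS (1L << 20)
static unsigned char cat[NCLUSTERS / 8];      /* 1 = allocated */

static bool busy(long c) { return cat[c >> 3] &  (1 << (c & 7)); }
static void take(long c) {        cat[c >> 3] |= (1 << (c & 7)); }

/* tail = the file's current last cluster, or -1 for a new file. */
long alloc_cluster(long tail)
{
    long c = tail + 1;                        /* try to extend in place */
    if (tail >= 0 && c < NCLUSTERS && !busy(c)) {
        take(c);
        return c;
    }
    /* otherwise: worst fit -- carve from the largest free run */
    long best = -1, best_len = 0;
    for (long i = 0; i < NCLUSTERS; ) {
        if (busy(i)) { i++; continue; }
        long start = i;
        while (i < NCLUSTERS && !busy(i))
            i++;                              /* measure this free run */
        if (i - start > best_len) {
            best_len = i - start;
            best = start;
        }
    }
    if (best >= 0)
        take(best);
    return best;                              /* -1 if the disc is full */
}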
