devel / comp.arch / How convergent was the general use of binary floating point?

Subject - Author
* How convergent was the general use of binary floating point? - Russell Wallace
+- Re: How convergent was the general use of binary floating point? - MitchAlsup
+* Re: How convergent was the general use of binary floating point? - John Levine
|+* Re: How convergent was the general use of binary floating point? - robf...@gmail.com
||`* Re: How convergent was the general use of binary floating point? - BGB
|| `* Re: How convergent was the general use of binary floating point? - John Levine
||  +- Re: How convergent was the general use of binary floating point? - MitchAlsup
||  `- Re: How convergent was the general use of binary floating point? - BGB
|+* Re: How convergent was the general use of binary floating point? - Stephen Fuld
||`* Re: How convergent was the general use of binary floating point? - John Levine
|| `- Re: How convergent was the general use of binary floating point? - Scott Lurndal
|+- Re: How convergent was the general use of binary floating point? - Thomas Koenig
|`- Re: How convergent was the general use of binary floating point? - Scott Lurndal
+* Re: How convergent was the general use of binary floating point? - Anton Ertl
|+* Re: How convergent was the general use of binary floating point? - Russell Wallace
||`* Re: How convergent was the general use of binary floating point? - Anton Ertl
|| +* Re: How convergent was the general use of binary floating point? - Russell Wallace
|| |`* Re: How convergent was the general use of binary floating point? - Anton Ertl
|| | +* Re: How convergent was the general use of binary floating point? - BGB
|| | |`* Re: How convergent was the general use of binary floating point? - MitchAlsup
|| | | +* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|| | | |`* Re: How convergent was the general use of binary floating point? - MitchAlsup
|| | | | +* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|| | | | |`- Re: How convergent was the general use of binary floating point? - Stephen Fuld
|| | | | `- Re: string me along, How convergent was the general use of binary floating point - John Levine
|| | | +* Re: How convergent was the general use of binary floating point? - Benny Lyne Amorsen
|| | | |`- Re: How convergent was the general use of binary floating point? - MitchAlsup
|| | | `- Re: How convergent was the general use of binary floating point? - Bernd Linsel
|| | `* Re: How convergent was the general use of binary floating point? - Terje Mathisen
|| |  `* Re: How convergent was the general use of binary floating point? - Tim Rentsch
|| |   +* Re: How convergent was the general use of binary floating point? - Terje Mathisen
|| |   |`* Re: How convergent was the general use of binary floating point? - Tim Rentsch
|| |   | +* Re: How convergent was the general use of binary floating point? - MitchAlsup
|| |   | |`- Re: How convergent was the general use of binary floating point? - Tim Rentsch
|| |   | `* Re: How convergent was the general use of binary floating point? - Terje Mathisen
|| |   |  `- Re: How convergent was the general use of binary floating point? - Tim Rentsch
|| |   `* Re: How convergent was the general use of binary floating point? - BGB
|| |    +* Re: How convergent was the general use of binary floating point? - MitchAlsup
|| |    |`- Re: How convergent was the general use of binary floating point? - BGB
|| |    `- Re: How convergent was the general use of binary floating point? - Tim Rentsch
|| `* Re: How convergent was the general use of binary floating point? - Russell Wallace
||  `- Re: How convergent was the general use of binary floating point? - Anton Ertl
|+- Re: How convergent was the general use of binary floating point? - Peter Lund
|`* Re: How convergent was the general use of binary floating point? - MitchAlsup
| `* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  +* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  |`* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  | +* Re: How convergent was the general use of binary floating point? - Bill Findlay
|  | |`* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  | | `* Re: How convergent was the general use of binary floating point? - Bill Findlay
|  | |  `* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  | |   `- Re: How convergent was the general use of binary floating point? - Bill Findlay
|  | +* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  | |`* Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  | | `- Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  | `* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  |  `* Re: How convergent was the general use of binary floating point? - Thomas Koenig
|  |   +- Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  |   `* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  |    `* Re: How convergent was the general use of binary floating point? - Thomas Koenig
|  |     +* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  |     |+* Re: How convergent was the general use of binary floating point? - Michael S
|  |     ||+* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  |     |||`* Re: terminals and servers, was How convergent was the general use of binary floa - John Levine
|  |     ||| `* Re: terminals and servers, was How convergent was the general use of - MitchAlsup
|  |     |||  `* Re: terminals and servers, was How convergent was the general use of binary floa - Anton Ertl
|  |     |||   `- Re: terminals and servers, was How convergent was the general use of binary floa - Anne & Lynn Wheeler
|  |     ||`* Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  |     || +* Re: How convergent was the general use of binary floating point? - BGB
|  |     || |`- Re: How convergent was the general use of binary floating point? - Terje Mathisen
|  |     || `* Re: How convergent was the general use of binary floating point? - Thomas Koenig
|  |     ||  `* Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  |     ||   `- Re: How convergent was the general use of binary floating point? - Anton Ertl
|  |     |`- Re: How convergent was the general use of binary floating point? - Stephen Fuld
|  |     `* Re: How convergent was the general use of binary floating point? - Quadibloc
|  |      `* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  |       +* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  |       |+* Re: How convergent was the general use of binary floating point? - Thomas Koenig
|  |       ||`* Re: How convergent was the general use of binary floating point? - BGB
|  |       || +* Re: How convergent was the general use of binary floating point? - Niklas Holsti
|  |       || |`* Re: How convergent was the general use of binary floating point? - BGB
|  |       || | `* Re: How convergent was the general use of binary floating point? - David Brown
|  |       || |  `- Re: How convergent was the general use of binary floating point? - BGB
|  |       || +* Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  |       || |`* Re: How convergent was the general use of binary floating point? - MitchAlsup
|  |       || | `* Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  |       || |  `- Re: How convergent was the general use of binary floating point? - BGB
|  |       || +* Re: How convergent was the general use of binary floating point? - MitchAlsup
|  |       || |`- Re: How convergent was the general use of binary floating point? - BGB
|  |       || `* Re: How convergent was the general use of binary floating point? - David Brown
|  |       ||  `* Re: How convergent was the general use of binary floating point? - BGB
|  |       ||   +- Re: How convergent was the general use of binary floating point? - Scott Lurndal
|  |       ||   `* Re: How convergent was the general use of binary floating point? - David Brown
|  |       ||    +- Re: How convergent was the general use of binary floating point? - Michael S
|  |       ||    `* Re: How convergent was the general use of binary floating point? - BGB
|  |       ||     `- Re: How convergent was the general use of binary floating point? - David Brown
|  |       |`- Re: How convergent was the general use of binary floating point? - George Neuner
|  |       `- Re: C history on micros, How convergent was the general use of binary floating p - John Levine
|  +* Re: How convergent was the general use of binary floating point? - MitchAlsup
|  |`* Re: How convergent was the general use of binary floating point? - John Levine
|  | `* Re: How convergent was the general use of binary floating point? - Anton Ertl
|  `* Re: How convergent was the general use of binary floating point? - Stephen Fuld
`- Re: How convergent was the general use of binary floating point? - Quadibloc

How convergent was the general use of binary floating point?

<e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29257&group=comp.arch#29257

Newsgroups: comp.arch
Date: Sat, 3 Dec 2022 19:01:44 -0800 (PST)
Message-ID: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
Subject: How convergent was the general use of binary floating point?
From: russell....@gmail.com (Russell Wallace)
Injection-Date: Sun, 04 Dec 2022 03:01:45 +0000
 by: Russell Wallace - Sun, 4 Dec 2022 03:01 UTC

Some historical events and trends are contingent. For example, the rise and dominance of the x86 architecture could easily have been otherwise, had the IBM PC team chosen a different chip such as the Motorola 68000. Others are convergent; somewhat different historical circumstances would still have converged on the same outcome. For example, given that the industry settled on the 8-bit byte, nothing short of Moore's law being interrupted by a global cataclysm would have prevented the existence of 64-bit computers in 2022.

I'm trying to figure out which category floating point as we know it falls into.

By this, I do not mean the specifics of the IEEE 754 standard for binary floating point. I hold the fairly conventional view that 754 is mostly pretty decent, certainly an improvement on the previous state of affairs; there are a few things I would like to fix, but in the grand scheme of things, that's not terribly important. 754 does a decent job of serving its intended users, e.g. physical scientists running simulations in Fortran (which general, fuzzy category of people I will refer to here as 'scientists' for short). Let's take as a starting point that IEEE in the late seventies and early eighties agreed on the standard that it did in our timeline.

But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the history of numerical computation in our world is not that binary floating point exists in its current form, or that it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling NaNs. It is, for other reasons, not ideal for machine learning (though that is getting specialized hardware nowadays). But everyone ended up using the system designed for scientists. Why?
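
The accountants' complaint is easy to make concrete. A minimal sketch (an illustration, not code from any of the products discussed here): binary floating point cannot hold 0.10 exactly, so ten dimes drift off a dollar, while decimal arithmetic rounds the way a paper ledger does.

```python
# Ten dimes, summed two ways. Binary doubles cannot represent 0.10
# exactly, so the binary sum misses 1.0; Python's decimal module keeps
# the exact decimal value an accountant expects.
from decimal import Decimal

total_bin = sum([0.10] * 10)              # binary floating point
total_dec = sum([Decimal("0.10")] * 10)   # decimal arithmetic

print(total_bin == 1.0)                   # False: 0.9999999999999999
print(total_dec == Decimal("1.00"))       # True: exact
```

Same sum, different radix, different answer: that is the whole of the decimal camp's case in miniature.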

(Not that I'm complaining about the state of affairs. A case could be made that, because our species underinvests in public goods like scientific research, it is a good thing that our industry has, even if by accident, ended up on a path where gamers subsidize the development of hardware that can also be used by scientists. In this discussion, I am not taking a position on whether the result is good or bad, only asking whether it is convergent.)

Roughly speaking, the sequence of events was that Intel developed a floating-point coprocessor (the 8087), everyone programmed on the basis of 'use IEEE floating point so the code will run faster if an FPU is present', then with the 486, Intel started integrating the FPU into the CPU.

What exactly did people buy 8087s for, in the early eighties? Of course there were some number of scientists buying them to run simulations and suchlike in Fortran, the main intended purpose of binary floating point. But they were a relatively small group to begin with, and then the only ones who would settle for running their simulations on an 8087 (in the low tens of kiloflops) were those who couldn't get time on a bigger machine. As far as I can tell, most of the 8087s were purchased for one of two things:

Spreadsheets. VisiCalc had used decimal floating point, which is what you preferably want in a spreadsheet; not only does it do arithmetic in a way that better suits accountants, but at that tech level, it would be faster than binary, because the majority of numbers encountered in spreadsheets are simple when expressed in decimal. It's faster to multiply arbitrary 64-bit numbers in binary, but spreadsheets don't typically work with arbitrary numbers, and at that tech level, it's faster to multiply 1.23 by 4.56 in decimal, because the multiply loop can exit early. But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would recalculate faster if you bought an 8087.
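
The early-exit point can be sketched with a textbook digit-by-digit multiply (an illustration, not VisiCalc's or 1-2-3's actual routine): the work scales with the count of significant digits, so a short decimal like 1.23 x 4.56 costs only a handful of digit products, where a software binary multiply pays for the full word width.

```python
# Software decimal multiply as a digit loop. The loop runs once per pair
# of significant digits, so short decimals finish early.

def digits(n):
    """Decimal digits of a non-negative integer, least significant first."""
    d = []
    while n:
        d.append(n % 10)
        n //= 10
    return d or [0]

def dec_mul(a, b):
    """Multiply two scaled integers digit by digit.
    Returns (product, number of single-digit multiplies performed)."""
    da, db = digits(a), digits(b)
    result = [0] * (len(da) + len(db))
    ops = 0
    for i, x in enumerate(da):
        carry = 0
        for j, y in enumerate(db):     # the costly part: one digit product
            ops += 1
            t = result[i + j] + x * y + carry
            result[i + j] = t % 10
            carry = t // 10
        result[i + len(db)] += carry
    prod = sum(d * 10**k for k, d in enumerate(result))
    return prod, ops

# 1.23 * 4.56 as scaled integers with two decimal places each:
p, ops = dec_mul(123, 456)
assert p == 56088        # i.e. 5.6088 after rescaling by 10**-4
assert ops == 9          # only 3 x 3 digit products
```

By contrast, two full-precision 16-digit operands would cost 256 digit products in the same loop; the "simple numbers are cheap" effect is exactly this.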

CAD.

Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc. Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming effort needed to make AutoCAD work with fixed point instead.

On this timeline, far fewer people have a reason to buy an 8087. Intel keeps making them for the relatively small handful of customers, but floating-point coprocessors remain a niche product. C and Pascal compiler vendors notice this, and don't bother supporting the 8087, instead shipping a different binary floating point format that's optimized for software implementation. Noting that this is still rather slow, Borland takes the step of providing language-extension support for fixed point, and other compiler vendors follow suit.

At the end of the eighties, Intel doesn't bother wasting silicon integrating an FPU into the 486, preferring to spend the transistors making integers, fixed point and decimal run faster. In the nineties, as floating point has not become fast by default, 3D game programmers don't start using it. They stick with fixed point, diverting the positive feedback loop that happened OTL from floating to fixed point. GPUs do likewise, as do machine learning researchers when that becomes important.

Scientists find themselves under pressure to find a way to get by with cheap commodity hardware, and Fortran compiler vendors respond by adding support for fixed point. Which is /not/ quite an adequate substitute. When you're writing computational fluid dynamics code, you would much prefer not to have to try to figure out in advance what the dynamic range will be; that's why floating point was invented in the first place! But sometimes 'cheap commodity off-the-shelf' beats 'ideally designed for my purposes'.
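
The burden fixed point puts on the programmer can be sketched with a toy Q16.16 format (an illustration; no particular compiler's extension is implied): the arithmetic is just integer multiplies and shifts, but the dynamic range must be chosen in advance, and exceeding it wraps with no exponent to absorb the overflow.

```python
# Q16.16 fixed point: 16 integer bits, 16 fraction bits in one integer.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fix(x):
    return int(round(x * ONE))

def from_fix(f):
    return f / ONE

def fix_mul(a, b):
    # One integer multiply plus a shift -- cheap, fast, no FPU needed.
    return (a * b) >> FRAC_BITS

# Works fine inside the chosen range:
assert from_fix(fix_mul(to_fix(3.25), to_fix(0.5))) == 1.625

# The range problem: anything at or above 2**15 no longer fits a signed
# 32-bit Q16.16 word, so on real hardware it would silently wrap.
big = to_fix(40000.0)
assert big >= 2**31
```

That pre-committed range is exactly what the CFD programmer above would rather not have to guess; floating point trades a little speed for not having to.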

Is that a realistic alternate history or, given that point of departure, would numerical computation have still converged on the general use of binary floating point? If the latter, why and how? Note that I'm not making any claims about one form of numerical computation being better than another /in general/ (as opposed to for specific purposes). I'm trying to evaluate a conjecture about how the dynamics of the market would bring about a result.

Re: How convergent was the general use of binary floating point?

<dfe02530-60f2-4b6b-871d-e3136f2664a5n@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29258&group=comp.arch#29258

Newsgroups: comp.arch
Date: Sat, 3 Dec 2022 19:16:41 -0800 (PST)
In-Reply-To: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
Message-ID: <dfe02530-60f2-4b6b-871d-e3136f2664a5n@googlegroups.com>
Subject: Re: How convergent was the general use of binary floating point?
From: MitchAl...@aol.com (MitchAlsup)
Injection-Date: Sun, 04 Dec 2022 03:16:41 +0000
 by: MitchAlsup - Sun, 4 Dec 2022 03:16 UTC

On Saturday, December 3, 2022 at 9:01:46 PM UTC-6, Russell Wallace wrote:
> Some historical events and trends are contingent. For example, the rise and dominance of the x86 architecture could easily have been otherwise, had the IBM PC team chosen a different chip such as the Motorola 68000. Others are convergent; somewhat different historical circumstances would still have converged on the same outcome. For example, given that the industry settled on the 8-bit byte, nothing short of Moore's law being interrupted by a global cataclysm would have prevented the existence of 64-bit computers in 2022.
>
> I'm trying to figure out which category floating point as we know it falls into.
>
> By this, I do not mean the specifics of the IEEE 754 standard for binary floating point. I hold the fairly conventional view that 754 is mostly pretty decent, certainly an improvement on the previous state of affairs; there are a few things I would like to fix, but in the grand scheme of things, that's not terribly important. 754 does a decent job of serving its intended users, e.g. physical scientists running simulations in Fortran (which general, fuzzy category of people I will refer to here as 'scientists' for short). Let's take as a starting point that IEEE in the late seventies and early eighties agreed on the standard that it did in our timeline.
>
> But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the history of numerical computation in our world is not that binary floating point exists in its current form, or that it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware nowadays). But everyone ended up using the system designed for scientists. Why?
>
> (Not that I'm complaining about the state of affairs. A case could be made that, because our species underinvests in public goods like scientific research, it is a good thing that our industry has, even if by accident, ended up on a path where gamers subsidize the development of hardware that can also be used by scientists. In this discussion, I am not taking a position on whether the result is good or bad, only asking whether it is convergent.)
>
> Roughly speaking, the sequence of events was that Intel developed a floating-point coprocessor (the 8087), everyone programmed on the basis of 'use IEEE floating point so the code will run faster if an FPU is present', then with the 486, Intel started integrating the FPU into the CPU.
>
> What exactly did people buy 8087s for, in the early eighties? Of course there were some number of scientists buying them to run simulations and suchlike in Fortran, the main intended purpose of binary floating point. But they were a relatively small group to begin with, and then the only ones who would settle for running their simulations on an 8087 (in the low tens of kiloflops) were those who couldn't get time on a bigger machine. As far as I can tell, most of the 8087s were purchased for one of two things:
>
> Spreadsheets. VisiCalc had used decimal floating point, which is what you preferably want in a spreadsheet; not only does it do arithmetic in a way that better suits accountants, but at that tech level, it would be faster than binary, because the majority of numbers encountered in spreadsheets, are simple when expressed in decimal. It's faster to multiply arbitrary 64-bit numbers in binary, but spreadsheets don't typically work with arbitrary numbers, and at that tech level, it's faster to multiply 1.23 by 4.56 in decimal, because the multiply loop can exit early. But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would recalculate faster if you bought an 8087.
>
> CAD.
>
> Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc. Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming effort needed to make AutoCAD work with fixed point instead.
>
> On this timeline, far fewer people have a reason to buy an 8087. Intel keeps making them for the relatively small handful of customers, but floating-point coprocessors remain a niche product. C and Pascal compiler vendors notice this, and don't bother supporting the 8087, instead shipping a different binary floating point format that's optimized for software implementation. Noting that this is still rather slow, Borland takes the step of providing language-extension support for fixed point, and other compiler vendors follow suit.
>
<
By the mid 1980s the RISC evolution was well underway, MIPS R2000 could do FADD in 2 cycles, FMUL in
3 (or was it 4) cycles, LDs were 2 cycles, and almost everything else was 1 cycle. This is where your alternate
history falls apart. The RISC camp delivered FP, and x86 had to follow suit and go from a separate optional
chip (386+387) to integrated and pipelined (486).
<
> At the end of the eighties, Intel doesn't bother wasting silicon integrating an FPU into the 486, preferring to spend the transistors making integers, fixed point and decimal run faster. In the nineties, as floating point has not become fast by default, 3D game programmers don't start using it. They stick with fixed point, diverting the positive feedback loop that happened OTL from floating to fixed point. GPUs do likewise, as do machine learning researchers when that becomes important.
>
> Scientists find themselves under pressure to find a way to get by with cheap commodity hardware, and Fortran compiler vendors respond by adding support for fixed point. Which is /not/ quite an adequate substitute. When you're writing computational fluid dynamics code, you would much prefer not to have to try to figure out in advance what the dynamic range will be; that's why floating point was invented in the first place! But sometimes 'cheap commodity off-the-shelf' beats 'ideally designed for my purposes'.
<
Unfortunately, RISC guys could not get the cost structure of workstations down to the cost structure of
PCs. Once x86 became pipelined, then SuperScalar, the cubic dollars available to fund design teams
were easily able to outrun the design teams of smaller companies (RISC guys) and by Pentium Pro
their (RISC) days had been numbered, counted, and scheduled.
>
> Is that a realistic alternate history or, given that point of departure, would numerical computation have still converged on the general use of binary floating point? If the latter, why and how? Note that I'm not making any claims about one form of numerical computation being better than another /in general/ (as opposed to for specific purposes). I'm trying to evaluate a conjecture about how the dynamics of the market would bring about a result.

Re: How convergent was the general use of binary floating point?

<tmh9ga$2q2t$1@gal.iecc.com>


https://www.novabbs.com/devel/article-flat.php?id=29259&group=comp.arch#29259

From: joh...@taugh.com (John Levine)
Newsgroups: comp.arch
Subject: Re: How convergent was the general use of binary floating point?
Date: Sun, 4 Dec 2022 04:59:54 -0000 (UTC)
Organization: Taughannock Networks
Message-ID: <tmh9ga$2q2t$1@gal.iecc.com>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
Injection-Date: Sun, 4 Dec 2022 04:59:54 -0000 (UTC)
In-Reply-To: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
Cleverness: some
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
Originator: johnl@iecc.com (John Levine)
 by: John Levine - Sun, 4 Dec 2022 04:59 UTC

According to Russell Wallace <russell.wallace@gmail.com>:
>But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the
>history of numerical computation in our world is not that binary floating point exists in its current form, or that
>it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have
>preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for
>gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling
>NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware
>nowadays). But everyone ended up using the system designed for scientists. Why?

Because binary floating point gets the job done. The first hardware
floating point on the IBM 704 in 1954 looks a lot like floating point
today. There was a sign bit, an 8 bit exponent stored in excess 128,
and a usually normalized 27 bit fraction. On the 704 they knew that
all normalized floats had a 1 in the high bit (the manual says so),
but it took two decades to notice that you could leave that bit
unstored and gain an extra bit of precision.
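
The hidden-bit trick is easy to see by unpacking a modern IEEE single by hand; a sketch for normalized values only (subnormals, zeros, infinities and NaNs omitted):

```python
# Decode an IEEE 754 single precision value manually. The stored 23-bit
# fraction gains a 24th bit because every normalized significand starts
# with an implicit 1 that is never stored.
import struct

def decode_f32(x):
    bits, = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exp  = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    significand = (1 << 23) | frac          # restore the hidden leading 1
    return (-1)**sign * significand * 2.0**(exp - 127 - 23)

assert decode_f32(6.5) == 6.5
assert decode_f32(-0.75) == -0.75
```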

A few decimal machines in the 1950s had decimal FP, and the IBM 360
had famously botched hex FP, but other than that, it was all binary
all the way.

I believe that some early IEEE FP implementations trapped when the
result would be denormal and did it in software, so I don't think it
was ever a performance issue for normal results. Denormals definitely
help to avoid surprising precision errors.
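
A small sketch of the surprise denormals prevent: with gradual underflow, x != y guarantees x - y != 0 even just below the normal range, where a flush-to-zero machine would return exactly zero.

```python
# Gradual underflow in action with ordinary doubles.
import sys

smallest_normal = sys.float_info.min      # 2.0**-1022
sub = smallest_normal / 4                 # a subnormal, not zero
assert sub > 0.0

x = smallest_normal * 1.5
y = smallest_normal
# Their difference (0.5 * 2**-1022) is itself subnormal but nonzero,
# so x != y still implies x - y != 0.
assert x != y and (x - y) != 0.0
```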

>Spreadsheets. VisiCalc had used decimal floating point, ...
> But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would
>recalculate faster if you bought an 8087.

Visicalc used sort of a virtual machine to make it easy to port to all
of the different micros before the IBM PC, and I expect it was easier
to do decimal than to try and deal with all of the quirks of the
underlying machines' arithmetic. Lotus made a deliberate choice to
target only the IBM PC and take full advantage of all of its quirks,
such as using all of the keys on the keyboard. The people who started
Lotus knew the authors of Visicalc and I'm sure were aware of the
decimal vs binary issue.

Nobody actually wants decimal floating point; rather, they want scaled
decimal with predictable precision, which is why DFP has all of that
quantum stuff.

Back in the 80s I worked on Javelin, a modeling package that sort of
competed with 1-2-3. I wrote the financial functions for bond price
and yield calculations, which are defined to do decimal rounding. It
was a minor pain to implement using 8087 floating point but it wasn't
all that hard. So long as you are careful about rounding to avoid
nonsense like 10 * 0.1 = 0.9999999997, few people could tell what
the internal radix was, and binary is a lot faster to implement.
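
That kind of care can be sketched as follows (an illustration, not Javelin's actual code): compute in binary, then round at a decimal position before the user sees the result.

```python
# Binary arithmetic with decimal-position rounding on output, so the
# user never sees the underlying radix. (round(x * q) / q is itself a
# simplification with its own edge cases; it is only a sketch.)

def round_dec(x, places=10):
    q = 10 ** places
    return round(x * q) / q

total = 0.0
for _ in range(10):
    total += 0.1                 # accumulates binary rounding error
assert total != 1.0              # the raw binary sum is slightly off
assert round_dec(total) == 1.0   # rounded at 10 decimal places, exact
```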

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Re: How convergent was the general use of binary floating point?

<b98311ec-6a8d-4c0f-a151-d2a8f794cc86n@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29260&group=comp.arch#29260

Newsgroups: comp.arch
Date: Sat, 3 Dec 2022 22:02:20 -0800 (PST)
In-Reply-To: <tmh9ga$2q2t$1@gal.iecc.com>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com> <tmh9ga$2q2t$1@gal.iecc.com>
Message-ID: <b98311ec-6a8d-4c0f-a151-d2a8f794cc86n@googlegroups.com>
Subject: Re: How convergent was the general use of binary floating point?
From: robfi...@gmail.com (robf...@gmail.com)
Injection-Date: Sun, 04 Dec 2022 06:02:21 +0000
 by: robf...@gmail.com - Sun, 4 Dec 2022 06:02 UTC

On Saturday, December 3, 2022 at 11:59:58 PM UTC-5, John Levine wrote:
> According to Russell Wallace <russell...@gmail.com>:
> >But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the
> >history of numerical computation in our world is not that binary floating point exists in its current form, or that
> >it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have
> >preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for
> >gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling
> >NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware
> >nowadays). But everyone ended up using the system designed for scientists. Why?
> Because binary floating point gets the job done. The first hardware
> floating point on the IBM 704 in 1954 looks a lot like floating point
> today. There was a sign bit, an 8 bit exponent stored in excess 128,
> and a usually normalized 27 bit fraction. On the 704 they knew that
> all normalized floats had a 1 in the high bit (the manual says so)
> but it took two decades to notice that you could not store it and get
> an extra precision bit.
>
> A few decimal machines in the 1950s had decimal FP, and the IBM 360
> had famously botched hex FP, but other than that, it was all binary
> all the way.
>
> I believe that some early IEEE FP implementations trapped when the
> result would be denormal and did it in software, so I don't think it
> was ever a performance issue for normal results. Denormals definitely
> help to avoid surprising precision errors.
>
> >Spreadsheets. VisiCalc had used decimal floating point, ...
> > But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would
> >recalculate faster if you bought an 8087.
> Visicalc used sort of a virtual machine to make it easy to port to all
> of the different micros before the IBM PC, and I expect it was easier
> to do decimal than to try and deal with all of the quirks of the
> underlying machines' arithmetic. Lotus made a deliberate choice to
> target only the IBM PC and take full advantage of all of its quirks,
> such as using all of the keys on the keyboard. The people who started
> Lotus knew the authors of Visicalc and I'm sure were aware of the
> decimal vs binary issue.
>
> Nobody actually wants decimal floating point, rather they want scaled
> decimal with predictable precision, which is why DFP has all of that
> quantum stuff.
>
> Back in the 80s I worked on Javelin, a modeling package that sort of
> competed with 1-2-3. I wrote the financial functions for bond price
> and yield calculations, which are defined to do decimal rounding. It
> was a minor pain to implement using 8087 floating point but it wasn't
> all that hard. So long as you are careful about rounding to avoid
> nonsense like 10 * 0.1 = 0.9999999997, few people could tell what
> the internal radix was, and binary is a lot faster to implement.
>
> --
> Regards,
> John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
> Please consider the environment before reading this e-mail. https://jl.ly

My guess is probably fairly convergent. I think binary would be hard to beat for performance and power; not necessarily the IEEE format, but some binary format. What if ternary numbers had become common? I seem to recall a computer system built around ternary values instead of binary ones.

I am reminded of the dinosaurs that converged on the same forms in different periods of time. Even though the forms converge it does not mean that there are not other forms in existence.

Re: How convergent was the general use of binary floating point?

<tmhdjg$3j2lc$1@dont-email.me>

Newsgroups: comp.arch
From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
 <tmh9ga$2q2t$1@gal.iecc.com>
 by: Stephen Fuld - Sun, 4 Dec 2022 06:09 UTC

On 12/3/2022 8:59 PM, John Levine wrote:
> According to Russell Wallace <russell.wallace@gmail.com>:
>> But the phrase 'its intended users' in the above paragraph is doing a lot of work. The remarkable thing about the
>> history of numerical computation in our world is not that binary floating point exists in its current form, or that
>> it is used by scientists, but that it is also used by everyone else! It's not ideal for accountants, who would have
>> preferred decimal floating point, where the rounding errors match what they would get on paper. It's not ideal for
>> gamers, who would have preferred not to waste electricity on denormals, switchable rounding modes and signaling
>> NaNs. It is for other reasons, not ideal for machine learning (though that is getting specialized hardware
>> nowadays). But everyone ended up using the system designed for scientists. Why?
>
> Because binary floating point gets the job done. The first hardware
> floating point on the IBM 704 in 1954 looks a lot like floating point
> today. There was a sign bit, an 8 bit exponent stored in excess 128,
> and a usually normalized 27 bit fraction. On the 704 they knew that
> all normalized floats had a 1 in the high bit (the manual says so)
> but it took two decades to notice that you could not store it and get
> an extra precision bit.
>
> A few decimal machines in the 1950s had decimal FP, and the IBM 360
> had famously botched hex FP, but other than that, it was all binary
> all the way.

A minor quibble, which you actually address below. As you said there,
contrary to what the OP said, accountants didn't want DFP. They wanted
scaled decimal fixed point with the compiler taking care of scaling
automatically.

In the 1970s, I worked at the United States Social Security
Administration, which, from a business/accounting point of view, looks a
lot like a large insurance company. All the main programs that "ran the
business" used COMP-3, packed decimal for all the calculations. I
believe that many/most large companies did similar. So it wasn't
"binary all the way", but, at least for business applications, it was
primarily scaled decimal.

> I believe that some early IEEE FP implementations trapped when the
> result would be denormal and did it in software, so I don't think it
> was ever a performance issue for normal results. Denormals definitely
> help to avoid surprising precision errors.
>
>> Spreadsheets. VisiCalc had used decimal floating point, ...
>> But for whatever reason, Lotus decided to use binary for 1-2-3, and that meant your spreadsheets would
>> recalculate faster if you bought an 8087.
>
> Visicalc used sort of a virtual machine to make it easy to port to all
> of the different micros before the IBM PC, and I expect it was easier
> to do decimal than to try and deal with all of the quirks of the
> underlying machines' arithmetic. Lotus made a deliberate choice to
> target only the IBM PC and take full advantage of all of its quirks,
> such as using all of the keys on the keyboard. The people who started
> Lotus knew the authors of Visicalc and I'm sure were aware of the
> decimal vs binary issue.
>
> Nobody actually wants decimal floating point, rather they want scaled
> decimal with predictable precision, which is why DFP has all of that
> quantum stuff.
>
> Back in the 80s I worked on Javelin, a modeling package that sort of
> competed with 1-2-3. I wrote the financial functions for bond price
> and yield calculations, which are defined to do decimal rounding. It
> was a minor pain to implement using 8087 floating point but it wasn't
> all that hard. So long as you are careful about rounding to avoid
> nonsense like 10 * 0.1 = 0.9999999997, few people could tell what
> the internal radix was, and binary is a lot faster to implement.
>

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: How convergent was the general use of binary floating point?

<tmhopg$3kueq$1@dont-email.me>

Newsgroups: comp.arch
From: cr88...@gmail.com (BGB)
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
 <tmh9ga$2q2t$1@gal.iecc.com>
 <b98311ec-6a8d-4c0f-a151-d2a8f794cc86n@googlegroups.com>
 by: BGB - Sun, 4 Dec 2022 09:20 UTC

On 12/4/2022 12:02 AM, robf...@gmail.com wrote:
> My guess is probably fairly convergent. I think it would be hard to beat for performance and power. Not necessarily IEEE format, but any other binary format as well. What if trinary numbers had become common? I seem to recall a computer system built around trinary values instead of binary ones.
>
> I am reminded of the dinosaurs that converged on the same forms in different periods of time. Even though the forms converge it does not mean that there are not other forms in existence.
>

In a general sense, I suspect floating-point was likely inevitable.

As for what parts could have varied:
Location of the sign and/or exponent;
Sign/Magnitude and/or Twos Complement;
Handling of values near zero;
Number of exponent bits for larger types;
...

For example, one could have used Twos Complement, allowing a signed
integer compare to be used unmodified for floating-point compares.
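As a rough sketch of the idea (helper name made up): even IEEE-754's existing sign/magnitude layout can be folded into a two's-complement key, so that ordinary signed integer comparison matches the floating-point ordering, NaNs aside. With a true two's-complement float format, no fold would be needed at all.

```c
#include <stdint.h>
#include <string.h>

/* Fold a double's sign/magnitude bit pattern into a two's-complement
   key: positive floats already order correctly as integers; negative
   floats are reflected so larger magnitudes give smaller keys. */
static int64_t sortable_key(double d) {
    int64_t i;
    memcpy(&i, &d, sizeof i);          /* bit-copy, no undefined behavior */
    return i >= 0 ? i : INT64_MIN - i; /* i < 0: yields -(magnitude bits) */
}
```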

For many purposes, subnormal numbers do not matter.
* DAZ / FTZ is "usually good enough";
* Could have maybe defined that "zero does not exist per-se".
** Zero would not be strictly zero, merely the smallest possible value.
** OTOH: ((0+0)!=0) would be "kinda stupid".
** IMHO: Zero does at least mostly justify the cost of its existence.

The relative value of giving special semantics (in hardware) to the
Inf/NaN cases could be debated. Though, arguably, they do have meaning
as "something has gone wrong in the math" signals, which would not
necessarily have been preserved with clamping.

One could have maybe also folded the Inf/NaN cases into 0, say:
Exp=0, all bits zero, Zero
Exp=0, high bits of mantissa are not zero, Inf or NaN.
With the largest exponent as the upper end of the normal range.

Cheap hardware could simply have calculations with these as inputs
decay to zero.

One could similarly make an argument that user-specified rounding modes
(or, for that matter, any rounding other than "truncate towards zero")
need not exist.

For many purposes, more than a small number of sub-ULP bits does not matter.
* One may observe that one gets "basically similar" results with 2 or 4
sub-ULP bits as with more.

Main exception case is things like multiply-accumulate and catastrophic
cancellation, but this almost makes more sense as a special-case
operator (since "A*B+C" may generate different results with FMAC than as
FMUL+FADD).

One could make a case for having a hardware FMAC that always produces
the "double rounded result" (say, being cheaper for hardware and
consistent with the FMUL+FADD sequence).

In this case, the non-double-rounded version could be provided as a
function in "math.h" or similar.

But, I guess one could raise the issue of, say:
If floating point math had been defined as strictly DAZ+FTZ,
Truncate-Only, defining the existence of exactly 4 sub-ULP bits for
Double ops, ...

Would anyone have really noticed?... ( Apart from maybe it being
slightly easier to get bit-identical results between implementations,
since the bar would have been set a little lower. )

It is sort of like, despite typically having less precision, it is
easier to get bit-identical results with fixed-point calculations than
with floating point.
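For instance (a sketch, truncating toward zero), a Q16.16 fixed-point multiply is pure integer arithmetic, so every machine produces the same bits:

```c
#include <stdint.h>

/* Q16.16 fixed-point: 16 integer bits, 16 fraction bits. The multiply
   is exact 64-bit integer math followed by a shift, so results are
   reproducible bit-for-bit across machines, with no rounding mode
   or sub-ULP behavior to vary between implementations. */
typedef int32_t q16_16;

static q16_16 qmul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16); /* truncates extra fraction bits */
}
```

(Strictly, `>>` on a negative value is implementation-defined in C, though arithmetic shift is universal in practice.)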

It would be sort of like, say, if people agreed to do math with PI
defined as 201/64 (3.140625) or maybe 3217/1024 (3.1416015625) under the
rationale that, while not exactly the "true to life" value, its
relative imprecision can also give it more robustness in the face of
intermediate calculations.

....

Re: How convergent was the general use of binary floating point?

<2022Dec4.101848@mips.complang.tuwien.ac.at>

Newsgroups: comp.arch
From: ant...@mips.complang.tuwien.ac.at (Anton Ertl)
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
 by: Anton Ertl - Sun, 4 Dec 2022 09:18 UTC

Russell Wallace <russell.wallace@gmail.com> writes:
[Binary FP]
>It's not ideal for accountants, who would have preferred decimal floating
>point, where the rounding errors match what they would get on paper.

This statement is contradictory. What they do on paper is decimal
fixed point, so true decimal FP would produce different rounding
errors. That's why what was standardized as "decimal FP" actually is
not really a floating-point number representation: in the usual case
the point does not float. Still, the lack of hardware support
(outside IBM) and the lack of improved software libraries for decimal
FP speak against this stuff being a success.

>Spreadsheets. VisiCalc had used decimal floating point, which is what you
>preferably want in a spreadsheet; not only does it do arithmetic in a way
>that better suits accountants, but at that tech level, it would be faster
>than binary, because the majority of numbers encountered in spreadsheets
>are simple when expressed in decimal. It's faster to multiply arbitrary
>64-bit numbers in binary, but spreadsheets don't typically work with
>arbitrary numbers, and at that tech level, it's faster to multiply 1.23 by
>4.56 in decimal, because the multiply loop can exit early.

On hardware with a multiplier loop like the 386 and 486 the binary
multiplication time was data-dependent, i.e., it exits early on some
values. What should the supposed advantage of decimal be?

On hardware with a full-blown multiplier like the 88100 integer
multiply takes a few cycles independent of the data values. What
should the advantage of decimal be there?

>Point of departure: Lotus decides their spreadsheet should use decimal,
>like VisiCalc.

Some competitor uses binary FP, and most users decide that they like
speed more than decimal rounding. Even the few who would like
commercial rounding rules would have failed to specify the rounding
rules to the spreadsheet program; such specifications would just have
been too complex to make and verify; this would have been a job for a
programmer, not the majority of 1-2-3 customers.

OTOH, having decimal fixed point as optional feature might have
supported marketing even if the users would not have used it
successfully in practice. It works for IBM, after all. Still, 1-2-3
sold well without such a feature (right?); and if a competitor
introduced such a feature, it did not help them enough to beat 1-2-3
or force Lotus to add that feature to 1-2-3.

Looking at the Wikipedia page of 1-2-3 it is interesting to see what
features were added over time.

>On this timeline, far fewer people have a reason to buy an 8087. Intel
>keeps making them for the relatively small handful of customers, but
>floating-point coprocessors remain a niche product. C and Pascal compiler
>vendors notice this, and don't bother supporting the 8087, instead
>shipping a different binary floating point format that's optimized for
>software implementation. Noting that this is still rather slow, Borland
>takes the step of providing language-extension support for fixed point,
>and other compiler vendors follow suit.

No way. Fixed point lost to floating point everywhere the CPU was
big enough to support floating-point hardware; this started in the
1950s with the IBM 704, repeated itself on the minis, and repeated
itself again on the micros. The programming ease of floating point
won out over fixed point every time. Even without hardware,
floating point was very popular (e.g., in MS Basic on all the micros).

>Scientists find themselves under pressure to find a way to get by with
>cheap commodity hardware,

Actually the fact that floating point won in scientific computing,
where the software crisis either has not happened or was much less
forceful, and it happened more than a decade before anyone even coined
the term "software crisis", shows how important the programming
advantage of floating-point is. Customers of scientific computers are
usually happy to save money on the hardware at the cost of additional
software expense (because they still save more on the hardware than
they have to pay for the additional programming effort; i.e., no
software crisis), but not in the case of floating-point vs. fixed
point.

And if floating point wins in scientific computing, it is all the more
important in areas where the software crisis is relevant.

>and Fortran compiler vendors respond by adding support for fixed point.

Ada has support for fixed point (including IIRC decimal fixed point).
Yet commercial users failed to flock to Ada. It seems that the desire
of commercial users for implementing their rounding rule is very
limited.

>Which is /not/ quite an adequate substitute. When you're writing
>computational fluid dynamics code, you would much prefer not to have to
>try to figure out in advance what the dynamic range will be; that's why
>floating point was invented in the first place! But sometimes 'cheap
>commodity off-the-shelf' beats 'ideally designed for my purposes'.

In this case, floating point beat fixed point, every time, as soon as
hardware was big enough to support hardware floating-point.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: How convergent was the general use of binary floating point?

<tmi4k6$chju$2@newsreader4.netcologne.de>

Newsgroups: comp.arch
From: tkoe...@netcologne.de (Thomas Koenig)
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
 <tmh9ga$2q2t$1@gal.iecc.com>
 by: Thomas Koenig - Sun, 4 Dec 2022 12:42 UTC

John Levine <johnl@taugh.com> schrieb:

> A few decimal machines in the 1950s had decimal FP, and the IBM 360
> had famously botched hex FP, but other than that, it was all binary
> all the way.

I believe Data General actually used IBM's floating point format.
Whether it was to save a few gates, or because they thought that
the market leader in computers could not possibly be wrong,
I don't know.

Re: How convergent was the general use of binary floating point?

<72d6ecc1-8b1a-4b21-9cb7-61ad118a42dan@googlegroups.com>

Newsgroups: comp.arch
From: russell....@gmail.com (Russell Wallace)
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com> <2022Dec4.101848@mips.complang.tuwien.ac.at>
 by: Russell Wallace - Sun, 4 Dec 2022 12:57 UTC

On Sunday, December 4, 2022 at 10:20:58 AM UTC, Anton Ertl wrote:
> This statement is contradictory. What they do on paper is decimal
> fixed point, so true decimal FP would produce different rounding
> errors. That's why what was standardized as "decimal FP" actually is
> not really a floating-point number representation: in the usual case
> the point does not float.

Right, sort of. A fixed program, e.g. a payroll program in COBOL, typically uses actual fixed point, which is most efficiently represented as scaled binary integers, i.e. just represent the number of cents as a plain integer. What I'm referring to here as decimal floating point, is what you would normally do in a spreadsheet, where the point floats only when necessary. I'm agnostic about whether 'floating point' is the best term for that; it is not the same thing as fixed point. (Unlike the payroll program, VisiCalc can happily take very small numbers without reprogramming.)

> On hardware with a multiplier loop like the 386 and 486 the binary
> multiplication time was data-dependent, i.e., it exits early on some
> values. What should the supposed advantage of decimal be?

Two advantages:

Values that allow early exit in decimal occur much more often in spreadsheets than values that allow early exit in binary; see my example of 1.23 x 4.56.

It's faster to update the display when the values are in decimal.

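A sketch of the display point (function names invented for illustration): with packed BCD, each nibble is already a decimal digit, so formatting is a straight copy, while a binary value needs a divide per output digit. On early micros, where divide was slow, those per-digit divides dominated.

```c
#include <stdint.h>

/* Packed BCD: render 8 digits by peeling nibbles -- no division. */
static void bcd_to_ascii(uint32_t bcd, char out[9]) {
    for (int i = 0; i < 8; i++)
        out[i] = (char)('0' + ((bcd >> (28 - 4 * i)) & 0xF));
    out[8] = '\0';
}

/* Binary: one divide (or multiply-by-reciprocal) per output digit. */
static void bin_to_ascii(uint32_t v, char out[9]) {
    for (int i = 7; i >= 0; i--) { out[i] = (char)('0' + v % 10); v /= 10; }
    out[8] = '\0';
}
```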
> On hardware with a full-blown multiplier like the 88100 integer
> multiply takes a few cycles independent of the data values. What
> should the advantage of decimal be there?

Now the faster-calculation advantage of decimal is gone, though by this stage both are probably fast enough. Decimal still has better rounding and faster display update.

> Even the few who would like
> commercial rounding rules would have failed to specify the rounding
> rules to the spreadsheet program; such specifications would just have
> been too complex to make and verify; this would have been a job for a
> programmer, not the majority of 1-2-3 customers.

Is it hard to specify rounding rules to a spreadsheet? It seems to me that in most cases, it could be done by selecting from a shortlist of options.

> Actually the fact that floating point won in scientific computing,
> where the software crisis either has not happened or was much less
> forceful, and it happened more than a decade before anyone even coined
> the term "software crisis", shows how important the programming
> advantage of floating-point is. Customers of scientific computers are
> usually happy to save money on the hardware at the cost of additional
> software expense (because they still save more on the hardware than
> they have to pay for the additional programming effort; i.e., no
> software crisis), but not in the case of floating-point vs. fixed
> point.
>
> And if floating point wins in scientific computing, it is all the more
> important in areas where the software crisis is relevant.

Yeah, this is a good point.

Re: How convergent was the general use of binary floating point?

<78ce2f25-640f-482f-9a5c-11fe63d5e264n@googlegroups.com>

Newsgroups: comp.arch
From: jsav...@ecn.ab.ca (Quadibloc)
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
 by: Quadibloc - Sun, 4 Dec 2022 13:12 UTC

On Saturday, December 3, 2022 at 8:01:46 PM UTC-7, Russell Wallace wrote:

> Point of departure: Lotus decides their spreadsheet should use decimal, like VisiCalc.
> Autodesk notices this, decides the 8087 won't be a big hit, and puts in the programming
> effort needed to make AutoCAD work with fixed point instead.

If those things had happened, indeed we might never have seen the chips by Weitek and
others which were intended as faster alternatives to the 8087. Those were often
directly pitched at Autodesk users.

But because binary floating-point was nearly universal in mainframes, ultimately
floating point would still have made it into microprocessors. Maybe by the time of the
Pentium II instead of the 486, but it is too obvious a feature to be forever
abandoned in favor of fixed point simply because of a few key applications.

John Savard

Re: How convergent was the general use of binary floating point?

<Vq2jL.61535$pem1.52787@fx10.iad>


https://www.novabbs.com/devel/article-flat.php?id=29276&group=comp.arch#29276

 by: Scott Lurndal - Sun, 4 Dec 2022 14:48 UTC

John Levine <johnl@taugh.com> writes:

>A few decimal machines in the 1950s had decimal FP, and the IBM 360
>had famously botched hex FP, but other than that, it was all binary
>all the way.

And at least one line of decimal machines with decimal FP was
still active in 2010 (Unisys, née Burroughs). Although I will
admit that very few customers actually used it, as it wasn't
particularly interesting to the banking, insurance, and other
bookkeeping applications.

Re: How convergent was the general use of binary floating point?

<2022Dec4.193333@mips.complang.tuwien.ac.at>


https://www.novabbs.com/devel/article-flat.php?id=29282&group=comp.arch#29282

 by: Anton Ertl - Sun, 4 Dec 2022 18:33 UTC

Russell Wallace <russell.wallace@gmail.com> writes:
>On Sunday, December 4, 2022 at 10:20:58 AM UTC, Anton Ertl wrote:
>> On hardware with a multiplier loop like the 386 and 486 the binary
>> multiplication time was data-dependent, i.e., it exits early on some
>> values. What should the supposed advantage of decimal be?
>
>Two advantages:
>
>Values that allow early exit in decimal occur much more often in spreadsheets
>than values that allow early exit in binary; see my example of 1.23 x 4.56.

If these are represented as the binary integers 123 (7 bits) and 456
(9 bits), it seems to me that the multiplier can early-out at least as
early as if they are represented as BCD numbers (both 12 bits).

>It's faster to update the display when the values are in decimal.

Display update is so slow that the conversion cost is minor change.

>> Even the few who would like
>> commercial rounding rules would have failed to specify the rounding
>> rules to the spreadsheet program; such specifications would just have
>> been too complex to make and verify; this would have been a job for a
>> programmer, not the majority of 1-2-3 customers.
>
>Is it hard to specify rounding rules to a spreadsheet? It seems to me that
>in most cases, it could be done by selecting from a shortlist of options.

I found it hard to specify the output format, but then I rarely use
spreadsheets. Have you seen a spreadsheet that lets you specify
rounding rules? We have enough CPU power, we have decimal FP
libraries, but does Excel or LibreOffice Calc etc. make any use of
them? Not to my knowledge.

The usual idea of spreadsheet usage seems to be that you type in some
numbers, you type in some formulas (already pretty advanced stuff for
the user who is not used to programming), and the spreadsheet does the
rest. You change some numbers, and look what you get. Now,
formatting: the user needs to abstract from one concrete result to a
general result, that's really getting into the deep. Rounding rules
are apparently too far out to even hide in a corner of the present-day
spreadsheets.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: How convergent was the general use of binary floating point?

<tmirdq$552$1@gal.iecc.com>


https://www.novabbs.com/devel/article-flat.php?id=29284&group=comp.arch#29284

 by: John Levine - Sun, 4 Dec 2022 19:11 UTC

According to BGB <cr88192@gmail.com>:
>In a general sense, I suspect floating-point was likely inevitable.
>
>As for what parts could have varied:
> Location of the sign and/or exponent;
> Sign/Magnitude and/or Twos Complement;
> Handling of values near zero;
> Number of exponent bits for larger types;

I think I've seen variations in all of those.

>For example, one could have used Twos Complement allowing a signed
>integer compare to be used unmodified for floating-point compare.

Yup, the PDP-6/10 did that.

>For many purposes, subnormal numbers do not matter.

I believe that the argument is that if you don't know a lot about
numerical analysis, your intuitions about when they don't matter
are likely to be wrong. Denormals make it more likely that
naive code will get an accurate answer.

>The relative value of giving special semantics (in hardware) to the
>Inf/NaN cases could be debated. Though, arguably, they do have meaning
>as "something has gone wrong in the math" signals, which would not
>necessarily have been preserved with clamping.

Same point, they tell the naive programmer that the code didn't
work. As some wag pointed out a long time ago, if you don't care
if the results are right, I can make the program as fast as you want.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Re: How convergent was the general use of binary floating point?

<tmissm$at0$1@gal.iecc.com>


https://www.novabbs.com/devel/article-flat.php?id=29289&group=comp.arch#29289

 by: John Levine - Sun, 4 Dec 2022 19:36 UTC

According to Stephen Fuld <sfuld@alumni.cmu.edu.invalid>:
>> A few decimal machines in the 1950s had decimal FP, and the IBM 360
>> had famously botched hex FP, but other than that, it was all binary
>> all the way.
>
>A minor quibble, which you actually address below. As you said there,
>contrary to what the OP said, accountants didn't want DFP. They wanted
>scaled decimal fixed point with the compiler taking care of scaling
>automatically.

Sorry I wasn't clear. I meant that floating point was all binary. The
IBM 1620 was an odd little decimal scientific computer from the late
1950s much beloved by its users. It had a floating point option which
was decimal because the whole machine was. Someone else mentioned a
Unisys legacy architecture with decimal FP that survived into the
2000s, I expect emulated on mass market chips. But they were dead
ends. IBM's replacement for the 1620 was the 16 bit binary 1130
with FP in software.

It's quite striking how similar IEEE floating point is to the 704's FP
from the 1950s. Once you have the idea, you converge pretty fast.

By the way, when I said it took two decades to invent the hidden bit,
I was wrong. Wikipedia says that the electromechanical Zuse Z1 in 1938
had floating point with a hidden bit. IBM licensed Zuse's patents
after the war so they were aware of his work.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Re: How convergent was the general use of binary floating point?

<1f875cad-52b2-499f-97f7-23dd85d20d36n@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29291&group=comp.arch#29291

 by: Russell Wallace - Sun, 4 Dec 2022 19:44 UTC

On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
> If these are represented as the binary integers 123 (7 bits) and 456
> (9 bits), it seems to me that the multiplier can early-out at least as
> early as if they are represented as BCD numbers (both 12 bits).

Right, scaled integers are the other good way to represent decimal numbers. (But this is a very different solution from binary floating point.)

> Display update is so slow that the conversion cost is minor change.

You're probably assuming a bitmap display with overlapping windows and scalable fonts? Once you're at that tech level, the arithmetic cost (in whichever format) is also minor change. But Lotus 1-2-3 ran on a character cell display, where updating each character was just a store to a single memory location.

> I found it hard to specify the output format, but then I rarely use
> spreadsheets. Have you seen a spreadsheet that lets you specify
> rounding rules? We have enough CPU power, we have decimal FP
> libraries, but does Excel or LibreOffice Calc etc. make any use of
> them? Not to my knowledge.

Right. I conjecture that's because when Lotus chose binary floating point, it diverted spreadsheets down a path away from where you could usefully specify rounding rules. And that in an alternate timeline where spreadsheets still used decimal (whether with BCD or scaled integers), letting you specify rounding rules would be a natural thing to do.

Re: How convergent was the general use of binary floating point?

<4fa3167c-72c3-4bc1-9a4e-1efb47e11582n@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29293&group=comp.arch#29293

 by: MitchAlsup - Sun, 4 Dec 2022 20:00 UTC

On Sunday, December 4, 2022 at 1:11:58 PM UTC-6, John Levine wrote:
> According to BGB <cr8...@gmail.com>:
> >In a general sense, I suspect floating-point was likely inevitable.
> >
> >As for what parts could have varied:
> > Location of the sign and/or exponent;
> > Sign/Magnitude and/or Twos Complement;
> > Handling of values near zero;
> > Number of exponent bits for larger types;
> I think I've seen variations in all of those.
> >For example, one could have used Twos Complement allowing a signed
> >integer compare to be used unmodified for floating-point compare.
> Yup, the PDP-6/10 did that.
> >For many purposes, subnormal numbers do not matter.
> I believe that the argument is that if you don't know a lot about
> numerical analysis, your intuitions about when they don't matter
> are likely to be wrong. Denormals make it more likely that
> naive code will get an accurate answer.
<
Can we restate that to: ...naïve code will get less inaccurate answers.
{denormals have already lost precision, just not a whole fraction's worth.}
<
> >The relative value of giving special semantics (in hardware) to the
> >Inf/NaN cases could be debated. Though, arguably, they do have meaning
> >as "something has gone wrong in the math" signals, which would not
> >necessarily have been preserved with clamping.
> Same point, they tell the naive programmer that the code didn't
> work. As some wag pointed out a long time ago, if you don't care
> if the results are right, I can make the program as fast as you want.
> --
> Regards,
> John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
> Please consider the environment before reading this e-mail. https://jl.ly

Re: How convergent was the general use of binary floating point?

<DC7jL.99066$Use.25242@fx15.iad>


https://www.novabbs.com/devel/article-flat.php?id=29295&group=comp.arch#29295

 by: Scott Lurndal - Sun, 4 Dec 2022 20:42 UTC

John Levine <johnl@taugh.com> writes:
>According to Stephen Fuld <sfuld@alumni.cmu.edu.invalid>:
>>> A few decimal machines in the 1950s had decimal FP, and the IBM 360
>>> had famously botched hex FP, but other than that, it was all binary
>>> all the way.
>>
>>A minor quibble, which you actually address below. As you said there,
>>contrary to what the OP said, accountants didn't want DFP. They wanted
>>scaled decimal fixed point with the compiler taking care of scaling
>>automatically.
>
>Sorry I wasn't clear. I meant that floating point was all binary. The
>IBM 1620 was an odd little decimal scientific computer from the late
>1950s much beloved by its users. It had a floating point option which
>was decimal because the whole machine was. Someone else mentioned a
>Unisys legacy architecture with decimal FP that survived into the
>2000s, I expect emulated on mass market chips.

No, the system in question was twenty-five years old at the
time of replacement. They (a city in southern California)
replaced it with 26 Windows servers.
The other legacy stack-architecture Burroughs mainframe line
is still sold, as you note, in emulation on 64-bit Intel/AMD processors.

Re: How convergent was the general use of binary floating point?

<509856dc-e7be-40f4-9608-f65b696e5aedn@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29296&group=comp.arch#29296

 by: Peter Lund - Sun, 4 Dec 2022 21:38 UTC

On Sunday, December 4, 2022 at 11:20:58 AM UTC+1, Anton Ertl wrote:

> No way. Fixed point lost to floating point everywhere where the CPU
> was big enough to support floating-point hardware; this started in the
> 1950s with the IBM 704, repeated itself on the minis, and repeated
> itself again on the micros. The programming ease of floating-point
> won out over fixed point every time. Even without hardware
> floating-point was very popular (e.g., in MS Basic on all the micros).

The Z3 had binary floating point (22-bit add/sub/mul/div/sqrt). It also had +/- infinity and an undefined value.

1941!

-Peter

Re: How convergent was the general use of binary floating point?

<221030dd-505c-4d56-b2c2-9dfaff340152n@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29297&group=comp.arch#29297

 by: MitchAlsup - Sun, 4 Dec 2022 22:09 UTC

On Sunday, December 4, 2022 at 4:20:58 AM UTC-6, Anton Ertl wrote:
> Russell Wallace <russell...@gmail.com> writes:
>
> No way. Fixed point lost to floating point everywhere where the CPU
> was big enough to support floating-point hardware; this started in the
> 1950s with the IBM 704, repeated itself on the minis, and repeated
> itself again on the micros. The programming ease of floating-point
> won out over fixed point every time. Even without hardware
> floating-point was very popular (e.g., in MS Basic on all the micros).
<
Accurate but overstated.
<
Floating point is an ideal representation for calculations where the exponent
varies at least as much as the fraction--that is, scientific and numerical codes.
Almost nothing here is known to more than 10 digits of precision anyway, and
there is no closed-form solution for many of the codes, either.
<
Floating point was never in any real competition with fixed point--which is the
ideal representation for money--and for calculations where one must not lose
precision (due to rounding).

Re: How convergent was the general use of binary floating point?

<tmj5td$3or3q$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=29298&group=comp.arch#29298

 by: BGB - Sun, 4 Dec 2022 22:10 UTC

On 12/4/2022 1:11 PM, John Levine wrote:
> According to BGB <cr88192@gmail.com>:
>> In a general sense, I suspect floating-point was likely inevitable.
>>
>> As for what parts could have varied:
>> Location of the sign and/or exponent;
>> Sign/Magnitude and/or Twos Complement;
>> Handling of values near zero;
>> Number of exponent bits for larger types;
>
> I think I've seen variations in all of those.
>
>> For example, one could have used Twos Complement allowing a signed
>> integer compare to be used unmodified for floating-point compare.
>
> Yup, the PDP-6/10 did that.
>

Yep.

Have noted that things can get "fun" for microfloat formats, since (for
8-bit formats) one is operating on the bare edge of what is "sufficient".

So things like whether or not there is a sign bit, the exact sizes of the
exponent and mantissa, bias values, etc., will tend to vary by a
significant amount (even within the same domain; note, for example,
that both mu-law and A-law exist).

>> For many purposes, subnormal numbers do not matter.
>
> I believe that the argument is that if you don't know a lot about
> numerical analysis, your intuitions about when they don't matter
> are likely to be wrong. Denormals make it more likely that
> naive code will get an accurate answer.
>

Possible, though apart from edge cases (such as dividing by very small
numbers, or multiplying by very large numbers), any effect denormals
would have had on the result is likely insignificant.

In the case where one divides by a "would have been" denormal, turning
the result into NaN or Inf (as if it were 0) almost makes more sense.

For Binary32 or Binary64, the dynamic range is large enough that they
are unlikely to matter.

For Binary16, it does mean that the smallest possible normal-range value
is 0.00006103515625, which "could matter", but people presumably aren't
going to be using this for things where "precision actually matters".

For some of my neural-net training experiments, things did tend to start
running into the dynamic-range limits of Binary16, which I ended up
hacking around.

Ironically, adding a case to detect intermediate values going outside of
a reasonable range and then decaying any weights which fed into this
value, seemed to cause it to converge a little more quickly.

>> The relative value of giving special semantics (in hardware) to the
>> Inf/NaN cases could be debated. Though, arguably, they do have meaning
>> as "something has gone wrong in the math" signals, which would not
>> necessarily have been preserved with clamping.
>
> Same point, they tell the naive programmer that the code didn't
> work. As some wag pointed out a long time ago, if you don't care
> if the results are right, I can make the program as fast as you want.
>

Granted.

Keeping Inf and NaN semantics as diagnostic signals does at least make
sense. In my case, have mostly kept Inf and NaN as they seem able to
justify their existence.

There are also some limits though to how much can be gained by cheap
approximations.

For example, one could try to define a Binary32 reciprocal as:
recip=0x7F000000-divisor;
Or, FDIV as:
quot=0x3F800000+(dividend-divisor);

Does not take long to realize that, for most uses, this is not sufficient.

For things like pixel color calculations, this sorta thing may be
"mostly passable" though (or can be "made sufficient" with 1 to 3
Newton-Raphson stages).

For things like approximate bounding-sphere checks, it may be sufficient
to define square root as:
sqrta=0x1FC00000+(val>>1);

As can be noted, some of my neural-net stuff was also essentially using
these sort of definitions for the operators.

For an integer equivalent, had also noted that there were ways to
approximate distances, say:
dx=ax-bx; dy=ay-by;
dist=sqrt(dx*dx+dy*dy);
Being approximated as, say:
dx=abs(ax-bx); dy=abs(ay-by);
if(dx>=dy)
    dist=dx+(dy>>1);
else
    dist=dy+(dx>>1);

Had encountered this nifty trick in the ROTT engine, but had ended up
borrowing it for my BGBTech3 engine experiment.

Another recent experiment was getting the low-precision FPU in my case
up to (roughly) full Binary32 precision, however it still needs a little
more testing.

Lots of corner cutting in this case, as the low-precision FPU was built
more to be "cheap" than good (even vs the main FPU, which was
already DAZ+FTZ).

Say, main FPU:
FADD: 64-bit mantissa, 12-bit exponent, ~ 9 sub-ULP bits.
FMUL: 54-bit mantissa, 12-bit exponent;
6x DSP48 "triangle" multiplier,
Plus a few LUT-based mini-multipliers for the low-order bits.
Full precision would need 9 DSP48s (with some mini-multipliers),
or 16 DSP48s (no mini multipliers).
Both operators have a 6-cycle latency;
Main FPU supports traditional rounding modes.

Low Precision FPU (original):
FADD: 16-bit mantissa, 9-bit exponent (truncated by 7 bits).
FMUL: 16-bit mantissa, 9-bit exponent (truncated by 7 bits);
1x DSP48 multiplier.
These operators had a 3-cycle latency.

The 16 bit mantissa was used as it maps exactly to the DSP48 in this
case. But, for Binary32, would truncate the mantissa.

In this version, FADD/FSUB was using One's Complement math for the
mantissa, as for truncated Binary32 this was tending to be "closer to
correct" than the result of using Two's Complement.

Low Precision FPU (with FP32 extension):
FADD: 28-bit mantissa, 9-bit exponent, ~ 2 sub-ULP bits (1).
FMUL: 26-bit mantissa, 9-bit exponent;
1x DSP48 multiplier, plus two 9-bit LUT based multipliers (2).
These operators still have a 3-cycle latency;
Effectively hard-wired as Truncate rounding.

The added precision makes it sufficient to use in a few cases where the
original low-precision FPU was not sufficient. It can now entirely take
over for Binary32 SIMD ops.

It is also possible to route some Binary64 ops through it as well,
albeit with the same effective dynamic range and precision as Binary32
(but, 3 cycle rather than 6 cycle).

1: s.eeeeeeee.fffffffffffffffffffffff,
Maps mantissa as:
001fffffffffffffffffffffff00
With the high-bits allowing for both sign and carry to a larger
exponent. Sub-ULP bits allowing for carry into the ULP.

In this version, FADD/FSUB uses Two's Complement for the mantissa.
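As a sketch of the mapping in note (1) (Python, illustrative only; the
field widths are as read from the bit diagram above, and only normal
numbers are handled given the DAZ behavior):

```python
# Map a Binary32 fraction into the 28-bit FADD mantissa field
# 001fffffffffffffffffffffff00: two high bits of headroom for
# sign/carry-out, the hidden bit, 23 fraction bits, 2 sub-ULP bits.
# (Normal numbers only; denormals are flushed to zero per DAZ.)
def map_mantissa(bits32):
    frac = bits32 & 0x7FFFFF            # 23-bit stored fraction
    return (1 << 25) | (frac << 2)      # hidden bit at position 25

m = map_mantissa(0x3F800000)            # 1.0f: fraction is zero
assert m == 1 << 25                     # just the hidden bit
assert m < 1 << 26                      # headroom bits still clear
```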

2: DSP48 does high order bits, with the two 9 bit multipliers producing
the rest. The results from the two 9-bit multipliers can be added to the
low-order bits from the DSP48.

Say, mantissa maps as:
01fffffffffffffffffffffff0
With the top 18 bits each fed into the DSP48, producing a 36 bit result.

The 9 bit multipliers deal with multiplying the high bits from one input
against the low bits from the other (the low-bits that were out of reach
of the DSP48), with these results being added together and then added to
the appropriate place within the 36-bit initial result.
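As a rough Python model of the above (not the actual Verilog; the 18/8
split point is an assumption read from the description, and the cross
products are idealized as full width here, where the hardware uses
narrower 9-bit multipliers and so drops a few more low-order bits):

```python
# Truncated mantissa multiply: one 18x18 "DSP48" product on the
# high parts, two cross products, and the low*low term dropped.
def trunc_mul26(a, b):
    ah, al = a >> 8, a & 0xFF       # 18-bit high / 8-bit low split
    bh, bl = b >> 8, b & 0xFF
    acc = (ah * bh) << 16           # DSP48: 18x18 -> 36-bit result
    acc += (ah * bl) << 8           # cross products (the LUT-based
    acc += (al * bh) << 8           #   multipliers, idealized here)
    return acc                      # al*bl is never computed

# The shortfall vs the exact product is exactly al*bl < 2^16,
# far below one ULP of the normalized 26-bit result.
a, b = (1 << 24) | 0x00ABCD, (1 << 24) | 0x012345
assert a * b - trunc_mul26(a, b) == (a & 0xFF) * (b & 0xFF)
```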

In this case, the 9b multipliers were built from 3x3->6 multipliers, eg:
0 1 2 3 4 5 6 7
0 00 00 00 00 00 00 00 00
1 00 01 02 03 04 05 06 07
2 00 02 04 06 10 12 14 16
3 00 03 06 11 14 17 22 25
4 00 04 10 14 20 24 30 34
5 00 05 12 17 24 31 36 43
6 00 06 14 22 30 36 44 52
7 00 07 16 25 34 43 52 61
Where values here are in Base-8.

In this case, 3x3 multipliers were used because these fit more easily
into a LUT6 (so likely more efficient than working on either 2- or 4-bit
values in this case).
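The composition of a 9x9 multiplier from these 3x3->6 pieces can be
sketched and checked exhaustively (a Python model, not the actual
Verilog):

```python
# Compose a 9x9 multiplier from 3x3->6-bit partial products,
# mirroring the LUT6-based construction above.
def mul3x3(a, b):
    # Stand-in for one LUT-based 3x3->6 multiplier (the table
    # above, with entries shown there in base 8).
    return (a & 7) * (b & 7)

def mul9x9(a, b):
    acc = 0
    for i in range(3):              # 3-bit digit index of a
        for j in range(3):          # 3-bit digit index of b
            pa = (a >> (3 * i)) & 7
            pb = (b >> (3 * j)) & 7
            acc += mul3x3(pa, pb) << (3 * (i + j))
    return acc

# Exhaustive check over all 9-bit inputs.
assert all(mul9x9(a, b) == a * b for a in range(512) for b in range(512))
```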

The multipliers needed to be built manually, as otherwise Vivado saw the
9-bit multiplies and tried to use DSP48s for these as well; in this case
I didn't want to burn 8 additional DSPs on the SIMD unit.

However, as noted, since the "low*middle" and "low*low" results are
effectively not calculated, whatever they might have contributed to the
probability of a "correctly rounded ULP" is effectively lost.

But this becomes less a question of numerical precision per se, and more
one of the statistical probability of the results being different, or of
this probability having a visible or meaningful effect on the result.

Apart from a few edge cases, the potential contribution of these
low-order results to the final result effectively approaches zero.

At least, in the absence of hardware support for a Single-Rounded FMAC
operator.

....

Re: How convergent was the general use of binary floating point?

<b767272f-e607-4a40-b757-60fa8d0a9ffbn@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=29300&group=comp.arch#29300

Newsgroups: comp.arch
Date: Sun, 4 Dec 2022 15:45:16 -0800 (PST)
In-Reply-To: <2022Dec4.193333@mips.complang.tuwien.ac.at>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
<2022Dec4.101848@mips.complang.tuwien.ac.at> <72d6ecc1-8b1a-4b21-9cb7-61ad118a42dan@googlegroups.com>
<2022Dec4.193333@mips.complang.tuwien.ac.at>
Message-ID: <b767272f-e607-4a40-b757-60fa8d0a9ffbn@googlegroups.com>
Subject: Re: How convergent was the general use of binary floating point?
From: russell....@gmail.com (Russell Wallace)
 by: Russell Wallace - Sun, 4 Dec 2022 23:45 UTC

On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
> Have you seen a spreadsheet that lets you specify
> rounding rules? We have enough CPU power, we have decimal FP
> libraries, but does Excel or LibreOffice Calc etc. make any use of
> them? Not to my knowledge.

To expand a little on my last reply,

This change is definitely not going to happen today. Recently, scientists had to rename genes because Microsoft wouldn't fix the frigging CSV import in Excel. It's clearly a long way past the point where significantly changing the arithmetic model would have been on the cards.

That's one reason I'm casting this as alternate history: to understand the underlying dynamics, separate from the question of what is still open to change in the organizational politics of 2022.

Re: How convergent was the general use of binary floating point?

<2022Dec5.093902@mips.complang.tuwien.ac.at>


https://www.novabbs.com/devel/article-flat.php?id=29302&group=comp.arch#29302

From: ant...@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Subject: Re: How convergent was the general use of binary floating point?
Date: Mon, 05 Dec 2022 08:39:02 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
Lines: 52
Message-ID: <2022Dec5.093902@mips.complang.tuwien.ac.at>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com> <2022Dec4.101848@mips.complang.tuwien.ac.at> <221030dd-505c-4d56-b2c2-9dfaff340152n@googlegroups.com>
 by: Anton Ertl - Mon, 5 Dec 2022 08:39 UTC

MitchAlsup <MitchAlsup@aol.com> writes:
>Floating point is an ideal representation for calculations where the exponent
>varies at least as much as the fraction--that is scientific and numerical codes.
>Almost nothing here is known to more than 10 digits of precision anyway, and
>there is no closed form solution to many of the codes, either.
><
>Floating point was never in any real competition with fixed point--which is the
>ideal representation for money--and calculations where one must not lose
>precision (due to rounding).

Reality check:

1) Spreadsheets use floating point, and people use spreadsheets for
computing stuff about money. I expect that spreadsheet companies
considered adding fixed point, but their market research told them
that they would not gain a competitive advantage.

2) Likewise, from what I hear about a big Forth application that also
deals with money, they use floating point for dealing with money.
That's despite fixed-point ideology and, to a certain degree, support
being strong in Forth. The FP representation used was the 80-bit 8087
format; when the Forth system they used was upgraded to AMD64, it
switched to 64-bit SSE2 FP numbers, but the developers of that
application complained because they wanted the 80-bit numbers.

3) Every month I get an invoice about EUR 38.24 that is made up of the
sum of EUR 37 and EUR 1.25. I expect that the company that sends out
the invoice sends out thousands of invoices with that error every
month, so they lose tens of euros every month. It's apparently not
important enough to invest money for fixing this. And I can
understand this. A proper fix would require switching to fixed point,
which would cost maybe EUR 100000 of development time. And some small
correction of the existing FP computation might be cheaper in
development, but might lead to thousands of justified complaints about
erring in the other direction. Given that 1.25 and 38.25 are
representable in floating point without rounding error, it may be that
they just subtract EUR 0.01 from every invoice to get rid of rounding
errors in the other direction, and reduce the number of complaints.
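The kind of one-cent discrepancy described above is easy to reproduce; a
minimal Python illustration (obviously not the actual invoicing code):

```python
# Binary FP cannot represent most decimal fractions exactly, so
# money arithmetic drifts; integer cents (fixed point) does not.
total = 0.10 + 0.10 + 0.10
assert total != 0.30                # actually 0.30000000000000004

total_cents = 10 + 10 + 10          # fixed point: exact
assert total_cents == 30

# 1.25 and 38.25 themselves are exact in binary FP (both are
# multiples of 1/4), so 37 + 1.25 is exact; the invoice error
# must creep in elsewhere in the computation.
assert 37.0 + 1.25 == 38.25
```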

For you and me it seems obvious that one should use fixed point for
money, just like it seemed obvious to von Neumann that one should use
fixed point for physics computation. But actual programmers and users
dealing with money work with floating-point, just as actual programmers
doing physics computations work with floating point.

So, in reality, floating point used to be competition for fixed point,
and floating point has won.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: How convergent was the general use of binary floating point?

<2022Dec5.101511@mips.complang.tuwien.ac.at>


https://www.novabbs.com/devel/article-flat.php?id=29303&group=comp.arch#29303

From: ant...@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Subject: Re: How convergent was the general use of binary floating point?
Date: Mon, 05 Dec 2022 09:15:11 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
Lines: 41
Message-ID: <2022Dec5.101511@mips.complang.tuwien.ac.at>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com> <2022Dec4.101848@mips.complang.tuwien.ac.at> <72d6ecc1-8b1a-4b21-9cb7-61ad118a42dan@googlegroups.com> <2022Dec4.193333@mips.complang.tuwien.ac.at> <1f875cad-52b2-499f-97f7-23dd85d20d36n@googlegroups.com>
 by: Anton Ertl - Mon, 5 Dec 2022 09:15 UTC

Russell Wallace <russell.wallace@gmail.com> writes:
>On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>> If these are represented as the binary integers 123 (7 bits) and 456
>> (9 bits), it seems to me that the multiplier can early-out at least as
>> early as if they are represented as BCD numbers (both 12 bits).
>
>Right, scaled integers are the other good way to represent decimal
>numbers. (But this is a very different solution from binary floating
>point.)
>
>> Display update is so slow that the conversion cost is minor change.
>
>You're probably assuming a bitmap display with overlapping windows and
>scalable fonts? Once you're at that tech level, the arithmetic cost (in
>whichever format) is also minor change. But Lotus 1-2-3 ran on a
>character cell display, where updating each character was just a store
>to a single memory location.

It was a "graphics" card memory location, and CPU access there tends
to be slower than main memory. Probably more importantly in the early
generations, there's an overhead in determining the memory location
that you have to write to, and if you have to write at all (i.e., if
the cell is on-screen or off-screen).

>> We have enough CPU power, we have decimal FP
>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>> them? Not to my knowledge.
>
>Right. I conjecture that's because when Lotus chose binary floating
>point, it diverted spreadsheets down a path away from where you could
>usefully specify rounding rules.

If financial rounding rules were a real requirement for spreadsheet
users, one of the spreadsheet programs would have added fixed point or
decimal floating point by now, and a rounding rule feature like you
have in mind. The fact that this has not happened indicates that
binary floating-point is good enough.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: How convergent was the general use of binary floating point?

<2022Dec5.103553@mips.complang.tuwien.ac.at>


https://www.novabbs.com/devel/article-flat.php?id=29305&group=comp.arch#29305

From: ant...@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Subject: Re: How convergent was the general use of binary floating point?
Date: Mon, 05 Dec 2022 09:35:53 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
Lines: 31
Message-ID: <2022Dec5.103553@mips.complang.tuwien.ac.at>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com> <2022Dec4.101848@mips.complang.tuwien.ac.at> <72d6ecc1-8b1a-4b21-9cb7-61ad118a42dan@googlegroups.com> <2022Dec4.193333@mips.complang.tuwien.ac.at> <b767272f-e607-4a40-b757-60fa8d0a9ffbn@googlegroups.com>
 by: Anton Ertl - Mon, 5 Dec 2022 09:35 UTC

Russell Wallace <russell.wallace@gmail.com> writes:
>On Sunday, December 4, 2022 at 6:46:30 PM UTC, Anton Ertl wrote:
>> Have you seen a spreadsheet that lets you specify
>> rounding rules? We have enough CPU power, we have decimal FP
>> libraries, but does Excel or LibreOffice Calc etc. make any use of
>> them? Not to my knowledge.
>
>To expand a little on my last reply,
>
>This change is definitely not going to happen today. Recently,
>scientists had to rename genes because Microsoft wouldn't fix the
>frigging CSV import in Excel.

If the users fail to switch to better software, the fix is obviously
not important enough for them, so why should it be important for
Microsoft?

>It's clearly a long way past the point where significantly changing
>the arithmetic model would have been on the cards.

Same thing here: If it was really important to users, a competitor
could make inroads by offering fixed point and a rounding-rule
specification feature, and eventually Microsoft would have to add it,
too. But apparently
all spreadsheet makers think that this feature is not important for
the users. And they are probably right.

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: How convergent was the general use of binary floating point?

<tmkeoq$3ubem$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=29306&group=comp.arch#29306

From: sfu...@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: How convergent was the general use of binary floating point?
Date: Mon, 5 Dec 2022 01:48:10 -0800
Organization: A noiseless patient Spider
Lines: 40
Message-ID: <tmkeoq$3ubem$1@dont-email.me>
References: <e3e16390-04af-4c46-8352-d68963f2774en@googlegroups.com>
<2022Dec4.101848@mips.complang.tuwien.ac.at>
<221030dd-505c-4d56-b2c2-9dfaff340152n@googlegroups.com>
<2022Dec5.093902@mips.complang.tuwien.ac.at>
In-Reply-To: <2022Dec5.093902@mips.complang.tuwien.ac.at>
 by: Stephen Fuld - Mon, 5 Dec 2022 09:48 UTC

On 12/5/2022 12:39 AM, Anton Ertl wrote:
> MitchAlsup <MitchAlsup@aol.com> writes:
>> Floating point is an ideal representation for calculations where the exponent
>> varies at least as much as the fraction--that is scientific and numerical codes.
>> Almost nothing here is known to more than 10 digits of precision anyway, and
>> there is no closed form solution to many of the codes, either.
>> <
>> Floating point was never in any real competition with fixed point--which is the
>> ideal representation for money--and calculations where one must not lose
>> precision (due to rounding).
>
> Reality check:
>
> 1) Spreadsheets use floating point, and people use spreadsheets for
> computing stuff about money. I expect that spreadsheet companies
> considered adding fixed point, but their market research told them
> that they would not gain a competitive advantage.
>
> 2) Likewise, from what I hear about a big Forth application that also
> deals with money, they use floating point for dealing with money.

Question for you: Does Forth's support for fixed point automatically
handle scaling, printing, etc. of non-integer fixed-point numbers? When
people moved away from COBOL (which does all of that) to things like
C, which doesn't, they were confronted with a choice. Either code all
that stuff on their own or use binary floating point, which does handle
that stuff, but has rounding issues, etc. I expect many customers just
didn't want the mess of handling all of that themselves and were willing
to accept the binary floating point issues. If the more popular
languages that replaced COBOL had good support for fixed decimal, I
suspect that might have won out over binary floating point (especially
if there was some hardware support). By the time decimal floating point
came out, it was too late, and most people just didn't want to spend the
money to convert their existing programs to it.
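For comparison, the languages that displaced COBOL did eventually gain
library-level decimal support; e.g. Python's decimal module handles the
scaling and rounding described above (illustrative sketch only):

```python
from decimal import Decimal, ROUND_HALF_UP

# COBOL-style fixed decimal (think PIC 9(5)V99): the library does
# the scaling and rounding instead of the application.
price = Decimal("37.00") + Decimal("1.25")
assert price == Decimal("38.25")        # exact decimal arithmetic

# Explicit financial rounding to cents:
vat = (price * Decimal("0.20")).quantize(Decimal("0.01"),
                                         rounding=ROUND_HALF_UP)
assert vat == Decimal("7.65")
```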

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
