

Subject / Author
* (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing... -- BGB
`* Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing... -- robf...@gmail.com
 `- Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing... -- BGB

(Partial OT): Mutant ISA idea from a Sci-Fi story I was writing...

<u34pd9$2o4k0$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=32012&group=comp.arch#32012

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88...@gmail.com (BGB)
Newsgroups: comp.arch
Subject: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing...
Date: Sat, 6 May 2023 00:41:52 -0500
Organization: A noiseless patient Spider
Lines: 272
Message-ID: <u34pd9$2o4k0$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 6 May 2023 05:43:06 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="5b7d9f0dae0446b795549649af75c8c4";
logging-data="2888320"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+tRepvC88vVQucR3Af/beL"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.10.0
Cancel-Lock: sha1:i/t1bmU9/uqOguxkQRDjb87GosU=
Content-Language: en-US
 by: BGB - Sat, 6 May 2023 05:41 UTC

Was working on a sci-fi story, and I initially started trying to work
BJX2 into the story, but then the idea ended up mutating a bit.

Effectively, going from what BJX2 is now, to essentially being 64-bit
RISC-V but with much of the BJX2 feature-set glued on as an alt-mode
(or, sort of the reverse of what BJX2 is currently).

In the story, RISC-V is referred to simply as 'RV' or 'RV64', as this is
obvious enough and (should) hopefully sidestep the trademark issues.

In the story idea, this ISA (called BETA-V3) ends up having the same
basic C ABI between its native and RV64 modes, just with the register mapping:
* R0..R31 <-> X0..X31
* R32..R63 <-> F0..F31

And, in BETA-V3 mode, it relaxes the distinction between GPRs and FPRs
(except for where it matters for ABI compatibility), effectively
treating both as a shared pool of 64 registers (also being used in pairs
for 128-bit SIMD, basically the same as it works in BJX2).
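Purely as an illustration of that mapping (the helper name and the idea of
printing ABI names are made up for the sketch, not part of the BETA-V3
description), a register index below 32 is viewed as the corresponding X
register and one at 32 or above as the corresponding F register:

#include <stdio.h>

/* Illustrative only: map a BETA-V3 register index (unified R0..R63 pool)
 * to the RV64 ABI register it aliases: R0..R31 <-> X0..X31,
 * R32..R63 <-> F0..F31. */
static void beta_v3_to_rv64(int r, char *buf, size_t n)
{
    if (r < 32)
        snprintf(buf, n, "X%d", r);       /* GPR half of the pool */
    else
        snprintf(buf, n, "F%d", r - 32);  /* FPR half of the pool */
}

int main(void)
{
    char name[8];
    int samples[] = { 0, 10, 31, 32, 47, 63 };
    for (int i = 0; i < 6; i++) {
        beta_v3_to_rv64(samples[i], name, sizeof(name));
        printf("R%-2d <-> %s\n", samples[i], name);
    }
    return 0;
}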

From the story idea:
* zzzz-tttt ttss-ssss dddd-ddzz zzzy-00pw //3R (OP Rd, Rs, Rt)
* iiii-iiii iiss-ssss dddd-ddzz zzzy-01pw //3RI (OP Rd, Rs, Imm10)
* iiii-iiii iiss-ssss dddd-ddzz zzzy-10pw //3RD (MEM Rd, Rs, Disp10)
* iiii-iiii iiii-iiii dddd-ddzz zz0y-11pw //2RI (OP Rd, Imm16)
* iiii-iiii iiii-iiii iiii-iiii z11y-11pw //IMM (OP Imm24/Disp24)
Where:
* pw: Predicate Mode
** 00: Unconditional Scalar
** 01: Unconditional Bundle
** 10: Predicate Scalar
** 11: Predicate Bundle
* y: Predicate Direction (p==1), Opcode (p==0)
* zzzz: Opcode Bits
* dddd: Dest Register
* ssss: Source Register
* tttt: Source Register (3R)
* iiii: Immediate / Displacement
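Reading the layouts above literally, and assuming each pattern is written
MSB-first (bit 31 at the far left, bit 0 at the far right; the post does not
actually pin down the bit order), field extraction would look roughly like
the following C sketch. The struct and function names are made up for the
sketch:

#include <stdint.h>
#include <stdio.h>

/* Rough field-extraction sketch for the 32-bit BETA-V3 layouts above.
 * Assumption (not stated in the post): layouts read MSB-first, so the
 * "zzzy-00pw" group at the right end occupies bits 7..0 of the word. */
typedef struct {
    unsigned pw;     /* bits  1..0 : predicate mode (00/01/10/11)        */
    unsigned fmt;    /* bits  3..2 : 00=3R, 01=3RI, 10=3RD, 11=2RI/IMM   */
    unsigned y;      /* bit   4    : predicate direction / opcode bit    */
    unsigned rd;     /* bits 15..10: Rd, destination register (0..63)    */
    unsigned rs;     /* bits 21..16: Rs, first source register           */
    unsigned rt;     /* bits 27..22: Rt, second source (3R form only)    */
    unsigned imm10;  /* bits 31..22: Imm10/Disp10 (3RI/3RD forms)        */
    unsigned imm16;  /* bits 31..16: Imm16 (2RI form)                    */
    unsigned imm24;  /* bits 31..8 : Imm24/Disp24 (IMM form)             */
} BetaV3Fields;

static BetaV3Fields beta_v3_fields(uint32_t insn)
{
    BetaV3Fields f;
    f.pw    = (insn >>  0) & 0x3u;
    f.fmt   = (insn >>  2) & 0x3u;
    f.y     = (insn >>  4) & 0x1u;
    f.rd    = (insn >> 10) & 0x3Fu;
    f.rs    = (insn >> 16) & 0x3Fu;
    f.rt    = (insn >> 22) & 0x3Fu;
    f.imm10 = (insn >> 22) & 0x3FFu;
    f.imm16 = (insn >> 16) & 0xFFFFu;
    f.imm24 = (insn >>  8) & 0xFFFFFFu;
    return f;
}

/* Per the 2RI vs IMM layouts, both have fmt==3; the IMM form is marked by
 * bits 6..5 == 11, while the 2RI form has bit 5 == 0. */
static int beta_v3_is_imm24(uint32_t insn)
{
    return ((insn >> 2) & 0x3u) == 0x3u && ((insn >> 5) & 0x3u) == 0x3u;
}

int main(void)
{
    uint32_t w = 0x12345678u;               /* arbitrary example word */
    BetaV3Fields f = beta_v3_fields(w);
    printf("fmt=%u pw=%u rd=%u rs=%u rt=%u imm24=%d\n",
           f.fmt, f.pw, f.rd, f.rs, f.rt, beta_v3_is_imm24(w));
    return 0;
}

This is just one plausible reading; if the layouts are meant LSB-first
instead, the shift amounts flip accordingly.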

The IMM24 ops were special:
* If pw==00, these were:
** 0: BR (Branch)
*** Roughly: JAL X0, Label
** 1: BL (Branch with Link)
*** Roughly: JAL X1, Label
* If pw==10, these were:
** 0: BT/BF (Branch if True/False)
** 1: Reserved
* If pw==z1, these were Jumbo Prefixes
** 0: Jumbo Immed (Bigger Immediate)
** 1: Jumbo OpExt (Bigger Instruction)

Mem Ops:
* iiii-iiii iiss-ssss dddd-ddmm-m00y-10pw //STx Rd, Rs, Disp10u
* iiii-iiii iiss-ssss dddd-ddmm-m01y-10pw //LDx Rd, Rs, Disp10u
** mmm: SB/SW/SL/Q/UB/UW/UL/X
** Where Q=64-bit, X=128-bit (Register Pair)
* iiii-iiii iiss-ssss dddd-ddmm-m10y-10pw -
* iiii-iiii iiss-ssss dddd-ddmm-m11y-10pw //Bcc Rd, Rs, Disp10s
** mmm=EQ/UGT/GT/LE/NE/ULE/LE/GT
** These having a range of +/- 512 instruction words.
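The +/-512-word figure for the Disp10s branches falls straight out of
sign-extending a 10-bit field (values -512..+511). A small sketch, assuming
the displacement is counted in 32-bit instruction words (the exact base the
displacement is added to isn't stated, so that part is a guess):

#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 10-bit displacement field: -512..+511 instruction words,
 * i.e. roughly +/-2 KB of reach with 4-byte instructions. */
static int32_t sext10(uint32_t disp10)
{
    return (int32_t)(disp10 ^ 0x200u) - 0x200;  /* flip/shift the sign bit */
}

int main(void)
{
    printf("most negative: %d words\n", (int)sext10(0x200)); /* -512 */
    printf("most positive: %d words\n", (int)sext10(0x1FF)); /* +511 */
    return 0;
}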

The story doesn't describe a full ISA spec or instruction listing, as
this is unlikely to be terribly interesting "for normal readers".

But, it seems like this idea "might not be entirely horrible".
And offers a "Make RISC-V less weak" option while still allowing for the
use of RISC-V and potential compatibility with the existing software
ecosystem (which is currently a likely roadblock for BJX2).

Technically, if it were implemented, it would be mostly a copy-paste of
the BJX2 Core; though it would require a more significant rewrite of
BGBCC (though one could use GCC for most purposes).

Though, I could make it "less effort" IRL by just having the "BETA-3"
mode sorta be XG2 mode with RISC-V's register layout.

Though, would still need to go through the effort of moving BGBCC to
RISC-V's C ABI...

But, I guess one could question how sensible it is to go into these
sorts of technical details in the context of science fiction (as opposed
to just using hand-waved "funky new element or whatever" style
explanations like in Star-Trek and similar).

Then again, the story also does go somewhat into how the robot AIs work
(essentially, it is a vaguely similar structure to the "Large Language
Model" approach; just with the idea that the model is light enough, and
the processor in the robot powerful enough, that the AI can be run
directly in the robot).

Also, most of the "normal" robots are portrayed in-story as being
limited enough that they can't recognize their own reflection in a
mirror, and one person can impersonate another simply by printing a
picture of the other person's face on a piece of paper and turning it
into a paper mask. Well, this would either work, or cause the facial
recognition algorithm to not recognize them at all.

But, the idea that a person would need to make a rubber mask to fool the
AIs seems a bit too advanced (I think there is a non-zero number of TV
shows where rubber masks are supposed to fool other humans). So, I went
with the idea that folded paper masks were sufficient.

In this case, mind-uploaded humans (in otherwise robot bodies) are
depicted as retaining otherwise human-level intelligence (however, the
idea is also that the human-level modules would require more powerful
processors and have higher energy requirements as well).

But, OTOH, the idea is not so much a direct neural simulation of a human
brain as an AI model which mimics recorded neural activation patterns
(captured in a special way), with it (somehow) achieving a passable
approximation of the original person (this is possibly itself a bit of a
hand-wave). (But, technically more advanced than, though not entirely
dissimilar to, the idea of how "Replika" originated; not really the
original person, but more an AI-based approximation).

In this case, I don't think that "actual" neuron level simulation of a
human brain (much less the unresolved issue of how to reasonably image a
person's connectome) would be realistically possible in the world
imagined (the idea is that Moore's Law hits a wall in the 2020s, so by
the 2050s transistor budgets were not significantly improved; followed
by some amount of cultural erosion in the 2060-2090 timeframe following
a significant population decline; story being set mostly in the 2090s).

In an earlier form of the story, it was based on the MegaMan setting,
but has been reworked to be its own setting (and more within the limits
of "stuff that is scientifically possible").

Which originally mostly meant things like replacing the use of
teleporters with a network of suspension monorails (nevermind whether or
not the idea of a public-transit system based on pods riding on
monorails is "viable").

Well, and personal debates about how much to go into religious topics,
etc... Well, and the intersections of religion and transhumanism (in the
context of a world where the lines between human and machine start to
get fuzzy).

Well, effectively, in-story, the idea is that a lot of the transhuman
stuff got going in the 2060s, but then the government made most of this
illegal. Trying to artificially maintain a status quo at a roughly
mid-2000s technology level; forbidding any AI or robotics much more
advanced than something like a Roomba (but, in the shadows, there are
some much bigger and more powerful AIs at work).

The timeline being sort of like:
2040s/2050s: Roughly Mega-Man Classic levels of technology
  Borderline and fully sentient machines come into being;
  A form of mind-uploading becomes possible;
  Things like public transit are more common;
  ...
2060s/2070s: Significant AI restrictions are in place.
  Turing locks to try to limit the emergence of sentience.
  With human-level (or beyond) AIs being illegal.
2080s: A full ban goes into effect.
  The "Turing locks" were not entirely sufficient.
2090s:
  More like the 2000s/2010s, just more of a cyberpunk dystopia;
  Also, everyone is back to driving cars, etc;
  Things like religion are also illegal;
  ...

So, one has a cyberpunk underworld of transhumans (cyborgs, uploads,
etc), sentient robots, people who want to be able to express their
religious beliefs, ... With some unease between the groups (the robots
don't trust humans; the more conventional religious types don't get
along well with the transhumans; etc).

Well, with a lot of people living in communities inside the remnants of
the "officially defunct" mass-transit system (in the stations/hubs for
the monorail network).

Well, with the mainstream society living in a sort of highly-regulated
form of the "suburban" lifestyle (with limited freedom of
self-expression and an actively enforced eugenics program, etc). With
the authorities doing basically all they can to "try to put the genie
back in the bottle" (and enforce their own image of an idealized form of
humanity).

....

Still not done with the main story arc.
Not sure how realistic any of this is in a larger sense.

It intersects partly with another story I had written which is set in
the 2070s, mostly in a world where AIs are highly regulated, but
nonetheless the emergence of sentience is still something that can happen
with some of the larger and more powerful AIs.

That story had set up a sort of ranking system for the AIs:
Alpha:
  Lower-power AIs, sub-human intelligence;
  No Turing lock needed, as they are incapable of sentience;
  Mostly run on hardware with comparable stats to a modern desktop PC.
  (~ 10-100 TFLOP)
Beta:
  Roughly human-level;
  Weak protections are put in place to try to avoid sentience;
  Low risk, as most would be unable to do much beyond a normal human;
  Run on roughly minicomputer-sized hardware.
  (~ 100-1000 TFLOP)
Gamma:
  Mildly superhuman;
  Stronger protections are required;
  Essentially run out of a data-center;
  Comparable to a group of people, or a particularly smart human;
  (~ 1-10 PFLOP)
Delta:
  More superhuman (like a whole organization);
  ...


[article truncated]
Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing...

<892c06e6-5303-4cef-8e4d-ebe8b0c1ea0en@googlegroups.com>


https://www.novabbs.com/devel/article-flat.php?id=32013&group=comp.arch#32013

X-Received: by 2002:a05:620a:318e:b0:754:8657:7b9f with SMTP id bi14-20020a05620a318e00b0075486577b9fmr922161qkb.8.1683360900924;
Sat, 06 May 2023 01:15:00 -0700 (PDT)
X-Received: by 2002:a05:6871:b25:b0:192:c0d5:3e57 with SMTP id
fq37-20020a0568710b2500b00192c0d53e57mr1545847oab.3.1683360900639; Sat, 06
May 2023 01:15:00 -0700 (PDT)
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!border-2.nntp.ord.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Sat, 6 May 2023 01:15:00 -0700 (PDT)
In-Reply-To: <u34pd9$2o4k0$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2607:fea8:1dde:6a00:7d25:589e:d9c5:2bee;
posting-account=QId4bgoAAABV4s50talpu-qMcPp519Eb
NNTP-Posting-Host: 2607:fea8:1dde:6a00:7d25:589e:d9c5:2bee
References: <u34pd9$2o4k0$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <892c06e6-5303-4cef-8e4d-ebe8b0c1ea0en@googlegroups.com>
Subject: Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing...
From: robfi...@gmail.com (robf...@gmail.com)
Injection-Date: Sat, 06 May 2023 08:15:00 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 323
 by: robf...@gmail.com - Sat, 6 May 2023 08:15 UTC

On Saturday, May 6, 2023 at 1:44:39 AM UTC-4, BGB wrote:
> [... full quote of the parent article snipped ...]
>
> Part of the idea in the other story was that a roughly Epsilon class AI
> became sentient (due to the efforts of a hacker), and then proceeded to
> covertly gather resources to build a bunch of unmanned (sorta, *)
> spaceships to function as seed-colonizer ships for other star-systems.
> (*: There are no fully-formed humans on the ship; rather, the humans and
> other organic lifeforms are created at the destination via robots and
> genetically engineered biological samples; with the genetics of the
> lifeforms present being fine-tuned to the specifics of the planets they
> were being sent to).
>
> This partly ties in with another past story (current thinking is that
> an alternate-universe past version of the same AI also unintentionally
> led to a "gray goo" event; with the "gray goo" then itself becoming its
> own society modeled in past humanity's image; based on the
> "memories" of all it had consumed).
>
> Though, this was a bit more outside the scope of "actual science", so
> more fits in with some of my past "softer" sci-fi (these guys basically
> showing up as a sort of antagonist; with the added threat that parts of
> them are not entirely stable and, given the right conditions, can easily
> "devolve" into their original state and trigger yet another gray-goo
> event). Though, in these stories, it is more because of the idea that
> the FTL technology (*) isn't particularly reliable at "keeping something
> within the same universe timeline", so members of this "species" can end
> up arriving in timelines where they did not otherwise come into
> existence (say, an instance of members of this species, where the AI had
> gray-goo'ed the Earth, showing up in colony worlds from the timeline
> where said AI had never gray-goo'ed the Earth...).
>
> *: My harder stories tend to assume that things like FTL don't exist.
> But, some of my softer stories allow it (though, its mechanics are very
> much unlike Star-Trek, as trips are typically one-way, time-reversed,
> and basically tend to break things like causality as they are
> effectively also time-machines that just so happen to also move through
> space in the process).
>
>
>
> Or, OTOH, some stuff that goes on, on my side of things, when I am not
> writing code (and/or machining parts in the shop...).
>
> ...
>
>
> Any thoughts...


[article truncated]
Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was writing...

<u370lb$36nc4$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=32022&group=comp.arch#32022

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88...@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: (Partial OT): Mutant ISA idea from a Sci-Fi story I was
writing...
Date: Sat, 6 May 2023 20:59:04 -0500
Organization: A noiseless patient Spider
Lines: 487
Message-ID: <u370lb$36nc4$1@dont-email.me>
References: <u34pd9$2o4k0$1@dont-email.me>
<892c06e6-5303-4cef-8e4d-ebe8b0c1ea0en@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 7 May 2023 01:59:07 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="50d01ac6052893174202701e4104c5f8";
logging-data="3366276"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+7rwumK1piiSb/PEnIrW/+"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.10.1
Cancel-Lock: sha1:ARQ9fViLQvqZwl9Nlox+jG+78MU=
Content-Language: en-US
In-Reply-To: <892c06e6-5303-4cef-8e4d-ebe8b0c1ea0en@googlegroups.com>
 by: BGB - Sun, 7 May 2023 01:59 UTC

On 5/6/2023 3:15 AM, robf...@gmail.com wrote:
> On Saturday, May 6, 2023 at 1:44:39 AM UTC-4, BGB wrote:
>> [... full quote of the grandparent article snipped ...]
>
> Writing is one of my alternate endeavours too. I have written a couple of
> short books, and I hope to have one available soon; it is being edited by a
> relative. It’s fiction but not really sci-fi. It is a collection of short stories.
>


[article truncated]