
Subject (Author)
* Non-State Hostile Intelligence Service & Maxwell Silver Hammer (LowRider44M)
`* Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer (chris rodgers)
 `* Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer (chris rodgers)
  `- Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer (LowRider44M)

Non-State Hostile Intelligence Service & Maxwell Silver Hammer

<4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com>


https://www.novabbs.com/interests/article-flat.php?id=3028&group=alt.dreams.castaneda#3028

Newsgroups: alt.dreams.castaneda
Date: Wed, 2 Nov 2022 12:47:27 -0700 (PDT)
Injection-Info: google-groups.googlegroups.com; posting-host=76.118.119.160; posting-account=y1Ih6QoAAABmkDmly_GJHZrtOKbrLuAF
NNTP-Posting-Host: 76.118.119.160
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com>
Subject: Non-State Hostile Intelligence Service & Maxwell Silver Hammer
From: intraph...@gmail.com (LowRider44M)
Injection-Date: Wed, 02 Nov 2022 19:47:28 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 49203
 by: LowRider44M - Wed, 2 Nov 2022 19:47 UTC

Back In The USSR
https://youtu.be/nS5_EQgbuLc

Maxwell Silver Hammer
https://youtu.be/3HuXFfq79I8

Unexplainability and Incomprehensibility of Artificial Intelligence
Roman V. Yampolskiy

Computer Engineering and Computer Science
University of Louisville
roman.yampolskiy@louisville.edu, @romanyam
June 20, 2019

"If a lion could speak, we couldn't understand him"
Ludwig Wittgenstein

“It would be possible to describe everything scientifically, but it would make no sense. It would
be a description without meaning - as if you described a Beethoven symphony as a variation of
wave pressure.”
Albert Einstein

“Some things in life are too complicated to explain in any language. … Not just to explain to
others but to explain to yourself. Force yourself to try to explain it and you create lies.”
Haruki Murakami

“I understand that you don’t understand”
Grigori Perelman

Abstract
Explainability and comprehensibility of AI are important requirements for intelligent systems
deployed in real-world domains. Users want and frequently need to understand how decisions
impacting them are made. Similarly, it is important to understand how an intelligent system
functions for safety and security reasons. In this paper, we describe two complementary
impossibility results (Unexplainability and Incomprehensibility), essentially showing that
advanced AIs would not be able to accurately explain some of their decisions, and that for the
decisions they could explain, people would not understand some of those explanations.
Keywords: AI Safety, Black Box, Comprehensible, Explainable AI, Impossibility, Intelligible,
Interpretability, Transparency, Understandable, Unsurveyability.

1. Introduction
For decades AI projects relied on human expertise, distilled by knowledge engineers, and were
both explicitly designed and easily understood by people. For example, expert systems, frequently
based on decision trees, are perfect models of human decision making and so are naturally
understandable by both developers and end-users. With the paradigm shift in the leading AI
methodology over the last decade to machine learning systems based on Deep Neural Networks
(DNN), this natural ease of understanding was sacrificed. The current systems are seen as “black
boxes” (not to be confused with AI boxing [1, 2]), opaque to human understanding but extremely
capable both with respect to results and learning of new domains. As long as Big Data and Huge
Compute are available, zero human knowledge is required [3] to achieve superhuman [4]
performance.
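
As a rough illustration of that contrast, here is a minimal sketch (assuming scikit-learn and its bundled iris toy data; none of this comes from the paper): a shallow decision tree can be printed as human-readable if/else rules, while even a tiny neural network exposes nothing but weight matrices.

    # Minimal sketch: interpretable tree vs. opaque network (assumes scikit-learn).
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # Expert-system-style model: its decision logic prints as readable rules.
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # Small neural network: all it can show for itself is its weight matrices.
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
    print([w.shape for w in net.coefs_])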

With their newfound capabilities, DNN-based AI systems are tasked with making decisions in
employment [5], admissions [6], investing [7], matching [8], diversity [9], security [10, 11],
recommendations [12], banking [13], and countless other critical domains. As many such domains
are legally regulated, it is a desirable property and frequently a requirement [14, 15] that such
systems should be able to explain how they arrived at their decisions, particularly to show that they
are bias free [16]. Additionally, and perhaps even more importantly, to make artificially intelligent
systems safe and secure [17], it is essential that we understand what they are doing and why. A
particular area of interest in AI Safety [18-25] is predicting and explaining causes of AI failures
[26].

A significant amount of research [27-41] is now being devoted to developing explainable AI. In
the next section we review some main results and general trends relevant to this paper.

2. Literature Review
Hundreds of papers have been published on eXplainable Artificial Intelligence (XAI) [42].
According to DARPA [27], XAI is supposed to “produce more explainable models, while
maintaining a high level of learning performance … and enable human users to understand,
appropriately, trust, and effectively manage the emerging generation of artificially intelligent
partners”. Detailed analysis of literature on explainability or comprehensibility is beyond the scope
of this paper, but the readers are encouraged to look at many excellent surveys of the topic [43-
45]. Miller [46] surveys social sciences to understand how people explain, in the hopes of
transferring that knowledge to XAI, but of course people often say: “I can’t explain it” or “I don’t
understand”. For example, most people are unable to explain how they recognize faces, a problem
we frequently ask computers to solve [47, 48].

Despite the wealth of publications on XAI and related concepts [49-51], the subject of unexplainability
or incomprehensibility of AI is only implicitly addressed. Some limitations of explainability are
discussed: “ML algorithms intrinsically consider high-degree interactions between input features,
which make disaggregating such functions into human understandable form difficult. … While a
single linear transformation may be interpreted by looking at the weights from the input features
to each of the output classes, multiple layers with non-linear interactions at every layer imply
disentangling a super complicated nested structure which is a difficult task and potentially even a
questionable one [52]. … As mentioned before, given the complicated structure of ML models,
for the same set of input variables and prediction targets, complex machine learning algorithms
can produce multiple accurate models by taking very similar but not the same internal pathway in
the network, so details of explanations can also change across multiple accurate models. This
systematic instability makes automated generated explanations difficult.” [42].
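
That instability shows up even without deep networks. A minimal sketch (assuming scikit-learn, with synthetic data and arbitrary seeds): two models of near-identical accuracy whose feature rankings, and hence any post-hoc explanation built on them, need not agree.

    # Equally accurate models can lean on different features (synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    for seed in (1, 2):
        model = RandomForestClassifier(random_state=seed).fit(Xtr, ytr)
        ranking = sorted(range(20), key=lambda i: -model.feature_importances_[i])
        print(f"seed={seed}  accuracy={model.score(Xte, yte):.3f}  "
              f"top features={ranking[:5]}")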

Sutcliffe et al. talk about incomprehensible theorems [53]: “Comprehensibility estimates the effort
required for a user to understand the theorem. Theorems with many or deeply nested structures
may be considered incomprehensible.” Muggleton et al. [54] suggest “using inspection time as a
proxy for incomprehension. That is, we might expect that humans take a long time … in the case
they find the program hard to understand. As a proxy, inspection time is easier to measure than
comprehension.”

The tradeoff between explainability and comprehensibility is recognized [52], but is not taken to
its logical conclusion. “[A]ccuracy generally requires more complex prediction methods [but]
simple and interpretable functions do not make the most accurate predictors” [55]. “Indeed, there
are algorithms that are more interpretable than others are, and there is often a tradeoff between
accuracy and interpretability: the most accurate AI/ML models usually are not very explainable
(for example, deep neural nets, boosted trees, random forests, and support vector machines), and
the most interpretable models usually are less accurate (for example, linear or logistic regression).”
[42].
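
A minimal sketch of that tradeoff (assuming scikit-learn; the synthetic dataset and settings are arbitrary, so the size of the gap will vary): the linear model's coefficients can be read off directly, while the ensemble typically buys extra accuracy at the cost of any comparably simple summary.

    # Interpretable linear model vs. black-box ensemble (synthetic data; results vary).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                               n_clusters_per_class=3, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    linear = LogisticRegression(max_iter=5000).fit(Xtr, ytr)
    ensemble = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)

    print("logistic regression accuracy:", round(linear.score(Xte, yte), 3))
    print("gradient boosting accuracy:  ", round(ensemble.score(Xte, yte), 3))
    print("readable linear coefficients:", linear.coef_.shape)  # one weight per feature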

Incomprehensibility is supported by well-known impossibility results. Charlesworth proved his
Comprehensibility theorem while attempting to formalize the answer to such questions as: “If [full
human-level intelligence] software can exist, could humans understand it?” [56]. While describing
implications of his theorem on AI, he writes [57]: “Comprehensibility Theorem is the first
mathematical theorem implying the impossibility of any AI agent or natural agent—including a
not-necessarily infallible human agent—satisfying a rigorous and deductive interpretation of the
self-comprehensibility challenge. … Self-comprehensibility in some form might be essential for a
kind of self-reflection useful for self-improvement that might enable some agents to increase their
success.” It is reasonable to conclude that a system which doesn’t comprehend itself would not be
able to explain itself.

Hernandez-Orallo et al. introduce the notion of K-incomprehensibility (a.k.a. K-hardness) [58].

“This will be the formal counterpart to our notion of hard-to-learn good explanations. In our sense,
a k-incomprehensible string with a high k (difficult to comprehend) is different (harder) than a k-
compressible string (difficult to learn) [59] and different from classical computational complexity
(slow to compute). Calculating the value of k for a given string is not computable in general.
Fortunately, the converse, i.e., given an arbitrary k, calculating whether a string is k-
comprehensible is computable. … Kolmogorov Complexity measures the amount of information
but not the complexity to understand them.” [58].
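
Kolmogorov complexity itself is uncomputable, but a lossless compressor yields a crude, computable upper bound, which is how such quantities are usually approximated in practice. A standard-library sketch (not taken from ref. [58]):

    # Compressed length as a rough upper bound on descriptive complexity.
    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    regular = b"ab" * 500      # highly regular: a short description exists
    noise = os.urandom(1000)   # almost certainly incompressible

    print("regular:", compressed_size(regular), "bytes")
    print("random: ", compressed_size(noise), "bytes")
    # The regular string shrinks to a few dozen bytes; the random one barely
    # shrinks at all, mirroring the gap between low and high complexity.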

Yampolskiy addresses limits of understanding other agents in his work on the space of possible
minds [60]: “Each mind design corresponds to an integer and so is finite, but since the number of
minds is infinite some have a much greater number of states compared to others. This property
holds for all minds. Consequently, since a human mind has only a finite number of possible states,
there are minds which can never be fully understood by a human mind as such mind designs have
a much greater number of states, making their understanding impossible as can be demonstrated
by the pigeonhole principle.” Hibbard points out the safety impact of AI incomprehensibility:
“Given the incomprehensibility of their thoughts, we will not be able to sort out the effect of any
conflicts they have between their own interests and ours.”
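
A toy version of that counting argument (the numbers are arbitrary): if a machine mind has more reachable states than the observer trying to model it, any mapping of machine states onto observer states must merge some of them, so those states become indistinguishable to the observer.

    # Pigeonhole in miniature: more source states than target states forces collisions.
    observer_states = 10**6
    machine_states = 10**9
    forced_collisions = machine_states - observer_states
    print(forced_collisions)  # at least this many machine states must share a label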


Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer

<98db572b-c848-461a-9812-046db13979aan@googlegroups.com>


https://www.novabbs.com/interests/article-flat.php?id=3029&group=alt.dreams.castaneda#3029

Newsgroups: alt.dreams.castaneda
Date: Fri, 4 Nov 2022 16:57:53 -0700 (PDT)
In-Reply-To: <4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2603:8000:6e00:194f:71fd:96d3:d9d0:7d41;
posting-account=0nUoVAoAAABAx_EzSYxVstLp1Y2NeNcX
NNTP-Posting-Host: 2603:8000:6e00:194f:71fd:96d3:d9d0:7d41
References: <4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <98db572b-c848-461a-9812-046db13979aan@googlegroups.com>
Subject: Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer
From: allready...@gmail.com (chris rodgers)
Injection-Date: Fri, 04 Nov 2022 23:57:53 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 18646
 by: chris rodgers - Fri, 4 Nov 2022 23:57 UTC

On Wednesday, November 2, 2022 at 12:47:28 PM UTC-7, LowRider44M wrote:
> Back In The USSR
> https://youtu.be/nS5_EQgbuLc
>
> Maxwell Silver Hammer
> https://youtu.be/3HuXFfq79I8
>
> Unexplainability and Incomprehensibility of Artificial Intelligence
> Roman V. Yampolskiy
> [...]
>
> We are slowly starting to realize that as AIs become more powerful, the models behind their
> success will become ever less comprehensible to us [61]: “… deep learning that produces
> outcomes based on so many different variables under so many different conditions being
> transformed by so many layers of neural networks that humans simply cannot comprehend the
> model the computer has built for itself. … Clearly our computers have surpassed us in their power
> to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than
> reducing phenomena to fit a relatively simple model, we can now let our computers make models
> as big as they need to. But this also seems to mean that what we know depends upon the output of
> machines the functioning of which we cannot follow, explain, or understand. … But some of the
> new models are incomprehensible. They can exist only in the weights of countless digital triggers
> networked together and feeding successive layers of networked, weighted triggers representing
> huge quantities of variables that affect one another in ways so particular that we cannot derive
> general principles from them.”
>
> “Now our machines are letting us see that even if the rules are simple, elegant, beautiful and
> rational, the domain they govern is so granular, so intricate, so interrelated, with everything
> causing everything else all at once and forever, that our brains and our knowledge cannot begin to
> comprehend it. … Our new reliance on inscrutable models as the source of the justification of our
> beliefs puts us in an odd position. If knowledge includes the justification of our beliefs, then
> knowledge cannot be a class of mental content, because the justification now consists of models
> that exist in machines, models that human mentality cannot comprehend. … But the promise of
> machine learning is that there are times when the machine’s inscrutable models will be far more
> predictive than the manually constructed, human-intelligible ones. In those cases, our
> knowledge — if we choose to use it — will depend on justifications that we simply cannot
> understand. … [W]e are likely to continue to rely ever more heavily on justifications that we
> simply cannot fathom. And the issue is not simply that we cannot fathom them, the way a lay
> person can’t fathom a string theorist’s ideas. Rather, it’s that the nature of computer-based
> justification is not at all like human justification. It is alien.” [61].
>
> 3. Unexplainability
> A number of impossibility results are well-known in many areas of research [62-70] and some are
> starting to be discovered in the domain of AI research, for example: Unverifiability [71],
> Unpredictability1 [72] and limits on preference deduction [73] or alignment [74]. In this section
> we introduce Unexplainability of AI and show that some decisions of superintelligent systems will
> never be explainable, even in principle. We will concentrate on the most interesting case, a
> superintelligent AI acting in novel and unrestricted domains. Simple cases of Narrow AIs making
> decisions in restricted domains (Ex. Tic-Tac-Toe) are both explainable and comprehensible.
> Consequently a whole spectrum of AIs can be developed from completely
> explainable/comprehensible to completely unexplainable/incomprehensible. We define
> Unexplainability as impossibility of providing an explanation for certain decisions made by an
> intelligent system which is both 100% accurate and comprehensible.
>
>
> [Footnote 1] Unpredictability is not the same as Unexplainability or Incomprehensibility; see ref. 72 (Yampolskiy, R.V., Unpredictability of AI, arXiv preprint arXiv:1905.13053, 2019) for details.
>
> Artificial Deep Neural Networks continue increasing in size and may already comprise millions
> of neurons, thousands of layers and billions of connecting weights, ultimately targeting and
> perhaps surpassing the size of the human brain. They are trained on Big Data from which million
> feature vectors are extracted and on which decisions are based, with each feature contributing to
> the decision in proportion to a set of weights. To explain such a decision, which relies on literally
> billions of contributing factors, AI has to either simplify the explanation and so make the
> explanation less accurate/specific/detailed or to report it exactly but such an explanation elucidates
> nothing by virtue of its semantic complexity, large size and abstract data representation. Such
> precise reporting is just a copy of the trained DNN model.
>
> For example, an AI utilized in the mortgage industry may look at an application to decide credit
> worthiness of a person in order to approve them for a loan. For simplicity, let’s say the system
> looks at only a hundred descriptors of the applicant and utilizes a neural network to arrive at a
> binary approval decision. An explanation which included all hundred features and weights of the
> neural network would not be very useful, so the system may instead select one or two of the most
> important features and explain its decision with respect to just those top properties, ignoring the
> rest. This highly simplified explanation would not be accurate as the other 98 features all
> contributed to the decision and if only one or two top features were considered the decision could
> have been different. This is similar to how Principal Component Analysis works for dimensionality
> reduction [75].
>
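
A minimal sketch of that kind of "top features only" explanation (assuming scikit-learn; the 100 descriptors are synthetic stand-ins, not real mortgage data):

    # A readable two-feature explanation of a decision that actually used 100 features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=5000, n_features=100, n_informative=20,
                               random_state=0)
    model = LogisticRegression(max_iter=5000).fit(X, y)

    applicant = X[0]
    contributions = model.coef_[0] * applicant       # per-feature pull on the score
    top2 = np.argsort(np.abs(contributions))[-2:][::-1]

    print("decision:", model.predict(applicant.reshape(1, -1))[0])
    print("explained using features", top2.tolist(), "of 100")
    # The other 98 features still moved the score, so the simplified explanation
    # is readable but no longer a faithful account of the decision.
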
> Even if the agent trying to get the explanation is not a human but another AI the problem remains
> as the explanation is either inaccurate or agent-encoding specific. Trained model could be copied
> to another neural network, but it would likewise have a har
> 104. Yampolskiy, R.V. and M. Spellchecker, Artificial Intelligence Safety and Cybersecurity: a
> Timeline of AI Failures. arXiv preprint arXiv:1610.07997, 2016.
hot rod charlie runs tomorrow
last race of his career


Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer

<33c75b5f-4a0d-4c6c-9984-b79d2f6b2e81n@googlegroups.com>


https://www.novabbs.com/interests/article-flat.php?id=3030&group=alt.dreams.castaneda#3030

Newsgroups: alt.dreams.castaneda
Date: Sun, 6 Nov 2022 06:20:48 -0800 (PST)
In-Reply-To: <98db572b-c848-461a-9812-046db13979aan@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2603:8000:6e00:194f:71fd:96d3:d9d0:7d41;
posting-account=0nUoVAoAAABAx_EzSYxVstLp1Y2NeNcX
NNTP-Posting-Host: 2603:8000:6e00:194f:71fd:96d3:d9d0:7d41
References: <4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com> <98db572b-c848-461a-9812-046db13979aan@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <33c75b5f-4a0d-4c6c-9984-b79d2f6b2e81n@googlegroups.com>
Subject: Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer
From: allready...@gmail.com (chris rodgers)
Injection-Date: Sun, 06 Nov 2022 14:20:48 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 19358
 by: chris rodgers - Sun, 6 Nov 2022 14:20 UTC

On Friday, November 4, 2022 at 4:57:53 PM UTC-7, chris rodgers wrote:
> On Wednesday, November 2, 2022 at 12:47:28 PM UTC-7, LowRider44M wrote:
> > Unexplainability and Incomprehensibility of Artificial Intelligence
> > Roman V. Yampolskiy
> > [...]
> hot rod charlie runs tomorrow
> last race of his career
>
> and joe biden should tap out
> and Harris? she can sell hot dogs in front of the White House


Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer

<e520a431-99c8-432b-b4f4-7952c2f1b651n@googlegroups.com>


https://www.novabbs.com/interests/article-flat.php?id=3031&group=alt.dreams.castaneda#3031

Newsgroups: alt.dreams.castaneda
Date: Sun, 6 Nov 2022 18:23:11 -0800 (PST)
In-Reply-To: <33c75b5f-4a0d-4c6c-9984-b79d2f6b2e81n@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=76.118.119.160; posting-account=y1Ih6QoAAABmkDmly_GJHZrtOKbrLuAF
NNTP-Posting-Host: 76.118.119.160
References: <4cd31497-63d3-4d08-9727-d8baf129e4b8n@googlegroups.com>
<98db572b-c848-461a-9812-046db13979aan@googlegroups.com> <33c75b5f-4a0d-4c6c-9984-b79d2f6b2e81n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <e520a431-99c8-432b-b4f4-7952c2f1b651n@googlegroups.com>
Subject: Re: Non-State Hostile Intelligence Service & Maxwell Silver Hammer
From: intraph...@gmail.com (LowRider44M)
Injection-Date: Mon, 07 Nov 2022 02:23:11 +0000
Content-Type: text/plain; charset="UTF-8"
X-Received-Bytes: 1668
 by: LowRider44M - Mon, 7 Nov 2022 02:23 UTC

> > hot rod charlie runs tomorrow
> > last race of his career
> >
> > and joe biden should tap out
> > and Harris? she can sell hot dogs in front of the White House
> poor hot rod charlie got beat yesterday.
> didn't even finish 'in the money'. oh well.
> they had some fast horses in that race.

I'd bet a 20 on this dog!
https://twitter.com/i/status/1589242912109301760
