Rocksolid Light



interests / alt.politics / Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Subject - Author
* Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - 25B.Z969
`* Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - Frank
 `* Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - 25B.Z969
  `* Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - Rudy Crayola
   +* Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - Mighty Wannabe
   |`- Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - Nevermind
   `- Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T - 25B.Z969

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Link: https://www.novabbs.com/interests/article-flat.php?id=19752&group=alt.politics#19752
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!feed1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!peer01.ams4!peer.am4.highwinds-media.com!news.highwinds-media.com!fx11.ams4.POSTED!not-for-mail
From: NoO...@none.com (Nevermind)
Newsgroups: alt.politics
Subject: Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T
X-Newsreader: NewsLeecher V8.0 Beta 4 (http://www.newsleecher.com)
References: <_XDMK.2096414$70_9.1928169@fx10.ams1>
Lines: 151
Message-ID: <1%LMK.896832$cVE6.234088@fx11.ams4>
X-Complaints-To: abuse@eweka.nl
NNTP-Posting-Date: Mon, 22 Aug 2022 14:01:33 UTC
Organization: Eweka Internet Services
Date: Mon, 22 Aug 2022 14:01:33 GMT
X-Received-Bytes: 8256
 by: Nevermind - Mon, 22 Aug 2022 14:01 UTC

In reply to "Mighty Wannabe" who wrote the following:

> Rudy Crayola wrote on 8/22/2022 12:00 AM:
> > On 8/21/2022 11:23 AM, 25B.Z969 wrote:
> > > On 8/19/22 8:23 AM, Frank wrote:
> > > > On 8/18/2022 11:33 PM, 25B.Z969 wrote:
> > > > > https://www.dailymail.co.uk/sciencetech/article-11123757/Teslas-self-d
> > > > > riving-software-confuses-horse-drawn-carriage-highway-semi-truck.html
> > > > >
> > > > >
> > > > > Horsepower! Tesla's self-driving software goes haywire when
> > > > > it spots a carriage on the highway and confuses the nostalgic
> > > > > way of traveling with a semi-truck
> > > > >
> > > > > A video shared to TikTok shows a Tesla mistaking a horse-drawn
> > > > > carriage for a semi-truck, pedestrian and sedan
> > > > >
> > > > > . . .
> > > > >
> > > > >   Don't blame Musk ... this is a problem with all current
> > > > >   'AI' software that relies on extensive 'training' to
> > > > >   sort-of understand what it's seeing. Humans also need
> > > > >   such 'training' - but humans have enough added IQ and
> > > > >   resources to "get it" even when seeing something quite
> > > > >   novel.
> > > > >
> > > > >   In short, 'AI', as it currently exists, lacks common
> > > > >   sense. It is still VERY "robotic". If something was not
> > > > >   covered in its training, then it's at a total loss. I'm
> > > > >   surprised the car didn't just hit the accelerator and
> > > > >   crash through the unknown obstacle.
> > > > >
> > > > >   Some years back the US Army funded a cutting-edge 'AI'
> > > > >   system designed to recognize, perhaps attack, various
> > > > >   kinds of enemy assets. They spent a long time teaching
> > > > >   it what a tank looked like from every angle - and it
> > > > >   seemed to score near 100%. Then they found out ... all
> > > > >   the photos of tanks they'd trained it with were taken
> > > > >   in the woods on clear sunny days. The 'AI' saw THAT
> > > > >   and just said "Tank !" - didn't matter if there was
> > > > >   a machine in the picture or not.
> > > > >
> > > > >   The ability to *recognize* can be found in animals as
> > > > >   tiny as fleas and mosquitoes .... but electronic
> > > > >   "intelligence" is lacking a billion years of real-world
> > > > >   life-or-death experience and the super-specialized
> > > > >   neural systems evolution refined. In theory we have
> > > > >   the computing power now to make an e-mosquito, but
> > > > >   we don't have the PLAN, the HOW IT WORKS, nature
> > > > >   evolved. There's some trick to all this, and we do
> > > > >   NOT seem able to find it.
> > > > >
> > > > >   Electronic 'AI' goes back to the 1950s. Even into the
> > > > >   80's it was possible to find guru Marvin Minsky here
> > > > >   on Usenet. Thing is, every generation of 'AI' researcher
> > > > >   has GOTTEN IT WRONG. In the beginning it was an overly
> > > > >   simplistic view ... "one transistor can give you an
> > > > >   if/then decision" ... without considering the MASSIVE
> > > > >   exercise required to provide that final transistor with
> > > > >   the right info when it came to anything vaguely
> > > > >   resembling the real world.
> > > > >
> > > > >   The (in)famous "HAL-9000" was a product of that overly-
> > > > >   optimistic error in fictional form ... Arthur Clarke
> > > > >   initially thought Minsky and friends were correct, that
> > > > >   we'd have human-level AI by the dawn of the 21st century
> > > > >   for sure.
> > > > >
> > > > >   NOW ... it seems the plan is to OVERPOWER the problem
> > > > >   with vast numbers of little processors or 'nerve-like'
> > > > >   memristor arrays which are 'trained' with vast numbers
> > > > >   of images or other simulated inputs. But, as seen in
> > > > >   this news blurb, even that doesn't really work very
> > > > >   well. Something important is STILL missing, the paradigm
> > > > >   just isn't right. Plenty of theories on what's missing
> > > > >   but none seem to produce much. Likely, since we're
> > > > >   dealing with electronics instead of natural goo even
> > > > >   trying to mimic nerves may be a very inefficient approach.
> > > > >
> > > > >   Now for certain MILITARY uses ... today's 'AI' could really
> > > > >   produce some extremely dangerous weapons ... a flying
> > > > >   kill-em-all bot IS possible - 10 guns, 10 targets at once,
> > > > >   never misses (like they always do in the movies) ........
> > > > >
> > > > >   Bottom line, do NOT trust your car to drive for you.
> > > > >   This was a tech introduced LONG before it was really
> > > > >   viable. It can do "OK" ... but "OK" really isn't
> > > > >   good enough here. For certain apps today's 'AI' can
> > > > >   be useful, but don't trust your life to it.
> > > >
> > > > Teslas have crashed into trucks because apparently the flat white
> > > > backs look like the sky.
> > > >
> > > > I worry about the collision braking on my car after hearing another
> > > > brand thought a tunnel was a vehicle and braked and got rear ended.
> > >
> > >
> > >   The DOD funded DARPA contests to create self-navigating
> > >   vehicles. The test course was in a desert - not much in
> > >   the way of cross-traffic or pedestrians.
> > >
> > >   The 'AI' has been refined since then, but it still suffers
> > >   some problems inherent to the current approach. The things
> > >   must be "trained", but the real world has too many nuances
> > >   for such 'training' to be successful beyond a certain point.
> > >
> > >   'AI' lacks "SENSE" - the ability to extrapolate from the
> > >   specific to the general. Its "training" is just a bunch
> > >   of specifics ... but the ability to derive a general paradigm
> > >   to employ when presented with odd/novel cases is almost
> > >   non-existent. It's not really 'AI' - just fairly good
> > >   pattern-matching intended to evoke some canned responses.
> > >   It's just not GOOD enough. I'd trust a 5-year-old to
> > >   drive smarter. This isn't JUST Tesla - there are numerous
> > >   makers involved with the same tech.
> > >
> > >   Don't trust your life to these things - indeed insurers should
> > >   refuse to cover them.
> > >
> > >   MAYbe in ten or twenty years .........
> >
> > If the incident is ticketed by police, who or what gets the ticket? Or
> > who bears the responsibility?
>
>
> In Toronto, Canada, if a vehicle is caught by red-light camera or photo
> radar, the vehicle's registered owner gets the ticket. Since the driver
> cannot be positively identified, there are no demerit points for
> the unknown driver.
>
>
> > Does the car require a driver's license and liability insurance? Can an
> > unlicensed passenger be held responsible?
> >
>
>
> I suppose the self-driving vehicle has the government's seal of
> approval to be on the road, so the government has implicitly issued a
> driver's licence to the AI in the self-driving vehicle.
>
> I think the owner of the self-driving vehicle should buy comprehensive
> auto-insurance, which would cover the cost of collision damage to itself,
> to physical property and to other vehicles involved, plus compensation for
> injuries to passengers and pedestrians.


Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Link: https://www.novabbs.com/interests/article-flat.php?id=22908&group=alt.politics#22908
Path: i2pn2.org!rocksolid2!news.neodome.net!weretis.net!feeder6.news.weretis.net!usenet.blueworldhosting.com!feed1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!border-1.nntp.ord.giganews.com!nntp.giganews.com!Xl.tags.giganews.com!local-2.nntp.ord.giganews.com!nntp.earthlink.com!news.earthlink.com.POSTED!not-for-mail
NNTP-Posting-Date: Fri, 19 Aug 2022 03:31:55 +0000
Newsgroups: talk.politics.misc,alt.survival,alt.politics,alt.science,soc.culture.usa
X-Mozilla-News-Host: news://news.west.earthlink.net:119
From: 25B.Z...@noda.net (25B.Z969)
Subject: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just
AIN'T
Date: Thu, 18 Aug 2022 23:33:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
Thunderbird/78.13.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Message-ID: <p-SdnUjAXLy2mWL_nZ2dnZfqn_jNnZ2d@earthlink.com>
Lines: 81
X-Usenet-Provider: http://www.giganews.com
NNTP-Posting-Host: 98.77.165.7
X-Trace: sv3-lre35SN/XWzRPaaQA2cV4xvOfy9jnIBLKwGwbbzKBDCINfNPQPp7Liuk7JPkjuoK2Nm/b13s12gHwBn!5S6rGjsfJl6VYyPKpKRycmS/UBnvJ8HhfuVipLaVzEVM8WEyTPZagIEYnbKDz8INu0GCumh5vI6v!48Wy7OwkVGm4OXYvEQ==
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
X-Received-Bytes: 5126
 by: 25B.Z969 - Fri, 19 Aug 2022 03:33 UTC

https://www.dailymail.co.uk/sciencetech/article-11123757/Teslas-self-driving-software-confuses-horse-drawn-carriage-highway-semi-truck.html

Horsepower! Tesla's self-driving software goes haywire when
it spots a carriage on the highway and confuses the nostalgic
way of traveling with a semi-truck

A video shared to TikTok shows a Tesla mistaking a horse-drawn
carriage for a semi-truck, pedestrian and sedan

. . .

Don't blame Musk ... this is a problem with all current
'AI' software that relies on extensive 'training' to
sort-of understand what it's seeing. Humans also need
such 'training' - but humans have enough added IQ and
resources to "get it" even when seeing something quite
novel.

In short, 'AI', as it currently exists, lacks common
sense. It is still VERY "robotic". If something was not
covered in its training, then it's at a total loss. I'm
surprised the car didn't just hit the accelerator and
crash through the unknown obstacle.

Some years back the US Army funded a cutting-edge 'AI'
system designed to recognize, perhaps attack, various
kinds of enemy assets. They spent a long time teaching
it what a tank looked like from every angle - and it
seemed to score near 100%. Then they found out ... all
the photos of tanks they'd trained it with were taken
in the woods on clear sunny days. The 'AI' saw THAT
and just said "Tank !" - didn't matter if there was
a machine in the picture or not.
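
That tank story is the textbook "spurious correlation" failure: the model latches onto whatever feature most cheaply separates the training labels. A toy sketch makes it concrete - every number here is invented, and a simple nearest-centroid matcher stands in for whatever the real classifier was:

```python
import math

# Toy stand-in for the tank classifier. Each "photo" is reduced to two
# invented features: background brightness (every tank photo was shot in
# sunny woods, so ~1.0) and how visibly tank-like the foreground is.
tank_photos    = [(1.0, 0.9), (1.0, 0.5), (1.0, 0.3)]   # sunny woods, tank present
no_tank_photos = [(0.0, 0.1), (0.0, 0.0), (0.0, 0.2)]   # overcast scenes, no tank

def centroid(points):
    # Mean of each feature across the example "photos".
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {"tank": centroid(tank_photos), "no tank": centroid(no_tank_photos)}

def classify(photo):
    # Nearest centroid wins - the "training" is just these six examples.
    return min(CENTROIDS, key=lambda k: math.dist(photo, CENTROIDS[k]))

print(classify((1.0, 0.0)))   # sunny woods, NO tank anywhere -> "tank"
print(classify((0.0, 0.9)))   # overcast day, tank in plain view -> "no tank"
```

Because brightness separates the two training sets perfectly while tank visibility is noisy, the matcher keys on the weather, not the tank. It would also score near 100% on held-out photos gathered the same way - which is exactly why the failure only surfaced later.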

The ability to *recognize* can be found in animals as
tiny as fleas and mosquitoes .... but electronic
"intelligence" is lacking a billion years of real-world
life-or-death experience and the super-specialized
neural systems evolution refined. In theory we have
the computing power now to make an e-mosquito, but
we don't have the PLAN, the HOW IT WORKS, nature
evolved. There's some trick to all this, and we do
NOT seem able to find it.

Electronic 'AI' goes back to the 1950s. Even into the
80's it was possible to find guru Marvin Minsky here
on Usenet. Thing is, every generation of 'AI' researcher
has GOTTEN IT WRONG. In the beginning it was an overly
simplistic view ... "one transistor can give you an
if/then decision" ... without considering the MASSIVE
exercise required to provide that final transistor with
the right info when it came to anything vaguely
resembling the real world.

The (in)famous "HAL-9000" was a product of that overly-
optimistic error in fictional form ... Arthur Clarke
initially thought Minsky and friends were correct, that
we'd have human-level AI by the dawn of the 21st century
for sure.

NOW ... it seems the plan is to OVERPOWER the problem
with vast numbers of little processors or 'nerve-like'
memristor arrays which are 'trained' with vast numbers
of images or other simulated inputs. But, as seen in
this news blurb, even that doesn't really work very
well. Something important is STILL missing, the paradigm
just isn't right. Plenty of theories on what's missing
but none seem to produce much. Likely, since we're
dealing with electronics instead of natural goo even
trying to mimic nerves may be a very inefficient approach.

Now for certain MILITARY uses ... today's 'AI' could really
produce some extremely dangerous weapons ... a flying
kill-em-all bot IS possible - 10 guns, 10 targets at once,
never misses (like they always do in the movies) ........

Bottom line, do NOT trust your car to drive for you.
This was a tech introduced LONG before it was really
viable. It can do "OK" ... but "OK" really isn't
good enough here. For certain apps today's 'AI' can
be useful, but don't trust your life to it.

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Link: https://www.novabbs.com/interests/article-flat.php?id=22909&group=alt.politics#22909
Path: i2pn2.org!rocksolid2!news.neodome.net!news.mixmin.net!eternal-september.org!reader01.eternal-september.org!.POSTED!not-for-mail
From: fran...@frank.net (Frank)
Newsgroups: talk.politics.misc,alt.survival,alt.politics,alt.science,soc.culture.usa
Subject: Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI"
just AIN'T
Date: Fri, 19 Aug 2022 08:23:45 -0400
Organization: A noiseless patient Spider
Lines: 89
Message-ID: <tdnvcj$1fu1l$1@dont-email.me>
References: <p-SdnUjAXLy2mWL_nZ2dnZfqn_jNnZ2d@earthlink.com>
Reply-To: frank@frank.net
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 19 Aug 2022 12:23:47 -0000 (UTC)
Injection-Info: reader01.eternal-september.org; posting-host="ff49b730752565a9ad53c3f796afc77d";
logging-data="1570869"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19fq85jABlw4ngfzFQhZ5dWrOxaksmHXHI="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
Thunderbird/91.12.0
Cancel-Lock: sha1:myxWpPywhRGEwnzerFR68pYklvg=
In-Reply-To: <p-SdnUjAXLy2mWL_nZ2dnZfqn_jNnZ2d@earthlink.com>
Content-Language: en-US
 by: Frank - Fri, 19 Aug 2022 12:23 UTC

On 8/18/2022 11:33 PM, 25B.Z969 wrote:
> https://www.dailymail.co.uk/sciencetech/article-11123757/Teslas-self-driving-software-confuses-horse-drawn-carriage-highway-semi-truck.html
>
>
> Horsepower! Tesla's self-driving software goes haywire when
> it spots a carriage on the highway and confuses the nostalgic
> way of traveling with a semi-truck
>
> A video shared to TikTok shows a Tesla mistaking a horse-drawn
> carriage for a semi-truck, pedestrian and sedan
>
> . . .
>
>   Don't blame Musk ... this is a problem with all current
>   'AI' software that relies on extensive 'training' to
>   sort-of understand what it's seeing. Humans also need
>   such 'training' - but humans have enough added IQ and
>   resources to "get it" even when seeing something quite
>   novel.
>
>   In short, 'AI', as it currently exists, lacks common
>   sense. It is still VERY "robotic". If something was not
>   covered in its training, then it's at a total loss. I'm
>   surprised the car didn't just hit the accelerator and
>   crash through the unknown obstacle.
>
>   Some years back the US Army funded a cutting-edge 'AI'
>   system designed to recognize, perhaps attack, various
>   kinds of enemy assets. They spent a long time teaching
>   it what a tank looked like from every angle - and it
>   seemed to score near 100%. Then they found out ... all
>   the photos of tanks they'd trained it with were taken
>   in the woods on clear sunny days. The 'AI' saw THAT
>   and just said "Tank !" - didn't matter if there was
>   a machine in the picture or not.
>
>   The ability to *recognize* can be found in animals as
>   tiny as fleas and mosquitoes .... but electronic
>   "intelligence" is lacking a billion years of real-world
>   life-or-death experience and the super-specialized
>   neural systems evolution refined. In theory we have
>   the computing power now to make an e-mosquito, but
>   we don't have the PLAN, the HOW IT WORKS, nature
>   evolved. There's some trick to all this, and we do
>   NOT seem able to find it.
>
>   Electronic 'AI' goes back to the 1950s. Even into the
>   80's it was possible to find guru Marvin Minsky here
>   on Usenet. Thing is, every generation of 'AI' researcher
>   has GOTTEN IT WRONG. In the beginning it was an overly
>   simplistic view ... "one transistor can give you an
>   if/then decision" ... without considering the MASSIVE
>   exercise required to provide that final transistor with
>   the right info when it came to anything vaguely
>   resembling the real world.
>
>   The (in)famous "HAL-9000" was a product of that overly-
>   optimistic error in fictional form ... Arthur Clarke
>   initially thought Minsky and friends were correct, that
>   we'd have human-level AI by the dawn of the 21st century
>   for sure.
>
>   NOW ... it seems the plan is to OVERPOWER the problem
>   with vast numbers of little processors or 'nerve-like'
>   memristor arrays which are 'trained' with vast numbers
>   of images or other simulated inputs. But, as seen in
>   this news blurb, even that doesn't really work very
>   well. Something important is STILL missing, the paradigm
>   just isn't right. Plenty of theories on what's missing
>   but none seem to produce much. Likely, since we're
>   dealing with electronics instead of natural goo even
>   trying to mimic nerves may be a very inefficient approach.
>
>   Now for certain MILITARY uses ... today's 'AI' could really
>   produce some extremely dangerous weapons ... a flying
>   kill-em-all bot IS possible - 10 guns, 10 targets at once,
>   never misses (like they always do in the movies) ........
>
>   Bottom line, do NOT trust your car to drive for you.
>   This was a tech introduced LONG before it was really
>   viable. It can do "OK" ... but "OK" really isn't
>   good enough here. For certain apps today's 'AI' can
>   be useful, but don't trust your life to it.

Teslas have crashed into trucks because apparently the flat white backs
look like the sky.

I worry about the collision braking on my car after hearing another
brand thought a tunnel was a vehicle and braked and got rear ended.

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Link: https://www.novabbs.com/interests/article-flat.php?id=22928&group=alt.politics#22928
Path: i2pn2.org!rocksolid2!news.neodome.net!weretis.net!feeder6.news.weretis.net!news.misty.com!border-2.nntp.ord.giganews.com!nntp.giganews.com!Xl.tags.giganews.com!local-1.nntp.ord.giganews.com!nntp.earthlink.com!news.earthlink.com.POSTED!not-for-mail
NNTP-Posting-Date: Sun, 21 Aug 2022 16:22:27 +0000
From: 25B.Z...@noda.net (25B.Z969)
Subject: Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI"
just AIN'T
Newsgroups: talk.politics.misc,alt.survival,alt.politics,alt.science,soc.culture.usa
References: <p-SdnUjAXLy2mWL_nZ2dnZfqn_jNnZ2d@earthlink.com>
<tdnvcj$1fu1l$1@dont-email.me>
X-Mozilla-News-Host: news://news.west.earthlink.net
Date: Sun, 21 Aug 2022 12:23:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
Thunderbird/78.13.0
MIME-Version: 1.0
In-Reply-To: <tdnvcj$1fu1l$1@dont-email.me>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Message-ID: <HrCcnaaRwthexp_-nZ2dnZfqn_qdnZ2d@earthlink.com>
Lines: 115
X-Usenet-Provider: http://www.giganews.com
NNTP-Posting-Host: 98.77.165.67
X-Trace: sv3-qM5fAowdBkFI7rmoUwAQXNJOlduoxvZ3lSgkI3527bpjCKLBQ0dbtZYGt+IK0L47cxXoMeKm2LN3w/s!H/T6m7If/JScdvrK6Kh9xUspK7md48UrjsXTlqWFbJTx8L6/TUAU2XaBO0q+fdcvX/DjHgSQNabU!h+vrgYJXAXWXIkT12XQ=
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
 by: 25B.Z969 - Sun, 21 Aug 2022 16:23 UTC

On 8/19/22 8:23 AM, Frank wrote:
> On 8/18/2022 11:33 PM, 25B.Z969 wrote:
>> https://www.dailymail.co.uk/sciencetech/article-11123757/Teslas-self-driving-software-confuses-horse-drawn-carriage-highway-semi-truck.html
>>
>>
>> Horsepower! Tesla's self-driving software goes haywire when
>> it spots a carriage on the highway and confuses the nostalgic
>> way of traveling with a semi-truck
>>
>> A video shared to TikTok shows a Tesla mistaking a horse-drawn
>> carriage for a semi-truck, pedestrian and sedan
>>
>> . . .
>>
>>    Don't blame Musk ... this is a problem with all current
>>    'AI' software that relies on extensive 'training' to
>>    sort-of understand what it's seeing. Humans also need
>>    such 'training' - but humans have enough added IQ and
>>    resources to "get it" even when seeing something quite
>>    novel.
>>
>>    In short, 'AI', as it currently exists, lacks common
>>    sense. It is still VERY "robotic". If something was not
>>    covered in its training, then it's at a total loss. I'm
>>    surprised the car didn't just hit the accelerator and
>>    crash through the unknown obstacle.
>>
>>    Some years back the US Army funded a cutting-edge 'AI'
>>    system designed to recognize, perhaps attack, various
>>    kinds of enemy assets. They spent a long time teaching
>>    it what a tank looked like from every angle - and it
>>    seemed to score near 100%. Then they found out ... all
>>    the photos of tanks they'd trained it with were taken
>>    in the woods on clear sunny days. The 'AI' saw THAT
>>    and just said "Tank !" - didn't matter if there was
>>    a machine in the picture or not.
>>
>>    The ability to *recognize* can be found in animals as
>>    tiny as fleas and mosquitoes .... but electronic
>>    "intelligence" is lacking a billion years of real-world
>>    life-or-death experience and the super-specialized
>>    neural systems evolution refined. In theory we have
>>    the computing power now to make an e-mosquito, but
>>    we don't have the PLAN, the HOW IT WORKS, nature
>>    evolved. There's some trick to all this, and we do
>>    NOT seem able to find it.
>>
>>    Electronic 'AI' goes back to the 1950s. Even into the
>>    80's it was possible to find guru Marvin Minsky here
>>    on Usenet. Thing is, every generation of 'AI' researcher
>>    has GOTTEN IT WRONG. In the beginning it was an overly
>>    simplistic view ... "one transistor can give you an
>>    if/then decision" ... without considering the MASSIVE
>>    exercise required to provide that final transistor with
>>    the right info when it came to anything vaguely
>>    resembling the real world.
>>
>>    The (in)famous "HAL-9000" was a product of that overly-
>>    optimistic error in fictional form ... Arthur Clarke
>>    initially thought Minsky and friends were correct, that
>>    we'd have human-level AI by the dawn of the 21st century
>>    for sure.
>>
>>    NOW ... it seems the plan is to OVERPOWER the problem
>>    with vast numbers of little processors or 'nerve-like'
>>    memristor arrays which are 'trained' with vast numbers
>>    of images or other simulated inputs. But, as seen in
>>    this news blurb, even that doesn't really work very
>>    well. Something important is STILL missing, the paradigm
>>    just isn't right. Plenty of theories on what's missing
>>    but none seem to produce much. Likely, since we're
>>    dealing with electronics instead of natural goo even
>>    trying to mimic nerves may be a very inefficient approach.
>>
>>    Now for certain MILITARY uses ... today's 'AI' could really
>>    produce some extremely dangerous weapons ... a flying
>>    kill-em-all bot IS possible - 10 guns, 10 targets at once,
>>    never misses (like they always do in the movies) ........
>>
>>    Bottom line, do NOT trust your car to drive for you.
>>    This was a tech introduced LONG before it was really
>>    viable. It can do "OK" ... but "OK" really isn't
>>    good enough here. For certain apps today's 'AI' can
>>    be useful, but don't trust your life to it.
>
> Teslas have crashed into trucks because apparently the flat white backs
> look like the sky.
>
> I worry about the collision braking on my car after hearing another
> brand thought a tunnel was a vehicle and braked and got rear ended.

The DOD funded DARPA contests to create self-navigating
vehicles. The test course was in a desert - not much in
the way of cross-traffic or pedestrians.

The 'AI' has been refined since then, but it still suffers
some problems inherent to the current approach. The things
must be "trained", but the real world has too many nuances
for such 'training' to be successful beyond a certain point.

'AI' lacks "SENSE" - the ability to extrapolate from the
specific to the general. Its "training" is just a bunch
of specifics ... but the ability to derive a general paradigm
to employ when presented with odd/novel cases is almost
non-existent. It's not really 'AI' - just fairly good
pattern-matching intended to evoke some canned responses.
It's just not GOOD enough. I'd trust a 5-year-old to
drive smarter. This isn't JUST Tesla - there are numerous
makers involved with the same tech.
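
The "canned responses" complaint is, in effect, the closed-world problem: a classifier trained on N classes must force every input onto one of them, even a horse-and-buggy it has never seen. A minimal sketch (the classes and feature numbers below are invented for illustration, not taken from any real system) shows the forced choice, and how a simple rejection threshold at least lets the system admit ignorance:

```python
import math

# Invented prototype per known class: (height in m, typical speed in km/h).
PROTOTYPES = {
    "semi-truck": (4.0, 90.0),
    "sedan":      (1.5, 90.0),
    "pedestrian": (1.7, 5.0),
}

def classify(obj):
    # Forced choice: the nearest known class wins, however far away it is.
    return min(PROTOTYPES, key=lambda c: math.dist(obj, PROTOTYPES[c]))

def classify_with_reject(obj, max_dist=2.0):
    # Same matcher, but allowed to say "I don't know" when nothing is close.
    label = classify(obj)
    return label if math.dist(obj, PROTOTYPES[label]) <= max_dist else "unknown"

horse_and_buggy = (2.5, 8.0)   # nothing like this was in "training"
print(classify(horse_and_buggy))              # -> "pedestrian" (forced guess)
print(classify_with_reject(horse_and_buggy))  # -> "unknown" (slow down instead)
```

Real perception stacks are far more elaborate, but the failure mode is the same: without some rejection or uncertainty mechanism, "never seen it" silently becomes "nearest thing I know".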

Don't trust your life to these things - indeed insurers should
refuse to cover them.

MAYbe in ten or twenty years .........

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

Link: https://www.novabbs.com/interests/article-flat.php?id=22931&group=alt.politics#22931
Path: i2pn2.org!rocksolid2!news.neodome.net!news.mixmin.net!eternal-september.org!reader01.eternal-september.org!.POSTED!not-for-mail
From: Met...@Amphetamin.com (Rudy Crayola)
Newsgroups: talk.politics.misc,alt.survival,alt.politics,alt.science,soc.culture.usa
Subject: Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI"
just AIN'T
Date: Sun, 21 Aug 2022 23:00:13 -0500
Organization: A noiseless patient Spider
Lines: 121
Message-ID: <tduv0a$2j1ol$1@dont-email.me>
References: <p-SdnUjAXLy2mWL_nZ2dnZfqn_jNnZ2d@earthlink.com>
<tdnvcj$1fu1l$1@dont-email.me>
<HrCcnaaRwthexp_-nZ2dnZfqn_qdnZ2d@earthlink.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 22 Aug 2022 04:00:11 -0000 (UTC)
Injection-Info: reader01.eternal-september.org; posting-host="286c1b04e89506577d66cd052ef98a41";
logging-data="2721557"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18fKyAaTpSFRMzBz5yhoknT"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.1.2
Cancel-Lock: sha1:B9ve2MFOHkxnIny05QPCbIwuJOU=
In-Reply-To: <HrCcnaaRwthexp_-nZ2dnZfqn_qdnZ2d@earthlink.com>
Content-Language: en-US
 by: Rudy Crayola - Mon, 22 Aug 2022 04:00 UTC

On 8/21/2022 11:23 AM, 25B.Z969 wrote:
> On 8/19/22 8:23 AM, Frank wrote:
>> On 8/18/2022 11:33 PM, 25B.Z969 wrote:
>>> https://www.dailymail.co.uk/sciencetech/article-11123757/Teslas-self-driving-software-confuses-horse-drawn-carriage-highway-semi-truck.html
>>>
>>> Horsepower! Tesla's self-driving software goes haywire when
>>> it spots a carriage on the highway and confuses the nostalgic
>>> way of traveling with a semi-truck
>>>
>>> A video shared to TikTok shows a Tesla mistaking a horse-drawn
>>> carriage for a semi-truck, pedestrian and sedan
>>>
>>> . . .
>>>
>>>    Don't blame Musk ... this is a problem with all current
>>>    'AI' software that relies on extensive 'training' to
>>>    sort-of understand what it's seeing. Humans also need
>>>    such 'training' - but humans have enough added IQ and
>>>    resources to "get it" even when seeing something quite
>>>    novel.
>>>
>>>    In short, 'AI', as it currently exists, lacks common
>>>    sense. It is still VERY "robotic". If something was not
>>>    covered in its training, then it's at a total loss. I'm
>>>    surprised the car didn't just hit the accelerator and
>>>    crash through the unknown obstacle.
>>>
>>>    Some years back the US Army funded a cutting-edge 'AI'
>>>    system designed to recognize, perhaps attack, various
>>>    kinds of enemy assets. They spent a long time teaching
>>>    it what a tank looked like from every angle - and it
>>>    seemed to score near 100%. Then they found out ... all
>>>    the photos of tanks they'd trained it with were taken
>>>    in the woods on clear sunny days. The 'AI' saw THAT
>>>    and just said "Tank !" - didn't matter if there was
>>>    a machine in the picture or not.
>>>
>>>    The ability to *recognize* can be found in animals as
>>>    tiny as fleas and mosquitoes .... but electronic
>>>    "intelligence" is lacking a billion years of real-world
>>>    life-or-death experience and the super-specialized
>>>    neural systems evolution refined. In theory we have
>>>    the computing power now to make an e-mosquito, but
>>>    we don't have the PLAN, the HOW IT WORKS, nature
>>>    evolved. There's some trick to all this, and we do
>>>    NOT seem able to find it.
>>>
>>>    Electronic 'AI' goes back to the 1950s. Even into the
>>>    80's it was possible to find guru Marvin Minsky here
>>>    on Usenet. Thing is, every generation of 'AI' researcher
>>>    has GOTTEN IT WRONG. In the beginning it was an overly
>>>    simplistic view ... "one transistor can give you an
>>>    if/then decision" ... without considering the MASSIVE
>>>    exercise required to provide that final transistor with
>>>    the right info when it came to anything vaguely
>>>    resembling the real world.
>>>
>>>    The (in)famous "HAL-9000" was a product of that overly-
>>>    optimistic error in fictional form ... Arthur Clarke
>>>    initially thought Minsky and friends were correct, that
>>>    we'd have human-level AI by the dawn of the 21st century
>>>    for sure.
>>>
>>>    NOW ... it seems the plan is to OVERPOWER the problem
>>>    with vast numbers of little processors or 'nerve-like'
>>>    memristor arrays which are 'trained' with vast numbers
>>>    of images or other simulated inputs. But, as seen in
>>>    this news blurb, even that doesn't really work very
>>>    well. Something important is STILL missing, the paradigm
>>>    just isn't right. Plenty of theories on what's missing
>>>    but none seem to produce much. Likely, since we're
>>>    dealing with electronics instead of natural goo even
>>>    trying to mimic nerves may be a very inefficient approach.
>>>
>>>    Now for certain MILITARY uses ... today's 'AI' could really
>>>    produce some extremely dangerous weapons ... a flying
>>>    kill-em-all bot IS possible - 10 guns, 10 targets at once,
>>>    never misses (like they always do in the movies) ........
>>>
>>>    Bottom line, do NOT trust your car to drive for you.
>>>    This was a tech introduced LONG before it was really
>>>    viable. It can do "OK" ... but "OK" really isn't
>>>    good enough here. For certain apps today's 'AI' can
>>>    be useful, but don't trust your life to it.
>>
>> Teslas have crashed into trucks because apparently the flat white
>> backs look like the sky.
>>
>> I worry about the collision braking on my car after hearing another
>> brand thought a tunnel was a vehicle and braked and got rear ended.
>
>
>   The DOD funded DARPA contests to create self-navigating
>   vehicles. The test course was in a desert - not much in
>   the way of cross-traffic or pedestrians.
>
>   The 'AI' has been refined since then, but it still suffers
>   some problems inherent to the current approach. The things
>   must be "trained", but the real world has too many nuances
>   for such 'training' to be successful beyond a certain point.
>
>   'AI' lacks "SENSE" - the ability to extrapolate from the
>   specific to the general. Their "training" is just a bunch
>   of specifics ... but the ability to derive a general paradigm
>   to employ when presented with odd/novel cases is almost
>   non-existent. It's not really 'AI' - just fairly good
>   pattern-matching intended to evoke some canned responses.
>   It's just not GOOD enough. I'd trust a 5-year-old to
>   drive smarter. This isn't JUST Tesla - there are numerous
>   makers involved with the same tech.
>
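That "canned responses" point is what the horse-and-buggy clip shows: a matcher trained on a fixed set of classes has no "unknown" option, so a novel object gets forced into the nearest known bucket. A toy sketch - the class list and (length, height, speed) feature vectors are invented for illustration, not any real perception stack:

```python
# Toy sketch of pattern-matching without an "unknown" option: the
# matcher can only answer with a class it was trained on, so a novel
# object is forced into the nearest known bucket. The classes and
# (length_m, height_m, speed_kmh) vectors are made up for illustration.

KNOWN = {
    "semi-truck": (18.0, 4.0, 100.0),  # long, tall, highway speed
    "sedan":      ( 4.5, 1.5, 100.0),  # short, low, highway speed
    "pedestrian": ( 0.5, 1.8,   5.0),  # tiny, upright, slow
}

def nearest(features):
    """Return the known label with the smallest squared distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(KNOWN, key=lambda k: dist(KNOWN[k], features))

# A horse-drawn buggy: longish, tallish, slow. It matches nothing
# well, but the matcher must answer anyway - and the answer flips
# as soon as one feature (speed) changes a bit.
print(nearest((6.0, 2.5, 15.0)))  # -> pedestrian (slow speed dominates)
print(nearest((6.0, 2.5, 60.0)))  # -> sedan (same buggy, moving faster)
```

Note how the label lurches between known classes as one feature shifts - much like the video, where the buggy is tagged as a semi-truck, a pedestrian, and a sedan in turn.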
>   Don't trust your life to these things - indeed insurers should
>   refuse to cover them.
>
>   MAYbe in ten or twenty years .........

If the incident is ticketed by police, who or what gets the ticket? Or
the responsibility? Does the car require a driver's license and liability
insurance? Can an unlicensed passenger be held responsible?

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

<_XDMK.2096414$70_9.1928169@fx10.ams1>

 by: Mighty Wannabe - Mon, 22 Aug 2022 04:52 UTC

Rudy Crayola wrote on 8/22/2022 12:00 AM:
>
> If the incident is ticketed by Police, Who or what gets the ticket? Or
> the responsibility?

In Toronto, Canada, if a vehicle is caught by a red-light camera or
photo radar, the vehicle's registered owner gets the ticket. Since the
driver cannot be positively identified, there are no demerit points for
the unknown driver.

> Does the car require a Drivers license and Liability insurance. Can an
> unlicensed passenger be held responsible?
>

I suppose the self-driving vehicle has the government's seal of
approval to be on the road, so the government has implicitly issued a
driver's licence to the AI in the self-driving vehicle.

I think the owner of the self-driving vehicle should buy comprehensive
auto-insurance, which would cover the cost of collision damage to the
vehicle itself, to physical property and other vehicles involved, plus
compensation for injuries to passengers and pedestrians.

Re: Tesla Self-Driving AI Can't Even Cope with Horse-n-Buggy - "AI" just AIN'T

<gOKdnXyvndi_sJn-nZ2dnZfqnPXNnZ2d@earthlink.com>

 by: 25B.Z969 - Tue, 23 Aug 2022 01:29 UTC

On 8/22/22 12:00 AM, Rudy Crayola wrote:
>
> If the incident is ticketed by Police, Who or what gets the ticket? Or
> the responsibility? Does the car require a Drivers license and Liability
> insurance. Can an unlicensed passenger be held responsible?

Heh ... THOSE have been longstanding questions - and
we've had NO real answers so far. The makers kinda
weasel around it by "discouraging" what you'd call
"hands-off" mode - i.e. where you go to sleep or climb
into the back seat with your S.O. or favorite sheep and
a crack-pipe or whatever. This makes it YOUR fault.

Now they are working on mass-transit apps for this tech
and Musk himself has a prototype 18-wheeler intended to
navigate 16 tons at 70mph all by itself through
beltway traffic. Amazon and others are testing autonomous
delivery vehicles, including ones that fly. As the first
case would have ZERO drivers but potentially 50 unlicensed
humanoids, and the other two would have ZERO drivers at
all - well - then it'd be the MAKERS' fault unless they
could prove the customer/operator failed to maintain the
system (and they'll make one or another bit of that
complicated/time-consuming, HOPING the customer will be
one update behind).

Remember Windows Vista? Out of the box it was almost
unusable. You had to keep turning off as many 'security'
features as possible just to make it bearable. MS could
not FIX the security problems, so they made it YOUR fault
if something happened. This is why corps hire all those
expensive lawyers :-)

But in any case, 'AI' is just NOT ready for the highway.
WON'T be for a really long time yet. They've been putting
this stuff in vehicles for at least five years now and it
doesn't seem to be getting any 'smarter' - they merely
add one or two new pattern-match cases and assume that
will cover it.

