Re: Augmenting humans is a better way

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jul 28 2001 - 15:46:23 MDT


At 04:47 PM 7/28/2001 -0400, you wrote:
>James Higgins wrote:
> > >Exactly, it may well be impossible to come up with a one-size-fits-all
> > >technology for something as uniquely individual as the brain. And what
> > >company will take the risks to commercialize it if they know that for many
> > >people it won't work, or they even risk getting sued? We live in a
> > >country where Dow Chemical got sued by women who got breast implants.
> > >Will companies really expose themselves to the kinds of risks involved
> > >with neural hacking?
> >
> > Hello? Sorry, but I just HAVE to point this out. Did you know that there
> > are more countries in the world than the United States? Personally,
>
>Yes and almost all of them are less advanced when it comes to biological
>and computing sciences. Many of them are close or even equivalent, but
>those same countries are also even less likely to work on Really Scary
>Human Augmenting science. Think Europe.
>So if you have to bail out of the USA that is going to extend the
>bio-based Singularity timeline even farther than I am already thinking
>about.

Same companies, different countries. You can buy medication overseas that
the FDA has not (or will not) approve for sale in the US. Multi-national
corporations make individual countries mostly irrelevant when it comes to
holding back new technology/advances.

> > if/when they come up with implants that offer a significant mental
> > advantage and have a low chance of screwing you up I *will* be getting
> > one. I don't care if I have to go to Japan, Europe, Russia, Mexico or
> > Chiba City (CyberPunk is my favorite fictional genre). When it becomes
> > possible to do, it will also become possible to get (and without waiting
> > for FDA approval)! Then, assuming these have a significant effect on
> > intelligence, the next series will likely be available sooner than might be
> > expected (you have to assume the developers are going to use their own
> > product). I also imagine that income for upgraded individuals will
> > drastically go up, which will make affording the next upgrade much
> > easier. Which is another reason why I'd want to get on the boat early.
> >
> > But, that said, this will still take a very long time. Possibly much
> > longer than the AI path. However, I will NOT say that the AI path is
> > likely to be faster than this path since NO ONE IN THE WHOLE WORLD HAS EVER
> > CREATED ANYTHING REMOTELY SIMILAR TO REAL AI. And thus it is IMPOSSIBLE
>
>Now you are the one making claims... for all you know Webmind may very well
>be remotely similar to real AI. In fact you have Ben here making that
>claim. I do not see anyone around claiming to be near to finishing a
>Real Neural Interface. RNIs seem to be around the stage of development
>that AI was back when computers were using vacuum tubes.
>
>A different way to look at it is this: with the computing power of the
>near future, AI is at the stage now where we can do real scientific
>experimentation. That (being able to really experiment) almost always
>leads to breakthroughs. RNIs are not there yet. I think you will agree
>with me that the AI path /definitely/ seems to be much farther along from
>these two perspectives.

No, I'm specifically NOT making claims. I'm taking a show-me attitude.

I said IF/WHEN they come up with implants that A) make a significant
difference in mental capacity and B) aren't likely to screw up the
recipient. I'm not making any claims there.

"When it becomes possible to do, it will also become possible to get". If
the time is taken to design such an implant, it will become available in
one fashion or another. With enough money you can even go buy a nuclear
weapon, so you will be able to buy implants once they have been designed.

If I had an implant that significantly enhanced my mental ability, I feel
confident that I could negotiate for much better pay. I'm already quite
good at this anyway.

As for Real AI, when someone gets one working then we can talk. The fact
is that no one knows what Real AI is going to require. I also think Ben is
doing a great job and is probably the most likely (that I know of anyway)
to succeed. However, no one can predict with any credibility that he will
succeed. I like to think he will, and I'd bet he thinks so too.

> > to estimate if/when we will ever get real AI. Without incredibly massive
> > funding it may take 15-20 years just to build a knowledge base sufficient
> > to kick start the thing. And you can't seriously argue the point because,
>
>Knowledge bases (shouldn't this be one word?) already exist both in natural
>form (the world, the Net) and in prepackaged formats like Cyc. Again, you see
>that AI is farther along in development.

Yes, they exist. Are they of the correct form? Do they contain sufficient
knowledge?

> > honestly, you don't know otherwise. I give very serious credit to Ben
> > Goertzel's opinions on AI (keep up the great work) and I doubt he could, in
> > all honesty, give any sort of realistic time line for the first Real AI
> > (TM). Thus I don't know, you don't know, we don't know.
>
>He may be unwilling to do so in public, but I can tell you that it
>won't take until 2030 according to rumors I hear...

Certainly the hardware will be available before then (there is a strong
track record to predict that on). But there is no track record for
predicting Real AI. To make predictions about how long it will take, we
would first have to understand it, and we don't understand Real AI yet. I
could just as easily say we will break the barrier for traveling faster
than the speed of light by 2050, but that also requires knowledge we don't
yet have, and thus we cannot predict it.
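
For what it's worth, here is the sort of back-of-envelope extrapolation that
"track record" allows (my own illustrative sketch in Python; the 18-month
doubling period is just the usual Moore's-law rule of thumb, not a number
from anyone on this list):

# Rough Moore's-law style extrapolation -- purely illustrative numbers.
doubling_months = 18                      # rule-of-thumb doubling period
months = (2030 - 2001) * 12
factor = 2 ** (months / float(doubling_months))
print("roughly %.0fx more compute per dollar by 2030" % factor)  # ~660,000x

You can quibble over the exact doubling period, but some such curve exists
for hardware. Nothing comparable exists for predicting when we will
understand intelligence.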

So everyone agrees that 2030 is the outside estimate, but that is just a
hopeful guess (that I also share, BTW).

> > Even self upgrading AI will take many steps to get there. Same exact
> > thing, just a different route. No technology is going to just go *blip*
> > and produce Singularity.
>
>Steps at computer speed, not biological hacking speed. VAST difference.

You're probably correct, but they both require steps. And no one knows for
certain; maybe an advanced biological implant would allow for reprogramming
& expandability, which could put both on similar terms.

>Actually you cannot say that for certain about AI. We definitely can
>say that about the biological route, at least up till the point we
>get nanotech/inloading. There is nothing to prevent an AI that is
>smart enough from developing a quick route to nanotech and then yes
>*blip* away we go.

But it will most likely take many steps to get to that point, especially
based on Eli's Seed AI.

> > You know, producing the first Real AI may be so difficult that it may just
> > require augmented humans to get there in any reasonable amount of
> > time. Have you considered that possible reality?
>
>No I do not see that as a reasonable possibility. Most AI scientists
>will agree that even if we can't design an AI, we can evolve one. By
>brute force if we have to, by simply trying all possible code. It's
>like picking a combination lock, if the lock is openable at all then
>you will eventually open it just by trying all ways. And the rise in
>computing power makes this almost inevitable by 2030 or even earlier.

Don't agree, especially after your argument. Trying "all possible code"
would take a very, very long time. Unless a significant percentage of
available resources were devoted to this, it could easily exceed
2030. Computing power does you no good when it comes to software
development; a computer that runs 10 times faster has almost no effect on
the speed of the developer. Neural implants could, on the other hand, have
an incredible impact on the speed of developers, as maybe they could think
code instead of typing it. I'm not saying that this is going to be
necessary, but it would definitely be helpful and may be necessary in
order to keep the proposed time line.
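
To put a rough number on "a very, very long time" (my own back-of-envelope
Python sketch; the 1,000-bit program size and the tester speed are made-up
illustrative figures, not estimates from anyone here):

# Blind enumeration of program space -- illustrative numbers only.
program_bits = 1000
candidates = 2 ** program_bits              # about 1.07e301 possible programs
tests_per_second = 10 ** 30                 # an absurdly generous machine
seconds_per_year = 60 * 60 * 24 * 365
years = candidates / float(tests_per_second * seconds_per_year)
print("about %.1e years to try them all" % years)   # ~3.4e263 years

Evolution only works because it searches selectively rather than
exhaustively, so the real question is how clever the search has to be, and
that is a software problem again.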

Plus, it might be much more likely for enhanced humans to get friendliness
right the first time.

> > > > > the singularity than the imho cringing one proposed by the Institute
> > > > > of building an AI and - if everything works out as hoped - maybe
> > > > > humans will be permitted to scale the heights; what I would call the
> > > > > "singularity by proxy" path. I, for one, intend to participate
> > > > > DIRECTLY in the singularity. I hope there are at least a few others
> > > > > here as well.
> > > > >
> > >
> > >In order to participate directly in a transhuman based Singularity you
> > >would have to be one of the first humans enhanced into transhumanity. How
> > >do you plan to achieve that? Even if you do, the vast majority of humanity
> > >will just be riding your coattails no matter which path occurs first.
> > >
> > >Secondly, without an AI to guide things, what prevents individuals intra
> > >or post-Singularity from using nanotech or other ultratechnologies in
> > >destructive ways in an anarchic fashion? I'd like to hear a brief but
> > >coherent timeline/description of how you think this would play out. Our
> > >argument is that while it all probably would turn out ok, it would
> > >generally be safer to get a Friendly AI in place first.
> >
> > Well, personally, I'm still not sold on this whole Friendliness
>
>Care to answer my questions?

Which questions?

The "participate directly" one? Don't care. I would *like* to participate,
but if it happens without me and goes smoothly, I'll happily ride someone's
coattails.

Destruction / Anarchy? First, I have nothing against anarchy. Actually,
an anarchy where the individuals treat each other respectfully would be my
preference. As for destructive technologies, nothing prevents that. My
personal belief is that either superintelligence will promote friendliness
or we're doomed.


