From: Brian Atkins (brian@posthuman.com)
Date: Sat Jul 28 2001 - 14:47:39 MDT
James Higgins wrote:
>
> > > > > The problem is figuring out what exactly will make us smarter
> > > > > and how to integrate that into our existing brain architecture. It's
> > > > > not as simple as adding more memory -- there are many different types
> > > > > of memory in the brain, and they are highly distributed and tightly
> > > > > connected with the computations being performed. Also, there are a lot
> > > > > of calibration problems that have to be overcome if we would like to
> > > > > be able to recognize meaningful patterns in the brain.
> >
> >Exactly, it may well be impossible to come up with a one-size-fits-all
> >technology for something as uniquely individual as the brain. And what
> >company will take the risks to commercialize it if they know that for many
> >people it won't work, or that they even risk getting sued? We live in a
> >country where Dow Chemical got sued by women who got breast implants.
> >Will companies really expose themselves to the kinds of risks involved
> >with neural hacking?
>
> Hello? Sorry, but I just HAVE to point this out. Did you know that there
> are more countries in the world than just the United States? Personally,
Yes, and almost all of them are less advanced when it comes to biological
and computing sciences. Many of them are close or even equivalent, but
those same countries are also even less likely to work on Really Scary
Human Augmenting science. Think Europe. So if you have to bail out of
the USA, that is going to extend the bio-based Singularity timeline even
further than I am already expecting.
> if/when they come up with implants that offer a significant mental
> advantage and have a low chance of screwing you up I *will* be getting
> one. I don't care if I have to go to Japan, Europe, Russia, Mexico or
> Chiba City (CyberPunk is my favorite fictional genre). When it becomes
> possible to do, it will also become possible to get (and without waiting
> for FDA approval)! Then, assuming these have a significant effect on
> intelligence, the next series will likely be available sooner than might be
> expected (you have to assume the developers are going to use their own
> product). I also imagine that income for upgraded individuals will
> drastically go up, which will make affording the next upgrade much
> easier. Which is another reason why I'd want to get on the boat early.
>
> But, that said, this will still take a very long time. Possibly much
> longer than the AI path. However, I will NOT say that the AI path is
> likely to be faster than this path since NO ONE IN THE WHOLE WORLD HAS EVER
> CREATED ANYTHING REMOTELY SIMILAR TO REAL AI. And thus it is IMPOSSIBLE
Now you are the one making claims... for all you know, Webmind may very well
be remotely similar to real AI. In fact you have Ben here making that
claim. I do not see anyone around claiming to be near to finishing a
Real Neural Interface. RNIs seem to be around the stage of development
that AI was back when computers were using vacuum tubes.
A different way to look at it is this: with the computing power now
arriving, AI is at the stage where we can do real scientific
experimentation. That (being able to really experiment) almost always
leads to breakthroughs. RNIs are not there yet. I think you will agree
with me that the AI path /definitely/ seems to be much farther along from
these two perspectives.
> to estimate if/when we will ever get real AI. Without incredibly massive
> funding it may take 15-20 years just to build a knowledge base sufficient
> to kick-start the thing. And you can't seriously argue the point because,
Knowledge bases (shouldn't this be one word?) already exist both in natural
form (the world, the Net) and in prepackaged formats like Cyc. Again, you see
that AI is farther along in development.
> honestly, you don't know otherwise. I give very serious credit to Ben
> Goertzel's opinions on AI (keep up the great work) and I doubt he could, in
> all honesty, give any sort of realistic time line for the first Real AI
> (TM). Thus I don't know, you don't know, we don't know.
He may be unwilling to do so in public, but I can tell you that it
won't take until 2030 according to rumors I hear...
>
> > > > Of course, if you can interface one human, then you can do it to a thousand
> > > > or a billion. You don't need detailed models of the brain for this kind of
> > > > thing - at least to start. You can begin with a "what do you feel when I do
> > > > this?" kind of thing and once crude dni's are working, things can take off.
> >
> >And so theoretically if you can "interface" a human, what does that get
> >you? Slightly quicker output for typing or controlling machines, maybe
> >slightly quicker input than you could get by reading? Expanded access
> >to memory, but I bet that would be very hard to do. But where does the
> >massive intelligence increase we want come from?
>
> Even self-upgrading AI will take many steps to get there. Same exact
Steps at computer speed, not biological hacking speed. VAST difference.
> thing, just a different route. No technology is going to just go *blip*
> and produce Singularity.
Actually, you cannot say that for certain about AI. We definitely can
say it about the biological route, at least up until the point we
get nanotech/inloading. There is nothing to prevent an AI that is
smart enough from developing a quick route to nanotech, and then, yes,
*blip*, away we go.
>
> You know, producing the first Real AI may be so difficult that it may just
> require augmented humans to get there in any reasonable amount of
> time. Have you considered that possible reality?
No, I do not see that as a reasonable possibility. Most AI scientists
will agree that even if we can't design an AI, we can evolve one. By
brute force if we have to, simply by trying all possible code. It's
like picking a combination lock: if the lock can be opened at all, you
will eventually open it just by trying every combination. And the rise
in computing power makes this almost inevitable by 2030 or even earlier.
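To make the combination-lock analogy concrete, here is a toy Python sketch
(my own illustration, not anyone's actual AI code): candidate "programs"
are just bit strings, fitness counts how many bits match a hidden target
combination, and blind mutate-and-select always gets there given enough
trials.

  import random

  # Toy illustration only: evolve a bit string toward a hidden target.
  TARGET = [random.randint(0, 1) for _ in range(32)]   # the hidden "combination"

  def fitness(candidate):
      # Count how many positions match the hidden target.
      return sum(c == t for c, t in zip(candidate, TARGET))

  def mutate(candidate, rate=0.05):
      # Flip each bit with a small probability.
      return [1 - b if random.random() < rate else b for b in candidate]

  best = [random.randint(0, 1) for _ in range(32)]
  generation = 0
  while fitness(best) < len(TARGET):
      generation += 1
      # Make a small population of mutants and keep the fittest.
      population = [mutate(best) for _ in range(20)] + [best]
      best = max(population, key=fitness)

  print("Found the combination after", generation, "generations")

Real AI is obviously not a 32-bit combination, but the point stands: a dumb
search plus enough computing cycles will eventually stumble onto anything
that can be specified by a fitness test.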
>
> > > > the singularity than the imho cringing one proposed by the Institute of
> > > > building an AI and - if everything works out as hoped - maybe humans will be
> > > > permitted to scale the heights; what I would call the "singularity by proxy"
> > > > path. I, for one, intend to participate DIRECTLY in the singularity. I
> > > > hope there are at least a few others here as well.
> > > >
> >
> >In order to participate directly in a transhuman based Singularity you
> >would have to be one of the first humans enhanced into transhumanity. How
> >do you plan to achieve that? Even if you do, the vast majority of humanity
> >will just be riding your coattails no matter which path occurs first.
> >
> >Secondly, without an AI to guide things, what prevents individuals intra- or
> >post-Singularity from using nanotech or other ultratechnologies in destructive
> >ways in an anarchic fashion? I'd like to hear a brief but coherent timeline/
> >description of how you think this would play out. Our argument is that while
> >it all probably would turn out ok, it would generally be safer to get a
> >Friendly AI in place first.
>
> Well, personally, I'm still not sold on this whole Friendliness
Care to answer my questions?
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/