From: James Higgins (firstname.lastname@example.org)
Date: Sat Jul 28 2001 - 04:10:22 MDT
At 04:32 AM 7/28/2001 -0400, Brian Atkins wrote:
>I'm just going to go through all three messages quoted here and
Ok, and I'll respond to a couple of those...
> > > > The problem is figuring out what exactly will make us smarter
> > > > and how to integrate that into our existing brain architecture. It's
> > > > not as simple as adding more memory -- there are many different types
> > > > of memory in the brain, and they are highly distributed and tightly
> > > > connected with the computations being performed. Also, there are a
> > > > lot of calibration problems that have to be overcome if we would
> > > > like to be able to recognize meaningful patterns in the brain.
>Exactly, it may well be impossible to come up with a one-size-fits-all
>technology for something as uniquely individual as the brain. And what
>company will take the risks to commercialize it if they know that for many
>people it won't work, or that they even risk getting sued? We live in a
>country where Dow Chemical got sued by women who got breast implants.
>Will companies really expose themselves to the kinds of risks involved
>with neural hacking?
Hello? Sorry, but I just HAVE to point this out. Did you know that there
are more countries in the world than the United States? Personally,
if/when they come up with implants that offer a significant mental
advantage and have a low chance of screwing you up I *will* be getting
one. I don't care if I have to go to Japan, Europe, Russia, Mexico or
Chiba City (CyberPunk is my favorite fictional genre). When it becomes
possible to do, it will also become possible to get (and without waiting
for FDA approval)! Then, assuming these have a significant effect on
intelligence, the next series will likely be available sooner than might be
expected (you have to assume the developers are going to use their own
product). I also imagine that income for upgraded individuals will
drastically go up, which will make affording the next upgrade much
easier. Which is another reason why I'd want to get on the boat early.
But, that said, this will still take a very long time. Possibly much
longer than the AI path. However, I will NOT say that the AI path is
likely to be faster than this path since NO ONE IN THE WHOLE WORLD HAS EVER
CREATED ANYTHING REMOTELY SIMILAR TO REAL AI. And thus it is IMPOSSIBLE
to estimate if/when we will ever get real AI. Without incredibly massive
funding it may take 15-20 years just to build a knowledge base sufficient
to kick start the thing. And you can't seriously argue the point because,
honestly, you don't know otherwise. I give very serious credit to Ben
Goertzel's opinions on AI (keep up the great work) and I doubt he could, in
all honesty, give any sort of realistic timeline for the first Real AI
(TM). Thus I don't know, you don't know, we don't know.
> > > Of course, if you can interface one human, then you can do it to a
> > > thousand or a billion. You don't need detailed models of the brain
> > > for this kind of thing - at least to start. You can begin with a
> > > "what do you feel when I do this?" kind of thing and once crude
> > > DNIs are working, things can take off.
>And so theoretically if you can "interface" a human, what does that get
>you? Slightly quicker output for typing or controlling machines, maybe
>slightly quicker input than you could get by reading? Expanded access
>to memory, but I bet that would be very hard to do. But where does the
>massive intelligence increase we want come from?
Even self-upgrading AI will take many steps to get there. Same exact
thing, just a different route. No technology is going to just go *blip*
and produce Singularity.
You know, producing the first Real AI may be so difficult that it may just
require augmented humans to get there in any reasonable amount of
time. Have you considered that possible reality?
> > > the singularity than the imho cringing one proposed by the Institute
> > > of building an AI and - if everything works out as hoped - maybe
> > > humans will be permitted to scale the heights; what I would call the
> > > "singularity by proxy" path. I, for one, intend to participate
> > > DIRECTLY in the singularity. I hope there are at least a few others
> > > here as well.
> > >
>In order to participate directly in a transhuman based Singularity you
>would have to be one of the first humans enhanced into transhumanity. How
>do you plan to achieve that? Even if you do, the vast majority of humanity
>will just be riding your coattails no matter which path occurs first.
>Secondly, without an AI to guide things, what prevents individuals intra- or
>post-Singularity from using nanotech or other ultratechnologies in destructive
>ways in an anarchic fashion? I'd like to hear a brief but coherent timeline/
>description of how you think this would play out. Our argument is that while
>it all probably would turn out ok, it would generally be safer to get a
>Friendly AI in place first.
Well, personally, I'm still not sold on this whole Friendliness
thing. Parts of it sound real nice on paper. Parts sound like 1984 might
be paradise in comparison. I personally believe it is incredibly unlikely
that you can code something so specific and have it persist, completely
intact, through millions of self-coded upgrades that produce huge increases
in intelligence. Now, add in the fact that you'll have very little
opportunity for testing it (has to be perfect on the 1st run) and, simply
by looking at software development in general, you're almost guaranteed
failure. Plus, so far everyone seems to agree that we can't even test the
thing for friendliness, because if we even attempt to communicate with it
beyond the early stages it will open the box and set itself free if it
isn't friendly. Thus it sounds lovely, but at present I give you a 5% (at
best) chance of success. AI in general, however, I'd say has a much, much
better chance of achieving the Singularity (80%+).
>Director, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT