From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Jun 30 2002 - 11:51:56 MDT
> > The idea that Novamente has more potential to *ever* be smarter than
> > Microsoft Word is *also* "just my opinion"... or rather, "just the
> > opinion of me and the others who have studied the codebase"
> >
> > Can't you see that if the odds of a certain software system going
> > superhuman are *sufficiently low*, then no protective measures are
> > necessary, or even meaningful?
> Certainly, although last I checked M$ Word wasn't self modifying.
If it were, would Microsoft tell you??
> the point all my devil's advocacy is about: if you get to the point where
> you have running self modifying code, you should already have in place
> plenty of safety measures.
Actually, now I think you're not being conservative enough. Or maybe you're
just being loose with the definition of "self-modifying."
I think that a takeoff to vastly superhuman intelligence *could* occur
without any significant self-modification, just via a system with a fixed
(but good) AI architecture learning a lot, including learning how to think
better. Of course, all learning involves some self-modification (modifying
the memory with new knowledge, modifying one's cognitive schema for
approaching different kinds of computers, etc.).
I would rather say "when you get to the point where you have a system that
can autonomously interact with the outside world, and can potentially
experience significant intelligence increase."
> Actually, no, the reason is just that you're around and willing to talk
> about it. Which I do have to give you a lot of credit for. I'll be even
> more impressed if you do eventually get out your "social policy" for
> criticism well before you get your total codebase running.
Yeah, we will do that...
> believe any one of them, you, us, or anyone else working on this stuff
> needs to have concrete, well-documented plans for how to deal with these
> issues well before they get near the point of actual testing. If one of
> them was here instead of you I'd be pressuring them just the same...
> hopefully they are getting the message by lurking.
So far as I know, all other AGI developers take roughly the same position as
I do, albeit more quietly.
They think that once their system is advanced enough according to their own
intuitions, they're going to put in appropriate protections.
E.g. I know Peter Voss thinks it's just way too early to be seriously
talking about such things, and that he's said as much to Eliezer as well...
> I am still concerned that your commercial focus may be causing you to
> cut dangerous corners in the long run,
Hey, I would love to have some funding for "pure Novamente AI" work;
hopefully that will come in the next couple of years...
> hopefully you'll address that in your eventual documentation. Have a good
> vacation! :-)
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:00:29 MDT