From: Kevin Osborne (kevin.osborne@gmail.com)
Date: Mon Jan 30 2006 - 03:49:54 MST
> On 1/30/06, Kevin Osborne <kevin.osborne@gmail.com> wrote:
> > I thought the idea is that any code is better code than no code
>
> This is not the case when it comes to AGI; getting it wrong could be
> far worse than doing nothing. A critical failure of Friendliness is a
> pretty bad existential risk.
Look, to be terse: I think this issue is overblown; it's right up there
with grey goo. I don't know how many programming hours we are away
from being able to make something unfriendly, when the current state
of the field seems barely into its infancy in terms of conscious-being
creation. We currently can't create something which feels a damn
thing, so until we do, why the hell do we waste so many thought ticks
agonizing over whether we think it will one day pick us for the ball
team? Scaremongers are present in GM, stem cells, nano and elsewhere,
and I'm sure we think they're plenty misguided. Going OT, and sorry to
flame, but some of you old coots need to remove the stick from your
arse. You're supposed to be the shock-level junkies, right? And these
problems are so far away in LoC terms as to be pointless to consider.
Stop being chickenshits already! Maybe confine your fears to
philosophy/morals/ethics posts and give the engineering lights a
chance to breathe. Could it be that the most formidable obstacle to
_friendly_ AGI development is scaredy-cat filibustering?
> > It seems intuitive that encouraging people _not_ to
> > program in the AGI sphere would be counterproductive.
>
> Unless encouraging people who are on the low end of "genius" means
> increasing the odds that the first successful AGI will be less
> Friendly than it would have been, had the standards of AGI project
> recruiters been less compromising.
Sorry to be blunt, but this is more panty-pissing; just get a working
AGI already. We had nukes before we had non-proliferation treaties,
and yet no nuclear winter. We had HIV for feck knows how many years
before we had any kind of treatment, and yet it's a non-fatal disease
in the West now. It seems fair to say that a crucial component in
combating any future malignant AI would be to have a wide range of
AGI implementations and instances with which to develop a 'friendly'
pool. And again, feeble attempts at code-on-chip =/= vengeful AI
overlord; let the mugs have a go - they just might surprise you.
> Speed and safety are competing factors in this. Do you try to start
> the Singularity as quickly as possible, or wait until you're really
> sure you can do it right, with as much margin for error as possible? I
> get the impression that the seed developer qualifications page is not
> just firmly on the side of safety, but designed to make sure that
> people motivated by speed who read it will be personally dissuaded.
The developer qualifications page is a good laugh with some serious
underpinnings. The depth and breadth of the literature involved is
enormous; thanks go to many on this list and others in the field who
are doing the heavy lifting of thrashing out these issues and
expanding our grasp of the subject matter.
(obligatory "but") - but - programmers need not be scientists or
geniuses to produce good code; they need to be good coders. they don't
really even need to be all that informed on the subject matter. I can
happily make an SSL connection without knowing a damn thing about the
tcp/ip stack, block ciphers or the discrete logarithm problem. Yet do
I get point-to-point encryption anyway? you bet.
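To make the point concrete, here's roughly what that looks like (a
minimal sketch using today's Python standard library; example.org is
just a stand-in host):

import socket
import ssl

# Encrypted connection with zero knowledge of the underlying crypto;
# the library picks the ciphers, runs the handshake, checks the cert.
context = ssl.create_default_context()
with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        print(tls.recv(1024))  # plaintext here, ciphertext on the wire

A dozen lines, and not a scrap of number theory required.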
The no-doubt incredibly smart people on this list would, I'm sure,
love to have legions of these wishlist programmers at their disposal;
so would every other VC or idea-hamster with a half-baked idea. It's
important to remember that the majority of the people I talk to about
this subject think I'm half-mad or on crack, and most of them are in
tech, for chrissakes - as good an audience as you're going to get.
Everyone who has an emotional investment in the end-point of AGI not
being a failed dream should maybe think about that when turning away
the few of us nutty enough to think you might be on to something with
this kooky AI business.
For you personally, Ben: please don't take any of my straight-talking
as a personal attack; I'm sure you want to meet humanity's future AGI
love-child(ren) just as much as I do.
My position is one of genuine, general frustration that no resource
avenues of real worth seem to exist for programmers who want to
contribute. Syllabus and recruitment cutoffs have their place, but so
too does an open invitation to expand the development community in any
way possible, for the good of the field.
The major issue seems to be one of 'they might create a monster!',
which just doesn't wash with me, if only because we somehow think that
having read a cognitive-deficit text is going to stop an all-star
programmer from making just as hairy a boogeyman. It does have a touch
of superstitious old women about it, surely? Read this tarot card, so
thoust may be safe from ee-vil!! I'll say it again: for every line of
code you actually get running, you are one step closer to something
(anything!) than you are without it, no matter how crap the code(r)
actually is. I also don't think trying to hard-code any kind of
restraint into a sentient super-intelligence is going to last any
longer than a Louisiana levee; free the code, dammit! Stallman would
be appalled :-)