From: Rick Geniale (rickgeniale@pibot.com)
Date: Sat Feb 04 2006 - 06:43:42 MST
P K wrote:
>
>
>
>> From: "nuzz604" <nuzz604@gmail.com>
>> Reply-To: sl4@sl4.org
>> To: <sl4@sl4.org>
>> Subject: Re: AGI Reproduction? (Safety)
>> Date: Fri, 3 Feb 2006 20:15:20 -0800
>>
>> I understand that you have good intentions with AGI. My worry
>> involves -accidentally- creating an unfriendly AGI (And this worry
>> applies to anybody who builds an AGI). You can have good intentions
>> and still create an unfriendly AI because of some design flaw (even
>> if it appears to be a great design).
>
>
>> I am worried because no one really knows how a Seed AI will function
>> when it is turned on, and whether it will be friendly or not.
>
>
> The odds of randomly stumbling upon a working AGI are extremely small.
> AGI programming will most likely be a very deliberate process. In
> other words, if and when AGI is created, the builder(s) will most
> likely know exactly what they are doing.
>
>> There are so many things that can go wrong.
>
>
> Yes, but for an AGI to work, a lot of things would have to go right.
> Why would builder(s) capable of overcoming the enormous technical
> challenges of making a working AGI succeed on all the other points and
> fail on that particular one: friendliness? I'm amazed at how, in SF,
> AGI creators are smart enough to build an AGI but give it a goal system
> so stupidly flawed that...(I know SF is just for entertainment; I'm
> trying to make a point here.)
>
> The complexity of the task (AGI) is naturally selecting for builder(s)
> that have a clue.
>
> Do you have any particular reason to believe that the FAI problem is
> more complex than the AGI problem? Most people seem to believe that
> intuitively. This is due to two reasons.
> 1) It is easier to argue about FAI because it doesn't require as much
> technical knowledge. It is easier to grasp the complexity of the
> Friendliness problem first hand. It looks like a big thing to solve.
> 2) General intelligence seems kind of straightforward because we do it
> all the time; however, doing it is definitely not the same as coding
> it. In fact, people systematically underestimate how complex AGI
> really is. There have been many who claimed to have the AGI solution;
> they have all failed to date. If you ever try coding an AGI you will
> very likely realize it is more complex than you originally thought.
>
> These two reasons cause people to focus on the FAI problem more than
> on the AGI problem, which, in my opinion, is a mistake at this stage.
>
> There is another twist to this. The FAI and UFAI concepts are mostly
> useless without AGI; however, working on AGI will very likely help
> develop FAI theory.
> 1) AGI theory will give a clearer picture of how FAI can be
> technically implemented.
> 2) AGI work can have semi-intelligent tools as offshoots that, when
> combined with human intelligence, enhance it (e.g. human + computer +
> Internet > human). We could then work on FAI theory more efficiently
> (and on AGI as well).
Finally somebody is hitting the target.
Also, the problem of the hard takeoff is fake. It has never existed. It
pertains only to SF (I will explain this point in more detail on our site).
>
>> This is why I think that the system and its safety should be
>> analyzed, and go through at least several phases of testing before
>> activation is even considered.
>
>
> There will be plenty of testing all along the project. And there won't
> be just a single activation where the coders put some jargon together,
> compile, and see what happens. (See above.)
> Poof -> FAI -> YAY! -> JK -> UFAI -> NOO! -> R.I.P.
Totally agree.
>
>> I would also feel better if these tests were conducted by a team of
>> independent AGI researchers rather than just one firm, or RGE Corp.
>> by itself.
>
>
> The AGI coder(s) will be pre-selected for competence by the complexity
> of the task. The point is moot for evil coder(s), since they wouldn't
> agree to inspection anyway.
> How would the "independent AGI researchers" be selected? How would we
> know they are trustworthy and competent? I think this introduces more
> uncertainty than it removes.
But what evil coders? There are no evil coders (there never have been). We
are simply a company formed by people who have done hard work, who have
sweated blood to make something positive, and who have invested much
time, money, and effort to do it.
Regarding PIBOT, the first step is to see if it functions properly and
how it functions (IOW, what can PIBOT do?). The second step is to do an
assessment of the AGI scenario. The third step is to open up and share
the technology.
Furthermore, we are Italians. Do you know Italy? The country of sun and
sea. We like good cuisine. We like to dress well. We like friendship. We
live very well in our country.
>> You can have many shots at creating a Seed AI, but you only get one
>> shot at creating a friendly one.
>> If this is the Seed AI that is successful, then I say make that
>> friendly shot count.
>
>
> If you are shooting randomly, there is a small chance you will hit the
> right target, a slightly larger chance you will hit the wrong target,
> and an overwhelmingly huge chance you will never hit anything. If
> you're one of the few who have some talent, opportunity, and
> persistence, you can perfect your archery and hit targets at will. We
> hope you aim at FAI for all.
>
>> Since you want to be open with the technology, I think that it is an
>> idea worth considering.
>
>
> I am somewhat suspicious of AGI claims. The more advanced an AI is, the
> less public proof it needs. A seed AI could recursively self-improve and
> start the Singularity... trust me, we would know. A superhuman AI that
> falls a bit short of the Singularity (which could happen) could at least
> make its owners millionaires; just let it play the stock market. Even a
> slow AI could do some amazing things. At least it would be immune to
> human biases. There might be some areas where it could outperform
> humans. They would have investors chasing them, not the other way
> around. When I think of some workshop-format demonstration, what comes
> to mind is an Eliza-type AI. It looks smart at first glance but is
> inferior to humans in practically every way. I'm not trying to be rude,
> but the lack of a splash sort of indicates that there isn't that much
> to see.
>
>