
Posted: Fri Aug 22, 2003 2:13 pm
by Psychophanta
Thanks to you both.

I'll try to understand and work with it.


AL

My take on all this.

Posted: Fri Aug 22, 2003 10:04 pm
by D'Oldefoxx
Ah, the old Artificial Intelligence question. "What is Intelligence?" Some say the ability to solve problems. Some say the ability to recognize problems. Some say the ability to perceive oneself ("I think, therefore I am"). Neat stuff. Sort of useless, you know?

The first challenge, I believe, was to have a computer attached to one end of a teletype line, and a human at the other, and for the computer to deftly respond to anything typed by the human until the human became utterly convinced that it was a human on the other side, not a machine. In other words, you could not distinguish the machine from the man in that limited context. The context was limited, of course, due to the fact that there are many aspects of being human that we do not envision as being renderable into machine form, such as the ability to get sick, shed skin, grow hair and nails, emit gas, or take out its frustrations on us.

I presume that the intent was to limit the questions and answers to the realm of knowledge and rational thought. But humans can be irrational - emotional, thoughtless, temperamental, obstinate, in a joking or taunting mood - so I suspect that it would always quickly become apparent that only a machine was on the other side of that circuit. It might appear to be smart, but it is not a person.

So, artificial intelligence is trying to focus on just the "smart" aspect of being human. We deem that a highly desirable trait, and we not only want more of it, we want more of it focused on our wants and desires.
So now we know what we are talking about, agreed?

A lot of what we regard as "smart" is tied up in propagation of the species and survival. For us humans, that is. We want machines to be as smart in those areas, but to always put our needs and interests ahead of their own. Thus the three prime robotic laws, which no-one has seriously challenged since Asimov proposed them, I think back in the 1940s. Trouble is, these imply a sense of justice, morality, or ethics that simply does not map into circuit form. So no machine has been devised that adheres to these laws. They are from philosophy, not physics.

If you want to study Artificial Intelligence, then you really need to look beyond neural networks into Fuzzy Logic. Fuzzy Logic is not about True and False, but about Maybe. It is a system of weighted values and associations, like: if this changes, then how does it [seem to] impact that?
It begins with the idea that there is a desired outcome, or series of outcomes, and then tries to identify what contributes to or detracts from that desired outcome. The hope is that if we can convey to a process that uses Fuzzy Logic what we regard as a desired outcome, the process will somehow discover, from the values and sources fed to it, which ones in conjunction with each other tend to bring us closer to our desired goal. In other words, it strives to make correlations, and to improve its results over time as further observations are made.
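
To make the idea concrete, here is a minimal sketch in PureBasic of a fuzzy "Maybe" value and a weighted association. (The temperature ramp, the 0.7/0.3 weights and the whole FanSpeed example are made up purely for illustration, not taken from any real system.)

Code:

; A fuzzy "Maybe" is a value between 0.0 and 1.0 instead of True/False.
; Membership: how much is a temperature "hot"? (ramp values are invented)
Procedure.f HotMembership(temp.f)
  If temp <= 20
    ProcedureReturn 0.0                ; definitely not hot
  ElseIf temp >= 40
    ProcedureReturn 1.0                ; definitely hot
  Else
    ProcedureReturn (temp - 20) / 20   ; a linear "maybe" in between
  EndIf
EndProcedure

; Weighted association of two fuzzy inputs into one decision value.
Procedure.f FanSpeed(temp.f, humidity.f)
  ProcedureReturn 0.7 * HotMembership(temp) + 0.3 * humidity
EndProcedure

Debug FanSpeed(32, 0.5)   ; prints a value between 0 and 1

The interesting part of a real Fuzzy Logic process would then be adjusting those weights as further observations come in, rather than fixing them by hand.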

But suppose one of your stated purposes is to cut business expenses, and you weight this very highly in comparison to other goals. There is no guarantee that the Fuzzy Logic process won't decide that closing all your plants, selling off all your property, and firing everyone is the best way to maximize your cost-cutting efforts. And Fuzzy Logic tends to favor averaging and trends, meaning that it becomes inflexible in a changed environment. You might start off in a long chess game with the idea that winning is the number one goal, but midway through the game that goal may change as you get tired, need to eat, or just want to stop to watch one of your favorite sports teams in the playoffs. Then you might decide your immediate goal is to call a halt, capitulate, or force a draw. You might be able to restate your goals to the Fuzzy Logic process, but it would then depend on whether the process had any case examples on hand that would help it achieve that goal.

So there are aspects of intelligence and being smart that need to be explored. Common sense is one of those. Changing goals is another. Shifting environments is a third. And intuitive leaps (learned associative rules applied in a different context) are likely a fourth.

All thoughts welcomed.


Posted: Fri Aug 22, 2003 11:19 pm
by Psychophanta
In the past, a few philosophers addressed some of the questions you talk about from a philosophical point of view: F. Nietzsche, Voltaire, S. Freud, A. Camus, and some present-day nihilists.

However, it is hard for most people to understand.
The same goes for micro-physics and quantum physics (which nowadays is based on probability calculations, like Fuzzy Logic).

I am sure that a machine with human intelligence (smarts) but without human emotion would destroy itself.
The more intelligent and conscious it is, the faster it would destroy itself.
Unless it had a goal (which is always an instruction from "some" master who turns the machine into a "clever" slave of that goal).

But come on, please, all that is another matter... :roll:
As I said to LJ: let's try to do something with neural nets in PB, OK?



AL

Posted: Tue Feb 17, 2004 11:06 am
by Dare2
I know this topic is reasonably old, but it was very, very interesting.

Re Hopfield et al.

Isn't it a bit artificial :) to remove IF (or conditional test) statements from a program attempting to emulate a neural network (NN)?

If the NN was modelled (by Mead) using a chip or circuit board, then it had states, flow and conditions. These were set by the NN, and/or controlled or at least influenced it. It therefore had conditions, and a means to react to, and act differently because of, these conditions.

If software conditions/tests are disqualified, then the hardware model must be disqualified as well, surely?

Even if the NN was created using a hydraulic system, it would still have conditions and reactions to those conditions.

In and of itself, it would have states, conditions, and behavioural changes caused by them.

Surely?

To take another approach, using the word "describe".

Mathematics can be used to describe just about anything that can be understood (and often can be used to theorise to a point where something can be understood). Software can be used to create mathematical models to simulate or describe those things that are understood.

So chemical reactions, relationships, anything that can be understood, can be "described" by software.

Surely then a NN can be described?

And "branching", "weighting" and complex relationships can be described using conditions and tests for conditions, regardless of whether the states being tested are simple on/off or much more complex. Regardless of whether they cause dramatic branching or just modify some weighting to further strengthen/weaken a path.

Whilst we can say a PC is not a NN, we cannot say a PC cannot simulate the behaviour of a NN. It may not yet have been done, or done effectively, but it is possible.
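
As an illustration of the point, a single artificial neuron can be simulated with nothing more than a loop, arithmetic and one If. (The three inputs, the weights and the 0.5 threshold below are made-up values, just to show the shape of it.)

Code:

; One artificial neuron, using only a loop, arithmetic and an If.
Dim inp.f(2)
Dim wgt.f(2)
inp(0) = 1.0 : inp(1) = 0.0  : inp(2) = 1.0
wgt(0) = 0.4 : wgt(1) = -0.2 : wgt(2) = 0.5

sum.f = 0
For i = 0 To 2
  sum + inp(i) * wgt(i)   ; weighted sum of the inputs
Next

If sum > 0.5              ; the conditional test does the "firing"
  Debug "neuron fires"
Else
  Debug "neuron stays quiet"
EndIf

The If here plays exactly the role the hardware threshold plays in Mead's circuits, which is the point: disqualifying it in software would disqualify the hardware too.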

PS: I only read the Hopfield thing twice, not thrice. :)

Posted: Tue Feb 17, 2004 2:49 pm
by Psychophanta
Nice comment, Dare2.

I don't like to use the term "intelligence", because it is misunderstood by most.
Let's talk about "learning" and "understanding".
I've written some algorithms in PB that make an entity really learn; and it works. It is a total pleasure for me to watch it working. :D
I didn't use NN, but combinations of these three things:
- programming loops
- conditions
- mathematics

So, with all this, I mean that I am sure that what is called AI (badly named) is possible with just those three things.

In general, a learning device must:
- Perform tasks which are supposed to work in order to get the wanted result. (Notice that the wanted result and the performed tasks must be perfectly known.)
- Look at the past and:
- compare the wanted result with the actually obtained result,
- extract the difference from that comparison,
- "abstract" (from that extracted difference) what the "external forces" were that modified our attempt, in order to "correct" the "faults" in the performed operations against those external forces.

All of this learning method can be implemented using a NN, but perhaps it would be better to do it using maths alone (although surely a maths-only method would be harder to implement)... I have my doubts...
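
Here is a minimal sketch of that compare-and-correct loop in PB. (The numbers, and the idea of learning a single gain value, are invented for illustration; this is not the code from my program.)

Code:

; The compare-and-correct loop above, in its smallest form:
; learn a gain so that gain * x approaches the wanted result.
wanted.f = 10.0   ; the wanted result
x.f      = 4.0    ; what the entity acts on
gain.f   = 0.0    ; the thing it must learn
rate.f   = 0.1    ; how strongly each fault is corrected

For n = 1 To 50
  obtained.f = gain * x            ; perform the task
  fault.f    = wanted - obtained   ; compare wanted vs obtained
  gain + rate * fault              ; correct using the difference
Next

Debug gain    ; settles near 2.5, since 2.5 * 4 = 10

Each pass performs the task, measures the fault against the wanted result, and corrects by a fraction of it; after a few dozen passes the gain has converged.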

Posted: Tue Feb 24, 2004 3:33 pm
by Psychophanta
Here is a visual demonstration of what is called A.I.:
http://perso.wanadoo.es/akloiv/ShunSeek&Shoot2D.zip

Yellow objects are launched with random direction and random speed.
The main program captures their positions at known time intervals, which means it knows their exact movement vectors (speed and direction at any moment). Grey objects are projectiles launched by the protagonist (the biggest, brown object) to reach the yellow objects in the least possible time.

There are no learning methods yet, only math calculations using vector geometry.
Although the visual demonstration shows only 2D, all the included calculations are valid for "n" dimensions; I mean, all the included algorithms are valid for 3D, 4D, 5D, ...
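
For anyone curious about the vector part, here is a minimal sketch of an intercept calculation of this kind (all numbers are made up, and the real program's code will differ): the flight time t satisfies |P + V*t| = s*t, a quadratic built only from dot products, which look the same in any number of dimensions.

Code:

; Target at position P moving with velocity V; projectile of speed s
; fired from the origin. Solve (V.V - s*s)t^2 + 2(P.V)t + P.P = 0.
px.f = 100 : py.f = 50    ; target position P
vx.f = -5  : vy.f = 2     ; target velocity V
s.f  = 20                 ; projectile speed

a.f = vx*vx + vy*vy - s*s ; V.V - s*s
b.f = 2 * (px*vx + py*vy) ; 2 * P.V
c.f = px*px + py*py       ; P.P

d.f = b*b - 4*a*c         ; discriminant
If d >= 0 And a <> 0
  t.f = (-b - Sqr(d)) / (2*a)
  If t <= 0
    t = (-b + Sqr(d)) / (2*a)   ; take the other root if needed
  EndIf
  If t > 0
    Debug "aim at X=" + StrF(px + vx*t) + "  Y=" + StrF(py + vy*t)
  EndIf
EndIf

Only the dot products change when you add dimensions: a z component just adds one more term to each of a, b and c.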

I will add learning algorithms for the projectiles when they move in an environment with external forces: wind, friction and gravity.

Besides, I will TRY to give the protagonist a good enough understanding to take the best positions in order to avoid being hit by any object (whose movements will not always be straight and at constant speed).
But this task is really, really, really hard, because it is similar to a chess game, and I don't know how to go deep into that matter.
If someone is interested in this SHUNNING project, please contact me.

Posted: Tue Feb 24, 2004 5:45 pm
by Dare2
Hi Psychophanta,

I can't tell what is happening on screen, as I can only see the first 3 lines of text and, in the same screen zone, the odd yellow object from time to time (there is too much flicker; I have a lousy graphics card).

I am not sure exactly what the objective is, but if it is to make the targets smarter at avoiding and/or the missiles smarter at tracking for a game, would building in "human error", with diminishing error as levels get tougher, perhaps be better than using a learning approach?

I may be talking rot, because, as I say, I am not quite sure what is meant to be happening.

If I am talking rot, ignore this. :)

Posted: Tue Feb 24, 2004 10:21 pm
by LarsG
Psychophanta: that example flickered like a mf, so I almost couldn't see anything.. then when I pressed Esc, the program crashed and took WinXP with it.. (rebooted).. :(

Posted: Tue Feb 24, 2004 10:37 pm
by Psychophanta
Dare2,
Perhaps you have not installed your monitor driver in Windows, and that's probably why you can't see it properly.
This visual demonstration is intended to be watched on a 1024x768 screen.
I use this to open the screen:

Code:

bitplanes.b=32:RX.l=1024:RY.l=768             ; start at 1024x768 in 32-bit colour
If InitSprite()=0 Or InitKeyboard()=0 Or InitMouse()=0
  MessageRequester("Error","Can't open DirectX",0):End
EndIf
While OpenScreen(RX.l,RY.l,bitplanes.b,"")=0  ; keep falling back until a mode opens
  If bitplanes.b>16:bitplanes.b-8             ; first reduce colour depth: 32 -> 24 -> 16
  ElseIf RY.l>600:RX.l=800:RY.l=600           ; then step the resolution down
  ElseIf RY.l>480:RX.l=640:RY.l=480
  ElseIf RY.l>400:RX.l=640:RY.l=400
  ElseIf RY.l>240:RX.l=320:RY.l=240
  ElseIf RY.l>200:RX.l=320:RY.l=200
  Else:MessageRequester("VGA limitation","Can't open Screen!",0):End
  EndIf
Wend
http://perso.wanadoo.es/akloiv/Shun,Seek&Shoot.zip

If you manage to watch it, you'll see that it is cleverer than a human. :wink:
Don't forget to increase the number of objects (Up arrow key).

Posted: Tue Feb 24, 2004 11:02 pm
by Num3
This is the reason why the Matrix trilogy is on my all-time movies list...

Cause and effect... :wink:

Re: Neural Net Programming

Posted: Sun Jul 31, 2005 3:35 am
by Randy Walker
LJ wrote:@Balrog:
We must be careful, when we use the term neural net, not to use it interchangeably with the term A.I., for while few on this web site would notice, should you produce a commercial program and advertise it as a neural net, you will surely be scorned to death, and probably sued if you were not to refund your customers.
As long as the dictionary comes up with definitions like this:

neural network also neural net
n.
A real or virtual device, modeled after the human brain, in which several interconnected elements process information simultaneously, adapting and learning from past patterns.
Ref: http://dictionary.reference.com/search?q=neural%20net

... anyone can apply the term to any form of computer configured for any form of AI they want. It's only an anatomical reference... not a trademark.

Then again, speaking from the US, you can be sued for literally anything, but only one whose intelligence is limited to esoteric interpretations would ever be offended. The bottom line is whether or not the product serves the user's needs. IF it ''simulates thought'' AND it runs on a computer THEN it can be called a neural network. (Esoterics/semantics aside.)

Not intended as a slam... just want to point out that the esoteric definition of a term does not encompass the full extent of its meaning. Contesting the difference in court would be comparable to charging software companies producing mirroring software with cheating all their customers because the image on the mirrored drive is not literally "mirrored" - and what good would it be if it were? Bottom line is the same... did it serve the need? IF it did, THEN you can call it cheesecake for all that matters. Just make sure "cheesecake" isn't already a registered trademark, and don't worry about limited esoteric interpretation.

Re: My take on all this.

Posted: Sun Jul 31, 2005 3:58 am
by Randy Walker
D'Oldefoxx wrote:Ah, the old Artificial Intelligence question. "What is Intelligence?" Some say the ability to solve problems. Some say the ability to recognize problems. Some say the ability to perceive oneself ("I think, therefore I am"). Neat stuff. Sort of useless, you know?
...
All thoughts welcomed.
The whole discussion reminds me of another word I looked up recently. I found it amusing anyway:

ac·a·dem·ic
adj.
1. Of, relating to, or characteristic of a school, especially one of higher learning.
2.
a. Relating to studies that are liberal or classical rather than technical or vocational.
b. Relating to scholarly performance: a student's academic average.
3. Of or belonging to a scholarly organization.
4. Scholarly to the point of being unaware of the outside world. See Synonyms at pedantic.
5. Based on formal education.
6. Formalistic or conventional.
7. Theoretical or speculative without a practical purpose or intention. See Synonyms at theoretical.
8. Having no practical purpose or use.
Ref: http://dictionary.reference.com/search?q=academic

I especially love the way we went from ''...higher learning" to ''no practical purpose or use''. That's pretty much how I feel about the overly ''educated''.

On the other hand, D'Oldefoxx (obviously quite ''knowledgeable'') always comes up with the craftiest of retorts :-)

Knowledge implies understanding, whereas ''educated'' only implies being amenable to programming within the realm of standards set by those who profess ''higher learning''. :lol: