Engelberger and others at the show drew a sharp contrast between the explosive growth of the computer industry over the past few decades and the relative stagnation of the robotics field. While venture capitalists were lining up to fund computer start-ups, Engelberger, despite his impressive résumé, was unable to get financing for his robot that would help people live at home rather than go into a nursing home.
The robotics industry today is about as far along the road to widespread commercial acceptance as the PC industry was in the 1970s. The differences are that robotics doesn't have an equivalent of Moore's Law, the industry hasn't settled on standards, there's not much in the way of venture capital money, and there's really no viable commercial application - killer or otherwise, said Paolo Pirjanian, chief scientist at Evolution Robotics.
On the show floor, several vendors displayed small demo robots that used sensors to navigate the show floor - literally technologies in search of an application. Unfortunately, the economics are such that it's extremely difficult to build a true robot that can interact with its environment at a cost that would attract consumers, Pirjanian said.
The vacuum cleaner is a good example. Electrolux tried to market a robotic vacuum cleaner called Trilobite that uses ultrasound to get around, but at $1,800 consumers weren't biting. The Roombas and e-Vacs are affordable - between $150 and $250 - but they lack the sophisticated capabilities that one would want in a robotic vacuum cleaner, such as obstacle avoidance, the ability to go up and down steps, and the ability to know where it had already vacuumed.
Hello, friends! This blog is dedicated to recent updates related to artificial intelligence and robotics.
Saturday, January 16, 2010
The Future Of Robots
Engineers have built humanoid robots that recognize objects by color, processing information from a camera mounted on the robot's head. The robots are programmed to play soccer, with the intention of creating a team of fully autonomous humanoid robots able to compete against a championship human team by 2050. They have also designed tiny robots that mimic the communicative "waggle dance" of bees.
A world of robots may seem like something out of a movie, but it could be closer to reality than you think. Engineers have created robotic soccer players, bees and even a spider that will send chills up your spine just like the real thing.
They're big ... they're strong ... they're fast! Your favorite big screen robots may become a reality.
Powered by a small battery on her back, humanoid robot Lola is a soccer champion.
"The idea of the robot is that it can walk, it can see things because it has a video camera on top," Raul Rojas, Ph.D., professor of artificial intelligence at Freie University in Berlin, Germany, told Ivanhoe.
Using the camera mounted on her head, Lola recognizes objects by color. The information from the camera is processed by an onboard microchip, which activates different motors.
"And using this camera it can locate objects on the floor for example a red ball, go after the ball and try to score a goal," Dr. Rojas said. A robot with a few tricks up her sleeve.
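A minimal sketch of the kind of color-based detection Lola's vision pipeline relies on: scan an image for strongly red pixels and return their centroid so the robot can steer toward the ball. The image format, thresholds, and function name are illustrative assumptions, not Lola's actual implementation.

```python
def find_red_object(image):
    """Return the centroid (row, col) of 'red' pixels, or None.

    `image` is a list of rows; each pixel is an (r, g, b) tuple, 0-255.
    A pixel counts as red when the red channel clearly dominates.
    """
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, (red, green, blue) in enumerate(row)
            if red > 150 and red > 2 * max(green, blue)]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# A 3x3 test frame with a red "ball" in the center.
frame = [[(0, 0, 0), (0, 0, 0), (0, 0, 0)],
         [(0, 0, 0), (255, 20, 20), (0, 0, 0)],
         [(0, 0, 0), (0, 0, 0), (0, 0, 0)]]
print(find_red_object(frame))  # (1.0, 1.0)
```

A real robot would run this kind of thresholding on every camera frame and feed the centroid to the motor controller that turns the head and legs toward the target.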
German engineers have also created a bee robot. Covered with wax so it's not stung by the real bees, it mimics the "waggle" dance, a figure-eight pattern bees use to communicate the location of food and water.
"Later what we want to prove is that the robot can send the bees in any decided direction using the waggle dance," Dr. Rojas said.
Robots like this could one day become high-tech surveillance tools that secretly fly and record data. And a robot you probably won't want to see walking around anytime soon: the spider-bot.
Friday, January 15, 2010
Artificial Intelligence - Chatterbot Eliza description

Artificial Intelligence - Chatterbot Eliza is an Eliza-like chatterbot.
The implementation of the program has been improved: repetitions made by the program are better handled, context in a conversation is better handled, and the program can now correct grammatical errors that can occur after conjugating verbs.
Finally, the database is bigger than last time; it includes some of the script originally used in the first implementation of the chatterbot Eliza by Joseph Weizenbaum. Most of the chatterbots written these days are largely based on Weizenbaum's original Eliza, which means they use appropriate keywords to select the responses to generate when they receive new input from the user.
More generally, the technique used in a chatterbot database or "script file" to represent the chatterbot's knowledge is known as Case-Based Reasoning, or CBR. A very good example of an Eliza-like chatterbot is "Alice"; this program has won the Loebner Prize for most human chatterbot three times (www.alicebot.org).
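The keyword-selection technique described above can be sketched in a few lines: scan the user's input for known keywords and pick one of the canned responses tied to the first match, falling back to a generic prompt. The script entries below are made-up illustrations, not Weizenbaum's original Eliza script.

```python
import random

# Toy Eliza-style script: keyword -> candidate responses.
# These entries are illustrative, not the original Eliza database.
SCRIPT = {
    "mother": ["Tell me more about your family."],
    "sad":    ["I am sorry to hear you are sad.", "Why do you feel sad?"],
    "always": ["Can you think of a specific example?"],
}
DEFAULT = ["Please go on."]

def respond(user_input):
    """Return a response by matching the first known keyword in the input."""
    words = user_input.lower().split()
    for keyword, responses in SCRIPT.items():
        if keyword in words:
            return random.choice(responses)
    return random.choice(DEFAULT)

print(respond("My mother is kind"))  # Tell me more about your family.
```

Real Eliza-style scripts also rank keywords by priority and transform the input (e.g., swapping "my" for "your") before slotting it into a response template; the lookup loop above is the core of the idea.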
The goal of NLP and NLU is to create programs capable of understanding natural language, whether getting input from the user through voice recognition or producing output through text-to-speech.
Over the last few decades there has been a lot of progress in voice recognition and text-to-speech. However, the goal of NLU, software that shows a good level of understanding of natural language in general, still seems quite far off to many AI experts. The general view is that it will take at least several more decades before any computer can begin to really understand natural language the way humans do.
Thursday, January 14, 2010
Technologies of affective computing
Emotional speech
Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. Vocal parameters and prosody features such as pitch variables and speech rate are analyzed through pattern recognition.
Emotional inflection and modulation in synthesized speech, whether through phrasing or acoustic features, is useful in human-computer interaction. Such capability makes speech more natural and expressive. For example, a dialog system might modulate its speech to sound more childlike if it deems the emotional model of its current user to be that of a child.
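The pattern-recognition step described above can be illustrated with a nearest-centroid sketch: represent each emotion by typical values of two prosody features, pitch variability and speech rate, and label a new utterance by the closest centroid. The centroid values here are made-up illustrations, not measured data.

```python
import math

# Illustrative emotion centroids: (pitch variability in Hz, speech rate in
# syllables/sec). These numbers are assumptions for the sketch, not real data.
CENTROIDS = {
    "neutral": (20.0, 4.0),
    "angry":   (60.0, 6.5),
    "sad":     (10.0, 2.5),
}

def classify(pitch_var, rate):
    """Return the emotion whose centroid is closest to the given features."""
    def dist(c):
        return math.hypot(pitch_var - c[0], rate - c[1])
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify(55.0, 6.0))  # angry
```

A production system would extract these features from the audio signal itself and use a trained statistical classifier, but the core idea, mapping prosody features to an emotion label, is the same.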
Facial expression
The detection and processing of facial expressions is achieved through various methods such as optical flow, hidden Markov models, neural network processing, or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g., facial expressions and speech prosody, or facial expressions and hand gestures) to provide a more robust estimate of the subject's emotional state.
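One simple way to fuse modalities, shown here as a hedged sketch rather than any specific published method, is late fusion with a product rule: each modality emits a probability distribution over emotions, and the per-emotion probabilities are multiplied and renormalized. The numbers below are illustrative.

```python
def fuse(*distributions):
    """Combine per-modality probability distributions with a product rule."""
    emotions = distributions[0].keys()
    scores = {e: 1.0 for e in emotions}
    for dist in distributions:
        for e in emotions:
            scores[e] *= dist[e]
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}

# Illustrative classifier outputs for two modalities.
face =   {"happy": 0.6, "angry": 0.3, "neutral": 0.1}  # facial expression
speech = {"happy": 0.5, "angry": 0.1, "neutral": 0.4}  # speech prosody
fused = fuse(face, speech)
print(max(fused, key=fused.get))  # happy
```

The product rule rewards emotions that both modalities agree on; when the face and the voice disagree, the fused estimate is pulled toward whichever signal is more confident.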
Body gesture
Body gesture refers to the position of the body and its changes over time. Many methods have been proposed for detecting body gestures. Hand gestures have been a common focus of body gesture detection; appearance-based methods and 3-D modeling methods are traditionally used.
Visual aesthetics
Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, using a peer-rated online photo-sharing website as the data source. They extract certain visual features based on the intuition that these features can discriminate between aesthetically pleasing and displeasing images. The work is demonstrated in the ACQUINE system on the Web.
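The recipe above, extract visual features from peer-rated images and learn a classifier, can be sketched end to end in miniature. This is in the spirit of that work, not a reproduction of it: the features (brightness and contrast), the perceptron learner, and the toy "ratings" are all assumptions made for illustration.

```python
def features(image):
    """Extract normalized brightness and contrast from a grayscale image."""
    pixels = [p for row in image for p in row]      # grayscale values, 0-255
    mean = sum(pixels) / len(pixels)                # brightness
    contrast = (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5
    return [mean / 255, contrast / 255, 1.0]        # features + bias term

def train(samples, epochs=50, lr=0.1):
    """Fit a perceptron on (image, pleasing-or-not) training pairs."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for image, pleasing in samples:
            x = features(image)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(3):
                w[i] += lr * (pleasing - pred) * x[i]
    return w

def predict(w, image):
    x = features(image)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0

# Toy "peer-rated" data: the high-contrast image is rated pleasing (1).
flat  = [[100, 100], [100, 100]]
sharp = [[0, 255], [255, 0]]
w = train([(flat, 0), (sharp, 1)])
print(predict(w, sharp), predict(w, flat))  # True False
```

The real system uses far richer features (color distributions, composition, depth of field) and much larger rated datasets, but the structure, features in, learned aesthetic score out, is the same.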
Potential applications
In e-learning applications, affective computing can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, such as counseling, can benefit from affective computing applications when determining a client's emotional state. Affective computing can also convey an emotional state to others via color or sound.
Robotic systems capable of processing affective information exhibit greater flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing abilities to enhance realism and provide a higher degree of autonomy.
Other potential applications are centered around social monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing has potential applications in human computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood.
Affective computing is also being applied to the development of communicative technologies for use by people with autism.
Wednesday, January 13, 2010
Affective computing
Affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.
Areas of affective computing
1. Detecting and recognizing emotional information
2. Emotion in machines
Technologies of affective computing
1. Emotional speech
2. Facial expression
3. Body gesture
4. Visual aesthetics
5. Potential applications
Application examples
1. Wearable computer applications make use of affective technologies, such as detection of biosignals
2. Human–computer interaction
3. Kismet
4. ACQUINE Aesthetic Quality Inference Engine
Tuesday, January 12, 2010
Philosophy of artificial intelligence
In 1950 Alan M. Turing published "Computing Machinery and Intelligence" in Mind, in which he proposed that machines could be tested for intelligence using questions and answers. This process is now named the Turing Test. The term Artificial Intelligence (AI) was first used by John McCarthy, who considers it to mean "the science and engineering of making intelligent machines". It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. AI is studied in the overlapping fields of computer science, psychology, neuroscience and engineering, dealing with intelligent behavior, learning and adaptation, and is usually developed using customized machines or computers.
Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, natural language, speech and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real life problems, knowledge mining, software applications, strategy games like computer chess and other video games. One of the biggest difficulties with AI is that of comprehension. Many devices have been created that can do amazing things, but critics of AI claim that no actual comprehension by the AI machine has taken place.
The debate about the nature of the mind is relevant to the development of artificial intelligence. If the mind is indeed a thing separate from or higher than the functioning of the brain, then hypothetically it would be much more difficult to recreate within a machine, if it were possible at all. If, on the other hand, the mind is no more than the aggregated functions of the brain, then it will be possible to create a machine with a recognisable mind (though possibly only with computers much different from today's), by simple virtue of the fact that such a machine already exists in the form of the human brain.
Monday, January 11, 2010
Neural Networks and Parallel Computation
In the quest to create intelligent machines, the field of Artificial Intelligence has split into several different approaches based on opinions about the most promising methods and theories. These rival theories have led researchers down one of two basic approaches: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.

The neuron "firing", passing a signal to the next in the chain.
Research has shown that a signal received by a neuron travels through the dendrite region, and down the axon. Separating nerve cells is a gap called the synapse. In order for the signal to be transferred to the next neuron, the signal must be converted from electrical to chemical energy. The signal can then be received by the next neuron and processed.
Warren McCulloch, after completing medical school at Yale, along with the mathematician Walter Pitts, proposed a hypothesis to explain the fundamentals of how neural networks make the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. A backbone of mathematical logic, binary numbers (represented as 1s and 0s, or true and false) were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.
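The McCulloch-Pitts model is simple enough to write out directly: inputs and output are binary, and the neuron "fires" (outputs 1) when the weighted sum of its inputs reaches a threshold. This sketch shows how a single such unit can compute logical AND and OR, which is exactly the link to binary logic described above.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fire (1) iff the weighted sum of the
    binary inputs reaches the threshold, else stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, the choice of threshold turns one neuron into a logic gate.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

Because gates like these can be wired into arbitrary boolean circuits, networks of such neurons can in principle compute anything a digital computer can, which is why the model mattered so much to early AI.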
