Hello, friends! This blog is dedicated to recent updates in artificial intelligence and robotics.
Thursday, January 21, 2010
Robotics Timeline
* Robots capable of manual labour tasks
o 2009 - robots that perform searching and fetching tasks in an unmodified library environment - Professor Angel del Pobil (University Jaume I, Spain), 2004
o 2015-2020 - every South Korean household, and many European ones, will have a robot - The Ministry of Information and Communication (South Korea), 2007
o 2018 - robots will routinely carry out surgery - South Korean government, 2007
o 2022 - intelligent robots that sense their environment, make decisions, and learn will be used in 30% of households and organizations - TechCast
o 2030 - robots capable of performing most manual jobs at a human level - Marshall Brain
o 2034 - robots (home automation systems) will perform most household tasks - Helen Greiner, Chairman of iRobot
* Military robots
o 2015 - one third of US fighting strength will be composed of robots - US Department of Defense, 2006
o 2035 - first completely autonomous robot soldiers in operation - US Department of Defense, 2006
o 2038 - first completely autonomous robot flying car in operation - US Department of Technology, 2007
* Developments related to robotics from Japan's NISTEP 2030 report:
o 2013-2014 - agricultural robots (AgRobots)
o 2013-2017 - robots that care for the elderly
o 2017 - medical robots performing minimally invasive surgery
o 2017-2019 - fully capable household robots
o 2019-2021 - nanorobots
o 2021-2022 - transhumanism
Wednesday, January 20, 2010
Robotics in 2020
Robots will be commonplace: in homes, factories, agriculture, building and construction, undersea, in space, in mining, in hospitals, and on the streets - for repair, construction, maintenance, security, entertainment, companionship, and care.
Purposes of these Robots:
* Robotized space vehicles and facilities
* Anthropomorphic general-purpose robots with human-like hands used for factory jobs; intelligent robots for unmanned plants; totally automated factories will be commonplace
* Robots for guiding blind people
* Robots for almost any job in the home or hospital, including robo-surgery
* Housework robots for cleaning, washing, etc. Domestic robots will be small, specialized, and attractive (e.g. cuddly)
Properties of these robots:
* Autonomous, with environmental awareness sensors
* Self-diagnosing and self-repairing
* Artificial brains with ten thousand or
The International Robot Exhibition (IREX), organized by the Japan Robot Association (JARA), is a biennial robot exhibition held since 1973 that features state-of-the-art robot technologies and products.
Tuesday, January 19, 2010
Bits vs. atoms
Engelberger and others at the show drew a sharp contrast between the explosive growth of the computer industry over the past few decades and the relative stagnation of the robotics field. While venture capitalists were lining up to fund computer start-ups, Engelberger, despite his impressive résumé, was unable to get financing for his robot that would help people live at home rather than go into a nursing home.
The robotics industry today is about as far along the road to widespread commercial acceptance as the PC industry was in the 1970s. The differences are that robotics doesn't have an equivalent of Moore's Law, the industry hasn't settled on standards, there's not much in the way of venture capital money, and there's really no viable commercial application - killer or otherwise, said Paolo Pirjanian, chief scientist at Evolution Robotics.
On the show floor, several vendors displayed small demo robots that used sensors to navigate the show floor - literally technologies in search of an application. Unfortunately, the economics are such that it's extremely difficult to build a true robot that can interact with its environment at a cost that would attract consumers, Pirjanian said.
The vacuum cleaner is a good example. Electrolux tried to market a robotic vacuum cleaner called Trilobite that uses ultrasound to get around, but at $1,800 consumers weren't biting. The Roombas and e-Vacs are affordable - between $150 and $250 - but they lack the sophisticated capabilities that one would want in a robotic vacuum cleaner, such as obstacle avoidance, the ability to go up and down steps, and the ability to know where it had already vacuumed.
Saturday, January 16, 2010
The Future Of Robots
Engineers have built humanoid robots that can recognize objects by color, processing information from a camera mounted on the robot's head. The robots are programmed to play soccer, with the intention of creating a team of fully autonomous humanoid robots able to compete against a championship human team by 2050. They have also designed tiny robots to mimic the communicative "waggle dance" of bees.
A world of robots may seem like something out of a movie, but it could be closer to reality than you think. Engineers have created robotic soccer players, bees and even a spider that will send chills up your spine just like the real thing.
They're big ... they're strong ... they're fast! Your favorite big screen robots may become a reality.
Powered by a small battery on her back, humanoid robot Lola is a soccer champion.
"The idea of the robot is that it can walk, it can see things because it has a video camera on top," Raul Rojas, Ph.D., professor of artificial intelligence at Freie University in Berlin, Germany, told Ivanhoe.
Using the camera mounted on her head, Lola recognizes objects by color. The information from the camera is processed by an onboard microchip, which activates different motors.
"And using this camera it can locate objects on the floor for example a red ball, go after the ball and try to score a goal," Dr. Rojas said. A robot with a few tricks up her sleeve.
German engineers have also created a bee robot. Covered with wax so it is not stung by the other bees, it mimics the "waggle" dance - a figure-eight pattern for communicating the location of food and water.
"Later what we want to prove is that the robot can send the bees in any decided direction using the waggle dance," Dr. Rojas said.
Robots like this could one day become high-tech surveillance tools that secretly fly and record data ... and a robot you probably won't want to see walking around anytime soon? The spider-bot.
Friday, January 15, 2010
Artificial Intelligence - Chatterbot Eliza description

Artificial Intelligence - Chatterbot Eliza is an Eliza-like chatterbot program.
The implementation has been improved: repetitions made by the program are handled better, the context of a conversation is handled better, and the program can now correct grammatical errors that can occur after conjugating verbs.
Finally, the database is bigger than last time; it includes some of the script originally used in Joseph Weizenbaum's first implementation of the chatterbot Eliza. Most of the chatterbots written these days are largely based on Weizenbaum's original Eliza, which means they use appropriate keywords to select which responses to generate when they receive new input from users.
More generally, the technique used in a chatterbot database, or "script file", to represent the chatterbot's knowledge is known as Case-Based Reasoning (CBR). A very good example of an Eliza-like chatterbot is ALICE, a program that has won the Loebner Prize for the most human-like chatterbot three times (www.alicebot.org).
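To make the keyword technique concrete, here is a minimal, self-contained Python sketch of Eliza-style response selection. The tiny keyword table and canned responses are illustrative assumptions, not Weizenbaum's original script, which also ranked keywords and transformed the user's input.

```python
import random
import re

# Toy "script file": keyword -> canned responses (illustrative only).
SCRIPT = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "always": ["Can you think of a specific example?"],
    "i feel": ["Why do you feel that way?"],
}
DEFAULTS = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input):
    """Pick a response for the first known keyword found in the input."""
    text = user_input.lower()
    for keyword, responses in SCRIPT.items():
        if re.search(r"\b" + re.escape(keyword) + r"\b", text):
            return random.choice(responses)
    return random.choice(DEFAULTS)

print(respond("My mother always criticizes me."))
```

Each stored keyword/response pair acts like a "case" that is retrieved when a matching input arrives, which is why this style of scripted knowledge is described as Case-Based Reasoning.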
The goal of NLP and NLU is to create programs capable of understanding natural languages and of processing them - taking input from the user by voice recognition, or producing output by text-to-speech.
During the last few decades there has been a lot of progress in voice recognition and text-to-speech; however, the goal of NLU - software that shows a good level of understanding of natural languages in general - still seems quite far off to many AI experts. The general view is that it will take at least several decades before any computer can begin to really understand natural language the way humans do.
Thursday, January 14, 2010
Technologies of affective computing
Emotional speech
Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. Vocal parameters and prosody features such as pitch variables and speech rate are analyzed through pattern recognition.
Emotional inflection and modulation in synthesized speech, whether through phrasing or acoustic features, is useful in human-computer interaction; such a capability makes speech sound natural and expressive. For example, a dialog system might modulate its speech to be more puerile if it deems the emotional model of its current user to be that of a child.
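As a concrete illustration of the analysis step, here is a minimal Python sketch that extracts two of the prosody features named above - pitch variability and speech rate - using the librosa audio library. The file name, the pitch range, and the crude threshold standing in for a trained classifier are all illustrative assumptions.

```python
import numpy as np
import librosa

def prosody_features(wav_path):
    """Extract simple pitch and speech-rate features from a recording."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch track
    f0 = f0[voiced & ~np.isnan(f0)]
    onsets = librosa.onset.onset_detect(y=y, sr=sr)  # rough event/syllable rate
    duration = len(y) / sr
    return {
        "pitch_mean": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_var": float(np.var(f0)) if f0.size else 0.0,
        "speech_rate": len(onsets) / duration if duration else 0.0,
    }

feats = prosody_features("utterance.wav")  # hypothetical input file
# A real recognizer would feed these features to a trained classifier;
# this one-line rule is only a stand-in.
print(feats, "->", "aroused" if feats["pitch_var"] > 500 else "calm")
```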
Facial expression
The detection and processing of facial expressions is achieved through various methods such as optical flow, hidden Markov models, neural network processing, or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, or facial expressions and hand gestures) to provide a more robust estimate of the subject's emotional state.
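Of the methods listed, optical flow is the easiest to show in a few lines. The Python/OpenCV sketch below computes a dense per-pixel motion field between two consecutive face frames; the file names are placeholders, and a real system would feed statistics of the motion field to a trained classifier rather than print them.

```python
import cv2
import numpy as np

# Two consecutive frames of a face (placeholder file names).
prev = cv2.imread("face_frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("face_frame2.png", cv2.IMREAD_GRAYSCALE)
assert prev is not None and curr is not None, "missing input frames"

# Farneback dense optical flow: an (H, W, 2) field of per-pixel motion.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# Motion statistics like these can serve as expression features
# (e.g. strong upward motion around the mouth during a smile).
print("mean motion:", magnitude.mean(), "median angle:", np.median(angle))
```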
Body gesture
Body gesture refers to the position of the body and how it changes. Many methods have been proposed for detecting body gestures. Hand gestures have been a common focus of this work; appearance-based methods and 3-D model-based methods are traditionally used.
Visual aesthetics
Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, with a peer-rated online photo-sharing website as the data source. They extract visual features based on the intuition that these can discriminate between aesthetically pleasing and displeasing images. The work is demonstrated in the ACQUINE system on the Web.
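A minimal sketch of that machine-learning framing with scikit-learn: hand-crafted visual features go in, a predicted peer rating comes out. The four features and the synthetic training data below are illustrative assumptions, not the actual ACQUINE feature set or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row: [brightness, colorfulness, rule-of-thirds score, sharpness]
X = rng.random((500, 4))
# Synthetic 0-10 "peer ratings" standing in for the photo-site labels.
y = 10 * X @ np.array([0.2, 0.4, 0.3, 0.1]) + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("predicted aesthetic score:", round(model.predict(X_test[:1])[0], 2))
```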
Potential applications
In e-learning applications, affective computing can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, e.g. counseling, can benefit from affective computing applications when determining a client's emotional state. An affective system can also send a message via color or sound to express an emotional state to others.
Robotic systems capable of processing affective information exhibit greater flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing abilities to enhance realism and provide a higher degree of autonomy.
Other potential applications center around social monitoring. For example, a car could monitor the emotions of all its occupants and engage additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing has potential applications in human-computer interaction, such as affective mirrors that let users see how they perform, emotion-monitoring agents that send a warning before an angry email is sent, or music players that select tracks based on mood.
Affective computing is also being applied to the development of communicative technologies for use by people with autism.