All of us are aware of the five basic senses: seeing, feeling, smelling, tasting and hearing. But there is also another sense, often called the sixth sense: a connection to something greater than what our physical senses are able to perceive. To a layman it sounds supernatural; some dismiss it as superstition or something purely psychological. But the invention of Sixth Sense technology has taken the world by surprise. Although it is not widely known yet, the time is not far off when this technology will change our perception of the world.
Pranav Mistry, a 28-year-old of Indian origin, is the mastermind behind Sixth Sense technology. He invented 'SixthSense / WUW (Wear UR World)', a wearable, gestural, user-friendly interface that links the physical world around us with digital information and uses hand gestures to interact with it. He is a PhD student at MIT, and he won Popular Science's 'Invention of the Year 2009' award. The device sees what we see, but it also surfaces the information we want to know about the object we are viewing. It can project that information onto any surface, be it a wall, a table or any other object, and uses hand and arm movements to let us interact with the projected information. The device brings computing closer to reality and assists us in making the right decisions by providing relevant information, thereby turning the entire world into a computer.
The SixthSense prototype consists of a pocket projector, a mirror and a camera, worn like a pendant. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of colored markers (visual tracking fiducials) on the tips of the user's fingers using simple computer-vision techniques. The system also supports multi-touch and multi-user interaction.
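To make the fingertip-tracking idea concrete, here is a minimal sketch of colored-marker tracking of the kind described above, written with OpenCV. It is an illustration only, not the SixthSense implementation: the webcam index, the assumption of a single red marker cap, and the HSV color range are all assumptions of this sketch.

```python
# Minimal sketch of colored-marker fingertip tracking, in the spirit of the
# SixthSense prototype described above. Assumes OpenCV, a webcam at index 0,
# and a single red marker cap; the HSV range below is illustrative only.
import cv2
import numpy as np

LOWER_RED = np.array([0, 120, 70])      # assumed HSV range for a red marker
UPPER_RED = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)       # isolate marker pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        marker = max(contours, key=cv2.contourArea)     # largest blob = marker
        m = cv2.moments(marker)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), 2)  # fingertip estimate
    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A gesture recognizer would then sit on top of a stream of such fingertip positions, interpreting their motion as strokes, pinches, or framing gestures.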
The device has a huge number of applications. First, it is portable and easy to carry, since it is worn around the neck. The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can be projected anywhere and zoomed in or out with gestures. The camera also lets the user take pictures of the scene he is viewing, which can later be arranged on any surface. That's not all. Some of the more practical uses involve reading a newspaper: imagine viewing videos instead of the photos in the paper, or getting live sports updates while you read. The device can also show the arrival, departure or delay time of your flight directly on your ticket. For book lovers it is nothing less than a blessing: open any book and you will see its Amazon rating, and pick any page and the device overlays additional information on the text, reader comments and many more add-on features. While picking up an item at the grocery store, the user can see whether the product is eco-friendly. To know the time, one simply draws a circle on the wrist and a watch appears. The device serves the purpose of a computer and saves the time spent searching for information. The current prototype costs around $350 to build. More work is being done on the device, and when fully developed it will definitely revolutionize the world.
Hello, friends! This blog is created specially for recent updates related to artificial intelligence and robotics.
Thursday, January 28, 2010
Wednesday, January 27, 2010
Architecture of Cloud Computing
Architecture
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, comprises hardware and software designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.
This closely resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.
Cloud architecture extends to the client, where web browsers and/or software applications access cloud applications.
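As a small illustration of a client reaching a cloud component over a web-service API, here is a hedged sketch using only the Python standard library. The endpoint URL and the JSON shape are hypothetical, chosen purely for illustration.

```python
# Sketch of a client talking to a cloud component over a web-service API.
# The endpoint URL and JSON layout are hypothetical, for illustration only.
import json
import urllib.request

def get_status(endpoint="https://api.example-cloud.test/v1/status"):
    """Fetch a JSON status document from a (hypothetical) cloud service."""
    with urllib.request.urlopen(endpoint, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(get_status())
```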
Cloud storage architecture is loosely coupled, often assiduously avoiding the use of centralized metadata servers which can become bottlenecks. This enables the data nodes to scale into the hundreds, each independently delivering data to applications or users.
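One common way to avoid a centralized metadata server is to let every client compute which data node owns a given object, for example with consistent hashing. The sketch below is illustrative and does not describe any particular storage product; the node names and ring parameters are assumptions.

```python
# Illustrative sketch of routing objects to data nodes without a central
# metadata server: each client hashes the object key onto a ring of nodes.
# Node names and the number of virtual replicas are hypothetical.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        # Place several virtual points per node on the ring for balance.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Pick the data node responsible for an object key."""
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("photos/2010/01/robot.jpg"))
```

Because each node is located by hashing alone, adding a node only moves a fraction of the keys, which is what lets such storage scale to hundreds of independent data nodes.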
Types by visibility
Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.
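Fine-grained utility billing can be pictured simply as metering each resource and charging per unit. The sketch below is a toy example; the rates and usage figures are made up and do not correspond to any real provider.

```python
# Toy illustration of fine-grained utility billing: meter resource usage and
# charge per unit. All rates and usage figures below are made up.
RATES = {"cpu_hours": 0.085, "gb_stored": 0.15, "gb_transferred": 0.10}

def monthly_bill(usage):
    """Sum metered usage times the per-unit rate for each resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

print(monthly_bill({"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 120}))
```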
Hybrid cloud
A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". A hybrid cloud can describe a configuration combining a local device, such as a Plug computer, with cloud services. It can also describe configurations combining virtual and physical, colocated assets, for example a mostly virtualized environment that requires physical servers, routers, or other hardware such as a network appliance acting as a firewall or spam filter.
Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticized on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially lacking "the economic model that makes cloud computing such an intriguing concept".
While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some uncertainty whether they are a reality even within the same firm. Analysts also claim that within five years a "huge percentage" of small and medium enterprises will get most of their computing resources from external cloud computing providers as they "will not have economies of scale to make it worth staying in the IT business" or be able to afford private clouds. Analysts have reported on Platform's view that private clouds are a stepping stone to external clouds, particularly for the financial services, and that future datacenters will look like internal clouds.
The term has also been used in the logical rather than physical sense, for example in reference to platform as a service offerings, though such offerings including Microsoft's Azure Services Platform are not available for on-premises deployment.
Tuesday, January 26, 2010
Cloud Computing
Cloud computing is Internet-based ("cloud-based") development and use of computer technology ("computing"). In concept, it is a paradigm shift whereby details are abstracted from users, who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.
The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from a web browser, while the software and data are stored on servers.
These applications are broadly divided into the following categories: Software as a Service (SaaS), Utility Computing, Web Services, Platform as a Service (PaaS), Managed Service Providers (MSP), Service Commerce, and Internet Integration.
Friday, January 22, 2010
Human Thoughts Control New Robot
Scientists have created a way to control a robot with signals from a human brain.
By generating the proper brainwaves—picked up by a cap with electrodes that sense the signals and reflect a person's instructions—scientists can instruct a humanoid robot to move to specific locations and pick up certain objects.
The commands are limited to moving forward, picking up one of two objects and bringing it to one of two locations. The researchers have achieved 94 percent accuracy between the thought commands and the robot's movements.
"This is really a proof-of-concept demonstration," said Rajesh Rao, a researcher from the University of Washington who leads the project. "It suggests that one day we might be able to use semi-autonomous robots for such jobs as helping disabled people or performing routine tasks in a person's home."
The person wearing the electrode cap watches the robot's movement on a computer screen through two cameras installed on and above the robot.
When the robot's camera sees the objects that are to be picked up, it passes the information on to the user's computer screen. Each object lights up randomly on the screen. When a person wants something picked up and it happens to light up, the brain registers surprise; this brain activity is sent to the computer, and then to the robot, as the chosen object. The robot then proceeds to pick it up.
A similar algorithm is used to decide where the robot will go.
"One of the important things about this demonstration is that we're using a 'noisy' brain signal to control the robot," Rao said. "The technique for picking up brain signals is non-invasive, but that means we can only obtain brain signals indirectly from sensors on the surface of the head, and not where they are generated deep in the brain. As a result, the user can only generate high-level commands such as indicating which object to pick up or which location to go to, and the robot needs to be autonomous enough to be able to execute such commands."
Thursday, January 21, 2010
Robotics Timeline
* Robots capable of manual labour tasks
o 2009 - robots that perform searching and fetching tasks in an unmodified library environment - Professor Angel del Pobil (University Jaume I, Spain), 2004
o 2015-2020 - every South Korean household, and many European ones, will have a robot - The Ministry of Information and Communication (South Korea), 2007
o 2018 - robots will routinely carry out surgery - South Korean government, 2007
o 2022 - intelligent robots that sense their environment, make decisions, and learn are used in 30% of households and organizations - TechCast
o 2030 - robots capable of performing most manual jobs at a human level - Marshall Brain
o 2034 - robots (home automation systems) performing most household tasks - Helen Greiner, Chairman of iRobot
* Military robots
o 2015 - one third of US fighting strength will be composed of robots - US Department of Defense, 2006
o 2035 - first completely autonomous robot soldiers in operation - US Department of Defense, 2006
o 2038 - first completely autonomous robot flying car in operation - US Department of Technology, 2007
* Developments related to robotics from the Japan NISTEP 2030 report:
o 2013-2014 — agricultural robots (AgRobots).
o 2013-2017 — robots that care for the elderly
o 2017 — medical robots performing low-invasive surgery
o 2017-2019 — household robots with full use.
o 2019-2021 — Nanorobots
o 2021-2022 — Transhumanism
Wednesday, January 20, 2010
Robotics in 2020
Robots will be commonplace: in homes, factories, agriculture, building and construction, undersea, space, mining, hospitals and on the streets, for repair, construction, maintenance, security, entertainment, companionship and care.
Purposes of these Robots:
* Robotized space vehicles and facilities
* Anthropomorphic general-purpose robots with human-like hands used for factory jobs; intelligent robots for unmanned plants; totally automated factories will be commonplace
* Robots for guiding blind people
* Robots for almost any job in home or hospital, including Robo-surgery.
* Housework robots for cleaning, washing, etc.; domestic robots will be small, specialized and attractive, e.g. cuddly
Properties of these robots:
* Autonomous, with environmental awareness sensors
* Self-diagnosing and self-repairing
* Artificial brains with ten thousand or
The International Robot Exhibition (IREX), organized by the Japan Robot Association (JARA), is a biennial robot exhibition held since 1973 which features state-of-the-art robot technologies and products.
Tuesday, January 19, 2010
Bits vs. atoms
Engelberger and others at the show drew a sharp contrast between the explosive growth of the computer industry over the past few decades and the relative stagnation of the robotics field. While venture capitalists were lining up to fund computer start-ups, Engelberger, despite his impressive résumé, was unable to get financing for his robot that would help people live at home rather than go into a nursing home.
The robotics industry today is about as far along the road to widespread commercial acceptance as the PC industry was in the 1970s. The differences are that robotics doesn't have an equivalent of Moore's Law, the industry hasn't settled on standards, there's not much in the way of venture capital money, and there's really no viable commercial application - killer or otherwise - said Paolo Pirjanian, chief scientist at Evolution Robotics.
On the show floor, several vendors displayed small demo robots that used sensors to navigate the show floor - literally technologies in search of an application. Unfortunately, the economics are such that it's extremely difficult to build a true robot that can interact with its environment at a cost that would attract consumers, Pirjanian said.
The vacuum cleaner is a good example. Electrolux tried to market a robotic vacuum cleaner called Trilobite that uses ultrasound to get around, but at $1,800 consumers weren't biting. The Roombas and e-Vacs are affordable - between $150 and $250 - but they lack the sophisticated capabilities that one would want in a robotic vacuum cleaner, such as obstacle avoidance, the ability to go up and down steps, and the ability to know where it had already vacuumed.
Saturday, January 16, 2010
The Future Of Robots
Engineers built humanoid robots that can recognize objects by color by processing information from a camera mounted on the robot's head. The robots are programmed to play soccer, with the intention of creating a team of fully autonomous humanoid robots able to compete against a championship human team by 2050. They have also designed tiny robots to mimic the communicative "waggle dance" of bees.
A world of robots may seem like something out of a movie, but it could be closer to reality than you think. Engineers have created robotic soccer players, bees and even a spider that will send chills up your spine just like the real thing.
They're big ... they're strong ... they're fast! Your favorite big screen robots may become a reality.
Powered by a small battery on her back, humanoid robot Lola is a soccer champion.
"The idea of the robot is that it can walk, it can see things because it has a video camera on top," Raul Rojas, Ph.D., professor of artificial intelligence at Freie University in Berlin, Germany, told Ivanhoe.
Using the camera mounted on her head, Lola recognizes objects by color. The information from the camera is processed by an onboard microchip, which activates different motors.
"And using this camera it can locate objects on the floor for example a red ball, go after the ball and try to score a goal," Dr. Rojas said. A robot with a few tricks up her sleeve.
German engineers have also created a bee robot. Covered with wax so it's not stung by others, it mimics the 'waggle' dance -- a figure eight pattern for communicating the location of food and water.
"Later what we want to prove is that the robot can send the bees in any decided direction using the waggle dance," Dr. Rojas said.
Robots like this could one day become high-tech surveillance tools that secretly fly and record data ... and a robot you probably won't want to see walking around anytime soon? The spider-bot.
Friday, January 15, 2010
Artificial Intelligence - Chatterbot Eliza description
Artificial Intelligence - Chatterbot Eliza is an Eliza-like chatterbot.
The implementation of the program has been improved: the repetitions made by the program are better handled, the context of a conversation is better handled, and the program can now correct grammatical errors that can occur after conjugating verbs.
Finally, the database is bigger than last time; it includes some of the script that was originally used in the first implementation of the chatterbot Eliza by Joseph Weizenbaum. Most of the chatterbots written these days are largely based on Weizenbaum's original Eliza, which means that they use appropriate keywords to select the responses to generate when they get new input from the users.
More generally, the technique used in a "chatterbot database" or "script file" to represent the chatterbot's knowledge is known as Case-Based Reasoning (CBR). A very good example of an Eliza-like chatterbot is "Alice"; this program has won the Loebner Prize for the most human-like chatterbot three times (www.alicebot.org).
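To make the keyword-scripting idea concrete, here is a tiny Eliza-style responder: it scans the input for keywords and fills a canned response template. The patterns and responses below are illustrative stand-ins, not Weizenbaum's script or this program's database.

```python
# Tiny Eliza-style responder: scan the input for keywords and pick a canned
# response template. The script below is illustrative, not Weizenbaum's script.
import random
import re

SCRIPT = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}."]),
    (r"\bI am (.+)", ["How long have you been {0}?"]),
    (r".*", ["Please tell me more.", "How does that make you feel?"]),
]

def respond(user_input):
    for pattern, templates in SCRIPT:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return "I see."

print(respond("I need a holiday"))   # e.g. "Why do you need a holiday?"
```

A CBR-style script file is essentially a much larger version of this keyword-to-template table, plus bookkeeping for context and repetition.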
The goal of NLP and NLU is to create programs that are capable of understanding natural languages and of processing them, either to get input from the user by voice recognition or to produce output by text-to-speech.
During the last decades there has been a lot of progress in the domains of voice recognition and text-to-speech; however, the goal of NLU, making software that shows a good level of understanding of natural language in general, still seems quite far off to many AI experts. The general view on this subject is that it would take at least many decades before any computer can begin to really understand natural language the way humans do.
Thursday, January 14, 2010
Technologies of affective computing
Emotional speech
Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. Vocal parameters and prosody features such as pitch variables and speech rate are analyzed through pattern recognition.
Emotional inflection and modulation in synthesized speech, whether through phrasing or acoustic features, is useful in human-computer interaction. Such capability makes speech sound natural and expressive. For example, a dialog system might modulate its speech to be more puerile if it deems the emotional model of its current user to be that of a child.
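A minimal sketch of the recognition side: assume prosodic features such as mean pitch, pitch variability and speaking rate have already been extracted, and classify them with a nearest-centroid rule. The feature values, emotion labels and centroids below are made up for illustration and are not from any real corpus.

```python
# Hedged sketch of emotion recognition from prosodic features: assume mean
# pitch (Hz), pitch variability, and speaking rate (syllables/sec) have already
# been extracted, and classify by nearest centroid. All numbers are made up.
import numpy as np

TRAINING = {                      # illustrative per-emotion feature centroids
    "neutral": np.array([180.0, 15.0, 4.0]),
    "angry":   np.array([230.0, 45.0, 5.5]),
    "sad":     np.array([150.0, 10.0, 3.0]),
}

def classify_prosody(features):
    """features: [mean_pitch, pitch_std, speech_rate]; return nearest label."""
    x = np.asarray(features, dtype=float)
    return min(TRAINING, key=lambda label: np.linalg.norm(x - TRAINING[label]))

print(classify_prosody([225.0, 40.0, 5.2]))   # -> "angry" for these toy values
```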
Facial expression
The detection and processing of facial expression is achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, or facial expressions and hand gestures) to provide a more robust estimate of the subject's emotional state.
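One simple way to picture multimodal fusion is "late fusion": each modality produces its own per-emotion probabilities, and the results are combined with a weighted average. The weights and probability values below are illustrative assumptions, not measurements.

```python
# Hedged sketch of late multimodal fusion: weight and average the per-emotion
# probabilities produced by separate face and speech models. The weights and
# probability values below are illustrative only.
def fuse(face_probs, speech_probs, w_face=0.6, w_speech=0.4):
    """Combine two probability dictionaries over the same emotion labels."""
    fused = {e: w_face * face_probs[e] + w_speech * speech_probs[e]
             for e in face_probs}
    total = sum(fused.values())
    return {e: p / total for e, p in fused.items()}   # renormalize

face = {"happy": 0.7, "neutral": 0.2, "angry": 0.1}
speech = {"happy": 0.4, "neutral": 0.5, "angry": 0.1}
combined = fuse(face, speech)
print(max(combined, key=combined.get))   # -> "happy" under these toy numbers
```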
Body gesture
Body gesture refers to the position and movement of the body. Many methods have been proposed for detecting body gestures. Hand gestures have been a common focus of body gesture detection; appearance-based methods and 3-D modeling methods are traditionally used.
Visual aesthetics
Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, with a peer-rated online photo-sharing website as the data source. They extract certain visual features based on the intuition that these can discriminate between aesthetically pleasing and displeasing images. The work is demonstrated in the ACQUINE system on the Web.
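The "aesthetics as machine learning" framing can be sketched in a few lines: extract some visual features per photo, pair them with peer ratings, and fit a classifier. The features and labels below are random stand-ins generated for the example; the real ACQUINE features and data are far more elaborate.

```python
# Hedged sketch of treating aesthetic quality as a machine-learning problem:
# extract a few simple visual features per photo and fit a classifier on
# peer-rated labels. Features and labels here are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Toy features: [mean brightness, colorfulness, rule-of-thirds score]
X = rng.uniform(0, 1, size=(200, 3))
y = (0.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)      # learn "aesthetically pleasing"
print(model.predict([[0.4, 0.9, 0.8]]))     # strong color/composition -> likely 1
```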
Potential applications
In e-learning applications, affective computing can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, i.e. counseling, benefit from affective computing applications when determining a client's emotional state. Affective computing sends a message via color or sound to express an emotional state to others.
Robotic systems capable of processing affective information exhibit higher flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing abilities to enhance realism and provide a higher degree of autonomy.
Other potential applications are centered around social monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing has potential applications in human computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood.
Affective computing is also being applied to the development of communicative technologies for use by people with autism.
Wednesday, January 13, 2010
Affective computing
Affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.
Areas of affective computing
1. Detecting and recognizing emotional information
2. Emotion in machines
Technologies of affective computing
1. Emotional speech
2. Facial expression
3. Body gesture
4. Visual aesthetics
5. Potential applications
Application examples
1. Wearable computer applications make use of affective technologies, such as detection of biosignals
2. Human–computer interaction
3. Kismet
4. ACQUINE Aesthetic Quality Inference Engine
Tuesday, January 12, 2010
Philosophy of artificial intelligence
In 1950 Alan M. Turing published "Computing Machinery and Intelligence" in Mind, in which he proposed that machines could be tested for intelligence using questions and answers. This process is now named the Turing Test. The term Artificial Intelligence (AI) was first used by John McCarthy, who considers it to mean "the science and engineering of making intelligent machines". It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. AI is studied in overlapping fields of computer science, psychology, neuroscience and engineering, dealing with intelligent behavior, learning and adaptation, and usually developed using customized machines or computers.
Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, natural language, speech and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real life problems, knowledge mining, software applications, strategy games like computer chess and other video games. One of the biggest difficulties with AI is that of comprehension. Many devices have been created that can do amazing things, but critics of AI claim that no actual comprehension by the AI machine has taken place.
The debate about the nature of the mind is relevant to the development of artificial intelligence. If the mind is indeed a thing separate from or higher than the functioning of the brain, then hypothetically it would be much more difficult to recreate within a machine, if it were possible at all. If, on the other hand, the mind is no more than the aggregated functions of the brain, then it will be possible to create a machine with a recognisable mind (though possibly only with computers much different from today's), by simple virtue of the fact that such a machine already exists in the form of the human brain.
Monday, January 11, 2010
Neural Networks and Parallel Computation
In the quest to create intelligent machines, the field of Artificial Intelligence has split into several approaches based on differing opinions about the most promising methods and theories. These rival theories have led researchers down one of two basic paths: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.
The neuron "firing", passing a signal to the next in the chain.
Research has shown that a signal received by a neuron travels through the dendrite region, and down the axon. Separating nerve cells is a gap called the synapse. In order for the signal to be transferred to the next neuron, the signal must be converted from electrical to chemical energy. The signal can then be received by the next neuron and processed.
Warren McCulloch, after completing medical school at Yale, together with the mathematician Walter Pitts, proposed a hypothesis to explain the fundamentals of how neural networks make the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. Binary numbers (represented as 1's and 0's, or true and false), an important element of mathematical logic, were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.
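The McCulloch-Pitts view of a neuron as a binary threshold device can be shown directly. The minimal sketch below implements such a unit and uses it to reproduce simple logic gates, which is the link to binary logic described above; the particular weights and thresholds are just one standard choice.

```python
# Minimal McCulloch-Pitts-style neuron: inputs and output are binary, and the
# unit "fires" when the weighted sum of its inputs reaches a threshold.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Threshold units reproducing simple logic gates (the link between neurons and
# binary logic that McCulloch and Pitts pointed out):
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```

Networks of such units, wired in parallel, are the ancestors of today's artificial neural networks.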
Sunday, January 10, 2010
An Introduction to Artificial Intelligence.
Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert systems. The element that the fields of AI have in common is the creation of machines that can "think".
In order to classify machines as "thinking", it is necessary to define intelligence. To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and recognizing relationships? And what about perception and comprehension? Research into the areas of learning, language, and sensory perception has aided scientists in building intelligent machines. One of the most challenging tasks facing experts is building systems that mimic the behavior of the human brain, made up of billions of neurons and arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.
Artificial Intelligence has come a long way from its early roots, driven by dedicated researchers. The beginnings of AI reach back before electronics, to philosophers and mathematicians such as Boole and others theorizing on principles that were used as the foundation of AI logic. AI really began to intrigue researchers with the invention of the computer in 1943. The technology was finally available, or so it seemed, to simulate intelligent behavior. Over the following four decades, despite many stumbling blocks, AI has grown from a dozen researchers to thousands of engineers and specialists, and from programs capable of playing checkers to systems designed to diagnose disease.
AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well as computer interfaces and word-processors owe their existence to the research into artificial intelligence. The theory and insights brought about by AI research will set the trend in the future of computing. The products available today are only bits and pieces of what are soon to follow, but they are a movement towards the future of artificial intelligence. The advancements in the quest for artificial intelligence have, and will continue to affect our jobs, our education, and our lives.
Saturday, January 9, 2010
Areas of affective computing
Detecting and recognizing emotional information
Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance.
Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done by parsing the data through various processes such as speech recognition, natural language processing, or facial expression detection, all of which are dependent on the human factor vis-a-vis programming.
Emotion in machines
Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents. The goal of such simulation is to enrich and facilitate interactivity between human and machine. While human emotions are often associated with surges in hormones and other neuropeptides, emotions in machines might be associated with abstract states associated with progress (or lack of progress) in autonomous learning systems. In this view, affective emotional states correspond to time-derivatives (perturbations) in the learning curve of an arbitrary learning system.
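The "time-derivative of the learning curve" idea can be pictured with a toy sketch: track how a learning system's error changes over recent steps and map rapid improvement, stagnation, or regression to crude affect labels. The thresholds and label names below are purely illustrative assumptions, not part of any established model.

```python
# Toy illustration of the "time-derivative of the learning curve" idea above:
# map the recent change in a learner's error to a crude affect label. The
# thresholds and labels are purely illustrative.
def affect_from_learning_curve(errors, window=3):
    """errors: list of recent error values, most recent last."""
    if len(errors) < window + 1:
        return "neutral"
    delta = errors[-1] - errors[-1 - window]    # change over the last few steps
    if delta < -0.05:
        return "pleased"       # error dropping quickly: learning is going well
    if abs(delta) <= 0.05:
        return "frustrated"    # plateau: little or no progress
    return "surprised"         # error rising: something unexpected happened

print(affect_from_learning_curve([0.9, 0.7, 0.5, 0.35]))   # -> "pleased"
```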
Marvin Minsky, one of the pioneering computer scientists in artificial intelligence, relates emotions to the broader issues of machine intelligence stating in The Emotion Machine that emotion is "not especially different from the processes that we call 'thinking.'"
Friday, January 8, 2010
The Brain
The human brain is made up of a web of billions of cells called neurons, and understanding its complexities is seen as one of the last frontiers in scientific research. It is the aim of AI researchers who prefer this bottom-up approach to construct electronic circuits that act as neurons do in the human brain. Although much of the working of the brain remains unknown, the complex network of neurons is what gives humans intelligent characteristics. By itself, a neuron is not intelligent, but when grouped together, neurons are able to pass electrical signals through networks.