When the advance of microprocessor technology finally hits a wall and circuits can shrink no further, more computers are expected to use dual processing, triple processing or even more. This means that instead of using a single chip to perform operations, the computer shares the job between two or more processors.
Already, supercomputers at organizations like Intel, NASA and IBM use fleets of processors and can process jobs at speeds impossible for single-processor computers. Workstations, animation and CAD machines, and other video-editing computers already use dual-processor technology.
The main problem lies with the operating system. As the number of processors increases, the operating system, which manages all the tasks inside a computer, must become more complex to support them. Furthermore, splitting operations among processors is complicated, and becomes a bigger problem as more and more processors are incorporated. A rough sketch of what that splitting looks like is shown below.
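Purely as an illustration (not part of the original post), here is a minimal Python sketch of how one job can be shared between two or more processors at the application level; the square function and input list are invented for the example, and the operating system's own scheduling is abstracted away behind the library.

```python
# Illustrative sketch: sharing one job between two or more processors.
# The work function and input list are invented for this example.
from multiprocessing import Pool, cpu_count

def square(n):
    """Stand-in for one independent piece of the job."""
    return n * n

if __name__ == "__main__":
    pieces = list(range(10))                 # the whole job, split into pieces
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(square, pieces)   # pieces run on separate processors
    print(results)
```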
To let the processors run at top speed, memory such as RAM (random-access memory) and cache, as well as the bus (the connection that links the components), will also have to increase in speed and size.
However, it will still be some time before current technology hits that wall and we have to resort to such tactics, so end users and buyers like us have nothing to worry about for the time being. No matter what, computers will only get better and faster, even if Moore's law no longer holds.
Hello, friends! This blog is dedicated to recent updates on artificial intelligence and robotics.
Thursday, April 28, 2011
Human Brain Development
The human embryo is a single cell at conception; from that cell grow vital organs such as the brain.
The brain first appears during the first three weeks, by which time the embryo is about a tenth of an inch long. It develops as a bump at the end of the neural tube.
The neural tube is a group of cells connected into a hollow, extended structure. Glial cells create the physical scaffolding of the brain and other central nervous system structures, migrating to their final positions in nuclear groups and layered structures such as the cortex.
As neurons migrate along this glial scaffold they make contacts with other neurons, and when they reach their final positions they extend thin strands called axons toward the neurons they connected with during the journey.
In the next five weeks, major sections of the brain become recognizable and operational. During this time rapid growth occurs in the cerebral cortex; these cells then migrate and form other sections of the brain, with each section determined by neuron function. The forebrain, midbrain, hindbrain and optic vesicle become visible, and by six months of pregnancy the fetal brain is working much as it will after birth. By seven months the brain waves of the fetus can be detected through the mother's abdomen. Around nine months the human largely loses the ability to create new neurons.
By this time the brain is so big that it cannot grow any further until birth, which is why a human baby is born less developed than the young of many other animals. The human brain continues to develop until about the age of 25.
The Superior Intelligence
Generally Speaking
What do we mean? - The superior intelligence refers to a generation of computers and robots completely different in ability and functionality from those of today. We expect such computers to learn without human assistance, understand scenarios and react independently to events they come across.
So what? - Yes, traditional AI (artificial intelligence) programs have long promised smarter, better and more human-friendly computer systems, but to what level? Traditional AI lets a computer use the information it is given only in ways the programmer has already defined, and act accordingly. The actions and creativity of such programs are limited to what their programmers preset. What we are talking about, however, are computer programs that can learn from observation and put new knowledge to use; that is, they possess learning capabilities similar to those of any human being.
Is it Possible?
Software-wise - Today's computer programs are already capable of learning somewhat like humans: by observation, by sound, and by finding their way around through trial and error. Some developers have already built robots capable of learning and behaving like a human infant. It is not mainly software that is hindering progress, but hardware: today's computers simply lack the processing power to match the human brain.
Hardware-wise - We already mentioned the gap in computer processing power compared to the human brain. By our estimate, today's very biggest supercomputers are within a factor of a hundred of having the power to mimic a human mind. It is only a matter of time before computers become even more powerful than the human brain.
How long will it take?
Right after World War I, rapid improvements in electromechanical calculators were the first sign of the coming electronics race. As electronic computers surfaced during World War II, the ratio of computation speed to price increased a millionfold.
Since electricity must travel through a conductor, it made sense that shorter, smaller circuits and conductors would let signals travel faster, improving the speed of the component.
Once the idea took hold, the race to squeeze "more" into "less" created fierce competition in the market. Components that used vacuum tubes were abandoned, and transistors took their place. Then the transistors themselves were replaced by faster and smaller integrated circuits. At some point, scientists warned that the "circuit-squeezing days" were over as features reached 3 micrometers. Then new manufacturing techniques emerged and proved them wrong, and computer circuit development sped up even more!
As integrated circuits became ever more densely packed with circuitry that grew smaller by the day, they became so integrated that they turned into microchips. Today's microchip producers can squeeze a few million circuits and transistors into a chip the size of a pinhead! The catch is that circuits below 0.1 micrometers are so fragile and heat-sensitive that they would melt down if the chips got too hot; as a result, such chips are kept at controlled temperatures by intensive cooling (e.g., liquid helium baths).
It was not long before the question "how much more can we squeeze the circuits?" popped up again. Circuit features seem to be reaching a size too small to shrink any further, and even if they were shrunk, the heat and other electrical interference generated would destroy the fragile circuits, and signals would leak from them.
Fortunately, a new solution has been found, by a technique that seems even more unlikely: shrinking the circuits further still. Because traditional circuits force massive loads of electrons along tiny wires, they can tolerate only limited interference. A new class of components, single-electron transistors and quantum dots, works by the interference of electron waves, so much larger amounts of interference energy are needed to cause disruption. These single-electron transistors actually work better as they shrink in size! Instead of fighting interference, scientists put it to use.
The pace of improvement in computer performance is rapid. In the 1980s, computer performance doubled every year and a half; by the late 1990s, it was doubling every single year! A back-of-the-envelope projection based on these rates is sketched below.
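Purely as an illustration (the factor-of-100 gap and the doubling periods are this post's own rough figures), the time to close the gap is the number of doublings needed, log2(100) ≈ 6.6, times the doubling period:

```python
# Illustrative only: years needed to close a 100x performance gap
# at a given doubling period. The gap factor is the post's own estimate.
import math

def years_to_close(gap_factor, doubling_period_years):
    """Doublings needed, times the length of one doubling period."""
    return math.log2(gap_factor) * doubling_period_years

for period in (1.0, 1.5):
    print(f"doubling every {period} years -> {years_to_close(100, period):.1f} years")
# doubling every 1.0 years -> 6.6 years
# doubling every 1.5 years -> 10.0 years
```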
If the pace continues, with computer technology improving ever more rapidly, computers matching human performance could be possible in little more than a decade, and human-like robots could appear in the 2020s!
Would we actually do it?
Even today, current technology is just barely good enough for us to create our own "digital human", should we want it badly enough. We mentioned that we are only a small factor away from building a computer system matching the processing power of the human brain. The main barrier is cost: building a system that powerful would take hundreds of billions of dollars, and few investors would be willing to back enthusiastic researchers and scientists with that amount of money.
We would expect many of you to ask: why wouldn't anybody be interested?
First of all, why would we pour such a massive amount of money and human resources into building a single "fake" human when so many real humans are already out in the world? It is simply not economical for investors, because the "product" would not be all that useful.
The most powerful experimental supercomputers today are not used to research AI or to imitate a human brain; they are instead used for simulations of real-life events too complicated for ordinary calculation. While they are powerful enough to partially imitate a human brain, they will most likely not be used for that purpose. Again, this is because it is not economical: the investors and businessmen who fund such projects are unlikely to spend their money on unproductive experiments that cost so much.
There are curious funders who pour money into AI and robotics projects that produce interesting results, such as a robot that imitates a baby's curious behaviour, or robots that form facial expressions in response to their "moods"; but such investors are rare and often invest for fun rather than out of serious interest.
Until building a digital human brain becomes cheap enough, we should not expect to see human-like robots walking the streets, as in the movies, any time soon.
Wednesday, March 9, 2011
Applied Theory (Artificial Intelligence)
Ai bases its approach to creating real artificial intelligence on a solid philosophical foundation. Rather than keeping our philosophy in the realm of theory, we apply it to our entire working process.
Ai’s applied philosophy is drawn from branches of the philosophy of language, logic, and radical behaviorism. It is built on four founding principles which guide our approach.
The first principle is that intelligence is in the eyes of the beholder. This means that there is no way to tell whether someone, or something, is intelligent, other than by making a subjective judgment based on observable behavior.
The second principle is that the most salient behavior that demonstrates intelligence is language, or more specifically conversational skills – the ability to interact in an "intelligent" manner with the observer.
The third principle is that this ability to use language, to converse, is a skill that can be acquired like any other skill.
Fourth, we believe that like the development of any skill, the ability to converse can only develop if a strict developmental process is followed.
A careful reading of Alan Turing’s paper "Computing Machinery and Intelligence" shows that Turing - the father of modern computing and artificial intelligence - based his approach to creating a "child machine" on the same four principles.
Subjective Intelligence
Intelligence is in the eyes of the beholder. Therefore, a machine that can, through conversation, fool a human into believing it is human must be deemed intelligent.
Language
Intelligence is measured through the social use of language. If a machine can generate language which accurately simulates the way people use language, it is fair to call that machine "intelligent".
Skill
Language has nothing to do with any type of knowledge base or rules; it is a skill that can be learned through a system of punishments and rewards.
Development
The acquisition of conversational skills has to go through an incremental developmental process. This developmental approach to learning language is the only way to create machine intelligence.
Tuesday, March 8, 2011
The Child Machine
Hal, like any 18-month-old baby, is learning the rudiments of speech. He talks about red balls and blue balls, knows his Mommy and Daddy, and likes to go to the park. A child development specialist was given transcripts of Hal's conversations with his caretakers and declared him a healthy, normal little boy. What she wasn't told is that Hal is a computer program running on a regular Windows PC.
Ai uses behaviorist principles to teach our child machine - nicknamed Hal - to hold a conversation. Our approach was outlined by the computing theory pioneer Alan Turing in his 1950 article "Computing Machinery and Intelligence". Turing viewed language as the defining element of intelligence; he believed that by giving a machine the capacity to learn, and a willingness to ask questions, you could "raise" an intelligence, an entity capable of rational, engaging conversation.
Education
Learn how Hal is being educated, and read about his training process.
State of Mind
The Ai child machine is built on a statistical model of language, coupled with advanced learning algorithms; a toy illustration follows this list.
First Words
HAL is developed to meet basic human language development milestones. Look inside to find out what HAL's talking about.
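Ai has not published the details of its statistical model, so the following is only a hypothetical stand-in: a toy bigram model in Python that learns word-to-word transition counts from a few invented sentences and babbles new phrases from them.

```python
# Toy bigram language model: an illustrative stand-in for a
# "statistical model of language". All training text is invented.
import random
from collections import defaultdict

corpus = ["the red ball", "the blue ball", "daddy likes the park"]

# Count word-to-next-word transitions.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def babble(start="the", max_words=5):
    """Generate words by sampling the next word from observed transitions."""
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

print(babble())  # e.g. "the red ball"
```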
Research Plan (Artificial Intelligence)
Ai has developed a research program aimed at true artificial intelligence - allowing people to converse with their computers in everyday language.
In teaching a computer to use language, Ai takes a scheduled, developmental approach, applying the behaviorist model of learning.
Our research plan is based on an iterative cycle, designed to improve the language skills of the system with each software update ("brain upgrade"). The developmental milestones we set for our child machine are based on human language-use milestones, with progress being evaluated by experts in child development.
Applying the principles of behaviorism, we teach language to the child machine through a system of rewards and punishments. The child machine thus learns to use language rather than having language built into it; a toy sketch of such a reward-driven loop appears below.
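Ai's actual training system is not public; the following toy Python loop is only a hedged sketch of reward-and-punishment learning, in which a hypothetical trainer's feedback nudges the learner's preference among invented candidate utterances.

```python
# Toy reward/punishment learning: an illustrative sketch only.
# A hypothetical "trainer" rewards one target utterance; the learner
# shifts probability toward whatever gets rewarded.
import random

utterances = ["ba", "ball", "red ball"]
weights = {u: 1.0 for u in utterances}   # initial, uniform preferences

def choose():
    """Sample an utterance in proportion to its current weight."""
    return random.choices(utterances, weights=[weights[u] for u in utterances])[0]

def trainer_feedback(utterance):
    """Invented trainer: reward the 'grown-up' phrase, mildly punish the rest."""
    return 1.0 if utterance == "red ball" else -0.2

for step in range(200):
    said = choose()
    reward = trainer_feedback(said)
    weights[said] = max(0.1, weights[said] + 0.1 * reward)  # nudge, keep positive

print(weights)  # "red ball" ends up dominating the learner's choices
```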
At the Ai research facility, trainers converse with the machine, engaging it in conversation and monitoring its progress. The trainers test the limits of the child-machine's intelligence, and share their assessments with the algorithm developers. The developers consequently update and adapt the child-machine's algorithm, or "brain", making it more "human" in its language capability. Every so often, a new version of the brain is handed to the trainers, and the process repeats.
As the trainer works with the child machine, he or she frequently reports back to the developers on its progress. Our metric for success is clearly defined as "the language capability of a human of a defined age." For the first several iterations of the child machine, we sought to have it speak at the same developmental level as a 15-month-old. Now we are working on raising the child to 18 months. The actual time spent training the child does not correspond to its age; rather, these measurements of linguistic ability are standard, accepted guidelines for determining whether a child is making linguistic progress.
Monday, March 7, 2011
Theory (Artificial Intelligence)
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. -- Alan Turing, 1950
At Ai, we're raising a child machine from infancy to adulthood - thus bringing Turing's vision to fruition - and creating entirely new approaches to machine learning. In our research, we take a strong behaviorist approach, meaning that we work from the principle that language is a skill, not simply the output of brain functions, and, therefore, can be learned. The research was initially led by Jason Hutchens, a world-renowned chatbot developer and winner of the Loebner Prize in Artificial Intelligence, and Dr. Anat Treister-Goren, an award-winning neurolinguist.