Brain fitness has two basic principles: variety and curiosity. When anything you do becomes second nature, you need to make a change. If you can do the crossword puzzle in your sleep, it's time to move on to a new challenge to give your brain its best workout. Curiosity about the world around you, how it works and how you can understand it, will keep your brain working fast and efficiently. Use the ideas below in your quest for mental fitness.
1. Play Games
Brain fitness programs and games are a wonderful way to tease and challenge your brain. Sudoku, crosswords and electronic games can all improve your brain's speed and memory. These games rely on logic, word skills, math and more, and they're fun. You'll benefit more by playing a little bit every day -- spend 15 minutes or so, not hours.
2. Meditation
Daily meditation is perhaps the single greatest thing you can do for your mind/body health. Meditation not only relaxes you, it gives your brain a workout. By creating a different mental state, you engage your brain in new and interesting ways while increasing your brain fitness.
3. Eat for Your Brain
Your brain needs you to eat healthy fats. Focus on fish oils from wild salmon, nuts such as walnuts, seeds such as flax seed, and olive oil. Eat more of these foods and less saturated fat, and eliminate trans fats from your diet completely.
4. Tell Good Stories
Stories are a way that we solidify memories, interpret events and share moments. Practice telling your stories, both new and old, so that they are interesting, compelling and fun. Some basic storytelling techniques will go a long way in keeping people's interest both in you and in what you have to say.
5. Turn Off Your Television
The average person watches more than 4 hours of television every day. Television can stand in the way of relationships, life and more. Turn off your TV and spend more time living and exercising your mind and body.
6. Exercise Your Body To Exercise Your Brain
Physical exercise is great brain exercise too. By moving your body, your brain has to learn new muscle skills, estimate distance and practice balance. Choose a variety of exercises to challenge your brain.
7. Read Something Different
Books are portable, free from libraries and filled with infinite interesting characters, information and facts. Branch out from familiar reading topics. If you usually read history books, try a contemporary novel. Read foreign authors, the classics and random books. Not only will your brain get a workout by imagining different time periods, cultures and peoples, you will also have interesting stories to tell about your reading, what it makes you think of and the connections you draw between modern life and the words.
8. Learn a New Skill
Learning a new skill works multiple areas of the brain. Your memory comes into play, you learn new movements and you associate things differently. Reading Shakespeare, learning to cook and building an airplane out of toothpicks all will challenge your brain and give you something to think about.
9. Make Simple Changes
We love our routines. We have hobbies and pastimes that we could do for hours on end. But the more something is 'second nature,' the less our brains have to work to do it. To really help your brain stay young, challenge it. Change routes to the grocery store, use your opposite hand to open doors and eat dessert first. All this will force your brain to wake up from habits and pay attention again.
10. Train Your Brain
Brain training is becoming a trend. There are formal courses, websites and books with programs on how to train your brain to work better and faster. There is some research behind these programs, and the basic principles are memory, visualization and reasoning. Work on these three concepts every day and your brain will be ready for anything.
Tomorrow's Intelligent Machines
Hello, friends! This blog is created specially for recent updates related to Artificial Intelligence and Robotics.
Friday, January 13, 2012
Thursday, April 28, 2011
Future of the computer
When the advance of microprocessor technology finally hits the wall and circuits cannot shrink any more, more computers are expected to use dual, triple or even more processors. This means that instead of using a single chip to perform operations, the computer shares the job between two or more processors.
Already, supercomputers at organizations like Intel, NASA and IBM use fleets of processors and are hence able to process jobs at speeds impossible for single-processor computers. Workstations, animation and CAD machines, and other video-editing computers already use dual-processor technology.
The main problem lies with the operating system. As the number of processors increases, the operating system, which manages all the tasks inside a computer, has to become more complex to support them. Furthermore, splitting operations between processors becomes an increasingly difficult problem as more and more processors are incorporated.
To let the processors run at top speed, memory components such as the RAM (random-access memory) and cache, and the bus (the connection that links the components together), will also have to increase in speed and size.
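The job-splitting idea described above can be sketched in a few lines of Python using the standard multiprocessing module. The workload here (summing a range of integers) and the two-way split are illustrative assumptions, not a real application:

```python
# Minimal sketch: share one job between 2 worker processes,
# then combine their partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, end) -- one chunk of the whole job."""
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n = 1_000_000
    chunks = [(0, n // 2), (n // 2, n)]   # split the job for 2 processors
    with Pool(processes=2) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Same answer a single processor would compute, arrived at in parallel
    assert total == n * (n - 1) // 2
    print(total)
```

The hard part the post alludes to is exactly this splitting step: here the chunks are independent, but real workloads often are not, which is why the operating system's scheduling job grows harder with more processors.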
However, it will still be some time before current technology hits a wall and we have to resort to such tactics, so end users and buyers like us have nothing to worry about for the time being. No matter what, computers will only get better and faster, even if Moore's law no longer holds.
Human Brain Development
The human embryo is a single cell at conception; from that cell grow vital organs such as the brain.
The brain first appears during the first three weeks, when the embryo is about a tenth of an inch long. It develops as a bump at the end of the neural tube.
The neural tube is a group of cells connected into a hollow, extended structure. Glial cells create the physical scaffolding of the brain and other central nervous system structures, and along this scaffolding young neurons migrate to their final positions in nuclear groups and layered structures such as the cortex.
As the neurons migrate they make contacts with other neurons, and when they reach their final positions they extend thin strands called axons toward the neurons they connected with during their journey.
In the next five weeks, major sections of the brain become recognizable and operational. During this time rapid growth occurs in the cerebral cortex; these cells then move and create other sections of the brain, with each section determined by neuron function. At six months of pregnancy the fetal brain is working much as it will after birth, and the forebrain, midbrain, hindbrain and optic vesicle are visible. By seven months the brain waves of the fetus can be detected through the mother's abdomen. Around nine months the human loses the ability to create more neurons.
By this time the brain is so big that it cannot grow any further until birth. This is why a human baby is born less developed than the young of other animals. The human brain continues to develop until about the age of 25.
The Superior Intelligence
Generally Speaking
What do we mean? The superior intelligence refers to a generation of computers and robots completely different in ability and functionality from those of today. We expect such computers to learn without human assistance, understand scenarios, and react independently to events they come across.
So what? Yes, traditional AI (artificial intelligence) programs have long promised smarter, better and more human-friendly computer systems, but to what level? Traditional AI lets the computer simply use the information provided in ways the programmer has already defined, and act accordingly; the actions and creativity of those programs are limited to what the programmers preset. What we are talking about, however, are computer programs that can learn from observation and put new knowledge to use; that is, they possess learning capabilities similar to those of any human being.
Is it Possible ?
Software-wise: today's computer programs are already capable of learning like humans do, by observation and by sound, and of finding their way around by trial and error. Some developers have already built robots capable of learning and behaving like a human infant. It is not mainly software that is hindering the progress of computer technology but hardware: today's computers simply lack the processing power to match the human brain.
Hardware-wise: we already mentioned the gap in computer processing power compared to the human brain. By our estimate, today's very biggest supercomputers are within a factor of a hundred of having the power to mimic a human mind. It is only a matter of time before computers become even more powerful than the human brain.
How long will it take ?
Right after World War I, rapid improvements in electromechanical calculators were the first sign of the beginning of the electronics race. As electronic computers surfaced during World War II, the computation speed-to-price ratio increased a millionfold.
Since electricity needs to travel through a conductor, it made sense that shorter, smaller circuits and conductors would let signals travel faster, improving the speed of the component.
Once the idea caught on, the race to squeeze "more" into "less" created fierce competition in the market. Components that used vacuum tubes were abandoned, and transistors took their place. Then the transistors themselves were replaced by faster and smaller integrated circuits. At some point, scientists warned that the "circuit squeezing days" were over as feature sizes reached 3 micrometers. Then new manufacturing techniques came out and proved them wrong, and circuit development sped up even more!
As integrated circuits became packed with circuits that grew ever smaller, they became so integrated that they turned into microchips. Today's microchip producers can squeeze a few million circuits and transistors into a chip the size of a pinhead! The catch is that circuits below 0.1 micrometers are so fragile and heat-sensitive that they would melt down if the chip got too hot, so the chips are kept at controlled temperatures by intensive cooling (e.g., liquid helium baths).
It was not long before the question of how much more the circuits could be squeezed popped up again. Circuit sizes seemed to be reaching a point too small to be compressed further, and even if they were, the heat and other electrical interference generated would destroy the fragile circuits, and signals would leak from them.
Fortunately the problem found a new solution, by a technique that seems even more unlikely: shrinking the circuits even more. Because traditional circuits force massive loads of electrons along tiny wires, they are very sensitive to interference. With a new class of components known as single-electron transistors and quantum dots, which work by the interference of electron waves, larger amounts of interference energy are needed to cause disruptions. These single-electron transistors actually work better as they shrink! Instead of fighting the interference energy, scientists make use of it.
The pace of improvement in computer performance is rapid. In the 1980s, computer performance doubled every year and a half. By the late 1990s, it was doubling every single year!
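Those doubling rates make the earlier factor-of-a-hundred gap a simple exponential calculation. A quick sketch (the gap figure and doubling periods are this article's own estimates):

```python
# How long does growth that doubles every `doubling_period_years`
# take to multiply performance by `gap_factor`?
import math

def years_to_close(gap_factor, doubling_period_years):
    """Number of doublings needed, converted to calendar years."""
    return math.log2(gap_factor) * doubling_period_years

# Closing a 100x gap:
print(round(years_to_close(100, 1.5), 1))  # at the 1980s pace: ~10 years
print(round(years_to_close(100, 1.0), 1))  # at the late-1990s pace: ~6.6 years
```

Since log2(100) is about 6.6 doublings, even the slower pace closes the gap in roughly a decade, which is where the next paragraph's timeline comes from.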
If the pace continues, and computer technology keeps improving ever more rapidly, computers matching human performance could be possible in little more than a decade, and human-like robots could appear in the 2020s!
Would we actually do it ?
Even today, current technology is just barely good enough for us to create our own "digital human", should we want it badly enough. We mentioned that we are only a factor away from building a computer system matching the processing power of the human brain. The main barrier is cost: building a system that powerful would take hundreds of billions of dollars, and few investors would be willing to sponsor enthusiastic researchers and scientists with that amount of money.
We expect many of you to ask: why wouldn't anybody be interested?
First of all, why would we pour such a massive amount of money and human resources into building a single "fake" human when so many real humans are already out there? It is just not economical for the investor's pockets, because the "product" would not be all that useful.
The most powerful experimental supercomputers today are not used to research AI or to imitate a human brain; they are used for simulations of real-life events too complicated for normal calculation. While they are powerful enough to partially imitate a human brain, they will most likely not be used for that cause, again because it is not economical. The investors and businessmen who fund such projects are not likely to waste their money on unproductive experiments that cost so much.
While there are curious funders who pour money into AI and robotics projects that produce interesting results, such as a robot that imitates a baby's curious behaviour, or robots that create facial expressions in response to their "moods", such investors are rare and often invest for fun rather than out of serious interest.
Until the cost of building a digital human brain falls far enough, don't expect to see human-like robots walking the streets like in the movies any time soon.
Wednesday, March 9, 2011
Applied Theory (Artificial Intelligence)
Ai bases its approach to creating real artificial intelligence on a solid philosophical foundation. Rather than keeping our philosophy in the realm of theory, we apply it to our entire working process.
Ai’s applied philosophy is drawn from branches of the philosophy of language, logic, and radical behaviorism. It is built on four founding principles which guide our approach.
The first principle is that intelligence is in the eyes of the beholder. This means that there is no way to tell whether someone, or something, is intelligent, other than by making a subjective judgment based on observable behavior.
The second principle is that the most salient behavior that demonstrates intelligence is language, or more specifically conversational skills – the ability to interact in an "intelligent" manner with the observer.
The third principle is that this ability to use language, to converse, is a skill that can be acquired like any other skill.
Fourth, we believe that like the development of any skill, the ability to converse can only develop if a strict developmental process is followed.
A careful reading of Alan Turing’s paper "Computing Machinery and Intelligence" shows that Turing - the father of modern computing and artificial intelligence - based his approach to creating a "child machine" on the same four principles.
Subjective Intelligence
Intelligence is in the eyes of the beholder. Therefore, a machine that can, through conversation, fool a human into believing that it is human must be deemed intelligent.
Language
Intelligence is measured through the social use of language. If a machine can generate language which accurately simulates the way people use language, it is fair to call that machine "intelligent".
Skill
Language has nothing to do with any type of knowledge base or rules; it is a skill that can be learned through a system of punishments and rewards.
Development
The acquisition of conversational skills has to go through an incremental developmental process. This developmental approach to learning language is the only way to create machine intelligence.
Tuesday, March 8, 2011
The Child Machine
Hal, like any 18-month old baby, is learning the rudiments of speech. He talks about red balls and blue balls, knows his Mommy and Daddy, and likes to go to the park. A child development specialist was given transcripts of Hal's conversations with his caretakers and declared him a healthy, normal little boy. What she wasn't told is that Hal is a computer program running on a regular Windows PC.
Ai uses behaviorist principles to teach our child machine - nicknamed Hal - to hold a conversation. Our approach was outlined by the computing theory pioneer Alan Turing in his 1950 article Computing Machinery and Intelligence. Turing viewed language as the defining element of intelligence; he believed that by giving a machine the capacity to learn, and a willingness to ask questions, you could "raise" an intelligence, an entity capable of rational, engaging conversation.
Education
Learn how Hal is being educated, and read about his training process.
State of Mind
The Ai child machine is built on a statistical model of language, coupled with advanced learning algorithms.
First Words
HAL is developed to meet basic human language development milestones. Look inside to find out what HAL's talking about.
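To give a feel for what "a statistical model of language" means at its very simplest, here is a toy bigram model: it learns word-to-word transition counts from example sentences and generates toddler-style utterances from them. This is only an illustration of the idea; Hal's actual model is far richer and not public.

```python
# Toy statistical language model: learn bigram (word-pair) transitions
# from training sentences, then generate new utterances from them.
import random
from collections import defaultdict

class BigramModel:
    def __init__(self):
        # Maps each word to the list of words observed to follow it
        self.transitions = defaultdict(list)

    def train(self, sentence):
        words = ["<s>"] + sentence.split()  # <s> marks sentence start
        for prev, nxt in zip(words, words[1:]):
            self.transitions[prev].append(nxt)

    def generate(self, max_words=5):
        word, out = "<s>", []
        while len(out) < max_words and self.transitions[word]:
            word = random.choice(self.transitions[word])  # frequency-weighted
            out.append(word)
        return " ".join(out)

model = BigramModel()
model.train("red ball")
model.train("blue ball")
print(model.generate())  # e.g. "red ball" or "blue ball"
```

Because `random.choice` picks from the raw observation lists, more frequent continuations are proportionally more likely, which is the statistical part: the model speaks the way its caretakers spoke to it.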
Research Plan (Artificial Intelligence)
Ai has developed a research program aimed at true artificial intelligence - allowing people to converse with their computers in everyday language.
In teaching a computer to use language, Ai takes a scheduled, developmental approach, applying the behaviorist model of learning.
Our research plan is based on an iterative cycle, designed to improve the language skills of the system with each software update ("brain upgrade"). The developmental milestones we set for our child machine are based on human language-use milestones, with progress being evaluated by experts in child development.
Applying the principles of behaviorism, we teach language to the child machine through a system of rewards and punishments. The child machine thus learns to use language, rather than having language built into it.
At the Ai research facility, trainers converse with the machine, engaging it in conversation and monitoring its progress. The trainers test the limits of the child-machine's intelligence, and share their assessments with the algorithm developers. The developers consequently update and adapt the child-machine's algorithm, or "brain", making it more "human" in its language capability. Every so often, a new version of the brain is handed to the trainers, and the process repeats.
As the trainers work with the child machine, they frequently report back to the developers on its progress. Our metric for success is clearly defined as "the language capability of a human of a defined age." For the first several iterations of the child machine, we sought to have it speak at the developmental level of a 15-month-old. Now we are working on raising the child to 18 months. The actual time spent training the child does not correspond to its age; rather, these measures of linguistic ability are standard, accepted guidelines for determining whether a child is making linguistic progress.
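The reward/punishment loop described above can be sketched as follows. This is a hypothetical illustration, not Ai's actual algorithm: the learner keeps a score for each (prompt, response) pair, and the trainer's feedback nudges those scores until rewarded responses dominate.

```python
# Sketch of behaviorist training: responses that earn rewards are
# reinforced; punished ones fade. All prompts/responses are made up.
import random
from collections import defaultdict

class ChildMachine:
    def __init__(self, responses):
        self.scores = defaultdict(float)  # (prompt, response) -> score
        self.responses = responses

    def respond(self, prompt):
        # Prefer the highest-scoring response; break ties randomly
        return max(self.responses,
                   key=lambda r: (self.scores[(prompt, r)], random.random()))

    def feedback(self, prompt, response, reward):
        # Positive reward reinforces the response; negative punishes it
        self.scores[(prompt, response)] += reward

machine = ChildMachine(["ball", "mommy", "park"])
for _ in range(10):  # a brief training session
    answer = machine.respond("what do you like?")
    machine.feedback("what do you like?", answer,
                     +1 if answer == "park" else -1)
print(machine.respond("what do you like?"))  # "park" after training
```

The iterative cycle in the text wraps this inner loop: trainers supply the feedback, and the developers' "brain upgrades" change the learning machinery itself between sessions.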