Friday, August 13, 2010

More on Sixth Sense Technology

Ever since computers became a part of human life, their evolution has taken place at breakneck speed. And now we are about to witness the power of computing and on-demand information just as Hollywood's sci-fi thrillers have portrayed it for years. Sixth Sense Technology is one such recent invention, which aims to blur the boundary between the virtual and the real world. The mastermind behind this futuristic technology is Pranav Mistry, a designer whiz kid.

Developmental Stages

The idea behind this technology is to simplify day-to-day tasks and integrate them with the virtual world. The technology was born from a simple modification: turning a ball mouse into a motion-sensing device, whose axial rollers were used to replicate hand gestures on the computer. The much-loved sticky note was brought in as well, with one difference: anything scribbled on it would be synchronized directly with the computer or a scheduling device, where it could be organized effortlessly.

With virtual interaction in place, the next obvious step was to bring instant information to the user. Sixth Sense technology is set to redefine the way information is searched for: it can be accessed by merely placing the object of interest on the interactive plane, without even having to Google it! So to check your flight schedule, all you have to do is place your ticket on the interactive surface and watch in awe as you are flooded with the details.

Device Set Up

The Sixth Sense device is a complete surprise package when it comes to its functionality and hardware. Just as it simplifies access to information, it simplifies the way you interact with it. You don't have to be a rocket scientist to be absolutely at ease operating it: the device consists of a portable camera, a projector, and a few color markers worn on the fingertips for gesture tracking.

The big plus of this device is that you need not carry a display with you wherever you go. Instead, you can magically turn any surface of your choice into a virtual screen and start interacting with the projected information. The device is a network-enabled module that lets you access the internet, browse maps when you are lost on a tour, and check your email; it even doubles as a virtual mobile phone.

Availability

If you think this technology is confined to your dreams and would cost a bomb, it's time to get ready to be blown away: it would come as cheap as $350 in its compact and stylish pendant avatar! And if that isn't reason enough to smile, you will be happier to know that the software is open source. So all you need is the basic hardware, a techno-savvy mind, and a passion for cool gadgets to put together your own Sixth Sense device and experience the power of the digital world on the go.

Sixth Sense vs. Microsoft Surface

The other recent development to introduce multi-touch functionality and gesture interpretation is the Microsoft Surface, from the computer technology giant Microsoft Corporation. But Pranav Mistry's invention wins hands down against the Microsoft Surface because of its down-to-earth price, portability, and mass popularity. Pranav Mistry and his mentor Pattie Maes received rave reviews for their presentations at the TED conferences held in India and the USA respectively. Moreover, the Microsoft Surface is aimed at the commercial market, with only a few ready to experiment with it considering the huge investment involved.

Thus Sixth Sense Technology is one fuss-free device that is set to bless mankind with an extra sense, whatever the individual's spiritual leanings. So let us all gear up to explore the world around us, virtually!

Monday, August 9, 2010

DNA ‘Spider’ Robot Collects Nanoparticles!

A DNA spider follows a path on a DNA origami scaffold towards the red-labeled goal.


Wow! Scientists are one step closer to creating molecular robots that may eventually perform very complex tasks, such as building nanomolecules or delivering drugs to targeted diseased tissues. They have constructed DNA-based robots that can walk along a specific path unaided, or collect various nanoparticles along an assembly line. This news comes from two studies published in Nature a few months ago.

Thursday, January 28, 2010

The Sixth Sense Technology....

All of us are aware of the five basic senses: seeing, feeling, smelling, tasting and hearing. But there is also another sense, the so-called sixth sense: a connection to something greater than what our physical senses are able to perceive. To a layman it would be something supernatural; some might consider it a superstition or something psychological. But the invention of Sixth Sense technology has completely shocked the world. Although it is not widely known as of now, the time is not far when this technology will change our perception of the world.

Pranav Mistry, a 28-year-old of Indian origin, is the mastermind behind Sixth Sense technology. He invented 'Sixth Sense / WUW (Wear UR World)', a wearable, gesture-based, user-friendly interface that links the physical world around us with digital information and lets us interact with it through hand gestures. A PhD student at MIT, he won Popular Science's 'Invention of the Year 2009' award. The device sees what we see, but it surfaces the information we want to know while viewing the object. It can project information onto any surface, be it a wall, a table or any other object, and uses hand and arm movements to help us interact with the projected information. The device brings us closer to reality and assists us in making the right decisions by providing the relevant information, thereby making the entire world a computer.

The Sixth Sense prototype consists of a pocket projector, a mirror and a camera, packaged as a pendant worn like a mobile device around the neck. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user's fingers using simple computer-vision techniques. It also supports multi-touch and multi-user interaction.
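The marker-tracking step can be sketched in a few lines. This is only a minimal illustration of the idea, not Mistry's actual code: it assumes a camera frame arrives as a grid of RGB pixels, and it locates a fingertip marker by finding the centroid of all pixels close to the marker's color.

```python
def track_marker(frame, target, tol=30):
    """Return the (row, col) centroid of pixels close to the marker color.

    frame  -- 2-D grid of (r, g, b) tuples (one camera frame)
    target -- (r, g, b) color of the fingertip marker
    tol    -- per-channel tolerance for a pixel to count as the marker
    """
    row_sum, col_sum, count = 0, 0, 0
    tr, tg, tb = target
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                row_sum += y
                col_sum += x
                count += 1
    if count == 0:
        return None  # marker not visible in this frame
    return (row_sum / count, col_sum / count)

# A tiny synthetic 4x4 frame: a red marker occupies the bottom-right 2x2 block.
black = (0, 0, 0)
red = (255, 0, 0)
frame = [[black, black, black, black],
         [black, black, black, black],
         [black, black, red,   red],
         [black, black, red,   red]]
print(track_marker(frame, red))  # centroid of the red block: (2.5, 2.5)
```

A real implementation would run this per frame on the video stream and turn the sequence of centroids into gesture paths; here the frame is a hand-built toy grid.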

The device has a huge number of applications. First, it is portable and easy to carry, since you wear it around your neck. The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can be pulled up anywhere, with zooming in and out. The camera also lets the user take pictures of the scene he is viewing and later arrange them on any surface. That's not all. Among the more practical uses is reading a newspaper: imagine viewing videos instead of the photos in the paper, or live sports updates while you read. The device can also tell you the arrival, departure or delay time of your airplane right on your ticket. For book lovers it is nothing less than a blessing: open any book and you will find its Amazon ratings, and pick any page and the device gives additional information on the text, comments and many more add-on features. While picking up goods at the grocery store, the user can find out whether a product is eco-friendly or not. To know the time, all one has to do is draw a circle on the wrist, and a wristwatch appears. The device serves the purpose of a computer and saves time spent searching for information. Currently the prototype costs around $350 to build. More work is still being done on the device, and when fully developed it will definitely revolutionize the world.

Wednesday, January 27, 2010

Architecture of Cloud Computing

Architecture

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, comprises hardware and software designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.

This closely resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.

Cloud architecture extends to the client, where web browsers and/or software applications access cloud applications.

Cloud storage architecture is loosely coupled, often assiduously avoiding the use of centralized metadata servers which can become bottlenecks. This enables the data nodes to scale into the hundreds, each independently delivering data to applications or users.
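One common way to locate data without a centralized metadata server is consistent hashing: nodes and object keys are hashed onto the same ring, and any client can independently compute which node owns a key. This is a rough sketch of the technique, not any particular provider's implementation, and the node names are invented for illustration:

```python
import hashlib
from bisect import bisect

def ring_hash(value):
    """Map any string onto a large integer ring via MD5."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each key belongs to the first node
    clockwise from the key's position, so adding or removing one node
    only remaps the keys that fell in that node's arc."""

    def __init__(self, nodes):
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def node_for(self, key):
        hashes = [h for h, _ in self.ring]
        idx = bisect(hashes, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("photo-123.jpg"))  # every client computes the same owner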



Types by visibility

Public cloud

Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Hybrid cloud

A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". A hybrid cloud can describe a configuration combining a local device, such as a plug computer, with cloud services. It can also describe configurations combining virtual and physical, colocated assets—for example, a mostly virtualized environment that requires physical servers, routers, or other hardware such as a network appliance acting as a firewall or spam filter.

Private cloud

Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticized on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially lacking "the economic model that makes cloud computing such an intriguing concept".

While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some uncertainty whether they are a reality even within the same firm. Analysts also claim that within five years a "huge percentage" of small and medium enterprises will get most of their computing resources from external cloud computing providers as they "will not have economies of scale to make it worth staying in the IT business" or be able to afford private clouds. Analysts have reported on Platform's view that private clouds are a stepping stone to external clouds, particularly for the financial services, and that future datacenters will look like internal clouds.

The term has also been used in the logical rather than physical sense, for example in reference to platform as a service offerings, though such offerings including Microsoft's Azure Services Platform are not available for on-premises deployment.

Tuesday, January 26, 2010

Cloud Computing

Cloud computing is Internet- ("cloud-") based development and use of computer technology ("computing"). In concept, it is a paradigm shift whereby details are abstracted from the users, who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.



The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from a web browser, while the software and data are stored on servers.

These applications are broadly divided into the following categories: Software as a Service (SaaS), Utility Computing, Web Services, Platform as a Service (PaaS), Managed Service Providers (MSP), Service Commerce, and Internet Integration.

Friday, January 22, 2010

Human Thoughts Control New Robot

Scientists have created a way to control a robot with signals from a human brain.

By generating the proper brainwaves—picked up by a cap with electrodes that sense the signals and reflect a person's instructions—scientists can instruct a humanoid robot to move to specific locations and pick up certain objects.

The commands are limited to moving forward, picking up one of two objects and bringing it to one of two locations. The researchers have achieved 94 percent accuracy between the thought commands and the robot's movements.

"This is really a proof-of-concept demonstration," said Rajesh Rao, a researcher from the University of Washington who leads the project. "It suggests that one day we might be able to use semi-autonomous robots for such jobs as helping disabled people or performing routine tasks in a person's home."

The person wearing the electrode cap watches the robot's movement on a computer screen through two cameras installed on and above the robot.

When the robot's camera sees the objects that are to be picked up, it passes the information on to the user's computer screen. Each object lights up randomly on the computer screen. When a person wants something picked up and it happens to light up, the brain registers surprise; this brain activity is sent to the computer and then to the robot as the chosen object. The robot then proceeds to pick up the object.
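This selection scheme is essentially an "oddball" paradigm: each object flashes in turn, and the flash that coincides with the user's intent evokes a distinctive surprise response (the P300 wave). A toy sketch of the decision rule, with simulated amplitudes standing in for real EEG recordings:

```python
def average_epochs(trials):
    """Average noisy single-trial responses per object to boost the
    weak EEG signal (trials: one list of measured amplitudes per object)."""
    return [sum(t) / len(t) for t in trials]

def pick_target(responses):
    """Return the index of the object whose flashes evoked the largest
    averaged response -- the presumed P300 'surprise' reaction."""
    return max(range(len(responses)), key=lambda i: responses[i])

# Simulated amplitudes over 4 flashes of each of two objects; the user
# attends to object 1, so its flashes evoke consistently larger responses.
trials = [
    [0.9, 1.1, 0.8, 1.2],   # object 0: background activity only
    [2.8, 3.3, 3.0, 2.9],   # object 1: attended, larger evoked response
]
print(pick_target(average_epochs(trials)))  # -> 1
```

Averaging across repeated flashes is what makes the scheme workable with the "noisy" scalp signals Rao describes: a single trial is unreliable, but the surprise response survives the average while the noise cancels out.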


A similar algorithm is used to decide where the robot will go.

"One of the important things about this demonstration is that we're using a 'noisy' brain signal to control the robot," Rao said. "The technique for picking up brain signals is non-invasive, but that means we can only obtain brain signals indirectly from sensors on the surface of the head, and not where they are generated deep in the brain. As a result, the user can only generate high-level commands such as indicating which object to pick up or which location to go to, and the robot needs to be autonomous enough to be able to execute such commands."
