Google’s latest, Project Soli, could completely reinvent the way we interact with our devices. Check out the video:

One of the biggest social criticisms of the past ten years has been the zombifying effect of screen addiction. (Eric Pickersgill’s photography is pretty compelling on the matter.) The thing is, we know we stare at our phones too much. We know it’s unhealthy, and many of us want an alternative. Project Soli could introduce an entirely new language for interacting with computers, one that harnesses the intuitive complexities of the human hand while freeing us from the need to stare at the interface we control. At first, most actions would still require visual feedback from a screen, but interaction speed would likely jump right off the bat. Instead of our hands adapting to a keyboard, a whole new language could be born from our most instinctive tool. Our grandchildren could look at typing the way we look at the rotary phone.

The amount of information in our pockets is a strange new world, and the bigger-picture tug-of-war is between time spent accessing it and time spent living in the real world. When something comes along that can blend the two, some might jump to dystopian conclusions. But unlike Google Glass’s intrusive attempt to overlay the virtual on top of the physical, Project Soli is positioned to increase control while decreasing intrusion. It feels like a big step in the right direction.