Thursday, August 01, 2013

TheGlassLog: Why Google #Glass Needs a Lock Screen (and a lot more): Finding something like this in your stream:

Of all the words Glass could have caught, it was this one. I don’t even remember saying it.



Of course, there are quite a few things that Glass needs, but a lockscreen is definitely one of them. Sure, Mike DiGiovanni coded one and has made the code available (get the code here), but he hasn’t made an APK of it available yet. While I do know how to sideload APKs onto Glass (I learned how to do it here, at the Applied GlassWare blog), I haven’t educated myself enough to know how to compile code into an APK (that’s the file extension for an Android app, in case you were wondering).
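For the record, the compile-and-sideload dance isn’t too scary once the pieces are in place. Here’s a rough sketch of the terminal steps, assuming you’ve grabbed the lock screen source, installed the Android SDK, and turned on debug mode on Glass. The build command and the APK filename below are illustrative, not gospel:

    # build the project into an APK (newer projects use Gradle, older ones use Ant)
    ./gradlew assembleDebug        # or: ant debug

    # confirm Glass shows up over USB with debugging enabled
    adb devices

    # sideload the freshly built APK onto Glass (filename is just an example)
    adb install bin/LockScreen-debug.apk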



In fact, I think one of the biggest flaws of Glass is the placement of its touchpad. I have no idea where else it would fit, but then, the least Google could do is provide us with a lockscreen so we don’t accidentally Google swear words. It’s just that the touchpad is RIGHT where I instinctively want to hold Glass. You know, like where anyone would hold a normal pair of glasses.


I think there is another way to handle (literally and figuratively) interacting with Glass. The thing that annoys me the most is that I keep having to touch the damn thing. I’ve written about it before, so I won’t go too in-depth again. I will say that it’s odd that I am wearing a computer on my face and, to actually see any content, I have to tap the touchpad. I might as well just wake my smartphone, right? So what’s the missing link here?


I shouldn’t have to touch Glass to make it go.


No, I’m not talking about shaking my head around to wake Glass up and see content (actually, I can do this already, if I do it RIGHT after content is delivered). The whole head-shaking thing seems dumb to me. No, I’m thinking of something like what this guy Pranav Mistry came up with back in 2009. He was a grad student at MIT at the time, and he devised such a brilliant piece of tech that he gave a TED talk on it. Here it is if you want to watch it (or scroll past it if you want):



I think the problem is that Google, themselves, weren’t pushing their ideas far enough. Because here is this kid who, years before Glass would grace anyone’s face, had come up with an inexpensive rig that could do some pretty amazing things–without requiring the user to touch anything at all.


This “SixthSense” device did not need a touch interface in order to work. The damn thing worked by seeing little markers on the user’s fingertips. Of course, Glass has a hi-res camera and is likely capable of recognizing hand gestures with better accuracy than Mistry’s gear from 2009, hopefully rendering the fingertip markers unnecessary.
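To make that concrete, here’s a minimal sketch of the marker-spotting half of the idea, written against OpenCV’s Android/Java bindings (which, in theory, something Glass-like could run). Everything here is illustrative: the HSV color range is a made-up value for a bright green fingertip cap, and a real recognizer would need tuning and smoothing:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class MarkerFinder {
        // Hypothetical HSV range for a bright green marker cap; tune for real gear.
        private static final Scalar LOWER = new Scalar(45, 80, 80);
        private static final Scalar UPPER = new Scalar(75, 255, 255);

        /** Returns the marker's center in frame coordinates, or null if not seen. */
        public static Point find(Mat rgbaFrame) {
            // Android camera frames arrive as RGBA; convert to HSV for color matching.
            Mat rgb = new Mat();
            Imgproc.cvtColor(rgbaFrame, rgb, Imgproc.COLOR_RGBA2RGB);
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

            // Keep only marker-colored pixels.
            Mat mask = new Mat();
            Core.inRange(hsv, LOWER, UPPER, mask);

            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // Pick the largest blob; anything tiny is probably noise.
            MatOfPoint biggest = null;
            double bestArea = 100.0;  // minimum-area threshold, an arbitrary guess
            for (MatOfPoint contour : contours) {
                double area = Imgproc.contourArea(contour);
                if (area > bestArea) { bestArea = area; biggest = contour; }
            }
            if (biggest == null) return null;

            Rect box = Imgproc.boundingRect(biggest);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }

Feed that one camera frame at a time and you get a stream of fingertip positions, which is all the raw material the rest of the idea needs.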


The point is that if we used Glass to watch for gestures and motions, we could see as many amazing uses for Glass as Mistry demonstrates his rig being capable of. Draw a square around a newspaper article and Glass will save it for you to read later. Draw a rectangle around a UPC code and Glass will scan it, match it to the product, and give you the cheapest price available both nearby and online.
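The plumbing for that doesn’t have to be exotic. Here’s a toy sketch of the dispatch side, assuming some recognizer has already classified the motion; the gesture names and handler wiring are invented for illustration, not anything Google ships:

    import java.util.EnumMap;
    import java.util.Map;

    public class GestureDispatcher {
        // Invented gesture vocabulary; a real system would define its own.
        public enum Gesture { FRAME_RECTANGLE, SWIPE_LEFT, SWIPE_RIGHT }

        public interface Handler { void onGesture(); }

        private final Map<Gesture, Handler> handlers =
                new EnumMap<Gesture, Handler>(Gesture.class);

        public void register(Gesture gesture, Handler handler) {
            handlers.put(gesture, handler);
        }

        public void dispatch(Gesture gesture) {
            Handler handler = handlers.get(gesture);
            if (handler != null) handler.onGesture();  // fire the mapped action
        }
    }

So “draw a rectangle around the article” just becomes registering a save-for-later handler on FRAME_RECTANGLE, and the UPC trick is the same gesture wired to a barcode lookup instead.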


With the right software, you could even use sign language to tell Glass to do things.


And that’s the real trick: coding Glassware that would be able to recognize patterns in both static objects and in motions, so that hand gestures and hand motions would both be able to trigger events with Glass.
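To gesture at what that pattern-matching might look like, here’s a crude motion classifier that takes the tracked marker positions from the sketch above and buckets them into the toy gesture vocabulary. The thresholds are guesses, and real Glassware would want something far more robust (template matching, say), but the shape of the problem is the same:

    import org.opencv.core.Point;
    import java.util.List;

    public class MotionClassifier {
        // How close the path's end must be to its start to count as a closed loop.
        private static final double CLOSED_LOOP_GAP = 30.0;  // pixels, a guess

        public static GestureDispatcher.Gesture classify(List<Point> track) {
            if (track.size() < 8) return null;  // too short to be deliberate

            Point first = track.get(0);
            Point last = track.get(track.size() - 1);
            double dx = last.x - first.x;
            double dy = last.y - first.y;

            // A path that ends near where it began: treat it as framing a shape.
            if (Math.hypot(dx, dy) < CLOSED_LOOP_GAP) {
                return GestureDispatcher.Gesture.FRAME_RECTANGLE;
            }
            // Mostly-horizontal travel reads as a swipe.
            if (Math.abs(dx) > 2 * Math.abs(dy)) {
                return dx < 0 ? GestureDispatcher.Gesture.SWIPE_LEFT
                              : GestureDispatcher.Gesture.SWIPE_RIGHT;
            }
            return null;  // nothing we recognize
        }
    }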


But for now, we still have to make the effort to actually TOUCH stuff. What a PAIN…and how far we’ve come!







via thepete.com http://thepete.com/theglasslog-why-google-glass-needs-a-lock-screen-and-a-lot-more-finding-something-like-this-in-your-stream/
