Raspberry Pi-Powered Translation Glasses
It seems like only yesterday that the entire tech world was obsessed with Google Glass. It had everything: accessibility implications, privacy concerns, and portable communications. As an always-on, in-sight wearable, it seemed to bring the technological singularity one step closer.
But then everyone realised that the tech (and society) wasn’t quite ready. So, is the world ready for OTON GLASS? Developed by Keisuke Shimakage of the Media Creation Research Department at the Institute of Advanced Media Arts and Sciences, and inspired by his father’s brain tumour-related dyslexia, OTON GLASS is based on a Raspberry Pi 3 and includes two cameras and an earphone.
But what does it do?
Well, it’s simple, and it could, for the foreseeable future, provide the perfect basis for Google Glass-style projects: a very specific use case. Here, the device converts written words into sound, using the internal camera to detect the wearer’s eye movement and direction and matching that with any text read by the external camera. Those words are then read to the wearer through the earphone. Not only does this have implications for dyslexia and other reading difficulties, but it could also help travellers read signs written in unfamiliar foreign languages.
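To make the idea concrete, here is a minimal sketch of the general capture-OCR-speech pipeline such a device might use on a Raspberry Pi. This is not the OTON GLASS implementation; it assumes a Raspberry Pi camera module, the pytesseract OCR wrapper, and the espeak speech synthesiser, all of which are our own choices for illustration.

```python
# Minimal capture -> OCR -> speech pipeline sketch (not the OTON GLASS code).
# Assumes a Raspberry Pi camera module plus Tesseract OCR and espeak installed:
#   sudo apt install tesseract-ocr espeak
#   pip install picamera pytesseract pillow

import subprocess

import pytesseract
from picamera import PiCamera
from PIL import Image

IMAGE_PATH = "/tmp/capture.jpg"  # hypothetical temporary file location


def capture_image(path=IMAGE_PATH):
    """Grab a single still from the outward-facing camera."""
    with PiCamera() as camera:
        camera.resolution = (1280, 720)
        camera.capture(path)
    return path


def read_text(path):
    """Run OCR over the captured image and return any recognised text."""
    return pytesseract.image_to_string(Image.open(path)).strip()


def speak(text):
    """Read the recognised text aloud through the default audio output."""
    if text:
        subprocess.run(["espeak", text])


if __name__ == "__main__":
    # In a real wearable, this would be triggered by the wearer's behaviour
    # (detected by the inner camera) rather than run once from the command line.
    speak(read_text(capture_image()))
```

Swapping the OCR step for a cloud translation service is the obvious next move for the foreign-language use case, but the basic capture, recognise, speak loop stays the same.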
Perhaps the real winning element of this project, however, is how it looks. As Keisuke Shimakage notes, the design “combines camera-to-glasses”, meaning they look like normal glasses, and the “capture trigger based on human’s behavior is natural interaction for people.” We’re sure you’ll agree that tech which can be used without the wearer looking ridiculous (hi, Google Glass!) or performing any unusual gestures is more likely to be accepted by consumers.
OTON GLASS has already made waves in Japan, and was a runner-up for the James Dyson Award. Visit otonglass.jp to find out more.