Blindness is a devastating situation, not physically but mentally. For a sighted person, starting to lose eyesight is like shutting one's life off from everything; such a person would do almost anything to improve their vision, or at least to retain it. According to a 2017 WHO report, an estimated 253 million people live with vision impairment: 36 million are blind and 217 million have moderate to severe vision impairment. Chronic diseases, un-operated cataracts and uncorrected refractive errors are the main causes of visual impairment, and these numbers are predicted to nearly triple by 2050.
Recognizing how serious this problem is worldwide, researchers have produced a variety of solutions to aid visually impaired people. These fall into two broad categories: traditional solutions and solutions built with smart technology. The white cane and the magnifying glass are among the most widely used traditional devices, but the white cane serves only navigation, and magnifiers help only with reading, and only for people who retain some degree of vision. Smartphone applications and special glasses with embedded cameras are built with the latest technologies, made possible largely by advances in artificial intelligence. Yet most visually impaired people still don't use smartphone applications because they are cumbersome: the user has to take the phone out of a pocket, plug in earphones, navigate to the particular application (using the phone's accessibility features), and aim the camera correctly at the document to be read or the object to be identified. Camera-embedded glasses are a good solution, but problems remain: high price, social acceptance (people are generally uncomfortable seeing someone approach wearing a camera, which raises privacy concerns) and reluctance to wear glasses are the biggest issues.
To address these existing problems, a team of researchers from the Augmented Human Lab at the Auckland Bioengineering Institute, University of Auckland, is developing an assistive device for visually impaired people. The fundamental idea is: "the user will be able to hear what they are pointing at". Pointing is a universal gesture that anyone can perform. At the same time, such a device should be "always available", small enough for easy use, and "not obvious", so that it does not draw public attention.
The device – FingerReader – consists of a finger-worn ring with a small camera on it. The ring is connected to a smartwatch-like device worn on the wrist. The user simply points at the document to be read or the object to be identified and presses a button on the side of the ring with the thumb. The camera then takes a photo and sends it to the smartwatch, which preprocesses the image and forwards it to the cloud. Algorithms running on the cloud analyze the image and return feedback to the wristwatch, which the user then hears through a Bluetooth headset. The main reason for the cloud-based architecture is that the deep learning algorithms used to detect objects and extract text need more computational power than the wearable itself can provide.
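The point-press-hear flow described above can be sketched in a few lines. This is an illustrative outline only: the function names, the endpoint, and the placeholder return values are all hypothetical, since the article does not publish the device's firmware or cloud API.

```python
# Hypothetical sketch of the FingerReader request flow: ring camera ->
# smartwatch preprocessing -> cloud analysis -> spoken feedback.
# All names and values below are illustrative assumptions, not the real API.

from dataclasses import dataclass


@dataclass
class Feedback:
    kind: str      # e.g. "text" (extracted by OCR) or "object" (recognized)
    content: str   # what will be spoken to the user


def capture_photo() -> bytes:
    # Stand-in for the ring camera, triggered by the thumb button press.
    return b"<raw-image-bytes>"


def preprocess(image: bytes) -> bytes:
    # Stand-in for the smartwatch step, e.g. cropping and compressing
    # the image before uploading it to the cloud.
    return image


def cloud_analyze(image: bytes) -> Feedback:
    # Stand-in for the cloud step: deep-learning models extract text or
    # identify the pointed-at object, work that needs more computational
    # power than the wearable itself has.
    return Feedback(kind="text", content="Hello, world")


def on_button_press() -> str:
    # End-to-end flow: point, press, and hear the result via the headset.
    photo = capture_photo()
    prepared = preprocess(photo)
    feedback = cloud_analyze(prepared)
    return f"Speaking over Bluetooth headset: {feedback.content}"
```

The key design choice the sketch mirrors is the split of work: only lightweight capture and preprocessing happen on the wearable, while the compute-heavy recognition runs in the cloud.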
Based on several user studies conducted in different countries, the team believes FingerReader is on track to achieve its goals. Research and development toward further enhancements is ongoing. More information can be found at http://fingerreader.org/
Article by Thisum Buddhika, Junior Assistant Editor, IMPACT by IEEE Young Professionals