Visually Impaired


Researchers design an AI-powered backpack for the visually impaired

Washington Post - Technology News

The backpack, which has yet to be named, was revealed Wednesday but could face years of development before a consumer-ready version is launched. Still, the product offers a glimpse of a future in which advances in AI and machine learning increasingly help people with vision issues better perceive their environments and, therefore, live more independently.


Indian Currency Notes Classifier -- on cAInvas

#artificialintelligence

Currency notes carry identifiers that allow the visually impaired to tell them apart, but recognizing these is a learned skill. Classifying notes from images is an easier way to help the visually impaired identify the currency they are handling. Here, we use pictures of different versions of the currency notes taken from different angles, against different backgrounds, and at different scales. The dataset contains 195 images across 7 categories of Indian currency notes -- Tennote, Fiftynote, Twentynote, 2Thousandnote, 2Hundrednote, Hundrednote, 1Hundrednote.
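
For readers who want to try something similar, below is a minimal transfer-learning sketch for a small 7-class image dataset like this one. The directory layout, image size, and training settings are assumptions for illustration, not the actual cAInvas notebook.

```python
# Minimal transfer-learning sketch for a small 7-class note classifier.
# Directory layout and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 7

# Assumes one sub-folder per note class under "currency_notes/".
train_ds = tf.keras.utils.image_dataset_from_directory(
    "currency_notes/", image_size=IMG_SIZE, batch_size=16)

# With only 195 images, a frozen pretrained backbone is the safer bet.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```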


Facebook improves AI photo descriptions for the visually impaired

Engadget

Facebook has long been using AI to describe photos for the visually impaired, but it's stepping up its efforts in 2021. The social media giant has detailed a new version of automatic alternative text (AAT) that promises much more information. Instead of relying on fully supervised learning with hand-labeled data, Facebook is now using weak supervision based on "billions" of Instagram photos and their hashtags. The method lets Facebook expand beyond just 100 concept descriptions to include over 1,200, such as different kinds of food and national monuments. It's also more culturally inclusive -- it can recognize weddings that don't involve white wedding dresses, for example. A new object detection system can also recognize where people are in the frame, as well as how many people are in the scene.
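
As a rough illustration of the weak-supervision idea (not Facebook's actual pipeline), hashtags can be treated as noisy multi-hot labels for an ordinary multi-label classifier:

```python
# Toy sketch of weakly supervised multi-label training, with hashtags
# standing in as noisy concept labels. Purely illustrative.
import tensorflow as tf

NUM_CONCEPTS = 1200  # the article cites over 1,200 recognizable concepts

def hashtags_to_target(hashtag_ids: tf.Tensor) -> tf.Tensor:
    # A photo's hashtags become a multi-hot vector over all concepts.
    return tf.reduce_max(tf.one_hot(hashtag_ids, NUM_CONCEPTS), axis=0)

backbone = tf.keras.applications.ResNet50(
    include_top=False, pooling="avg", weights=None)
model = tf.keras.Sequential([
    backbone,
    # Sigmoid + binary cross-entropy treat each concept as an independent
    # yes/no decision, which tolerates incomplete, noisy hashtag labels.
    tf.keras.layers.Dense(NUM_CONCEPTS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```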


The Impact of AI on Accessibility

#artificialintelligence

Gerry Bayne: Welcome to EDUCAUSE Exchange, where we focus on a single question from the higher ed IT community and hear advice, anecdotes, best practices, and more. Students with disabilities are a vulnerable population in higher education, and the real percentage of students with disabilities is likely higher than reported, given that many choose not to disclose their disability to their institutions. Students with disabilities experience barriers to education that many other students do not, and they can have both visible and invisible needs. Their dropout rates are substantially higher, and their graduation rates significantly lower, than those of non-disabled students.


Technologies for the Visually Impaired

Communications of the ACM

Thanks to recent advances in technology, the blind and visually impaired are now able to lead more independent lives than ever, and navigation is a huge part of the value smartphones provide for them. The WeWALK Smart Cane is a great example of what is now possible. The WeWALK looks similar to the cane that some blind and visually impaired people have used for decades to avoid obstacles while walking, but it incorporates a few modern twists. With a standard cane, you can still run into obstacles that are not immediately underfoot, like poles, tree branches, and barriers.
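
Conceptually, the detect-and-alert loop of such a cane is simple. The sketch below is hypothetical: read_distance_cm() and vibrate() stand in for device-specific drivers and are not WeWALK's actual API.

```python
# Hypothetical detect-and-alert loop for a smart cane; the two helper
# functions are placeholders, not WeWALK's real interface.
import time

ALERT_DISTANCE_CM = 150  # assumed range for above-ground obstacles

def read_distance_cm() -> float:
    """Placeholder for an ultrasonic sensor driver."""
    raise NotImplementedError

def vibrate(intensity: float) -> None:
    """Placeholder for the cane's haptic motor."""
    raise NotImplementedError

def alert_loop() -> None:
    while True:
        distance = read_distance_cm()
        if distance < ALERT_DISTANCE_CM:
            # Closer obstacles produce stronger haptic feedback.
            vibrate(1.0 - distance / ALERT_DISTANCE_CM)
        time.sleep(0.05)  # poll about 20 times per second
```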


Microsoft's new AI auto-captions images for the visually impaired

#artificialintelligence

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read them out. Researchers from Microsoft explained their machine learning model in a paper on the preprint repository arXiv. The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary. A second dataset of properly captioned images is then used to teach the AI how best to describe the pictures. "Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don't," said Saqib Shaikh, a software engineering manager with Microsoft's AI platform group.
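
The two-stage structure described above (pretrain on plentiful image-tag pairs, then fine-tune on curated captions) can be sketched roughly as follows; the encoder, synthetic data, and vocabulary size here are placeholders, not Microsoft's implementation.

```python
# Rough structural sketch of VIVO-style two-stage training. The synthetic
# data and tag vocabulary are placeholders, not Microsoft's setup.
import numpy as np
import tensorflow as tf

NUM_TAGS = 1000  # size of the "visual vocabulary" (illustrative)

# Stand-ins for the large corpus of paired image-tag data.
images = np.random.rand(8, 224, 224, 3).astype("float32")
tags = np.random.randint(0, 2, size=(8, NUM_TAGS)).astype("float32")

encoder = tf.keras.applications.EfficientNetB0(
    include_top=False, pooling="avg", weights=None)

# Stage 1: pretrain the encoder to predict tags, learning which visual
# patterns correspond to which words.
tagger = tf.keras.Sequential(
    [encoder, tf.keras.layers.Dense(NUM_TAGS, activation="sigmoid")])
tagger.compile(optimizer="adam", loss="binary_crossentropy")
tagger.fit(images, tags, epochs=1)

# Stage 2 (omitted here): attach a text decoder to `encoder` and fine-tune
# on the smaller set of properly captioned images.
```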


JAWS architect Glen Gordon is joining Sight Tech Global, a virtual event Dec. 2-3 – TechCrunch

#artificialintelligence

For people who are blind or visually impaired, JAWS is synonymous with the freedom to operate Windows PCs with a remarkable degree of control and precision, with output in speech and Braille. The keyboard-driven application makes it possible to navigate the GUI-based interfaces of websites and Windows programs. Anyone who has ever listened to someone proficient in JAWS (the acronym for "Job Access With Speech") navigate a PC can't help but marvel at the speed of the operator and the rapid-fire machine-voice responses from JAWS itself. For nearly 25 years, JAWS has dominated the field of screen readers and is in use by hundreds of thousands of people worldwide. It is inarguably one of the greatest achievements in modern assistive technology.


OrCam Technologies co-founder Amnon Shashua to speak at Sight Tech Global – TechCrunch

#artificialintelligence

If the measure of progress in technology is that devices should become ever smaller and more capable, then OrCam Technologies is on a roll. The Israeli firm's OrCam MyEye, which fits on the arm of a pair of glasses, is far more powerful and much smaller than its predecessor. With new AI-based Smart Reading software released in July, the device not only "reads" text and labels but also identifies people by name and describes other important aspects of the visual world. It also interacts with its users, principally people who are blind or visually impaired, by means of an AI-based smart voice assistant. At the upcoming Sight Tech Global virtual event, we're pleased to announce that OrCam's co-founder and co-CEO, Professor Amnon Shashua, will be a featured speaker.


On-device Supermarket Product Recognition « Machine Learning Times

#artificialintelligence

One of the greatest challenges faced by users who are visually impaired is identifying packaged foods, both in a grocery store and in their kitchen cupboard at home. This is because many foods share the same packaging, such as boxes, tins, bottles, and jars, and differ only in the text and imagery printed on the label. However, the ubiquity of smart mobile devices provides an opportunity to address such challenges using machine learning (ML). In recent years, there have been significant improvements in the accuracy of on-device neural networks for various perception tasks. Coupled with the increased computing power of modern smartphones, this now makes it possible for many vision tasks to yield high performance while running entirely on a mobile device.
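
As a sketch of what "running entirely on a mobile device" can look like in practice, here is a minimal TensorFlow Lite inference routine; the model file and its classes are assumptions, not Google's actual product-recognition app.

```python
# Minimal on-device inference sketch with TensorFlow Lite. The model file
# is an assumption; Google's app uses its own models and product index.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="product_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_frame(image: np.ndarray) -> int:
    """Run one camera frame through the on-device model, return top class."""
    # Resize and cast to the model's expected input shape and dtype.
    h, w = inp["shape"][1], inp["shape"][2]
    frame = tf.image.resize(image, (h, w)).numpy()
    frame = frame[np.newaxis, ...].astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))
```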


Researchers say we need better benchmarks to build more useful AI assistants

#artificialintelligence

The promise of conversational AI is that, unlike virtually any other form of technology, all you have to do is talk. Natural language is the most democratic form of communication: humans are born capable of learning how to speak, but some never learn to read or use a graphical user interface. That's why AI researchers from Element AI, Stanford University, and CIFAR recommend that academic researchers take steps toward more useful forms of AI that speak with people to get things done, including moving away from existing benchmarks. "As many current [language user interface] benchmarks suffer from low ecological validity, we recommend researchers not to initiate incremental research projects on them. Benchmark-specific advances are less meaningful when it is unclear if they transfer to real LUI use cases. Instead, we suggest the community to focus on conceptual research ideas that can generalize well beyond the current datasets," the paper reads.