Optical Character Recognition


Sometimes "Small Data" Is Enough to Create Smart Products

#artificialintelligence

Another interesting example of small, high-precision data being used to make big gains with AI can be found in the airline industry. One such project aims to dramatically reduce maintenance costs by standardizing maintenance logs. A useful framework for taming data chaos and extracting small, high-precision data is to focus on the lifecycles of customers, partners, and suppliers. In the world of digital business, companies are always looking for big-bang solutions -- some breakthrough that can give them an edge.


Notorious crow single-handedly shuts down mail delivery in neighborhood

Mashable

And now for some very Canadian news: Mail delivery has been halted in certain areas of Vancouver because a "well-known crow" attacked a mail carrier. Canuck the crow, identifiable to locals by the red tag on his ankle, has apparently been causing trouble in the area for a long time. Canada Post spokeswoman Darcia Kmet told the BBC: "Regular mail delivery was suspended to three homes due to it being unsafe for our employees." Canuck the badass crow has twice as many Facebook fans as the state of Arkansas, and his fanbase seems to absolutely love him.


From braille to Be My Eyes – there's a revolution happening in tech for the blind

The Guardian

I am using Be My Eyes, an app that connects blind and visually impaired people to sighted volunteers via a remote video connection. In the mid-1970s Ray Kurzweil, a pioneer in optical character recognition (OCR) – software that can recognise printed text – founded Kurzweil Computer Products and created the first omni-font OCR program, able to recognise any kind of print style. Companies are constantly finding new ways to improve accessibility, and Be My Eyes isn't the only assistive technology company taking advantage of the real-time human element, building technology based on dialogue with its users. Earlier this year, Aira helped Erich Manser, who has retinitis pigmentosa, run the Boston marathon.


Careers at A9

#artificialintelligence

A9 solves some of the biggest challenges in search and advertising. We design, develop, and deploy high-performance, fault-tolerant distributed search systems used by millions of Amazon customers every day. A9 advertising drives the publisher products for Amazon's ad programs. We are always looking for talented people with backgrounds in:

· Computer Vision
· Machine Learning
· Natural Language Processing
· Backend Infrastructure / Systems Software Development
· Analytics / Data Mining
· Pattern Recognition
· Artificial Intelligence
· Optical Character Recognition
· Server Infrastructure
· Augmented Reality
· DevOps / Operations Engineer
· Software Developer in Test

To see all of our current openings and submit your resume, please visit: https://a9.com/careers/


Azure-Readiness/hol-azure-machine-learning

#artificialintelligence

This content is designed for an audience without any prior machine-learning knowledge. It starts from the very basics and progresses to advanced topics. We will try to keep this content live and add more advanced lab sessions with real-life scenarios. Thanks for your support and feedback to make this content better.


The Conundrum of Machine Learning and Cognitive Biases – Access AI

#artificialintelligence

Machine learning is the ability of computers to learn without being explicitly programmed. For example, iconoclastic author Tom Peters highlights 159 cognitive biases that impact management decision-making (Peters, Tom). Given that a computer is devoid of emotion and the hubris of human ego, it would seem logical that machine learning is not affected by cognitive bias. Machine learning technology is deployed today for many business uses, including self-driving cars, online recommendations, search engines, handwriting recognition, computer vision, online ad serving, pricing, prediction of equipment failure, credit scoring, fraud detection, OCR (optical character recognition), spam filtering, and many others.
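The "learning without explicit programming" idea can be sketched with a toy example: instead of hand-writing decision rules, a classifier induces its behaviour from labelled examples. Below is a minimal sketch using a 1-nearest-neighbour rule and an invented toy dataset; it stands in for the idea only, not any production system:

```python
import math

def nearest_neighbor_predict(train, point):
    """Return the label of the training example closest to `point`.

    No explicit spam/ham rules are written anywhere; the decision
    comes entirely from the labelled examples in `train`.
    """
    features, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

# Toy training set (invented for illustration): (feature vector, label)
train = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((5.0, 5.0), "ham"),
    ((4.8, 5.2), "ham"),
]

print(nearest_neighbor_predict(train, (1.1, 0.9)))  # near the "spam" cluster
print(nearest_neighbor_predict(train, (5.1, 4.9)))  # near the "ham" cluster
```

Feeding it different training data changes its behaviour with no code changes, which is the essence of the definition above.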


Baidu's text-to-speech system mimics a variety of accents 'perfectly'

Engadget

Chinese tech giant Baidu's text-to-speech system, Deep Voice, is making a lot of progress toward sounding more human. Baidu says that unlike previous text-to-speech systems, Deep Voice 2 finds shared qualities between the training voices entirely on its own, without any prior guidance. "Deep Voice 2 can learn from hundreds of voices and imitate them perfectly," a blog post says. In a research paper (PDF), Baidu concludes that its neural network can generate voices effectively even from small voice samples from hundreds of different speakers.


[R] Deep Voice 2: Multi-Speaker Neural Text-to-Speech • r/MachineLearning

#artificialintelligence

TL;DR Baidu's TTS system now supports multi-speaker conditioning, and can learn new speakers with very little data (a la LyreBird). I'm really excited about the recent influx of neural-net TTS systems, but all of them seem to be too slow for real-time dialog, or not publicly available, or both. Hoping that one of them gets a high-quality open-source implementation soon!


Baidu's Deep Voice 2 text-to-speech engine can imitate hundreds of human accents

#artificialintelligence

Next time you hear a voice generated by Baidu's Deep Voice 2, you might not be able to tell whether it's human. That's leaps and bounds better than early versions of Deep Voice, which took multiple hours to learn a single voice. Unlike voice assistants such as Apple's Siri, which require a human to record thousands of hours of speech that engineers then tune by hand, Deep Voice 2 needs no guidance or manual intervention: it trains on many voices at once and autonomously derives unique voices from the shared model. Google's WaveNet, a product of the company's DeepMind division, generates voices by sampling real human speech and independently creating its own sounds in a variety of voices.
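For readers curious how one engine can serve many voices, here is a heavily simplified sketch of the general speaker-embedding idea: one shared model is conditioned on a small per-speaker vector, so adding a voice means learning only a new embedding rather than a whole new model. All numbers and the `synthesize` function below are invented toys for illustration; this is not Baidu's actual architecture:

```python
# Weights learned once and shared by every voice (toy values).
SHARED_WEIGHTS = [0.5, -0.2, 0.8]

# A tiny learned embedding per speaker (toy values).
SPEAKER_EMBEDDINGS = {
    "alice": [0.1, 0.0, 0.3],
    "bob":   [-0.2, 0.4, 0.0],
}

def synthesize(text_features, speaker):
    """Toy 'synthesis': apply the shared weights, then shift by the
    speaker's embedding, so the same text yields a different output
    for each voice."""
    emb = SPEAKER_EMBEDDINGS[speaker]
    return [f * w + e for f, w, e in zip(text_features, SHARED_WEIGHTS, emb)]

features = [1.0, 1.0, 1.0]          # stand-in for encoded input text
print(synthesize(features, "alice"))  # same text, two different "voices"
print(synthesize(features, "bob"))
```

The design point this illustrates is why new voices are cheap to add: only the small per-speaker vector has to be learned, which matches the "very little data" claim in the summaries above.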


How Computers Learned to Read

#artificialintelligence

In 1954, the first optical character recognition machine was installed in a business--fittingly, the office of Reader's Digest, though it wasn't used for books. Previously, the best-known use for such technology involved something you're probably familiar with if you've cashed a check sometime in the last 60 years: Magnetic Ink Character Recognition, or MICR. In a piece on the Linotype blog, the late typographer, who died in 2015, recalled that he had something of an advantage, as OCR reader technology had improved significantly and was able to pick up finer details. This led Kurzweil to build his company's technology around this specific use case, the result being the Kurzweil Reading Machine.
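Early fixed-font readers of the MICR era can be sketched as simple template matching: each character of the one known typeface is stored as a bitmap, and an input glyph gets the label of the best-matching template. The 3x3 glyphs below are invented for illustration (real systems used much finer grids); omni-font OCR's later advance was dispensing with this single-typeface assumption:

```python
# Bitmap templates for a tiny, made-up fixed font ("1" = ink, "0" = blank).
TEMPLATES = {
    "I": ("010",
          "010",
          "010"),
    "L": ("100",
          "100",
          "111"),
    "T": ("111",
          "010",
          "010"),
}

def match_score(glyph, template):
    """Count pixel positions where glyph and template agree."""
    return sum(
        g == t
        for glyph_row, tmpl_row in zip(glyph, template)
        for g, t in zip(glyph_row, tmpl_row)
    )

def recognise(glyph):
    """Return the template character that best matches the glyph."""
    return max(TEMPLATES, key=lambda ch: match_score(glyph, TEMPLATES[ch]))

print(recognise(("111", "010", "010")))  # prints "T"
```

Because recognition here is an exact comparison against one stored typeface, any new print style would need a whole new template set, which is exactly the limitation omni-font systems removed.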