Pattern Recognition


Unsupervised Learning with Clustering Techniques w/Srini Anand

#artificialintelligence

As humans, we are able to discern differences among groups within a collection. We might sort a collection into broad categories such as birds versus plants versus animals, or detect subtle features to identify different makes and models of cars. Clustering techniques allow us to automate this process and apply it to data where groupings are not immediately obvious. These techniques are used for purposes such as market segmentation, characterizing online communities, fraud detection, and cybersecurity. Srini Anand is a Data Scientist at Ameritas Life Insurance Company and holds a master's degree in Data Science from Indiana University.
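The episode description stays at a high level; as a minimal sketch of one common clustering technique (k-means via scikit-learn, with synthetic data and parameters chosen purely for illustration, not taken from the talk):

```python
# Minimal k-means sketch with scikit-learn; the library, data, and
# parameters are illustrative assumptions, not taken from the talk.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data: 300 points drawn from 3 hidden groups.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)

# Fit k-means with k=3. In practice, since the true grouping is not
# obvious, k is usually chosen with the elbow method or silhouette scores.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print("cluster sizes:", np.bincount(labels))
print("cluster centers:\n", kmeans.cluster_centers_)
```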


Open source and open data

#artificialintelligence

There's an ongoing debate about the value of data and whether internet companies should do more to share their data with others. At Google we've long believed that open data and open source are good not only for us and our industry but also for the world at large. Our commitment to open source and open data has led us to share datasets, services, and software with everyone. For example, Google released the Open Images dataset, with 36.5 million human-verified labels spanning nearly 20,000 categories of objects. With this data, computer vision researchers can train image recognition systems.


r/deeplearning - What creates bias in AI?

#artificialintelligence

It has nothing to do with any of the things you listed. Machine learning and pattern recognition basically come down to learning a model of the dataset and then predicting something based on that model. If the model is "biased" then it's because the dataset was "biased". I don't understand what you are getting at when you talk about the black/white/male/female stuff. Black/white/male/female are just arbitrary labels defined by you.
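The comment's core claim, that a "biased" model is reflecting a biased dataset, can be seen in a tiny sketch; the data and model below are invented purely for illustration:

```python
# Tiny sketch of "biased data in, biased model out": a classifier trained on
# data where an arbitrary group label is correlated with the target will
# reproduce that correlation. Data and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)      # arbitrary binary "group" label
signal = rng.normal(size=n)        # genuinely informative feature
# The target depends on the signal, but the synthetic dataset also
# over-represents positives in group 1, baking a correlation into the data.
y = ((signal + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1).astype(int)

X = np.column_stack([signal, group])
model = LogisticRegression().fit(X, y)

# The model assigns weight to "group" even though the label is arbitrary,
# simply because the dataset made it predictive.
print("coefficients [signal, group]:", model.coef_[0])
print("positive rate by group:",
      y[group == 0].mean(), y[group == 1].mean())
```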


Face recognition, bad people and bad data -- Benedict Evans

#artificialintelligence

We worried that these databases would contain bad data or bad assumptions, and in particular that they might inadvertently and unconsciously encode the existing prejudices and biases of our societies and fix them into machinery. We worried people would screw up. That is, we worried what would happen if these systems didn't work and we worried what would happen if they did work. We're now having much the same conversation about AI in general (or more properly machine learning) and especially about face recognition, which has only become practical because of machine learning. And we're worrying about the same things: we worry what happens if it doesn't work and we worry what happens if it does work.



Four Things to Remember When Thinking of Image Analytics and Business Improvement

#artificialintelligence

According to a Forbes blog post from May 2018, over 300 million images are uploaded to Facebook and 95 million images are uploaded to Instagram each day. There's a good reason for this new trend: Images are more memorable, more impactful, and easier to share than text. You don't have to translate them. A picture is worth a thousand words, after all. Ninety percent of what our brains process is visual.



The Well-Grounded Rubyist [PDF] - Programmer Books

#artificialintelligence

In this chapter, we'll explore Ruby's facilities for pattern matching and text processing, centering around the use of regular expressions. A regular expression in Ruby serves the same purposes it does in other languages: it specifies a pattern of characters, a pattern that may or may not correctly predict (that is, match) a given string. Pattern-match operations are used for conditional branching (match/no match), pinpointing substrings (parts of a string that match parts of the pattern), and various text-filtering techniques. Regular expressions in Ruby are objects. You send messages to a regular expression.
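The book's examples are in Ruby; as a rough cross-language analogy only (not the book's code), the same ideas of match/no-match branching and pinpointing substrings look like this with Python's re module, where a compiled pattern is likewise an object you call methods on:

```python
# Rough Python analogy to the regex ideas described above; the book itself
# uses Ruby's Regexp objects, not Python's re module.
import re

# A compiled pattern is an object, loosely analogous to a Ruby Regexp.
pattern = re.compile(r"(?P<user>\w+)@(?P<domain>\w+\.\w+)")

text = "contact us at support@example.com for help"

match = pattern.search(text)
if match:                              # conditional branching on match/no match
    print("matched:", match.group(0))
    print("domain substring:", match.group("domain"))  # pinpointing a substring
else:
    print("no match")

# Text filtering: keep only lines containing an address-like token.
lines = ["hello world", "mail admin@site.org today"]
print([ln for ln in lines if pattern.search(ln)])
```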


Face recognition and OCR processing of 300 million records from US yearbooks

#artificialintelligence

A yearbook is a book published annually to record, highlight, and commemorate a school's past year. Our team at MyHeritage took on a complex project: extracting individual pictures, names, and ages from hundreds of thousands of yearbooks, structuring the data, and creating a searchable index that covers the majority of US schools from 1890 to 1979 -- more than 290 million individuals. In this article I'll describe the problems we encountered during this project and how we solved them. First of all, let me explain why we needed to tackle this challenge. MyHeritage is a genealogy platform that provides access to almost 10 billion historical records.
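The excerpt doesn't reproduce the pipeline code; purely as a loose illustration of the general "crop the portraits, OCR the text" idea (OpenCV's stock Haar cascade and pytesseract are my stand-ins, not MyHeritage's actual stack):

```python
# Loose illustration of detecting portrait crops and OCR-ing page text;
# libraries, parameters, and the filename are assumptions, not the
# MyHeritage pipeline.
import cv2
import pytesseract

page = cv2.imread("yearbook_page.jpg")          # hypothetical scanned page
gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)

# Detect candidate portrait faces with a stock Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Crop each detected face for later matching and indexing.
crops = [page[y:y + h, x:x + w] for (x, y, w, h) in faces]

# OCR the whole page to pull out names and captions as raw text.
text = pytesseract.image_to_string(gray)

print(f"found {len(crops)} faces")
print(text[:200])
```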


YouTube Using AI to Help Remove Video Deemed Offensive; Meanwhile Recommendation Engine is Challenged - AI Trends

#artificialintelligence

YouTube needs to employ AI to help process the 300 hours of video uploaded to the platform every minute by its users. This processing includes removing video deemed inappropriate by YouTube's standards. Some 8.3 million videos were removed from YouTube in the first quarter, 76 percent of them identified and flagged automatically by AI, according to an account in Forbes. Of those, more than 70 percent were never viewed by users. While the AI system can review more content than humans can, full-time human specialists work alongside the AI, which of course is not foolproof.