"Image understanding (IU) is the research area concerned with the design and experimentation of computer systems that integrate explicit models of a visual problem domain with one or more methods for extracting features from images and one or more methods for matching features with models using a control structure. Given a goal, or a reason for looking at a particular scene, these systems produce descriptions of both the images and the world scenes that the images represent."
– Image Understanding, by J.K. Tsotsos. In Encyclopedia of Artificial Intelligence. Stuart C. Shapiro, editor. 1987. New York: John Wiley & Sons.
The Victorian government has announced an overhaul of the state's road safety cameras after the WannaCry ransomware, which claimed hundreds of thousands of victims across 150 countries last year, was found on speed and red-light cameras on state roads last June. The announcement on Thursday sees an overhaul of the network's governance and security protocols, following a report from Road Safety Camera Commissioner John Voyage. The report was commissioned by Minister for Police Lisa Neville, and all of its recommendations will be accepted and "fully implemented", the state government said. According to Voyage, the virus did not affect the network's "integrity". "Our road safety camera network is integral to protecting the lives of Victorians on the road."
This is an introduction to both graphical programming in Python and fractal geometry at an intermediate level. We learn through coding examples in which you type along with me as we go through examples of fractals created with iteration, recursion, cellular automata, and chaos. These concepts are implemented in Python using its built-in Tkinter and turtle graphics libraries, so no special packages need to be installed by the user; in fact, by the time we are done, you could write graphical packages of your own!
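To give a flavor of the recursion involved, here is a minimal sketch of a Koch-curve fractal expressed as a turtle instruction string ('F' = move forward, 'L' = turn 60° left, 'R' = turn 120° right). The function name and encoding are illustrative, not taken from the course; a `turtle.Turtle` could trace the resulting string, but building it headlessly keeps the recursive structure visible.

```python
def koch(depth):
    """Return the turtle instruction string for a Koch curve of the given depth."""
    if depth == 0:
        return "F"  # base case: a single straight segment
    sub = koch(depth - 1)
    # Each segment is replaced by four smaller segments with turns between them.
    return sub + "L" + sub + "R" + sub + "L" + sub

print(koch(1))             # FLFRFLF
print(koch(2).count("F"))  # 16 segments: 4**2
```

Each level of recursion multiplies the number of segments by four, which is exactly the self-similarity that gives the curve its fractal dimension.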
During the MVP Summit this year and the Windows Developer Day, Microsoft spoke a lot about WinML. From that moment on, I was trying to find some spare time to start playing with it. I finally managed to build a very simple UWP Console app that does image classification, using an ONNX file that I trained in the cloud. In this blog post I'll show you exactly how I built it. The resulting UWP Console app will take all images from the executing folder, classify them, and add the classification as a Tag to each image's metadata.
During the opening F8 2018 keynote, Facebook CEO Mark Zuckerberg showed off the company's latest Instagram updates: Spotify integration, AI-based anti-bullying comment filters, AR camera effects and four-way video chat. During the Day 2 keynote, Facebook revealed how your daily Instagram updates are giving its AI technology a deep-learning crash course in image recognition, one that's apparently made its AI even smarter than Google's at categorizing objects in photos. Facebook pulled this off, amazingly enough, by instructing its AI to read photo hashtags and interpret photos' subject matter. Using this strategy, called "weakly supervised training", Facebook's AI achieved a record 85.4% accuracy rating on an industry-wide test of image recognition, beating out Google's previous record. A Facebook Engineering blog post went into detail on the methods.
In the race to continue building more sophisticated AI deep learning models, Facebook has a secret weapon: billions of images on Instagram. In research the company is presenting today at F8, Facebook details how it took what amounted to billions of public Instagram photos that had been annotated by users with hashtags and used that data to train its own image recognition models. The researchers relied on hundreds of GPUs running around the clock to parse the data, but were ultimately left with deep learning models that beat industry benchmarks, the best of which achieved 85.4 percent accuracy on ImageNet. If you've ever put a few hashtags onto an Instagram photo, you'll know doing so isn't exactly a research-grade process. There is generally some sort of method to why users tag an image with a specific hashtag; the challenge for Facebook was sorting what was relevant across billions of images.
Training deep learning models to recognize images, as well as objects within those images, takes quite a bit of effort. Often, each training image has to be labeled by humans, and when you're using millions of images, that process becomes rather labor-intensive. Scaling up to billions of images becomes nearly impossible. So, Facebook has been working on a way to train deep learning models with limited human supervision. Instead, its researchers have turned to public images that are, in a way, already labeled -- with hashtags.
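The idea of treating hashtags as noisy labels can be sketched in a few lines. The synonym map, vocabulary, and function name below are all made up for illustration (Facebook's actual pipeline canonicalized hashtags against WordNet synsets at far larger scale); the sketch only shows how user tags become multi-hot training targets while unknown tags are dropped.

```python
# Illustrative canonicalization table: many user hashtags map to one label.
CANONICAL = {"#doggo": "dog", "#dog": "dog", "#pup": "dog",
             "#kitty": "cat", "#cat": "cat", "#sunset": "sunset"}
VOCAB = ["cat", "dog", "sunset"]  # fixed label vocabulary

def hashtags_to_target(tags):
    """Map a photo's hashtag list to a multi-hot target vector over VOCAB."""
    labels = {CANONICAL[t] for t in tags if t in CANONICAL}  # drop unknown tags
    return [1 if label in labels else 0 for label in VOCAB]

print(hashtags_to_target(["#doggo", "#sunset", "#nofilter"]))  # [0, 1, 1]
```

A model trained against such vectors never needs a human annotator, which is what makes scaling to billions of images feasible; the cost is label noise, which the training procedure has to tolerate.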
Medical AI systems are particularly vulnerable to attacks and have been overlooked in security research, a new study suggests. Researchers from Harvard University believe they have demonstrated the first examples of how medical systems can be manipulated in a paper [PDF] published on arXiv. Sam Finlayson, lead author of the study, and his colleagues Andrew Beam and Isaac Kohane used the projected gradient descent (PGD) attack on image recognition models to try to get them to see things that aren't there. The PGD algorithm finds the best pixels to fudge in an image to create adversarial examples that push models into identifying an object incorrectly and thus cause false diagnoses. The team tested the attack on three models: a fundoscopy model that detects diabetic retinopathy from retina scans, a second model that scans chest x-rays for signs of a collapsed lung, and finally a dermoscopy model looking at moles for signs of skin cancer.
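To make the mechanics of PGD concrete, here is a minimal NumPy sketch on a toy linear classifier rather than a deep medical model; every name and hyperparameter here is illustrative, not from the paper. The attack repeatedly takes a gradient-ascent step on the loss with respect to the *input* and projects the result back into a small L-infinity ball around the original image, so the perturbation stays imperceptibly small while the model's confidence is pushed away from the correct answer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """PGD on the input: maximize cross-entropy loss for true label y,
    keeping the perturbation within an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                        # d(loss)/d(input) for this model
        x_adv = x_adv + alpha * np.sign(grad)     # signed gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy model weights
b = 0.0
x = rng.normal(size=8)            # "clean image"
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's own prediction as label

x_adv = pgd_attack(x, y, w, b)
print(np.max(np.abs(x_adv - x)))  # perturbation size, bounded by eps
```

Against a deep network the gradient comes from backpropagation instead of this closed form, but the loop — signed step, then projection — is the same, which is why the attack transfers so readily to medical imaging models.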