'Degraded' Synthetic Faces Could Help Improve Facial Image Recognition
Researchers from Michigan State University have devised a way for synthetic faces to take a break from the deepfakes scene and do some good in the world – by helping image recognition systems become more accurate. Their new controllable face synthesis module (CFSM) can regenerate faces in the style of real-world video surveillance footage, rather than relying on the uniformly higher-quality images found in popular open source celebrity datasets, which do not reflect the faults and shortcomings of genuine CCTV systems – facial blur, low resolution, and sensor noise – factors that can degrade recognition accuracy. CFSM is not intended to authentically simulate head poses, expressions, or the other usual objectives of deepfake systems, but rather to generate a range of alternative views in the style of the target recognition system, using style transfer. The system is designed to mimic the style domain of the target system and to adapt its output to the resolution and range of 'eccentricities' found there. Use cases include legacy systems that are unlikely to be upgraded due to cost, but which currently contribute little to the new generation of facial recognition technologies because their output quality – perhaps leading-edge in its day – is now poor.
- North America > United States > Michigan (0.25)
- Asia > China > Hong Kong (0.05)
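CFSM itself is a learned synthesis module, and its actual code is not shown in the article; but the kind of CCTV-style degradations it targets – low resolution, blur, sensor noise – can be mimicked in a few lines of numpy. All function and parameter names below are illustrative, not the paper's:

```python
import numpy as np

def degrade(face, scale=4, noise_sigma=0.05, rng=None):
    """Crudely mimic CCTV-style degradation of a grayscale face image
    (values in [0, 1]): resolution loss, blur, and sensor noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = face.shape
    # Downsample by block-averaging (loses detail and blurs edges).
    small = face[:h - h % scale, :w - w % scale]
    small = small.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Upsample back with nearest-neighbour repetition (blocky, low-res look).
    low_res = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)
    # Add Gaussian sensor noise and clip back to the valid range.
    noisy = low_res + rng.normal(0.0, noise_sigma, low_res.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.default_rng(1).random((64, 64))
cctv = degrade(clean)
```

Training or evaluating a recognizer on images passed through such a pipeline is a rough stand-in for what CFSM does in a learned, target-specific way.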
How to mess up testing your AI system
A great way to keep your wits about you when working with machine learning (ML) and artificial intelligence (AI) is to think like a teacher. After all, the point of ML/AI is that you're getting your (machine) student to learn a task by giving examples instead of explicit instructions. As any teacher will remind you: if you want to teach with examples, the examples must be good. The more complicated the task, the more examples you'll need. If you want to be able to trust that your student has learned the task, the test must be good.
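The teaching analogy has a direct mechanical counterpart: the "exam" must contain questions the student has never seen, which in ML terms means holding out a test set that is disjoint from the training data. A minimal sketch (the `split` helper is illustrative, not from the article):

```python
import numpy as np

def split(examples, labels, test_frac=0.25, seed=0):
    """Hold out a random test set so the 'exam' contains only
    examples the model was never trained on."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(examples))
    n_test = int(len(examples) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ((examples[train_idx], labels[train_idx]),
            (examples[test_idx], labels[test_idx]))

X = np.arange(100).reshape(100, 1)
y = X[:, 0] % 2
(train_X, train_y), (test_X, test_y) = split(X, y)
```

Letting any test example leak into training is the ML equivalent of handing the student the answer key before the exam.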
Which games are useful to put artificial intelligence to the test?
The artificial intelligence (AI) that is already all around us cannot safely drive a car by itself, nor can it write compelling scripts. Still, every day the research to make it more capable yields new results, and in some cases anyone with a computer and an Internet connection can help teach it something. Both Google and Microsoft have put online some experiments (best thought of as "games") for trying to understand how to teach a computer to learn, or simply for getting an idea of how smart some existing AI systems are. Google's can be found on the A.I. Experiments platform, while Microsoft's were mostly created by the Microsoft Garage research community. They are, of course, not the first such experiments.
- Europe > Middle East (0.05)
- Asia > Middle East (0.05)
- Asia > India (0.05)
- Africa > Middle East (0.05)
- Media (0.75)
- Leisure & Entertainment > Games > Computer Games (0.49)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.55)
Why Adversarial Image Attacks Are No Joke
Attacking image recognition systems with carefully crafted adversarial images has been considered an amusing but trivial proof-of-concept over the last five years. However, new research from Australia suggests that the casual use of highly popular image datasets for commercial AI projects could create an enduring new security problem. For a couple of years now, a group of academics at the University of Adelaide has been trying to explain something really important about the future of AI-based image recognition systems. It's something that would be difficult (and very expensive) to fix right now, and that would be unconscionably costly to remedy once the current trends in image recognition research have been fully developed into commercialized and industrialized deployments in 5-10 years' time. Before we get into it, let's have a look at a flower being classified as President Barack Obama, from one of the six videos that the team has published on the project page: in that footage, a facial recognition system that clearly knows how to recognize Barack Obama is fooled into 80% certainty that an anonymized man holding a crafted, printed adversarial image of a flower is also Barack Obama.
- Information Technology > Security & Privacy (1.00)
- Government (0.76)
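The Adelaide work concerns dataset-level vulnerabilities, but the classic mechanism behind adversarial images – the Fast Gradient Sign Method (FGSM) – is easy to sketch on a toy linear scorer, where the gradient of the score with respect to the input is just the weight vector. This is a didactic sketch, not the attack used in the cited research:

```python
import numpy as np

# Toy linear "classifier": score = w . x; a high score means
# the input is classified as the target identity.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights
x = rng.normal(size=16)   # an innocuous input

def fgsm(x, w, eps=0.5):
    """FGSM for a linear score s = w . x: grad_x(s) = w, so stepping
    eps * sign(w) raises the score as fast as possible per coordinate
    while keeping the perturbation small (bounded by eps)."""
    return x + eps * np.sign(w)

adv = fgsm(x, w)
```

Each coordinate moves by at most `eps`, yet the score rises by `eps * sum(|w|)` – which is why visually negligible perturbations can flip a classifier's verdict.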
The State of AI Ethics Report (Volume 4)
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Heath, Victoria, Fancy, Muriam, Ganapini, Marianna Bergamaschi, Egan, Shannon, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled "AI and the Face: A Historian's View." In it, Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should serve not only as a point of reference on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation about the impacts of AI on the world.
- North America > United States (1.00)
- Asia (1.00)
- Africa (1.00)
- (2 more...)
- Summary/Review (1.00)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- (4 more...)
- Social Sector (1.00)
- Media > News (1.00)
- Leisure & Entertainment > Sports (1.00)
- (17 more...)
This avocado armchair could be the future of AI
For all GPT-3's flair, its output can feel untethered from reality, as if it doesn't know what it's talking about. By grounding text in images, researchers at OpenAI and elsewhere are trying to give language models a better grasp of the everyday concepts that humans use to make sense of things. DALL·E and CLIP come at this problem from different directions. At first glance, CLIP (Contrastive Language-Image Pre-training) is yet another image recognition system. Except that it has learned to recognize images not from labeled examples in curated data sets, as most existing models do, but from images and their captions taken from the internet.
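The core of CLIP's training signal is a symmetric contrastive loss: in a batch of image-caption pairs, the matching pairs sit on the diagonal of a similarity matrix and must out-score every mismatched pairing. A numpy sketch of that objective, simplified from the published recipe (names and the toy embeddings are illustrative):

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss in the style of CLIP.
    Row i of img_emb and row i of txt_emb are a matching pair."""
    # L2-normalise so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (N, N) similarity matrix
    labels = np.arange(len(logits))           # pair i matches caption i

    def xent(l):
        # Cross-entropy of each row's softmax against the diagonal target.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image->text and text->image directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
aligned = clip_style_loss(emb, emb)              # perfect pairing
shuffled = clip_style_loss(emb, emb[::-1].copy())  # scrambled pairing
```

Correctly paired embeddings give a much lower loss than scrambled ones, which is exactly the pressure that teaches the model to align images with their captions.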
When AI Systems Fail: Introducing the AI Incident Database - The Partnership on AI
Governments, corporations, and individuals are increasingly deploying intelligent systems to safety-critical problem areas, such as transportation, energy, health care, and law enforcement, as well as challenging social system domains such as recruiting. Failures of these systems pose serious risks to life and wellbeing, but even well-intentioned intelligent system developers fail to imagine what can go wrong when their systems are deployed in the real world. These failures can lead to dire consequences, some of which we've already witnessed, from a trading algorithm causing a market "flash crash" in 2010 to an autonomous car killing a pedestrian in 2018 and a facial recognition system causing the wrongful arrest of an innocent person in 2019. Worse, the artificial intelligence community has no formal systems or processes whereby practitioners can discover and learn from the mistakes of the past, especially since there is not a widely used centralized place to collect information about what has gone wrong previously. Avoiding repeated AI failures requires making past failures known.
- Information Technology > Security & Privacy (0.73)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.71)
- Transportation > Ground > Road (0.35)
Deep Learning Neural Networks Bas
Image recognition, in the context of a computer, is its ability to understand the content of a photograph when it sees it. For instance, when a picture of a house is passed through a neural network and it outputs the label 'House,' this is because it recognized the house as the main content of the picture. In recent years, researchers have used neural networks to make significant progress in image recognition. Neural networks can be employed effectively in object recognition, and their recognition accuracy can be high. Neurons are separate nodes that make up a neural network and are arranged in groups known as layers.
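The "neurons arranged in layers" picture maps directly onto code: each fully connected layer is a weight matrix whose rows are individual neurons. A minimal forward pass for a toy two-layer classifier (all weights here are random for illustration, so the output label is meaningless, not a trained prediction):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """Pass an input through a stack of fully connected layers.
    Each layer is a (weights, biases) pair; every row of a weight
    matrix is one neuron's connections to the previous layer."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)          # hidden layers use a nonlinearity
    W, b = layers[-1]
    return W @ x + b                 # final layer emits class scores

rng = np.random.default_rng(0)
# 64 input "pixels" -> 16 hidden neurons -> 2 output classes.
layers = [
    (rng.normal(size=(16, 64)) * 0.1, np.zeros(16)),
    (rng.normal(size=(2, 16)) * 0.1, np.zeros(2)),
]
image = rng.random(64)
scores = forward(image, layers)
label = ["house", "not_house"][int(np.argmax(scores))]
```

Training then consists of adjusting those weight matrices so the highest score lands on the correct label.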
Brainsourcing automatically identifies human preferences
Researchers at the University of Helsinki have developed a technique that uses artificial intelligence to analyse opinions and draw conclusions from the brain activity of groups of people. This technique, which the researchers call "brainsourcing," can be used to classify images or recommend content, something that has not been demonstrated before. Crowdsourcing is a method of breaking a complex task into smaller tasks that can be distributed to large groups of people and solved individually. For example, people can be asked whether an object can be seen in an image, and their responses are used as training data for an image recognition system. Even the most advanced image recognition systems based on artificial intelligence are not yet fully automated.
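Whether the per-item responses come from clicked answers or, as in brainsourcing, from classifiers reading brain activity, they still have to be aggregated into one training label per image. The simplest aggregation is a majority vote; a stdlib-only sketch (data and names are illustrative):

```python
from collections import Counter

def majority_vote(responses):
    """Aggregate many annotators' answers for one item into a single
    training label; exact ties return None (label stays uncertain)."""
    top = Counter(responses).most_common(2)
    if len(top) > 1 and top[0][1] == top[1][1]:
        return None
    return top[0][0]

votes = {"img_001": ["cat", "cat", "dog"],
         "img_002": ["cat", "dog"]}
labels = {k: majority_vote(v) for k, v in votes.items()}
```

Real crowdsourcing pipelines typically go further, weighting each annotator by estimated reliability, but the majority vote captures the basic idea of turning many noisy responses into usable labels.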