The Future of AI Part 1

#artificialintelligence

It was reported that venture capital investment in AI-related startups increased significantly in 2018, jumping by 72% compared to 2017, even as the number of startups funded fell to 466 from 533 in 2017. The PwC MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32%, from 23%. There will be increasing international rivalry over global leadership in AI. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world". Billionaire Mark Cuban was reported by CNBC as stating that "the world's first trillionaire would be an AI entrepreneur".


Facebook AI Wav2Vec 2.0: Automatic Speech Recognition From 10 Minute Sample

#artificialintelligence

Speech-to-text applications have never been so plentiful, popular or powerful, with researchers' pursuit of ever-better automatic speech recognition (ASR) system performance bearing fruit thanks to huge advances in machine learning technologies and the increasing availability of large speech datasets. Current speech recognition systems require thousands of hours of transcribed speech to reach acceptable performance. However, a lack of transcribed audio data for the less widely spoken of the world's 7,000 languages and dialects makes it difficult to train robust speech recognition systems in this area. To help ASR development for such low-resource languages and dialects, Facebook AI researchers have open-sourced the new wav2vec 2.0 algorithm for self-supervised language learning. The paper Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations claims to "show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler." A Facebook AI tweet says the new algorithm can enable automatic speech recognition models with just 10 minutes of transcribed speech data.
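As a hedged illustration of what using such a model looks like in practice, here is a minimal sketch that runs a fine-tuned wav2vec 2.0 checkpoint through the Hugging Face transformers library (the library, the checkpoint name, and the zero-filled placeholder audio are our illustration, not part of the Facebook announcement):

    import numpy as np
    import torch
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    # Placeholder: one second of silent 16 kHz audio; use a real recording.
    audio = np.zeros(16000, dtype=np.float32)

    processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
    model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits  # (batch, time, vocab)

    pred_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(pred_ids))

The self-supervised pre-training happens on unlabeled audio; the 10 minutes of transcribed speech mentioned in the tweet are needed only for the fine-tuning step that makes CTC decoding like this possible.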


We Need to Rethink Convolutional Neural Networks

#artificialintelligence

Convolutional Neural Networks (CNNs) have shown impressive state-of-the-art performance on multiple standard datasets, and they have no doubt been instrumental in the development and acceleration of research in image processing. But researchers often get too wrapped up in the closed world of theory and perfect datasets. Unfortunately, chasing extra fractions of a percentage point in accuracy is often counterproductive to where image processing actually gets used: the real world. When algorithms and methods are designed with the noiseless, perfectly predictable world of a dataset in mind, they may well perform poorly in the real world. This has certainly been shown to be the case.
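One way to make this concern concrete is to measure how a trained classifier's accuracy degrades when its inputs are corrupted, a crude stand-in for real-world conditions. A minimal PyTorch sketch (the model and data loader are assumed to exist; Gaussian noise is just one illustrative corruption):

    import torch

    def accuracy_under_noise(model, loader, sigma=0.1, device="cpu"):
        # Evaluate a classifier on inputs corrupted with additive
        # Gaussian noise; compare against sigma=0.0 to see the drop.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in loader:
                noisy = images + sigma * torch.randn_like(images)
                preds = model(noisy.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        return correct / total

A large gap between accuracy at sigma=0.0 and at modest noise levels is exactly the dataset-versus-real-world mismatch described above.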


A Step Towards Sensor Fusion for Indoor Layout Estimation

#artificialintelligence

The vision of smart autonomous robots in the indoor environment is becoming a reality in the current decade, driven by the emerging technologies of sensor fusion and artificial intelligence. Sensor fusion is the aggregation of informative features from disparate hardware sources. Just like the autonomous vehicle industry, the robotics industry is moving quickly toward smart autonomous robots for handling indoor tasks. Now the major question arises.
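As a toy illustration of what aggregating information from disparate hardware can mean at the lowest level, here is a sketch of inverse-variance fusion of two noisy range measurements, say a camera-based depth estimate and a lidar return (the sensor names, values, and variances are hypothetical):

    def fuse_measurements(z_cam, var_cam, z_lidar, var_lidar):
        # Inverse-variance weighting: the more certain sensor
        # (smaller variance) contributes more to the fused estimate.
        w_cam, w_lidar = 1.0 / var_cam, 1.0 / var_lidar
        fused = (w_cam * z_cam + w_lidar * z_lidar) / (w_cam + w_lidar)
        fused_var = 1.0 / (w_cam + w_lidar)
        return fused, fused_var

    # Camera says 2.1 m (noisy), lidar says 1.95 m (precise):
    print(fuse_measurements(2.1, 0.04, 1.95, 0.01))  # close to the lidar value

Full indoor layout estimation fuses far richer features than single scalars, but the principle of weighting sources by their reliability is the same.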


MIT undergraduates pursue research opportunities through the pandemic

#artificialintelligence

Even in ordinary times, the scientific process is stressful, with its demand for open-ended exploration and persistence in the face of failure. But the pandemic has added to the strain. In this new world of physical isolation, there are fewer opportunities for spontaneity and connection, and fewer distractions and events to mark the passage of time. Days pass in a numbing blur of sameness. Working from home this summer, students participating in MIT's Undergraduate Research Opportunities Program (UROP) did their best to overcome these challenges.


MobileBERT Paper Summary

#artificialintelligence

As NLP models grow into the hundreds of billions of parameters, so does the importance of being able to create more compact representations of them. Knowledge distillation has successfully enabled this, but it is still treated as an afterthought when designing the teacher models. This probably reduces the effectiveness of the distillation, leaving potential performance improvements for the student on the table. Further, the difficulty of fine-tuning small student models after the initial distillation without degrading their performance requires us to both pre-train and fine-tune the teachers on the tasks we want the student to be able to perform. Training a student model through knowledge distillation therefore requires more training than training the teacher alone, which limits the benefits of a student model to inference time.
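For readers unfamiliar with the mechanics, here is a minimal sketch of the classic soft-target distillation loss (Hinton et al.); note that MobileBERT itself goes further, transferring layer-wise feature maps and attention distributions, so this is the generic baseline rather than MobileBERT's exact objective:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T=2.0, alpha=0.5):
        # Soft targets: match the teacher's temperature-softened
        # output distribution (scaled by T^2 to keep gradients comparable).
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy on the true labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

The student is trained on this combined loss while the teacher's weights stay frozen, which is why all of the teacher's pre-training and fine-tuning must happen first.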


The Whiteness of AI

#artificialintelligence

It is a truth little acknowledged that a machine in possession of intelligence must be white. Typing terms like "robot" or "artificial intelligence" into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in colour, but the more human they are made to look, the more their features are made ethnically White. In this paper, we problematize the often unnoticed and unremarked-upon fact that intelligent machines are predominantly conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology. Race and technology are two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period.


AI Technique Copies Human Memory To Minimize Data Storage Burden

#artificialintelligence

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay." First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, "surprisingly efficiently," from "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before. Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting. They write, "One solution would be to store previously encountered examples and revisit them when learning something new. Although such 'replay' or 'rehearsal' solves catastrophic forgetting," they add, "constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly."
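To make the storage objection concrete, here is a sketch of the naive rehearsal baseline the authors are criticizing: keep a buffer of past examples and mix a few into every new-task batch (the paper's own contribution is a more efficient, brain-inspired replay, not this raw-storage approach):

    import random

    class ReplayBuffer:
        # Fixed-capacity store of past training examples for rehearsal.
        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.data = []
            self.seen = 0

        def add(self, example):
            # Reservoir sampling keeps a uniform sample of everything seen.
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(example)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = example

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

Even with reservoir sampling, fidelity to old tasks degrades as the number of tasks outgrows the buffer, which is why the quoted passage calls raw storage unmanageable.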


What Is The Best Technique to Detect Duplicate Images?

#artificialintelligence

If you have a lot of image data to manage, then you know: identifying and avoiding duplicate images is key to maintaining the integrity of your image collection. Depending on which detection technique you choose, the process can be error-prone or inapplicable to large volumes of image data. So, what is the best technique for detecting duplicate images? It always depends on your image collection and your requirements. How large is your collection?
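One common middle ground between exact byte comparison and full learned embeddings is perceptual hashing. As a hedged illustration (not a recommendation from the article), here is a tiny average-hash implementation using Pillow; near-duplicates produce hashes with a small Hamming distance:

    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to hash_size x hash_size, grayscale, threshold at the mean.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return "".join("1" if p > mean else "0" for p in pixels)

    def hamming(h1, h2):
        # Number of differing bits: 0 suggests duplicates, small
        # values suggest near-duplicates (resized, recompressed).
        return sum(a != b for a, b in zip(h1, h2))

Exact-duplicate detection by cryptographic checksum is faster and stricter; perceptual hashes catch resized or re-encoded copies but can produce false positives, which is why the right choice depends on the collection.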


How we remember could help AI be less forgetful

#artificialintelligence

A brain mechanism referred to as "replay" inspired researchers at Baylor College of Medicine to develop a new method to protect deep neural networks, used in artificial intelligence (AI), from forgetting what they have previously learned. The study, in the current edition of Nature Communications, has implications for both neuroscience and deep learning. Deep neural networks are the main drivers behind the recent fast progress in AI. These networks are extremely good at learning to solve individual tasks. However, when they are trained on a new task, they typically lose completely the ability to solve the previously learned task.