With the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low-bit-rate visual search, where compact visual descriptors are extracted directly on the mobile device and delivered as queries, rather than raw images, to reduce query transmission latency. In this article, we introduce our work on low-bit-rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by using location context such as GPS, crowd-sourced hotspot WLAN, and cell tower locations. The compactness originates from the bag-of-words image representation, learned offline from geotagged photos on online photo-sharing websites such as Flickr and Panoramio. The learning process involves segmenting the landmark photo collection into discrete geographical regions using a Gaussian mixture model, and then boosting a ranking-sensitive vocabulary within each region, with an "entropy"-based descriptor compactness feedback that refines both phases iteratively.
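The region segmentation step can be illustrated with a minimal sketch: fitting a Gaussian mixture model to photo geotags and assigning each photo to its most likely region. This is not the authors' exact pipeline; the coordinates, component count, and the simple diagonal-covariance EM below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): segment geotagged
# photos into geographic regions with a diagonal-covariance Gaussian mixture
# fit by EM. Coordinates and component count are hypothetical.
import numpy as np

def gmm_em(X, k, iters=50):
    """Fit a k-component diagonal-covariance GMM to X via EM.

    Returns the component means and a hard region assignment per point."""
    n, d = X.shape
    mu = X[:: max(1, n // k)][:k].copy()   # init means from spread-out points
    var = np.full((k, d), X.var(axis=0))   # init variances from the data
    pi = np.full(k, 1.0 / k)               # uniform mixing weights
    for _ in range(iters):
        # E-step: log responsibility of each component for each point
        logp = (-0.5 * (((X[:, None] - mu) ** 2 / var).sum(-1)
                        + np.log(var).sum(-1)) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-9
        pi = nk / n
    return mu, r.argmax(axis=1)

# Synthetic (lat, lon) geotags standing in for two landmark photo clusters.
rng = np.random.default_rng(1)
photos = np.vstack([rng.normal([48.858, 2.294], 0.005, (60, 2)),    # Paris
                    rng.normal([41.890, 12.492], 0.005, (60, 2))])  # Rome
means, region = gmm_em(photos, k=2)  # region id per photo
```

Each region would then get its own ranking-sensitive vocabulary in the subsequent boosting phase described above.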
NASA's Artemis program will eventually need robots to help live off the lunar soil, and it's enlisting help from the public to make those robots viable. The space agency has picked winners from a design challenge that tasked people with improving the bucket drums RASSOR (Regolith Advanced Surface Systems Operations Robot) will use to dig on the Moon. The victors all had clever designs that should capture lunar regolith with little effort -- important when any long-term presence might depend on bots like this. The winner was a trap from Caleb Clausing that uses a passive door to grab large amounts of soil while remaining dust-tolerant. Others included a simple-yet-effective drum from Michael R, another from Kyle St. Thomas that uses narrow drums, an efficient double-helix design from Stephan Weißenböck and a model from Clix that uses both gravity and weight to help movement.
Facebook unveiled an initiative Tuesday to take on "hateful memes" by using artificial intelligence, backed by crowdsourcing, to identify maliciously motivated posts. The leading social network said it had already created a database of 10,000 memes -- images often blended with text to deliver a specific message -- as part of a ramped-up effort against hate speech. Facebook said it was releasing the database to researchers as part of a "hateful memes challenge" to develop improved algorithms to detect hate-driven visual messages, with a prize pool of $100,000. "These efforts will spur the broader AI research community to test new methods, compare their work, and benchmark their results in order to accelerate work on detecting multimodal hate speech," Facebook said in a blog post. Facebook's effort comes as it leans more heavily on AI to filter out objectionable content during the coronavirus pandemic that has sidelined most of its human moderators.
The Covid-19 pandemic caught the entire world grossly unprepared to supply ventilators. We should have had more. Full adult ICU ventilators are expensive and difficult to rapidly obtain at scale. There are alternatives, but none can provide full ventilator capabilities. While the US is flattening the curve, there are areas of the world that have staggeringly low access to this vital equipment.
FIDE CM Kingscrusher goes over a game featuring an imprisoned bishop: Highly Evolved Leela vs. Mighty Stockfish, TCEC Season 17, Rd 34. Play turn-style chess at http://bit.ly/chessworld. FIDE CM Kingscrusher goes over amazing games of chess every day, with a recent focus on chess champions such as Magnus Carlsen, and even games of neural networks, which are opening up new concepts for how chess could be played more effectively. The game qualities that Kingscrusher looks for are generally amazing games with some awesome or astonishing features to them. Many brilliant games are played every year in chess, and this channel helps to find and explain them in a clear way. There are classic games, crushing and dynamic games. There are exceptionally elegant games.
Alfredo joined Element AI as a Research Engineer in the AI for Good lab in London, working on applications that enable NGOs and non-profits. He is one of the primary co-authors of the first technical report made in partnership with Amnesty International, a large-scale study of online abuse against women on Twitter based on crowd-sourced data. He has been a Machine Learning mentor at NASA's Frontier Development Lab, helping teams apply AI to scientific space problems. More recently, he led joint research with Mila Montreal on multi-frame super-resolution, which was awarded by the European Space Agency for its top performance on the PROBA-V Super-Resolution challenge. His research interests lie in computer vision for satellite imagery, probabilistic modeling, and AI for Social Good.
NASA's Jet Propulsion Laboratory is seeking ideas from the public for the kind of scientific equipment it could use to outfit tiny lunar rovers to help with Artemis and other Moon missions. The call, issued via crowdsourcing platform HeroX and called 'Honey, I Shrunk the NASA Payload' in a very contemporary nod to a movie that came out 31 years ago, seeks payloads with maximum dimensions of no more than 4″ x 2″, or "similar in size to a new bar of soap." NASA wants to be able to perform the kind of science that has, in the past, required large launch vehicles and large orbiters, but with much greater frequency and at much lower cost than has been possible before. In order to pave the way for long-term lunar human presence and eventual habitation, NASA says it needs "practical and affordable ways to use lunar resources" to defray the costs of resupply missions – already an expensive undertaking when just traveling to the International Space Station in Earth's orbit, and astronomically more so when going as far afield as the Moon. The goal is for these payloads to be pretty much immediately available for service, with the hope that they can be shipped out to the Moon over the course of the next one to four years.
Social media, especially Twitter, is increasingly used for research with predictive analytics. In social media studies, natural language processing (NLP) techniques are used in conjunction with expert-based, manual, and qualitative analyses. However, social media data are unstructured and must undergo complex manipulation for research use. Manual annotation is the most resource- and time-consuming step, as multiple expert raters have to reach consensus on every item, but it is essential for creating the gold-standard datasets used to train NLP-based machine learning classifiers. To reduce the burden of manual annotation while maintaining its reliability, we devised a crowdsourcing pipeline combined with active learning strategies. We demonstrated its effectiveness through a case study that identifies job loss events from individual tweets. We used the Amazon Mechanical Turk platform to recruit annotators from the Internet and designed a number of quality control measures to assure annotation accuracy. We evaluated four active learning strategies (least confident, entropy, vote entropy, and Kullback-Leibler divergence), which aim to reduce the number of tweets that must be labeled to reach a desired automated classification performance. Results show that crowdsourcing is useful for creating high-quality annotations and that active learning helps reduce the number of required tweets, although there was no substantial difference among the strategies tested.
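Three of the four named uncertainty measures can be sketched compactly: each scores an unlabeled item from a classifier's predicted probabilities (or, for vote entropy, from a committee's votes), and the highest-scoring items are sent to the crowd first. The tweet IDs and probabilities below are hypothetical, not from the study.

```python
# Sketch of active-learning query strategies named in the abstract:
# least confident, entropy, and vote entropy. Inputs are hypothetical.
import math

def least_confident(probs):
    """Uncertainty = 1 - probability of the most likely class."""
    return 1.0 - max(probs)

def entropy(probs):
    """Shannon entropy of the predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def vote_entropy(committee_votes, n_classes):
    """Disagreement among a committee of classifiers (query-by-committee)."""
    n = len(committee_votes)
    return -sum((v / n) * math.log(v / n)
                for c in range(n_classes)
                if (v := sum(1 for x in committee_votes if x == c)) > 0)

# Rank unlabeled tweets: most uncertain first, to be crowd-annotated next.
preds = {"tweet_1": [0.9, 0.1], "tweet_2": [0.55, 0.45], "tweet_3": [0.7, 0.3]}
queue = sorted(preds, key=lambda t: entropy(preds[t]), reverse=True)
# queue -> ["tweet_2", "tweet_3", "tweet_1"]
```

In each active learning round, the top of the queue is labeled by crowd workers, the classifier is retrained, and the remaining pool is re-scored.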
Data is the bedrock of all machine learning systems. As such, working with the right data collection company is critical in order to solve a supervised machine learning problem. If you don't have a particular goal or project in mind, there is a wealth of open data available on the web to practice with. However, if you're looking to tackle a specific problem, chances are you'll need to collect data yourself or work with a company that can collect data for you. There are many data collection companies that provide crowdsourcing services to help individuals and corporations gather data at scale.
Research on supervised learning algorithms implicitly assumes that training data are labeled by domain experts, or at least by semi-professional labelers accessible through crowdsourcing services like Amazon Mechanical Turk. With the advent of the Internet, data have become abundant, and a large number of machine-learning-based systems are now trained on user-generated data, using categorical user input as true labels. However, little work has been done on supervised learning with user-defined labels, where users are not necessarily experts and might be motivated to provide incorrect labels to improve their own utility from the system. In this article, we propose two types of classes in user-defined labels: subjective classes and objective classes. We show that objective classes are as reliable as if they were labeled by domain experts, whereas subjective classes are subject to bias and manipulation by users. We define this as the subjective class issue and provide a framework for detecting subjective labels in a dataset without querying an oracle. Using this framework, data mining practitioners can detect a subjective class at an early stage of a project and avoid wasting time and resources on treating the subjective class problem with traditional machine learning techniques.
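The abstract does not specify the detection framework, but the underlying intuition — objective classes attract consistent labels while subjective classes do not — can be illustrated with a generic disagreement proxy. The per-class agreement score below, the class names, and the toy labels are all assumptions for illustration, not the paper's method.

```python
# Illustrative proxy (not the paper's framework): flag potentially subjective
# classes by low inter-labeler agreement. Labels and classes are hypothetical.
from collections import Counter

def class_agreement(items):
    """Mean per-item agreement (fraction of labelers voting the majority
    label), grouped by majority class. Low agreement hints at subjectivity."""
    per_class = {}
    for labels in items:
        majority, votes = Counter(labels).most_common(1)[0]
        per_class.setdefault(majority, []).append(votes / len(labels))
    return {c: sum(v) / len(v) for c, v in per_class.items()}

# Hypothetical labels from three users per item: "spam" looks objective
# (unanimous votes), while "funny"/"other" draw inconsistent votes.
items = [["spam", "spam", "spam"], ["spam", "spam", "spam"],
         ["funny", "other", "funny"], ["funny", "other", "other"]]
scores = class_agreement(items)  # e.g. {"spam": 1.0, "funny": ..., "other": ...}
```

A practitioner could compute such scores early in a project and investigate any class whose agreement falls well below the rest before training on its labels.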