Scientists have developed a new machine learning tool that can identify Covid-19-related conspiracy theories on social media and predict how they evolved over time, an advance that may give public health officials better ways to fight misinformation online. The study, published in the Journal of Medical Internet Research, analysed anonymised Twitter data to characterise four Covid-19 conspiracy theory themes, such as the erroneous claim that the Bill and Melinda Gates Foundation engineered the pandemic or has malicious intent related to it.

Using the AI tool's analysis of more than 1.8 million tweets containing Covid-19 keywords, the scientists from Los Alamos National Laboratory in the US categorised the posts as misinformation or not, and provided context for each of these conspiracy theories through the first five months of the pandemic.

"From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand-labeled several hundred tweets in each conspiracy theory category to construct training sets," explained Dax Gerts, a computer scientist and co-author of the study from Los Alamos National Laboratory.

The four major themes examined in the study were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered or has "malicious intent" related to Covid-19; that the novel coronavirus was bioengineered or developed in a laboratory; and that vaccines for Covid-19, still in development during the study period, would be dangerous.
In this tutorial, you will learn how to perform face detection with OpenCV and Haar cascades. I've been an avid reader of PyImageSearch for the last three years; thanks for all the blog posts! My company does a lot of face application work, including face detection, recognition, etc. We just started a new project using embedded hardware. I don't have the luxury of using OpenCV's deep learning face detector, which you covered before; it's just too slow on my devices.
Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration of potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. We believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting of their work; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit.

Machine learning (ML), artificial intelligence (AI), and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit.
The potential uses include improving diagnostic accuracy,[1] more reliably predicting prognosis,[2] targeting treatments,[3] and increasing the operational efficiency of health systems.[4] Examples of potentially disruptive technology include image based diagnostic applications of ML/AI, which have shown the most early clinical promise (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians[5]), or natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.[2] Although we are only just …
Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study. The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS), demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries. "Brick kilns have proliferated across Bangladesh to supply the growing economy with construction materials, which makes it really hard for regulators to keep up with new kilns that are constructed," said co-lead author Nina Brooks, a postdoctoral associate at the University of Minnesota's Institute for Social Research and Data Innovation who did the research while a Ph.D. student at Stanford. While previous research has shown the potential to use machine learning and satellite observations for environmental regulation, most studies have focused on wealthy countries with dependable data on industrial locations and activities. To explore the feasibility in developing countries, the Stanford-led research focused on Bangladesh, where government regulators struggle to locate highly polluting informal brick kilns, let alone enforce rules.
Recommender systems are among today's most successful application areas of artificial intelligence. However, in the recommender systems research community, we have fallen prey to a McNamara fallacy to a worrying extent: in the majority of our research efforts, we rely almost exclusively on computational measures such as prediction accuracy, which are easier to obtain than the results of other evaluation methods. However, it remains unclear whether small improvements in terms of such computational measures matter greatly and whether they lead us to better systems in practice. A paradigm shift in terms of our research culture and goals is therefore needed. We can no longer focus exclusively on abstract computational measures but must direct our attention to research questions that are more relevant and have more impact in the real world. In this work, we review the various ways in which recommender systems may create value; how they, positively or negatively, impact consumers, businesses, and society; and how we can measure the resulting effects.
In this panel, AI faculty with experience teaching online and blended classes were asked to share their experiences. The panel was composed of Ashok Goel, Georgia Institute of Technology; Ansaf Salleb-Aouissi, Columbia University; and Mehran Sahami, Stanford University. The panelists were asked to describe which tools and methods work well to help instructors engage and bond with students online. They were also asked to share their insights into which components of a course are best done online and which are best accomplished in person. The panel took place as part of the 2021 Symposium on Educational Advances in Artificial Intelligence, which was co-located with AAAI-21.
Bonjour Startup Montreal has unveiled a new map that visually represents Montreal's artificial intelligence ecosystem. This map, developed in collaboration with Next AI, IVADO and Montréal International, provides an overview of the organizations that make up the Montreal ecosystem. "Montreal is a well-known global hub in artificial intelligence; it's a fact. Over the years, Montreal has attracted several large companies that, in part, launched innovation hubs dedicated to artificial intelligence. This expertise led to an increasing number of organizations dedicated to AI and, in turn, an increasing number of startups that incorporate AI into their business model," says Liette Lamonde, CEO of Bonjour Startup Montreal.
ColorShapeLinks is an AI competition for the Simplexity board game with arbitrary game dimensions. The first player to place n pieces of the same type in a row wins. In this regard, the base game, with a 6 x 7 board and n = 4, is similar to Connect Four. However, pieces are defined not only by color, but also by shape: round or square. Round or white pieces offer the win to player 1, while square or red pieces do the same for player 2. Unlike color, shape is not exclusive to a player: both players start the game with pieces of both shapes.
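The winning condition described above can be sketched as a simple check over a line of pieces: a run of n pieces wins if all share a color or all share a shape. The `Piece` representation and helper names below are illustrative, not taken from the competition's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Piece:
    color: str  # "white" or "red"
    shape: str  # "round" or "square"

def line_wins(pieces, n):
    """True if some window of n consecutive pieces shares a color or a shape."""
    for i in range(len(pieces) - n + 1):
        window = pieces[i:i + n]
        same_color = len({p.color for p in window}) == 1
        same_shape = len({p.shape for p in window}) == 1
        if same_color or same_shape:
            return True
    return False

# Three white pieces of mixed shape still form a winning line when n = 3.
line = [Piece("white", "round"), Piece("white", "square"), Piece("white", "round")]
print(line_wins(line, 3))  # True
```

Note that because a line can win by shape as well as by color, a player can inadvertently complete a line for the opponent, which is what makes Simplexity harder than plain Connect Four.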
Deep Learning is a subdivision of machine learning that imitates the workings of a human brain with the help of artificial neural networks. It is useful in processing Big Data and can detect important patterns that provide valuable insight for decision making. The manual labeling of unsupervised data is time-consuming and expensive. Deep Learning techniques help to overcome this with highly sophisticated algorithms that provide essential insights by analyzing and aggregating the data. Deep Learning leverages multiple layers of neural networks that enable learning, unlearning, and relearning.
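As a loose illustration of those stacked layers, a minimal two-layer forward pass might look like the following sketch; the layer sizes and random weights are arbitrary, chosen only to show how each layer transforms the previous layer's output:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Common nonlinearity applied between layers."""
    return np.maximum(0, x)

# A tiny two-layer network: 4 inputs -> 3 hidden units -> 2 outputs.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))

def forward(x):
    hidden = relu(x @ W1)   # first layer extracts intermediate features
    return hidden @ W2      # second layer maps features to outputs

x = np.ones(4)
print(forward(x).shape)  # (2,)
```

In a real deep network, many such layers are stacked, and the weights are adjusted during training rather than drawn at random.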
Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time. Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write computer code after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.