

Scientists combat anti-Semitism with artificial intelligence – IAM Network

#artificialintelligence

BERLIN (AP) -- An international team of scientists has joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence. The Alfred Landecker Foundation, which supports the team, said Monday that the project, named Decoding Anti-Semitism, includes discourse analysts, computational linguists and historians. They will develop a "highly complex, AI-driven approach to identifying online anti-Semitism." The team includes researchers from Berlin's Technical University, King's College London and other scientific institutions in Europe and Israel. Computers will run through vast amounts of data and images that humans wouldn't be able to assess because of their sheer quantity.


Artificial Intelligence In IoT Market (COVID-19 Impact Analysis) Opportunities, Industry Analysis with Major Vendors – Arundo, C3 IoT, Thingstel, Microsoft, PTC, Uptake - News Typical – Trusted News Coverage

#artificialintelligence

A fresh report titled "Artificial Intelligence In IoT Market" conveys key insights and provides clients with a competitive advantage through comprehensive coverage. The 123-page report presents an up-to-date market analysis, upcoming and future opportunities, revenue growth, pricing and profitability. The exclusive data and facts offered in this report were collected by a team of research and industry experts. Research Trades announces the addition of new analytical data that helps readers make informed business decisions. The report has been compiled with an exhaustive description of the global Artificial Intelligence In IoT Market, including its overview, types, segments, applications and features.


Declaration of the United States of America and the United Kingdom of Great Britain and Northern Ireland on Cooperation in Artificial Intelligence Research and Development

#artificialintelligence

Recommending priorities for future cooperation, particularly in R&D areas where each partner shares a strong common interest (e.g., interdisciplinary research and intelligent systems) and brings complementary challenges, regulatory or cultural considerations, or expertise to the partnership; Promoting research and development in AI, focusing on challenging technical issues, and protecting against efforts to adopt and apply these technologies in the service of authoritarianism and repression. We intend to establish a bilateral Government-to-Government dialogue on the areas identified in this vision and explore an AI R&D ecosystem that promotes the mutual wellbeing, prosperity, and security of present and future generations. Signed in London and Washington on 25 September 2020, in two originals, in the English language.


SIMBig Conference 2020

#artificialintelligence

Dr. Dina Demner-Fushman is a Staff Scientist at the Lister Hill National Center for Biomedical Communications, NLM. Dr. Demner-Fushman is a lead investigator on several NLM projects in the areas of Information Extraction for Clinical Decision Support, EMR Database Research and Development, and Image and Text Indexing for Clinical Decision Support and Education. The outgrowths of these projects are the evidence-based decision support system in use at the NIH Clinical Center since 2009, the Open-i image retrieval engine launched in 2012, and an automatic question answering service. Dr. Demner-Fushman earned her Doctor of Medicine degree from Kazan State Medical Institute in 1980 and a clinical research doctorate (PhD) in Medical Science from Moscow Medical and Stomatological Institute in 1989. She earned her MS and PhD in Computer Science from the University of Maryland, College Park, in 2003 and 2006, respectively.


LafargeHolcim launches Industry 4.0 for cement production – Australian Bulk Handling Review

#artificialintelligence

LafargeHolcim will implement automation and robotics, artificial intelligence, predictive maintenance and digital twin technologies for its production process. The company is upgrading its production fleet for the future through its "Plants of Tomorrow" program. The program will be rolled out over four years as LafargeHolcim upgrades its technologies in the building materials industry. The company predicts a "Plants of Tomorrow" certified operation will show operational efficiency gains of 15 to 20 percent compared to a conventional cement plant. Among the technologies implemented are predictive operations that can detect abnormal conditions and process anomalies in real time. This aims to reduce maintenance costs by more than 10 percent and significantly lower energy costs. Digital twins of plants will also be created to optimise training opportunities. Automation and robotics are another important element of the strategy. Unmanned surveillance is being performed for high-exposure jobs across the entire plant. Partnering with Swiss start-up Flyability, the company is using drones that allow the frequency of inspections to increase while simultaneously reducing cost and increasing safety for employees by inspecting confined spaces. In addition, the new PACT (Performance and Collaboration) digital tool shifts operational decision-making from experience-based to data-centric by combining data from various sources and enabling machine learning applications. LafargeHolcim is currently working on more than 30 pilot projects covering all regions where the company is active. The first integrated cement plant will be at LafargeHolcim's premises in Siggenthal, Switzerland; this plant will test all modules of the "Plants of Tomorrow" program. LafargeHolcim Global Head of Cement Manufacturing, Solomon Baumgartner Aviles, said transforming the way the company produces cement is one of the focus areas of its digitalisation strategy, and the "Plants of Tomorrow" initiative will turn Industry 4.0 into reality at its plants. "These innovative solutions make cement production safer, more efficient and environmentally fit," he said.
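
The announcement doesn't say how the anomaly detection works; as a generic illustration of flagging abnormal conditions in a real-time sensor stream, here is a minimal rolling z-score sketch, where the sensor, window size, and threshold are all hypothetical choices.

```python
from collections import deque

class RollingAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline.

    Illustrative sketch only: the window size and z-score threshold are
    arbitrary, and real predictive-maintenance systems use richer models.
    """

    def __init__(self, window: int = 120, threshold: float = 4.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading: float) -> bool:
        """Return True if `reading` looks anomalous versus recent history."""
        anomalous = False
        if len(self.values) >= 30:  # wait for a minimal baseline first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = var ** 0.5
            anomalous = std > 0 and abs(reading - mean) / std > self.threshold
        self.values.append(reading)
        return anomalous

# Hypothetical kiln-temperature stream: stable with slight jitter, then a spike.
detector = RollingAnomalyDetector()
stream = [1450.0 + (i % 3 - 1) * 0.5 for i in range(200)] + [1520.0]
for t, temp in enumerate(stream):
    if detector.update(temp):
        print(f"t={t}: abnormal reading {temp} C")
```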


New AI Paradigm May Reduce a Heavy Carbon Footprint

#artificialintelligence

Artificial intelligence (AI) and machine learning can have a considerable carbon footprint. Deep learning is inherently costly, as it requires massive computational and energy resources. Now researchers in the U.K. have discovered how to create an energy-efficient artificial neural network without sacrificing accuracy, and published the findings in Nature Communications on August 26, 2020. The biological brain is the inspiration for neuromorphic computing -- an interdisciplinary approach that draws upon neuroscience, physics, artificial intelligence, computer science, and electrical engineering to create artificial neural systems that mimic biological functions and systems. The human brain is a complex system of roughly 86 billion neurons and hundreds of trillions of synapses.
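
The article doesn't detail the researchers' network design; as background flavor, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard building block in neuromorphic (spiking) models. The constants are arbitrary illustrations, not values from the paper; the energy-efficiency intuition is that such systems compute with sparse spike events rather than dense continuous activations.

```python
def simulate_lif(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input current, and emits a spike on crossing threshold.

    Illustrative sketch only; all constants are arbitrary.
    """
    v = v_reset
    spikes = []
    for current in inputs:
        # Euler step of dv/dt = (-v + input) / tau
        v += dt * (-v + current) / tau
        if v >= v_thresh:
            spikes.append(1)  # a spike: energy is spent only on these events
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant drive above threshold yields a sparse, regular spike train.
train = simulate_lif([1.5] * 100)
print(sum(train), "spikes in 100 steps")
```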


IBM Joins Effort by UN and Vatican to Use Ethical AI in Fight Against Hunger

#artificialintelligence

The Vatican's Pontifical Academy for Life, which began the year by urging the ethical development and application of artificial intelligence (AI), has announced an effort to use technology to fight world hunger, which has worsened during the pandemic. The Vatican institution, in collaboration with IBM, Microsoft and the UN Food and Agriculture Organization, or FAO, is encouraging governments, nonprofits and corporations to assure that technology is used to feed everyone, and to make farmers' lives more efficient and productive. In its quest to assure the transparent, responsible and inclusive use of AI, the Vatican and FAO are pushing for solutions in agriculture that will benefit not just the well off, but also the poor. "We need to face the biggest challenges on the planet," said John E. Kelly III, executive vice president of IBM. Kelly, who participated in the FAO and Pontifical Academy's Sept. 24 virtual conference announcing the effort against hunger, was one of the signers of the Vatican's call for AI ethics in February. The Vatican's effort to promote ethical AI for social good includes a new program to use digital technology to ensure a more sustainable and efficient global food supply.


At CAGR 36.2%, Artificial Intelligence Market 2020: Future Challenges And Industry Growth Outlook 2025

#artificialintelligence

Artificial Intelligence (AI) is the study of "intelligent agents", which can be defined as any device that perceives its environment and takes the actions that give it the highest probability of achieving its goals. It can also be defined as a system's ability to interpret external data, learn from that data, and use those learnings to achieve specific goals through adaptation. AI is also called machine intelligence, referring to intelligence demonstrated by machines. Some capabilities of artificial intelligence include successfully understanding human language, competing at the highest level in strategic games such as chess and Go, autonomously operating cars, intelligent routing in content delivery networks, and military simulations. To solve the problems of learning and perceiving the immediate environment, many approaches have been taken, including statistical methods, computational intelligence, versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics.
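
To make the "intelligent agent" definition concrete, here is a toy perceive-act loop in Python; the thermostat environment, setpoint, and rules are invented purely for illustration.

```python
from typing import Callable, Dict

def thermostat_agent(percept: Dict[str, float]) -> str:
    """A minimal reflex agent: map a percept (room temperature) to the
    action that best pursues its goal of holding the room near 21 C."""
    temp = percept["temperature_c"]
    if temp < 20.0:
        return "heat_on"
    if temp > 22.0:
        return "heat_off"
    return "hold"

def run_episode(agent: Callable[[Dict[str, float]], str],
                start_temp: float, steps: int = 5) -> None:
    temp = start_temp
    for _ in range(steps):
        action = agent({"temperature_c": temp})  # perceive, then act
        print(f"{temp:.1f} C -> {action}")
        # Crude environment model: heating warms the room, otherwise it cools.
        temp += 0.8 if action == "heat_on" else -0.3

run_episode(thermostat_agent, start_temp=18.5)
```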


Can This Tiny Language Model Defeat Gigantic GPT-3?

#artificialintelligence

While GPT-3 has drawn attention for achieving state-of-the-art performance on complex NLP tasks with hundreds of billions of parameters, researchers from LMU Munich, Germany, have proposed a language model that can show similar achievements with far fewer parameters. GPT-3 was trained with 175 billion parameters and thus showed remarkable few-shot abilities; by reformulating a few tasks and priming inputs, it also showed immense capability on the SuperGLUE benchmark. However, it comes with two significant drawbacks: large models aren't always feasible for real-world scenarios, and because the context window of these monstrous models is limited to a few hundred tokens, priming doesn't scale beyond a few examples. Thus, the researchers proposed an alternative to priming: PET, which uses unlabelled data -- easier to gather than labelled data -- making it usable for real-world applications.
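
The excerpt doesn't spell out how PET works; its core idea is to recast a task as a cloze (fill-in-the-blank) question that a pretrained masked language model can already answer. Below is a rough sketch of that reformulation using the Hugging Face fill-mask pipeline; the model choice, the pattern, and the label words are illustrative assumptions, not the paper's exact setup.

```python
from transformers import pipeline

# Cloze-style reformulation in the spirit of PET: a pattern turns a
# sentiment example into a fill-in-the-blank, and a "verbalizer" maps
# predicted label words back to task labels. All choices are illustrative.
fill = pipeline("fill-mask", model="roberta-base")

VERBALIZER = {"great": "positive", "terrible": "negative"}

def classify(review: str) -> str:
    prompt = f"{review} All in all, it was <mask>."
    preds = fill(prompt, targets=["great", "terrible"])
    best = max(preds, key=lambda p: p["score"])
    return VERBALIZER[best["token_str"].strip()]

print(classify("Best pizza in town, and the staff were lovely."))
# -> "positive" (with a well-behaved model)
```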


Deep learning for next-generation sleep diagnostics

#artificialintelligence

Currently, the diagnosis of sleep disorders relies on polysomnographic recordings and a time-consuming manual analysis with low reliability between different manual scorers. Throughout the night, sleep stages are identified manually in non-overlapping 30-second epochs, starting from the onset of the recording, based on electroencephalography (EEG), electro-oculography (EOG), and chin electromyography (EMG) signals, which require meticulous placement of electrodes. Moreover, the diagnosis of many sleep disorders relies on outdated guidelines. When assessing the severity of obstructive sleep apnea (OSA), patients are classified based on thresholds of the apnea-hypopnea index (AHI), i.e. the number of respiratory disruptions per hour of sleep. These thresholds are not fully based on solid scientific evidence and remain the same across different measurement techniques.
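
For concreteness, the AHI threshold scheme the authors criticize can be written out in a few lines; the cut-offs below (5, 15, and 30 events per hour) are the conventional clinical ones, not values proposed by the article.

```python
def ahi(respiratory_events: int, total_sleep_hours: float) -> float:
    """Apnea-hypopnea index: respiratory events per hour of sleep."""
    return respiratory_events / total_sleep_hours

def osa_severity(ahi_value: float) -> str:
    """Classify OSA severity using the conventional AHI thresholds."""
    if ahi_value < 5:
        return "normal"
    if ahi_value < 15:
        return "mild"
    if ahi_value < 30:
        return "moderate"
    return "severe"

# Example: 96 scored events over a 6.4-hour recording -> AHI of 15.0
print(osa_severity(ahi(96, 6.4)))  # -> "moderate"
```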