Deep Learning for NLP - Part 9 - CouponED

#artificialintelligence

Natural language processing (NLP) is a crucial part of artificial intelligence (AI), modeling how people share information. Since the proliferation of social media, hate speech has become a major crisis. On the one hand, hateful content creates an unsafe environment for certain members of our society. On the other hand, manually moderating hate speech causes distress to content moderators. Moreover, the problem is not just the presence of hate speech in isolation but its ability to spread quickly, which is why early detection and intervention can be most effective.


France calls killing of Islamic State leader big victory

Boston Herald

PARIS (AP) -- The leader of the Islamic State in the Greater Sahara died of wounds from a drone strike that hit him on a motorcycle last month in southern Mali, in a French-led operation involving backup from U.S., EU, Malian and Nigerien military forces, French authorities said Thursday. The French government did not disclose how they identified him as Adnan Abu Walid al-Sahrawi, whose group has terrorized the region. The claim could not immediately be independently verified. France declared the killing a major victory against jihadists in Africa and justification for years of anti-extremist efforts in the Sahel. French government officials described al-Sahrawi as "enemy No. 1" in the region, and accused him of ordering or overseeing attacks on U.S. troops, French aid workers and some 2,000-3,000 African civilians – most of them Muslim.


Various Machine learning methods in predicting rainfall - Tutors India Blog

#artificialintelligence

The term machine learning (ML) refers to systems that learn from data without being explicitly programmed. A major aspect of the machine learning process is performance evaluation. Four commonly used categories of machine learning methods are supervised, semi-supervised, unsupervised, and reinforcement learning. The difference between supervised and unsupervised learning is that supervised learning already has expert knowledge pairing each input with a known output [2]. Unsupervised learning, on the other hand, takes only the input and learns the data distribution or hidden structure to produce an output such as a cluster or feature [3].
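
To make the distinction concrete, here is a minimal Python sketch using scikit-learn with toy rainfall-style data (the features, labels, and values are illustrative, not taken from the post): the supervised regressor learns from labeled input/output pairs, while the unsupervised clusterer sees only the inputs.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Toy daily weather features: [humidity %, temperature C] (illustrative values)
X = np.array([[85, 22], [60, 30], [90, 20], [55, 32], [80, 24]])

# Supervised: expert-provided rainfall labels (mm) pair each input with an output.
y = np.array([12.0, 0.0, 18.0, 0.0, 9.0])
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[88, 21]])))  # rainfall estimate for an unseen day

# Unsupervised: inputs only; the model uncovers structure (e.g., wet vs. dry days).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment for each day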


Initiative aims to spur innovation by connecting, analyzing data bases - Indianapolis Business Journal

#artificialintelligence

Fueled by a $36 million grant from Lilly Endowment Inc., the Central Indiana Corporate Partnership has launched an initiative called AnalytiXIN to promote innovations in data science throughout Indiana. The goal is to build connections between Indiana's manufacturing and life sciences companies and the university researchers who can help them use artificial intelligence and advanced data analytics to tackle big challenges, such as reducing a factory's carbon footprint or improving worker health. "This is one way to ensure early that these kinds of critical collaborations are happening," said David Johnson, president and CEO of the Indianapolis-based Central Indiana Corporate Partnership. About half of the $36 million will be used to hire university-level data-science researchers, some of whom will be based at 16 Tech in Indianapolis. The other half will go toward the creation of "data lakes," or large data sets built from information from multiple contributors.


OpenAI's CLIP is the most important advancement in computer vision this year

#artificialintelligence

CLIP is a gigantic leap forward, bringing many of the recent developments from the realm of natural language processing into the mainstream of computer vision: unsupervised learning, transformers, and multimodality to name a few. The burst of innovation it has inspired shows its versatility. And this is likely just the beginning. There has been scuttlebutt recently about the coming age of "foundation models" in artificial intelligence that will underpin the state of the art across many different problems in AI; I think CLIP is going to turn out to be the bedrock model for computer vision. In this post, we aim to catalog the continually expanding use-cases for CLIP; we will update it periodically.
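
As a taste of that versatility, here is a minimal sketch of zero-shot image classification with the Hugging Face port of CLIP; the checkpoint name is the public base model, and the image path and candidate labels are placeholders.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image
labels = ["a photo of a cat", "a photo of a dog", "a diagram"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))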


FTC says health apps must notify consumers about data breaches -- or face fines – TechCrunch

#artificialintelligence

The U.S. Federal Trade Commission (FTC) has warned that apps and devices collecting personal health information must notify consumers if their data is breached or shared with third parties without their permission. In a 3-2 vote on Wednesday, the FTC agreed on a new policy statement to clarify the 2009 Health Breach Notification Rule, which requires companies handling health records to notify consumers if their data is accessed without permission, such as through a breach. This has now been extended to apply to health apps and devices -- specifically calling out apps that track fertility data, fitness, and blood glucose -- which "too often fail to invest in adequate privacy and data security," according to FTC chair Lina Khan. "Digital apps are routinely caught playing fast and loose with user data, leaving users' sensitive health information susceptible to hacks and breaches," said Khan in a statement, pointing to a study published this year in the British Medical Journal that found health apps suffer from "serious problems" ranging from the insecure transmission of user data to the unauthorized sharing of data with advertisers. There have also been a number of high-profile breaches involving health apps in recent years. Babylon Health, a U.K. AI chatbot and telehealth startup, suffered a data breach last year after a "software error" allowed users to access other patients' video consultations, while period tracking app Flo was recently found to be sharing users' health data with third-party analytics and marketing services.


Top deep learning algorithm to know in 2021 - Techiexpert.com

#artificialintelligence

What is a deep learning algorithm? It is a crucial and advanced technology of modern times, and it forms an integral part of the machine learning ecosystem. If the industry buzz is to be believed, this mode of learning delivers an experience that practitioners come to value greatly. Deep learning algorithms are attracting wide attention these days.


La veille de la cybersécurité

#artificialintelligence

Getting the software right is important when developing machine learning models, such as recommendation or classification systems. But at eBay, optimizing the software to run on a particular piece of hardware using distillation and quantization techniques was absolutely essential to ensure scalability. "[I]n order to build a truly global marketplace that is driven by state of the art and powerful and scalable AI services," Kopru said, "you have to do a lot of optimizations after model training, and specifically for the target hardware." With 1.5 billion active listings from more than 19 million active sellers trying to reach 159 million active buyers, the ecommerce giant has a global reach that is matched by only a handful of firms. Machine learning and other AI techniques, such as natural language processing (NLP), play big roles in scaling eBay's operations to reach its massive audience. For instance, automatically generating descriptions of product listings is crucial for displaying information on the small screens of smartphones, Kopru said.
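
As one concrete illustration of such post-training optimization, here is a minimal sketch of dynamic quantization in PyTorch, assuming a small stand-in model rather than eBay's actual system.

import torch
import torch.nn as nn

# Stand-in classifier; production models are far larger.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
model.eval()

# Post-training dynamic quantization: weights of Linear layers are stored
# as int8, shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x))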


Domino Data Lab's new release pushes the envelope on MLOps

ZDNet

MLOps is the machine learning operations counterpart to DevOps and DataOps. But, across the industry, definitions for MLOps can vary. Some see MLOps as focusing on ML experiment management. Others see the crux of MLOps as setting up CI/CD (continuous integration/continuous delivery) pipelines for models and data the same way DevOps does for code. Other vendors and customers believe MLOps should be focused on so-called feature engineering -- the specialized transformation process for the data used to train ML models.
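
To ground the experiment-management view of MLOps, here is a minimal sketch using the open-source MLflow tracker; the run name, parameters, and metric values are illustrative.

import mlflow

# Each run records the parameters and metrics of one training attempt,
# so experiments stay comparable and reproducible.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("C", 0.1)
    # ... train and evaluate the model here ...
    mlflow.log_metric("val_auc", 0.87)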


One Hundred Year Study on Artificial Intelligence (AI100): 2021 report released

AIHub

Reproduced under a CC BY-ND 4.0 licence. Today, the One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report has been released. The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. The first report was published in 2016, and, like that inaugural document, the 2021 edition has been written by a team of AI experts, all highly experienced in the field. The report aims to address four audiences: the general public, industry, government, and AI researchers. It is structured as a collection of responses by the 2021 Study Panel to 12 standing questions and two workshop questions posed by the AI100 Standing Committee.