Machine Learning


New machine learning method allows hospitals to share patient data -- privately

#artificialintelligence

PHILADELPHIA - To answer medical questions that can be applied to a wide patient population, machine learning models rely on large, diverse datasets from a variety of institutions. However, health systems and hospitals are often resistant to sharing patient data, due to legal, privacy, and cultural challenges. An emerging technique called federated learning is a solution to this dilemma, according to a study published Tuesday in the journal Scientific Reports, led by senior author Spyridon Bakas, PhD, an instructor of Radiology and Pathology & Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania. Federated learning -- an approach first implemented by Google for keyboards' autocorrect functionality -- trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging them. While the approach could potentially be used to answer many different medical questions, Penn Medicine researchers have shown that federated learning is successful specifically in the context of brain imaging, by being able to analyze magnetic resonance imaging (MRI) scans of brain tumor patients and distinguish healthy brain tissue from cancerous regions.
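The core federated learning loop described above can be sketched in a few lines. This is an illustrative sketch, not the Penn Medicine implementation: the `local_update` and `federated_average` functions, the logistic-regression client model, and all parameters are hypothetical stand-ins for the general federated averaging pattern, in which only model weights, never raw patient data, travel between clients and the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps for logistic
    regression on its private data, which never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))     # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)    # mean gradient of log loss
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: average the locally trained weights, weighted by
    each client's dataset size, to form the new global model."""
    sizes = np.array([len(y) for _, y in clients])
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)
```

Each round, the server broadcasts `global_w`, clients train locally, and only the updated weight vectors are averaged centrally.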


Neural Nets Aren't Black Boxes

#artificialintelligence

If you think neural nets are black boxes, you're certainly not alone. While they may not be as interpretable as something like a random forest (at least not yet), we can still understand how they process data to arrive at their predictions. In this post we'll do just that as we build our own network from scratch, starting with logistic regression.
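The post's starting point can be sketched in a few lines of NumPy (this is a rough illustration of the idea, not the post's actual code; the `forward` function and its weight names are assumptions): a one-hidden-layer network is just logistic regression applied to a learned feature transform, and every intermediate value is inspectable.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer net: a ReLU feature transform followed by
    logistic regression. Nothing here is opaque -- each intermediate
    array can be printed and inspected."""
    h = np.maximum(0, W1 @ x + b1)   # hidden features (ReLU)
    return sigmoid(W2 @ h + b2)      # logistic regression on those features
```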


EDN - Embedding AI in smart sensors

#artificialintelligence

The smart sensor market was valued at $30.82 billion in 2018 and is expected to reach $85.93 billion by the end of 2024, growing 18.82% per year over the forecast period 2019-2024. With the growing roles that IoT applications, vehicle automation, and smart wearable systems play in the world's economies and infrastructures, MEMS sensors are now perceived as fundamental components for various applications, responding to the growing demand for performance and efficiency. Connected MEMS devices have found applications in nearly every part of our modern economy, including in our cities, vehicles, homes, and a wide range of other "intelligent" systems. As the volume of data produced by smart sensors rapidly increases, it threatens to outstrip the capabilities of cloud-based artificial intelligence (AI) applications, as well as the networks that connect the edge and the cloud. In this article, we will explore how on-edge processing resources can offload cloud applications by filtering and analyzing data locally, providing insights that improve the intelligence and capabilities of many applications.


Multi-Label Image Classification in TensorFlow 2.0

#artificialintelligence

The 2.2M parameters in MobileNet are frozen, but there are 1.3K trainable parameters in the dense layers. You need to apply the sigmoid activation function in the final neurons to output a probability score for each genre. By doing so, you are relying on multiple logistic regressions trained simultaneously inside the same model. Every final neuron will act as a separate binary classifier for one single class, even though the features extracted are common to all final neurons. When generating predictions with this model, you should expect an independent probability score for each genre; the scores do not necessarily sum to 1. This is different from using a softmax layer in multi-class classification, where the probability scores in the output sum to 1.
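The sigmoid-versus-softmax contrast described above can be verified directly in NumPy; the `logits` values below are hypothetical final-layer scores for three genres, not taken from the article's model.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])  # hypothetical final-layer scores

multi_label = sigmoid(logits)   # independent per-genre probabilities
multi_class = softmax(logits)   # mutually exclusive class probabilities
```

With sigmoid, each output is an independent binary-classifier probability in (0, 1); with softmax, the outputs are forced to sum to 1, which is only appropriate when classes are mutually exclusive.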


Artificial Intelligence: Trends & Applications To Watch In 2020 - Simpliv Blog

#artificialintelligence

For movie buffs, the work that the factory machines do in Charlie Chaplin's 1936 classic, Modern Times, may have seemed too futuristic for its time. Fast forward eight decades, and the colossal changes that Artificial Intelligence is catalyzing around us will most likely give the same impression to future generations. There is one crucial difference, though: while those advancements were confined to the movies, what we are seeing today is real. A question that seems to be on everyone's mind is: what is Artificial Intelligence? The pace at which AI is moving, as well as the breadth and scope of the areas it encompasses, ensures that it is going to change our lives profoundly.


The Next AI Frontier – Software That Writes Software - Liwaiwai

#artificialintelligence

Depending on your opinion, Artificial Intelligence is either a threat or the next big thing. Even though its deep learning capabilities are being applied to help solve large problems, like the treatment and prevention of human and genetic disorders, or small problems, like what movie to stream tonight, AI in many of its forms (such as machine learning, deep learning and cognitive computing) is still in its infancy in terms of being adopted to generate software code. AI is evolving from the stuff of science fiction, research, and limited industry implementations, to adoption across a multitude of fields, including retail, banking, telecoms, insurance, healthcare, and government. However, for the one field ripe for AI adoption – the software industry – progress is curiously slow. Consider this: why isn't an industry, which is built on esoteric symbols, machine syntax, and repetitive loops and functions, all-in on automating code?


Cheap, Easy Deepfakes Are Getting Closer to the Real Thing

WIRED

There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: they were made by machine learning algorithms, not a camera. Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: "People with not a lot of experience can take these machine learning models and do pretty powerful things with them," he says. Seen at full resolution, FireEye's fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor's face, like his brow furrows and green-gray eyes, which gaze coolly at the viewer.


What is Deep Learning - Idiot Developer

#artificialintelligence

Artificial Intelligence (AI) is currently progressing at a great pace, and deep learning is one of the main reasons why, so everyone should have a basic understanding of it. Deep Learning is a subset of Machine Learning, which in turn is a subset of Artificial Intelligence. Deep Learning uses a class of algorithms called artificial neural networks, which are inspired by the way biological neural networks function inside the brain. The advancement in the field of deep learning is due to the tremendous increase in computational power and the availability of huge amounts of data. Deep learning is often far more effective at problem-solving than traditional machine learning algorithms.


How AI is Becoming Essential to Cyber-Strategy

#artificialintelligence

Added to this, the diversity of internet- and network-connected technologies is following an even faster curve. There are some hard truths that many organizations ignore at their own peril: infosec budgets are not matching the pace of change, and most security departments will acknowledge that their resources are already spread too thinly. Now there is an expectation to do much more with even less. In a recent Infosecurity webinar, the topic of the impact of artificial intelligence on cyber-resilience was discussed.


Sight Diagnostics raises $71 million for blood-testing computer vision

#artificialintelligence

This more than doubles the startup's total raised, and a spokesperson says it will be used to accelerate Sight's operations globally -- with a focus on the U.S. -- as Sight advances R&D for the detection of conditions like sepsis and cancer, as well as factors affecting COVID-19. Blood tests are generally unpleasant -- not to mention costly. On average, getting blood work done at a lab costs uninsured patients between $100 and $1,500. In the developing world, where the requisite equipment isn't always readily available, ancillary costs threaten to drive the price substantially higher. That's why Yossi Pollak, previously at Intel subsidiary Mobileye, and Daniel Levner, a former scientist at Harvard's Wyss Institute for Biologically Inspired Engineering, founded Sight Diagnostics in 2011.