Global Big Data Conference

#artificialintelligence

A new course from AWS gives business leaders a foundational understanding of machine learning and its use cases without requiring a deep knowledge of Python or coding. On one hand, organizations recognize the potential value of machine learning to scale operations, gain faster and deeper insights, respond to quickly changing conditions, and more. On the other hand, it's hard to get started on something that is novel to your organization: you may not have the talent in-house, and you may have no prior experience. What's more, even among organizations that have run successful pilots, many have struggled to move those pilots into production for a variety of reasons.


Winning The AI-Enabled War-at-Sea

#artificialintelligence

DARPA's Ocean of Things (OoT) program aims to achieve maritime situational awareness over large ocean areas by deploying thousands of small, low-cost floats that form a distributed sensor network. Each smart float will have a suite of commercially available sensors to collect environmental and activity data; the latter function involves automatically detecting, tracking, and identifying nearby ships and, potentially, close aircraft traffic. The floats use edge processing with detection algorithms and then transmit the semi-processed data periodically via the Iridium satellite constellation to a cloud network for on-shore storage. Machine learning then combs through this sparse data in real time to uncover hidden insights. The floats are environmentally friendly, have a service life of around a year, and in purchases of 50,000 have a unit cost of about US$500 each.
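To make the edge-processing pattern concrete, here is a minimal sketch of how a float might filter raw detections locally and batch-transmit them on a schedule. All names, thresholds, and the classifier here are hypothetical illustrations, not the program's actual software:

```python
import time

DETECTION_THRESHOLD = 0.8   # hypothetical confidence cutoff
REPORT_INTERVAL_S = 3600    # hypothetical once-an-hour uplink window

def classify(sensor_frame):
    """Placeholder for an onboard detection algorithm.

    A real float would run a lightweight classifier over its sensor
    readings; here we pretend each frame already carries a score and label.
    """
    return sensor_frame["score"], sensor_frame["label"]

def run_float(sensor_stream, uplink):
    """Filter raw readings at the edge, then batch-transmit periodically."""
    buffer, last_report = [], time.monotonic()
    for frame in sensor_stream:
        score, label = classify(frame)
        if score >= DETECTION_THRESHOLD:           # keep only likely contacts
            buffer.append({"t": frame["t"], "label": label, "score": score})
        if time.monotonic() - last_report >= REPORT_INTERVAL_S:
            uplink(buffer)                          # e.g. an Iridium modem call
            buffer, last_report = [], time.monotonic()
```

The key design choice is that only high-confidence, already-summarized contacts cross the expensive satellite link, which is what keeps each float's bandwidth and power budget small.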


Deep Learning Tutorial for Beginners: A [Step-by-Step] Guide

#artificialintelligence

Deep Learning is a subfield of machine learning that imitates the workings of the human brain with the help of artificial neural networks. It is useful for processing Big Data and can surface patterns that provide valuable insight for decision making. Manually labeling raw, unlabeled data is time-consuming and expensive; Deep Learning helps overcome this with highly sophisticated algorithms that deliver essential insights by analyzing and aggregating the data. Deep Learning leverages multiple layers of neural networks, which enable learning, unlearning, and relearning.
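As a minimal illustration of what "layers" means here, the sketch below runs a forward pass through a toy two-layer network in NumPy; the sizes and weights are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy two-layer network: each layer is a weight matrix plus a bias,
# and stacking layers is what makes the model "deep".
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input dim 4 -> hidden dim 8
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden dim 8 -> output dim 1

def forward(x):
    h = relu(x @ W1 + b1)       # first layer extracts simple features
    return h @ W2 + b2          # second layer combines them into a prediction

x = rng.normal(size=(3, 4))     # a batch of 3 examples
print(forward(x).shape)         # (3, 1)
```

Training would then adjust W1, b1, W2, and b2 by gradient descent, which is, loosely, the "learning, unlearning, and relearning" the tutorial refers to.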


Tomorrow's Smart Cities - Data Aided Design

#artificialintelligence

You are sitting alone in a moving vehicle crossing the main square; it is a warm day, and the vehicle is silent as it speeds along the wide street. An automated voice speaks: – According to your averaged data, your pulse rate is irregular and your body temperature is higher than normal; you are advised to book a doctor's appointment. I will remind you 30 minutes beforehand. Imagine a city built for people to live more connected lives, where AI can help us in our daily routines and keep watch over our health.


During the pandemic, viewers have turned to content creators for mental health support

Washington Post - Technology News

The problem is that while streamers offer companionship and entertainment, they aren't trained therapists. Games and streamers have helped viewers maintain a sense of normalcy at an uncertain time. But stream chats and Discord servers can take a dark turn when viewers share their intimate mental health struggles. Responding during these sensitive moments is a big responsibility, and mental health experts say professional and medical intervention is often necessary. And while growing their audience is every content creator's dream, that growth can make it more difficult to handle, or even to notice, individual cries for help.


Perceived Realism of High-Resolution Generative Adversarial Network–derived Synthetic Mammograms

#artificialintelligence

To explore whether generative adversarial networks (GANs) can enable synthesis of realistic medical images that are indiscernible from real images, even by domain experts. In this retrospective study, progressive growing GANs were used to synthesize mammograms at a resolution of 1280 × 1024 pixels by using images from 90,000 patients (average age, 56 years ± 9) collected between 2009 and 2019. To evaluate the results, a method to assess distributional alignment for ultra–high-dimensional pixel distributions was used, which was based on moment plots. This method was able to reveal potential sources of misalignment. A total of 117 volunteer participants (55 radiologists and 62 nonradiologists) took part in a study to assess the realism of synthetic images from GANs.
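The abstract doesn't spell out the moment-plot method, but the general idea of checking distributional alignment via low-order moments can be sketched as follows. This is an illustrative reading, not the authors' implementation, and the data here are random stand-ins:

```python
import numpy as np
import matplotlib.pyplot as plt

def pixel_moments(images):
    """Per-pixel mean and std across an image stack of shape (n, h, w)."""
    return images.mean(axis=0).ravel(), images.std(axis=0).ravel()

# Stand-in random data; in the study these would be real vs. synthetic mammograms.
rng = np.random.default_rng(0)
real = rng.beta(2.0, 5.0, size=(200, 64, 64))
fake = rng.beta(2.0, 5.0, size=(200, 64, 64))

real_mean, real_std = pixel_moments(real)
fake_mean, fake_std = pixel_moments(fake)

# Quantile-style comparison: if the two distributions align, each sorted
# scatter hugs the diagonal y = x; systematic departures flag misalignment.
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, r, f, title in [(axes[0], real_mean, fake_mean, "per-pixel mean"),
                        (axes[1], real_std, fake_std, "per-pixel std")]:
    ax.scatter(np.sort(r), np.sort(f), s=2)
    ax.plot([0, 1], [0, 1], color="red", linewidth=1)
    ax.set(title=title, xlabel="real", ylabel="synthetic")
plt.tight_layout()
plt.show()
```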


Toward deep-learning models that can reason about code more like humans

#artificialintelligence

Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time. Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write computer code after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.


New Zealand installs the world's most advanced AI supercomputer

#artificialintelligence

New Zealand's most powerful supercomputer for artificial intelligence applications has been installed at the University of Waikato as part of its commitment to positioning New Zealand as a world leader in AI research and development. The NVIDIA DGX A100 is the first computer of its kind in New Zealand and is the world's most advanced system for powering universal AI workloads. The machine can rapidly and efficiently process massive amounts of data, allowing students and researchers at the University to work at lightning-fast speeds and enabling machine learning and artificial intelligence that can tackle problems from addressing climate change to managing biodiversity. Professor Albert Bifet says that students and researchers could take months, or even years, to process the data needed to create models like the ones they are working on if they had to use more traditional computing: "This computer will allow our researchers to process that data in a matter of days. It will enable them to gain insights and progress their research at an unprecedented scale."


Markov models and Markov chains explained in real life: probabilistic workout routine

#artificialintelligence

Andrei Markov disagreed with Pavel Nekrasov's claim that independence between variables was necessary for the Weak Law of Large Numbers to apply. When you collect independent samples, the mean of those samples converges to the true mean of the population as the number of samples grows. But Markov believed independence was not a necessary condition for the mean to converge, so he set out to define how the average of the outcomes from a process involving dependent random variables could converge over time. Thanks to this intellectual disagreement, Markov created a way to describe how random (also called stochastic) systems or processes evolve over time.
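In the spirit of the article's title, here is a minimal Markov-chain sketch of a workout routine; the exercise states and transition probabilities are invented for illustration:

```python
import numpy as np

# Hypothetical workout states and transition probabilities. Each row of P
# gives the distribution of the next exercise given the current one, so
# every row sums to 1. The Markov property: the next state depends only
# on the current state, not on the full history of the routine.
states = ["rest", "squats", "push-ups", "burpees"]
P = np.array([
    [0.2, 0.4, 0.3, 0.1],   # from rest
    [0.3, 0.1, 0.4, 0.2],   # from squats
    [0.3, 0.3, 0.1, 0.3],   # from push-ups
    [0.6, 0.2, 0.1, 0.1],   # from burpees (you probably need a rest)
])

rng = np.random.default_rng(0)

def simulate(start, n_steps):
    """Sample a routine by repeatedly drawing the next state from P."""
    i = states.index(start)
    routine = [start]
    for _ in range(n_steps):
        i = rng.choice(len(states), p=P[i])
        routine.append(states[i])
    return routine

print(simulate("rest", 10))
print(np.linalg.matrix_power(P, 50)[0])  # long-run state frequencies
```

Even though consecutive exercises are dependent, the empirical frequency of each state still settles down over a long routine (the rows of np.linalg.matrix_power(P, 50) show where), which is exactly Markov's point against Nekrasov.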


Beyond CUDA: GPU Accelerated Python for Machine Learning on Cross-Vendor Graphics Cards Made Simple

#artificialintelligence

Machine learning algorithms -- together with many other advanced data processing paradigms -- fit incredibly well with the parallel architecture that GPU computing offers. This has driven massive growth in the advancement and adoption of graphics cards for accelerated computing in recent years, along with exciting research on techniques that optimize for concurrency, such as model parallelism and data parallelism. In this article you'll learn how to write your own GPU-accelerated algorithms in Python, which you will be able to run on virtually any GPU hardware, including non-NVIDIA GPUs. We'll introduce core concepts and show how you can get started with the Kompute Python framework with only a handful of lines of code. First, we will build a simple GPU-accelerated Python script that multiplies two arrays in parallel, which will introduce the fundamentals of GPU processing.
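A sketch of that first example might look like the following. One caveat: the Kompute Python API has changed across releases, and this assumes an older kp release in which kp.Shader.compile_source compiles GLSL at runtime (newer versions expect precompiled SPIR-V bytes instead), so treat it as the shape of the workflow rather than copy-paste code:

```python
import numpy as np
import kp

# GLSL compute shader that multiplies two arrays element-wise.
SHADER = """
#version 450
layout (local_size_x = 1) in;
layout (set = 0, binding = 0) buffer bufA { float a[]; };
layout (set = 0, binding = 1) buffer bufB { float b[]; };
layout (set = 0, binding = 2) buffer bufOut { float out_v[]; };

void main() {
    uint i = gl_GlobalInvocationID.x;
    out_v[i] = a[i] * b[i];
}
"""

mgr = kp.Manager()  # picks an available Vulkan-capable GPU

t_a = mgr.tensor(np.array([2.0, 4.0, 6.0], dtype=np.float32))
t_b = mgr.tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
t_out = mgr.tensor(np.zeros(3, dtype=np.float32))
params = [t_a, t_b, t_out]

# One shader invocation per array element via the (3, 1, 1) workgroup.
algo = mgr.algorithm(params, kp.Shader.compile_source(SHADER), (3, 1, 1))

(mgr.sequence()
    .record(kp.OpTensorSyncDevice(params))   # host -> GPU
    .record(kp.OpAlgoDispatch(algo))         # run the shader
    .record(kp.OpTensorSyncLocal(params))    # GPU -> host
    .eval())

print(t_out.data())  # expected: [ 2.  8. 18.]
```

Because Kompute sits on Vulkan rather than CUDA, the same script runs on AMD, Intel, and other Vulkan-capable GPUs, which is the cross-vendor point of the article.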