Machine Learning


RADSpa - RIS PACS with AI Enabled Radiology Workflow Platform

#artificialintelligence

RADSpa is Telerad Tech's next-generation, AI-integrated radiology workflow platform with an integrated RIS PACS, designed to scale from a standalone diagnostics center to large multi-site, multi-geography radiology centers and hospitals. RADSpa is available in Cloud, Enterprise, and OEM licensing models. It is currently deployed in more than 24 countries and offers advanced analytics and workflow orchestration capabilities. It supports flexible radiology needs with customizable, dynamic workflows that enable seamless delivery across borders. Its enhanced Patient Security Framework enables secure, anonymized cross-border study transmission and reporting.


Children and machines think a lot alike

#artificialintelligence

Martin Spano is the author of Artificial Intelligence in a Nutshell, a book that explores the mystified subject of artificial intelligence (AI) in simple, non-technical language. Spano's passion for AI began after he watched 2001: A Space Odyssey, but he insists this ever-changing technology is not just a subject for sci-fi novels and movies; artificial intelligence is present in our everyday lives. Alex Krizhevsky was born in Ukraine but lived most of his life in Canada. After finishing his undergraduate studies, he continued as a postgraduate under the supervision of Geoffrey Hinton, the legendary computer scientist and cognitive psychologist and one of the foremost advocates of using artificial neural networks for artificial intelligence. Krizhevsky stumbled upon an algorithm by Hinton that used graphics cards instead of processors for its execution.


Beyond Clustering: The New Methods that are Pushing the Future of Unsupervised Learning

#artificialintelligence

If you ask any group of data science students about the types of machine learning algorithms, they will answer without hesitation: supervised and unsupervised. However, if we ask that same group to list different types of unsupervised learning, we are likely to get an answer like clustering but not much more. While supervised methods lead the current wave of innovation in areas such as deep learning, there is very little doubt that the future of artificial intelligence (AI) will transition towards more unsupervised forms of learning. In recent years, we have seen a lot of progress on several new forms of unsupervised learning methods that expand way beyond traditional clustering or principal component analysis (PCA) techniques. Today, I would like to explore some of the most prominent new schools of thought in the unsupervised space and their role in the future of AI.
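For reference, the "traditional clustering" the article contrasts against can be sketched in a few lines. This is a minimal, illustrative 1-D k-means pass in plain Python (the data and function name are invented for the example, not from the article):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster ends up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups: centroids converge near 1.0 and 10.0.
print(kmeans_1d([1.0, 1.2, 0.8, 9.8, 10.0, 10.2], k=2))
```

The point of the sketch is how little structure the algorithm assumes: no labels, just a distance and a notion of "center" — which is exactly why the newer unsupervised methods the article discusses have so much further to reach.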


A day at the beach: Deep learning for a child

#artificialintelligence

The beach offers a wide open playscape where children are fuelled by curiosity. Whether at the beach or elsewhere outdoors, it helps to take a moment to see the world through the lens of a child who is discovering the world anew, and slow down to be present. Part of what happens through children's play is the exhilaration of making choices. These choices, and their consequences, are part of the child's emerging sense of agency and identity. Children's inquisitive minds crave opportunities that allow them to become designers, builders, mathematicians and innovators of their world.


Keeping Up with Robotics Trends Through RoboCup

#artificialintelligence

In March 2017, I joined the MathWorks Student Competitions team to focus on supporting university-level robotics competitions. The competition I spend the most time with is RoboCup, which is great because RoboCup contains a variety of leagues and skill levels that keep me sharp on almost everything going on in the field. Today I will talk about my experience in this role, and what it has been like returning to robotics and academia after more than 5 years away from the field. Let me start with a personal history lesson about my experience in robotics. I am a mechanical engineer with a background in controls, dynamics, and systems.


Scientists detect EIGHT new mysterious radio signals coming from deep space

Daily Mail - Science & tech

Scientists have found eight more mysterious repeating radio bursts emanating from deep space, more than quadrupling the number of such signals known earlier this year. The new signals were found by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) radio telescope and give scientists a much broader data set that they hope may finally unlock the bursts' origin. With the discovery, described in a paper submitted to The Astrophysical Journal Letters, the number of known repeating radio bursts has climbed to 11. The new signals will aid scientists in their efforts to trace the origin and cause of these mysterious radio bursts from deep space. According to Nature, the results of a separate observation by researchers in Australia have yet to be published, but they bring the number of findings this month alone to nine in total.


4 Proven Ways Newbie Analysts Can Become Machine Learning Pros Transforming Data with Intelligence

#artificialintelligence

These four recommendations can help prepare you -- or the novice analyst on your team -- for a career in this burgeoning field. When Aurora Peddycord-Liu started as an analytical education intern at SAS in the summer of 2017, she came with a solid educational background from Worcester Polytechnic Institute and NC State's computer science Ph.D. program. These programs prepared her well for her current position at SAS, where she uses data to derive actionable insights on the design and use of SAS e-learning courses, but she has had to adapt her skill set to face the challenges of a real-world analytics position. To learn how newbie analysts can prepare for their work in this hot new age of machine learning, I spoke with Peddycord-Liu and with Dan Olley, global CTO at Elsevier.

Recommendation #1: Don't be overwhelmed -- just get started. Don't be intimidated by the powerful tools at your disposal; find a point to start and dive in.


Key considerations for operationalizing machine learning

#artificialintelligence

Training a machine learning model is important, but you need to get the model into a production environment, working on real-world data, to get real value from it. In the lingo of artificial intelligence, putting machine learning models into environments where they act on real-world data and provide real-world predictions is called "operationalizing" the models. Why don't we simply say we're "deploying" an AI model or putting it into production? Once a model has been trained, it needs to be applied to a particular problem, but you can apply that model in any of a number of ways: the model can sit on a desktop machine providing results on demand, it can sit at the edge in a mobile device, or it can sit in a cloud or server environment providing results in a wide range of use cases.
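One common thread across all of those deployment targets is wrapping the trained artifact behind a stable prediction interface with input validation, so the same handler can sit behind a desktop app, an edge device, or a server endpoint. The sketch below is illustrative only: the "model" is a hypothetical hand-coded linear scorer standing in for a real trained artifact, and all names are invented for the example:

```python
import json

def load_model():
    """Stand-in for loading a trained artifact (e.g. from disk or a
    model registry). Here the 'model' is a hand-coded linear scorer."""
    weights = {"age": 0.02, "income": 0.00001}
    bias = -0.5
    def predict(features):
        return bias + sum(weights[k] * features[k] for k in weights)
    return predict

def serve(request_body, model):
    """A transport-agnostic handler: validate input, run the model,
    return a JSON response. The same function could back an HTTP
    endpoint, a message-queue consumer, or a CLI."""
    try:
        features = json.loads(request_body)
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid JSON"})
    missing = {"age", "income"} - features.keys()
    if missing:
        return json.dumps({"error": f"missing fields: {sorted(missing)}"})
    return json.dumps({"score": model(features)})

model = load_model()
print(serve('{"age": 35, "income": 50000}', model))
```

Keeping the handler transport-agnostic is what makes the desktop/edge/cloud choice the article describes a packaging decision rather than a rewrite.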


Machine Learning on dask

#artificialintelligence

So, in general, dask provides out-of-core abstractions over existing functionality in pandas and numpy. The part I find really interesting is the way these out-of-core abstractions are done. Rather than reimplementing much of the pandas API, dask creates wrappers that split certain operations into aggregates that work on small chunks of the original data. For instance, the arithmetic mean() can easily be parallelised, since the per-chunk sums as well as the overall count of values can be computed on small samples and then aggregated together. Below is a visualisation of what that looks like for a dask.dataframe with 3 partitions. While the arithmetic mean is rather trivial to parallelise, other computations are not.


How to Train a Progressive Growing GAN in Keras for Synthesizing Faces

#artificialintelligence

Generative adversarial networks, or GANs, are effective at generating high-quality synthetic images. A limitation of GANs is that they are only capable of generating relatively small images, such as 64×64 pixels. The Progressive Growing GAN is an extension to the GAN training procedure that involves training a GAN to generate very small images, such as 4×4, and incrementally increasing the size of the generated images to 8×8, 16×16, and so on, until the desired output size is met. This has allowed the progressive growing GAN to generate photorealistic synthetic faces at 1024×1024 pixel resolution. The key innovation of the progressive growing GAN is its two-phase training procedure, which involves fading in new blocks to support higher-resolution images, followed by fine-tuning. In this tutorial, you will discover how to implement and train a progressive growing generative adversarial network for generating celebrity faces. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code. Photo by Alessandro Caproni, some rights reserved. GANs are effective at generating crisp synthetic images, although they are typically limited in the size of the images that can be generated.
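The fade-in at the heart of the two-phase procedure can be illustrated independently of Keras: while a new higher-resolution block is introduced, its output is blended with an upsampled copy of the previous stage's output, using a weight alpha that ramps from 0 to 1 over training. The sketch below uses hypothetical 1-D "images" standing in for 2-D feature maps, purely to show the blending arithmetic:

```python
def nearest_upsample(pixels):
    """Double a 1-D 'image' by repeating each value, standing in for
    the 2-D nearest-neighbour upsampling used when growing."""
    return [v for v in pixels for _ in range(2)]

def fade_in(old_output, new_output, alpha):
    """Weighted sum used while growing: alpha=0 is purely the old
    (upsampled) path, alpha=1 is purely the new block; alpha is
    ramped linearly during the fade-in phase of training."""
    up = nearest_upsample(old_output)
    return [(1 - alpha) * o + alpha * n for o, n in zip(up, new_output)]

old = [0.2, 0.8]            # output of the 2-pixel stage
new = [0.1, 0.3, 0.7, 0.9]  # output of the new 4-pixel block
print(fade_in(old, new, alpha=0.5))
```

In the real model this weighted sum is a layer in both the generator and discriminator, and the subsequent fine-tuning phase trains with alpha held at 1 so the new block fully takes over.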