In this new world of artificial intelligence and data management, it's easy to get confused by terms commonly used in IT. Data science and machine learning, for example, have a lot to do with each other, so it's not surprising that many people with only a passing knowledge of these disciplines have trouble figuring out how they differ. First of all, data science is a broad, overarching category of technology that encompasses many different types of projects and creations.
Last June, a team at Harvard Medical School and MIT showed that it's pretty darn easy to fool an artificial intelligence system analyzing medical images. Researchers modified a few pixels in eye images, skin photos and chest X-rays to trick deep learning systems into confidently classifying perfectly benign images as malignant. These so-called "adversarial attacks" make small, carefully designed changes to data, in this case pixel changes imperceptible to human vision, to nudge an algorithm into a mistake. That's not great news at a time when medical AI systems are just reaching the clinic, with the first AI-based medical device approved in April and AI systems besting doctors at diagnosis across healthcare sectors. Now, in collaboration with a Harvard lawyer and ethicist, the same team is out with an article in the journal Science offering suggestions about when and how the medical industry might intervene against adversarial attacks.
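The mechanics behind such attacks can be sketched with a toy stand-in for the medical classifier. This is a minimal illustration of the fast gradient sign method, one common way to craft adversarial perturbations; the logistic-regression "model", the input, and every number below are assumptions for illustration, not the Harvard/MIT setup:

```python
import numpy as np

# Hypothetical toy "classifier": logistic regression with fixed random
# weights, standing in for a deep image model. Purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # weights over a flattened 8x8 "image"
b = 0.0

def predict_malignant_prob(x):
    # Sigmoid score; above 0.5 means the model calls the image malignant.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A "benign" input: the model scores it well below the 0.5 threshold.
x = -0.1 * np.sign(w) + 0.01 * rng.normal(size=64)

# Fast gradient sign method: push every pixel by epsilon in the direction
# that increases the malignant score. For this linear model, the gradient
# of the score with respect to x is proportional to w.
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)

print(predict_malignant_prob(x))      # low score: classified benign
print(predict_malignant_prob(x_adv))  # high score: flipped to malignant
```

The per-pixel change is tiny and uniform, yet because every pixel moves in the gradient's direction at once, the score flips decisively, which is why such perturbations can be invisible to the human eye while being devastating to the model.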
Nvidia has been more than a hardware company for a long time. As its GPUs are broadly used to run machine learning workloads, machine learning has become a key priority for Nvidia. At its GTC event this week, Nvidia made a number of related announcements, aiming to build on machine learning and extend into data science and analytics. Nvidia wants to "couple software and hardware to deliver the advances in computing power needed to transform data into insights and intelligence." Jensen Huang, Nvidia CEO, emphasized the collaboration between chip architecture, systems, algorithms and applications.
Medical artificial intelligence breaks a little too easily. Although AI promises to improve healthcare by quickly analysing medical scans, there is increasing evidence that it trips up on seemingly innocuous changes. Sam Finlayson at Harvard Medical School and his colleagues fooled three AIs designed for scanning medical images into misclassifying them by simply altering a few pixels. In one example, the team ever so slightly altered a picture of a mole that was first classified as benign with 99 per cent confidence. The AI then classified the altered image as malignant with 100 per cent confidence, despite the two images being indistinguishable to the human eye.
In the marketplace for artificial intelligence technology, giant companies like Google, Amazon, and Microsoft offer a powerful, centralized approach: They sell access to platforms for machine learning that hoover up vast amounts of users' personal and proprietary information and use that data to train AI models. A new development called federated learning offers an alternative to the centralized model. It promises to distribute the power of machine learning to mobile phones, IoT devices, and other equipment on the network edge. The payoff: Better performance and enhanced data security. By distributing AI training to the edge, "you speed up the training process significantly, and you get better accuracy," says Marcin Rojek, co‑founder at byteLAKE, a Poland‑based company working on federated learning solutions using Internet of Things (IoT) devices.
The private consumer market is full of language learning apps that claim to use AI to help people learn foreign languages. Last week, Busuu launched its AI-powered vocabulary training, and Duolingo has its AI conversational bots, just to name a few. Speexx is aimed squarely at international enterprise customers, and Armin Hopp explained to me why AI is more than just a buzzword at his company. "I do think AI is used too often where we actually mean programmed queries on big data sets. That's what marketing does," said Hopp. "What we did was completely rebuild our entire tech ecosystem from scratch, starting four years ago. It has been a huge effort, but now we have a great foundation that really uses AI to benefit users and our customers."
I've been a student of Machine Learning for the past two years, but this past year was when I finally got to apply what I learned and solidify my understanding of it. So I decided to share 7 lessons I learned during my "first" year of Machine Learning and hopefully make this article an annual tradition. Nowadays, it is relatively easy to learn about Machine Learning thanks to the vast selection of learning resources that exist online. Unfortunately, many of them tend to gloss over the data collection and cleaning steps. During my first serious Machine Learning project, my team and I ran into the BIG question: where do we get our data from?
As a recent graduate of the Flatiron School's Data Science Bootcamp, I've been inundated with advice on how to ace technical interviews. A soft skill that keeps coming to the forefront is the ability to explain complex machine learning algorithms to a non-technical person. This series of posts is me sharing with the world how I would explain all the machine learning topics I come across on a regular basis...to my grandma. Some get a bit in-depth, others less so, but all, I believe, are useful to a non-Data Scientist, and I'll be going over a range of them in the upcoming parts of this series. To summarize, an algorithm is the mathematical life force behind a model.
Daniel D. Gutierrez is a practicing data scientist who's been working with data since long before the field came into vogue. As a technology journalist, he enjoys keeping a pulse on this fast-paced industry. Daniel is also an educator, having taught data science, machine learning and R classes at the university level. He has authored four computer industry books on database and data science technology, including his most recent title, "Machine Learning and Data Science: An Introduction to Statistical Learning Methods with R." Daniel holds a BS in Mathematics and Computer Science from UCLA.
CarveNiche Technologies on Friday announced a collaboration with tech major Microsoft for a new Artificial Intelligence (AI)-based Math learning programme to supplement classroom learning for school students. Called beGalileo, the programme uses AI to collect and analyse performance information and customise learning to serve each student. Meant for students from class 1 to 10, the programme supports beginners and offers them challenging questions as they advance. "Our product 'beGalileo' is a highly personalised Math learning programme for K12 education, and our motto has been to help every child fall in love with Math," Avneet Makkar, CEO, CarveNiche Technologies Pvt Ltd said in a statement. "This association would help us reach a wider network and would be an ideal combination of Microsoft's advanced cloud infrastructure and CarveNiche's rich academic content and technology," Makkar added.