The science and tech world has been abuzz about quantum computers for years, but the devices are not yet affecting our daily lives. Quantum systems could securely encrypt data, help us make sense of the huge amount of data we've already collected, and solve complex problems that even the most powerful supercomputers cannot, such as medical diagnostics and weather prediction. That nebulous quantum future came one step closer this November, when the top-tier journal Nature published two papers that showcased some of the most advanced quantum systems yet. If you still don't understand what a quantum computer is, what it does, or what it could do for you, never fear. Futurism recently spoke with Mikhail Lukin, a physics professor at Harvard University and the senior author of one of those papers, about the current state of quantum computing, when we might have quantum technology on our phones or our desks, and what it will take for that to happen. This interview has been slightly edited for clarity and brevity.
What do you think will drive more disruption and the use of AI?
Satish Maripuri: Disruption comes from bringing organizations together across industries with a unique combination of capabilities to drive change. In the case of AI, we look for partnerships inside and outside of healthcare that can drive innovation effectively and more quickly create value. Right now, we are focused on some unique AI-related partnerships that allow radiologists to be the technology trailblazers they always have been. Radiologists began trailblazing technology with the introduction of picture archiving and communication systems (PACS) more than 20 years ago. Today, the latest advancements in radiology are highly receptive to the power of AI to improve productivity and accuracy while reducing the repetitive tasks that lead to burnout.
What do you think of when you hear about AI? Do you picture your favorite sci-fi movie or a book that you read when you were younger? In that favorite book or movie, were the robots smart? Within AI sits machine learning, and within machine learning a subset called "deep learning": networks that can learn, without supervision, from unstructured data. Now the bigger question is: Are you ready to take advantage of deep learning in your business? The vast ocean of data grows exponentially every day.
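To make "learning unsupervised from unstructured data" concrete, here is a minimal sketch of the idea using a tiny linear autoencoder: the network is trained only on raw, unlabeled vectors and discovers on its own that the data really lives in a lower-dimensional space. All of the dimensions, data, and hyperparameters below are illustrative choices for this sketch, not anything from the article.

```python
import numpy as np

# Unlabeled "unstructured" data: 200 four-dimensional vectors whose last
# two coordinates are noisy copies of the first two, so the data secretly
# lives near a 2-D subspace. No labels are ever provided.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)
X[:, 3] = X[:, 1] + 0.1 * rng.normal(size=200)

# A tiny autoencoder: compress 4 -> 2 dimensions, then reconstruct 2 -> 4.
W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder weights
lr = 0.01

for _ in range(3000):
    Z = X @ W_enc        # encode each vector into 2 numbers
    X_hat = Z @ W_dec    # decode back to 4 numbers
    err = X_hat - X      # reconstruction error -- the only "teacher"
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(round(mse, 3))
```

Because the training signal is the data reconstructing itself, no human labeling is needed; after training, the reconstruction error is far below the variance of the raw data, showing the network found the hidden 2-D structure by itself.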
Rael wanted to combine his years of knowledge in the finance world with advertising. He quit his job soon afterwards, and after contacting Dr. Jun Wang, who had published papers on the very same subject, the two decided to co-found MediaGamma. Rael Cline shares with me how MediaGamma is helping companies harness the power of their customer data to build better customer-centric products. Hi Rael, thanks for agreeing to share your story on YHP. Can you give us some background information about yourself?
It might not be too long before your average mobile PC will feature -- on its motherboard -- not just CPUs and GPUs but also an embedded AI inference chip, like the Intel/Movidius Vision Processing Unit (VPU). The first clue for this scenario unfolded in Microsoft Corp.'s launch announcement today, at its Windows Developer Day, of Windows ML, an open-standard framework for machine-learning tasks in the Windows OS. Microsoft said that it is extending Windows OS native support to the Intel/Movidius VPU. Implied in the message is that Intel/Movidius has taken a step closer to finding a home not just in embedded applications, such as drones and surveillance cameras, but also in Windows-based laptops and tablets. In a telephone interview with EE Times, Gary Brown, director of marketing at Movidius/Intel, confirmed, "Although today's announcement isn't about that [VPU integration on a mobile PC], yes, you will see VPU migrating into a PC motherboard."
United Technologies launched a digital accelerator in Brooklyn a year ago with a $300 million investment aimed at developing software that spans the company's various units, such as Otis, Pratt & Whitney, and Carrier. Vince Campisi, chief information officer at UTC, is overseeing UTC's digital efforts with the aim of connecting software, analytics, and the Internet of Things to drive the company's products and services and improve customer operations overall. We caught up with Campisi to talk shop and digitization. Here are a few highlights of our chat. Campisi noted that IT groups have classically focused on operations, but the move to digitization is bringing more groups into the mix.
Artificial intelligence is a hot topic right now--but whether or not it is going to live up to what some are calling the new healthcare reform is still up for discussion. Mayo Clinic Chief Information Officer Christopher Ross and PricewaterhouseCoopers Managing Director James Golden tackled questions about the future of AI at HIMSS18. "In some ways [AI] could not be more hyped than it is. There is an enormous expectation for this stuff," Golden said. AI is already proving to be helpful in studies.
Al Martin: Hi folks, this is Al Martin from Making Data Simple, the series, if you will. Today I have Jean-Francois Puget.
Jean-Francois Puget: Yes, you did great. You passed your French test.
Al Martin: All right, good, I'm going to give you the [name] JFP from now on, is that all right? So JFP is the distinguished engineer for machine learning and optimization; that's the topic today and we're going to go into that. I also have with me [Steve Moore], who is a senior content designer and storage strategist. So Steve wanted to join the conversation, ask a few questions. So he'll ask the intelligent questions, I will ask the normal, blockhead questions, if you will. So, thank you for being here. We've done a lot, well, we've done at least, I think, two podcasts on machine learning.
On today's episode of "The Interview" with The Next Platform, we focus on how geographic information systems (GIS), as a field, are being revolutionized by deep learning. This stands to reason given the large volumes of satellite image data and the robust deep learning frameworks that excel at image classification and analysis, a volume issue that has been compounded by more satellites with ever-higher-resolution output. Unlike other areas of large-scale scientific data analysis that have traditionally relied on massive supercomputers, our audio interview (player below) reveals that a great deal of GIS analysis can be done on smaller systems. However, with the addition of deep learning, the field could be investing in more GPU systems for training and still others for inference at scale. Using lower-end Titan X GPUs from Nvidia, the team, which includes Sudeep Sarkar and Mauricio Pamplona Segundo, created the CNN approach to GIS land classification described here, showing that deep learning can be a successful tool in the GIS analyst's toolbox.
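The core operation behind CNN-based land classification can be sketched in a few lines: slide a small filter over an image patch to produce a feature map, pool it, and classify from the pooled response. The sketch below is purely illustrative and is not the team's actual model; the hand-written edge filter stands in for a learned one, and the toy 6x6 "satellite patch" and the "boundary"/"uniform" labels are invented for this example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D sliding-window filter (cross-correlation, as in
    most deep learning libraries' 'convolution' layers)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy patch: bright "field" on the left half, dark "water" on the right.
patch = np.zeros((6, 6))
patch[:, :3] = 1.0

# A vertical-edge filter: in a real CNN these weights would be learned.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

fmap = conv2d(patch, kernel)   # 4x4 feature map
score = fmap.max()             # global max pooling
label = "boundary" if score > 1.0 else "uniform"
print(score, label)            # -> 3.0 boundary
```

The filter responds strongly (score 3.0) exactly where the land-cover type changes; stacking many learned filters, nonlinearities, and pooling layers is what lets a full CNN label each pixel of a satellite scene by land class.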