Machine Learning

Response to Comment on "Ghost cytometry"


Di Carlo et al. comment that our original results were insufficient to prove that the ghost cytometry technique performs a morphologic analysis of cells in flow. We emphasize that the technique is primarily intended to acquire and classify morphological information of cells in a computationally efficient manner without reconstructing images. We provide additional supporting information, including images reconstructed from the compressive waveforms and a discussion of current and future throughput potentials.

Ghost cytometry (GC) performs a direct analysis of compressive imaging waveforms and thereby substantially relieves the computational bottleneck hindering the realization of high-throughput cytometry based on morphological information (1). The comments by Di Carlo et al. argue against a number of our conclusions (2), but given the restricted length allowed for this response, we will address what we consider the most important points.

What Machine Learning needs from Hardware


On Monday I'll be giving a keynote at the IEEE Custom Integrated Circuits Conference, which is quite surprising even to me, considering I'm a software engineer who can barely solder! Despite that, I knew exactly what I wanted to talk about when I was offered the invitation. If I have a room full of hardware designers listening to me for twenty minutes, I want them to understand what people building machine learning applications need out of their chips. After thirteen years(!) of blogging, I find writing a post the most natural way of organizing my thoughts, so I hope any attendees don't mind some spoilers on what I'll be asking for. At TinyML last month, I think it was Simon Craske from Arm who said that a few years ago hardware design was getting a little bit boring, since the requirements seemed well understood and it was mostly just an exercise in iterating on existing ideas.

IBM Watson Health cuts back Drug Discovery 'artificial intelligence' after lackluster sales


IBM Watson Health is tapering off its Drug Discovery program, which uses "AI" software to help companies develop new pharmaceuticals, blaming poor sales. IBM spokesperson Ed Barbini told The Register: "We are not discontinuing our Watson for Drug Discovery offering, and we remain committed to its continued success for our clients currently using the technology. We are focusing our resources within Watson Health to double down on the adjacent field of clinical development where we see an even greater market need for our data and AI capabilities." In other words, it appears the product won't be sold to any new customers; however, organizations that want to continue using the system will still be supported. When we pressed Big Blue's spinners to clarify this, they tried to downplay the situation using these presumably Watson neural-network-generated words: "The offering is staying on the market, and we'll work with clients who want to team with IBM in this area."

How U.S. Bank Uses A.I. and Machine Learning to Deeply Personalize Your Banking Experience


Among the more than 300 million people in the United States, no two have the same fingerprint. Much like your fingerprint, U.S. Bank recognizes that your banking and financial needs and expectations are uniquely yours. Yet too often, banks and financial institutions apply broad brush-strokes to target your perceived needs and wants instead of following the breadcrumbs and signals customers give every day that indicate what they actually want. Seeking to understand what makes you unique and your journey toward financial success, U.S. Bank is using AI and ML to predict and deeply personalize the banking experience for its customers, bringing new products and better solutions to their financial needs and ultimately staying one step ahead. If you have ever been the victim of suspected fraudulent activity on your account, you know it is a frustrating and disruptive experience.

We could soon have ROBOTS cleaning our messy bedrooms

Daily Mail

A Japanese tech start-up is using deep learning to teach a pair of machines a simple job for a human but a surprisingly tricky task for a robot: cleaning a bedroom. Though it may seem like a basic, albeit tedious, chore for a person, robots find this type of job complicated because it requires the AI to deal with the disorder and chaos of a child's room. Deep learning is where algorithms, inspired by the human brain, learn from large amounts of data so they are able to perform complex tasks. Some tasks, like welding car chassis in exactly the same way day after day, are easy for robots, since the process is repetitive and the machines do not suffer from boredom in the way disgruntled employees do.

Vivienne Sze wins Edgerton Faculty Award

MIT News

Vivienne Sze, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), has received the 2018-2019 Harold E. Edgerton Faculty Achievement Award. The award, announced at today's MIT faculty meeting, commends Sze for "her seminal and highly regarded contributions in the critical areas of deep learning and low-power video coding, and for her educational successes and passion in championing women and under-represented minorities in her field." Sze's research involves the co-design of energy-aware signal processing algorithms and low-power circuit, architecture, and systems for a broad set of applications, including machine learning, computer vision, robotics, image processing, and video coding. She is currently working on projects focusing on autonomous navigation and embedded artificial intelligence (AI) for health-monitoring applications. "In the domain of deep learning, [Sze] created the Eyeriss chip for accelerating deep learning algorithms, building a flexible architecture to handle different convolutional shapes," the Edgerton Faculty Award selection committee said in announcing its decision.

Can science writing be automated?

MIT News

The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

PhD student in machine learning and computational biology University of Helsinki


The Institute for Molecular Medicine Finland (FIMM) is an international research unit focusing on human genomics and personalised medicine at the Helsinki Institute of Life Science (HiLIFE) of the University of Helsinki - a leading Nordic university with a strong commitment to life science research. FIMM is part of the Nordic EMBL Partnership for Molecular Medicine, composed of the European Molecular Biology Laboratory (EMBL) and the centres for molecular medicine in Norway, Sweden and Denmark, and the EU-LIFE Community. A PhD student position is available in the research group of FIMM-EMBL Group Leader Dr. Esa Pitkänen at the Institute for Molecular Medicine Finland (FIMM), University of Helsinki. The research group will start at FIMM in July 2019 and will address data integration, analysis and interpretation challenges stemming from massive-scale data generated in clinical and research settings. We will work closely with interdisciplinary collaborators at the University of Helsinki, Helsinki University Hospital, EMBL and the German Cancer Research Center.

Learn about the Types of Machine Learning Algorithms


We are living in a digitized world in which automation has eliminated an enormous amount of human work; the invention of Google's self-driving car is perhaps its defining moment. But this period is far from over: many more remarkable developments are set to surface in the near future. The most exciting concept underlying all these transformations is machine learning, which is simply allowing computers to learn on their own in order to arrive at useful insights. Supervised learning is similar to a teacher teaching students with examples; after sufficient practice, the teacher stops supervising and lets the students arrive at solutions on their own.
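The teacher analogy can be sketched in a few lines of code. Below is a minimal supervised-learning example using a simple nearest-neighbour rule: the labelled "teacher" examples and the one-dimensional data are entirely made up for illustration, not drawn from the article.

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# Training data plays the role of the teacher's worked examples;
# prediction is the student answering unseen questions alone.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    best_label, best_dist = None, float("inf")
    for x, label in train:
        dist = abs(x - query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# "Teacher" phase: labelled examples the learner studies (made-up data).
examples = [(1, "small"), (2, "small"), (9, "large"), (10, "large")]

# "Exam" phase: unseen inputs classified by proximity to what was taught.
print(nearest_neighbour(examples, 3))   # near the "small" cluster
print(nearest_neighbour(examples, 8))   # near the "large" cluster
```

Real systems replace the hand-rolled distance loop with library implementations and higher-dimensional features, but the teach-then-test structure is the same.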

A Deep Dive into Deep Learning


On Wednesday, March 27, the 2018 Turing Award in computing was given to Yoshua Bengio, Geoffrey Hinton and Yann LeCun for their work on deep learning. Deep learning by complex neural networks lies behind the applications that are finally bringing artificial intelligence out of the realm of science fiction into reality. Voice recognition allows you to talk to your robot devices. Image recognition is the key to self-driving cars. But what, exactly, is deep learning?
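One concrete way to answer that question: a deep network is a stack of simple layers, each a linear transformation followed by a nonlinearity, and stacking them lets the network compute functions no single linear layer can. The toy sketch below uses hand-picked (not learned) weights to make a two-layer network compute XOR, the classic example of such a function; all weights and names here are illustrative.

```python
def relu(v):
    # Nonlinearity applied elementwise; without it, stacked layers
    # would collapse into a single linear transformation.
    return [max(0.0, x) for x in v]

def dense(weights, biases, inputs):
    # One fully connected layer: out_i = sum_j w[i][j] * x[j] + b[i]
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hand-picked weights for a 2-input, 2-hidden-unit, 1-output network
# that computes XOR -- a function no single linear layer can represent.
W1, b1 = [[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]
W2, b2 = [[1.0, -2.0]], [0.0]

def xor_net(x):
    hidden = relu(dense(W1, b1, x))   # layer 1: linear + ReLU
    return dense(W2, b2, hidden)[0]   # layer 2: linear readout

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, "->", xor_net([a, b]))  # prints the XOR truth table
```

"Deep" learning simply repeats this layer-stacking pattern many more times and finds the weights automatically from data instead of by hand.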