application


Deep learning enables real-time imaging around corners: Detailed, fast imaging of hidden objects could help self-driving cars detect hazards

#artificialintelligence

"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said research team leader Christopher A. Metzler from Stanford University and Rice University. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner." In Optica, The Optical Society's journal for high-impact research, Metzler and colleagues from Princeton University, Southern Methodist University, and Rice University report that the new system can distinguish submillimeter details of a hidden object from 1 meter away. The system is designed to image small objects at very high resolutions but can be combined with other imaging systems that produce low-resolution room-sized reconstructions. "Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," said co-author Felix Heide from Princeton University.


Operationalizing AI

#artificialintelligence

When AI practitioners talk about taking their machine learning models and deploying them into real-world environments, they don't call it deployment. Instead, the term they use is "operationalizing." This can be confusing for traditional IT operations managers and application developers. Why don't we simply deploy AI models or put them into production? What does AI operationalization mean, and how is it different from typical application development and IT systems deployment?
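
As a rough illustration of the distinction (not anything from the article itself), the sketch below shows why the word "operationalizing" gets used: the unit being shipped is a trained model artifact plus the glue that keeps it healthy in production, such as input validation, versioning, and prediction logging for drift monitoring, rather than just application code. All names, features, and the artifact path are hypothetical.

```python
# Hypothetical sketch: what "operationalizing" adds on top of deploying code.
import json
import logging
import pickle
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-service")

MODEL_VERSION = "2024-01-15"                         # hypothetical version tag
EXPECTED_FEATURES = ["amount", "account_age_days"]   # hypothetical input schema


def load_model(path: str):
    """Load the trained artifact that was produced offline by the ML team."""
    with open(path, "rb") as f:
        return pickle.load(f)


def predict(model, features: dict) -> dict:
    """Validate inputs, score, and log every prediction for later monitoring."""
    missing = [k for k in EXPECTED_FEATURES if k not in features]
    if missing:
        raise ValueError(f"missing features: {missing}")

    # Assumes an sklearn-style model object with a predict() method.
    score = model.predict([[features[k] for k in EXPECTED_FEATURES]])[0]

    # Prediction logs feed drift monitoring and retraining decisions --
    # a key part of what operationalization adds beyond deployment.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "features": features,
        "score": float(score),
    }))
    return {"score": float(score), "model_version": MODEL_VERSION}
```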


Angular Image Classification App Made Simple With Google Teachable Machine

#artificialintelligence

AI is a broad field that encompasses machine learning and deep learning. The history of artificial intelligence in its modern sense begins in the 1950s with the work of Alan Turing and the Dartmouth workshop, which brought together the first enthusiasts of the field and formulated the basic principles of the science of AI. The industry then went through several cycles of surging interest followed by recessions (the so-called "AI winters") before becoming one of the key areas of world science today. Yet although there are many examples and applications of artificial intelligence in use today, a large community of developers is still wondering how or where to start developing AI-driven applications. This article may be a kick start for those who are eager to build AI- or ML-driven applications.
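
The article walks through building an Angular front end around a Google Teachable Machine model. As a rough sketch of the same classification step, the snippet below uses Teachable Machine's Keras export option (keras_model.h5 plus labels.txt) from Python rather than the TensorFlow.js export an Angular app would load; the image path is a placeholder.

```python
# Rough sketch of classifying an image with a Teachable Machine model
# exported in Keras format. File names follow Teachable Machine's export;
# "example.jpg" is a placeholder.
import numpy as np
from PIL import Image, ImageOps
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)      # Teachable Machine export
class_names = [line.strip() for line in open("labels.txt")]

# Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1].
image = Image.open("example.jpg").convert("RGB")
image = ImageOps.fit(image, (224, 224), Image.Resampling.LANCZOS)
data = (np.asarray(image).astype(np.float32) / 127.5) - 1.0
data = np.expand_dims(data, axis=0)

prediction = model.predict(data)
index = int(np.argmax(prediction))
print(class_names[index], float(prediction[0][index]))   # top class and its probability
```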


How the Pentagon's JAIC Picks Its Artificial Intelligence-Driven Projects

#artificialintelligence

The Pentagon launched its Joint Artificial Intelligence Center in 2018 to strategically unify and accelerate AI applications across the nation's defense and military enterprise. Insiders at the center have now spent about nine months executing that defense-driven AI support. At an ACT-IAC forum in Washington Wednesday, Rachael Martin, the JAIC's mission chief of Intelligent Business Automation, Augmentation and Analytics, highlighted the insiders' early approach to automation and innovation. "Our mission is to transform the [Defense] business process through AI technologies, to improve efficiency and accuracy--but really to do all those things so that we can improve our overall warfighter support," Martin said. Within her mission area, Martin and her team explore and develop automated applications that support a range of efforts across the Pentagon, such as business administration, human capital management, acquisitions, finance and budget training, and beyond.


Liquid Cooling Trends in HPC - insideHPC

#artificialintelligence

In this special guest feature, Bob Fletcher from Verne Global reflects on how the liquid cooling technologies on display at SC19 represent more than just a wave. Bob Fletcher is VP of Artificial Intelligence at Verne Global. Perhaps it is because I returned from my last business trip of 2019 to a flooded house, but more likely it's all the wicked cool water-cooled equipment I encountered at SC19 that has put me in a watery mood! Many of the hardware vendors at SC19 were pushing their exascale-ready devices, and about 15% of the devices on a typical computer manufacturer's booth were water-cooled. Adding rack-level water cooling is theoretically straightforward, so I spent a few minutes checking out the various options.


C3.ai: accelerating digital transformation

#artificialintelligence

As one of the leading enterprise AI software providers, C3.ai is renowned for building enterprise-scale AI applications and accelerating digital transformation. Supply Chain Digital takes a closer look at the AI firm. The C3 AI Suite is software that uses a model-driven architecture to speed up delivery and reduce the complexity of developing enterprise-scale AI applications, enabling organisations to deliver AI-enabled applications faster than alternative methods while reducing the technical debt of maintaining and upgrading them. Its solutions cater to a range of industries such as manufacturing, oil and gas, utilities, banking, aerospace and defence, healthcare, retail, telecoms, smart cities and transportation.


MIT's new tool predicts how fast a chip can run your code

#artificialintelligence

Researchers from the Massachusetts Institute of Technology (MIT) have developed a new machine learning-based tool that predicts how fast code will run on various chips, helping developers tune their applications for specific processor architectures. Traditionally, developers have used compilers' performance models in simulation to run basic blocks -- fundamental snippets of machine-level instructions -- in order to gauge a chip's performance. However, these performance models are rarely validated against real-life processor performance. The MIT researchers developed an AI model called Ithemal, trained to predict how fast a chip can run previously unseen basic blocks.
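
This is not MIT's code, but as a minimal sketch of the idea behind a learned throughput predictor: embed the tokens of a basic block, summarize the block with an LSTM, and regress a single cycle-count estimate. The vocabulary size, dimensions, and the random "block" below are arbitrary; real training would use measured timings as targets.

```python
# Minimal sketch (not the MIT tool) of predicting basic-block throughput.
import torch
import torch.nn as nn


class ThroughputPredictor(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)      # predicted cycles

    def forward(self, token_ids):                 # token_ids: (batch, block_length)
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)                  # final hidden state summarizes the block
        return self.head(h[-1]).squeeze(-1)       # (batch,) throughput estimate


# Toy usage: one "basic block" of 5 tokenized instructions.
model = ThroughputPredictor()
block = torch.randint(0, 1000, (1, 5))
print(model(block))   # untrained estimate; training requires measured timings
```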


Living AI: From Potential To Practice

#artificialintelligence

Artificial intelligence (AI) is ubiquitous. Whether we are consciously aware of it or unknowingly using it, AI is present at work, at home and in our everyday transactions. From our productivity in the office to the route we take home to the products we purchase and even the music we listen to, AI is influencing many of our decisions. Those decisions are still ours to make, but soon enough the decisions will be made by AI-enabled systems without waiting for the final approval from us. As of now, the default state for decision systems is "off."


Uber Introduces PyML: Their Secret Weapon for Rapid Machine Learning Development

#artificialintelligence

Uber has been one of the most active companies trying to accelerate the implementation of real-world machine learning solutions. Just this year, Uber has introduced technologies like Michelangelo, Pyro.ai and Horovod that focus on key building blocks of machine learning solutions in the real world. This week, Uber introduced another piece of its machine learning stack, this time aiming to shorten the cycle from experimentation to production. PyML is a library that enables the rapid development of Python applications in a way that is compatible with their production runtime. The problem PyML attempts to address is one of those omnipresent challenges in large-scale machine learning applications.
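
The sketch below is not PyML's actual API; it is just a minimal illustration, with hypothetical names, of the pattern the announcement describes: a model is an ordinary Python class whose prediction interface is exercised the same way in offline experiments and in the online serving environment, so production behaviour matches what was tested.

```python
# Hypothetical sketch (not PyML's real API) of one prediction interface
# shared by offline experimentation and online serving.

class ChurnModel:
    """Hypothetical model packaged for both offline and online use."""

    def __init__(self, weights):
        self.weights = weights    # e.g. loaded from a model artifact store

    def predict(self, features: dict) -> dict:
        score = sum(self.weights.get(k, 0.0) * v for k, v in features.items())
        return {"churn_score": score}


model = ChurnModel(weights={"days_inactive": 0.02, "trips_last_month": -0.05})

# Offline: iterate over a validation set in a notebook or batch job.
offline_rows = [{"days_inactive": 30, "trips_last_month": 2}]
print([model.predict(r) for r in offline_rows])

# Online: the same object is mounted behind a service endpoint, so the
# code path in production is the one already exercised in experiments.
def handle_request(payload: dict) -> dict:
    return model.predict(payload)

print(handle_request({"days_inactive": 3, "trips_last_month": 18}))
```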