Artificial Intelligence (AI) is often regarded as “great and powerful”: it can add tremendous value by transforming business workflows with faster, smarter decisions. At the same time, AI can seem mysterious and even scary. To build trust, AI needs to be transparent and explainable: brought “out from behind the curtain,” so to speak. As IBM’s recent study on AI ethics found, corporate boards are looking to data and technology leaders to make that happen, and I couldn’t agree more. CDOs and CTOs can be instrumental in bringing forth both human value and human values in enterprise AI.

Putting the human first

To build trust in business AI, we must always put the value of the human first. This should happen at both the data-provider level and the decision-maker level. At the provider level, building trust starts with data governance to ensure that the data itself can be trusted. In our organization, embedded within this is the IBM…
The vision of smart autonomous robots in indoor environments is becoming a reality this decade, driven by the emerging technologies of sensor fusion and artificial intelligence. Sensor fusion is the process of aggregating informative features from disparate hardware sensors into a single, more reliable estimate. Just as with autonomous vehicles, the robotics industry is quickly moving toward smart autonomous robots for handling indoor tasks. Now the major question arises.
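As an illustrative sketch of the sensor-fusion idea described above (not taken from the article), a complementary filter is one of the simplest fusion techniques an indoor robot might use: it blends a gyroscope's fast but drifting angle estimate with an accelerometer's noisy but drift-free tilt reading.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse two disparate sensors into one angle estimate.

    The gyro term (high-frequency) integrates angular rate over dt;
    the accelerometer term (low-frequency) anchors the estimate so it
    cannot drift. alpha sets how much we trust the gyro short-term.
    All angles are in degrees; rates in degrees per second.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated scenario: the robot tilts to 10 degrees and holds still,
# so the gyro reads ~0 deg/s while the accelerometer reads ~10 degrees.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
# The fused estimate converges toward the accelerometer's 10 degrees.
```

Real systems typically use a Kalman filter for the same job, but the complementary filter shows the core principle: each sensor compensates for the other's weakness.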
Artificial intelligence (AI) is a widely used term that conjures notions of fantasy, the future, or even threat. This is not surprising considering the multitude of movies which dramatise the role of artificial intelligence and what it may become. In reality, artificial intelligence is a branch of computer science which aims to "understand and build intelligent entities by automating human intellectual tasks". These processes have contributed to numerous technological advances across various industries. For example, it is now quite common to see articles about the latest AI development -- check out these robots which flip burgers!
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world's toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.

Pallab Maji is a Senior Solutions Architect – Deep Learning at NVIDIA, working with system integrators and cloud service providers. His research interest lies in the design and development of perception modules for autonomous systems, focusing mostly on computer vision, natural language processing, and machine learning.
According to a 2015 APQC study, 62% of accounts payable costs come from labor - and that figure doesn't account for the opportunity cost of time that could be better spent on innovation and strategic thinking. At SAP Concur, we have been using Machine Learning (ML) for several years to do things for our customers that could not be done any other way. With SAP Leonardo, we continue investing in the future of ML and AI with a set of innovative services that make everything from travel booking to expense auditing smarter, more automated, and easier for your employees. Download the white paper now, and learn more at www.concur.com.sg
In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
Finally, AI is ready for the mainstream. When your enterprise is handling transactions between 25 million sellers and 182 million buyers, supporting 1.5 billion listings, manual decision-making processes just won't cut it. Such is the case with eBay, the mega commerce site, which has been employing artificial intelligence for more than a decade. As Forbes contributor Bernard Marr points out, eBay employs AI across a broad range of functions, "in personalization, search, insights, discovery and its recommendation systems along with computer vision, translation, natural language processing and more." As the CTO of a massive operation with so much AI experience, Mazen Rawashdeh of eBay has plenty to say about the current state of enterprise AI.
This article is a summary of a three-hour discussion at Stanford University in September 2019 among the authors. It has been written with combined experiences at and with organizations such as Zilog, Altera, Xilinx, Achronix, Intel, IBM, Stanford, MIT, Berkeley, University of Wisconsin, the Technion, Fairchild, Bell Labs, Bigstream, Google, DIGITAL (DEC), SUN, Nokia, SRI, Hitachi, Silicom, Maxeler Technologies, VMware, Xerox PARC, Cisco, and many others. These organizations are not responsible for the content but may have inspired the authors in some ways to arrive at the colorful ride through FPGA space described here. Field-programmable gate arrays (FPGAs) have been hitting a nerve in the ASIC community since their inception. In the mid-1980s, Ross Freeman and his colleagues bought the technology from Zilog and started Xilinx, targeting the ASIC emulation and education markets.