Charles William "Charlie" Bachman, the "father of databases" who received the ACM A.M. Turing Award for 1973 for creating the first database management system, died June 13 at the age of 92. Born in Manhattan, KS, in 1924, Bachman earned his B.S. in mechanical engineering in 1948, as well as an M.S. in mechanical engineering from the University of Pennsylvania. He went to work for Dow Chemical in 1950, using mechanical punched-card computing devices to solve networks of simultaneous equations representing data from Dow plants. In 1957, Bachman became head of Dow's Data Processing Department, through which he became a member of SHARE Inc. and a founding member of the SHARE Data Processing Committee. In 1960, Bachman joined the General Electric (GE) Production Control Services Group in New York City, using a factory in Philadelphia to test designs for a system to automate factory planning, scheduling, operational control, and inventory control. The resulting MIACS was based on the ...
Our Techie Tuesdays protagonist of the week, Anima, has worked toward establishing a strong collaboration between academia and industry. Anima worked on solving the problem of tracking end-to-end service-level transactions. She wanted to design learning algorithms that can process data at scale and make efficient inferences about the underlying hidden information. When Anima joined UC Irvine as a faculty member, the big data revolution was just beginning.
If you have a desire to learn machine learning concepts and have some previous programming or Python experience, this course is perfect for you. Writing algorithms from scratch allows students to gain deeper insight into data processing, and as each machine learning app is created, explanations and comments are provided to help students understand why things are being done in certain ways. Each code walkthrough also shows the building process in real time. The course begins with an introduction to machine learning concepts, after which you'll build your first machine learning application.
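The course's actual exercises are not shown here, but a first from-scratch machine learning application of the kind described is often a k-nearest-neighbors classifier; a minimal sketch (toy data and labels are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort labeled examples ((features), label) by Euclidean distance to the query.
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy dataset: two well-separated clusters with labels "a" and "b".
train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
         ((5.0, 5.1), "b"), ((4.8, 5.3), "b")]
print(knn_predict(train, (1.1, 1.0)))  # prints "a": query sits in the "a" cluster
```

Writing even this much by hand forces the student to confront distance metrics and tie-breaking, which is exactly the kind of insight the course description promises.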
Researchers at the Hot Chips conference in Cupertino, California showed a Gated Recurrent Unit (GRU) model running on Intel's new Stratix 10 field-programmable gate array (FPGA) chip at a speed of 39.5 teraflops, without batching operations at all. Brainwave loads a trained machine learning model into the FPGA's memory, where it stays throughout the lifetime of the machine learning service. Google announced the second revision of its Tensor Processing Unit -- a dedicated chip for machine learning training and serving -- earlier this year. Right now, Brainwave supports trained models created using Microsoft's CNTK framework and Google's TensorFlow framework.
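The model class in the demo, a GRU, is compact enough to sketch. This is a generic single-step GRU cell in NumPy with random weights, purely to show what the hardware is computing, not Brainwave's implementation:

```python
import numpy as np

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the previous hidden state h
    to keep versus overwrite with newly computed content."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # blended new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.standard_normal(shape)
          for shape in [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
h = np.zeros(d_h)
for t in range(5):  # unroll over a short random input sequence
    h = gru_cell(rng.standard_normal(d_in), h, params)
print(h.shape)  # (3,)
```

The recurrence is inherently sequential, which is why serving it without batching (as in the demo) is hard for throughput-oriented chips and a good showcase for an FPGA.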
Last week, Qualcomm announced that it had acquired Netherlands-based machine learning startup Scyfer for an undisclosed amount. While Qualcomm has been working on AI for about a decade, the company's more recent efforts have centered on deploying artificial intelligence technology at the device level, in smartphones and cars. While most large companies rely on the cloud for AI processing and machine learning, leveraging large data centers that run arrays of graphics chips, Qualcomm notes that it is focusing on on-device solutions in order to enhance reliability, cut latency and bandwidth usage, and improve privacy protection. Scyfer develops AI solutions for companies on a contract basis, catering to industries including healthcare, manufacturing, automation, and finance.
After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Meanwhile, we're continuing to make foundational advances toward human-level artificial general intelligence (AGI), also known as strong AI. An AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or… reprogramming itself.
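The textbook definition of an agent — perceive, plan, act toward a goal — can be made concrete in a few lines. A deliberately tiny sketch (the one-dimensional "world" and step policy are invented for illustration):

```python
def run_agent(world, goal, policy, max_steps=20):
    """Minimal perceive-plan-act loop: the agent observes the current state,
    chooses an action toward its goal, and acts until the goal is reached."""
    state = world
    for _ in range(max_steps):
        if state == goal:          # perceive: goal reached?
            return state
        action = policy(state, goal)  # plan: pick an action
        state = state + action        # act: the world changes
    return state

# Toy 1-D world: the agent walks its position toward a goal coordinate.
step_toward = lambda s, g: 1 if g > s else -1
print(run_agent(world=0, goal=5, policy=step_toward))  # prints 5
```

Everything from a thermostat to an AGI fits this loop; the difference lies entirely in how rich the perception, planning, and action spaces are.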
A similar story is becoming apparent within the marketing industry, as the advancement of technology has evolved consumer expectations to a point where marketers are challenged to keep up the pace amidst all the complexity. Whether we are referring to rules-based automation, natural language processing, machine learning or AI, the point is we need help managing some hard tasks. Marketers should focus on the value we're trying to realise: Delivering more relevant experiences for customers, improving business outcomes because of that relevance, and doing it efficiently to get more from our budgets. Machine learning and smart automation are beginning to prove value in an increasing number of areas including optimisation, personalisation, customer segmentation, and contextual intelligence.
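Of the areas listed, customer segmentation is the easiest to make concrete: it is commonly done by clustering customers on behavioral features. A from-scratch k-means sketch with invented customer data (real marketing stacks would use far richer features and a library implementation):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: alternately assign points to the nearest centroid
    and move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical customer features: (monthly spend, visits per month).
customers = [(10, 1), (12, 2), (11, 1), (80, 9), (85, 10), (78, 8)]
centroids, clusters = kmeans(customers, k=2)
print(len(centroids))  # prints 2: one centroid per segment
```

Each resulting segment (e.g. low-spend occasional visitors vs. high-spend regulars) can then be targeted with its own messaging, which is the "relevance" the passage describes.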
First, we have defined highly customized, narrow-precision data types that increase performance without real losses in model accuracy. In addition, Project Brainwave incorporates a software stack designed to support the wide range of popular deep learning frameworks. Companies and researchers building DNN accelerators often show performance demos using convolutional neural networks (CNNs). Running on Stratix 10, Project Brainwave thus achieves unprecedented levels of demonstrated real-time AI performance on extremely challenging models.
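Brainwave's narrow-precision formats (its ms-fp variants) are custom floating-point types and are not specified here; to illustrate the general idea of trading precision for throughput, here is a plain fixed-point int8 quantize/dequantize sketch, which is an assumption-laden stand-in rather than Brainwave's actual format:

```python
def quantize(x, bits=8):
    """Map a float in [-1, 1] onto a signed integer grid with `bits` bits,
    the kind of narrow representation accelerators exploit for throughput."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 for int8
    return max(-levels, min(levels, round(x * levels)))

def dequantize(q, bits=8):
    """Map the integer code back to an approximate float."""
    return q / (2 ** (bits - 1) - 1)

x = 0.7371
q = quantize(x)                                  # 94
print(abs(dequantize(q) - x) < 0.005)            # prints True: error is tiny
```

The accuracy claim in the passage rests on exactly this observation: for many trained models, such small per-value errors have negligible effect on end-to-end predictions.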
I've had the opportunity to work with the Space Robotics Lab on a team of six students (three from Sapienza University of Rome like me, two from Hong Kong University, and one from Georgia University), to build a mobile robot that could be remotely controlled with a video stream of what the robot sees, but that also had autonomous obstacle avoidance and navigation, and other features that were up to us. Implementing autonomous navigation, obstacle avoidance, and landmark detection and following was considerably harder. Based on those sensors' readings, I implemented an obstacle avoidance algorithm that performed evasion maneuvers in the presence of obstacles; otherwise, the robot still wandered randomly. But since my major focuses on both Robotics and Artificial Intelligence, I decided to expand the software architecture by integrating more advanced deep learning vision models and natural language processing, to give the robot commands in English and make it understand more of its environment.
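The post doesn't show the evasion logic, but reactive obstacle avoidance of this kind usually reduces to thresholding range readings and steering toward open space. A hypothetical sketch assuming three range sensors (left, front, right) reporting distances in centimeters, with invented thresholds:

```python
def avoid_obstacles(distances, safe_cm=40):
    """Pick a steering command from three range readings (left, front, right).
    Hypothetical sensor layout; the safety threshold is illustrative."""
    left, front, right = distances
    if front > safe_cm:
        return "forward"               # path ahead is clear
    # Front blocked: evade toward whichever side reads more open space.
    return "turn_left" if left > right else "turn_right"

print(avoid_obstacles((90, 25, 60)))   # prints "turn_left": front blocked, left most open
```

A rule this simple explains the "wandered randomly" behavior too: with no goal, the robot just drives until something trips the threshold, which is why the author layered navigation and vision models on top.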
Spark's unique use case is that it combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualization to let data scientists tackle the complexities that come with raw unstructured data sets. Spark embraces this approach and has the vision to make the transition from working on a single machine to working on a cluster, something that makes data science tasks a lot more agile. Then, you will get acquainted with Spark's machine learning algorithms and different machine learning techniques. His typical day includes building efficient processing with advanced machine learning algorithms, easy SQL, streaming, and graph analytics.
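The single-machine-to-cluster transition works because Spark's RDD API generalizes familiar functional operations. As a rough single-machine analogy using only Python builtins (not PySpark; the data is invented), the same map/filter/reduce shape that Spark distributes across a cluster looks like this:

```python
from functools import reduce

# A tiny "dataset" of raw lines, some malformed, as might arrive from a log.
lines = ["3 ok", "bad", "5 ok", "2 ok"]

# The pipeline shape Spark parallelizes: transform, drop bad records, aggregate.
records = map(lambda s: s.split(), lines)
valid = filter(lambda parts: len(parts) == 2 and parts[0].isdigit(), records)
total = reduce(lambda acc, parts: acc + int(parts[0]), valid, 0)
print(total)  # prints 10 (3 + 5 + 2)
```

In Spark the same chain would be written against an RDD or DataFrame, and the framework, not the programmer, decides how to partition the work across machines.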