American industry is in the midst of another revolution. This one is taking us to a place where decisions of many kinds, from when you should go in for a coronary bypass to where your car should turn left, will no longer be made entirely by us; they will be guided by artificial intelligence. That's good news, because artificial intelligence (AI) holds great promise for improving the health and welfare of much of the planet. But for society to take full advantage of the power of AI, algorithmic outcomes must be fair, and the application of those outcomes must be ethical. So far, efforts to cultivate algorithmic fairness lag far behind the enthusiasm to adopt the technology.
Woodie Flowers SM '68, MEng '71, PhD '73, the Pappalardo Professor Emeritus of Mechanical Engineering, passed away on Oct. 11 at the age of 75. Flowers' passion for design and his infectious kindness have impacted countless engineering students across the world. Flowers was instrumental in shaping MIT's hands-on approach to engineering design education, first developing teaching methods and learning opportunities that culminated in a design competition for class 2.70, now called 2.007 (Design and Manufacturing I). This annual MIT event, which has now been held for nearly five decades, has impacted generations of students and has been emulated at universities around the world. Flowers expanded this concept to high school and elementary school students, working to help found the world-wide FIRST Robotics Competition, which has introduced millions of children to science and engineering.
Dr. Ganapathi Pulipaka is an American data scientist and AI luminary who received a Top 50 Technology Leader award in recognition of his contributions to artificial intelligence, machine learning, and data science. He has been a machine learning and data science influencer on Twitter for the past five years and a contributor of thought-leadership and project-implementation articles on Medium, Data Driven Investor, LinkedIn, and GitHub. He is the best-selling author of two books on Amazon - "The Future of Data Science and Parallel Computing: A Road to Technological Singularity" (June 29, 2018) and "Big Data Appliances for In-Memory Computing: A Real-World Research Guide for Corporations to Tame and Wrangle Their Data" (Dec. 8, 2015) - as well as other eBooks that reached all-time-high rankings on BookAuthority, the world's largest book-ratings authority (featured on Forbes). He has also written some 400 research papers as part of postdoctoral and PhD academic research programs. He has been featured in top-tier magazines, news outlets, and industry publications, and has spoken for multiple media distribution networks and top media station affiliates, including ABC, Fox News, NBC, Yahoo Finance, MarketWatch, The CW, VentureBeat, MirrorReview, CIOReview, SAP, Erie News Now, USA Today, Double T 97.3 (Lubbock radio), 100.7 KFM BFM San Diego, KITV, Telemundo Lubbock 46, AZCentral, Insights Success, NewsOk, the Pittsburgh Post-Gazette, and Ask.
Posted By C. M. Rubin on Oct 9, 2019 "We added 'Artificial Intelligence' to 'Robotics & STEM' this year because it is an important and timely topic for young people to learn about." Prior to joining the Girls of Steel Robotics Program at Carnegie Mellon University's (CMU) Field Robotics Center, Theresa Richards was a science teacher in Pittsburgh, where she created an award-winning lesson integrating robotics into a Human Anatomy and Physiology course. The problem her organization is trying to solve is the demand for more people in STEM, particularly women. A December 2018 report on Pittsburgh shows there are 80,000 STEM jobs currently available there. "We believe that building robots builds confidence in STEM," says Richards.
If you've been thinking about building your own deep learning computer for a while but haven't quite gotten around to it, here's another reminder. Not only is it cheaper to do so, but the resulting build can also be faster at training neural networks than renting GPUs on cloud platforms. When you start trying small side projects like, say, building little autonomous drones or crafting a bot to spit out random snippets of poetry, you begin to realise how much compute power is really needed to get interesting results. So you can either fork out money to rent hardware via cloud services like AWS or Google Cloud Platform, or build your own server. Jeff Chen, an AI engineer and entrepreneur, drew up a handy shopping list of all the different parts needed to craft your own deep learning rig.
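The build-versus-rent claim comes down to simple break-even arithmetic. A minimal sketch, where both the build cost and the hourly cloud rate are illustrative assumptions (not figures from Jeff Chen's parts list or any provider's actual pricing):

```python
# Break-even sketch: all prices are assumed for illustration only.
build_cost = 3000.0   # assumed one-off cost of a single-GPU rig (USD)
cloud_rate = 3.00     # assumed hourly rate for a comparable cloud GPU (USD/hr)

breakeven_hours = build_cost / cloud_rate
print(f"Break-even after ~{breakeven_hours:.0f} GPU-hours of training")
```

Under these assumed numbers, a rig pays for itself after about 1,000 GPU-hours; at roughly 20 training hours a week, that is under a year of regular use.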
Artificial Intelligence (alternate unit) was written and developed by Beverly Clarke. She is the author of the book "Computer Science Teacher – insight into the computing classroom." Additionally, she is an education consultant and former teacher. In writing this unit, the following are acknowledged for their contributions in proofreading, checking for technical accuracy, testing activities in the classroom, filming, acting as sounding boards, and being committed to seeing an AI curriculum available for high school students – Mike Mendelson (NVIDIA), James McClung (formerly of NVIDIA), Joanna Goode (University of Oregon), Alison Lowndes (NVIDIA), Rosie Lane (South Wilts Grammar School for Girls), Peter McOwan (Queen Mary University of London), Paul Curzon (Queen Mary University of London), Liz Austin (NVIDIA), Gemma Bond (Screen Boo Productions) and Neil Rickus (University of Hertfordshire). The Morals and Ethics supporting cards were sampled from material by Andrew Csizmadia (Newman University).
Butterfly Network is reinventing medical imaging and championing a new era of healthcare by creating the first ever pocket-sized, whole-body ultrasound device – the Butterfly iQ. This breakthrough technology has reduced the cost of the traditional ultrasound system by miniaturizing it onto a single semiconductor silicon chip. Our mission is to democratize healthcare by making medical imaging accessible to everyone around the world. Since inception, Butterfly has raised over $375 million. The iQ is FDA-cleared and is being sold in hospitals and clinics around the globe.
Immersive simulations are increasingly used for teaching and training in many societally important arenas, including healthcare, disaster response and science education. The interactions of participants in such settings lead to a complex array of emergent outcomes that present challenges for analysis. This paper studies a central element of such an analysis, namely the interpretability of models for inferring structure in time series data. This problem is explored in the context of modeling student interactions in an immersive ecological-system simulation. Unsupervised machine learning is applied to data on system dynamics with the aim of helping teachers determine the effects of students' actions on these dynamics. We address the question of choosing the optimal machine learning model, considering both statistical information criteria and interpretability quality. The results of a user study show that the models that are best understood by people are not those that optimize information-theoretic criteria. In addition, a model using a fully Bayesian approach performed well on both statistical measures and on human-subject tests of interpretability, making it a good candidate for automated model selection that does not require human-in-the-loop evaluation. The results from this paper are already being used in the classroom and can inform the design of interpretable models for a broad range of socially relevant domains.

1 Introduction

There is increasing evidence of the value of multi-person embodied simulations for engaging learners in a variety of applications, such as healthcare, disaster response and education (Alinier et al. 2014; Amir and Gal 2013).
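The "statistical information criteria" side of the model-selection problem can be illustrated with a small sketch. The paper's actual models and simulation data are not given here, so this uses synthetic data and scikit-learn's Gaussian mixtures scored by BIC as a stand-in for the general recipe: fit several candidate models, score each with an information criterion, and pick the lowest score (which, per the paper, is not necessarily the model people find most interpretable):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for system-dynamics data: two well-separated latent regimes.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# Score candidate models of increasing complexity by BIC (lower is better).
candidates = range(1, 6)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in candidates}
best_k = min(bics, key=bics.get)
print("BIC-optimal number of components:", best_k)
```

A human-in-the-loop study like the paper's would then compare this statistically chosen model against the ones participants actually find easiest to understand.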
You already know what Keras is and how to build a deep learning model with it. Instead of using TensorFlow directly, you use Keras to build the model. But did you know that you can also use the tools included in TensorFlow through Keras? One such tool is TensorBoard, which lets you visualize your model's structure and monitor its training. In this tutorial, you will learn how to attach TensorBoard callbacks through Keras and use the resulting analytics to improve your deep learning model.
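The basic pattern can be sketched as follows. The model, data, and log-directory naming here are illustrative choices, not from the tutorial: you create a `tf.keras.callbacks.TensorBoard` callback pointing at a log directory and pass it to `fit()`:

```python
import datetime
import numpy as np
import tensorflow as tf

# A small illustrative model (the tutorial does not prescribe one).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Timestamped log directory so successive runs show up separately in TensorBoard.
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

# Dummy training data; pass the callback to fit() to record logs.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
model.fit(x, y, epochs=2, callbacks=[tb_callback], verbose=0)
```

After training, launch the dashboard with `tensorboard --logdir logs/fit` and open the printed URL in a browser to inspect the loss curves, metrics, and model graph.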