Some of the biggest companies in the world are spending billions in the race to develop self-driving vehicles that can go anywhere. Meanwhile, Optimus Ride, a startup out of MIT, is already helping people get around by taking a different approach. The company's autonomous vehicles only drive in areas it comprehensively maps, or geofences. With today's technology, self-driving vehicles can safely move through these areas at about 25 miles per hour. Optimus Ride has already deployed its autonomous transportation systems in the Seaport area of Boston, in a mixed-use development in South Weymouth, Massachusetts, and in the Brooklyn Navy Yard, a 300-acre industrial park.
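Geofencing here just means restricting operation to a pre-mapped region. A minimal sketch of that containment check, using a standard ray-casting point-in-polygon test on made-up map coordinates (illustrative only, not Optimus Ride's actual software):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from `point` crosses; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the ray from (x, y) cross the edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical geofence around a small service area (a unit square).
geofence = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

print(point_in_polygon((0.5, 0.5), geofence))  # True: inside the fence
print(point_in_polygon((1.5, 0.5), geofence))  # False: outside
```

A production system would work with surveyed map data and GPS fixes rather than toy coordinates, but the operating principle is the same: refuse to drive anywhere outside the mapped region.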
Chowbotics is packing up Sally the salad-making robot and sending it off to college. Well, many colleges, actually: the food-robotics startup is set to announce a bigger push into the higher education market next week. Chowbotics told us that this school year, students at multiple colleges and universities in the U.S. will be able to buy salads and breakfast bowls from Sally the robot. Those schools include: Case Western Reserve University in Cleveland, OH; College of the Holy Cross in Worcester, MA; the University of Guelph in Ontario, Canada; Elmira College in Elmira, NY; the University of Memphis in Memphis, TN; and Wichita State University in Wichita, KS. These schools join Marshall University in Huntington, WV, which installed Sally in 2018.
Deep learning is a subset of machine learning, the branch of artificial intelligence that configures computers to perform tasks through experience. It has become increasingly popular in the past few years, thanks to abundant data and increased computing power. It's the main technology behind many of the applications we use every day, including online language translation and automated face-tagging in social media. The technique has also proved useful in healthcare: earlier this year, computer scientists at the Massachusetts Institute of Technology (MIT) used deep learning to create a new computer program for detecting breast cancer. Classic models required engineers to manually define the rules and logic for detecting cancer; for the new model, the scientists instead gave a deep-learning algorithm 90,000 full-resolution mammogram scans from 60,000 patients and let it find the common patterns between scans of patients who went on to develop breast cancer and those who didn't.
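The shift described above, from hand-written detection rules to rules derived from labeled examples, can be illustrated with a toy supervised-learning sketch. This is deliberately simple (a nearest-centroid classifier on synthetic 2-D points, not a deep network, and nothing like the MIT mammography model), but it shows the core idea: the decision rule comes from the data, not from an engineer.

```python
import random

random.seed(0)

# Synthetic labeled "cases": two summary features per case, with the
# positive class shifted so there is a real pattern to discover.
neg = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(500)]
pos = [(random.gauss(2.0, 1.0), random.gauss(2.0, 1.0)) for _ in range(500)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Training": summarize each class from its labeled examples.
c_neg, c_pos = centroid(neg), centroid(pos)

def predict(p):
    # Classify by whichever learned centroid is closer. The "rule"
    # was derived from examples, not written by hand.
    d_neg = (p[0] - c_neg[0]) ** 2 + (p[1] - c_neg[1]) ** 2
    d_pos = (p[0] - c_pos[0]) ** 2 + (p[1] - c_pos[1]) ** 2
    return int(d_pos < d_neg)

acc = (sum(predict(p) == 0 for p in neg)
       + sum(predict(p) == 1 for p in pos)) / 1000
print(round(acc, 2))  # well above chance on this separable toy data
```

Deep learning applies the same learn-from-examples principle, but with millions of parameters and raw inputs like full-resolution images instead of two hand-picked features.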
Robust machine learning relies on access to data that can be used with standardized frameworks in important tasks and the ability to develop models whose performance can be reasonably reproduced. In machine learning for healthcare, the community faces reproducibility challenges due to a lack of publicly accessible data and a lack of standardized data processing frameworks. We present MIMIC-Extract, an open-source pipeline for transforming raw electronic health record (EHR) data for critical care patients contained in the publicly available MIMIC-III database into dataframes that are directly usable in common machine learning pipelines. MIMIC-Extract addresses three primary challenges in making complex health records data accessible to the broader machine learning community. First, it provides standardized data processing functions, including unit conversion, outlier detection, and aggregating semantically equivalent features, thus accounting for duplication and reducing missingness. Second, it preserves the time series nature of clinical data and can be easily integrated into clinically actionable prediction tasks in machine learning for health. Finally, it is highly extensible so that other researchers with related questions can easily use the same pipeline. We demonstrate the utility of this pipeline by showcasing several benchmark tasks and baseline results. These authors contributed equally and should be considered co-first authors.
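The processing steps named in the abstract (unit conversion, outlier detection, and aggregation of semantically equivalent features into a time-series dataframe) can be sketched with pandas on made-up vitals data. The column names, item codes, and thresholds below are illustrative assumptions, not MIMIC-Extract's actual schema or logic:

```python
import pandas as pd

# Illustrative raw chart events (not the MIMIC-III schema): one row per
# measurement, with inconsistent units and an implausible outlier.
raw = pd.DataFrame({
    "patient_id": [1, 1, 1, 1, 2, 2],
    "hour":       [0, 0, 1, 1, 0, 1],
    "item":       ["temp_f", "hr", "temp_c", "hr", "temp_c", "hr"],
    "value":      [98.6, 72.0, 37.5, 900.0, 36.8, 80.0],
})

# 1. Unit conversion: normalize Fahrenheit temperatures to Celsius, then
#    merge the semantically equivalent items under one feature name.
is_f = raw["item"] == "temp_f"
raw.loc[is_f, "value"] = (raw.loc[is_f, "value"] - 32) * 5 / 9
raw["item"] = raw["item"].replace({"temp_f": "temp", "temp_c": "temp"})

# 2. Outlier detection: mask physiologically implausible values (a heart
#    rate of 900 bpm) so they become missing rather than wrong.
raw.loc[(raw["item"] == "hr") & (raw["value"] > 300), "value"] = None

# 3. Aggregate to an hourly patient-by-feature time series -- the kind of
#    dataframe a downstream model can consume directly.
wide = raw.pivot_table(index=["patient_id", "hour"],
                       columns="item", values="value", aggfunc="mean")
print(wide)
```

The real pipeline does this at scale against the MIMIC-III tables with clinically vetted unit maps and outlier thresholds; this sketch only shows the shape of the transformation from event rows to a model-ready frame.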
Should your Roomba need a W-2? Probably not, but it's an amusing thought when debating the more serious topic of whether or not a robot should have to pay taxes -- and how to do it. During the June MIT Technology Review EmTech Next event, two experts argued both sides of the question before an audience at the MIT Media Lab in Cambridge, Massachusetts. Ryan Abbott, professor of law and health sciences at the University of Surrey, argued in favor of taxing robots, while Ryan Avent, economics columnist for The Economist, argued against the idea. Both agreed there needs to be a shift in tax burden from labor to capital. Avent, however, carried the most audience votes by the end of the debate.
The healthcare sector has long been an early adopter of, and has benefited greatly from, technological advances. These days, machine learning (a subset of artificial intelligence) plays a key role in many health-related realms, including the development of new medical procedures, the handling of patient data and records, and the treatment of chronic diseases. As computer scientist Sebastian Thrun told the New Yorker in a recent article titled "A.I. Versus M.D.": "Just as machines made human muscles a thousand times stronger, machines will make the human brain a thousand times more powerful." Despite warnings from some doctors that things are moving too fast, the rate of progress keeps increasing. And for many, that's as it should be. "AI is the future of healthcare," Fatima Paruk, CMO of Chicago-based Allscripts Analytics, said in 2017. She went on to explain how critical it would be in the ensuing few years and beyond: in the care management of prevalent chronic diseases; in the leveraging of "patient-centered health data with external influences such as pollution exposure, weather factors and economic factors to generate precision medicine solutions customized to individual characteristics"; and in the use of genetic information "within care management and precision medicine to uncover the best possible medical treatment plans." "AI will affect physicians and hospitals, as it will play a key role in clinical decision support, enabling earlier identification of disease, and tailored treatment plans to ensure optimal outcomes," Paruk explained. "It can also be used to demonstrate and educate patients on potential disease pathways and outcomes given different treatment options."
IBM and the Massachusetts Institute of Technology (MIT) have joined forces to establish an MIT-IBM Watson AI Lab in Cambridge that will pursue research in artificial intelligence (AI) with a focus on healthcare and cybersecurity, as well as on commercialising AI technologies born out of the lab. Touted as one of the largest university-industry AI collaborations and investments, the 10-year, $240 million initiative is expected to hire and bring together over 100 AI-focused scientists, professors, and students. In addition to IBM's plan to commercialise technologies developed within the lab, the pair will encourage MIT faculty and students to launch new companies that will focus on commercialising inventions and technologies that are developed at the lab. "The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet, today's AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives," said Dr John Kelly III, IBM senior vice president, Cognitive Solutions and Research.
The idea that a machine could exhibit the same level of intelligence as a human being has captivated scientists for decades. A.I. is not about building a robot, but developing a computer mind that can think like a human... that learns... that can even approach--and exceed--human levels of intelligence. Come join us in 2019.
The industry has largely settled on the notion of a data pipeline as a means of encapsulating the engineering work that goes into collecting, transforming, and preparing data for downstream advanced analytics and machine learning workloads. Now the next step forward is to automate that pipeline work, which is a cause that several DataOps vendors are rallying around. Data engineers are some of the most in-demand people in organizations that are leveraging big data. While data scientists (or machine learning engineers, as many of them are calling themselves nowadays) get most of the glory, it's the data engineers who do much of the hands-on-keyboard work that makes the magic of data science possible. Just as data science platforms have emerged to automate the most common data science tasks, we are also seeing new software tools emerging to handle much of the repetitive data pipeline work that is typically handled by data engineers.
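The collect-transform-prepare pattern the article describes can be sketched as a few composed stages with a small runner that automates the chaining, which is the repetitive glue work usually hand-written by data engineers. The stage names and record shapes here are generic illustrations, not any particular DataOps vendor's API:

```python
# Minimal sketch of a data pipeline as composable stages: each stage is a
# plain function, and the runner automates feeding one stage's output
# into the next.

def collect():
    # Stand-in for ingesting raw records from a source system.
    return [{"user": "a", "amount": "10"},
            {"user": "b", "amount": "oops"},
            {"user": "a", "amount": "5"}]

def transform(records):
    # Clean and type-cast, dropping rows that fail validation.
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except ValueError:
            continue  # malformed record, skipped
    return out

def prepare(records):
    # Aggregate into a model-ready feature: total amount per user.
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

def run_pipeline(stages):
    # The "automation": execute each stage in order, threading the data.
    data = None
    for stage in stages:
        data = stage(data) if data is not None else stage()
    return data

features = run_pipeline([collect, transform, prepare])
print(features)  # {'a': 15.0} -- the malformed "oops" record was dropped
```

Real pipeline tooling adds what this sketch omits, such as scheduling, retries, lineage tracking, and monitoring, but the underlying shape of the work is the same chain of collect, transform, and prepare steps.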