Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. This article is part of Update or Die, a series from Future Tense about how businesses and other organizations keep up with technological change, and the cost of falling behind. Few passengers realize that in the airline industry, we exclusively train our pilots using simulators. When a new-hire pilot flies the real airplane for the first time, it's with paying customers in the back. To create one of our simulators, we hacksaw off the pointy end of a real airplane, put it on a 6-degree-of-freedom hexapod motion platform, and outfit it with video displays so pilots have something to look at out the front window.
An executive guide to the technology and market drivers behind the $135 billion robotics market. ZDNet talked to iRobot CEO and co-founder Colin Angle to discover the story of the Roomba's development and why people get so attached to their robot vacuum cleaners.

ZDNet: You originally started out while researching at MIT. How long were you there?

Angle: I started out there as an undergraduate. I was looking for a university that would feed my passion to build, and I stayed at MIT through graduate school.
By Mary Beth O'Leary

With the push of a button, months of hard work were about to be put to the test. Sixteen teams of engineers convened in a cavernous exhibit hall in Nagoya, Japan, for the 2017 Amazon Robotics Challenge. The robotic systems they built were tasked with removing items from bins and placing them into boxes. For graduate student Maria Bauza, who served as task-planning lead for the MIT-Princeton Team, the moment was particularly nerve-wracking. "It was super stressful when the competition started," recalls Bauza.
Machine Learning is enabling an unprecedented transformation in the software industry. New Machine Learning-powered predictive applications are performing jobs that were previously considered exclusive to highly skilled humans, and we are already witnessing a new wave of innovation that is changing the face of every sector of the economy. BigML is bringing the fourth edition of our Summer School in Machine Learning to Valencia: a two-day crash course ideal for business leaders, industry practitioners, advanced undergraduates, and graduate students seeking a quick, practical, hands-on introduction to Machine Learning for solving real-world problems.
An undergraduate student from China's Fudan University has showcased new artificial intelligence (AI) software that can turn ordinary photos of people into anime-style portraits using a Generative Adversarial Network (GAN) and deep learning. Yanghua Jin is attempting to create a computer program that learns from its own mistakes the longer it works. This is done using the GAN's two networks, the generator and the discriminator, according to SoraNews24. The generator is in charge of producing the anime picture that the software outputs. It studies attributes taken from anime images, such as hair and eye color, whether the hair is long or short, and whether the mouth is open or closed.
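The generator-versus-discriminator setup described above can be illustrated with a toy model. The sketch below is not Jin's image model; it is a minimal one-dimensional GAN in NumPy, where a two-parameter generator learns to mimic a Gaussian and finite-difference gradients stand in for backpropagation so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def G(z, theta_g):
    # Generator: maps noise z to a sample; params are (scale w, shift b)
    w, b = theta_g
    return w * z + b

def D(x, theta_d):
    # Discriminator: scores how "real" a sample looks, in (0, 1)
    a, c = theta_d
    return sigmoid(a * x + c)

def d_loss(theta_d, theta_g, x_real, z):
    # Discriminator wants D(real) -> 1 and D(fake) -> 0
    eps = 1e-8
    fake = G(z, theta_g)
    return -np.mean(np.log(D(x_real, theta_d) + eps)
                    + np.log(1.0 - D(fake, theta_d) + eps))

def g_loss(theta_g, theta_d, z):
    # Generator wants D(fake) -> 1 (non-saturating GAN loss)
    eps = 1e-8
    return -np.mean(np.log(D(G(z, theta_g), theta_d) + eps))

def grad(f, params, h=1e-5):
    # Central finite differences; fine for a 2-parameter toy model
    g = np.zeros_like(params)
    for i in range(len(params)):
        p, m = params.copy(), params.copy()
        p[i] += h
        m[i] -= h
        g[i] = (f(p) - f(m)) / (2 * h)
    return g

theta_g = np.array([1.0, 0.0])   # generator starts sampling N(0, 1)
theta_d = np.array([0.1, 0.0])
lr = 0.05

for step in range(2000):
    x_real = rng.normal(4.0, 0.5, size=64)   # "real" data: N(4, 0.5)
    z = rng.normal(size=64)
    theta_d -= lr * grad(lambda p: d_loss(p, theta_g, x_real, z), theta_d)
    theta_g -= lr * grad(lambda p: g_loss(p, theta_d, z), theta_g)

fakes = G(rng.normal(size=1000), theta_g)
print(round(float(fakes.mean()), 1))
```

After training, the generator's samples should drift from its starting distribution toward the "real" data around 4, which is the same adversarial pressure that, at image scale, pushes Jin's generator toward plausible anime faces.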
IMAGE: Rice University's PlinyCompute team includes (from left) Shangyu Luo, Sourav Sikdar, Jia Zou, Tania Lorido, Binhang Yuan, Jessica Yu, Chris Jermaine, Carlos Monroy, Dimitrije Jankov and Matt...

HOUSTON -- (June 11, 2018) -- Computer scientists from Rice University's DARPA-funded Pliny Project believe they have the answer for every stressed-out systems programmer who has struggled to implement complex objects and workflows on 'big data' platforms like Spark and thought: "Isn't there a better way?" Rice's PlinyCompute will be unveiled here Thursday at the 2018 ACM SIGMOD conference. In a peer-reviewed conference paper, the team describes PlinyCompute as "a system purely for developing high-performance, big data codes." Like Spark, PlinyCompute aims for ease of use and broad versatility, said Chris Jermaine, the Rice computer science professor leading the platform's development. Unlike Spark, PlinyCompute is designed to support the intense kinds of computation that have previously been possible only with supercomputers, or high-performance computers (HPC). "With machine learning, and especially deep learning, people have seen what complex analytics algorithms can do when they're applied to big data," Jermaine said.
This course provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of vision, sensory-motor control, learning, and memory. Specific topics that will be covered include representation of information by spiking neurons, processing of information in neural networks, and algorithms for adaptation and learning. We will make use of Matlab/Octave/Python demonstrations and exercises to gain a deeper understanding of concepts and methods introduced in the course. The course is primarily aimed at third- or fourth-year undergraduates and beginning graduate students, as well as professionals and distance learners interested in learning how the brain processes information.
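To give a flavor of the "representation of information by spiking neurons" topic and the Python-style exercises mentioned above, here is a minimal leaky integrate-and-fire neuron, a standard starting point in computational neuroscience (the parameter values below are illustrative, not taken from the course materials):

```python
import numpy as np

def lif_simulate(I, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, R=1e7):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R * I.

    I is a sequence of input currents (amps), one per time step dt.
    Returns the membrane-potential trace and a list of spike times.
    """
    v = v_rest
    vs, spikes = [], []
    for t, i_t in enumerate(I):
        # Forward-Euler integration of the membrane equation
        v += (-(v - v_rest) + R * i_t) * (dt / tau)
        if v >= v_thresh:
            spikes.append(t * dt)   # record a spike, then reset
            v = v_reset
        vs.append(v)
    return np.array(vs), spikes

# 200 ms of constant 2 nA input drives the neuron above threshold,
# so it fires periodically rather than settling at rest.
I = np.full(2000, 2e-9)
vs, spikes = lif_simulate(I)
print(len(spikes))
```

With these values the steady-state depolarization (R*I = 20 mV above rest) exceeds the 15 mV threshold gap, so the neuron spikes at a regular rate; halving the input current would leave it silent, which is exactly the kind of rate-coding behavior such courses analyze.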
The University of California, Berkeley has released a vast dataset used by engineers to develop self-driving car technologies. The academic institution's dataset, which can be downloaded here, is part of the university's DeepDrive project. The dataset contains over 100,000 video sequences recorded to represent different driving scenarios, including varied weather conditions, environments, and times of day. The video sequences, recorded in HD, also contain GPS locations, IMU data, and timestamps across 1,100 hours. UC Berkeley's BDD100K database can be used by engineers and developers of self-driving car technologies to train autonomous systems.
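A training pipeline typically begins by slicing such a dataset by its scene attributes, for example to balance driving conditions. The sketch below assumes a simplified JSON label layout in the spirit of BDD100K's annotations; the field names are illustrative rather than the exact published schema:

```python
import json
from collections import Counter

# Toy label records in the style of per-image driving annotations.
# "attributes" describes the scene; "labels" lists detected objects.
records = [
    {"name": "clip_0001.jpg",
     "attributes": {"weather": "rainy", "timeofday": "night"},
     "labels": [{"category": "car"}, {"category": "pedestrian"}]},
    {"name": "clip_0002.jpg",
     "attributes": {"weather": "clear", "timeofday": "daytime"},
     "labels": [{"category": "car"}, {"category": "traffic sign"}]},
]

# Round-trip through JSON, standing in for reading a label file
data = json.loads(json.dumps(records))

# Count object categories that appear in rainy scenes
counts = Counter(
    obj["category"]
    for rec in data
    if rec["attributes"]["weather"] == "rainy"
    for obj in rec["labels"]
)
print(dict(counts))
```

Filtering like this is how a team would, say, oversample night-time rain before training a detector, one of the use cases the scale and attribute variety of BDD100K is meant to support.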