Today, you'll find a deal on Roomba, a discounted Hydro Flask water bottle and savings on beauty products at Ulta. Get a jumpstart on spring cleaning with discounted iRobot Roomba vacuums. The Roomba i3 is $100 off at $299.99 and the Roomba 675 is $80 off at $199.99; plus, iRobot's automatic mopping and sweeping device, the Braava 380t, is discounted to $199.99. Whether you're looking for some new fashionable spring layers or discounted winter gear, United by Blue has you covered with its end-of-season sale. Now through the end of January, you can save up to 60% sitewide; plus, you can get an extra 50% off sale items with code BYEWINTER.
The US Army just took a giant step toward developing killer robots that can see and identify faces in the dark. DEVCOM, the US Army's corporate research department, last week published a pre-print paper documenting the development of an image database for training AI to perform facial recognition using thermal images. Why this matters: Robots can use night-vision optics to effectively see in the dark, but to date there's been no method by which they can be trained to identify surveillance targets using only thermal imagery. This database, made up of hundreds of thousands of paired images of people, each regular-light picture matched with its corresponding thermal image, aims to change that. How it works: Much like any other facial recognition system, an AI would be trained to categorize images using a specific number of parameters.
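At a high level, the kind of cross-spectrum matching such a database would enable works by scoring a thermal "probe" against a gallery of visible-light identities and picking the best match. The sketch below is purely illustrative, not the Army's system: the embeddings are synthetic vectors, the `identify` helper is hypothetical, and the added noise merely stands in for the thermal-to-visible domain gap a trained network would have to bridge.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(probe, gallery):
    """Return the index of the gallery embedding that best matches the probe."""
    scores = [cosine(probe, g) for g in gallery]
    return int(np.argmax(scores))

# Three enrolled identities, each represented by a 128-dim embedding.
gallery = [rng.normal(size=128) for _ in range(3)]

# A thermal probe of identity 1: the true embedding plus noise standing in
# for the gap between thermal and visible imagery.
probe = gallery[1] + 0.3 * rng.normal(size=128)

print(identify(probe, gallery))  # prints 1: the probe matches identity 1
```

With paired visible/thermal training data, a real system would learn embeddings in which this nearest-neighbor match survives the spectrum change.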
General Motors has inched slightly closer to fulfilling its quest to put the world in flying cars. As part of the 2021 virtual Consumer Electronics Show on Tuesday, GM showed renderings and animation of what it dubbed its Cadillac Halo concepts: the Cadillac Personal Autonomous Vehicle, which is like a fancy self-driving taxi, and the Cadillac Vertical Take-off and Landing (VTOL) vehicle, a sleek and futuristic drone-like flying car. "The VTOL is GM's first foray into air travel," said Michael Simcoe, GM's vice president of global design. Advances in electric vehicles and other technology are now "making personal air travel possible," he said. Simcoe's presentation came in the middle of GM CEO Mary Barra's keynote address to CES, the annual exhibition normally held in Las Vegas that features the latest technology.
Scientists are always hunting for materials that have superior properties. They therefore continually synthesize, characterize and measure the properties of new materials using a range of experimental techniques. Computational modelling is also used to estimate the properties of materials. However, there is usually a trade-off between the cost of the experiments (or simulations) and the accuracy of the measurements (or estimates), which has limited the number of materials that can be tested rigorously. Writing in Nature Computational Science, Chen et al.¹ report a machine-learning approach that combines data from multiple sources of measurements and simulations, all of which have different levels of approximation, to learn and predict materials' properties. Their method allows the construction of a more general and accurate model of such properties than was previously possible, thereby facilitating the screening of promising material candidates.
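One common multi-fidelity strategy, sketched below on invented data, is delta learning: fit a surrogate to abundant but approximate low-fidelity values, then learn a correction from a handful of expensive high-fidelity measurements. This is a generic illustration of the idea, not Chen et al.'s actual method; the property curve, bias, and polynomial surrogates are all assumptions.

```python
import numpy as np

x = np.linspace(0, 1, 50)
high = np.sin(2 * np.pi * x)   # "true" property (expensive to measure)
low = high + 0.4 * x - 0.2     # cheap simulation: right shape, systematic bias

# Step 1: a surrogate trained only on the plentiful low-fidelity values.
low_fit = np.poly1d(np.polyfit(x, low, deg=5))

# Step 2: learn the low-to-high correction from just 6 accurate measurements.
idx = np.linspace(0, 49, 6).astype(int)
delta_fit = np.poly1d(np.polyfit(x[idx], high[idx] - low_fit(x[idx]), deg=1))

# The combined model: cheap surrogate plus learned correction.
pred = low_fit(x) + delta_fit(x)
err_low = np.abs(low - high).mean()     # error of trusting the cheap data alone
err_multi = np.abs(pred - high).mean()  # error of the corrected model

print(err_multi < err_low)  # prints True: the fused model is more accurate
```

The payoff is exactly the trade-off the article describes: six expensive measurements plus fifty cheap ones yield a better model than either source alone would support.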
Seven little Bluebots gently swim around a darkened tank in a Harvard University lab, spying on one another with great big eyes made of cameras. They're on the lookout for the two glowing blue LEDs fixed to the backs and bellies of their comrades, allowing the machines to lock on to one another and form schools, a complex emergent behavior arising from surprisingly simple algorithms. With very little prodding from their human engineers, the seven robots eventually arrange themselves in a swirling tornado, a common defensive maneuver among real-life fish called milling. Bluebot is the latest entry in a field known as swarm robotics, in which engineers try to get machines to, well, swarm. And not in a terrifying way, mind you: The quest is to get schools of Bluebots to swarm more and more like real fish, giving roboticists insights into how to improve everything from self-driving cars to the robots that may one day prepare Mars for human habitation.
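The "surprisingly simple algorithms" behind schooling typically reduce to local rules such as cohesion (steer toward your neighbors' centroid) and alignment (match their average heading). The toy below is a generic sketch of those rules, not the Bluebot controllers; the damping constant and rule weights are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

N, STEPS = 7, 200
pos = rng.uniform(-1, 1, size=(N, 2))  # seven robots scattered in the tank
vel = rng.normal(size=(N, 2))          # random initial headings

def spread(p):
    # Mean distance of the school from its own centroid.
    return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()

start = spread(pos)
for _ in range(STEPS):
    centroid = pos.mean(axis=0)
    # Cohesion: a damped pull toward the group's centroid.
    # Alignment: nudge each velocity toward the group's average velocity.
    vel = 0.9 * vel + 0.05 * (centroid - pos) + 0.05 * (vel.mean(axis=0) - vel)
    pos += vel

print(spread(pos) < start)  # prints True: the agents have pulled into a school
```

No agent knows the global plan; the tightening school, like the Bluebots' milling tornado, emerges purely from each agent reacting to its neighbors.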
In December, the University of Texas at Austin's computer science department announced that it would stop using a machine-learning system to evaluate applicants for its Ph.D. program due to concerns that encoded bias may exacerbate existing inequities in the program and in the field in general. This move toward more inclusive admissions practices is a rare (and welcome) exception to a worrying trend in education: Colleges, standardized test providers, consulting companies, and other educational service providers are increasingly adopting predatory, discriminatory, and outright exclusionary student data practices. Student data has long been used as a college recruiting and admissions tool. In 1972, College Board, the company that owns the PSAT, the SAT, and the AP Exams, created its Student Search Service and began licensing student names and data profiles to colleges (hence the college catalogs that fill the mailboxes of high school students who have taken the exams). Today, College Board licenses millions of student data profiles every year for 47 cents per examinee.
A growing number of IT workers are worried about what artificial intelligence (AI) and machine learning technologies mean for their future. Research from cybersecurity firm Trend Micro claims that nearly half of IT leaders think AI will render their roles redundant over the coming decade. Meanwhile, a 2020 report by security management platform Exabeam found that 53% of cybersecurity professionals aged 45 or under view AI and machine learning as threats to job security. Are IT professionals right to be concerned about the rise of AI technology, and how can they stay relevant in the years to come? There are many different reasons IT professionals are worried about the rise and advancement of AI in the technology workplace, according to Exabeam security specialist Sam Humphries.
Scientists from the Max Planck Institute of Psychiatry, led by Nikolaos Koutsouleris, combined psychiatric assessments with machine-learning models that analyze clinical and biological data. Although psychiatrists make very accurate predictions about positive disease outcomes, they might underestimate the frequency of adverse cases that lead to relapses. The algorithmic pattern recognition helps physicians to better predict the course of disease. The results of the study show that it is the combination of artificial and human intelligence that optimizes the prediction of mental illness. "This algorithm enables us to improve the prevention of psychosis, especially in young patients at high risk or with emerging depression, and to intervene in a more targeted and well-timed manner," explains Koutsouleris.
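At its simplest, combining human and artificial intelligence can mean blending a clinician's risk estimate with a model's. The sketch below illustrates that general idea with invented probabilities and a hypothetical `combined_risk` helper; it is one plausible reading, not the study's actual algorithm.

```python
def combined_risk(clinician_p, model_p, weight=0.5):
    """Blend a clinician's estimated relapse probability with a model's."""
    return weight * clinician_p + (1 - weight) * model_p

# A clinician who underestimates the chance of an adverse outcome (0.2) is
# nudged upward by a pattern-recognition model trained on past cases (0.6).
print(combined_risk(0.2, 0.6))  # prints 0.4
```

In practice the weight would itself be tuned on held-out cases, so the ensemble leans on whichever source has proved more reliable for a given kind of patient.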
In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (Brock was arrested Sunday and charged with two counts.) Even as they stormed the Capitol, many rioters stopped to pose for photos and give excited interviews on livestream. Each photo uploaded, message posted, and stream shared created a torrent of data for police, researchers, activists, and journalists to archive and analyze.