A small helicopter opened a new chapter of space exploration this morning when it lifted off the surface of Mars, marking humankind's first powered flight on another planet. The 19-inch-tall chopper called Ingenuity kicked up a little rusty red dust as it lifted about 10 feet off the ground, hovered in place, turned slightly, and slowly touched back down. The flight lasted only about 40 seconds, but it represents one of history's most audacious engineering feats. "A lot of people thought it was not possible to fly at Mars," says MiMi Aung, the project manager of Ingenuity at NASA's Jet Propulsion Laboratory (JPL). "There is so little air."
The European Commission will this week present its proposal on Artificial Intelligence (AI), seen as a step toward the new regulatory framework promised by Commission President Ursula von der Leyen in her State of the Union address, writes Marie-Françoise Gondard-Argenti. Marie-Françoise Gondard-Argenti is a member of the Employers' Group at the European Economic and Social Committee. It is clear that there is no government or company leader in Europe at the moment who does not support the development of a trustworthy and innovative AI ecosystem, one that promotes a human-centric approach and primarily serves people, increasing their well-being. There is no company in Europe that does not understand the need to leverage the EU market to spread the EU's approach to AI regulation globally. However, at the moment, the EU lags behind.
Spoken dialogue is the most natural way for people to interact with complex autonomous agents such as robots. Future Army operational environments will require technology that allows artificially intelligent agents to understand and carry out Soldiers' commands and to interact with them as teammates. Researchers from the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory and the University of Southern California's Institute for Creative Technologies, a Department of Defense-sponsored University Affiliated Research Center, created an approach to flexibly interpret and respond to Soldier intent derived from spoken dialogue with autonomous systems. This technology is currently the primary dialogue-processing component of the lab's Joint Understanding and Dialogue Interface, or JUDI, system, a prototype that enables bi-directional conversational interactions between Soldiers and autonomous systems. "We employed a statistical classification technique for enabling conversational AI using state-of-the-art natural language understanding and dialogue management technologies," said Army researcher Dr. Felix Gervits. "The statistical language classifier enables autonomous systems to interpret the intent of a Soldier by recognizing the purpose of the communication and performing actions to realize the underlying intent."
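The statistical intent classification Gervits describes can be illustrated with a minimal sketch. The intent labels, training utterances, and bag-of-words cosine matcher below are illustrative assumptions for exposition, not the actual JUDI classifier:

```python
from collections import Counter
import math

# Toy training data: utterances labeled with the intent they express.
# These intents and phrases are invented examples, not from JUDI.
TRAINING = [
    ("move to the door", "navigate"),
    ("go to the building ahead", "navigate"),
    ("drive forward ten meters", "navigate"),
    ("take a picture of the doorway", "photograph"),
    ("send me an image of that vehicle", "photograph"),
    ("stop right there", "halt"),
    ("hold your position", "halt"),
]

def bag_of_words(text):
    """Represent an utterance as word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify_intent(utterance):
    """Return the intent label of the most similar training utterance."""
    vec = bag_of_words(utterance)
    best_label, best_score = None, -1.0
    for text, label in TRAINING:
        score = cosine(vec, bag_of_words(text))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify_intent("go forward to the door"))  # prints "navigate"
```

A production system would train a statistical model over far more data and pair the classifier with a dialogue manager that decides which action realizes the recognized intent; this sketch only shows the mapping from free-form speech text to a discrete intent.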
Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration of potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. We believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform their design, conduct, and reporting; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit.
The potential uses include improving diagnostic accuracy,[1] more reliably predicting prognosis,[2] targeting treatments,[3] and increasing the operational efficiency of health systems.[4] Examples of potentially disruptive technology with early clinical promise include image based diagnostic applications of ML/AI (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians[5]) and natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.[2] Although we are only just …
Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study. The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS), demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries. "Brick kilns have proliferated across Bangladesh to supply the growing economy with construction materials, which makes it really hard for regulators to keep up with new kilns that are constructed," said co-lead author Nina Brooks, a postdoctoral associate at the University of Minnesota's Institute for Social Research and Data Innovation who did the research while a Ph.D. student at Stanford. While previous research has shown the potential of machine learning and satellite observations for environmental regulation, most studies have focused on wealthy countries with dependable data on industrial locations and activities. To explore the feasibility in developing countries, the Stanford-led research focused on Bangladesh, where government regulators struggle to locate highly polluting informal brick kilns, let alone enforce rules.
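The monitoring approach the study describes can be sketched schematically: tile a satellite scene, score each tile with a kiln classifier, and report the flagged locations to regulators. The stand-in scores and threshold below are illustrative assumptions; the actual study trained a deep learning model on satellite imagery:

```python
# Schematic of ML-plus-satellite monitoring: each tile of a scene gets a
# classifier score (probability it contains a brick kiln), and tiles above
# a threshold are flagged for inspection. Scores here are hand-written
# stand-ins for a real model's output.

def flag_kiln_sites(tile_scores, threshold=0.9):
    """tile_scores maps (row, col) grid positions to kiln probabilities.
    Returns the sorted list of tile positions at or above the threshold."""
    return sorted(loc for loc, p in tile_scores.items() if p >= threshold)

scores = {
    (0, 0): 0.12,  # farmland
    (0, 1): 0.95,  # likely kiln
    (1, 0): 0.97,  # likely kiln
    (1, 1): 0.40,  # village
}
print(flag_kiln_sites(scores))  # prints [(0, 1), (1, 0)]
```

The scalability the article highlights comes from this structure: once a classifier is trained, scoring every tile of a country's imagery is cheap compared with ground surveys.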
Police in Texas investigating a Tesla car crash in which two men died will serve search warrants on the company to ascertain whether the vehicle's autopilot mode was engaged at the time of the incident. However, Tesla's CEO, Elon Musk, has said the self-driving feature was not being used, based on an internal probe by the company. In the incident, two men, both in their 50s, were killed after their 2019 Tesla Model S crashed into a tree and caught fire. According to police reports, the car was travelling at high speed and failed to negotiate a curve in the road. Texas police noted that nobody was in the driver's seat at the time of impact, raising doubts about the involvement of the car's autopilot mode.
When Microsoft spends $19.7 billion on a company whose specialties included voice recognition and artificial intelligence (AI) as part of its health sector strategy, you know that AI in the medical field is here to stay. It only makes sense, then, that regulations regarding the technology would not be far behind. Thanks to a leaked document first reported by Politico, we now have our first look at what such regulations might look like in the European Union. The regulation document largely concerns "high-risk" usages of AI. That's not surprising, as the European Commission originally published a whitepaper in February 2020 outlining ideas for regulating such uses of the technology.
It's a cold winter day in Detroit, but the sun is shining bright. Robert Williams decided to spend some quality time playing on his house's front lawn with his two daughters. Suddenly, police officers appeared from nowhere and brought a perfect family day to an abrupt halt. Robert was ripped from the arms of his crying daughters without an explanation, and cold handcuffs now gripped his wrists. The police took him away in no time. His family were left shaken, in disbelief at the scene which had unfolded in front of their eyes. What followed for Robert were 30 long hours in police custody.
More than 70 advocacy groups have called on the Department of Homeland Security to stop using Clearview AI's facial recognition software. In a letter addressed to DHS Secretary Alejandro Mayorkas and Susan Rice, the director of the White House's Domestic Policy Council, the American Civil Liberties Union, Electronic Frontier Foundation, OpenMedia and other organizations argue "the use of Clearview AI by federal immigration authorities has not been subject to sufficient oversight or transparency." The letter points to a recent BuzzFeed News report that found employees from 1,803 government bodies, including police departments and public schools, have been using the software, in many cases without their bosses knowing about it. The company has given out free trials to individual employees at those organizations, hoping that they'll advocate for their agency to sign up for it. Besides the lack of oversight, the letter points to issues like racial bias in facial recognition software and the fact that Clearview built its database by scraping websites like Facebook, Twitter and YouTube.
A small robotic helicopter named Ingenuity made space exploration history on Monday when it lifted off the surface of Mars and hovered in the wispy air of the red planet. It was the first machine from Earth ever to fly like an airplane or a helicopter on another world. The achievement extends NASA's long, exceptional record of firsts on Mars. "We together flew at Mars," MiMi Aung, the project manager for Ingenuity, said to her team during the celebration. "And we together now have this Wright brothers moment."