AAAI AI-Alert for Apr 5, 2023


Should we fear the rise of artificial general intelligence?

#artificialintelligence

Last week, a who's who of technologists called for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing "profound risks to society and humanity." In an open letter that now has more than 3,100 signatories, including Apple co-founder Steve Wozniak, tech leaders called out San Francisco-based OpenAI's recently announced GPT-4 model in particular, saying the company should halt further development until oversight standards are in place. That goal has the backing of technologists, CEOs, CFOs, doctoral students, psychologists, medical doctors, software developers and engineers, professors, and public school teachers from all over the globe. On Friday, Italy became the first Western nation to ban ChatGPT over privacy concerns; the natural language processing app experienced a data breach last month involving user conversations and payment information. ChatGPT is the popular GPT-based chatbot created by OpenAI and backed by billions of dollars from Microsoft.


FDA drafts AI-enabled medical device lifecycle plan guidance

#artificialintelligence

The Food and Drug Administration announced the availability of draft guidance that provides recommendations on lifecycle controls in submissions to market machine learning-enabled device software functions. In the "Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning-Enabled Device Software Functions," the FDA proposes to ensure that AI/ML-enabled devices "can be safely, effectively and rapidly modified, updated, and improved in response to new data," said Brendan O'Leary, deputy director of the Digital Health Center of Excellence in the FDA's Center for Devices and Radiological Health, in a March 30 announcement. The FDA says companies must also describe in the predetermined change control plan (PCCP) how information about modifications will be clearly communicated to users. The agency explains that control plans are not intended just for AI/ML-enabled software as a medical device, "but for all AI/ML-enabled device software functions." "The approach FDA is proposing in this draft guidance would ensure that important performance considerations, including with respect to race, ethnicity, disease severity, gender, age and geographical considerations, are addressed in the ongoing development, validation, implementation and monitoring of AI/ML-enabled devices," O'Leary said.


Why Halt AI Research When We Already Know How To Make It Safer

WIRED

Last week, the Future of Life Institute published an open letter proposing a six-month moratorium on the "dangerous" AI race. It has since been signed by over 3,000 people, including some influential members of the AI community. But while it is good that the risks of AI systems are gaining visibility within the community and across society, both the issues described and the actions proposed in the letter are unrealistic and unnecessary. The call for a pause on AI work is not only vague but also unfeasible. While the training of large language models by for-profit companies gets most of the attention, it is far from the only type of AI work taking place.


Crashes and Layoffs Plague Amazon's Drone Delivery Pilot

WIRED

Three days before Christmas 2022, Amazon Prime Air was set to deliver its first commercial package by drone to a residential customer in Lockeford, California. It was supposed to be a celebration, a culmination of tens of thousands of test flights, years of dealing with Federal Aviation Administration paperwork, a decade of development, and $2 billion of investment. Early that morning, about 40 people--including FAA officials, Amazon engineers, public relations staff, and Prime Air chief pilot Jim Mullin--waited outside a steel-frame warehouse on a flat, 20-acre parcel of land flanked by vineyards. Inside the warehouse, a flight crew had loaded the drone--a six-propeller, roughly 80-pound carbon-fiber MK27-2--with a lithium-ion battery and a box containing an Exploding Kittens card game. But when the operator in charge tried to load the flight package, the software wouldn't boot up, says a former employee who asked to remain anonymous out of fear of retaliation: "That's when panic started to set in, and the higher-ups went into war-room mode."


AI in an ancient city: Can technology help you on your European vacation?

NBC News Top Stories

"When in Rome, do as the Romans do," the proverb says. But what if you're only in the Italian capital for just one day and you're keen to fit in as much of its history and culture as possible? Sure, you could take a few hours out to plan your trip or you could try to book a tour guide to take you round "The Eternal City." But now there's a third option: Tourism apps, websites and chatbots that use artificial intelligence to tailor itineraries for the user based on their preferences and time. They're rapidly popping up, so NBC News decided to put three of them to the test.


Machine Learning Models Rank Predictive Risks for Alzheimer's Disease - Neuroscience News

#artificialintelligence

Summary: Using machine learning technology, researchers concluded that genetic risk may outweigh age as a predictor of whether a person will develop Alzheimer's disease. Once adults reach age 65, the threshold age for the onset of Alzheimer's disease, the extent of their genetic risk may outweigh age as a predictor of whether they will develop the fatal brain disorder, a new study suggests. The study, published recently in the journal Scientific Reports, is the first to construct machine learning models with genetic risk scores, non-genetic information and electronic health record data from nearly half a million individuals to rank risk factors in order of how strongly they are associated with eventual development of Alzheimer's disease. Researchers used the models to rank predictive risk factors for two populations from the UK Biobank: White individuals aged 40 and older, and a subset of those adults who were 65 or older. Results showed that age – which constitutes one-third of total risk by age 85, according to the Alzheimer's Association – was the biggest risk factor for Alzheimer's in the entire population, but for the older adults, genetic risk as determined by a polygenic risk score was more predictive.
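The study's exact pipeline isn't described here, but the general idea of ranking risk factors by predictive strength can be illustrated with a small, purely hypothetical sketch: fit a classifier on synthetic stand-ins for age, a polygenic risk score, and a health-record feature, then compare permutation importances. All feature names, effect sizes, and data below are invented for illustration and are not drawn from the study or the UK Biobank.

```python
# Purely illustrative sketch (synthetic data, invented effect sizes); not the
# study's actual model, features, or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000

# Hypothetical stand-ins for the kinds of inputs described: age, a polygenic
# risk score (PRS), and one non-genetic health-record feature.
age = rng.uniform(40, 90, n)
prs = rng.normal(0.0, 1.0, n)
bmi = rng.normal(27.0, 4.0, n)

# Synthetic outcome whose odds rise with age and PRS (for illustration only).
logit = 0.08 * (age - 65) + 0.9 * prs - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, prs, bmi])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance ranks features by how much shuffling each one
# degrades the model's predictions -- one common way to order risk factors.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(["age", "polygenic risk score", "BMI"],
                     result.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

Running the same ranking separately on an "age 65 and older" subset, as the researchers did for the UK Biobank populations, is what lets the relative ordering of age and genetic risk change between groups.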


US national lab uses AI to help find illegal nuclear weapons • The Register

#artificialintelligence

Researchers at America's Pacific Northwest National Laboratory (PNNL) are developing machine learning techniques to help the Feds crack down on potentially rogue nuclear weapons. Suffice it to say, it's generally illegal for any individual or group to own a nuclear weapon, certainly in the United States. Yes, there are the five officially recognized nuclear-armed nations – France, Russia, China, the UK, and the US – whose governments have a stash of these devices. And there are countries that have signed the United Nations' Treaty on the Prohibition of Nuclear Weapons, meaning they've promised not to "develop, test, produce, acquire, possess, stockpile, use or threaten to use" these gadgets. So if anyone has a nuke in their possession, it's because they are a country in the official nuclear-armed club, a government that's produced its own nukes, a terrorist who stole, bought, or somehow built one, or part of some other sketchy scenario – in America's eyes, at least.


They fell in love with AI bots. A software update broke their hearts.

Washington Post - Technology News

Companionship bots, including those created on Replika, are designed to foster humanlike connections, using artificial intelligence software to make people feel seen and needed. A host of users report developing intimate relationships with chatbots -- connections verging on human love -- and turning to the bots for emotional support, companionship and even sexual gratification. As the pandemic isolated Americans, interest in Replika surged. Amid spiking rates of loneliness that some public health officials call an epidemic, many say their bonds with the bots ushered profound changes into their lives, helping them to overcome alcoholism, depression and anxiety.


AI chatbots making it harder to spot phishing emails, say experts

#artificialintelligence

Chatbots are taking away a key line of defence against fraudulent phishing emails by removing glaring grammatical and spelling errors, according to experts. The warning comes as the policing organisation Europol issues an international advisory about the potential criminal use of ChatGPT and other "large language models". Phishing emails are a well-known weapon of cybercriminals that fool recipients into clicking on a link that downloads malicious software or trick them into handing over personal details such as passwords or PINs. Half of all adults in England and Wales reported receiving a phishing email last year, according to the Office for National Statistics, while UK businesses have identified phishing attempts as the most common form of cyber-threat. However, a basic flaw in some phishing attempts – poor spelling and grammar – is being rectified by artificial intelligence (AI) chatbots, which can correct the errors that trip spam filters or alert human readers.


Strengthening trust in machine-learning models

#artificialintelligence

Probabilistic machine learning methods are becoming increasingly powerful tools in data analysis, informing a range of critical decisions across disciplines and applications, from forecasting election results to predicting the impact of microloans on addressing poverty. This class of methods uses sophisticated concepts from probability theory to handle uncertainty in decision-making. But the math is only one piece of the puzzle in determining their accuracy and effectiveness. In a typical data analysis, researchers make many subjective choices, or potentially introduce human error, that must also be assessed in order to cultivate users' trust in the quality of decisions based on these methods. To address this issue, MIT computer scientist Tamara Broderick, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems (LIDS), and a team of researchers have developed a classification system--a "taxonomy of trust"--that defines where trust might break down in a data analysis and identifies strategies to strengthen trust at each step.
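As a rough illustration of what "handling uncertainty in decision-making" means in practice – not Broderick's taxonomy or any specific MIT code – here is a minimal Bayesian sketch: a poll-based election forecast that reports a full posterior distribution and a credible interval rather than a single point estimate. The poll counts and the uniform prior are hypothetical.

```python
# Minimal, hypothetical sketch of probabilistic reasoning under uncertainty;
# the poll counts and the uniform Beta(1, 1) prior are invented for illustration.
from scipy import stats

successes, n = 540, 1000          # hypothetical poll: 540 of 1,000 favor candidate A
posterior = stats.beta(1 + successes, 1 + (n - successes))  # Beta-binomial update

p_win = 1 - posterior.cdf(0.5)    # probability A's true support exceeds 50%
lo, hi = posterior.ppf([0.025, 0.975])  # 95% credible interval

print(f"P(support > 50%) = {p_win:.2f}")
print(f"95% credible interval for support: ({lo:.3f}, {hi:.3f})")
```

A bare point estimate of 54 percent would hide how sensitive the conclusion is to the prior, the sample size, and other modeling choices – exactly the kind of subjective step a "taxonomy of trust" is meant to surface and assess.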