What Is It Like to Be a Robot? – Rodney Brooks

#artificialintelligence

This is the first post in an intended series on the current state of Artificial Intelligence capabilities and what we can expect in the relatively short term. I will be at odds with the more outlandish claims circulating in the press, and among what I consider an alarmist group that includes people both inside and outside the AI field. In this post I start to introduce some of the key components of my future arguments, and show how different any AI system might be from us humans. Some may recognize the title of this post as an homage to Thomas Nagel's 1974 paper, "What Is It Like to Be a Bat?". Two more recent books, one from 2009 by Alexandra Horowitz on dogs and one from 2016 by Peter Godfrey-Smith on octopuses, also pay homage to Nagel's paper, each with a section of a chapter titled "What it is like" and "What It's Like," respectively, giving affirmative answers to their own questions about what it is like to be a dog, or an octopus.


New Test Leverages Machine Learning to Diagnose and Predict Sepsis

#artificialintelligence

Sepsis is a huge healthcare concern. "You take every single cancer and all the deaths due to every single cancer and you add them all up together. More people die from sepsis worldwide than that," said Bobby Reddy, Jr., CEO of Prenosis, in an interview with MD DI. And even if patients survive, they can have lifelong consequences. "Sepsis occurs when you have a very abnormal, unhealthy reaction to infection," Reddy said.


NBC, CNN Sunday shows spend just seconds on botched Afghan drone strike after ignoring blunder last week

FOX News

'Fox & Friends Weekend' co-host Pete Hegseth reacts to the U.S. drone strike that killed civilians instead of ISIS-K members in Afghanistan. After previously avoiding the botched U.S. drone strike that killed Afghan civilians instead of terrorists, both CNN and NBC's Sunday morning news shows dedicated just seconds of coverage to the Biden foreign policy blunder. On Friday, the Pentagon confirmed that the Aug. 28 drone strike was a "tragic mistake" that left ten civilians dead, including seven children; the strike was meant to be a response to the Aug. 26 terrorist attack outside the Kabul airport that killed 13 U.S. service members. This came one week after the New York Times published a stunning visual investigation that came to the same conclusion. The Biden administration had announced that "two high profile" ISIS-K fighters who were dubbed "planners and facilitators" of the suicide bombing were killed in the strike.


Three Sunday shows ignored NYT report on botched drone strike Pentagon now admits killed 10 Afghan civilians

FOX News

Fox News anchor Bret Baier offers analysis on that and other breaking news stories on 'Your World'. Three of the five prominent Sunday morning newscasts avoided the explosive New York Times report about the botched U.S. drone strike that the Pentagon finally admitted killed Afghan civilians rather than the ISIS-K terrorists the Biden administration previously touted. During a Friday press conference, the Pentagon confirmed that the Aug. 28 drone strike was a "tragic mistake" that killed ten civilians, including seven children; the strike was meant to be a response to the Aug. 26 terrorist attack outside the Kabul airport that left 13 U.S. service members dead. This came one week after the Times published a stunning visual investigation that came to the same conclusion. The Biden administration had announced that "two high profile" ISIS-K fighters who were dubbed "planners and facilitators" of the suicide bombing were killed in the strike.


ABC, NBC, CNN's Sunday shows avoid drone strike killing Afghan civilians, not terrorists Biden admin touted

FOX News

Kentucky Republican weighs in on the Biden admin's handling of the Afghanistan crisis on 'The Ingraham Angle'. The majority of the Sunday morning newscasts on the liberal networks avoided addressing the explosive New York Times report that the drone strike touted by the Biden administration in response to the deadly terrorist attack in Afghanistan did not actually kill the terrorist plotters. Days after 13 U.S. service members were killed in a suicide bombing outside of the Kabul airport, the Pentagon announced a drone strike that it said successfully targeted "two high profile" ISIS-K fighters dubbed "planners and facilitators" of the Aug. 26 attack. The Biden administration praised the Pentagon's swift action to support President Biden's rhetoric that those responsible for the terror attack would be brought to justice. However, the Times published the results of a bombshell investigation on Friday outlining video evidence that not only were ISIS-K terrorists not killed in the drone strike, but that Zemari Ahmadi, described by the Times as a "longtime worker for a U.S. aid group," was one of ten civilians who were killed, seven children among them. The controversy was apparently not newsworthy enough for ABC's "This Week," NBC's "Meet the Press," and CNN's "State of the Union," all of which avoided the damning report.


Research on beards, wads of gum wins 2021 Ig Nobel prizes

Boston Herald

Beards aren't just cool and trendy -- they might also be an evolutionary development to help protect a man's delicate facial bones from a punch to the face. That's the conclusion of a trio of scientists from the University of Utah who are among the winners of this year's Ig Nobel prizes, the Nobel Prize spoofs that honor -- or maybe dishonor, depending on your point of view -- strange scientific discoveries. The winners of the 31st annual Ig Nobels being announced Thursday included researchers who figured out how to better control cockroaches on U.S. Navy submarines; animal scientists who looked at whether it's safer to transport an airborne rhinoceros upside-down; and a team that figured out just how disgusting that discarded gum stuck to your shoe is. For the second year in a row, the ceremony was a roughly 90-minute prerecorded digital event because of the worldwide coronavirus pandemic, said Marc Abrahams, editor of the Annals of Improbable Research magazine, the event's primary sponsor. While disappointing in many ways because half the fun of a live ceremony is the rowdy audience participation, the ceremony retained many in-person traditions.


Future Tense Newsletter: We Need a Muppet Version of Frankenstein

Slate

Sign up to receive the Future Tense newsletter every other Saturday. On Aug. 30, my heart broke a tiny bit. That day, the Guardian published a remarkable interview with Frank Oz, Jim Henson's longtime collaborator and the puppeteer behind Fozzie Bear, Miss Piggy, and other classic Muppets. Oz hasn't been involved with the Muppets since 2007, three years after Disney purchased the franchise. He tells the Guardian: "I'd love to do the Muppets again but Disney doesn't want me, and Sesame Street hasn't asked me for 10 years. They don't want me because I won't follow orders and I won't do the kind of Muppets they believe in." He added of the post-Disney Muppet movies and TV shows: "The soul's not there."


Meet the women making waves in AI ethics, research, and entrepreneurship

#artificialintelligence

The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. Women in the AI field are making research breakthroughs, launching exciting companies, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. And that's why we created the VentureBeat Women in AI Awards -- to emphasize the importance of their voices, work, and experiences, and to shine a light on some of these leaders. We first announced the six winners at Transform 2021 in July, and ever since, we've been catching up with each of them for deeper discussions around their work and emerging challenges in the field. Our conversations have touched on everything from regulation and dealing with messy real-world data to how to approach AI more responsibly.


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, freedom, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


What is AI? Stephen Hanson in conversation with Michael Jordan

AIHub

In the first instalment of this new video series, Stephen José Hanson talks to Michael I Jordan about AI as an engineering discipline, what people call AI, so-called autonomous cars, and more. To provide some background to this discussion: in 2018, Jordan published an essay on Medium entitled Artificial intelligence -- the revolution hasn't happened yet, in which he argues that we need to tone down the hype surrounding AI and develop the field as a human-centric engineering discipline. He adds further commentary on this topic in an interview published this year in IEEE Spectrum (Stop calling everything AI). Hanson wrote a rebuttal to the Medium article, AI: Nope, the revolution is here and this time it is the real thing, and the pair discuss the theme in more detail in this video discussion. There is also a full transcript of the discussion below. This interchange was recorded on June 15th, 2021.
HANSON: Hi Michael, good to see you! So let's get into this. Let me just state what I think you said, and you tell me where I'm wrong, if I am. It appears to me that you're basically saying that AI should arise from an engineering discipline that starts from a well-defined science, the way chemistry gave rise to chemical engineering. This would allow insights from the science to migrate into an engineering domain with principles of design, control, risk management, and many other good statistical quality-control ideas -- something that would make AI valuable and useful, with some way to actually calculate whether the AI is being useful, as opposed to counting the number of hidden units it has…
JORDAN: Just to slow you down a little bit there, I mean historically I think the good points of reference are things like the development of chemical engineering or electrical engineering, where there was an existing science and understanding, and there was an appetite to build real-world systems that had huge implications for human life. So chemical factories didn't exist initially, but when they started to exist, I don't think it was that the science was all worked out and they kind of applied it and it just happened.