Dalí, Basquiat, Haring, and Hockney at Luna Luna

The New Yorker

I don't know what Werner Herzog is up to these days, but if he's between projects, I humbly suggest that he make a documentary about Luna Luna, the Hamburg amusement park that took more than ten years to put together, included attractions designed by Dalí and Basquiat and Haring and Hockney, and spent thirty-five years in shipping containers. It's now been partly reassembled at the Shed, for the exhibition "Luna Luna, Forgotten Fantasy," through Jan. 5. The park's Fitzcarraldo, a poet-songwriter-pop star named André Heller, was born in Vienna in 1947 and spent much of his thirties persuading artists to decorate rides. Haring slathered a merry-go-round in melty cartoons; Basquiat dressed a Ferris wheel in his customary graffiti. The park opened to the public in 1987, largely funded by a gossip rag, and stayed that way for a summer.


'Jeopardy!' contestant torn apart by fans after huge mistake: 'Such a buffoon'

FOX News

A "Jeopardy!" contestant is going viral this week after making what many fans consider one of the biggest blunders in the show's history. On Wednesday's episode, a woman named Karen had a huge lead over the other two contestants as they neared the end of the second round – she had earned $21,800, while her competitors had earned $7,100 and $6,400. When there were only a few clues left on the Double Jeopardy board, Karen found a Daily Double in the "Hans, Solo" category. If she had made a modest bet, she would have been sure to win the entire game after Final Jeopardy, as the other players couldn't possibly catch up to her lead.


A Sensor Sniffs for Cancer, Using Artificial Intelligence

#artificialintelligence

Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
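The idea of recognizing a disease from the pattern across a whole sensor array, rather than from any single sensor, can be sketched in a few lines. This is purely a toy illustration (a nearest-centroid classifier on made-up 4-sensor response vectors), not MSK's actual model or data:

```python
# Toy sketch: classify a sample by which class's average sensor-array
# "signature" its response pattern is closest to. All numbers are invented.

def centroid(samples):
    """Average response pattern across samples, one value per sensor."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(pattern, centroids):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Hypothetical training signatures from a 4-sensor array
healthy = [[0.1, 0.9, 0.2, 0.8], [0.2, 0.8, 0.1, 0.9]]
cancer  = [[0.9, 0.1, 0.8, 0.2], [0.8, 0.2, 0.9, 0.1]]
centroids = {"healthy": centroid(healthy), "cancer": centroid(cancer)}

print(classify([0.85, 0.15, 0.9, 0.1], centroids))  # prints "cancer"
```

No individual sensor here is decisive on its own; as with olfactory receptors, the joint pattern across the array carries the signal.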


Underspecification Presents Challenges for Credibility in Modern Machine Learning

D'Amour, Alexander, Heller, Katherine, Moldovan, Dan, Adlam, Ben, Alipanahi, Babak, Beutel, Alex, Chen, Christina, Deaton, Jonathan, Eisenstein, Jacob, Hoffman, Matthew D., Hormozdiari, Farhad, Houlsby, Neil, Hou, Shaobo, Jerfel, Ghassen, Karthikesalingam, Alan, Lucic, Mario, Ma, Yian, McLean, Cory, Mincu, Diana, Mitani, Akinori, Montanari, Andrea, Nado, Zachary, Natarajan, Vivek, Nielson, Christopher, Osborne, Thomas F., Raman, Rajiv, Ramasamy, Kim, Sayres, Rory, Schrouff, Jessica, Seneviratne, Martin, Sequeira, Shannon, Suresh, Harini, Veitch, Victor, Vladymyrov, Max, Wang, Xuezhi, Webster, Kellie, Yadlowsky, Steve, Yun, Taedong, Zhai, Xiaohua, Sculley, D.

arXiv.org Machine Learning

ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain.
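The failure mode the abstract describes can be shown with a toy example (entirely hypothetical data, not from the paper): two predictors that are indistinguishable on training-domain data, because a spurious feature happens to track the causal one, yet behave very differently once that correlation breaks in deployment:

```python
import random

random.seed(0)

# Training domain: feature f2 is spuriously identical to the causal
# feature f1, so two different predictors fit the data equally well.
# Each sample is a (f1, f2, label) triple.
train = [(x, x, x) for x in (random.randint(0, 1) for _ in range(100))]

def pred_a(f1, f2):  # relies on the causal feature
    return f1

def pred_b(f1, f2):  # relies on the spurious feature
    return f2

def accuracy(pred, data):
    return sum(pred(f1, f2) == y for f1, f2, y in data) / len(data)

# Both predictors look equivalent on training-domain data: the pipeline
# is underspecified about which one it returns.
print(accuracy(pred_a, train), accuracy(pred_b, train))  # 1.0 1.0

# Deployment domain: the correlation breaks and f2 becomes noise.
deploy = [(x, random.randint(0, 1), x)
          for x in (random.randint(0, 1) for _ in range(1000))]
print(accuracy(pred_a, deploy))  # 1.0
print(accuracy(pred_b, deploy))  # roughly 0.5, i.e. chance level
```

Held-out performance in the training domain cannot distinguish pred_a from pred_b, which is exactly why the authors argue underspecification must be addressed explicitly before deployment.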


Death by drone: How can states justify targeted killings?

Al Jazeera

In a move that caused a ripple effect across the Middle East, Iranian General Qassem Soleimani was killed in a US drone strike near Baghdad's international airport on January 3. On that day, the Pentagon announced the attack was carried out "at the direction of the president". In a new report examining the legality of armed drones and the Soleimani killing in particular, Agnes Callamard, UN special rapporteur on extrajudicial and arbitrary killings, said the US raid that killed Soleimani was "unlawful". Callamard presented her report at the Human Rights Council in Geneva on Thursday. The United States, which is not a member after quitting the council in 2018, rejected the report saying it gave "a pass to terrorists". In Callamard's view, the consequences of targeted killings by armed drones have been neglected by states.


Up Close and Personal

#artificialintelligence

Personalized cancer medicine has advanced from a distant hope to a clinical reality. Oncologists regularly individualize treatments to target a tumor's unique genetic weaknesses. But because these personalized medicines reach healthy tissues and tumors alike, even the most targeted treatments can cause unwanted side-effects. A new approach devised by nanotechnology experts at the Sloan Kettering Institute (SKI) at Memorial Sloan Kettering Cancer Center may improve the precision of personalized medicines by helping them avoid collateral damage. "We found a way to use machine-learning algorithms to design powerful nanomedicines that can deliver a stronger, safer, more personalized punch," says Daniel Heller, PhD, a chemist in the molecular pharmacology program at SKI and an assistant professor at the Weill Cornell Graduate School of Medical Sciences.


Speakers at MarTech pull back the curtain on artificial intelligence - MarTech Today

#artificialintelligence

Marketers should use artificial intelligence (AI) to solve problems, not just because it's the latest fad. That was the consensus at the mainstage keynote at the MarTech conference on Wednesday morning. Adelyn Zhou, co-founder and head of marketing at AI firm TOPBOTS, and Jason Heller, partner, global lead of digital marketing operations and technology at consulting firm McKinsey & Company, gave spirited talks about AI and machine learning. "Don't look at AI as a solution looking for a problem," Heller said in his portion of the talk. "Ask yourself, 'What is the problem I'm trying to solve?'" "I can't promise a silver bullet," Zhou said.


Q: What Do Law Firm Knowledge Managers Want? A: Automation

#artificialintelligence

'We can't expect an elite KM group, no matter how large, to collect, assess, codify and turn the firm's work into a KM or exemplar database,' he says. 'The explosion of information exceeds a "human-centric" way of managing. Our work product needs to speak for itself, and existing technologies can give it a voice. Technology has to be able to get you to that intersection of what's meaningful [in] the task at hand,' he concludes. That is to say, collecting, sorting, and presenting extracted data is all very nice, but if it cannot be delivered to the point of need, in a way that is truly functional and relevant, then it's not much use.


Legalweek Robot Fight Was Mayweather-Pacquiao For AI Case Briefing Software

#artificialintelligence

ATL readers are offered 1 free CLE course each month, thanks to Lawline.


As Google AI researcher accused of harassment, female data scientists speak of 'broken system'

The Guardian

The Duke University professor was at a statistics conference last year when, she said, she witnessed Steven Scott, a senior artificial intelligence (AI) researcher at Google, make sexual advances toward one of her female students. According to Heller, when she spoke to Scott later at an event dinner, he was defensive and told the professor that she should be nice to him, considering that he had secured her a Google-funded faculty research award. Artificial intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is widely used to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites.