BERLIN – A turning point for Rafael Yuste, a neuroscientist at New York's Columbia University, came when his lab discovered it could activate a few neurons in a mouse's visual cortex and make it hallucinate. The mouse had been trained to lick at a water spout every time it saw two vertical bars, and researchers were able to prompt it to drink even with no bars in sight, said Yuste, whose team published a study on the experiment in 2019. "We could make the animal see something it didn't see, as if it were a puppet," he said in a phone interview. "If we can do this today with an animal, we can do it tomorrow with a human for sure." Yuste is part of a group of scientists and lawmakers, stretching from Switzerland to Chile, who are working to rein in the potential abuses of neuroscience by companies ranging from tech giants to wearable startups.
Your visual cortex does two incredible things, thousands of times a second. First, it takes all the information streaming in through your retinas and passes it through a series of steps – looking first for patches of dark and light, then for features such as lines and edges, then for simple recognisable shapes like the letter 'A', working up to household objects like a toaster or kettle, or individual faces, like your grandmother's, or that of the person you used to see every day at the bus stop on the way to work. The second incredible thing it does is to completely forget that it's done any of that at all. The inner workings of our minds are not accessible to us – and that is one of the things that will always separate us from artificially intelligent machines like the ones depicted in Klara and the Sun, the new novel from British author Kazuo Ishiguro. The book is set in a near future where robotic humanoids called 'Artificial Friends', or 'AFs', are the purchase of choice for wealthy teenagers, who – for unspecified reasons – are taught remotely, and rarely get the opportunity to interact with their peers face to face.
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggest constitutive design elements of ST and develop a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
CONTENTshift is the accelerator program of the German Book Publishers and Printers Association. Below you will find more interviews from past batches. We used to record the interviews directly at the Frankfurt Book Fair, but since it was canceled this year due to the coronavirus pandemic, we resorted to remote-only interviews. At the time of the recording, we did not know who had won the final award. We will publish the exclusive interview with the winner as the last in our series this year.
C3.ai CEO Tom Siebel is rarely short of an opinion, and his next big bet is that artificial intelligence is going to drive CRM software in a new direction. AI and ML deployments are well underway, but for CXOs the biggest issues will be managing these initiatives, figuring out where the data science team fits in, and deciding which algorithms to buy versus build. Fresh off a partnership with Microsoft and Adobe to meld data, CRM and AI, we caught up with Siebel to talk about C3.ai's Digital Transformation Institute, COVID-19 data lakes, education's next innovation and why social media firms need to be regulated. The full interview is in the video. Here are some of the takeaways from my interview with Siebel.
IMAGE: Siamak Yousefi, Ph.D., an assistant professor in the Department of Ophthalmology and the Department of Genetics, Genomics, and Informatics at the University of Tennessee Health Science Center. He was awarded $180,000 from the BrightFocus Foundation to study the impact of glaucoma on certain retinal ganglion cells, as a path to uncover more information on glaucoma progression. The foundation is a nonprofit organization supporting research on brain and eye diseases. "Glaucoma affects over 90 million people worldwide and its incidence is predicted to double over the next two decades," Dr. Yousefi said. "The costs of treating glaucoma increase sharply in the later stages of the disease. Therefore, earlier detection of glaucoma and its progression could result not only in retained vision, but also in significant financial savings. Effective monitoring and determining appropriate treatment strategies require reliable approaches that quantify disease-induced changes more accurately."
Building an open-domain socialbot that talks to real people is challenging: such a system must meet multiple user expectations, including broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms, prioritizing their interests, feelings and autonomy. As a result, our socialbot provides a responsive, personalized user experience, capable of talking knowledgeably about a wide variety of topics, as well as chatting empathetically about ordinary life. Neural generation plays a key role in achieving these goals, providing the backbone for our conversational and emotional tone. At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0.
Automatically detecting personality traits can aid several applications, such as mental health recognition and human resource management. Most datasets introduced for personality detection so far have analyzed these traits for each individual in isolation. However, personality is intimately linked to our social behavior. Furthermore, surprisingly little research has focused on personality analysis in low-resource languages. To this end, we present a novel peer-to-peer Hindi conversation dataset, Vyaktitv. It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation. The dataset also contains a rich set of socio-demographic features, such as income and cultural orientation, among several others, for all the participants. We release the dataset for public use and perform a preliminary statistical analysis along its different dimensions. Finally, we also discuss various other applications and tasks for which the dataset can be employed.
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
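The constraining pattern described above can be sketched in code: frame the task, then seed the opening of the desired output so the model continues in that mode rather than pivoting. A minimal sketch; the function name and the exact framing text are illustrative assumptions, and the returned string would be passed to whatever text-completion API one is using.

```python
def build_summarization_prompt(passage: str) -> str:
    """Build a GPT-3-style summarization prompt.

    The passage is framed as a question from a second grader, and the
    prompt ends with the first words of the target output ("I rephrased
    it for him, in plain language..."), so the model is constrained to
    continue with a simple-language summary instead of drifting into
    another mode of completion.
    """
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        # Seeding the opening of the answer is the key constraint:
        "I rephrased it for him, in plain language a second grader "
        "can understand:\n\n\""
    )

prompt = build_summarization_prompt(
    "Photosynthesis converts light energy into chemical energy."
)
print(prompt)
```

Note that the prompt deliberately ends mid-output (on an opening quote mark), which is exactly the "write the first few words of the target output" tactic the passage recommends.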