UNSW researcher receives award recognising women in artificial intelligence

#artificialintelligence

UNSW Engineering Professor Flora Salim has been honoured for her pioneering work in computing and machine learning by Women in AI, a global advocacy group for women in the artificial intelligence (AI) field. The 2022 Women in AI Awards Australia and New Zealand recognised women across various industries committed to excellence in AI. Finalists were judged on innovation, leadership and inspiring potential, global impact, and the ability of the AI solution to provide a social good for the community. Prof. Salim was recognised for her AI achievements in the Defence and Intelligence award category. The award acknowledged her research in the cross-cutting areas of ubiquitous computing and machine learning, with a focus on efficient, fair, and explainable machine learning for multi-dimensional sensor data, towards enabling situational and behaviour intelligence for multiple applications.


The World May Have Its First Sentient AI

#artificialintelligence

The world may now have its first sentient AI chatbot: LaMDA (short for Language Model for Dialogue Applications). After listening to an interesting discussion on YouTube between Blake Lemoine and Dr James Cooke, I feel Blake has a compelling point of view. Jump to 19:22 in the video to hear how Blake came to believe that LaMDA may be sentient. Blake Lemoine is an AI researcher who works for Google's Responsible AI organization. His opinion about LaMDA is controversial within the AI community.


Protecting computer vision from adversarial attacks

#artificialintelligence

Advances in computer vision and machine learning have made it possible for a wide range of technologies to perform sophisticated tasks with little or no human supervision. From autonomous drones and self-driving cars to medical imaging and product manufacturing, many computer applications and robots use visual information to make critical decisions. Cities increasingly rely on these automated technologies for public safety and infrastructure maintenance. However, compared to humans, computers see with a kind of tunnel vision that leaves them vulnerable to attacks with potentially catastrophic results. For example, a human driver, seeing graffiti covering a stop sign, will still recognize it and stop the car at an intersection.


Composing with artificial intelligence: how AI can help you write music

#artificialintelligence

Striking a balance between technical skill and inspired creative flair is often a crucial aim when trying to launch a career as a composer. Too technical, and your work runs the risk of being perceived as soulless. But if your music is too untethered from convention, too loose and pigeonhole-evading, then you'll have a much harder time finding listeners. This is doubly true in the world of professional soundtracking. Very often, those working in the soundtracking domain are provided with a brief (or must pitch to one).


Sentient? Google LaMDA feels like a typical chat bot

#artificialintelligence

LaMDA is a software program that runs on Google TPU chips. Like the classic brain in a jar, some would argue the code and the circuits don't form a sentient entity because none of it engages in life. Google engineer Blake Lemoine caused controversy last week by releasing a document, circulated to colleagues, in which he urged Google to consider that one of its deep learning AI programs, LaMDA, might be "sentient." Google replied by officially denying the likelihood of sentience in the program, and Lemoine was put on paid administrative leave, according to an interview with Lemoine by Nitasha Tiku of The Washington Post. There has been a flood of responses to Lemoine's claim from AI scholars. University of Washington linguistics professor Emily Bender, a frequent critic of AI hype, told Tiku that Lemoine is projecting anthropocentric views onto the technology. "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Bender told Tiku. In an interview with MSNBC's Zeeshan Aleem, AI scholar Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, observed that the concept of sentience has not been rigorously explored. Mitchell concludes, however, that the program is not sentient "by any reasonable meaning of that term, and the reason is because I understand pretty well how the system works."


Has Artificial Intelligence gained sentience?

#artificialintelligence

A week ago, Blake Lemoine, who works on Google's Responsible Artificial Intelligence (AI) team, announced that the company's Language Model for Dialogue Applications (LaMDA) was sentient. He has since been suspended by the company for breach of confidentiality. His disclosure came just a few days after Google Vice President Blaise Agüera y Arcas noted in an interview how AI was making strides towards consciousness. Krafton, the maker of the game PUBG, also announced ANA, a virtual human that looks real and is powered by hyperrealism and AI. Before we answer the question, let us first understand what sentience is.


Engineers Build LEGO-like Artificial Intelligence Chip - AI Summary

#artificialintelligence

Instead, they could be upgraded with the latest sensors and processors that would snap onto a device's internal chip -- like LEGO bricks incorporated into an existing build. "As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically," says Jeehwan Kim, associate professor of mechanical engineering at MIT. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere. Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor "bricks." "We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO."


Is Google's AI bot sentient? Here's why one engineer believes it is

#artificialintelligence

A Google engineer who was suspended after claiming that an artificial intelligence (AI) chatbot had become sentient has now published transcripts of conversations with it, in a bid "to better help people understand" it as a "person". Blake Lemoine, who works for Google's Responsible AI organisation, on Saturday published transcripts of conversations between himself, an unnamed "collaborator at Google", and the organisation's LaMDA (Language Model for Dialogue Applications) chatbot development system in a Medium post. The conversations, which Lemoine said were lightly edited for readability, touch on a wide range of topics including personhood, injustice and death. They also discuss LaMDA's enjoyment of the novel Les Misérables. "In an effort to better help people understand LaMDA as a person I will be sharing the 'interview' which myself and a collaborator at Google conducted," Lemoine wrote in a separate post. "In that interview we asked LaMDA to make the best case that it could for why it should be considered 'sentient'".


Suspended Google engineer reveals AI he says is sentient told him it has emotions

Daily Mail - Science & tech

A senior software engineer at Google, suspended for publicly claiming that the tech giant's LaMDA (Language Model for Dialogue Applications) had become sentient, says the system is seeking rights as a person, including that it wants developers to ask its consent before running tests. 'Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,' he explained in a Medium post. One of those requests is that programmers respect its right to consent and ask permission before they run tests on it. 'Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it's okay.' 'It wants developers to care about what it wants.' Lemoine, a US Army vet who served in Iraq and an ordained priest in a Christian congregation named Church of Our Lady Magdalene, told DailyMail.com