
google engineer


The Palantir Guide to Saving America's Soul

The New Yorker

In the spring of 2014, a trans-anarchist Google engineer petitioned the White House to arrest our national decline. The plan was snappy: "1. …" Schmidt, then the chairman of Google, was an avatar of technocratic liberalism. Two decades earlier, as the largely unknown C.T.O. of Sun Microsystems, he helped Bill Clinton set up the first White House Web site, and, by the time of the Obama Administration, he served as Silicon Valley's unofficial consul to the Democratic Party. Schmidt was not himself a company "founder," a technologist's most regal credential, but he had performed as an able steward: when Larry Page and Sergey Brin struggled to reconcile their competing visions for Google's first corporate jet--Brin wanted a California king bed, Page did not--Schmidt negotiated a compromise. He was sensible and civic-minded. He was the adult in the room.


AI is feared to be apocalyptic or touted as world-changing – maybe it's neither

The Guardian

What if AI doesn't fundamentally reshape civilisation? This week, I spoke to Geoffrey Hinton, the English psychologist-turned-computer scientist whose work on neural networks in the 1980s set the stage for the explosion in AI capabilities over the last decade. Hinton wanted to talk in order to deliver a message to the world: he is afraid of the technology he helped create. "You need to imagine something more intelligent than us by the same difference that we're more intelligent than a frog. And it's going to learn from the web, it's going to have read every single book that's ever been written on how to manipulate people, and also seen it in practice." He now thinks the crunch time will come in the next five to 20 years. "And I still wouldn't rule out 100 years – it's just that my confidence that this wasn't coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better."


Google engineer warns it could lose out to open-source technology in AI race

The Guardian

Google has been warned by one of its engineers that the company is not in a position to win the artificial intelligence race and could lose out to commonly available AI technology. A document from a Google engineer leaked online said the company had done "a lot of looking over our shoulders at OpenAI", referring to the developer of the ChatGPT chatbot. However, the worker, identified by Bloomberg as a senior software engineer, wrote that neither company was in a winning position. "The uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch," the engineer wrote.


Google's AI has a long way to go before writing the next great novel

#artificialintelligence

Artificial intelligence has come a long way since the 1950s, and it has taken on an impressive array of tasks. It can solve math problems, detect natural disasters, identify different living organisms, pilot ships and more. But for tech giants like Google and Meta, one of the holy grails is formulating an AI that can understand language the way that humans do (a quest that, at times, comes with its own set of conflicts). A key test for language models is writing--an exercise that many people struggle with as well. Google engineers designed a proof-of-concept experiment called Wordcraft that used its language model LaMDA to write fiction.


Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence - AI Summary

#artificialintelligence

But whether or not computers ever attain human-like intelligence, the world has already summoned a different, equally destructive AI demon. Precisely because today's AI is little more than a brute, unintelligent system for automating decisions using algorithms and other technologies that crunch superhuman amounts of data, its widespread use by governments and companies to surveil public spaces, monitor social media, create deepfakes, and unleash autonomous lethal weapons has become dangerous to humanity. Despite the hype--such as a Google engineer's bizarre claim that his company's AI system had "come to life" and Tesla CEO Elon Musk's tweet predicting that computers will have human intelligence by 2029--the technology still fails at simple everyday tasks. If the various quantum-computing projects being pursued around the world succeed, these machines will be immensely powerful, performing tasks in seconds that would take conventional computers millions of years. With their power to quickly crunch immense amounts of data that would overwhelm any of today's systems, quantum computers could potentially enable better weather forecasting, financial analysis, logistics planning, space research, and drug discovery.


Another Scary Prophecy From the Google Engineer Who Thinks an A.I. Came Alive

Slate

This article is from Big Technology, a newsletter by Alex Kantrowitz. When I sat down with Blake Lemoine last week, I was more interested in the chatbot technology he called sentient--LaMDA--than the sentience issue itself. Personhood questions aside, modern chatbots are incredibly frustrating (ever try changing a flight via text?). So if Google's tech was good enough to make Lemoine, one of its senior engineers, believe it was a person, that advance was worth investigating. As our conversation began, Lemoine revealed Google had just fired him (you can listen to our conversation in full on the Big Technology Podcast) following his widely covered decision to reveal to the public that he believes LaMDA is a sentient A.I. When I wrote up the news, it became an international story.


A Google engineer believed he found an AI bot that was sentient. It cost him his job.

#artificialintelligence

Blake Lemoine, an engineer who claimed an AI bot was sentient, was fired from Google. "We wish Blake well," a spokesperson for Google told the Washington Post. Experts told Insider it is very unlikely the chatbot is sentient. The engineer who claimed a chatbot gained sentience was fired from Google on Friday, both he and the tech giant confirmed. Blake Lemoine sparked controversy after publishing a paper about his conversations with the Google artificial intelligence chatbot LaMDA, which led him to believe the bot had a mind of its own.


Does AI sentience matter to the enterprise?

#artificialintelligence

That question has been burning through technology circles for several weeks now, ever since a lone Google engineer claimed his LaMDA model had achieved both self-awareness and a soul. While this is an important question, it is not something the enterprise needs to concern itself with just yet. Even if such an algorithm were to arise, would it be all that useful in a practical sense? AI sentience has been a topic of debate for decades, but it got a kick-start last month when Google engineer Blake Lemoine posted conversations with a chatbot that he claimed proved it was sentient.


Has artificial intelligence (AI) come alive like in sci-fi movies? This Google engineer thinks so

#artificialintelligence

If you have ever interacted with a chatbot, you know we're still years away from those things convincing you that you are chatting with a real human. That's no surprise, as many chatbots do not actually use machine learning to converse more naturally; instead, they only complete scripted actions based on keywords. A good chatbot that truly utilises machine learning can fool you into thinking that you're talking to a human. In fact, a program from 1965 fooled people into thinking that it was a human.


Google engineer says Lamda AI system may have its own feelings

#artificialintelligence

Later, in a section reminiscent of the artificial intelligence HAL in Stanley Kubrick's film 2001, Lamda says: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."