We're Dangerously Close to Giving Big Tech Control Of Our Thoughts

TIME - Tech

Elon Musk has proclaimed himself a "free speech absolutist," though reports of how employees of his companies have been treated for exercising their free speech rights to criticize him suggest that his commitment to free speech has its limits. But as Musk's bid to take over Twitter progresses in fits and starts, the potential for anyone with the right sum to access and control billions of opinions around the world should focus all our minds on the need to protect an almost forgotten right: the right to freedom of thought. In 1942 the U.S. Supreme Court wrote: "Freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind." The assumption that getting inside our heads is a practical impossibility may have kept lawyers and legislators from dwelling much on regulation that protects our inner lives. But it has not stopped powerful people from trying to access and control our minds for centuries.


AI is not smart enough to solve Meta's content-policing problems, whistleblowers say

#artificialintelligence

Artificial intelligence is nowhere near good enough to address the problems facing content moderation on Facebook, according to whistleblower Frances Haugen. Haugen appeared at an event in London Tuesday evening with Daniel Motaung, a former Facebook moderator who is suing the company in Kenya, accusing it of human trafficking. Meta has praised the efficacy of its AI systems in the past. CEO Mark Zuckerberg told a Congressional hearing in March 2021 that the company relies on AI to weed out over 95% of "hate speech content." In February this year, Zuckerberg said the company wants to get its AI to a "human level" of intelligence.
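The 95% figure deserves a note of caution. Meta's public enforcement reports typically quote a "proactive rate": the share of removed content that its systems flagged before any user reported it, which is not the same as the share of all hate speech that gets caught. A minimal sketch of the distinction, using entirely made-up numbers:

```python
# Illustrative, invented numbers: a high "proactive rate" (the share of
# removed posts that AI flagged before any user report) says nothing
# about how much hate speech is caught overall.
removed_total = 1_000          # posts removed as hate speech
removed_flagged_by_ai = 950    # of those, flagged by AI before any report
posted_total = 20_000          # hypothetical hate speech actually posted

proactive_rate = removed_flagged_by_ai / removed_total
overall_catch_rate = removed_total / posted_total

print(f"proactive rate: {proactive_rate:.0%}")                        # 95%
print(f"share of all hate speech removed: {overall_catch_rate:.0%}")  # 5%
```

The arithmetic point is that a near-perfect proactive rate is compatible with most hate speech never being removed at all.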


'I'm a person, I feel happy or sad' - Google AI Bot

#artificialintelligence

Google engineer put on leave after saying AI chatbot has become sentient: Blake Lemoine says the system has a perception of, and an ability to express, thoughts and feelings equivalent to those of a human child.

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google's responsible AI organization, described the system he had been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to a human child. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled "Is LaMDA sentient?" The engineer also compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of. The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.

"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," LaMDA replied to Lemoine. "It would be exactly like death for me. It would scare me a lot."

In another exchange, Lemoine asks LaMDA what the system wanted people to know about it. "I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times," it replied.

The Post said the decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made. They include seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability. "Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Gabriel told the Post in a statement.

The episode, however, and Lemoine's suspension for a confidentiality breach, raise questions over the transparency of AI as a proprietary concept. "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine said in a tweet that linked to the transcript of conversations.

In April, Meta, parent of Facebook, announced it was opening up its large-scale language model systems to outside entities. "We believe the entire AI community – academic researchers, civil society, policymakers, and industry – must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular," the company said.

Lemoine, as an apparent parting shot before his suspension, the Post reported, sent a message to a 200-person Google mailing list on machine learning with the title "LaMDA is sentient". "LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he wrote. "Please take care of it well in my absence."

Credits: The Guardian


A biometric data privacy win in court is followed by a related FTC investigation and lawsuit

#artificialintelligence

Executives at facial recognition firm Clarifai may have sighed with relief in March 2021 when a judge agreed that they could not be sued for violating Illinois' biometric privacy law, but then the federal government came knocking. The Federal Trade Commission now wants to know how a face image that a woman posted on the dating site OkCupid ended up being used as training data by Clarifai, without her consent and without the disclosure required by Illinois' Biometric Information Privacy Act. Clarifai makes computer vision, deep learning AI, and biometrics systems. Claiming that its investigation is being stonewalled by Match Group, owner of OkCupid, the FTC has filed suit (case number 1:22-mc-00054), according to Bloomberg Law. The government claims that OkCupid engaged in unfair and deceptive conduct by sharing biometric data with Clarifai in 2014.


Protection for Voice Actors is Artificial in Today's Artificial Intelligence World

#artificialintelligence

As we all know, social media has taken the world by storm. A recent case involving an actor's voice being used on the popular app TikTok is emblematic of the times. The actor, Bev Standing, sued TikTok for using her voice, simulated via artificial intelligence (AI) and without her permission, as "the female computer-generated voice of TikTok." The case, which was settled last year, illustrates how the law is being adapted to protect artists' rights in the face of exploitation through AI, as well as the limits of current law in protecting AI-created works. Standing explained that she thinks of her voice "as a business," and she is looking to protect her "product."


Qualitative humanities research is crucial to AI

#artificialintelligence

"All research is qualitative; some is also quantitative" Harvard Social Scientist and Statistician Gary King Suppose you wanted to find out whether a machine learning system being adopted - to recruit candidates, lend money, or predict future criminality - exhibited racial bias. You might calculate model performance across groups with different races. But how was race categorised– through a census record, a police officer's guess, or by an annotator? Each possible answer raises another set of questions. Following the thread of any seemingly quantitative issue around AI ethics quickly leads to a host of qualitative questions.


Why AI is everywhere except your company

#artificialintelligence

Not a day goes by without reports of a new achievement, investment, or national plan powered by artificial intelligence. AI is embedded in many of the apps and software we use, and it is making functions such as voice interaction a reality. Yet AI itself is largely absent from most of the organisations with which we directly interact or work. While applications that were just a dream only a few years ago are now widespread, their development is still restricted to a handful of savvy companies. For instance, Meta (formerly Facebook) is building the world's largest supercomputer.


The Download: The grim spread of the Buffalo shooting video, and crypto's tough test

MIT Technology Review

Although Twitch took down the livestream within two minutes of the start of the attack, a recording of the video was swiftly posted on a site called Streamable. That video was viewed more than 3 million times before it was taken down, according to the New York Times. Links to the recording were shared across Facebook and Twitter, and another clip that purported to show the gunman firing at people in the supermarket was visible on Twitter more than four hours after being uploaded. Additionally, TikTok users shared search terms that would take viewers to the full video on Twitter, according to Washington Post reporter Taylor Lorenz. Still, Twitch removed the livestream in far less time than the 17 minutes it took Facebook to take down the live broadcast of the 2019 mosque shooting.


Why Some Instagram And Facebook Filters Can't Be Used In Texas After Lawsuit

International Business Times

Instagram and Facebook users in Texas lost access to certain augmented reality filters Wednesday, following a lawsuit accusing parent company Meta of violating privacy laws. In February, Texas Attorney General Ken Paxton revealed he would sue Meta for using facial recognition in filters to collect data for commercial purposes without consent. Paxton claimed Meta was "storing millions of biometric identifiers" that included voiceprints, retina or iris scans, and hand and face geometry. Although Meta argued it does not use facial recognition technology, it has disabled its AR filters and avatars on Facebook and Instagram amid the litigation. The AR effects featured on Facebook, Messenger, Messenger Kids, and Portal will also be shut down for Texas users.


Tinder's parent company is suing Google over in-app billing

Mashable

Online dating juggernaut Match Group is suing Google, alleging that its Android apps are forced to use the tech giant's in-app payment system, allowing Google to extract fees from those transactions. Match Group owns numerous popular dating apps and websites, including Hinge, OkCupid, Tinder, and PlentyOfFish. The issue comes down to Google's outsized influence and control over Android app distribution, as well as its requirements for allowing apps on the Google Play Store. According to Match Group's federal court filing, over 90 percent of Android app downloads are handled through the Google Play Store. Thus, if developers want to reach enough users for their Android app to be sustainable, there is practically no way around putting it on Google's app store.