Twitter Data-Breach Case Won't Be Resolved Before Year's End, Ireland's Regulator Says

WSJ.com: WSJD - Technology

Helen Dixon, head of Ireland's Data Protection Commission, in May submitted a draft decision to more than two dozen of the bloc's privacy regulators for review, as required under the law. Eleven regulators objected to the proposed ruling, triggering a lengthy dispute-resolution process, she said. The contents of the draft decision haven't been disclosed. Twitter's European operations are based in Dublin. "It's a long process," Ms. Dixon said at The Wall Street Journal's virtual CIO Network conference.


Podcast: How Russia's everything company works with the Kremlin

MIT Technology Review

Russia's biggest technology company enjoys a level of dominance unmatched by any single Western counterpart. Think Google mixed with equal parts Amazon, Spotify and Uber and you're getting close to the sprawling empire that is Yandex--a single mega-corporation with its hands in everything from search to e-commerce to driverless cars. But being the crown jewel of Russia's Silicon Valley has its drawbacks. The country's government sees the internet as contested territory amid ever-present tensions with the US and other Western interests. As such, it wants influence over how Yandex uses its massive trove of data on Russian citizens. Foreign investors, meanwhile, are more interested in how that data can be turned into growth and profit. For the September/October issue of MIT Technology Review, Moscow-based journalist Evan Gershkovich explains how Yandex's ability to walk a tightrope between the Kremlin and Wall Street could serve as a kind of template for Big Tech.


Twitter probes alleged racial bias in image cropping feature

The Japan Times

New York – Social media giant Twitter said Monday it would investigate its image-cropping function after users complained it favored white faces over Black ones. The image preview feature of Twitter's mobile app automatically crops pictures that are too big to fit on the screen, selecting which parts of the image to display and which to conceal. Prompted by a graduate student who noticed that an image he was posting cropped out the face of a Black colleague, a San Francisco-based programmer found Twitter's system would crop out images of President Barack Obama when posted together with images of Republican Senate Leader Mitch McConnell. "Twitter is just one example of racism manifesting in machine learning algorithms," the programmer, Tony Arcieri, wrote on Twitter. Twitter is one of the world's most popular social networks, with nearly 200 million daily users.


CIPR AI in PR ethics guide

#artificialintelligence

In May 2020 the Wall Street Journal reported that 64 per cent of all signups to extremist groups on Facebook were due to Facebook's own recommendation algorithms. There could hardly be a simpler case study in the question of AI and ethics, the intersection of what is technically possible and what is morally desirable. CIPR members who find an automated/AI system used by their organisation perpetrating such online harms have a professional responsibility to try to prevent it. For all PR professionals, this is a fundamental requirement of the ability to practise ethically. The question is: if you worked at Facebook, what would you do? If you're not sure, this guide will help you work out your answer. – Alastair McCapra, Chief Executive Officer, CIPR. Artificial intelligence is quickly becoming an essential technology for ...


Exclusive dating app for Tesla owners is not a joke (maybe)

Mashable

If you own a Tesla and want a partner to raise a little X Æ A-12 of your own one day, you're in luck: Tesla Dating is an up-and-coming dating site just for you. With the tagline "Because You Can't Spell LOVE Without EV [electric vehicle]," one might think that this is a prank -- and according to founder Ajitpal Grewal, it did start out as one. "To be honest the site was put up as a joke," he told Mashable over email, "but now that I'm seeing some traction I might consider building out the app to launch." There are already prototypes for the app's design. Grewal, a Canadian e-commerce entrepreneur according to the Wall Street Journal, thought of the app after hearing from "countless" friends and acquaintances about how much they love their Teslas. He said, "It seemed like once they became a customer, that's all they wanted to talk about. It became a big part of their identity."


The Impact Of Artificial Intelligence On Influencer Marketing

#artificialintelligence

In October 2017, Facebook altered the Instagram API to make it harder for users to search its giant database of photos. The change was a small element of the company's response to the Cambridge Analytica scandal, but it was a significant problem for parts of the digital marketing industry. Not long before, New York-based influencer marketing agency Amra & Elma had developed a platform that ingested data from Instagram and allowed its clients to use AI image classifiers to find very specific influencers. For instance, they could find an influencer with, say, between 10,000 and 50,000 followers who had posted photos of themselves in a Jeep. Facebook's move killed this capability at a stroke.
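The kind of query the article describes can be sketched as a filter over profiles that an image classifier has already tagged. This is a minimal, hypothetical illustration, not Amra & Elma's actual system; the field names ("followers", "image_tags") and data shape are assumptions.

```python
# Hypothetical sketch: filtering influencer profiles by follower count and
# by tags an image classifier has attached to their posts.

def find_influencers(profiles, tag, min_followers, max_followers):
    """Return profiles whose posts carry `tag` and whose follower
    count falls within the given range (inclusive)."""
    return [
        p for p in profiles
        if min_followers <= p["followers"] <= max_followers
        and tag in p["image_tags"]
    ]

# Toy data standing in for classifier output over Instagram posts.
profiles = [
    {"handle": "@a", "followers": 25_000, "image_tags": {"jeep", "beach"}},
    {"handle": "@b", "followers": 80_000, "image_tags": {"jeep"}},
    {"handle": "@c", "followers": 12_000, "image_tags": {"coffee"}},
]

matches = find_influencers(profiles, "jeep", 10_000, 50_000)
print([p["handle"] for p in matches])  # ['@a']
```

The API change removed the ingestion step feeding such a filter, which is why the capability disappeared overnight.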


Facebook knew its algorithm made people turn against each other but stopped research

The Independent - Tech

Facebook executives decided to end research into making the social media site less polarising, for fear that changes would unfairly target right-wing users, according to new reports. The company also knew that its recommendation algorithm exacerbated divisiveness, leaked internal research from 2016 appears to indicate. Building features to combat that would require the company to sacrifice engagement – and by extension, profit – according to a later document from 2018, which described the proposals as "antigrowth" and requiring "a moral stance." "Our algorithms exploit the human brain's attraction to divisiveness," a 2018 presentation warned, adding that if action was not taken Facebook would provide users "more and more divisive content in an effort to gain user attention & increase time on the platform." According to a report from the Wall Street Journal, in 2017 and 2018 Facebook conducted research through newly created "Integrity Teams" to tackle extremist content and a cross-jurisdictional task force dubbed "Common Ground."


Researchers use machine learning to unearth underground Instagram "pods"

#artificialintelligence

BROOKLYN, New York, Monday, April 27, 2020 – Likes, shares, followers, and comments are the currency of online social networks. Posts with high levels of engagement are prioritized by content curation algorithms, allowing social network "influencers" to monetize the size and loyalty of their audience. Yet not all engagement is organic, according to a team of researchers at New York University Tandon School of Engineering and Drexel University, who have published the first analysis of a robust underground ecosystem of "pods." These groups of users manipulate curation algorithms and artificially boost content popularity -- whether to increase the reach of promoted content or amplify rhetoric -- through a tactic known as "reciprocity abuse," whereby each member reciprocally interacts with content posted by other members of the group. The researchers also developed a machine learning tool to detect posts with a high likelihood of having gained popularity through pod engagement.
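The "reciprocity abuse" signal described above can be illustrated with a simple metric over a directed engagement graph: pod members engage with one another's posts mutually, while organic engagement is largely one-way. This is a hedged toy sketch of the idea, not the researchers' actual detection tool; the edge format and threshold intuition are assumptions.

```python
# Toy illustration of reciprocity in an engagement graph.
# An edge (u, v) means user u engaged (liked/commented) with a post by v.

def reciprocity(edges):
    """Fraction of directed engagement edges that are reciprocated
    by an edge in the opposite direction."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

# A three-member pod engaging reciprocally vs. ordinary one-way engagement.
pod = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"), ("a", "c"), ("c", "a")]
organic = [("x", "y"), ("x", "z"), ("w", "y")]

print(reciprocity(pod))      # 1.0
print(reciprocity(organic))  # 0.0
```

A detector built on this intuition would flag groups whose pairwise reciprocity sits far above the organic baseline; the published work uses machine learning over richer features rather than this single ratio.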


Asian Scientist Magazine posted on LinkedIn

#artificialintelligence

Dr. Khanna believes that the purpose of #AI is to amplify the human potential. After a 10-year stint on Wall Street developing large-scale trading, risk management and data analytics systems, Dr. Khanna pursued her PhD in Information Systems and Innovation at the London School of Economics and Political Science (LSE). Since then, she has become one of Asia's leading #FemaleEntrepreneurs and #fintechexperts. Through ADDO AI, Dr. Khanna has been a strategic advisor on AI, smart cities and fintech to leading corporations and governments. Dr. Khanna also serves on the Board of the Infocomm Media Development Authority (IMDA), helping to power its #SmartNation vision.