Communications: AI-Alerts


Using everyday WiFi to help robots see and navigate better indoors

ScienceDaily > Robotics Research

The technology consists of sensors that use WiFi signals to help the robot map where it's going. Most systems rely on optical sensors such as cameras and LiDARs. In this case, the so-called "WiFi sensors" use radio frequency signals rather than light or visual cues to see, so they can work in conditions where cameras and LiDARs struggle -- in low light, changing light, and repetitive environments such as long corridors and warehouses. And by using WiFi, the technology could offer an economical alternative to expensive and power-hungry LiDARs, the researchers noted. A team of researchers from the Wireless Communication Sensing and Networking Group, led by UC San Diego electrical and computer engineering professor Dinesh Bharadia, will present their work at the 2022 International Conference on Robotics and Automation (ICRA), which will take place from May 23 to 27 in Philadelphia.


Why Some Instagram And Facebook Filters Can't Be Used In Texas After Lawsuit

International Business Times

Instagram and Facebook users in Texas lost access to certain augmented reality filters Wednesday, following a lawsuit accusing parent company Meta of violating privacy laws. In February, Texas Attorney General Ken Paxton revealed he would sue Meta for using facial recognition in filters to collect data for commercial purposes without consent. Paxton claimed Meta was "storing millions of biometric identifiers" that included voiceprints, retina or iris scans, and hand and face geometry. Although Meta argued it does not use facial recognition technology, it has disabled its AR filters and avatars on Facebook and Instagram amid the litigation. The AR effects featured on Facebook, Messenger, Messenger Kids, and Portal will also be shut down for Texas users.


Autonomous Vehicle with 2D Lidar

#artificialintelligence

Lidar is an acronym for light detection and ranging. Lidar is like radar, except that it uses light instead of radio waves. The light source is a laser. A lidar sends out light pulses and measures the time it takes for a reflection bouncing off a remote object to return to the device. As the speed of light is a known constant, the distance to the object can be calculated from the travel time of the light pulse (Figure 1).
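The time-of-flight calculation described above is a one-line formula: the pulse travels to the object and back, so the one-way distance is half the round-trip path length. A minimal sketch (function name and example timing are illustrative, not from the article):

```python
# Speed of light in meters per second (a known constant).
SPEED_OF_LIGHT = 299_792_458.0

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to an object from a lidar pulse's round-trip travel time.

    The pulse travels out to the object and reflects back, so the
    one-way distance is half the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A reflection returning after 200 nanoseconds corresponds to ~30 m.
print(lidar_distance(200e-9))  # → 29.9792458
```

This is why lidar timing electronics must be extremely precise: at the speed of light, one meter of range corresponds to only about 6.7 nanoseconds of round-trip time.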


Facial Recognition - Can It Evolve From A "Source of Bias" to A "Tool Against Bias"?

#artificialintelligence

Original article by Azfar Adib, who is currently pursuing his PhD in Electrical and Computer Engineering at Concordia University in Montreal. He is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). A recent announcement by Meta about terminating the face recognition system in Facebook sparked worldwide attention. It comes as a new reality for many Facebook users, who have been habituated for years to the automatic people-recognition feature in Facebook photos and videos. Since the dawn of mankind, the face has remained the most common identifier for humans.



This huge Chinese company is selling video surveillance systems to Iran

MIT Technology Review

A Chinese company is selling its surveillance technology to Iran's Revolutionary Guard, police, and military, according to a new report by IPVM, a surveillance research group. The firm, called Tiandy, is one of the world's largest video surveillance companies, reporting almost $700 million in sales in 2020. The company sells cameras and accompanying AI-enabled software, including facial recognition technology, software that it claims can detect someone's race, and "smart" interrogation tables for use alongside "tiger chairs," which have been widely documented as a tool for torture. The report is a rare look into some specifics of China's strategic relationship with Iran and the ways in which the country disperses surveillance technology to other autocracies abroad. Tiandy's "ethnicity tracking" tool, which has been widely challenged by experts as both inaccurate and unethical, is believed to be one of several AI-based systems the Chinese government uses to repress the Uyghur minority group in the country's Xinjiang province, along with Huawei's face recognition software, emotion-detection AI technologies, and a host of others.


The 'Invisible', Often Unhappy Workforce That's Deciding the Future of AI

#artificialintelligence

Two new reports, including a paper led by Google Research, express concern that the current trend to rely on a cheap and often disempowered pool of random global gig workers to create ground truth for machine learning systems could have major downstream implications for AI. Among a range of conclusions, the Google study finds that the crowdworkers' own biases are likely to become embedded into the AI systems whose ground truths will be based on their responses; that widespread unfair work practices (including in the US) on crowdworking platforms are likely to degrade the quality of responses; and that the'consensus' system (effectively a'mini-election' for some piece of ground truth that will influence downstream AI systems) which currently resolves disputes can actually throw away the best and/or most informed responses. That's the bad news; the worse news is that pretty much all the remedies are expensive, time-consuming, or both. The first paper, from five Google researchers, is called Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation; the second, from two researchers at Syracuse University in New York, is called The Origin and Value of Disagreement Among Data Labelers: A Case Study of Individual Differences in Hate Speech Annotation.
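The 'consensus' mechanism the reports criticize is, in its simplest form, a plain majority vote over annotator labels. A minimal sketch of such a scheme (function name and example labels are hypothetical, not from either paper) makes the failure mode concrete: the dissenting response is discarded no matter how well informed the dissenter was.

```python
from collections import Counter

def majority_vote(labels):
    """Resolve annotator disagreement by plain majority vote.

    The most common label wins outright; minority responses are
    thrown away, even when they come from the best-informed
    annotators -- the failure mode the reports describe.
    """
    return Counter(labels).most_common(1)[0][0]

# Three crowdworkers label the same item; the lone dissent is discarded.
print(majority_vote(["not_hate", "not_hate", "hate"]))  # → not_hate
```

Alternatives the research literature explores include weighting votes by annotator reliability or preserving the full label distribution instead of collapsing it to a single winner, though each adds cost and complexity.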


AI Weekly: AI researchers release toolkit to promote AI that helps to achieve sustainability goals

#artificialintelligence

While discussions about AI often center around the technology's commercial potential, increasingly, researchers are investigating ways that AI can be harnessed to drive societal change. Among others, Facebook chief AI scientist Yann LeCun and Google Brain cofounder Andrew Ng have argued that mitigating climate change and promoting energy efficiency are preeminent challenges for AI researchers. Along this vein, researchers at the Montreal AI Ethics Institute have proposed a framework designed to quantify the social impact of AI through techniques like compute-efficient machine learning. An IBM project delivers farm cultivation recommendations from digital farm "twins" that simulate the future soil conditions of real-world crops. Other researchers are using AI-generated images to help visualize climate change, and nonprofits like WattTime are working to reduce households' carbon footprint by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available.


Clearview AI is closer to getting a US patent for its facial recognition technology

#artificialintelligence

Clearview AI is on track to receive a US patent for its facial recognition technology, according to a report from Politico. The company was reportedly sent a "notice of allowance" by the US Patent and Trademark Office, which means that once it pays the required administration fees, its patent will be officially approved. Clearview AI builds its facial recognition database using images of people that it scrapes across social media (and the internet in general), a practice that has the company steeped in controversy. The company's patent application details its use of a "web crawler" to acquire images, even noting that "online photos associated with a person's account may help to create additional records of facial recognition data points," which its machine learning algorithm can then use to find and identify matches. Critics argue that Clearview AI's facial recognition technology is a violation of privacy and that it may negatively impact minority communities.


When Curation Becomes Creation

Communications of the ACM

Liu Leqi is a Ph.D. student in the Machine Learning Department at Carnegie Mellon University, Pittsburgh, PA, USA. Her research interests include AI and human-centered problems in machine learning. Dylan Hadfield-Menell is an assistant professor of artificial intelligence and decision-making at the Massachusetts Institute of Technology, Cambridge, MA, USA. His recent work focuses on the risks of (over-) optimizing proxy metrics in AI systems. Zachary C. Lipton is the BP Junior Chair Assistant Professor of Operations Research and Machine Learning at Carnegie Mellon University, Pittsburgh, PA, USA, and a Visiting Scientist at Amazon AI. He directs the Approximately Correct Machine Intelligence (ACMI) lab, whose research spans core machine learning methods, applications to clinical medicine and NLP, and the impact of automation on social systems. He can be found on Twitter (@zacharylipton), GitHub (@zackchase), or his lab's website (acmilab.org).