Online dating platforms are set to offer 'digital health passports' to UK singletons

Daily Mail - Science & tech

Online dating giants are set to offer digital health passports to millions of UK singletons to prove they are free of coronavirus. Manchester-based cyber firm VST Enterprises (VSTE) is pioneering technology which it says can be used to safeguard daters when coronavirus restrictions are eased. The company says it has been approached about its digital health passports by several leading dating app companies. Tinder and Grindr are believed to be two of the dating apps waiting to launch them. The technology, called 'VCode', would enable a doctor or nurse to upload the results of a government-approved Covid-19 test to the digital health passport.

Satellites used to track food supplies in COVID-19 era


BANGKOK -- As the coronavirus pandemic leads to anxiety over the strength of the world's food supply chains, everyone from governments to banks is turning to the skies for help. Orbital Insight, a California-based big data company that uses satellites, drones, balloons and mobile phone geolocation data to track what's happening on Earth, has seen inquiries about monitoring food supplies double in the past two months, according to James Crawford, founder and chief executive officer of the company. "We're helping supply chain managers, financial institutions, and government agencies answer questions they never thought they would have to ask," Crawford said in a phone interview. The coronavirus outbreak has triggered a fresh surge in demand for alternative data to shed light on how the pandemic is affecting industries and trade across the globe. That is especially important as multiple government lockdowns and tighter restrictions on the movement of people and goods upend supply chains and logistics everywhere from Asia to Europe and the Americas.

AI fuels research that could lead to positive impact on health care


Brainstorm guest contributor Paul Fraumeni speaks with four York U researchers who are applying artificial intelligence to their research ventures in ways that, ultimately, could lead to profound and positive impacts on health care in this country. Meet four York University researchers: Lauren Sergio and Doug Crawford have academic backgrounds in physiology; Shayna Rosenbaum has a PhD in psychology; Joel Zylberberg has a doctorate in physics. They share two things in common: They focus on neuroscience – the study of the brain and its functions – and they leverage advanced computing technology using artificial intelligence (AI) in their research ventures, the application of which could have a profound and positive impact on health care. In a nondescript room in the Sherman Health Sciences Research Centre, Lauren Sergio sits down and places her right arm in a sleeve on an armrest. It's an odd-looking contraption; the lower part looks like a sling attached to a video game joystick.

Algorithms that run our lives are racist and sexist. Meet the women trying to fix them


Timnit Gebru was wary of being labelled an activist. As a young, black female computer scientist, Gebru – who was born and raised in Addis Ababa, Ethiopia, but now lives in the US – says she'd always been vocal about the lack of women and minorities in the datasets used to train algorithms. She calls them "the undersampled majority", quoting another rising star of the artificial intelligence (AI) world, Joy Buolamwini. But Gebru didn't want her advocacy to affect how she was perceived in her field. "I wanted to be known primarily as a tech researcher. I was very resistant to being pigeonholed as a black woman, doing black woman-y things."

AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues


The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field. The AI Now Institute is a research institute based at NYU, possessing the mission of studying AI's impact on society. AI Now releases a yearly report demonstrating their findings regarding the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year's report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.

Emotion-detecting tech 'must be restricted by law'


A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people's lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies, who cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

'People fix things. Tech doesn't fix things.' – TechCrunch


Veena Dubal is an unlikely star in the tech world. A scholar of labor practices regarding the taxi and ride-hailing industries and an Associate Professor at San Francisco's U.C. Hastings College of the Law, her work on the ethics of the gig economy has been covered by the New York Times, NBC News, New York Magazine, and other publications. She's been in public dialogue with Naomi Klein and other famous authors, and penned a prominent op-ed on facial recognition tech in San Francisco -- all while winning awards for her contributions to legal scholarship in her area of specialization, labor and employment law. At the annual symposium of the AI Now Institute, an interdisciplinary research center at New York University, Dubal was a featured speaker. The symposium is the largest annual public gathering of the NYU-affiliated research group that examines AI's social implications.

Group scours Pacific for sunken WWII battleships, lost war graves

FOX News

FILE - This June 4, 1942 file photo provided by the U.S. Navy shows the USS Yorktown listing heavily to port after being struck by Japanese bombers and torpedo planes in the Battle of Midway. Researchers scouring the world's oceans for sunken World War II ships are homing in on debris fields deep in the Pacific. (AP) MIDWAY ATOLL, Northwestern Hawaiian Islands (AP) -- Deep-sea explorers scouring the world's oceans for sunken World War II ships are focusing on debris fields deep in the Pacific, in an area where one of the most decisive battles of the time took place. Hundreds of miles off Midway Atoll, nearly halfway between the United States and Japan, a research vessel is launching underwater robots miles into the abyss to look for warships from the famed Battle of Midway. Weeks of grid searches around the Northwestern Hawaiian Islands have already led the crew of the Petrel to one sunken warship, the Japanese aircraft carrier Kaga.

How would a Latino be classified by an Artificial Intelligence system?


We all know that artificial intelligence (AI) and facial recognition are perfect tools to unlock your iPhone. The new technological systems are a novelty; what most of us mortals don't understand, however, is how the policies that categorize faces through AI and its algorithms are created and governed. Trevor Paglen and Kate Crawford, two artists who question the boundaries between science and ideology, created ImageNet Roulette, a database where the user can upload images and be tagged by an AI system to understand how this technology categorizes us. The results can be entertaining, or really prejudiced, sexist or even racist. ImageNet Roulette was created to understand how human beings are classified by machine learning systems.

600,000 Images Removed from AI Database After Art Project Exposes Racist Bias


ImageNet will remove 600,000 images of people stored on its database after an art project exposed racial bias in the program's artificial intelligence system. Created in 2009 by researchers at Princeton and Stanford, the online image database has been widely used by machine learning projects. The program has pulled more than 14 million images from across the web, which have been categorized by Amazon Mechanical Turk workers -- a crowdsourcing platform through which people can earn money performing small tasks for third parties. According to the results of an online project by AI researcher Kate Crawford and artist Trevor Paglen, prejudices in that labor pool appear to have biased the machine learning data. Training Humans -- an exhibition that opened last week at the Prada Foundation in Milan -- unveiled the duo's findings to the public, but part of their experiment also lives online at ImageNet Roulette, a website where users can upload their own photographs to see how the database might categorize them.