As cases of violence against women and girls have surged in South Asia in recent years, authorities have introduced harsher penalties and expanded surveillance networks, including facial recognition systems, to prevent such crimes. Police in the north Indian city of Lucknow said earlier this year that they would install cameras with emotion recognition technology to spot women being harassed, while in Pakistan, police launched a mobile safety app after a gang rape.

But the use of these technologies, despite a lack of evidence that they help reduce crime and in the absence of data protection laws, has raised alarm among privacy experts and women's rights activists, who say the increased surveillance can hurt women even more.

"The police does not even know if this technology works," said Roop Rekha Verma, a women's rights activist in Lucknow in Uttar Pradesh state, which had the highest number of reported crimes against women in India in 2019. "Our experience with the police does not give us the confidence that they will use the technology in an effective and empathetic manner. If it is not deployed properly, it can lead to even more harassment, including from the police," she said.
The potential positive economic effects of artificial intelligence (AI) have been well documented, with several high-profile studies highlighting its impact on areas such as workforce productivity and wealth creation. At the same time, the widespread adoption of AI technologies has drawn increased scrutiny and a sharper focus on AI's potentially harmful implications. Listed below are the key regulatory trends impacting the AI theme, as identified by GlobalData. In 2020, both the US and Europe took steps to regulate AI, but there are notable differences in approach: Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of overregulation.
As research involving transplanting lab-grown human 'mini-brains' into animals to study neurological diseases continues to expand, experts warn that work with these brain organoids could result in a 'Planet of the Apes' scenario. The concern is that animals could develop humanized traits and start to behave similarly to the intelligent apes of the popular science fiction story. The warning comes from a team at Kyoto University, who released a paper highlighting a number of ethical implications that could arise with brain organoid research. Although many see brain organoids as a way to quickly develop disease treatments, others fear that because they are designed to mimic the real thing, they too may attain some form of consciousness.
The system uses robots to conduct polymerase chain reaction, or PCR, tests, significantly reducing infection risks for technicians. "The system will reduce the burden on medical workers, who are becoming exhausted from measures aimed at preventing infections," especially as Japan braces for a fourth wave of COVID-19 cases, Hiroyasu Ito, a professor at the university, said. The system, developed by Kawasaki Heavy Industries Ltd., is housed in a container 2.5 meters wide and 12.2 meters long, and has 13 robotic arms. It conducts all the steps required to test samples for coronavirus infections without human intervention. The university is aiming to make it possible for the system to produce test results in just 80 minutes.
KDCA official Na Seong-woong (left) and SKT AI service head Lee Hyun-ah hold an MOU certificate at the KDCA headquarters in Cheongju, North Chungcheong Province. SK Telecom, South Korea's top telecom company, said Thursday it is using its artificial intelligence technology to help the country's health authority monitor recipients after COVID-19 vaccination. Dubbed NUGU Vaccine Carecall, the service, built on SKT's NUGU AI platform, will provide guidance by phone to people scheduled for vaccination and monitor them for any abnormal signs after shots are administered. The telecom company signed a Memorandum of Understanding with the Korea Disease Control and Prevention Agency on Thursday. Under the MOU, medical institutions will register their lists of recipients on the NUGU Vaccine Carecall website.
It's whip fast, obeys commands and doesn't leave unpleasant surprises on the floor – meet the AlphaDog, a robotic response to two of China's burgeoning loves: pets and technology. The high-tech hound uses sensors and artificial intelligence (AI) technology to 'hear' and 'see' its environment – and can even be taken for walks. "It's really very similar to a real dog," says Ma Jie, chief technology officer at Weilan, the company behind the product. The Nanjing-based creators say their robot dog – which moves at a speed of almost 15 kilometres (nine miles) per hour and spins on the spot like an excited puppy – is the fastest on the market. With four metal legs it is more stable than a real dog, Ma explains as one of his team swiftly kicks it to prove the point.
The world celebrated Women's History Month in March, and it is a timely moment for us to look at the forces that will shape gender parity in the future. Even as the pandemic accelerates digitization and the future of work, artificial intelligence (AI) stands out as a potentially helpful--or hurtful--tool in the equity agenda. McKinsey recorded a podcast in collaboration with Citi that dives into how gender bias is reflected in AI, why we must consciously debias our machine-human interfaces, and how AI can be a positive force for gender parity.

Ioana Niculcea: Before we start the conversation, I think it's important for us to spend a moment assessing the amount of change that has taken place with regard to AI, and how the pace of that change has accelerated over the past few years. Many people argue that in light of the current COVID-19 circumstances, we'll feel further acceleration as people move toward digitization. I spent the past eight years in financial services, and it all started with data. Datafication of the industry was sort of the point of origin. We often hear that over 90 percent of the data that exists today was created over the past two years. You hear things like every minute there are over one million Facebook logins, 4.5 million YouTube videos being streamed, or 17,000 different Uber rides. There's a lot of data, and it is said that only about 1 percent of it is being analyzed today.
Newegg users can now give their name as "Mohammad" when leaving reviews, because apparently they couldn't do that before. The online tech retailer is revising its language filter after it was called out for banning one of the most popular names in the world -- for 15 years.

The issue was brought to light by Mohammad Al-Tayyar, a government worker in Kuwait, who discovered it after attempting to review one of the products on Newegg's website. "I was writing a review @Newegg and the system marked my name (Mohammad) as: "UNACCEPTABLE WORDS USED -- offensive language," Al-Tayyar tweeted on Wednesday, sharing a screenshot of the error message. "Is my name offensive @Newegg?"

Other users were quickly able to duplicate this, indicating that Al-Tayyar's experience wasn't just an unfortunate bug. "Just verified this - I guess @Newegg wants your reviews unless you have the most common first name on Earth," tweeted game developer Rami Ismail.

Speaking to Mashable via DM, Al-Tayyar said he'd been trying to review a laptop and NAS storage he'd purchased for his 6-year-old daughter, who was using them for remote learning. He was shocked to see Newegg flag his name as potentially offensive "in a big red alert all in caps."

For Al-Tayyar, the alert was yet another example of the damaging, pervasive nature of Islamophobia. Fear and hatred of Arab and Muslim people has caused even the most innocent elements of their culture to be regarded with suspicion, inflicting undeniable harm on these communities. "Every time I see a movie in the media or the video games...[a]ll the Arab/Muslims [are] displayed as the bad, evil, stupid thieves," said Al-Tayyar, noting that Arab people are often negatively depicted as "in the desert with the camels." "Now the system [is] telling me I have to change my name?"
Al-Tayyar told Mashable he emailed Newegg about this issue, but has not yet received a response. However, Newegg did quickly respond on Twitter, apologising and stating that "Mohammad" has now been removed from its list of prohibited words. According to Newegg, the name had been on its banned list since it was first added in 2006. The company stated it had banned religious terms that were being misused, including "Jesus" and "God." "Words were added when used inappropriately on our site, so likely there was an incident back then that led to this," wrote Newegg's official Twitter account. "Regardless, we feel this is wrong and are updating the list as we speak."
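The failure mode in the Newegg story is the classic naive-blocklist problem: a filter that matches review text against a fixed list of banned terms, with no context awareness, will flag legitimate personal names that happen to appear on the list. The sketch below is purely illustrative -- the blocklist contents, function names, and the allowlist fix are assumptions for the example, not Newegg's actual implementation.

```python
# Illustrative sketch of a naive review-text filter and one possible fix.
# BLOCKLIST and both functions are hypothetical, not Newegg's real code.

BLOCKLIST = {"mohammad", "jesus", "god"}  # terms the story says were once banned


def naive_filter(text: str) -> list:
    """Flag every blocklisted token, with no awareness of context."""
    tokens = (tok.strip(".,!?-") for tok in text.lower().split())
    return [tok for tok in tokens if tok in BLOCKLIST]


def revised_filter(text: str, allowlist=frozenset({"mohammad"})) -> list:
    """Same check, but common personal names are explicitly allowed through."""
    return [tok for tok in naive_filter(text) if tok not in allowlist]


# A reviewer's own signature trips the naive filter:
print(naive_filter("Great laptop. - Mohammad"))    # flags ['mohammad']
print(revised_filter("Great laptop. - Mohammad"))  # flags nothing
```

The broader lesson is that exact-match blocklists over-trigger on names and ordinary words (the "Scunthorpe problem"), which is why context-aware moderation or explicit allowlists are usually layered on top.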