If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Eyes are important, don't get me wrong. So are ears, noses, tongues, fingers, balance calibration organs and everything else that feeds that massive brain of yours.1 Salinity detectors in narwhals, electrical sensors in freshwater bottom feeders, and echolocation in bats all provide sensory input that humans couldn't adequately process. Every beast has its own senses relevant to its own living conditions. Even your smartphone has cameras, microphones, gyroscopes, an accelerometer, a magnetometer, and interfaces for phone/GPS/Bluetooth/WiFi, and some have a barometer, proximity sensors, and ambient light sensors. Biometric sensing equipment in today's phones can include optical, capacitive, or ultrasonic fingerprint readers and an infrared depth-mapping sensor for faces.
With more companies moving their business models online and adopting new digital solutions, opportunities for cybercrime and identity theft have multiplied. The spread of Covid-19 has only made it worse: the Federal Trade Commission (FTC) estimated losses of $13.4 million to Covid-19 scams as of April 15, 2020. This makes the protection of digital identity more important than ever. It is also a new era of identity authentication: with artificial intelligence (AI) and biometrics, users today enjoy more streamlined processes while reaping the benefits of added security.
NEW YORK - Viral videos show -- for lack of a better term -- a robot dog joining the human members of the NYPD's response to a domestic dispute inside a NYCHA public housing building in Kips Bay, Manhattan, on Monday. "I can't believe what I'm seeing," 344 E. 28th St. Tenant Association President Melanie Aucello said. Aucello shot one of those viral videos on her smartphone and compared the scene she witnessed to something out of a dystopian movie. "It scared me," she said.
Williams's wrongful arrest, first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department's facial recognition system. Two more instances of false arrest have since been made public. Both of those men are also Black, and both have taken legal action to try to rectify the situation. Now Williams is following in their path and going further--not only suing the Detroit Police for his wrongful arrest, but trying to get the technology banned. On Tuesday, the ACLU and the University of Michigan Law School's Civil Rights Litigation Initiative filed a lawsuit on Williams's behalf, alleging that his arrest violated his Fourth Amendment rights and defied Michigan's civil rights law.
Let's say, just hypothetically, that a surveillance robot styled after a dog was giving you a hard time. In this situation, you'd want to shut the thing down, and quickly. Thankfully, when it comes to Boston Dynamics' Spot robot, there are several ways to do just that. The robots, marketed for industrial use and deployed for viral hijinks, evoke a robot dystopia in the public imagination -- a fact compounded by an April viral video of the NYPD trotting out its very own customized Spot. The first reported instance of police using Spot was in November 2019, when the Massachusetts State Police leased at least one of the robots for a three-month trial period.
When Jerrel Gantt was released from prison after three years, he was handed a pamphlet about healthcare and nothing else. He began searching for employment, a deep source of anxiety for him, and secured housing through a ministry in New York City. He later enrolled in school part-time. As he settled into life outside of prison and developed a support system, Gantt began going on dates with people he met on apps like Tinder. Dating has not been without challenges -- for Gantt, revealing that he is formerly incarcerated usually comes up early in the process.
As cases of violence against women and girls have surged in South Asia in recent years, authorities have introduced harsher penalties and expanded surveillance networks, including facial recognition systems, to prevent such crimes. Police in the north Indian city of Lucknow said earlier this year that they would install cameras with emotion recognition technology to spot women being harassed, while in Pakistan, police launched a mobile safety app after a gang rape. But the use of these technologies, with no evidence that they help reduce crime and no data protection laws in place, has raised alarm among privacy experts and women's rights activists, who say the increased surveillance can hurt women even more. "The police does not even know if this technology works," said Roop Rekha Verma, a women's rights activist in Lucknow in Uttar Pradesh state, which had the highest number of reported crimes against women in India in 2019. "Our experience with the police does not give us the confidence that they will use the technology in an effective and empathetic manner. If it is not deployed properly, it can lead to even more harassment, including from the police," she said.
Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their bosses. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system. A database of searches, outlining which agencies were able to access the platform and how many queries were made, was leaked to BuzzFeed News by an anonymous source. It has published a version of the database online, enabling you to examine how many times each department has used the tool.
Abhijit Shanbhag is CEO of Graymatics, a cognitive media processing company providing AI-powered solutions for multiple sectors, including security and surveillance, digital marketing, telecommunications, and IoT. He believes that many more cities can and should improve their safety and efficiency by engaging in Smart Cities projects, and here he shares how the use of AI in the Smart Cities scheme can help to create a safer society. How can Smart Cities detect and track criminal activity? Smart Cities largely leverage smart CCTVs to keep the city safe, employing powerful AI-based solutions such as Graymatics' applied to the CCTV feeds to automatically detect various kinds of suspicious and/or criminal activities anywhere in the city. Any instance of criminal activity such as assault, brandishing of a weapon, fire, vandalism, and/or suspicious behaviour will immediately be detected by the AI platform linked to the CCTVs, at which point an alert is created and assigned to law enforcement officials.
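The detect-and-alert pipeline described above can be sketched in a few lines. Everything here is hypothetical: the label set, the threshold, and the `scan_feed` function are illustrative stand-ins rather than Graymatics' actual system, and the per-frame (label, confidence) pairs stand in for the output of a real video classifier running on a CCTV feed.

```python
from dataclasses import dataclass

# Hypothetical label set; a production system would be trained on many more classes.
SUSPICIOUS_LABELS = {"assault", "weapon", "fire", "vandalism"}

@dataclass
class Alert:
    camera_id: str
    frame_index: int
    label: str
    confidence: float

def scan_feed(camera_id, detections, threshold=0.8):
    """Turn per-frame classifier output into alerts for dispatch.

    `detections` is a list of (label, confidence) pairs, one per frame,
    standing in for what a video-analytics model would emit for each frame.
    """
    alerts = []
    for i, (label, conf) in enumerate(detections):
        if label in SUSPICIOUS_LABELS and conf >= threshold:
            alerts.append(Alert(camera_id, i, label, conf))
    return alerts

# A toy feed: only the high-confidence "weapon" frame should raise an alert;
# the low-confidence "fire" frame falls below the threshold.
feed = [("pedestrian", 0.97), ("weapon", 0.91), ("pedestrian", 0.88), ("fire", 0.55)]
alerts = scan_feed("cam-042", feed)
```

In a real deployment, each `Alert` record would then be routed to the appropriate law enforcement officials, which is the "assigned accordingly" step the article describes.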
In 2012, in Santa Cruz in the United States, a company called PredPol Inc devised software that promised to predict future criminal activity by analysing past criminal records and identifying patterns. This simple idea of "predictively policing" an unsuspecting population aimed to change the face of law and order in the US. Police departments in major US cities began to use such predictive technology in their efforts to curb crime. In India too, such artificial intelligence tools are increasingly being put to use. For instance, during his annual press briefing in February, the Delhi police commissioner said that 231 of the 1,818 people arrested for their alleged role in the 2020 Delhi riots had been identified using technological tools.
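At its crudest, "identifying patterns in past criminal records" can mean ranking locations by historical incident counts. The sketch below is a deliberately minimal frequency model, not PredPol's actual algorithm (which relies on more elaborate statistical modelling); the grid cells and the incident log are made up for illustration.

```python
from collections import Counter

def predict_hotspots(past_incidents, top_k=2):
    """Rank map grid cells by historical incident count.

    `past_incidents` is a list of (grid_cell, offence) pairs standing in
    for a historical crime log; the cells with the most past incidents
    are returned as the predicted "hotspots" for future patrols.
    """
    counts = Counter(cell for cell, _ in past_incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

# A toy crime log: cell (3, 1) has three incidents, (0, 4) has two, (7, 2) has one.
log = [
    ((3, 1), "burglary"), ((3, 1), "theft"), ((3, 1), "theft"),
    ((0, 4), "assault"), ((0, 4), "burglary"),
    ((7, 2), "theft"),
]
hotspots = predict_hotspots(log)  # [(3, 1), (0, 4)]
```

The sketch also makes the central criticism of predictive policing concrete: the model only ever reproduces patterns in past records, so biased historical data yields biased "predictions".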