Artificial Intelligence: Its impact in the near future

#artificialintelligence

Academic and technology experts have contributed to Stanford University's long-term study of artificial intelligence, entitled "One Hundred Year Study on Artificial Intelligence." The inquiry focuses not only on the technology's advancement but also on the ethical challenges that come with it. "Artificial Intelligence and Life in 2030," a research report of roughly 28,000 words, examines AI's impact on employment, healthcare, security, entertainment, education, service robots, transportation and poor communities, and it also considers how smart technologies will affect urban life. With the release of the AI100 report, researchers and scientists hope that by thinking ahead and discussing what AI might actually bring, society can prepare to address both the coming benefits and challenges.


Scientists create a computer tool that scans for tell-tale signs of lying

Daily Mail - Science & tech

Scientists have developed a computer tool that can spot if somebody has filed a fake police statement - based purely on text included in the document. The tool has been rolled out across Spain to support police officers and indicate where further investigations are necessary. And, so far, it has been able to successfully identify false robbery reports with over 80 per cent accuracy. Known as VeriPol, the tool is specific to reports of robbery and can recognise patterns that are more common with false claims, such as the types of items reported stolen, finer details of incidents and descriptions of a perpetrator. The research team, which included computer science experts from Cardiff University and Charles III University of Madrid, believe the tool could save the police time and effort by complementing traditional investigative techniques, whilst also deterring people from filing fake statements in the first place.
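The article does not describe VeriPol's internals, but a text classifier of this kind is commonly built by extracting word-level features from the report text and training a supervised model on reports already known to be true or false. The sketch below is only an illustration under that assumption; the feature scheme (TF-IDF over word n-grams), the model (logistic regression) and the function name train_false_report_detector are hypothetical stand-ins, not VeriPol's actual design.

```python
# Illustrative sketch only; VeriPol's real implementation is not described in the article.
# Assumes a labelled dataset of robbery reports: texts plus a flag for reports later shown false.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

def train_false_report_detector(reports, labels):
    """Train a simple text classifier on robbery reports.

    reports: list of report texts (str)
    labels:  1 if the report was later shown to be false, 0 otherwise
    """
    X_train, X_test, y_train, y_test = train_test_split(
        reports, labels, test_size=0.2, random_state=0, stratify=labels)

    # Word n-grams capture wording patterns (items reported stolen, level of
    # incident detail, perpetrator descriptions) that may differ between
    # genuine and false reports.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)

    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model
```

One reason a linear model like this is a plausible choice: its learned weights are inspectable, so officers could in principle see which phrasings push a report toward the "false" label rather than trusting an opaque score.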


How Can We Eliminate Bias In Our Algorithms?

#artificialintelligence

It's almost comical how surprised we are at the pitfalls of artificial intelligence (AI). After all, we've been making movies for decades warning against the dangerous potential of sentient machines. And yet, the minute Facebook perpetuates foreign interference in a superpower's election or a Twitter bot becomes a marijuana-loving Nazi, we're shocked. And the reality is that our biases (political, racial and gendered) show in the data that we feed to our AI algorithms. As the COO of an AI-powered company that serves clients who also develop AI-powered products, I've come across the potential pitfalls of biased algorithms numerous times.


Researchers develop AI tool to evade Internet censorship

#artificialintelligence

Internet censorship is, at bottom, an effective strategy that authoritarian governments use to limit access to online information, control freedom of expression, and head off rebellion and discord. According to the 2019 Freedom House report, China and India are at the forefront of Internet censorship and are declared the worst abusers of digital freedom, while Internet freedom has also declined considerably in recent years in the US, Brazil, Sudan, and Kazakhstan. When a country curbs Internet freedom, activists need ways to evade the censorship. They may no longer have to find those workarounds manually now that "Geneva" is here.


Artificial Intelligence Could Aid Future Background Investigators

#artificialintelligence

Washington, DC – In the future, artificial intelligence could augment the background investigative work performed by humans, cutting the time it takes and providing a more in-depth and realistic profile of the individual, the technical director for research and development and technology transfer at the Defense Security Service's National Background Investigative Services said recently. Mark Nehmer spoke at the "Genius Machines: The New Age of Artificial Intelligence" event, hosted by Nextgov and Defense One in Arlington, Virginia. Millions of service members, federal employees and contractors receive background checks and are issued clearances on a periodic basis. There are several problems with the current system of background investigations, Nehmer said. The use of artificial intelligence, or AI, could significantly reduce the time investigations take, ease the strain on already overworked personnel, and reduce the backlog of cases, he said.