Academic and technology experts have contributed to Stanford University's report on artificial intelligence development, part of the "One Hundred Year Study on Artificial Intelligence" (AI100). The investigative inquiry focuses not only on the advancement of AI but also on the ethical challenges it raises. "Artificial Intelligence and Life in 2030," a research paper of roughly 28,000 words, tackles AI's impact on employment, healthcare, security, entertainment, education, service robots, transportation and poor communities, and also forecasts how smart technologies will affect urban life. With the release of the AI100 report, researchers and scientists hope that by thinking ahead and discussing what AI might actually bring, society can prepare for both the coming benefits and challenges.
A trio of Princeton social scientists recently conducted a mass experiment with 160 research teams to see if any of them could predict how children's lives would turn out. The participants were given fifteen years of data and were allowed to use any technique they wanted, from good old-fashioned statistical analysis to modern-day artificial intelligence. None of the teams produced accurate predictions. That's because artificial intelligence – much like psychics and headless chickens – cannot predict the future. Sure, it can spot trends and in some cases provide valuable insights that help industries make better decisions, but determining whether or not a child will become successful requires a level of prescience that brute-force mathematics can't provide. As the researchers put it: "We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study."
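The "common task method" mentioned above is simple in outline: every team trains on the same data and is scored against the same held-out outcomes, so results are directly comparable. The sketch below illustrates that scoring loop with synthetic data and two invented "teams" (a mean-only baseline and a least-squares line); it is not the Fragile Families data or any team's actual model.

```python
# Sketch of the common task method: every team fits a model on the same
# training split and is scored on the same held-out outcomes.
# The data are synthetic stand-ins, and the two "teams" (a mean baseline
# and a least-squares line) are invented for illustration.
import random

random.seed(0)

# Synthetic (feature, outcome) pairs: a linear signal plus noise.
data = [(x, 2.0 * x + random.gauss(0, 5)) for x in range(200)]
random.shuffle(data)
train, holdout = data[:150], data[150:]

def r_squared(predict, pairs):
    """Score a predictor the way such challenges do: R^2 on held-out data."""
    ys = [y for _, y in pairs]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - predict(x)) ** 2 for x, y in pairs)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# "Team A": a baseline that ignores features and predicts the training mean.
train_mean = sum(y for _, y in train) / len(train)
def baseline(_x):
    return train_mean

# "Team B": a simple least-squares line fit on the training split.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
def linear(x):
    return my + slope * (x - mx)

print(f"baseline R^2 on holdout: {r_squared(baseline, holdout):.3f}")
print(f"linear model R^2 on holdout: {r_squared(linear, holdout):.3f}")
```

On this easy synthetic problem the line handily beats the baseline; the study's striking finding was that on real life outcomes, even the best models barely did.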
It's almost comical how surprised we are at the pitfalls of artificial intelligence (AI). After all, we've been making movies for decades warning against the dangerous potential of sentient machines. And yet, the minute Facebook perpetuates foreign interference in a superpower's election or a Twitter bot becomes a marijuana-loving Nazi, we're shocked. The reality, though, is that the danger isn't sentience: our biases (political, racial and gendered) show up in the data we feed to our AI algorithms. As the COO of an AI-powered company that serves clients who also develop AI-powered products, I've encountered the pitfalls of biased algorithms numerous times.
Scientists have developed a computer tool that can spot if somebody has filed a fake police statement - based purely on text included in the document. The tool has been rolled out across Spain to support police officers and indicate where further investigations are necessary. And, so far, it has been able to successfully identify false robbery reports with over 80 per cent accuracy. Known as VeriPol, the tool is specific to reports of robbery and can recognise patterns that are more common with false claims, such as the types of items reported stolen, finer details of incidents and descriptions of a perpetrator. The research team, which included computer science experts from Cardiff University and Charles III University of Madrid, believe the tool could save the police time and effort by complementing traditional investigative techniques, whilst also deterring people from filing fake statements in the first place.
Internet censorship is, essentially, a strategy used by authoritarian governments to limit access to information online, control freedom of expression, and prevent rebellion and discord. According to the 2019 Freedom House report, India and China are at the forefront of Internet censorship and are declared the worst abusers of digital freedom, while Internet freedom in the US, Brazil, Sudan, and Kazakhstan has declined considerably in recent years. When a country curbs Internet freedom, activists need ways to evade the censorship. They may no longer need to search for those ways manually now that "Geneva" is here.