If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A new report from Data &amp; Society raises doubts about automated solutions to deceptively altered videos, including the machine-learning-generated forgeries known as deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation -- one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations. "The panic around deepfakes justifies quick technical solutions that don't address structural inequality," says Paris. "It's a massive project, but we need to find solutions that are social as well as political so people without power aren't left out of the equation."
Such is AI's meteoric rise that its adoption among businesses increased by 60% between 2017 and 2018. Despite the obvious potential, recent events have exposed how automated systems can both intentionally and unintentionally lead to bias. For example, accidental bias was identified in cases where algorithms manage digital ads for STEM roles. With this trend only expected to accelerate, it is critical that the risk of bias is recognised and addressed. While AI bias is creeping into the business world, a recent UNESCO report provided more concerning findings, revealing that voice-activated assistants with female voices, such as Amazon's Alexa, instil views of gender subservience.
In early 2017, Alex Borek of Volkswagen convinced me that "this time, machine learning is real" and that data quality was a real problem. So I dug in--researched what we knew, talked to a lot of people, and thought through the various ways that bad data could do harm. And it struck me--this is really scary. When I share that observation, I find that practically everyone agrees! The next steps were to sort out what to do and write a straightforward article.
Demand for models that are "closer to biology"

To carry out their interdisciplinary collaborative research, the scientists utilized data on locust behaviour from the Cluster of Excellence "Centre for the Advanced Study of Collective Behaviour" in Konstanz, which carries out internationally leading research on collective behaviour and has been funded through the German Excellence Strategy since the beginning of 2019. Biologists in particular are demanding that models explaining collective behaviour be designed to be "closer to biology." Most current models were devised by physicists who assume that interacting individuals are influenced by a physical force. As a result, they do not necessarily treat individuals within swarms as agents, but rather as points, much like interacting magnetization units on a grid. "The models work well in physics and have a good empirical basis there. However, they do not model the interaction between living individuals," says Thomas Müller.
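The point-particle picture the physicists favour can be made concrete with a minimal alignment model in the spirit of the well-known Vicsek model: each "individual" is just a point that adopts the average heading of its neighbours, plus noise. This is an illustrative sketch only, not the Konstanz group's actual model; the function name and parameter values are assumptions.

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, v=0.1, eta=0.2, rng=None):
    """One update of a minimal Vicsek-style alignment model.

    Each point particle adopts the circular mean heading of all
    neighbours within radius r (including itself), adds angular
    noise of strength eta, then moves at constant speed v in a
    periodic box of side L.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        # displacement vectors to all particles, with periodic wrap-around
        d = pos - pos[i]
        d -= L * np.round(d / L)
        neigh = np.hypot(d[:, 0], d[:, 1]) < r
        # circular mean of neighbour headings
        new_theta[i] = np.arctan2(np.sin(theta[neigh]).mean(),
                                  np.cos(theta[neigh]).mean())
    new_theta += eta * rng.uniform(-np.pi, np.pi, n)
    vel = v * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + vel) % L, new_theta
```

Note how the "interaction" here is a mathematical averaging rule applied to featureless points -- exactly the abstraction the biologists quoted above find too far removed from living, perceiving individuals.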
At ETH Zurich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D. While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: "Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy." As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter -- including the dark variety -- slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as "weak gravitational lensing," distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.
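The light-bending at the heart of gravitational lensing can be illustrated with general relativity's textbook point-mass result, a deflection angle of alpha = 4GM / (c^2 b) for a ray passing a mass M at impact parameter b. The sketch below simply reproduces the classic figure for light grazing the Sun; it is not the weak-lensing analysis described in the ETH paper.

```python
# Deflection angle of a light ray grazing a point mass:
#   alpha = 4 * G * M / (c**2 * b)    (general relativity)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = alpha_rad * 206265   # radians -> arcseconds

print(f"{alpha_arcsec:.2f} arcsec")  # ~1.75 arcsec, the value famously
                                     # confirmed during the 1919 eclipse
```

Weak lensing involves deflections thousands of times smaller than this, spread over whole galaxy images as subtle shape distortions -- which is why statistical, pattern-recognition methods like the machine learning algorithms described above are needed to detect it.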
A common refrain in the media is that people don't like their boss and people are scared of robots. So I wondered about the truth and nuance behind these emotions: how many people would prefer a robot to their boss? The old saying goes, "People join a company, but they leave a bad boss." As Gallup research demonstrates, 70% of how we feel about work--our emotional commitment--is driven by who our manager is. The ongoing employee engagement crisis is largely about managers who know how to manage tasks, but don't know how to lead people.
NEW DELHI: The skills gap is widening, found a global survey by Wiley Education Services and Future Workplace. It also found that the number of recruiters who would prefer to invest in AI (artificial intelligence) rather than upskilling their employees had increased to 40% this year from 29% last year. The report, titled 'Closing the Skills Gap 2019', surveyed 600 HR leaders and found that 64% of employers thought there was a skills gap in their firm, up from 52%. The top reasons for the skills gap were found to be the pace of change due to technology, a lack of skilled talent that can move into the required positions, and a lack of qualified candidates. Around 90% of employers said they would hire a person without a 4-year college degree, although 68% said that a college degree was used to validate hard skills.
Robots aren't going to take everyone's jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT's Task Force on the Work of the Future. The supergroup of MIT academics was set up by MIT President Rafael Reif in early 2018 to investigate how emerging technologies will impact employment and devise strategies to steer developments in a positive direction. And the headline finding from their first publication is that it's not the quantity of jobs we should be worried about, but the quality. Widespread press reports of a looming "employment apocalypse" brought on by AI and automation are probably wide of the mark, according to the authors.
Judge Napolitano's Chambers: Judge Andrew Napolitano breaks down why the Fourth Amendment is an intentional obstacle to government, an obstacle shown necessary by history to curtail tyrants. A trial in Great Britain has just concluded with potentially dangerous implications for personal freedom in the U.S. Great Britain is currently one of the most watched countries in the Western world – watched, that is, by its own police forces. In London alone, one study found that more than 420,000 surveillance cameras were present in public places in 2017. What do the cameras capture? Everything done and seen in public.
Tesla has won the highest safety honor from the Insurance Institute for Highway Safety for the first time in the electric vehicle maker's history. The Tesla Model 3 earned the 2019 Top Safety Pick award from the organization after achieving a "good" performance in all six IIHS crash tests. The compact car also had to perform well in a headlight test and in a test for its frontal-crash prevention systems. The accomplishment reflects a significant endorsement of Tesla's safety systems, which CEO Elon Musk has often touted.