The invention of an artificial super-intelligence has been a central theme in science fiction since at least the 19th century. From E.M. Forster's short story The Machine Stops (1909) to the recent HBO television series Westworld, writers have tended to portray this possibility as an unmitigated disaster. But this issue is no longer confined to fiction. Prominent contemporary scientists and engineers are now also worried that AI could one day surpass human intelligence (an event known as the "singularity") and become humanity's "worst mistake". Current trends suggest we are set to enter an international arms race for such a technology.
OpenAI, the AI company that Elon Musk co-founded and then quit, has just released a more powerful version of its AI text-writing software. The company still won't release the full software, which could be used to write fake news and messages en masse, for fear it might be misused. OpenAI says its text-writing system is so advanced that it can write news stories, and even fiction, that pass as human. A user can feed the system anything from a few sentences to pages of text, and the system will then continue that text in an uncannily well-written, contextually relevant, human style. However, when it announced its original system, GPT-2, in February, the company said the full software was too dangerous to release to the public, and only a weaker version was made available.
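To give a concrete sense of the feed-text-and-continue workflow described above, here is a minimal sketch that runs the smaller, publicly released GPT-2 checkpoint. The use of the Hugging Face transformers library and the "gpt2" model name are assumptions for illustration; the article does not specify any tooling, and this is not OpenAI's own interface.

```python
# Minimal sketch: continue a prompt with the publicly released (smaller) GPT-2.
# Assumes the Hugging Face `transformers` library; not OpenAI's own tooling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Prominent scientists and engineers are now worried that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; do_sample=True yields varied, human-like text.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling parameters such as top_k control the trade-off between coherence and variety in the generated continuation.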
The concept of randomness is easy to grasp on an intuitive level but challenging to characterize in rigorous mathematical terms. In "Algorithmic Randomness" (May 2019), Rod Downey and Denis R. Hirschfeldt present a comprehensive discussion of this issue, incorporating the distinct perspectives of "statisticians, coders, and gamblers." Randomness is also a concern for "modelers," who depend on simulation models driven by random number generators or on analytic models built from probabilistic assumptions. In such cases, the underlying mathematical model is often an ergodic stochastic process, and the issue is whether the output of the simulator's random number generator, or the observed behavior of the real-world system being modeled, is "random enough" to establish confidence in the model's predictions. In a sense, this highly pragmatic perspective represents a less restrictive approach to the issue of randomness: if any of the strong criteria described by the authors is satisfied, that output or observed behavior should certainly be sufficiently random to establish confidence in a model's predictions.
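In the pragmatic, modeler's spirit sketched above, one common sanity check on a generator's output is a classical statistical test. The sketch below applies a chi-square goodness-of-fit test to Python's standard generator; the choice of test and of generator are illustrative assumptions, not something Downey and Hirschfeldt prescribe, and passing is only a weak necessary condition, far short of algorithmic randomness.

```python
# Sketch: a chi-square frequency test as a pragmatic "random enough" check.
import random
from scipy.stats import chisquare

N_SAMPLES, N_BINS = 100_000, 10
rng = random.Random(42)

# Count how many draws fall into each of 10 equal-width bins on [0, 1).
counts = [0] * N_BINS
for _ in range(N_SAMPLES):
    counts[int(rng.random() * N_BINS)] += 1

# Under the null hypothesis of uniformity, each bin expects N/10 draws.
stat, p_value = chisquare(counts)
print(f"chi-square statistic = {stat:.2f}, p-value = {p_value:.3f}")
# A very small p-value would flag the generator as suspiciously non-uniform;
# passing says nothing about deeper structure (e.g., serial correlation).
```

A generator that passes this frequency test can still fail tests for serial correlation or longer-range structure, which is precisely why the stronger criteria the authors survey matter.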
The consequences of fabricated news stories may linger in your memory. According to a new study, voters may develop false memories after reading a fake news report, and they are more likely to do so if the narrative lines up with their own beliefs. Researchers presented more than 3,000 eligible voters in Ireland with both legitimate and made-up stories ahead of the 2018 referendum on legalizing abortion. In subsequent questioning, and even after being told that some of the reports were fake, nearly half of participants reported a memory of at least one of the fabricated events, and many remained steadfast in these false memories.
Science fiction can sometimes be a good guide to the future. In the film Upgrade (2018), the main character, Grey Trace, is shot in the neck and his wife is shot dead. Trace wakes up to discover that not only has he lost his wife, he now faces a future as a quadriplegic, confined to a wheelchair. He is implanted with a computer chip called Stem, designed by famous tech innovator Eron Keen – any similarity with Elon Musk must be coincidental – which will let him walk again.
Ever wondered how video-streaming services such as YouTube and Netflix fetch videos that you like? Or how Google and Facebook find stories that interest you? These services are powered by Artificial Intelligence (AI) and Machine Learning (ML) algorithms, which analyze your behavior at a granular level to infer your interests and preferences. AI and ML are seeping into nearly every aspect of our lives, helping us in ways that augment our abilities and make us better at what we do. This article focuses on the Python programming language and explains why it is the most effective language for AI and ML.
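To make the idea concrete, here is a toy Python sketch of the kind of behavioral analysis such services build on: a content-based recommender that scores unwatched items by cosine similarity to a user's watch history. The data and approach are invented for illustration and are far simpler than anything YouTube or Netflix actually run.

```python
# Toy content-based recommender: suggest items similar to what a user watched.
import math

# Each item is described by hand-made feature weights (genre affinities).
items = {
    "space documentary": {"science": 0.9, "drama": 0.1},
    "courtroom drama":   {"science": 0.0, "drama": 0.9},
    "robotics lecture":  {"science": 0.8, "drama": 0.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse feature dictionaries."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Build a simple taste profile by averaging the features of watched items.
watched = ["space documentary"]
profile = {}
for title in watched:
    for k, v in items[title].items():
        profile[k] = profile.get(k, 0) + v / len(watched)

# Rank the unwatched items by similarity to the profile.
ranking = sorted(
    ((cosine(profile, feats), title) for title, feats in items.items()
     if title not in watched),
    reverse=True,
)
print(ranking)  # "robotics lecture" should score above "courtroom drama"
```

Even this tiny example shows why Python is popular for the task: the whole pipeline, from feature representation to ranking, fits in a few readable lines of standard-library code.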
I see two main points of interest, personally. The first is adversarial examples. There have been adversarially robust generative models developed, but it seems to me that there is more to be understood here. Obviously the 'Adversarial Examples Are Not Bugs, They Are Features' paper lays out a convincing argument about the theoretical meaning of the problem, but... is there some overarching pattern that can help distinguish useful features from brittle ones? The area I'm personally most interested in, though (I'm nowhere near knowledgeable enough to be caught up with current research, but it's what I'm working towards at the moment), is unsupervised model-based reinforcement learning; a small sketch of the adversarial-examples point follows below.
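For anyone who wants a concrete handle on the adversarial-examples point, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The model and data are placeholders I made up; this illustrates the basic attack itself, not the robust generative models mentioned above.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases loss.
# Toy model and random data stand in for a real classifier and dataset.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784)            # placeholder "image", values in [0, 1]
y = torch.tensor([3])             # placeholder true label
x.requires_grad_(True)

# Forward/backward pass to get the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: take a step of size epsilon along the sign of the input gradient.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The striking part is how small epsilon can be while still flipping a trained model's prediction, which is exactly what makes the useful-versus-brittle feature distinction interesting.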