If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The fight against videos altered by the use of artificial intelligence just got a new ally. According to researchers at UC Berkeley and the University of Southern California, a new algorithm can help spot whether a video has been manipulated via a process known as 'deepfaking.' Counter-intuitively, the tool that scientists say will aid them in their crusade against faked videos happens to be the very same tool that helps make the videos in the first place: artificial intelligence. Pictured is a grab from a deepfake video in which Steve Buscemi's face is superimposed over Jennifer Lawrence's body. Deepfakes are so named because they utilize deep learning, a form of artificial intelligence, to create fake videos.
The scholars focused on combating the malicious use of AI by terrorists. Their findings were published in the journal Russia in Global Affairs. Much has been written on the threats that artificial intelligence (AI) can pose to humanity. Today, this topic is among the most discussed issues in scientific and technical development. Although so-called Strong AI, characterised by independent systems thinking and possibly self-awareness and willpower, is still far from reality, various upgraded versions of Narrow AI are now completing specific tasks that seemed impossible just a decade ago.
Three years ago, if you had told me that one day I would use Python to analyze AI policy and make Guido van Rossum chuckle, I would have thought you were crazy. Three years later at PyCon 2019 in Cleveland, that's exactly what happened. I was by no means a tech person. I was trained as an economist (read: stats nerd), but somehow for the past three years I've been writing analysis on deep-tech fields including AI and 5G. What I hope to achieve with this post is not #humblebrag (ok, maybe a little happy dance) but to share with you all the struggles I had and am still experiencing on a daily basis, and to reassure any fellow researcher out there who feels like they're faking it all the time: you are not alone.
When asked why he robbed banks, Willie Sutton famously replied, "Because that's where the money is". And so much of artificial intelligence evolved in the United States – because that's where the computers were. However, with Europe's strong educational institutions, the path to advanced AI technologies has been cleared by European computer scientists, neuroscientists, and engineers – many of whom were later poached by US universities and companies. From backpropagation to Google Translate, deep learning, and the development of more advanced GPUs permitting faster processing and rapid developments in AI over the past decade, some of the greatest contributions to AI have come from European minds. Modern AI can be traced back to the work of the English mathematician Alan Turing, who in early 1940 designed the bombe – an electromechanical precursor to the modern computer (itself based on previous work by Polish scientists) that broke the German military codes in World War II.
It could be that Richard Socher's operating system just runs with more energy than other people's. He has just flown in from California and his body clock is telling him it's still 4 a.m. Already, though, he has delivered a keynote address, participated in a panel and held a question-and-answer session at the START Summit in St. Gallen, Switzerland, an important innovation conference. Despite all that, he's in a good mood as he poses for the ZEIT ONLINE photographer and later helps carry her flash equipment. He then sits down in a drafty corner of the congress hall for the following three-hour interview.
As smart as artificial intelligence systems seem to get, they're still easily confused by hackers who launch so-called adversarial attacks -- cyberattacks that trick algorithms into misinterpreting their input data, sometimes to disastrous ends. In order to bolster AI's defenses against these dangerous hacks, scientists at the Australian research agency CSIRO say in a press release they've created a sort of AI "vaccine" that trains algorithms on weak adversaries so they're better prepared for the real thing -- not entirely unlike how vaccines expose our immune systems to inert viruses so they can fight off infections in the future. CSIRO found that AI systems like those that steer self-driving cars could easily be tricked into thinking that a stop sign on the side of the road was actually a speed limit sign, a particularly dangerous example of how adversarial attacks could cause harm. The scientists developed a way to distort the training data fed into an AI system so that it isn't as easily fooled later on, according to research presented at the International Conference on Machine Learning last week. "We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more 'difficult' training data set," Richard Nock, head of machine learning at CSIRO, said in the press release.
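The "weak adversary" idea can be sketched in a few lines of code. The NumPy helper below augments a toy training set with copies of each image perturbed by small, bounded noise, so a model trained on the combined set has already seen mild distortions. Note this is only an illustrative sketch: the function name, the epsilon bound, and the random-sign noise scheme are assumptions for demonstration, not CSIRO's actual method, which would derive the weak perturbations from the model itself.

```python
import numpy as np

def weak_adversarial_augment(images, labels, epsilon=0.03, seed=None):
    """Augment a training set with weakly perturbed copies of each image.

    A crude stand-in for the 'vaccination' idea: expose the model to small,
    epsilon-bounded distortions during training so that stronger
    perturbations at test time are less surprising. (Hypothetical helper;
    not CSIRO's published technique.)
    """
    rng = np.random.default_rng(seed)
    # Bounded perturbation: each pixel moves by at most epsilon in [0, 1] space.
    noise = epsilon * rng.choice([-1.0, 1.0], size=images.shape)
    perturbed = np.clip(images + noise, 0.0, 1.0)
    # Perturbed copies keep their original labels.
    aug_images = np.concatenate([images, perturbed])
    aug_labels = np.concatenate([labels, labels])
    return aug_images, aug_labels

# Usage with a toy batch of 8 RGB "images" in [0, 1]:
X = np.random.default_rng(0).random((8, 32, 32, 3))
y = np.arange(8) % 2
Xa, ya = weak_adversarial_augment(X, y, epsilon=0.03, seed=0)
```

The clipping step keeps perturbed pixels in the valid range, so no perturbed pixel ever differs from its original by more than epsilon.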
A conquering army wants to take a major city but doesn't want troops to get bogged down in door-to-door fighting as they fan out across the urban area. Instead, it sends in a flock of thousands of small drones, with simple instructions: Shoot everyone holding a weapon. A few hours later, the city is safe for the invaders to enter. This sounds like something out of a science fiction movie. But the technology to make it happen is mostly available today -- and militaries worldwide seem interested in developing it. Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a "human in the loop" -- that is, with no person involved at any point between identifying a target and killing them.
NASA is gearing up for a rescue operation that it hopes will save a critical instrument on its Mars lander that remains trapped just centimeters below the surface. In March, after less than a year on Mars' surface, NASA's InSight Lander reported that a critical instrument -- a 'mole' probe that is designed to burrow into the planet and assess heat emissions -- hit a snag. The probe, which was meant to bore 16 feet downward, became trapped just 30 centimeters beneath the planet's surface less than a month into its burrowing process, and has remained stuck for several months. A newly devised plan, however, could extricate the probe once and for all.
This is a unique and exciting opportunity, supported by Kingston University academics and based at Instinet, a leading global broker of equities, in London. This KTP project will develop and deploy innovative machine learning techniques for monitoring and improving trading execution performance. You will have primary responsibility for the application of these techniques: deploying machine-learned products to improve the execution performance of equity trading, and machine-learned tools to monitor this performance. You will also have responsibility for authoring reports and academic publications that describe aspects of your work and its potential impact for Instinet and its clients. You will be working alongside another KTP Associate, employed as a Systems Architect.