Computers have already taken over many tasks that used to be done by people. But how far can this go, and is it a good thing? This talk will cover a brief history of how computing and psychology have developed together, as well as what is happening now. We'll end with a discussion of what might happen in the future and what that may mean for how we live our lives.
The promise that smart contracts hold for business will only grow in the near future as the blockchain services that make such agreements possible become more advanced. Businesses will rely on smart contracts for a wide range of reasons, but nowhere will they be more important than in negotiating complex agreements with other businesses. Companies will cut the costs of working together by removing third parties from the equation, as smart contracts will be able to manage and adjust themselves with minimal to no human oversight.
In the blink of an eye, we've seen robots begin to take over the workplace: robots for packing and shipping boxes at Amazon, robots for hospital care, robots for dentists, first responders, truck drivers, battlefields, and office buildings. The New American writer Dennis Behreandt highlights in the AI print issue that "Government too, is beginning to benefit from AI, naturally enough at the expense of citizen privacy." Through fingerprint identification and facial recognition, the U.S. federal government has been unaccountably collecting massive databases of private information, turning AI into a dangerously powerful force in today's world. Elon Musk, technology entrepreneur, investor, and engineer, expressed his concerns about AI at a tech conference in Texas: "I am really quite close, I am very close, to the cutting edge in AI and it scares ... me. It's capable of vastly more than almost anyone knows and the rate of improvement is exponential ... and mark my words, AI is far more dangerous than nukes." With Stephen Hawking, Apple CEO Tim Cook, and Oxford's Nick Bostrom expressing similar concerns, one might question humankind's invention. Mr. Behreandt does the same in the cover story: If we are already having difficulty understanding the limited AI of the present, how can we hope to understand, much less control, the increasingly intelligent AI of the near future? And should we create machine intelligence that exceeds our own, as ours exceeds that of the cockroach? One might also wonder: by replicating how the brain works through technology, are we attempting to replace God? Christianity Today addresses this in its article "Does 'The Image of God' Extend to Robots, Too?", arguing that mere morality isn't enough. Such complicated, uneasy relationships with AI are, and will continue to be, built on our flawed nature as creators.
There is a real danger that humans-as-creators will be selfish and amoral, fashioning intelligent designs that exist simply to serve our own interests and desires – or our own sense of right and wrong. The immorality we have wrought on our world will be magnified by AI. Since the fall of mankind, the world has been influenced by sin. As God's children we naturally want to create, but instead of creating in our own image, we should create in the image of God. Only by striving for the virtues of morality instead of our own desires will we live in a free and prosperous society. So as technology seems to fly into a new dimension, let's remember the wise words of John Adams: "Our Constitution was made for a moral and religious people."
A recent analysis of the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield; those that don't risk eradication. Here's what that means, according to a trio of experts. Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the preprint server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.
The guidelines also call on governments to boost AI funding and establish frameworks that help turn research into real-world applications. For example, there could be "deregulated environments" in which to test AI before unleashing it in the wild. The guidelines are due to be released on May 22nd and come from 50 experts in the public and private sectors, including governments and tech companies. President Trump pushed for regulation in his executive order prioritizing AI, and American companies and institutions have pressed for positive uses of AI as well.
From treating chronic diseases and reducing fatality rates in traffic accidents, to fighting climate change and anticipating cybersecurity threats, Artificial Intelligence (AI) is no longer considered a futuristic construct – it is already a reality and is helping humanity solve pressing global challenges. It significantly improves people's lives, helps with day-to-day tasks, and benefits society and the economy. Nevertheless, AI applications should not only be consistent with the law, but also adhere to ethical principles. The ethical dimension of AI is not a luxury feature or an add-on: it needs to be an integral part of AI development. The European Commission recognises AI as one of the 21st century's most strategic technologies and is therefore increasing its annual investment in AI by 70% as part of the research and innovation programme Horizon 2020, reaching €1.5 billion for the period 2018-2020.
Not all tech billionaires are advocates of artificial intelligence (AI). Some are so worried about the effects AI is having on society that they are spending their billions trying to monitor it. This, in turn, has created a new frontier in philanthropy. For Pierre Omidyar, the founder of eBay, AI is such a concern that last year he set up Luminate, a London-based organization that advocates for civic empowerment, data and digital rights, financial transparency, and independent media.
What that suggested to the researchers is that in every data set, there are two types of correlations: patterns that actually correlate with the meaning of the data, such as the whiskers in a cat image or the fur colorations in a panda image, and patterns that happen to exist within the training data but do not generalize to other contexts. These latter "misleading" correlations, as we'll call them, are the ones exploited in adversarial attacks. In the diagram above, for example, the attack takes advantage of a pixel pattern falsely correlated with gibbons by burying those imperceptible pixels within the panda image. The recognition system, trained to recognize the misleading pattern, then picks up on it and assumes it's looking at a gibbon.
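The mechanics can be sketched with a toy model. In a fast-gradient-sign-style attack, each pixel is nudged by a tiny amount in the direction that most increases the wrong class's score; for a linear classifier that direction is simply the sign of the weights, i.e. the "misleading" correlations themselves. Everything below (the classifier, weights, labels, and image vectors) is a hypothetical random stand-in for illustration, not the actual system in the diagram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" on 784-pixel images:
# score > 0 -> "gibbon", score <= 0 -> "panda".
w = rng.normal(size=784)  # learned weights: the non-robust directions
b = 0.0

def predict(x):
    return "gibbon" if x @ w + b > 0 else "panda"

# A "panda" input: an image the classifier scores negative.
x = rng.normal(size=784)
if x @ w + b > 0:
    x = -x  # ensure we start on the "panda" side

# FGSM-style perturbation: step each pixel by at most eps in the
# direction of the score's gradient, which for a linear model is sign(w).
eps = 0.25
x_adv = x + eps * np.sign(w)

print(predict(x))                  # panda
print(predict(x_adv))              # typically flips to gibbon
print(np.max(np.abs(x_adv - x)))   # per-pixel change is capped at eps
```

The per-pixel budget `eps` keeps the perturbation imperceptible, yet the score shifts by `eps * sum(|w|)`, which grows with the number of pixels; this is why many small, individually invisible changes aligned with the misleading pattern can overwhelm the true signal.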