TTT Studios: An open conversation about AI ethics


Increased deployment of Artificial Intelligence around the world has torn open a very public and heated debate. While AI is being used to do things like sentence criminals, determine who should be hired and fired, and assess what loan rate you should be offered, it's also being leveraged to protect against poaching, detect illnesses sooner and more accurately, and shed new light on fighting climate change. As we continue to develop AmandaAI here at TTT, we increasingly involve ourselves in the field. And as the technology continues to advance, we will take on more and more clients who want to incorporate AI into their software. Since we're helping to create an AI-enabled future, we have a responsibility to explore what exactly that means.

Automatic for the people? Experts predict how AI will transform the workplace


"There are going to be errors, whether it's humans or robots. It's more about where do you want those errors to occur," Harvey said. This means it may make more sense to focus on internal processes where mistakes are unlikely to cause significant problems. But when they could affect clients or have a regulatory impact, "that's probably not where I would want to have AI. I would want to seriously look deep into what the potential losses are associated with that – not only to clients but to the firm."

S2E3: Artificial Intelligence


This week Rebecca and Jessie talk about their fear of artificial intelligence, deepfakes, Sophia the Robot and what might happen when machine learning goes too far.

Explainable AI or Halting Faulty Models ahead of Disaster


Experienced machine learning practitioners will recognize the challenge's complexity and rightly question the validity of the results. At the same time, submissions like this Notebook illustrate how effortlessly the Titanic competition's leaderboard can be forged: a top-performing model can be created simply by collecting and including the publicly accessible list of survivors. Clearly, such overfit models work for only one very specific use case and are virtually useless for predicting outcomes in any other situation (not to mention the ethics of cheating). So how can we make sure we have trained, or been provided with, a model that we can actually use in production? How can machine learning systems be deployed without disaster likely ensuing?
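The trick described above is a textbook case of label leakage, and it can be sketched in a few lines. The passenger ids and outcomes below are hypothetical, not the real competition data; the point is only that a "model" which memorizes a leaked answer key scores perfectly on the benchmark while learning nothing that generalizes:

```python
# Hypothetical leaked public list mapping passenger id -> survived (1) or not (0).
leaked_outcomes = {101: 1, 102: 0, 103: 1, 104: 0, 105: 1}

def leaky_model(passenger_id):
    """Looks up the leaked label; falls back to a blind default for unseen ids."""
    return leaked_outcomes.get(passenger_id, 0)

# The benchmark's hidden labels happen to be exactly the leaked list,
# so the leaky model appears to be 100% accurate...
benchmark = {101: 1, 102: 0, 103: 1, 104: 0, 105: 1}
accuracy = sum(leaky_model(pid) == y for pid, y in benchmark.items()) / len(benchmark)
print(accuracy)  # 1.0 -- but only because the answers were memorized

# ...and is useless for any passenger outside the leaked list: it can only
# emit its blind default, because it never learned from features at all.
print(leaky_model(999))  # 0
```

This is why a perfect leaderboard score on a public benchmark is evidence of nothing by itself: the only meaningful test is performance on data the model could not possibly have seen.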

Here's How Artificial Intelligence Is Fueling Climate Change


The 'AI apocalypse' might kill humanity before any actual robot uprising. You can think of artificial intelligence (AI) in the same way you think about cloud computing, if you view either through an environmental lens: an enormous and growing source of carbon emissions, with the very real potential to choke out humans' ability to breathe clean air long before a sentient and ornery AI goes all Skynet on us. At the moment, data centers (the enormous rooms full of stacks and stacks of servers that juggle dank memes, fire tweets, your vitally important Google docs, and all the other data stored somewhere other than on your phone and home computer) use about 2% of the world's electricity. Of that, the servers that run AI (processing all the data and performing the computations a machine mimicking a human brain must handle in order to achieve "deep learning") use about 0.1% of the world's electricity, according to a recent MIT Technology Review article. The likelihood that figure will grow, it turns out, is quite good.

Beyond Clustering: The New Methods that are Pushing the Future of Unsupervised Learning


If you ask any group of data science students about the types of machine learning algorithms, they will answer without hesitation: supervised and unsupervised. However, if we ask that same group to list different types of unsupervised learning, we are likely to get an answer like clustering but not much more. While supervised methods lead the current wave of innovation in areas such as deep learning, there is very little doubt that the future of artificial intelligence (AI) will shift towards more unsupervised forms of learning. In recent years, we have seen a lot of progress on several new forms of unsupervised learning that expand well beyond traditional clustering or principal component analysis (PCA) techniques. Today, I would like to explore some of the most prominent new schools of thought in the unsupervised space and their role in the future of AI.
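For reference, the "traditional clustering" that students reach for is typified by k-means: assign each point to its nearest centroid, move each centroid to the mean of its assigned points, repeat. A minimal sketch on toy 1-D data (the data values and function name are illustrative, not from any library):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal k-means on 1-D data: repeatedly assign points to their
    nearest centroid, then move each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old centroid.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return sorted(centroids)

# Two obvious groups, one around 1 and one around 10.
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
print(kmeans_1d(data, k=2))  # two centroids, one near 1 and one near 10
```

Methods like this need no labels at all, which is exactly their appeal; the newer unsupervised approaches the article surveys keep that property while learning far richer structure than a set of cluster centers.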

The Ethics of Artificial Intelligence


Road Watch 2.0 Vision Zero Pedestrian Deaths Project: Learn how Road Watch, an award-winning Richmond Hill and York Regional Police road safety program, is the basis for a space-age approach to making Toronto roads safer, as kicked off on the Global News 640 AM John Oakley Show. Hear a plan to make roads safer while mitigating climate change through earth- and space-based LiDAR technology. Learn how road safety and climate change mitigation are combined in the Ethical AI Energy Cloud City master plan, a UN 17 Sustainable Development Goals emerging technology framework to unite society. Dave D'Silva founded Intelligent Market Solutions Group (IMSG) to make good on a University of Waterloo pact with Bill Gates. IMSG is a socio-economic emerging technology project management firm creating Star Trek-inspired Ethical AI systems.

From/To: Everything you wanted to know about the future of your work but were afraid to ask (Future of Work, Cognizant)


Futures are always a reaction to the present; tomorrow is always a judgment on today. By training a microscope on how we work now, we can try to figure out how we're going to work when this day is done. Though the future of work will always be in the future, the future of your work has never been closer. The rise of robots, machine intelligence, distributed ledgers, quantum physics, "gig" labor, the unexaggerated death of privacy, a world eaten alive by software: all these trends point to a new world that's shaping up quite differently from anything we've ever seen, or worked in, before.

Enhancing trust in artificial intelligence: Audits and explanations can help


There is a lively debate around the world regarding AI's perceived "black box" problem. Most fundamentally, if a machine can be taught to learn on its own, how does it explain its conclusions? This issue comes up most frequently in the context of how to address possible algorithmic bias. One way to address it is to mandate a right to a human decision, as the General Data Protection Regulation's (GDPR) Article 22 does. Here in the United States, Senators Wyden and Booker propose in the Algorithmic Accountability Act that companies be compelled to conduct impact assessments.

The US Army is developing AI missiles that find their own targets

New Scientist

Artificial intelligence may soon be deciding who lives or dies. The US Army wants to build smart missiles that will use AI to select their targets, out of reach of human oversight. The project has raised concerns that the missiles will be a form of lethal autonomous weapon – a technology many people are campaigning to ban. The US Army's project is called Cannon-Delivered Area Effects Munition (C-DAEM). Companies will bid for the contract to build the weapon, with the requirements stating it should be able to hit "moving and imprecisely located armoured targets" …