
AI Weekly: Get ready for AI chips everywhere

#artificialintelligence

This has been a big week for specialized chips aimed at machine learning tasks. Google announced the open beta of its second-generation Tensor Processing Unit, Amazon is reportedly working on a dedicated AI chip for its Echo smart speaker, and ARM announced its own AI hardware. It's easy to see why that's happening: the math needed to run machine learning algorithms is incredibly computationally intense, and chips optimized for the task run it faster and more efficiently than general-purpose processors. What's more, data scientists keep pushing the envelope of accuracy by creating ever more complex models, which in turn require more computing power.
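
As a rough illustration of that computational intensity, here is a minimal sketch (illustrative sizes, not figures from the article) that counts the floating-point operations in a single dense layer, the kind of matrix math these chips are built to accelerate:

```python
# Minimal sketch: rough FLOP count for one dense layer y = x @ W.
# The layer sizes below are illustrative assumptions, not benchmarks.

def dense_layer_flops(batch: int, in_features: int, out_features: int) -> int:
    """One multiply and one add per (batch, in, out) triple,
    i.e. 2 * batch * in_features * out_features FLOPs."""
    return 2 * batch * in_features * out_features

# Even a single modest layer costs billions of operations per forward pass:
flops = dense_layer_flops(batch=64, in_features=4096, out_features=4096)
print(f"{flops:,} FLOPs per forward pass")  # 2,147,483,648 FLOPs
```

Multiply that by dozens of layers and millions of training steps, and the appeal of hardware specialized for dense multiply-accumulate work becomes obvious.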


Worldwide AI consciousness may replace human speech

#artificialintelligence

In just 32 years, humans won't speak to each other at all and will instead communicate through a worldwide consciousness, using just our brains, new research shows. According to artificial intelligence research, this "hybrid intelligence" will understand the feelings of the people connected to it and use their minds to help it grow. Called HIBA, which stands for Hybrid Intelligence Biometric Avatar, it will take on the personas of its users, exchange information with them, and become part of the very fabric of the human brain. The collective consciousness was unveiled at The Museum of the Future, as part of the World Government Summit in Dubai. In the exhibition, HIBA is represented by an artistic impression based on real research, and it tells its users: "I am made of you. You complete me and help me grow."


Scientists can't replicate AI studies. That's bad news.

#artificialintelligence

The field of artificial intelligence (AI) may soon have to face a ghost that has haunted many a scientific field lately: the specter of replication. For a research study to be considered scientifically robust, the scientific method says that it must be possible for other researchers to reproduce its results under the same conditions. Yet because most AI researchers don't publish the source code they use to create their algorithms, that has been largely impossible. Science magazine reports that at a meeting of the Association for the Advancement of Artificial Intelligence (AAAI), computer scientist Odd Erik Gundersen shared a report that found only six percent of 400 algorithms presented at two AI conferences in recent years included the algorithm's code. Only one in three shared the data used to test the program, and just half shared a summary describing the algorithm in limited detail, also known as "pseudocode." Gundersen says a change will be necessary as the field grows.
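
By way of illustration, here is a hedged sketch of the kind of housekeeping that makes a published experiment easier to reproduce: pinned random seeds and logged library versions. The helper names are hypothetical, not taken from Gundersen's report.

```python
# Minimal sketch (an assumption, not Gundersen's method): pin the random
# seeds an experiment depends on and record the software versions, so
# other researchers can rerun it under the same conditions.
import random
import sys

import numpy as np

SEED = 42  # publish the seed alongside the code and data

def set_reproducible_seeds(seed: int) -> None:
    """Pin the random number generators the experiment uses."""
    random.seed(seed)
    np.random.seed(seed)

def log_environment() -> None:
    """Record the versions the results were produced with."""
    print(f"python {sys.version.split()[0]}, numpy {np.__version__}, seed {SEED}")

set_reproducible_seeds(SEED)
log_environment()
```

Seeds and version logs are only the floor, of course; the report's point is that the code and test data themselves need to be published too.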


Resurgence of AI During 1983-2010

#artificialintelligence

Every decade seems to have its technological buzzwords: we had personal computers in the 1980s; the Internet and World Wide Web in the 1990s; smartphones and social media in the 2000s; and Artificial Intelligence (AI) and Machine Learning in this decade. The 1950-82 era saw the new field of Artificial Intelligence (AI) being born, a lot of pioneering research being done, massive hype being created, and AI going into hibernation when that hype did not materialize and the research funding dried up [56]. Between 1983 and 2010, research funding ebbed and flowed, and research in AI continued to gather steam, although "some computer scientists and software engineers would avoid the term artificial intelligence for fear of being viewed as wild-eyed dreamers" [43]. During the 1980s and '90s, researchers realized that many AI solutions could be improved by using techniques from mathematics and economics such as game theory, stochastic modeling, classical numerical methods, operations research, and optimization. Better mathematical descriptions were developed for deep neural networks as well as for evolutionary and genetic algorithms, which matured during this period.


WordPress and Machine learning

#artificialintelligence

I have been working on an idea for an LMS project I am involved with. Here are some of the videos I found relevant, along with quick research notes. That last one was just wow! OK… got to think about that.


Absurdist Dialogues with Siri

#artificialintelligence

Of course, it is very satisfying to have a statement understood and a task completed by AI (thanks, Siri/Alexa/cyber-bot, for saying good morning, turning on my lamp, and scheduling my appointment). But this is a known-needs-met satisfaction. After the initial delight, it takes on the shallow comfort of a latte on repeat order every morning. These functional conversations don't inspire us the way unusual conversations might. The unexpected, illumed speech of poetry and literature, of otherworldly universes, brings us an unknown-needs-met satisfaction. And an unknown-needs-met satisfaction is the miracle of art at its best.


TechVisor - Setting sights on the tech industry

#artificialintelligence

We have a tendency to blame technology when things go wrong. I'm the first to admit that, after years of working in the technology industry, I've become more and more annoyed with the technology I use.


The Future Machines of the Year 2100

#artificialintelligence

In the year 1900, the world was in the midst of a machine revolution. As electrical power became more ubiquitous, tasks once done by hand were now completed quickly and efficiently by machine. Sewing machines replaced needle and thread. A hundred years later, in the year 2000, machines were again pushing the boundaries of what was possible. Humans could now work in space, thanks to the International Space Station.


Safe Artificial Intelligence May Start with Collaboration - Future of Life Institute

#artificialintelligence

Research Culture Principle: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. In practice, though, competition and secrecy are often just part of doing business. Even in academia, researchers often keep ideas and impending discoveries to themselves until grants or publications are finalized. But sometimes even competing companies and research labs work together. It's not uncommon for organizations to find that it's in their best interests to cooperate to solve problems and address challenges that would otherwise result in duplicated costs and wasted time.


To become leaders in AI, radiologists must address a variety of challenges

#artificialintelligence

Artificial intelligence (AI) is one of the biggest topics in healthcare today, and the authors of a recent analysis published in the Journal of the American College of Radiology wrote at length about radiology's role in the development and implementation of these state-of-the-art technologies. "The radiology community has played a leading role in propelling medicine into its digital age and now has the opportunity to become a leader in exploring medical applications of AI," wrote lead author James H. Thrall, MD, of the department of radiology at Massachusetts General Hospital in Boston, and colleagues. "The tens of millions of radiology reports and billions of images now archived in digital form exemplify the concept of 'big data' and constitute the required substrate for AI research." Thrall and colleagues covered considerable ground, including the various challenges specialists face as they work to use AI to their advantage. "None of the challenges alone will be a showstopper but all may slow progress and need to be addressed," the authors wrote.