
Machine-learning Mendeleevs have rediscovered the periodic table

#artificialintelligence

How are you enjoying the International Year of the Periodic Tables so far? Yes, tables – we should probably have been using the plural all along. Since Dmitri Mendeleev (and others) first sketched out the periodic relationships between the elements in the 1860s, it has been estimated that around a thousand different tables have appeared in print – and that's before considering all those on the internet. Even the T-shirts handed out at the opening ceremony in January (I grabbed one, naturally) offered a new version, courtesy of the European Chemical Society, with the elements colour-coded and given different-sized boxes according to their abundance and availability. Mostly these tables embody careful deliberation about what to put where, which information to prioritise, which message to convey.


Artificial Intelligence (AI) Stats News: AI Is Actively Watching You In 75 Countries

#artificialintelligence

Recent surveys, studies, forecasts and other quantitative assessments of the impact and progress of AI highlighted the strong state of AI surveillance worldwide, the lack of adherence to common privacy principles in companies' data privacy statements, the growing adoption of AI by global businesses, and the perception of AI as a major risk by institutional investors.

Using just the first fifteen minutes of a patient's raw electrocardiogram (ECG) signal, one such tool produces a score that places patients into different risk categories. Patients in the top quartile were nearly seven times more likely to die of cardiovascular causes than the low-risk group in the bottom quartile.

U.S. AI and machine learning startups raised $6.62 billion so far in 2019, and international startups raised $6.79 billion in the same period; the global total for all of 2018 was $19.5 billion [Crunchbase News]. The North America AI chip market is estimated to reach $30.62 billion in 2027, up from $2.5 billion in 2018, and the Asia Pacific AI chip market is estimated to reach $22.27 billion in 2027, up from $1.03 billion in 2018 [ResearchAndMarkets].

"An AI-equipped surveillance camera would be not a mere recording device, but could be made into something closer to an automated police officer" -- Edward Snowden

"When you get into the millions, you can really start to generate the levels at which humans stop understanding the correlations, and the machines start to understand the correlations" -- Ricky Knox, co-founder and CEO, Tandem Bank

"As AI gets better at performing the routine tasks traditionally done by humans, only the hardest ones will be left for us to do. But wrestling with only difficult decisions all day long is stressful and unpleasant" -- Fred Benenson, former vice president of data, Kickstarter

"AI can do things previously unimaginable with the volume, velocity, variety and veracity of big data. It can deliver an edge given the information intensity of all of the processes in asset management" -- Amin Rajan, CEO, Create-Research

"By 2025, a quarter of all miles driven will be driven by on-demand services" -- Amy Wyron, vice president of business solutions, Gett


Computers and Humans 'See' Differently. Does It Matter? Quanta Magazine

#artificialintelligence

When engineers first endeavored to teach computers to see, they took it for granted that computers would see like humans. The first proposals for computer vision in the 1960s were "clearly motivated by characteristics of human vision," said John Tsotsos, a computer scientist at York University. Things have changed a lot since then. Computer vision has grown from a pie-in-the-sky idea into a sprawling field. Computers can now outperform human beings in some vision tasks, like classifying pictures -- dog or wolf?


How IBM Sees The Future Of Artificial Intelligence

#artificialintelligence

Ever since IBM's Watson system defeated the best human champions at the game show Jeopardy!, artificial intelligence (AI) has been the buzzword of choice. More than just hype, intelligent systems are revolutionizing fields from medicine and manufacturing to changing fundamental assumptions about how science is done. Yet for all the progress, it appears that we are closer to the beginning of the AI revolution than the end. Intelligent systems are still limited in many ways. They depend on massive amounts of data to learn accurately, have trouble understanding context and their susceptibility to bias makes them ripe targets for sabotage.


The Ethics of A.I. Doesn't Come Down to 'Good vs. Evil'

#artificialintelligence

The Artificial Intelligence (A.I.) Brain Chip will be the dawn of a new era in human civilization. The Brain Chip will be the end of human civilization. These two diametrically opposite statements summarize the binary core of how we look at artificial intelligence (A.I.) and its applications: Good or bad? Ethics in A.I. is about trying to make space for a more granular discussion that avoids these binary polar opposites. It's about trying to understand our role, responsibility, and agency in shaping the final outcome of this narrative in our evolutionary trajectory.


IQ is largely a pseudoscientific swindle

#artificialintelligence

For some technical backbone to this piece, see here. Also: 1) It turns out IQ beats random selection in the best of applications by less than 6%, typically 2%, as the computation of correlations has a flaw, and psychologists do not seem to know the informational value of correlation in terms of "how much information do I gain about B by knowing A", nor propagation of error (intra-test variance for a single individual). The psychologists who engaged me on this piece -- with verbose writeups -- made the mistake of showing me the best they've got: papers with the strongest pro-IQ arguments. They do not seem to grasp what noise/signal really means in practice. Background: "IQ" is a stale test meant to measure mental capacity but in fact mostly measures extreme unintelligence (learning difficulties), as well as, to a lesser extent (with a lot of noise), a form of intelligence, stripped of second-order effects -- how good someone is at taking certain types of exams designed by unsophisticated nerds.
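The phrase "how much information do I gain about B by knowing A" has a precise counterpart: for jointly Gaussian variables, the mutual information between A and B depends only on their correlation rho, via I(A;B) = -1/2 * log2(1 - rho^2). A minimal sketch (the Gaussian assumption and the example rho values are mine, not the article's):

```python
import math

def gaussian_mutual_information(rho):
    """Mutual information (in bits) between two jointly Gaussian
    variables with correlation coefficient rho."""
    return -0.5 * math.log2(1.0 - rho * rho)

# Moderate correlations carry surprisingly little information:
for rho in (0.2, 0.5, 0.9):
    print(f"rho={rho}: {gaussian_mutual_information(rho):.3f} bits")
```

Note how nonlinear the relationship is: rho = 0.5 yields only about 0.2 bits, a fraction of what a naive reading of "50% correlated" suggests.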


Comprehensive Guide to 12 Dimensionality Reduction Techniques

#artificialintelligence

Have you ever worked on a dataset with more than a thousand features? I have, and let me tell you it's a very challenging task, especially if you don't know where to start! Having a high number of variables is both a boon and a curse. It's great that we have loads of data for analysis, but it is challenging due to size. It's not feasible to analyze each and every variable at a microscopic level. It might take us days or months to perform any meaningful analysis and we'll lose a ton of time and money for our business! Not to mention the amount of computational power this will take. We need a better way to deal with high dimensional data so that we can quickly extract patterns and insights from it. So how do we approach such a dataset?
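As an illustration of the kind of technique such a guide covers, here is principal component analysis (one standard dimensionality reduction method), done with a plain SVD in NumPy; the random dataset and the choice of 10 components are made up for the sketch and are not from the guide:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal
    components using the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # scores in the new basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))                   # 1,000 features
Z = pca(X, 10)                                     # reduced to 10 dimensions
print(Z.shape)                                     # (100, 10)
```

The same idea scales to the thousand-feature case the article describes: rather than inspecting each variable individually, you work with a handful of directions that capture most of the variance.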


The biggest problem in AI? Machines have no common sense.

#artificialintelligence

GARY MARCUS: The dominant vision in the field right now is, collect a lot of data, run a lot of statistics, and intelligence will emerge. And I think that's wrong. I think that having a lot of data is important, and collecting a lot of statistics is important. But I think what we also need is deep understanding, not just so-called "deep learning." So deep learning finds what's typically correlated, but we all know that correlation is not the same thing as causation.
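Marcus's correlation-versus-causation point is easy to demonstrate: two variables driven by a hidden common cause correlate strongly even though neither causes the other. A self-contained sketch (the heat/ice-cream/sunburn setup is a standard textbook illustration, not from the interview):

```python
import random

random.seed(0)

# A hidden common cause (summer heat) drives both ice-cream sales
# and sunburn counts; neither causes the other, yet they correlate.
heat = [random.gauss(0, 1) for _ in range(10_000)]
ice_cream = [h + random.gauss(0, 0.3) for h in heat]
sunburn = [h + random.gauss(0, 0.3) for h in heat]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

print(round(corr(ice_cream, sunburn), 2))  # strong, despite no causal link
```

A purely statistical learner sees only the strong correlation; distinguishing "heat causes both" from "ice cream causes sunburn" requires the kind of causal understanding Marcus is arguing for.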


Building your first machine learning model using KNIME (no coding)

#artificialintelligence

One of the biggest challenges for beginners in machine learning / data science is that there is too much to learn simultaneously, especially so if you do not know how to code. You need to quickly get used to linear algebra, statistics and other mathematical concepts, and learn how to code them! It can end up being a bit overwhelming for new users. If you have no background in coding and find it difficult to cope with, you can start learning data science with a GUI-driven tool.