Preconception


Your short-term memory can be unreliable after just a few seconds

New Scientist

You can misremember something just seconds after it happened, re-framing events in your mind to better fit with your own preconceptions. Our brains probably do this in an effort to make sense of the world in line with our expectations, even if that isn't helpful all of the time. Marte Otten at the University of Amsterdam in the Netherlands and her colleagues wanted to tease out the relationship between prior expectations and short-term memories. "We already know that long-term memory is fallible, we just wanted to find out if we could determine the specific ways in which short-term memory is fallible also," she says. The team conducted several experiments on more than 400 people that all involved showing the participants random letters arranged in a circle on a computer screen.


La veille de la cybersécurité

#artificialintelligence

Miseducation of algorithms is a crucial issue; when artificial intelligence mimics the unconscious attitudes, bigotry, and preconceptions of the humans who created these algorithms, serious harm can result. Computer tools, for example, have incorrectly flagged Black defendants as twice as likely to re-offend as white defendants. When an artificial intelligence used pricing as a proxy for healthcare needs, it incorrectly identified Black patients as healthier than equally ill white patients, because less money had been spent on them. Even an artificial intelligence used to compose a play relied on damaging preconceptions for casting. Removing sensitive information from the data appears to be a possible option.
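That last suggestion, stripping sensitive attributes before training, is easy to picture in code. Below is a minimal sketch, assuming a pandas DataFrame with hypothetical column names such as `race` and `gender`; none of the names or values come from the article itself.

```python
import pandas as pd

# Hypothetical training table; column names and values are illustrative only.
df = pd.DataFrame({
    "age": [34, 51, 29, 45],
    "income": [42_000, 88_000, 35_000, 61_000],
    "race": ["A", "B", "A", "B"],      # sensitive attribute
    "gender": ["F", "M", "M", "F"],    # sensitive attribute
    "label": [0, 1, 0, 1],
})

SENSITIVE = ["race", "gender"]

# Naive mitigation: drop the sensitive columns before fitting any model.
# Correlated columns (e.g., zip code, spending) can still act as proxies,
# so this step alone does not guarantee a fair model.
X = df.drop(columns=SENSITIVE + ["label"])
y = df["label"]
```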


12 Bytes by Jeanette Winterson review – how we got here and where we might go next

#artificialintelligence

In Mary Shelley's 1818 novel Frankenstein, a scientist creates life and is horrified by what he has done. Two centuries on, synthetic life, albeit in a far simpler form, has been created in a dish. What Shelley imagined has only now become possible. But as Jeanette Winterson points out in this essay collection, the achievements of science and technology always start out as fiction. Not everything that can be imagined can be realised, but nothing can be realised if it hasn't been imagined first.


Insights Discovery in Data Science Through Novel Machine Learning Approaches

#artificialintelligence

I have always appreciated the unusual, unexpected, and surprising in science and in data. As the famous science writer Isaac Asimov once said, "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I found it) but 'That's funny!'" This is the primary reason I encouraged most of the doctoral students I mentored at GMU to work on some variation of Novelty Discovery (or Surprise Discovery) for their Ph.D. dissertations. "Surprise discovery" for me is a much more positive, exciting phrase than "outlier detection" or "anomaly detection", and it is much richer in meaning, in algorithms, and in new opportunities. Finding the surprising, unexpected thing in your data is what inspires the exclamation "That's funny!" that may be signaling a great discovery (either about your data's quality, or about your data pipeline's deficiencies, or about some wholly new scientific concept). As the famous astronomer Vera Rubin said, "Science progresses best when observations force us to alter our preconceptions."
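To make the algorithmic side concrete: "surprise discovery" of the kind described here is often implemented with off-the-shelf anomaly detectors. Here is a minimal sketch using scikit-learn's IsolationForest on synthetic data; the data and parameters are illustrative and not from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Mostly "expected" observations, plus a few injected surprises.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
surprises = rng.uniform(low=6.0, high=9.0, size=(5, 2))
X = np.vstack([normal, surprises])

# IsolationForest isolates points that are easy to separate from the rest;
# those are the candidates for "That's funny!" moments.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)   # -1 = flagged as anomalous, 1 = normal

print("Flagged points:")
print(X[labels == -1])
```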


Council Post: It's Time To Challenge Our Preconceptions About AI's Role In Business

#artificialintelligence

Whether it boosts productivity or increases workplace satisfaction, artificial intelligence (AI) is increasingly recognized as an essential tool to unlock a competitive advantage in business. It's hard to dispute the potential of AI to transform the way we work. This is a technology that Alphabet CEO Sundar Pichai stated at Davos was akin to "electricity and fire" in its impact on humanity. And I agree -- AI will be (and is) revolutionary. However, for AI to achieve a meaningful, positive impact on businesses and the world, we have to understand it and challenge our preconceptions.


Let's break down the common self-driving car myths

#artificialintelligence

If you've never seen or been near (or inside) a self-driving car, the concept can seem a bit much. But the thing is, autonomous vehicle technology is already all around us: in many of our human-controlled cars, on the road in driverless shuttles and vans, and coming from self-driving companies like Alphabet's Waymo, GM-funded Cruise, Amazon-backed Aurora, and Uber. Even if it sounds like a far-off, far-fetched, futuristic proposition, self-driving cars aren't sci-fi. Engineering simulation software company Ansys surveyed more than 22,000 adults from the U.S., UK, France, Italy, Spain, Sweden, Japan, China, India, and other regions about their perceptions of self-driving cars. The survey, out last week, found that older adults are less optimistic than younger adults about ever riding in a robocar.


How to Convince People That Machine Learning Works

#artificialintelligence

Many people are still not convinced that machine learning works reliably. But they want analytics insight, and most of the time machine learning is the way to go. This means that when you work with customers, you need to do a lot of convincing, especially if they are not into machine learning themselves. Many people are still under the impression that analytics only works when it is based on physics.


When not to use deep learning

#artificialintelligence

I know it's a weird way to start a blog with a negative, but there was a wave of discussion in the last few days that I think serves as a good hook for some topics on which I've been thinking recently. It all started with a post on the Simply Stats blog by Jeff Leek on the caveats of using deep learning in the small-sample-size regime. In sum, he argues that when the sample size is small (which happens a lot in the bio domain), linear models with few parameters perform better than deep nets, even ones with a modicum of layers and hidden units. He goes on to show that a very simple linear predictor, using the ten most informative features, performs better than a simple deep net when trying to classify zeros and ones in the MNIST dataset with only 80 or so samples. This prompted Andrew Beam to write a rebuttal in which a properly trained deep net was able to beat the simple linear model, even with very few training samples.
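Leek's comparison is simple enough to approximate. The sketch below is not his code: it substitutes scikit-learn's small 8x8 digits set for full MNIST, keeps roughly 80 training samples of zeros and ones, and pits a logistic regression on the ten most informative pixels against a small, untuned neural network.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# 8x8 digits as a lightweight stand-in for MNIST; keep only 0s and 1s.
digits = load_digits()
mask = np.isin(digits.target, [0, 1])
X, y = digits.data[mask], digits.target[mask]

# Roughly 80 training samples, echoing Leek's small-sample setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=80, random_state=0, stratify=y)

# Simple linear predictor on the ten most informative pixels.
linear = make_pipeline(SelectKBest(f_classif, k=10),
                       LogisticRegression(max_iter=1000))
linear.fit(X_train, y_train)

# A small fully connected net, deliberately left untuned.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("linear accuracy:", linear.score(X_test, y_test))
print("net accuracy:   ", net.score(X_test, y_test))
```

Which model wins on a run like this depends heavily on tuning and on how few samples you keep, which is essentially the point of the exchange between Leek and Beam.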

