Collaborating Authors

 yamin



AI platform CEO talks new tech detecting plagiarism following Harvard scandal: 'As prevalent as ever'

FOX News

Alon Yamin, co-founder and CEO of the AI-based text analysis platform Copyleaks, is helping to combat plagiarism in education, an issue thrown into sharp relief by the recent Harvard scandal. Following the controversial accusations against the school's former president, Claudine Gay, Yamin emphasized that tackling plagiarism is more important than ever, especially with the rise of AI. "A year ago, many people considered plagiarism a moot point following the expansion of AI. What was there to worry about if AI was writing everything? But as we've seen in the news over the last few months, plagiarism hasn't gone anywhere. It seems to be as prevalent as ever," Yamin told Fox News Digital.


Deep Neural Networks Help to Explain Living Brains

#artificialintelligence

In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position and other properties -- something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. "I remember very distinctly the time when we found a neural network that actually solved the task," he said. It was 2 a.m., a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. "I was really pumped," he said. It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years.


Deep Neural Networks Are Helping Decipher How Brains Work

WIRED

In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position, and other properties -- something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. "I remember very distinctly the time when we found a neural network that actually solved the task," he said. (Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.)



AI researchers explore solutions for real-life health challenges - Scope

#artificialintelligence

When most of us stumble and fall, it's likely we'll end up with bruises, a chipped tooth or maybe scraped-up knees and elbows. But as we age, various factors conspire to increase both the chances that we'll fall and how badly we'll be injured when we do. In the United States, the Centers for Disease Control and Prevention reports, falls are the leading cause of injury-related deaths among adults 65 and older; the death rate from falls in that age group increased 30% between 2009 and 2018, and falls cost the country's health care system about $50 billion a year. While there are things older adults can do to improve balance and strength, Stanford researcher Karen Liu, PhD, is leading a project to create a wearable robotic device to help predict and prevent such falls. Liu, a professor of computer science, is one of multiple researchers who recently received funding through the new Stanford Institute for Human-Centered Artificial Intelligence Hoffman-Yee Research Grant Program.


How AI and neuroscience drive each other forwards

#artificialintelligence

Chethan Pandarinath wants to enable people with paralysed limbs to reach out and grasp with a robotic arm as naturally as they would their own. To help him meet this goal, he has collected recordings of brain activity in people with paralysis. His hope, which is shared by many other researchers, is that he will be able to identify the patterns of electrical activity in neurons that correspond to a person's attempts to move their arm in a particular way, so that the instruction can then be fed to a prosthesis. Essentially, he wants to read their minds. "It turns out, that's a really challenging problem," says Pandarinath, a biomedical engineer at the Georgia Institute of Technology in Atlanta. "These signals from the brain -- they're really complicated."
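The decoding problem Pandarinath describes -- mapping recorded neural activity patterns to intended movements -- can be caricatured with a toy nearest-centroid classifier. This is purely illustrative and is not his method; real brain-computer-interface decoders use far richer models, and all names and numbers below are invented for the sketch.

```python
import math

def fit_centroids(trials):
    """trials: dict mapping intended direction -> list of firing-rate vectors.
    Returns the mean firing-rate vector (centroid) per direction."""
    centroids = {}
    for direction, vectors in trials.items():
        n = len(vectors)
        centroids[direction] = [sum(v[i] for v in vectors) / n
                                for i in range(len(vectors[0]))]
    return centroids

def decode(centroids, rates):
    """Return the direction whose centroid is closest to the observed rates."""
    return min(centroids, key=lambda d: math.dist(centroids[d], rates))

# Simulated recordings from three "neurons" during two intended movements.
trials = {
    "left":  [[9.0, 2.1, 4.8], [8.7, 1.9, 5.2]],
    "right": [[2.2, 8.8, 5.0], [1.8, 9.1, 4.9]],
}
centroids = fit_centroids(trials)
intended = decode(centroids, [8.5, 2.0, 5.0])  # a left-like firing pattern
```

The sketch captures only the core idea -- that attempted movements leave statistically distinguishable signatures in population activity -- while the "really complicated" part Pandarinath mentions is precisely that real signals are noisy, high-dimensional, and nonstationary.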


The Journal of Open Source Software

#artificialintelligence

Osprey is a tool for hyperparameter optimization of machine learning algorithms in Python. Hyperparameter optimization can often be an onerous process for researchers, due to time-consuming experimental replicates, non-convex objective functions, and the constant tension between exploration of the global parameter space and local optimization (Jones, Schonlau, and Welch 1998). We've designed Osprey to provide scientists with a practical, easy-to-use way of finding optimal model parameters. The software works seamlessly with scikit-learn estimators (Pedregosa et al. 2011) and supports many different search strategies for choosing the next set of parameters with which to evaluate a given model, including Gaussian processes (GPy 2012) and tree-structured Parzen estimators (Yamins, Tax, and Bergstra 2013), as well as random and grid search. Because hyperparameter optimization is an embarrassingly parallel problem, Osprey can easily scale to hundreds of concurrent processes by executing a simple command-line program multiple times.
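The simplest of the search strategies the excerpt lists, random search, can be sketched in a few lines of plain Python. This is not Osprey's API -- the function, parameter names, and toy objective below are invented for illustration; it only shows the strategy of sampling independent parameter sets and keeping the best-scoring one.

```python
import random

def random_search(objective, space, n_iter=50, seed=0):
    """Minimal random-search hyperparameter optimizer.

    space: dict mapping parameter name -> (low, high) sampling range.
    Returns the best-scoring parameter dict and its score (lower is better).
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        # Each draw is independent, which is what makes the problem
        # embarrassingly parallel: iterations could run as separate processes.
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "validation loss", minimized at C=1.0, gamma=0.1 (stand-ins for
# typical estimator hyperparameters).
def toy_loss(p):
    return (p["C"] - 1.0) ** 2 + (p["gamma"] - 0.1) ** 2

best, score = random_search(toy_loss, {"C": (0.01, 10.0), "gamma": (0.001, 1.0)})
```

Gaussian-process and tree-structured Parzen strategies differ only in how the next `params` is chosen: instead of sampling uniformly, they fit a model to the scores seen so far and propose points that model predicts are promising.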