Artificial Intelligence In Healthcare -- Everything Artificial Intelligence + Robotics + IoT +

#artificialintelligence

Artificial intelligence (AI), machine learning, NLP, robotics, and automation are increasingly prevalent across industries, and healthcare is no exception. These technologies have the potential to transform every aspect of health care, from patient care to the development and production of new experimental drugs that can reach patients faster than through traditional methods. Numerous research studies suggest that AI can outperform humans at key healthcare tasks, such as diagnosing ailments; one notable example is AI 'outperforms' doctors diagnosing breast cancer¹. Artificial intelligence is not a single technology but a collection of technologies working in concert. Tech firms and startups alike are working assiduously on these same problems.


MIT researcher held up as model of how algorithms can benefit humanity

#artificialintelligence

In June, when MIT artificial intelligence researcher Regina Barzilay went to Massachusetts General Hospital for a mammogram, her data were run through a deep learning model designed to assess her risk of developing breast cancer, which she had been diagnosed with once before. The workings of the algorithm, which predicted that her risk was low, were familiar: Barzilay helped build that very model, after being spurred by her 2014 cancer diagnosis to pivot her research to health care. Barzilay's work in AI, which ranges from tools for early cancer detection to platforms to identify new antibiotics, is increasingly garnering recognition: On Wednesday, the Association for the Advancement of Artificial Intelligence named Barzilay as the inaugural recipient of a new annual award honoring an individual developing or promoting AI for the good of society. The award comes with a $1 million prize sponsored by the Chinese education technology company Squirrel AI Learning. While there are already prizes in the AI field, notably the Turing Award for computer scientists, those existing awards are typically "more focused on scientific, technical contributions and ideas," said Yolanda Gil, a past president of AAAI and an AI researcher at the University of Southern California.


OpenCV Sudoku Solver and OCR - PyImageSearch

#artificialintelligence

In this tutorial, you will create an automatic Sudoku puzzle solver using OpenCV, Deep Learning, and Optical Character Recognition (OCR). My wife is a huge Sudoku nerd. Every time we travel, whether it be a 45-minute flight from Philadelphia to Albany or a 6-hour transcontinental flight to California, she always has a Sudoku puzzle with her. The funny thing is, she prefers the printed Sudoku puzzle books. She hates the digital/smartphone app versions and refuses to play them. I'm not a big puzzle person myself, but one time, we were sitting on a flight, and I asked: How do you know if you solved the puzzle correctly?
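
The full tutorial walks through the pipeline step by step; as a rough illustration of its first stage, here is a minimal OpenCV sketch that locates the puzzle grid by thresholding the image and searching for the largest four-sided contour. The file name sudoku.png is a placeholder, and the tutorial's actual implementation may differ in its details.

    import cv2
    import imutils  # PyImageSearch's convenience library

    # Load the puzzle photo and convert to grayscale (path is a placeholder).
    image = cv2.imread("sudoku.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 3)

    # Adaptive thresholding makes the grid lines stand out under uneven lighting.
    thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)

    # Find contours and keep the largest one that approximates to four corners:
    # that quadrilateral is assumed to be the Sudoku board itself.
    contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)

    puzzle_outline = None
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:
            puzzle_outline = approx
            break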


OpenAI 'GPT-f' Delivers SOTA Performance in Automated Mathematical Theorem Proving

#artificialintelligence

San Francisco-based AI research laboratory OpenAI has added another member to its popular GPT (Generative Pre-trained Transformer) family. In a new paper, OpenAI researchers introduce GPT-f, an automated prover and proof assistant for the Metamath formalization language. While artificial neural networks have made considerable advances in computer vision, natural language processing, robotics and so on, OpenAI believes they also have potential in the relatively underexplored area of reasoning tasks. The new research explores this potential by applying a transformer language model to automated theorem proving. Automated theorem proving tends to require general and flexible reasoning to efficiently check the correctness of proofs.
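
GPT-f pairs a generative transformer with a proof search: the model proposes candidate next steps, a verifier checks them against the formal system, and the most promising open states are expanded first. The sketch below is a loose, hypothetical illustration of such a loop; model.sample_steps, verifier.apply, and verifier.is_proved are invented stand-ins, not OpenAI's or Metamath's actual interfaces.

    import heapq
    import itertools

    def best_first_proof_search(goal, model, verifier, budget=1000):
        # Hypothetical interfaces: model.sample_steps(state) yields
        # (step, log_prob) candidates; verifier.apply(state, step) returns
        # the resulting proof state or None if the step is invalid;
        # verifier.is_proved(state) checks whether the goal is closed.
        # Proof states are assumed hashable.
        counter = itertools.count()              # tie-breaker for equal scores
        frontier = [(0.0, next(counter), goal)]  # min-heap on -log P(proof so far)
        seen = {goal}
        while frontier and budget > 0:
            neg_logp, _, state = heapq.heappop(frontier)
            budget -= 1
            if verifier.is_proved(state):
                return state                     # a machine-checked proof
            for step, log_prob in model.sample_steps(state):
                new_state = verifier.apply(state, step)
                if new_state is not None and new_state not in seen:
                    seen.add(new_state)
                    heapq.heappush(frontier,
                                   (neg_logp - log_prob, next(counter), new_state))
        return None                              # budget exhausted, no proof found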


Road To Machine Learning Mastery: Interview With Kaggle GM Vladimir Iglovikov

#artificialintelligence

"I did not have lines in the resume that showed my ML expertise. I did not have a Data Science industry experience or relevant papers. For this week's ML practitioner's series, Analytics India Magazine got in touch with Vladimir Iglovikov, an ex-Spetsnaz, theoretical physicist and also a Kaggle GrandMaster. In this exclusive interview, he shares valuable information from his journey in the world of data science. After a brief stint in Russian special forces, Iglovikov enrolled for the Master's programme in theoretical Physics at the St.Petersburg State University whose distinguished alumni include President Vladimir Putin. In September 2010, Iglovikov moved to California to pursue a PhD in Physics from UC Davis and on completion of the degree, he moved to Silicon Valley in the summer of 2015. Currently, Iglovikov works as Sr. Software Engineer at Lyft, a ride-sharing company that operates in the United States and Canada. His work is centered around building robust machine learning models for autonomous vehicles at Lyft, Level5. Post PhD, Iglovikov had two options in hand. One was to pursue postdoc, and the other was to get into the industry as a software engineer. His career took a new turn when one of his friends introduced him to the world of data science. "I attended a lecture where the presenter talked about Data Science as the 4th paradigm of scientific discovery.


Researchers examine uncertainty in medical AI papers going back a decade

#artificialintelligence

In the big data domain, researchers need to ensure that conclusions are consistently verifiable. But that can be particularly challenging in medicine because physicians themselves aren't always sure about disease diagnoses and treatment plans. To investigate how machine learning research has historically handled medical uncertainties, scientists at the University of Texas at Dallas; the University of California, San Francisco; the National University of Singapore; and over half a dozen other institutions conducted a meta-survey of studies over the past 30 years. They found that uncertainty arising from imprecise measurements, missing values, and other errors was common among data and models but that the problems could potentially be addressed with deep learning techniques. The coauthors sought to quantify the prevalence of two types of uncertainty in the studies: structural uncertainty and uncertainty in model parameters.
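
The article doesn't specify which deep learning techniques the coauthors had in mind, but a common way to expose uncertainty in model parameters is Monte Carlo dropout: keep dropout active at inference time and treat the spread of repeated stochastic predictions as an uncertainty estimate. A minimal PyTorch sketch, with an invented toy model:

    import torch
    import torch.nn as nn

    # Toy diagnostic model (hypothetical); the dropout layer is the key ingredient.
    model = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

    def mc_dropout_predict(model, x, n_samples=50):
        """Return the mean prediction and its standard deviation across
        stochastic forward passes (Monte Carlo dropout)."""
        model.train()  # keeps dropout active even at prediction time
        with torch.no_grad():
            preds = torch.stack([model(x) for _ in range(n_samples)])
        return preds.mean(dim=0), preds.std(dim=0)

    x = torch.randn(8, 32)  # a batch of 8 synthetic patient feature vectors
    mean, std = mc_dropout_predict(model, x)
    # A high std flags cases where the model's parameters disagree, i.e. cases
    # a clinician might want to review rather than trust automatically.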


Machine Learning Helps Plasma Physics Researchers Understand Turbulence Transport

#artificialintelligence

[Image caption: a snapshot of turbulence density and vorticity from a simulation using SDSC's 'Comet' supercomputer illustrates a notable physics concept: the formation of zonal (i.e. banded) flows.] For more than four decades, UC San Diego Professor of Physics Patrick H. Diamond and his research group have been advancing fundamental concepts in plasma physics, an important aspect of furthering advances in fusion energy. Most recently, Diamond worked with graduate student Robin Heinonen on a model reduction study that used the Comet supercomputer at the San Diego Supercomputer Center at the University of California San Diego to show how machine learning produced a novel model for plasma turbulence. Diamond and Heinonen say that advances in machine learning, such as new deep learning techniques, have given them new tools to better understand the self-organization that emerges from a seemingly chaotic process. "Turbulence and its transport is chaotic in a sense, but this chaos is ordered and constrained," said Heinonen, who co-authored Turbulence Model Reduction by Deep Learning with Diamond in the journal Physical Review E. "Moreover, in certain turbulent systems, the chaos conspires to spontaneously form large, long-lived coherent structures, and in many cases we have only a tenuous understanding of why and how. There are definitely aspects of structure formation and self-organization which we do understand, but it's still an active area of research."
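
The study's specific architecture isn't described in this excerpt; as a loose illustration of what "model reduction by deep learning" can mean, the sketch below fits a small network that maps coarse turbulence features (e.g., local gradients) to the transport fluxes observed in simulation, yielding a cheap surrogate for the full dynamics. All names, shapes, and data here are hypothetical.

    import torch
    import torch.nn as nn

    # Hypothetical training data extracted from a turbulence simulation:
    # 4 coarse-grained features per sample -> 1 observed turbulent flux.
    features = torch.randn(10_000, 4)   # stand-in for simulation output
    fluxes = torch.randn(10_000, 1)     # stand-in for measured transport

    surrogate = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                              nn.Linear(64, 64), nn.Tanh(),
                              nn.Linear(64, 1))
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(surrogate(features), fluxes)
        loss.backward()
        optimizer.step()
    # The trained surrogate is the "reduced model": it approximates the
    # flux response of the expensive simulation at a fraction of the cost.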


What is GPT-3? Everything your business needs to know about OpenAI's breakthrough AI language program

ZDNet

GPT-3 is a computer program created by the privately held San Francisco startup OpenAI. It is a gigantic neural network, and as such, it is part of the deep learning segment of machine learning, which is itself a branch of the field of computer science known as artificial intelligence, or AI. The program is better than any prior program at producing lines of text that sound like they could have been written by a human. The reason that such a breakthrough could be useful to companies is that it has great potential for automating tasks. GPT-3 can respond to any text that a person types into the computer with a new piece of text that is appropriate to the context. Type a full English sentence into a search box, for example, and you're likely to get back a relevant response in full sentences. That means GPT-3 can conceivably amplify human effort in a wide variety of situations, from questions and answers for customer service to due-diligence document search to report generation. The program is currently in a private beta for which people can sign up on a waitlist. It's being offered by OpenAI as an API accessible through the cloud, and companies that have been granted access have developed some intriguing applications that use the generation of text to enhance all kinds of programs, from simple question-answering to producing programming code. Along with the potential for automation come great drawbacks. GPT-3 is compute-hungry, putting it beyond the use of most companies in any conceivable on-premises fashion. Its generated text can be impressive at first blush, but long compositions tend to become somewhat senseless.
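
Access is through OpenAI's cloud API. During the beta, the Python client exposed a completion endpoint along these lines; treat this as a sketch based on the beta-era interface, as engine names and parameters may change:

    import openai

    openai.api_key = "YOUR_API_KEY"  # issued once you're off the waitlist

    # Ask the model to continue a prompt; "davinci" was the largest
    # beta-era engine, and max_tokens caps the length of the reply.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Summarize the key risks in this vendor contract:\n...",
        max_tokens=150,
        temperature=0.3,  # lower temperature -> more focused, less rambling text
    )
    print(response.choices[0].text)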


PyTorch drives next-gen intelligent farming machines

#artificialintelligence

PyTorch is helping to power a new generation of AI-enhanced farming machines. For farmers, weeds pose a very real threat to the health of crops at a time when global population growth is raising food demand while making resources such as land and water increasingly scarce. Seeking ways to help farmers produce more food with fewer resources, California-based Blue River Technology, a subsidiary of John Deere, has turned to artificial intelligence and robotics technology. The company's See & Spray robotic farming machine combines machine learning (ML) and computer vision to identify weeds among crops in real time and to treat weeds while leaving crops unharmed -- giving farmers a more consistent, precise, and efficient means of weeding crops. As the See & Spray machine moves through a field, it collects images of crops and weeds through a high-resolution camera array.
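
The article doesn't detail Blue River's model, but the pipeline it describes (camera frames in, per-pixel crop/weed decisions out) maps naturally onto semantic segmentation in PyTorch. A minimal hypothetical sketch using an off-the-shelf torchvision network:

    import torch
    import torchvision.transforms as T
    from torchvision.models.segmentation import fcn_resnet50

    # Stand-in for Blue River's proprietary model: an off-the-shelf FCN,
    # assumed here to be fine-tuned for 3 classes (background, crop, weed).
    model = fcn_resnet50(num_classes=3)
    model.eval()

    preprocess = T.Compose([T.ToTensor(),
                            T.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])])

    def classify_frame(frame):
        """Return a per-pixel class map (0=background, 1=crop, 2=weed)
        for one camera frame (a PIL image)."""
        x = preprocess(frame).unsqueeze(0)   # add batch dimension
        with torch.no_grad():
            out = model(x)["out"]            # [1, 3, H, W] class scores
        return out.argmax(dim=1).squeeze(0)  # [H, W] predicted labels

    # Pixels labeled "weed" would then drive the sprayer nozzles; pixels
    # labeled "crop" are left untouched.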