Machine Learning Is The Most Dangerous Trend In Tech (Though Not For The Reasons You Think) - StoAmigo


Today's machine learning systems are generally extremely "narrow." Machine learning works by studying large amounts of data, essentially picking out recognizable patterns and making decisions based on those patterns. Companies spend millions of dollars researching the exact 'triggers' that get people to spend money on these kinds of systems. Sophisticated machine learning plus massive amounts of your data means companies will identify your 'triggers' very quickly.
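The pattern-mining the passage describes can be caricatured in a few lines. This is a minimal sketch with invented data and feature names (the "triggers," records, and lift measure are all assumptions for illustration, not any company's actual system): estimate how much each marketing trigger shifts the probability of a purchase, then surface the strongest one.

```python
# Hypothetical toy data: which marketing "triggers" each user saw (1/0)
# and whether they spent money. All names here are invented for illustration.
records = [
    ({"discount": 1, "urgency": 0, "social_proof": 1}, 1),
    ({"discount": 1, "urgency": 1, "social_proof": 0}, 1),
    ({"discount": 0, "urgency": 1, "social_proof": 0}, 0),
    ({"discount": 0, "urgency": 0, "social_proof": 1}, 0),
    ({"discount": 1, "urgency": 0, "social_proof": 0}, 1),
    ({"discount": 0, "urgency": 1, "social_proof": 1}, 0),
]

def trigger_lift(records):
    """Estimate P(spend | trigger seen) - P(spend | trigger not seen)."""
    lift = {}
    for feature in records[0][0]:
        seen = [y for x, y in records if x[feature]]
        unseen = [y for x, y in records if not x[feature]]
        lift[feature] = sum(seen) / len(seen) - sum(unseen) / len(unseen)
    return lift

lift = trigger_lift(records)
best = max(lift, key=lift.get)  # the trigger most associated with spending
```

With enough data, even a crude estimator like this pinpoints which lever moves a given user; real systems do the same thing with far more features and far more data.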



Yet the inexperienced or rushed data scientist skips past feature engineering, the critical stage at which those invalid fields would have been removed. The experienced data scientist knows to invest significant time in feature engineering to explicitly screen potential bias out of the training data. If our hiring data to date reflects a past human bias of not hiring women at the same rate as men, our machine learning model would learn to emulate that behavior unless we explicitly removed gender from consideration. It's easy to see how bias could creep in when inexperienced or rushed data scientists build models from massive datasets.
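The screening step described above can be sketched very simply. This is a minimal illustration with assumed field names (the record structure and attribute list are not from the source): drop protected attributes from each record before it ever reaches model training.

```python
# Assumed list of protected attributes to screen out before training.
PROTECTED = {"gender", "age", "ethnicity"}

def engineer_features(record):
    """Return a copy of a hiring record with protected attributes removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

# Hypothetical candidate record for illustration.
candidate = {"years_experience": 7, "degree": "BSc", "gender": "F", "age": 34}
features = engineer_features(candidate)
```

Note that dropping the column is only the first step: correlated proxy features (for example, attributes that strongly track gender) can still leak the bias back in, which is exactly why the passage stresses investing real time in this stage.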

Machine Learning and AI at the IoT Edge


As the world of internet devices expands, many devices are being created to process data on their own. An intelligent device can potentially process enough data on the spot to alert the control team that anomalies are occurring. Pushing the machine learning down to the devices themselves lets plant operators take action on the spot, in real time. To do that, the devices need to build patterns that identify anomalies.
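One common way a device builds such a pattern on the spot is to compare each new reading against a short window of recent behavior. This is a toy sketch (the sensor values, window size, and threshold are all assumptions), not any vendor's edge stack:

```python
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that sit far outside the recent window's behavior."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(readings[i] - mean) / stdev > threshold:
            alerts.append(i)  # index the control team would be alerted about
    return alerts

# Hypothetical temperature trace with one obvious spike.
temps = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 35.7, 20.0]
alerts = detect_anomalies(temps)
```

Because the window travels with the device, no round-trip to a data center is needed before raising the alert.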

Banking Chatbots – Chatbots Magazine


According to one report, there were expected to be around 1.2 billion mobile banking users worldwide by the end of 2016. According to a report released by Gartner, consumers will manage 85% of their total business associations with banks through fintech chatbots by 2020. The artificial assistant helps customers save money. If you enjoyed the story, you can read the whole story on banking chatbots and their benefits for the industry here: "How Chatbots are transforming Wall Street and Main Street Banks?"

Cooperatively Learning Human Values


We can attribute the failures above to the mistaken assumption that the reward function communicated to the learning system is the true reward function that the system designer cares about. This suggests a rough strategy for value alignment: the robot observes human behavior, learns the human reward function with inverse reinforcement learning, and behaves according to that function. Our recent work on Cooperative Inverse Reinforcement Learning formalizes and investigates optimal solutions to this value alignment problem -- the joint problem of eliciting and optimizing a user's intended objective. "The Off-Switch Game" analyzes robots' incentives to accept human oversight or intervention.
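The rough strategy above can be caricatured in a few lines. This is a deliberately simplistic sketch, not the paper's algorithm: the "inverse reinforcement learning" step is reduced to estimating reward from observed choice frequencies, and "behaving according to that function" is reduced to a greedy pick. All data and names are invented.

```python
from collections import Counter

# Observed human behavior (hypothetical): which option the human picks.
observed_human_choices = ["tea", "coffee", "coffee", "coffee", "tea", "coffee"]

def infer_reward(choices):
    """Crude stand-in for IRL: reward estimate = relative choice frequency."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

def act(reward):
    """The robot behaves according to the inferred reward function."""
    return max(reward, key=reward.get)

reward = infer_reward(observed_human_choices)
action = act(reward)
```

Even this caricature shows the key shift: the reward function is *learned from the human*, not hard-coded by the designer, which is the mistaken assumption the passage identifies.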

Machine-learning algorithms can dramatically improve ability to predict suicide attempts


After a meta-analysis, or a synthesis of the results in these published studies, they found that no single risk factor had clinical significance in predicting suicidal ideation, attempts or completion. The authors also found that the ability of researchers to find factors that predict suicidal thoughts and behaviors did not improve over the 50 years they surveyed, and that some of the most popular factors to study--including mood disorders, substance abuse and demographics--are some of the weakest predictors. "Few would expect hopelessness measured as an isolated trait-like factor to accurately predict suicide death over the course of a decade," the researchers write. Colin Walsh, an internist and data scientist at Vanderbilt University Medical Center, along with FSU's Franklin and Ribeiro, looked at millions of anonymized health records and compared 3,250 clear cases of nonfatal suicide attempts with a random group of patients.

How to eliminate social bias from artificial intelligence?


Given that computers, software, artificial intelligence, machine learning and other 'intelligent' systems are human creations, it makes sense that such systems may contain social biases. Researchers from the University of Massachusetts Amherst have put forward the case that greater care needs to be taken when developing artificial intelligence so that social biases are minimized. As an example, the researchers note that ethnic bias exists in online advertising delivery systems. To aid developers in spotting social bias, Professor Meliou's group has created a new technique termed "Themis."
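The core idea behind fairness-testing tools of this kind can be sketched simply. This is an illustration of the general causal-testing idea, not Themis's actual API: feed a model the same input twice, differing only in a sensitive attribute, and flag any change in the decision. The model, field names, and threshold below are all invented.

```python
def biased_model(applicant):
    """Deliberately biased toy model, used only to demonstrate the test."""
    score = applicant["income"] / 10_000
    if applicant["ethnicity"] == "group_b":  # the bias we want to detect
        score -= 1
    return score >= 5

def causal_discrimination_test(model, applicant, attribute, values):
    """True if flipping `attribute` alone changes the model's decision."""
    outcomes = set()
    for v in values:
        probe = dict(applicant, **{attribute: v})  # vary only one attribute
        outcomes.add(model(probe))
    return len(outcomes) > 1

applicant = {"income": 52_000, "ethnicity": "group_a"}
flagged = causal_discrimination_test(
    biased_model, applicant, "ethnicity", ["group_a", "group_b"]
)
```

The appeal of this style of test is that it treats the model as a black box: a developer can probe a trained system for social bias without needing access to its internals.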

The importance of building ethics into artificial intelligence


A crucial step toward building a secure and thriving AI industry is collectively defining what ethical AI means for people developing the technology – and people using it. At Sage, we define ethical AI as the creation of intelligent machines that work and react like humans, built with the ability to autonomously conduct, support or manage business activity across disciplines in a responsible and accountable way. Consequently, the industry should focus on efforts to develop and grow a diverse talent pool that can build AI technologies to enhance business operations and address specific sets of workplace issues, while ensuring that it is accountable. Hopefully, AI's human co-workers – including people actually building the technology – will learn vital AI management skills, adopt strong ethics and hold themselves more accountable in the process.

DNA test data may not alter a person's health habits

The Japan Times

The field also gained a new entrant in July, when a company called Helix launched an online marketplace for DNA tests, including some for genetic health risk. Last year, researchers published an analysis that combined 18 studies of people who got doctor-ordered DNA test results about disease risks. In an interview, Dr. James Lu, a co-founder of Helix, agreed that the evidence on whether people change their lifestyles in response to DNA information is mixed. Dr. Robert C. Green of Brigham and Women's Hospital in Boston, whose research indicates DNA test results can change health behavior, said cases like Collins's are just the point.

Game AI: Non-Human Behavior Part 2


This is where we can see some interesting behavior trees. If food is plentiful, that decision is easy, but if a creature has gone a long time without food it may take bigger and bigger risks to find it, encroaching into areas it knows to be dangerous. Some games, especially hunting games and some survival games, attempt highly realistic simulated environments with a balance of creatures that exist for the player to hunt. In theHunter: Call of the Wild, the designers knew that players wanted a realistic hunting experience, and that players would often spend a long time watching an animal before taking a shot, so they did extensive research on how those creatures behaved to ensure a believable experience for the player.
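The hunger-driven risk-taking described above maps naturally onto a selector node in a behavior tree. Here is a minimal sketch (the node names, hunger thresholds, and creature fields are all invented for illustration, not taken from any shipped game): safe behaviors are tried first, and riskier ones unlock as hunger rises.

```python
def choose_action(creature):
    """Selector: prefer safe behaviors, escalate risk as hunger grows."""
    if creature["food_nearby"]:
        return "eat"                       # easy decision when food is plentiful
    if creature["hunger"] < 50:
        return "graze_safe_area"           # low hunger: stay somewhere safe
    if creature["hunger"] < 80:
        return "forage_edge_of_territory"  # getting hungry: push the boundary
    return "enter_dangerous_area"          # desperate: risk known dangers

action = choose_action({"food_nearby": False, "hunger": 95})
```

In a full behavior tree these branches would be subtrees of their own (pathfinding, fleeing, grazing animations), but the top-level structure — ordered fallbacks gated by the creature's state — is the pattern that produces the escalating risk the passage describes.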