Architecting a Machine Learning Pipeline

#artificialintelligence

Funneling incoming data into a data store is the first step of any ML workflow. The key point is that the data is persisted without undertaking any transformation at all, so that we have an immutable record of the original dataset. Data can be fed in from various sources, either obtained on request (pub/sub) or streamed from other services. NoSQL document databases are ideal for storing large volumes of rapidly changing structured and/or unstructured data since they are schema-less. They also offer distributed, scalable, replicated data storage.
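As a rough illustration of that "persist first, transform later" step, here is a minimal Python sketch that writes incoming messages into a MongoDB document store exactly as they arrive; the connection string and the ml_pipeline/raw_events names are placeholders for whatever your own store uses, not anything prescribed by the article.

```python
import json
from datetime import datetime, timezone

from pymongo import MongoClient  # assumes a MongoDB document store; any schema-less store works similarly

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
raw_events = client["ml_pipeline"]["raw_events"]    # placeholder database/collection names

def ingest(message: bytes) -> None:
    """Persist the incoming payload exactly as received, adding only ingestion
    metadata, so the original record remains an immutable source of truth."""
    raw_events.insert_one({
        "received_at": datetime.now(timezone.utc),
        "payload": json.loads(message),  # stored as-is: no cleaning, no transformation
    })
```

Because the payload is stored verbatim, any downstream transformation can later be re-run from this untouched copy of the original data.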


Introduction to AutoML with MLBox

#artificialintelligence

Today's post is very special. It's written in collaboration with Axel de Romblay, the author of the MLBox AutoML package, which has gained a lot of popularity in recent years. If you haven't heard of this library, go and check it out on GitHub: it offers interesting features, is gaining in maturity, and is under active development. In this post, we'll show you how you can easily use it to train an automated machine learning pipeline for a classification problem. MLBox has been presented at many machine learning meetups.
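As a preview of what the post walks through, here is a minimal sketch following the quick-start pattern from MLBox's documentation; the train.csv/test.csv paths and the target column name are placeholders, and the scoring metric and number of evaluations are arbitrary choices for illustration.

```python
from mlbox.preprocessing import Reader, Drift_thresholder
from mlbox.optimisation import Optimiser
from mlbox.prediction import Predictor

paths = ["train.csv", "test.csv"]   # placeholder file paths
target_name = "target"              # placeholder target column

data = Reader(sep=",").train_test_split(paths, target_name)  # read and clean the raw files
data = Drift_thresholder().fit_transform(data)               # drop features that drift between train and test

opt = Optimiser(scoring="accuracy", n_folds=5)
best = opt.optimise(None, data, max_evals=10)  # None falls back to MLBox's default search space

Predictor().fit_predict(best, data)  # refit the best pipeline and predict on the test set
```

The whole pipeline (preprocessing, hyperparameter search, prediction) fits in a handful of lines, which is the point the post develops in more detail.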


Linear Regression with Gradient Descent from Scratch in Numpy

#artificialintelligence

I strongly advise you to read the article linked above. It lays the foundations for the topic, and some of the math is already discussed there. To start out, I'll define my dataset -- only three points that are in a linear relationship. I've chosen so few points only to keep the math short; needless to say, the math wouldn't be any more complex for a larger dataset, just longer, and I don't want to make a silly arithmetic mistake. Then I'll set the coefficients beta_0 and beta_1 to some constants and define the cost function as the sum of squared residuals (SSR/SSE).
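To make that concrete, here is a minimal NumPy sketch of the same setup: three points in a linear relationship, coefficients beta_0 and beta_1 initialised to constants, an SSR cost, and plain gradient-descent updates. The specific values and the learning rate are illustrative choices, not necessarily the ones used in the article.

```python
import numpy as np

# Three points in a (perfect) linear relationship: y = 2x
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# Coefficients initialised to constants
beta0, beta1 = 0.0, 0.0
learning_rate = 0.01

def ssr(beta0, beta1):
    """Cost function: sum of squared residuals (SSR/SSE)."""
    residuals = y - (beta0 + beta1 * x)
    return np.sum(residuals ** 2)

for _ in range(5000):
    residuals = y - (beta0 + beta1 * x)
    # Partial derivatives of the SSR with respect to beta0 and beta1
    grad_beta0 = -2.0 * np.sum(residuals)
    grad_beta1 = -2.0 * np.sum(residuals * x)
    beta0 -= learning_rate * grad_beta0
    beta1 -= learning_rate * grad_beta1

print(beta0, beta1, ssr(beta0, beta1))  # converges towards beta0 ≈ 0, beta1 ≈ 2
```

Each iteration moves the coefficients a small step against the gradient of the SSR, which is exactly the procedure the article derives by hand.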


Swift for TensorFlow aims for high-performance machine learning

#artificialintelligence

Google developers behind Swift for TensorFlow, which tunes the Apple-designed Swift programming language for machine learning applications, shared project roadmap information in a recent talk. Future plans for Swift for TensorFlow include capabilities such as C interoperability, improved automatic differentiation, and support for distributed training. Swift for TensorFlow is an early-stage, Google-led project that integrates Google's TensorFlow machine learning library with Swift, the modern general-purpose language created by Apple. According to the Swift for TensorFlow developers, using Swift allows more powerful algorithms to be expressed in new ways and makes functions easy to differentiate via generalized differentiation APIs. Open source Swift is described on the Swift for TensorFlow project website as easy to use and elegant, with advantages such as a strong type system, which helps developers catch errors earlier and promotes good API design.


Reach and IBM launch brand-safety AI to tackle unnecessary keyword blacklisting

#artificialintelligence

Reach, the Daily Mirror and Daily Express publisher, has launched a brand-safety platform created by IBM that it hopes will curb articles being unnecessarily blacklisted from advertising. The platform, called Mantis, uses IBM Watson's artificial-intelligence engine and machine learning to check whether content is appropriate. Reach began looking for a tech solution last year in response to a significant proportion of news content being blacklisted due to the "less intuitive and less sophisticated solutions" currently on the market. The four main players offering third-party brand-safety solutions for publishers are ADmantX (which was hired by newspaper sales joint venture The Ozone Project earlier this year), DoubleVerify, Grapeshot and Integral Ad Science. Like ADmantX, Mantis uses natural language processing to decipher context in language.


Building the Public's Trust in AI Is Key to Coming Guidance, White House Official Says

#artificialintelligence

The White House Office of Science and Technology Policy's Assistant Director for Artificial Intelligence offered fresh details Wednesday about a memo being developed to help foster public trust and build agencies' confidence in regulating artificial intelligence technologies. "This is a memo directed to agencies that suggests regulatory and non-regulatory principles for how you oversee the use of AI in the private sector," Lynne Parker, OSTP's assistant director for artificial intelligence, said. "So these will establish some common principles [and] some predictability across agencies in terms of how they think about regulatory and non-regulatory approaches to the use of AI." In February, President Trump issued an executive order to accelerate American advancements in AI. One of the key priorities of the order, Parker noted, is to "foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people."


Employees' trust in workplace AI growing - HRExecutive.com

#artificialintelligence

There used to be a time in the not-too-distant past when we feared the oncoming hordes of robots in the workplace. That time is no longer. People now have more trust in robots than in their managers, according to the second annual AI at Work study conducted by Oracle and research firm Future Workplace. The study of 8,370 employees, managers and HR leaders across 10 countries found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent. The latest advancements in machine learning and artificial intelligence are rapidly reaching the mainstream, resulting in a massive shift in the way people across the world interact with technology and their teams, says Emily He, senior vice president of human capital management for Oracle's cloud business group.


MIT develops a way for robots to grasp and manipulate objects much faster – TechCrunch

#artificialintelligence

Picking stuff up seems easy, right? It is – for humans with powerful brain computers that instantly and intuitively figure out everything needed to get the job done. But for robots, even advanced robots, the compute required is surprisingly complex, especially if you want the robot to not, you know, break the thing it's grabbing. MIT has developed a new way to speed up the planning involved in a robot grasping an object, making it "significantly" faster – reducing the total time from as much as ten minutes or more to under a second. This could have big practical benefits in settings where robots are already in use, including industrial environments.


How AI and Machine Learning are Transforming the Way Discoveries are Made - PULSE

#artificialintelligence

James Collins, Ph.D., is on a mission to end antibiotic resistance with the help of artificial intelligence and machine learning. Collins, the Termeer Professor of Bioengineering in the Department of Biological Engineering and Institute for Medical Engineering & Science at the Massachusetts Institute of Technology (MIT), has founded several companies based on research in synthetic biology. According to Collins, "the generic principles that apply to physical systems don't extend so well to living organic systems." Collins explains that synthetic biology starts by looking at living systems from an engineering perspective. Rather than simply understanding how everything works, synthetic biologists must determine whether it's possible to reverse-engineer cells to create desirable outcomes.


Can Artificial Intelligence Tell the Difference Between Ischemic and Hemorrhagic Stroke? – Young Scientists Journal

#artificialintelligence

Stroke is a very common and serious medical problem. Ischemic strokes happen when a blood clot blocks blood flow to the brain, causing brain cells to die. A hemorrhagic stroke happens when a brain artery breaks and blood flows freely into the brain. Telling the difference between the two types of stroke is very important, as their treatments are quite different and the longer a stroke goes without treatment, the more brain cells die. Artificial intelligence (AI), a computerised form of learning, has enabled great progress in recognising objects and situations in the technological world, but it has rarely been used in medicine.
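For readers wondering what using AI here could look like in practice, the sketch below shows the general shape of a binary image classifier that, given labelled CT slices, could be trained to separate the two stroke types. It is purely illustrative: the random arrays are placeholders standing in for real scans, and this is not the model (if any) used in the work the article describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
scans = rng.random((200, 64 * 64))        # placeholder for 200 flattened 64x64 CT slices
labels = rng.integers(0, 2, size=200)     # placeholder labels: 0 = ischemic, 1 = hemorrhagic

X_train, X_test, y_train, y_test = train_test_split(
    scans, labels, test_size=0.25, random_state=0
)

# A simple linear classifier; real systems would typically use a convolutional network
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```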