The acquisition will bring Zementis' predictive analytics to Software AG's real-time streaming analytics platform. Software AG has acquired California-based Zementis for an undisclosed sum in a move designed to bolster its internet of things capability. Zementis offers software for 'deep learning', which plays a crucial role in machine learning, data science and the fundamental technology that drives artificial intelligence (AI) development. According to Software AG, advances in machine learning and AI are being applied in next-generation Internet of Things (IoT) applications such as self-driving cars, personal digital assistants, medical diagnosis, predictive maintenance and robotics. Software AG has already incorporated Adaptive Decision and Predictive Analytics (ADAPA) from Zementis into its Digital Business Platform to provide its clients with comprehensive insights for real-time business analytics.
Artificial intelligence (AI) is one of the most evocative and confusing terms in technology. According to Accenture, AI could add an additional US$814 billion to the UK's economy by 2035, with annual growth rates increasing from 2.5 percent to 3.9 percent. We have seen a machine master the complex game of Go, previously thought to be one of the most difficult challenges for machine intelligence. We have witnessed vehicles operating autonomously, including a convoy of trucks crossing Europe with only a single operator to monitor systems. We have seen a proliferation of robotic counterparts and automated means of accomplishing a variety of tasks.
Artificial intelligence has been a far-flung goal of computing since the conception of the computer, but we may be getting closer than ever with new cognitive computing models. Cognitive computing is a mashup of cognitive science (the study of the human brain and how it functions) and computer science, and the results will have far-reaching impacts on our private lives, healthcare, business, and more. The goal of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms built on data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works. While computers have been faster than humans at calculation and raw processing for decades, they haven't been able to accomplish tasks that humans take for granted as simple, like understanding natural language or recognizing unique objects in an image.
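To make the idea of a "self-learning" pattern recognizer concrete, here is a minimal sketch (not from any of the systems mentioned above, and far simpler than real cognitive computing models): a nearest-centroid classifier that learns the average feature vector of each class from labelled examples, then assigns a new example to the class whose centroid is closest. The feature values and labels are invented for illustration.

```python
# Illustrative sketch: a tiny "learn from examples" pattern recognizer.
# Real cognitive computing systems use far richer models; this only shows
# the core idea of learning class prototypes from data.

from collections import defaultdict
from math import dist

def train_centroids(examples):
    """Learn one centroid (mean feature vector) per label.

    examples: iterable of (feature_vector, label) pairs.
    """
    by_label = defaultdict(list)
    for features, label in examples:
        by_label[label].append(features)
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in by_label.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy data: two made-up image features (e.g. brightness, edge density).
training = [
    ((0.9, 0.1), "cat"), ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "car"), ((0.2, 0.8), "car"),
]
centroids = train_centroids(training)
print(classify(centroids, (0.85, 0.15)))  # -> cat
print(classify(centroids, (0.15, 0.85)))  # -> car
```

The point of the sketch is that no rule for "cat" or "car" is ever written down; the decision boundary emerges from the labelled examples, which is the sense in which such algorithms are "self-learning."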
A good friend recently told me that it takes a special kind of nerd to appreciate what Google's AlphaGo did to international Go champion Lee Sedol: a nerd who is both a Go nerd and a computer nerd. On the Go side, I have recently become enamored of this massively complex game, which has exponentially more outcomes and dimensions than chess. On the tech side, many of us assumed that after Deep Blue beat Kasparov at chess, any other game was a foregone conclusion. In fact, it has taken twenty years for a computer to rise to the level of top-ranked Go players, because high-level Go relies less on calculating a limited set of future outcomes and far more on intuition. Challenges like this are not just an interesting competition.
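The "exponentially more outcomes" claim can be made tangible with a back-of-the-envelope calculation. The figures below are commonly cited rough estimates, not exact values: chess offers on the order of 35 legal moves per turn over roughly 80 plies, while Go offers around 250 legal moves per turn over roughly 150 plies.

```python
# Rough game-tree size ~ branching_factor ** typical_game_length.
# The branching factors and game lengths are widely cited approximations.

chess_tree = 35 ** 80     # chess: ~35 moves per turn, ~80 plies
go_tree = 250 ** 150      # Go: ~250 moves per turn, ~150 plies

def order_of_magnitude(n):
    """Exact order of magnitude of a positive integer (10^k)."""
    return len(str(n)) - 1

print(f"chess game tree ~ 10^{order_of_magnitude(chess_tree)}")  # ~ 10^123
print(f"go game tree    ~ 10^{order_of_magnitude(go_tree)}")     # ~ 10^359
```

Under these estimates Go's game tree is more than 10^200 times larger than chess's, which is why brute-force lookahead of the Deep Blue variety was never going to be enough and AlphaGo had to lean on learned pattern evaluation instead.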
Magenta will use TensorFlow, the machine-learning engine that Google built and opened up to the public at the end of 2015, to determine whether AI systems can be trained to create original pieces of music, art, or video. Much as Google opened up TensorFlow, Eck said, Magenta will make its tools available to the public. Roberts also showed off a simple digital synthesizer program he had been working on, in which an AI could listen to notes that he played and play back a more complete melody based on those notes. The goal of the project, Eck suggested, could well be a system that regularly gives a listener "musical chills" with entirely new pieces of music as they sit at home listening to computer-generated compositions. Eck said the inspiration for Magenta came from other Google Brain projects, such as Google DeepDream, where AI systems were trained on image databases to "fill in the gaps" in pictures, trying to find structures that weren't necessarily present in the images themselves.