Nervana Systems, one of a handful of startups focusing on a type of artificial intelligence called deep learning, today is announcing that it has released its Neon deep learning software under an Apache open-source license, allowing anyone to try it out for free. The startup is pointing to benchmarks a Facebook researcher recently conducted suggesting that the Nervana software outperforms other publicly available deep learning tools, including Nvidia's cuDNN and Facebook's own Torch7 libraries. "We really want to get the tools out there to make it easy for people to apply deep learning to the problem," Naveen Rao, chief executive and a cofounder of Nervana, told VentureBeat in an interview. "Keeping a closed environment makes it kind of hard for people to try things out and have an idea even for what people can do. If they want the fastest, they'll come to us."
As neural nets increase in complexity, they also become harder to write and harder to train. Our hypothesis is that these difficulties stem from the absence of a language that elegantly describes neural networks. Mariana (named after the deepest place on Earth, the Mariana Trench) is an attempt to create such a language within Python. That being said, you can also call it an extendable Python machine learning framework built on top of Theano that focuses on ease of use. Mariana provides an interface so simple and intuitive that writing models becomes a breeze.
Why does the last layer need as many nodes as there are classes? What is the difference between multiclass and one-vs-all classification? How do we choose the number of dense neurons before the softmax classifier? What determines how many dense layers we need before the softmax layer in a Convolutional Neural Network? Are loss and accuracy complementary to each other?
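On the first question, a minimal sketch (plain Python, no framework assumed; the 3-class logits are made-up illustration values) of why the final layer has one node per class: softmax takes one raw score per class and turns them into a probability distribution, so the number of output nodes must equal the number of classes.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A 3-class problem: the final dense layer must emit exactly 3 scores
# (logits), one per class, which softmax maps to class probabilities.
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)

# The probabilities sum to one, and the largest one picks the
# predicted class (here, index 0).
predicted_class = probs.index(max(probs))
```

The hidden dense layers before this output layer, by contrast, can have any width; only the last layer's size is pinned to the number of classes.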
Although widespread famine and the Black Death epidemic helped to bring the Dark Ages to their knees, it was ultimately a newfound faith in knowledge and the discovery of new ways of living that brought the Middle Ages to a definitive end in the 15th century. The rediscovery of Ancient Greek philosophy, in particular, paved the way for the period we've come to know as the Renaissance. Human thinking once again started to flourish under Protagoras' famous adage: "Man is the measure of all things", revolutionizing the fields of science, politics, art, architecture and philosophy. In this important transitional period, in which mankind started to believe in itself again rather than relying on the heavenly creatures above, it was no coincidence that the Italian diplomat and historian Niccolò Machiavelli wrote his most influential work: Il Principe (The Prince). In the midst of the madness that was called the Great Italian Wars, Machiavelli served the house of de' Medici in his role as diplomat and had a front-row seat to all the power struggles that were going on in the epicentre of the Renaissance.
Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity--the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, "The ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." Futurists like science-fiction writer Vernor Vinge and engineer/author Ray Kurzweil popularized the term, and as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon--within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Eliezer Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever.
If you were to mention artificial intelligence (AI) or bots in a conversation about science with the average person, they would consider these to be more science fiction than accepted science. The average person would be painfully wrong, as these concepts have seen measured success in the past half-century. However, their impact on our lives has not yet met the expectations of the general public, who envision more than the technology is currently capable of, though the science in support of it grows each day. Artificial intelligence, by the cinematic definition, is a complete conscious entity unnaturally created through science. Similarly, bots are defined in public opinion by characters such as Rosie, the robot maid in the cartoon The Jetsons.
Computers are meant to organize information and complete numerical tasks without mistakes. However, in the modern world of interactive technology, there is a growing desire for computer applications that are more humanlike. Vicarious is an artificial intelligence business based in Silicon Valley. Its vision encompasses creative approaches to data processing: the company is using what it knows about information flow through the human brain to bring its ideas to life.
Why do we even want computers writing bodice-rippers? Won't it help the headless robot cheetah slip its leash? The Google Brain AI project seriously did just try writing romance novels. And while they stank, so does most human-unit-generated stuff. Indeed, I'm surprised judges in a recent Dartmouth College "Turing Test" style competition could distinguish machine-made sonnets because, one judge said, they had "idiosyncrasies of syntax and diction, uses of language that were just a little off."
IBM is tackling modern security issues with a modern approach: cognitive technology. As part of a yearlong research project, IBM is rolling out Watson for Cyber Security, a new cloud-based version of the company's cognitive tech that focuses on the language of security. IBM Watson is the organisation's technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. According to a statement, IBM is teaming up with eight universities around the United States to further scale the system and expand the collection of security data that Watson is currently trained with. According to IBM, training is a crucial step in the advancement of cognitive security.
Sony continues to streamline its business by offloading part of its creative design and editing software range. On Tuesday, Magix Software GmbH said it had acquired the "majority" of the Sony Creative Software (SCS) product range. In a press release, Magix said it has purchased the full Sony Vegas Pro, Movie Studio, Sound Forge Pro, and ACID Pro product lines. The German software and app maker offers design, editing and presentation software for both private and business users, and says the acquisition of SCS products will improve its position in other markets, including the United States. Over the past few years, Sony has pursued a major restructuring effort in the wake of increased competition, poor sales and a weak Japanese yen.