Machine Learning, Analytics Play Growing Role in US Exascale Efforts - AI Trends

#artificialintelligence

Exascale computing promises to bring significant changes to both the high-performance computing space and, eventually, enterprise datacenter infrastructures. The systems, which are being developed in multiple countries around the globe, promise 50 times the performance of the current 20-petaflop-capable systems that are now among the fastest in the world, while bringing corresponding improvements in areas such as energy efficiency and physical footprint. The systems need to be powerful enough to run the increasingly complex applications being used by engineers and scientists, but they can't be so expensive to acquire or run that only a handful of organizations can use them. At the same time, the emergence of high-level data analytics and machine learning is forcing some changes in the exascale efforts in the United States, changes that play a role in everything from the software stacks being developed for the systems to the competition with Chinese companies that are also aggressively pursuing exascale computing. During a talk last week at the OpenFabrics Workshop in Austin, Texas, Al Geist of Oak Ridge National Laboratory, CTO of the Exascale Computing Project (ECP), outlined the work the ECP is doing to develop exascale-capable systems within the next few years.
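For a sense of scale, the arithmetic behind that target is straightforward: fifty times a 20-petaflop machine is roughly 1,000 petaflops, which is about one exaflop (10^18 floating-point operations per second). A trivial sketch:

```python
# Back-of-the-envelope arithmetic for the exascale target described above.
current_petaflops = 20          # roughly today's fastest systems
speedup = 50                    # the promised improvement
target_petaflops = current_petaflops * speedup
target_flops = target_petaflops * 1e15
print(f"{target_petaflops} petaflops ~= {target_flops:.0e} FLOPS (about 1 exaflop)")
```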


autumnai/leaf

#artificialintelligence

Leaf is an open Machine Learning Framework for hackers to build classical, deep, or hybrid machine learning applications. It was inspired by the brilliant people behind TensorFlow, Torch, Caffe, Rust, and numerous research papers, and brings modularity, performance, and portability to deep learning. Leaf has one of the simplest APIs, is lean, and tries to introduce minimal technical debt to your stack. See the Leaf - Machine Learning for Hackers book for more. Leaf is only a few months old, but thanks to its architecture and Rust, it is already one of the fastest Machine Intelligence Frameworks available.


Python Data Science Handbook: Essential Tools for Working with Data

#artificialintelligence

For many researchers, Python is a first-class tool mainly because of its libraries for storing, manipulating, and gaining insight from data. Several resources exist for individual pieces of this data science stack, but only with the Python Data Science Handbook do you get them all--IPython, NumPy, Pandas, Matplotlib, Scikit-Learn, and other related tools. Working scientists and data crunchers familiar with reading and writing Python code will find this comprehensive desk reference ideal for tackling day-to-day issues: manipulating, transforming, and cleaning data; visualizing different types of data; and using data to build statistical or machine learning models. Quite simply, this is the must-have reference for scientific computing in Python.
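To give a flavour of the stack the handbook covers, here is a minimal, hypothetical example that cleans a small dataset with Pandas, visualizes it with Matplotlib, and fits a Scikit-Learn model; the column names and values are invented for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Hypothetical raw data with a missing value to clean.
df = pd.DataFrame({"hours": [1, 2, 3, 4, 5],
                   "score": [52.0, 60.0, None, 71.0, 80.0]})
df = df.dropna()                                  # cleaning: drop incomplete rows

df.plot.scatter(x="hours", y="score")             # visualization
plt.savefig("scores.png")

model = LinearRegression().fit(df[["hours"]], df["score"])   # modeling
print(model.predict(pd.DataFrame({"hours": [6]})))           # predict for 6 hours
```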


Alphabet bests Uber in self-driving car reliability

Engadget

It's no secret that Uber's young self-driving car program still needs work, but how does it stack up next to efforts from others? Not so well, it seems. California's Department of Motor Vehicles has published stats showing that Alphabet's Waymo is well ahead of the pack. While Uber's autonomous system disengages about once every mile, Waymo's only requires human intervention once every 5,128 miles. While this sounds like a condemnation of Uber's technological chops, it's more a reflection of the vast experience gap between the two.
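For reference, the reliability figure the DMV reports boil down to is simply miles driven divided by the number of disengagements. The sketch below redoes that arithmetic; the Waymo totals are the approximate 2016 California numbers behind the 5,128-mile figure and should be treated as illustrative.

```python
def miles_per_disengagement(miles_driven, disengagements):
    """Average miles of autonomous driving between human interventions."""
    return miles_driven / disengagements

waymo = miles_per_disengagement(635_868, 124)  # ~5,128 miles per intervention (approx. 2016 totals)
uber = 1.0                                     # ~1 mile per intervention, per the article
print(f"Waymo: {waymo:.0f} miles per disengagement, roughly {waymo / uber:.0f}x Uber's rate")
```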


Distilling Information Reliability and Source Trustworthiness from Digital Traces

arXiv.org Machine Learning

Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy evaluations, often biased, to distill a robust, unbiased and interpretable measure of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues on the reliability of the information and the trustworthiness of the sources. Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events.
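The abstract alone doesn't pin down the model, but as a rough intuition for fitting a point process to evaluation traces, here is a toy sketch (not the paper's actual framework) that fits a homogeneous Poisson rate to each source's hypothetical event timestamps by maximizing a concave log-likelihood; the real model links a richer temporal point process to reliability and trustworthiness parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical evaluation-event timestamps (in days) per source, observed over T days.
T = 30.0
traces = {
    "source_a": np.array([0.5, 1.2, 3.3, 7.8, 12.0, 20.5]),
    "source_b": np.array([2.0, 15.0]),
}

def neg_log_likelihood(theta, n_events, horizon):
    """Negative Poisson-process log-likelihood with rate exp(theta); convex in theta."""
    rate = np.exp(theta[0])
    return rate * horizon - n_events * theta[0]

rates = {}
for name, events in traces.items():
    result = minimize(neg_log_likelihood, x0=np.zeros(1), args=(len(events), T))
    rates[name] = float(np.exp(result.x[0]))

print(rates)  # estimated events per day; a crude per-source activity score
```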


Seven Game Changing Digital Technologies for Enterprise Transformation

#artificialintelligence

The trend towards digital transformation of the enterprise, regardless of industry or sector, will accelerate in 2017 from the already significant levels seen last year. We identify the following seven digital technology trends as game changers for software-driven enterprises. The barriers to entry for machine intelligence are crumbling, driven by the availability of high-quality open-source software components, cloud platforms from all major providers, and wildly popular, high-quality introductory courses on MOOC platforms. That will drive growing mainstream adoption of machine intelligence as a differentiating and foundational technology layer in the digital transformation stack, used to identify and close new revenue opportunities, customize the user experience, drive operational efficiencies, and predict failures. Also expect to see acceptance and greater adoption of advanced machine learning techniques for delivering closed-loop actionable insights in domains such as the Industrial Internet of Things and cybersecurity. Blockchain, the fully distributed, transparent, tamper-resistant, and auditable shared ledger technology, is particularly powerful in settings where multiple parties need to reconcile without a central intermediary, track the provenance of assets across organizational boundaries, or establish and enforce contracts between untrusting parties while speeding up reconciliation with a secure and verifiable audit trail.


Unsupervised Investments: A Comprehensive Guide to AI Investors

#artificialintelligence

Investing in AI is not an easy job: AI technologies are black boxes, and unless you are able to dig into the lines of code they may be inscrutable. Simply looking at proofs of concept might not be enough to really understand the underlying stack behind specific applications, and this is a big barrier for investors trying to allocate their capital efficiently. Generalist investors have therefore found alternative ways to discern investable companies from the pile of tech-driven companies out there. AI specialists are luckily not that naive, and they are able to go much deeper and look behind the veil. I have therefore compiled as extensive a list as possible of every investor I read about or bumped into over the past months.


Robots can read your mind to fix their mistakes

#artificialintelligence

Imagine a robot stacking boxes in a warehouse when it suddenly sees that one box is in the wrong stack. It goes back and puts the container in the right place. How did the machine know it had made a mistake? The robot's human boss didn't punch any codes into a computer to have the robot correct its mistake. The boss didn't say a word.


Intel creates AI group, aims for more focus - ZDNet

#artificialintelligence

Intel has put its artificial intelligence efforts under one group led by Naveen Rao, former CEO of Nervana, which was acquired by the chip giant. The company has been repositioning via acquisitions to focus on everything from the Internet of Things to autonomous vehicles. The upshot is that Intel is trying to build a datacenter-to-IoT stack powered by its processors.