computing


Why AI companies don't always scale like traditional software startups

#artificialintelligence

At a technical level, artificial intelligence seems to be the future of software. AI is showing remarkable progress on a range of difficult computer science problems, and the job of software developers – who now work with data as much as source code – is changing fundamentally in the process. Many AI companies (and investors) are betting that this relationship will extend beyond just technology – that AI businesses will resemble traditional software companies as well. Based on our experience working with AI companies, we're not so sure. We are huge believers in the power of AI to transform business: We've put our money behind that thesis, and we will continue to invest heavily in both applied AI companies and AI infrastructure. However, we have noticed in many cases that AI companies simply don't have the same economic construction as software businesses. At times, they can even look more like traditional services companies.


Why Golang and Not Python? Which Language is Perfect for AI?

#artificialintelligence

Golang is now becoming a mainstream programming language for machine learning and AI, with millions of users worldwide. Python is awesome, but Golang is perfect for AI programming! Launched back in November 2009, Golang recently turned ten. The language, built by Google's developers, is now making programmers more productive. Its creators' main goal was to create a language that would eliminate the so-called "extraneous garbage" of programming languages like C.



Workplace 2040 - Fast Future Publishing

#artificialintelligence

A healthy workplace makes good use of the latest insights on human behavior, wellness, and sustainable performance, and deploys the cutting-edge work tools of the times. A wide array of ever-more powerful technologies, from artificial intelligence (AI) to 3D printing, is becoming part of the core design of organizations; we now assume they will be part of the fabric of work and the workplace. So, what might these factors mean for the different possible futures of the workplace? The workplace of the future could manifest many of the technological possibilities being developed today. Hence, there is a range of views about how the boundaries between us and our devices might play out in a world of supercomputing power, particularly in the world of work.


Is Artificial Intelligence A Myth?

#artificialintelligence

Here is something we all love: tech buzzwords and the grand promise of new technologies. You know it's true, because you, me, and the person pitching the sales deck all use them all the time. "The Internet of Things will connect everything," "blockchain will democratize everything," and "AI will solve all of our problems." AI in particular is high on the buzzword list: from traffic jams to climate change, there is a solution, and it is the new breed of machines that can think and act like us. Almost everything that was once plain "digital" is becoming "AI-enabled."


EETimes - Let's Talk Edge Intelligence

#artificialintelligence

When new industry buzzwords or phrases come up, the challenge for people like us who write about the topic is figuring out what exactly a company means, especially when it uses the phrase to fit its own marketing objective. The latest one is edge artificial intelligence, or edge AI. Because of the proliferation of the internet of things (IoT) and the ability to add a fair amount of compute power or processing to enable intelligence within those devices, the 'edge' can be quite wide, and could mean anything from the 'edge of a gateway' to an 'endpoint'. So, we decided to find out whether there was consensus in the industry on the definition of edge vs. endpoint, who would want to add edge AI, and how much 'smartness' you could add to the edge. First of all, what is the difference between edge and endpoint? Well, it depends on your viewpoint: anything not in the cloud could be defined as edge. Probably the clearest definition came from Wolfgang Furtner, Infineon Technologies' senior principal for concept and system engineering.


AI At The Edge: Creating Coordinated Autonomy

#artificialintelligence

Today organizations have to deal with so many emergent behaviors that the notion of central control as the only coping mechanism seems to be receding as a dominant management model. Freedom must be pushed out from the center by creating goals, constraints, boundaries, and allowable edge behaviors. Someday software and hardware agents will negotiate their contribution to business outcomes on their own, but until then organizations will have to prepare themselves by managing coordinated autonomy. Edge computing is a form of distributed computing that brings computation and data storage closer to where they are needed, improving response times and enabling better actions. Now, AI at the edge can offer a whole range of new possibilities.


Going Beyond Exascale Computing

#artificialintelligence

One thing is certain: the explosion of data creation in our society will continue as far as pundits and anyone else can forecast. In response, there is an insatiable demand for more advanced high-performance computing to make this data useful. The IT industry has been pushing to new levels of high-end computing performance; this is the dawn of the exascale era of computing. Recent announcements from the US Department of Energy for exascale computers represent the starting point for a new generation of computing advances. This is critical for advancing any number of use cases, such as understanding the interactions underlying weather science, sub-atomic structures, genomics, and physics, as well as rapidly emerging artificial intelligence applications and other important scientific fields.


One sketch for all: Theory and Application of Conditional Random Sampling

Neural Information Processing Systems

Conditional Random Sampling (CRS) was previously presented using a heuristic argument. This study extends CRS to handle dynamic or streaming data, which much better reflects the real-world situation than assuming static data. Compared with other known sketching algorithms for dimension reduction, such as stable random projections, CRS exhibits a significant advantage in that it is "one-sketch-for-all." Although a fully rigorous analysis of CRS is difficult, we prove that, with a simple modification, CRS is rigorous at least for an important application, namely computing Hamming norms. A generic estimator and an approximate variance formula are provided and tested on various applications: computing Hamming norms, Hamming distances, and $\chi^2$ distances.
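The summary above does not spell out how a CRS sketch is built, so the following is only a minimal, illustrative Python sketch of the idea as it appears in the broader CRS literature: apply one shared random permutation to the columns, keep the k smallest permuted IDs of each sparse binary vector's nonzero entries, and estimate the Hamming distance of a pair from the columns that both sketches fully cover. The permutation/smallest-k mechanism, the function names, and the toy parameters are assumptions for illustration, not details taken from this paper.

```python
import random

def crs_sketch(nonzero_ids, perm, k):
    """Keep the k smallest permuted column IDs among a vector's nonzero entries."""
    return sorted(perm[j] for j in nonzero_ids)[:k]

def estimate_hamming(sketch_u, sketch_v, dim):
    """Estimate the Hamming distance between two sparse binary vectors from their sketches."""
    # Both sketches are complete up to d_s, so permuted columns 1..d_s form a
    # conditionally uniform random sample of all `dim` columns.
    d_s = min(sketch_u[-1], sketch_v[-1])
    sample_u = {j for j in sketch_u if j <= d_s}
    sample_v = {j for j in sketch_v if j <= d_s}
    mismatches = len(sample_u ^ sample_v)   # sampled columns where exactly one vector is nonzero
    return mismatches * dim / d_s           # scale the sample count back to the full dimension

# Toy usage (hypothetical sizes): two sparse binary vectors in a 10,000-dimensional space.
D, k = 10_000, 50
random.seed(0)
perm = list(range(1, D + 1))
random.shuffle(perm)                         # one shared random permutation of column IDs
u = set(random.sample(range(D), 300))        # nonzero coordinates of vector u
v = set(random.sample(range(D), 300))        # nonzero coordinates of vector v
est = estimate_hamming(crs_sketch(u, perm, k), crs_sketch(v, perm, k), D)
print(f"estimated Hamming distance: {est:.0f}, true: {len(u ^ v)}")
```

Because each sketch is complete up to the cutoff, the same pair of sketches could in principle be reused to estimate other pairwise quantities, which is the "one-sketch-for-all" property the abstract highlights.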


Computing and maximizing influence in linear threshold and triggering models

Neural Information Processing Systems

We establish upper and lower bounds for the influence of a set of nodes in certain types of contagion models. We derive two sets of bounds: the first designed for linear threshold models, and the second more broadly applicable to a general class of triggering models, which also subsumes the popular independent cascade model. We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models relative to existing results. Importantly, our lower bounds are monotonic and submodular, implying that a greedy algorithm for influence maximization is guaranteed to produce a maximizer within a (1 - 1/e)-factor of the truth. Although exact influence computation is NP-hard in general, our bounds may be evaluated efficiently.
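As a hedged illustration of the greedy framework the abstract refers to, here is a minimal Python sketch of greedy seed selection under an independent cascade model. It estimates spread by plain Monte Carlo simulation; the paper's point is precisely that efficiently computable monotone, submodular bounds can stand in for such expensive simulation while preserving the (1 - 1/e) greedy guarantee. The toy graph, edge probability p, and run counts are illustrative assumptions, not values from the paper.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One independent-cascade run; returns the number of nodes activated."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):
            if nbr not in active and random.random() < p:   # each edge fires with probability p
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def expected_spread(graph, seeds, runs=200):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_ic(graph, seeds) for _ in range(runs)) / runs

def greedy_seed_selection(graph, budget):
    """Greedily add the node with the largest marginal gain in estimated spread."""
    seeds = set()
    for _ in range(budget):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(graph, seeds | {n}))
        seeds.add(best)
    return seeds

# Toy directed graph (hypothetical) given as an adjacency list.
toy_graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [0]}
random.seed(0)
print(greedy_seed_selection(toy_graph, budget=2))
```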