AI Chip Startup Syntiant Scales Production

#artificialintelligence

Syntiant Corp., the "neural decision processor" startup, announced the completion of another funding round this week, along with the shipment of more than 1 million low-power edge AI chips. The three-year-old startup, based in Irvine, Calif., made the announcement on Tuesday. The round was led by Microsoft's (NASDAQ: MSFT) venture arm M12 and Applied Ventures, the investment fund of Applied Materials (NASDAQ: AMAT). New investors included Atlantic Bridge Capital, Alpha Edison and Miramar Digital Ventures. Intel Capital was an early backer of Syntiant, part of a package of investments the chip maker announced in 2018 targeting AI processors that promise to accelerate the transition of machine learning from the cloud to edge devices.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentences of the target output may be necessary.
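As a concrete illustration of the priming tactic described above, here is a minimal sketch of how such a prompt might be assembled. It is hypothetical, not code from the article: the passage placeholder, the rephrasing line, and `send_to_gpt3` are all illustrative stand-ins for whatever text and API client one actually uses.

```python
# Hypothetical sketch of priming a completion: constrain GPT-3 by
# writing the first words of the desired output into the prompt itself.

passage = "..."  # the passage to be explained (placeholder)

prompt = (
    "My second grader asked me what this passage means:\n\n"
    f'"{passage}"\n\n'
    # Priming: begin the target answer so the model continues it
    # instead of pivoting into another mode of completion.
    'I rephrased it for him in plain language: "It means that'
)

# completion = send_to_gpt3(prompt)  # placeholder for a real API call
```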


AI Research Considerations for Human Existential Safety (ARCHES)

arXiv.org Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions are then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.


A Deep Structural Model for Analyzing Correlated Multivariate Time Series

arXiv.org Machine Learning

Multivariate time series are routinely encountered in real-world applications, and in many cases, these time series are strongly correlated. In this paper, we present a deep learning structural time series model which can (i) handle correlated multivariate time series input, and (ii) forecast the targeted temporal sequence by explicitly learning/extracting the trend, seasonality, and event components. The trend is learned via a 1D and 2D temporal CNN and LSTM hierarchical neural net. The CNN-LSTM architecture can (i) seamlessly leverage the dependency among multiple correlated time series in a natural way, (ii) extract the weighted differencing feature for better trend learning, and (iii) memorize the long-term sequential pattern. The seasonality component is approximated via a non-linear function of a set of Fourier terms, and the event components are learned by a simple linear function of regressors encoding the event dates. We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of time series data sets, such as forecasts of Amazon AWS Simple Storage Service (S3) and Elastic Compute Cloud (EC2) billings, and the closing prices of corporate stocks in the same category.
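The paper does not publish code, but the Fourier-term construction behind the seasonality component is standard and easy to sketch. In the snippet below, the names `fourier_terms` and `n_harmonics` are illustrative assumptions, not the authors' API; the resulting feature matrix is what would feed the paper's non-linear seasonality function.

```python
import numpy as np

def fourier_terms(t, period, n_harmonics):
    """Build sine/cosine Fourier features for one seasonal period.

    t: integer time-index array; period: season length (e.g. 7 for
    weekly, 365.25 for yearly); n_harmonics: number of sin/cos pairs.
    """
    features = []
    for k in range(1, n_harmonics + 1):
        features.append(np.sin(2.0 * np.pi * k * t / period))
        features.append(np.cos(2.0 * np.pi * k * t / period))
    return np.stack(features, axis=-1)  # shape: (len(t), 2 * n_harmonics)

# Two years of daily indices, weekly seasonality with 3 harmonics.
t = np.arange(730)
X_season = fourier_terms(t, period=7.0, n_harmonics=3)
# X_season would feed a small non-linear layer (the paper's seasonality
# function), while event-date regressors enter through a linear term.
```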


Advances and Open Problems in Federated Learning

arXiv.org Machine Learning

Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
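Since this is a survey, it contains no reference implementation; the following is a minimal, self-contained sketch of the federated-averaging idea under stated assumptions (a least-squares model, simulated clients, and hypothetical names such as `local_step`). It shows the defining property described above: clients train locally, and the server only averages weights, so raw data never leaves a client.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """Run a few steps of local SGD on one client's private data."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                               # four simulated clients
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                              # communication rounds
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)         # server-side averaging
print(w_global)  # approaches w_true without ever pooling client data
```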


Intel, GraphCore And Groq: Let The AI Cambrian Explosion Begin

#artificialintelligence

As we approach the end of a year full of promises from AI startups, a few companies are meeting their promised 2019 launch dates. These include Intel, with its long-awaited Nervana platform; UK startup Graphcore; and the stealthy Groq from Silicon Valley. Some of these announcements fall a bit short on details, but all claim to represent breakthroughs in performance and efficiency for training and/or inference processing. Other recent announcements include Cerebras's massive wafer-scale AI engine inside its multi-million-dollar CS-1 system and NVIDIA's support for GPUs on ARM-based servers. I'll opine on those soon, but here I will focus on Intel's, Graphcore's and Groq's highly anticipated chips.


NSDQ, NYSE, and AMEX Stock Market News, Market News Categories, Market Indicators

#artificialintelligence

VB Transform 2019, billed as the AI event of the year, was one of the largest AI conferences, held in San Francisco, CA by VentureBeat. Accenture Chief Data Scientist Dr. Ganapathi Pulipaka attended, joining 900 AI executives and practitioners (director-level and above, up to the C-suite) from innovative brands with leading best practices, and spoke alongside 120 other speakers from disruptive emerging companies across more than 48 sessions. Exhibitors and speakers from top-tier brands such as Accenture, Google, Verizon, IBM, Amazon, Cisco, Oracle, New York University, Microsoft, Uber, DataRobot, Intel, eBay, Johnson & Johnson, GE, Gap, Lyft, Etsy, Kohl's, and the New York Times showcased their AI products, shared stories about real business results and practical lessons from their production deployments, and took the audience on a tour of disruptive AI technologies to keep an eye on. The sessions focused on six AI trends: natural language processing and smart speech, computer vision, business AI integration, implementing AI across the organization, IoT and AI at the edge, and intelligent RPA and automation. Reinforcement learning has been disruptive, and the history of AI has shown that it took the gaming industry by storm.


Opinion: AI, blockchain can help China shift from copycat to innovator

#artificialintelligence

Editor's note: Noah Wang is co-founder and chief marketing officer of TOP Network, a Silicon Valley-based tech firm developing a business-friendly public blockchain and the world's first blockchain-based cloud communication network. The article reflects the author's views, and not necessarily those of CGTN. Earlier in January, at the annual World Economic Forum in Davos, Switzerland, Bloomberg released the 2019 Bloomberg Innovation Index, which ranks the most innovative countries using criteria including R&D investment, manufacturing capability, and patent activity. China jumped three spots to 16th compared with a year before, beating the UK for the first time. However, the Bloomberg index showed that China still lagged far behind its most innovative peers, such as six-time champion the Republic of Korea as well as the U.S. and Japan, which secured their places among the top 10.


Applications of artificial intelligence - Wikipedia

#artificialintelligence

Artificial intelligence, defined as intelligence exhibited by machines, has many applications in today's society. More specifically, it is Weak AI, the form of AI in which programs are developed to perform specific tasks, that is being utilized for a wide range of activities including medical diagnosis, electronic trading, robot control, and remote sensing. AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more. AI for Good is a movement in which institutions employ AI to tackle some of the world's greatest economic and social challenges. For example, the University of Southern California launched the Center for Artificial Intelligence in Society with the goal of using AI to address socially relevant problems such as homelessness. At Stanford, researchers are using AI to analyze satellite images to identify which areas have the highest poverty levels.[1] The Air Operations Division (AOD) uses AI for rule-based expert systems, employing artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.[2]


Making Machine Learning Robust Against Adversarial Inputs

Communications of the ACM

Machine learning has advanced radically over the past 10 years, and machine learning algorithms now achieve human-level performance or better on a number of tasks, including face recognition,[31] optical character recognition,[8] object recognition,[29] and playing the game Go.[26] Yet machine learning algorithms that exceed human performance in naturally occurring scenarios are often seen to fail dramatically when an adversary is able to modify their input data, even subtly. Machine learning is already used for many highly important applications and will be used in even more applications of even greater importance in the near future. Search algorithms, automated financial trading algorithms, data analytics, autonomous vehicles, and malware detection are all critically dependent on the underlying machine learning algorithms that interpret their respective domain inputs to provide intelligent outputs that facilitate the decision-making of users or automated systems. As machine learning is used in more contexts where malicious adversaries have an incentive to interfere with the operation of a given machine learning system, it is increasingly important to provide protections, or "robustness guarantees," against adversarial manipulation.

The modern generation of machine learning services is the result of nearly 50 years of research and development in artificial intelligence: the study of computational algorithms and systems that reason about their environment to make predictions.[25] A subfield of artificial intelligence, most modern machine learning, as used in production, can essentially be understood as applied function approximation; when there is some mapping from an input x to an output y that is difficult for a programmer to describe through explicit code, a machine learning algorithm can learn an approximation of the mapping by analyzing a dataset containing several examples of inputs and their corresponding outputs. Google's image-classification system, Inception, has been trained with millions of labeled images.[28] It can classify images as cats, dogs, airplanes, boats, or more complex concepts with accuracy on par with, or better than, that of humans.

Increases in the size of machine learning models and in their accuracy are the result of recent advancements in machine learning algorithms,[17] particularly those that advance deep learning.[7] One focus of the machine learning research community has been on developing models that make accurate predictions, as progress was in part measured by results on benchmark datasets. In this context, accuracy denotes the fraction of test inputs that a model processes correctly: the proportion of images that an object-recognition algorithm recognizes as belonging to the correct class, and the proportion of executables that a malware detector correctly designates as benign or malicious. The estimate of a model's accuracy varies greatly with the choice of dataset used to compute the estimate.
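To make the "subtle modification" concrete, here is a minimal sketch, not from the article, of a fast-gradient-sign-style attack against a small logistic-regression model. All names and constants (the synthetic data, `eps`) are illustrative assumptions; the point is only that a tiny, gradient-aligned perturbation can typically flip a trained classifier's prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)      # synthetic linearly separable labels

# Train logistic regression with plain gradient descent.
w = np.zeros(10)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)   # gradient of mean cross-entropy

x = X[0]
p_clean = 1.0 / (1.0 + np.exp(-(x @ w)))
print("clean prediction:", p_clean > 0.5, "true label:", bool(y[0]))

# Fast gradient sign method: step the *input* along the sign of the
# loss gradient w.r.t. x, which for logistic loss is (p - y) * w.
eps = 0.5
grad_x = (p_clean - y[0]) * w
x_adv = x + eps * np.sign(grad_x)       # small per-feature perturbation
p_adv = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
print("adversarial prediction:", p_adv > 0.5)  # usually flipped
```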