AI Safety for High Energy Physics - INSPIRE-HEP

#artificialintelligence

(arXiv) The field of high-energy physics (HEP), along with many scientific disciplines, is currently experiencing a dramatic influx of new methodologies powered by modern machine learning techniques. Over the last few years, a growing body of HEP literature has focused on identifying promising applications of deep learning in particular, and more recently these techniques are starting to be realized in an increasing number of experimental measurements. The overall conclusion from this impressive and extensive set of studies is that rarer and more complex physics signatures can be identified with the new set of powerful tools from deep learning. However, there is an unstudied systematic risk associated with combining the traditional HEP workflow and deep learning with high-dimensional data. In particular, calibrating and validating the response of deep neural networks is in general not experimentally feasible, and therefore current methods may be biased in ways that are not covered by current uncertainty estimates.
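The calibration concern can be made concrete. One standard diagnostic for a network's confidence estimates is expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its actual accuracy. This is an illustrative sketch of that general diagnostic, not a method from the paper itself:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the average |accuracy - confidence| gap per confidence bin,
    weighted by the fraction of predictions falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a model reporting 80% confidence that is right 80% of
# the time is well calibrated, so its ECE is (numerically) zero.
conf = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(expected_calibration_error(conf, hits))  # ~0.0
```

A large ECE would signal exactly the kind of miscalibrated confidence the abstract warns about; the paper's point is that validating this experimentally for high-dimensional HEP inputs is generally not feasible.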


$7.5B smart 'mini-city' secures land on Las Vegas Boulevard

#artificialintelligence

UPDATED, Nov. 13, 2019: Bleutech Park announced last week that it secured a 210-acre parcel of land on the south end of Las Vegas Boulevard. "We are happy to announce we have secured a 210-acre parcel of land for our energy efficient mini-city at the south end of the Las Vegas Strip," the developer said in its announcement. "Las Vegas, it's time to revolutionize the world for the future and it all starts here." The developer touts this as a step forward in building a futuristic "mini-city" equipped with vertical gardens and advanced smart buildings featuring self-healing concrete and energy-generating materials, though some details remain unknown. The Las Vegas Review-Journal reports there is no guarantee the deal will close, and notes that the planned amenities are described using "an arsenal of buzz words."


Intel HPC Developer Conference

#artificialintelligence

Heterogeneous Computing and oneAPI: Computing is evolving to require a diverse mix of scalar, vector, matrix, and spatial architectures deployed in CPU, GPU, accelerator, and FPGA sockets, enabled by a scalable software stack and optimized for workload flexibility and efficiency. Designed for developers' needs in this environment, Intel's oneAPI project will provide a unified programming model to simplify application development across diverse computing architectures.

Convergence of HPC & AI: The evolution of AI and HPC is the confluence of disciplines, technologies, and software models which will impact compute moving forward for scientific and commercial uses. Sessions in this track will cover topics such as neural networks, machine learning, inferencing, optimized frameworks, systems, performance libraries, compilers, languages, and techniques that optimize and scale HPC & AI convergence.

Design, Develop & Deploy: Parallel programming, code modernization, storage, high speed fabrics, visualization, system design and management, and cloud are critical to HPC.


Researchers develop AI tool to evade Internet censorship

#artificialintelligence

Internet censorship is, in essence, an effective strategy used by authoritarian governments to limit access to information available online, control freedom of expression, and prevent rebellion and discord. According to the findings of the 2019 Freedom House report, India and China are at the forefront of adopting Internet censorship and are declared the worst abusers of digital freedom. Meanwhile, the US, Brazil, Sudan, and Kazakhstan are countries where Internet freedom has declined considerably in recent years. When a country curbs Internet freedom, activists need to find ways to evade the censorship. With "Geneva" here, however, they may no longer need to search for those ways manually.


Business Daily: The ethics of AI on Apple Podcasts

#artificialintelligence

Professor Stuart Russell of the University of California, Berkeley, one of the world's top thinkers on artificial intelligence, tells us why we should be cautious but not terrified at the prospect of computers that can outsmart us. He tells Ed Butler where he thinks we are going wrong in setting objectives for existing artificial intelligence systems, and about the risk of unintended consequences.


MONTRÉAL.AI Montréal Artificial Intelligence - MONTRÉAL.AI

#artificialintelligence

On October 25, 2018, the first artificial intelligence artwork ever sold at Christie's auction house shattered expectations, fetching $432,500. Today, the House of Montréal.AI Fine Arts introduces Montréal.AI's Fine Arts Auction, the first international auction dedicated to quintessential fine AI arts. "The Artists Creating with AI Won't Follow Trends; THEY WILL SET THEM," says Montréal.AI Fine Arts. Preparations for the first auction are under way, and top art collectors will be able to place bids internationally. On Tuesday, Nov. 26, 2019, from 6:30 to 8:30 PM EST, the General Secretariat of MONTREAL.AI will present "Artificial Intelligence 101: The First World-Class Overview of AI for the General Public".


How to Ensure Data Quality for AI - insideBIGDATA

#artificialintelligence

In this special guest feature, Wilson Pang, CTO of Appen, offers a few quality controls that organizations can implement to allow for the most accurate and consistent data annotation process possible. Wilson joined Appen in November 2018 and is responsible for the company's products and technology. He has over seventeen years' experience in software engineering and data science. Prior to joining Appen, Wilson was Chief Data Officer of Ctrip in China, the second-largest online travel agency in the world, where he led data engineers, analysts, data product managers, and scientists to improve user experience and increase operational efficiency, growing the business. Before that, he was senior director of engineering at eBay in California, where he provided leadership across domains including data services and solutions, search science, marketing technology, and billing systems.
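The feature itself details Appen's recommended controls; as a general illustration of one widely used consistency check (not necessarily one of Pang's), inter-annotator agreement can be measured with Cohen's kappa, which corrects raw agreement between two annotators for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators on the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Chance agreement from each annotator's marginal label frequencies.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

annotator_1 = ["cat", "cat", "dog", "dog", "cat", "dog"]
annotator_2 = ["cat", "cat", "dog", "cat", "cat", "dog"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # 0.667
```

A kappa near 1 indicates consistent annotation; values well below that flag label guidelines or annotators needing review before the data is used for training.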


NVIDIA Wins MLPerf Inference Benchmarks – NVIDIA Developer News Center

#artificialintelligence

Today, NVIDIA posted the fastest results on new MLPerf benchmarks measuring the performance of AI inference workloads in data centers and at the edge. The new results come on the heels of the company's equally strong results in the MLPerf benchmarks posted earlier this year. MLPerf's five inference benchmarks -- applied across a range of form factors and four inferencing scenarios -- cover such established AI applications as image classification, object detection and translation. NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries. Xavier provided the highest performance among commercially available edge and mobile SoCs under both edge-focused scenarios (single-stream and multistream).
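The scenarios differ in what they measure: single-stream issues one query at a time and reports per-query latency, while offline submits the whole workload at once and reports throughput. This toy harness (a stand-in model, not MLPerf's actual LoadGen) sketches that distinction:

```python
import time

def classify(batch):
    # Stand-in for a real model: sleep ~1 ms of "inference" per sample.
    time.sleep(0.001 * len(batch))
    return [0] * len(batch)

queries = [[i] for i in range(50)]

# Single-stream style: one query at a time, measure mean latency.
start = time.perf_counter()
for q in queries:
    classify(q)
latency = (time.perf_counter() - start) / len(queries)

# Offline style: submit everything at once, measure throughput.
start = time.perf_counter()
classify([q[0] for q in queries])
throughput = len(queries) / (time.perf_counter() - start)

print(f"single-stream latency: {latency * 1e3:.2f} ms/query")
print(f"offline throughput: {throughput:.0f} queries/s")
```

Real hardware diverges from this toy model because batching amortizes work, which is why server/offline and single-stream/multistream results are reported separately.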