Deterministic versus stochastic dynamical classifiers: opposing random adversarial attacks with noise
Chicchi, Lorenzo, Fanelli, Duccio, Febbe, Diego, Buffoni, Lorenzo, Di Patti, Francesca, Giambagli, Lorenzo, Marino, Raffaele
The Continuous-Variable Firing Rate (CVFR) model, widely used in neuroscience to describe the entangled dynamics of excitatory biological neurons, is here trained and tested as a veritable dynamically assisted classifier. To this end, the model is supplied with a set of planted attractors which are self-consistently embedded in the inter-node coupling matrix via its spectral decomposition. Learning to classify amounts to sculpting the basins of attraction of the imposed equilibria, directing different items towards the corresponding destination targets, each of which reflects the class to which the item belongs. A stochastic variant of the CVFR model is also studied and found to be robust to random adversarial attacks, which corrupt the items to be classified. This remarkable finding is one of the many surprising effects that arise when noise and dynamical attributes are made to mutually resonate.
- Europe > Italy > Umbria > Perugia Province > Perugia (0.04)
- Europe > San Marino > Fiorentino > Fiorentino (0.04)
- Information Technology > Security & Privacy (0.71)
- Government > Military (0.71)
- Health & Medicine > Therapeutic Area > Neurology (0.48)
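The attractor-planting step mentioned in the abstract can be illustrated numerically. The sketch below assumes the common firing-rate form dx/dt = -x + W tanh(x) (the paper's exact equations and its spectral construction may differ) and builds W via a pseudo-inverse so that a chosen set of target patterns are exact equilibria; all names and numerical choices here are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Toy sketch: plant K target patterns as exact equilibria of the
# firing-rate dynamics  dx/dt = -x + W @ tanh(x).
# A fixed point x* must satisfy  x* = W @ tanh(x*).
rng = np.random.default_rng(0)
n, K = 16, 3                      # neurons, planted attractors
X_star = rng.normal(size=(n, K))  # columns: target equilibria
Phi = np.tanh(X_star)             # firing rates at the targets

# Solve W @ Phi = X_star in the least-squares sense; with K < n and
# Phi of full column rank, the targets become exact fixed points.
W = X_star @ np.linalg.pinv(Phi)

# Check the fixed-point condition for every planted pattern.
residual = np.abs(W @ np.tanh(X_star) - X_star).max()
print(f"max fixed-point residual: {residual:.2e}")

# Integrate the dynamics (forward Euler) from a perturbed start.
x = X_star[:, 0] + 0.1 * rng.normal(size=n)
for _ in range(2000):
    x = x + 0.05 * (-x + W @ np.tanh(x))
print("distance to planted pattern:", np.linalg.norm(x - X_star[:, 0]))
```

Note that this construction only guarantees the targets are equilibria, not that they are stable; shaping the basins of attraction is precisely what the training described in the abstract accomplishes.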
Automatic Input Feature Relevance via Spectral Neural Networks
Chicchi, Lorenzo, Buffoni, Lorenzo, Febbe, Diego, Giambagli, Lorenzo, Marino, Raffaele, Fanelli, Duccio
Working with high-dimensional data is common practice in the field of machine learning. Identifying relevant input features is thus crucial, so as to obtain compact datasets that are more amenable to effective numerical handling. Further, by isolating the pivotal elements that form the basis of decision making, one can contribute, ex post, to models' interpretability, which has so far proved rather elusive. Here, we propose a novel method to estimate the relative importance of the input components for a Deep Neural Network. This is achieved by leveraging a spectral re-parametrization of the optimization process. Eigenvalues associated with input nodes in fact provide a robust proxy to gauge the relevance of the supplied entry features. Unlike existing techniques, the spectral feature ranking is carried out automatically, as a byproduct of the network training. The technique is successfully challenged against both synthetic and real data.
- Europe > Italy (0.04)
- North America > Mexico > Oaxaca (0.04)
- Europe > San Marino > Fiorentino > Fiorentino (0.04)
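The idea of reading feature relevance off trainable per-input scalars can be illustrated with a stripped-down toy: attach one trainable gain per input (playing the role of the input-node eigenvalues in the spectral parametrization, with the rest of the network reduced to identity for brevity), train by gradient descent, and rank features by the gain magnitudes. This is a heavily simplified illustration, not the paper's actual spectral network.

```python
import numpy as np

# Toy relevance ranking: one trainable gain per input feature.
# Only features 0 and 1 influence the target; after training,
# |lam| should rank them above the four irrelevant features.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=500)

lam = np.zeros(6)                 # per-input "eigenvalues"
lr = 0.1
for _ in range(500):              # plain gradient descent on MSE
    grad = 2.0 / len(y) * X.T @ (X @ lam - y)
    lam -= lr * grad

ranking = np.argsort(-np.abs(lam))  # most relevant first
print("relevance ranking:", ranking)
print("trained gains:", np.round(lam, 2))
```

As in the paper's approach, the ranking falls out of ordinary training rather than a separate post-hoc attribution pass.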
Nvidia and VMware team up to help enterprises scale up AI development
Enterprises can begin to run trials of their AI projects using VMware vSphere with Tanzu together with the Nvidia AI Enterprise software suite, as part of moves by both companies to further simplify AI development and application management. By extending testing to vSphere with Tanzu, Nvidia boasts it will enable developers to run AI workloads on Kubernetes containers within their existing VMware environments. The software suite will run on mainstream Nvidia-certified systems, the company said, noting it would provide a complete software and hardware stack suitable for AI development. "Nvidia has gone and invested in building all of the next-generation cloud application-level components, where you can now take the NGC libraries, which are container-based, and run those in a Kubernetes orchestrated VMware environment, so you're getting the ability now to go and bridge the world of developers and infrastructure," VMware cloud infrastructure business group marketing VP Lee Caswell told media. The move comes off the back of VMware announcing Nvidia AI Enterprise in March.
- Information Technology > Software (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Virtualization (1.00)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence (1.00)
Nvidia, VMware partner to offer virtualized GPUs ZDNet
Nvidia and VMware on Monday announced a new software product that lets customers virtualize GPUs, either on premise or as part of VMware Cloud on AWS. The companies say it's the first hybrid cloud offering that lets enterprises use GPUs to accelerate AI, machine learning or deep learning workloads. "In a modern data center, organizations are going to be using GPUs to power AI, deep learning, analytics," John Fanelli, VP of product management for Nvidia, told reporters. "And due to the scale of those types of workloads, they're going to be doing some processing on premise in data centers, some processing in clouds and continually iterating between them." The new offering starts with the enterprise data center product -- Nvidia's new Virtual Compute Server (vComputeServer) software.
- North America > United States > Texas > Travis County > Austin (0.06)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- Information Technology > Services (1.00)
- Information Technology > Hardware (1.00)
VMware and Nvidia partner to simplify virtualised GPUs
Nvidia announced its new enterprise software product, vComputeServer, which has been developed and optimised for use with VMware's vSphere. Last week, VMware announced its intention to acquire Carbon Black and Pivotal, in a massive deal that will expand the company's SaaS offerings, while enhancing its ability to enable digital transformation for customers. Before the dust had even settled on that news, the company announced today (26 August) that it is set to launch a hybrid cloud on AWS (Amazon Web Services) in partnership with Nvidia, which will improve GPU (graphics processing unit) virtualisation. The two companies say that this is the first hybrid cloud service that lets enterprises accelerate AI, machine learning or deep learning workloads with GPUs. At the VMworld conference in San Francisco, Nvidia's VP of product management, John Fanelli, told reporters: "In a modern data centre, organisations are going to be using GPUs to power AI, deep learning and analytics. "Due to the scale of those types of workloads, they're going to be doing some processing on premise in data centres, some processing in clouds and continually iterating between them." The company said that this will make the completion of deep learning training up to 50 times faster than with a CPU alone. This product is aimed at people who may be using Nvidia's Rapids software, Fanelli explained, which is a suite of data processing and machine learning libraries used for GPU-acceleration in data science workflows. Nvidia founder and CEO Jensen Huang said: "From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line."
- Information Technology > Software (1.00)
- Information Technology > Services (1.00)
- Information Technology > Hardware (1.00)
Nvidia, VMware to Bring Virtual GPUs to VMware's AWS Cloud
If you've ever found yourself wishing you could do all the things you've been able to do with a hypervisor and regular virtual machines but on a GPU cluster – in your own data center or in the cloud – Nvidia and VMware are now saying your wish is about to come true. Monday morning, in conjunction with the start of VMworld in San Francisco, the two companies announced that VMware Cloud on AWS, the VMware-operated cloud service running on bare-metal infrastructure in AWS data centers, will soon feature virtualized GPUs you'll be able to provision and manage using the same vSphere tools you use with regular VM infrastructure. You'll be able to share a single physical GPU among multiple VMs, but you'll also be able to aggregate the power of many GPUs to train a machine-learning model at massive scale, the companies said. The play here is to get VMware into the infrastructure mix for the emerging set of enterprise computing workloads that benefit from GPU acceleration, such as AI and machine learning, as well as more traditional Big Data analytics. Also on Monday, the company announced a broad strategy for tackling the hybrid cloud opportunity, which is essentially to provide a single set of tools for managing all enterprise infrastructure, on premises and/or in any public cloud, in a uniform way.
- Information Technology > Software (1.00)
- Information Technology > Services (1.00)
- Information Technology > Virtualization (1.00)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.56)
Physicist takes cues from artificial intelligence
In the world of computing, there's a groundswell of excitement for what is perceived as the impending revolution in artificial intelligence. Like the industrial revolution in the 19th century and the digital revolution in the 20th, the AI revolution is expected to change the way we live and work. Now, Cristiano Fanelli aims to bring the AI revolution to nuclear physics. Fanelli, who is currently a postdoctoral researcher at the Massachusetts Institute of Technology, is the winner of the 2018 Jefferson Science Associates Postdoctoral Prize for his project to use artificial intelligence to optimize systems for nuclear physics research being carried out at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility. "It's an exciting time to do nuclear and particle physics research with the artificial intelligence revolution happening now," said Fanelli.
- North America > United States > Massachusetts (0.46)
- North America > United States > New York > Suffolk County > Stony Brook (0.05)
- Energy (1.00)
- Government > Regional Government > North America Government > United States Government (0.60)
Nash Stable Outcomes in Fractional Hedonic Games: Existence, Efficiency and Computation
Bilò, Vittorio, Fanelli, Angelo, Flammini, Michele, Monaco, Gianpiero, Moscardelli, Luca
We consider fractional hedonic games, a subclass of coalition formation games that can be succinctly modeled by means of a graph in which nodes represent agents and edge weights represent the degree of preference of the corresponding endpoints. The happiness or utility of an agent for being in a coalition is the average value she ascribes to its members. We adopt Nash stable outcomes as the target solution concept; that is, we focus on states in which no agent can improve her utility by unilaterally changing her own group. We provide existence, efficiency and complexity results for games played on both general and specific graph topologies. As to the efficiency results, we mainly study the quality of the best Nash stable outcome and refer to the ratio between the social welfare of an optimal coalition structure and that of such an equilibrium as the price of stability. In this respect, we remark that a best Nash stable outcome has a natural meaning of stability, since it is the optimal solution among those which can be accepted by selfish agents. We provide upper and lower bounds on the price of stability for different topologies, for both weighted and unweighted edges. Besides the results for general graphs, we give refined bounds for various specific cases, such as triangle-free graphs, bipartite graphs and trees. For these families, we also show how to efficiently compute Nash stable outcomes with provably good social welfare.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- Europe > Monaco (0.05)
- (18 more...)
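The Nash stability notion used in the abstract is easy to check algorithmically on small instances. The sketch below adopts the standard fractional-hedonic utility u_i(C) = Σ_{j∈C} w(i,j)/|C| (coalition size including i itself; conventions vary across the literature) and tests whether any agent can strictly gain by unilaterally moving to another existing coalition or to a fresh singleton. Function names are illustrative.

```python
import numpy as np

def utility(i, coalition, w):
    """Fractional hedonic utility: total weight from agent i to the
    members of its coalition, divided by the coalition size
    (i is counted in the size; w[i, i] is assumed to be 0)."""
    return sum(w[i, j] for j in coalition) / len(coalition)

def is_nash_stable(partition, w):
    """True iff no agent strictly improves its utility by unilaterally
    joining another existing coalition or moving to a singleton."""
    for C in partition:
        for i in C:
            current = utility(i, C, w)
            if 0 > current:           # deviating to a singleton yields 0
                return False
            for D in partition:
                if D is C:
                    continue
                if utility(i, D | {i}, w) > current:
                    return False
    return True

# Two disjoint unit-weight edges: 0-1 and 2-3.
w = np.zeros((4, 4))
w[0, 1] = w[1, 0] = w[2, 3] = w[3, 2] = 1.0

print(is_nash_stable([{0, 1}, {2, 3}], w))      # pairing along the edges
print(is_nash_stable([{0}, {1}, {2}, {3}], w))  # all agents alone
```

In the example, pairing the endpoints of each edge is Nash stable (each agent gets utility 1/2 and any deviation yields less), while the all-singletons partition is not, since any agent gains by joining its neighbour.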