Top 10 AI-Generated Images by DALL-E 2 - Simplified

#artificialintelligence

OpenAI, a San Francisco artificial intelligence company closely affiliated with Microsoft, launched an A.I. system and neural network known as DALL-E in January 2021. The name is a pun combining the surrealist artist Salvador Dalí and Pixar's famous movie WALL-E, and DALL-E creates images from text. In this blog, we'll let you in on everything you should know about DALL-E, its successor DALL-E 2, and share ten of the most creative AI-generated images from DALL-E 2. (Pictured: a dog wearing a beret and a turtleneck, generated by the DALL-E 2 image generation software.) Now, you may be wondering what DALL-E is all about. It's an AI tool that takes a description of an object or a scene and automatically produces an image depicting it. DALL-E also allows you to edit the wonderful AI-generated images you've created with simple tools and text modifications.
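To get a feel for the text-to-image workflow described above, here is a minimal sketch of generating an image from a prompt with the OpenAI Python SDK. The model identifier, image size, and SDK version are assumptions for illustration, not details taken from the article.

```python
# Minimal sketch: text-to-image with the OpenAI Python SDK (openai>=1.0 assumed).
# Requires an OPENAI_API_KEY environment variable; model/size values are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",                            # assumed model identifier
    prompt="a dog wearing a beret and a turtleneck",
    n=1,                                         # number of images to generate
    size="512x512",                              # one of the sizes DALL-E 2 supports
)

print(response.data[0].url)  # URL of the generated image
```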


Nvidia launches a new GPU architecture and the Grace CPU Superchip – TechCrunch

#artificialintelligence

At its annual GTC conference for AI developers, Nvidia today announced its next-gen Hopper GPU architecture and the Hopper H100 GPU, as well as a new data center chip that combines the GPU with a high-performance CPU, which Nvidia calls the "Grace CPU Superchip" (not to be confused with the Grace Hopper Superchip). With Hopper, Nvidia is launching a number of new and updated technologies, but for AI developers, the most important one may just be the architecture's focus on transformer models, which have become the machine learning technique de rigueur for many use cases and which power models like GPT-3 and BERT. The new Transformer Engine in the H100 chip promises to speed up model training by up to six times, and because this new architecture also features Nvidia's new NVLink Switch system for connecting multiple nodes, large server clusters powered by these chips will be able to scale up to support massive networks with less overhead. "The largest AI models can require months to train on today's computing platforms," Nvidia's Dave Salvator writes in today's announcement. AI, high-performance computing and data analytics are growing in complexity, with some models, like large language models, reaching trillions of parameters.


NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder

#artificialintelligence

Depending on your point of view, the last two years have either gone by very slowly or very quickly. While the COVID pandemic never seemed to end – and technically still hasn't – the last two years have whizzed by for the tech industry, and especially for NVIDIA. The company launched its Ampere GPU architecture just two years ago at GTC 2020, and after selling more of those chips than ever before, now in 2022 it's already time to introduce the next architecture. So without further ado, let's talk about the Hopper architecture, which will underpin the next generation of NVIDIA server GPUs. As has become a ritual for NVIDIA, the company is using its Spring GTC event to launch its next-generation GPU architecture. Introduced just two years ago, Ampere has been NVIDIA's most successful server GPU architecture to date, with over $10B in data center sales in just the last year.


Xie

AAAI Conferences

Among various traditional art forms, brush stroke drawing is one of the most widely used styles in modern computer graphics tools such as GIMP, Photoshop and Painter. In this paper, we develop an AI-aided art authoring (A4) system for non-photorealistic rendering that allows users to automatically generate brush stroke paintings in a specific artist's style. Within the proposed reinforcement learning framework for brush stroke generation, our contribution in this paper is to learn artists' drawing styles from video-captured stroke data by inverse reinforcement learning. Through experiments, we demonstrate that our system can successfully learn artists' styles and render pictures with consistent and smooth brush strokes.
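As a rough illustration of the inverse reinforcement learning idea mentioned in the abstract, the sketch below fits a reward network so that expert (video-captured) stroke features score higher than strokes sampled from the current policy, in a discriminator-style (GAIL/AIRL-like) objective. This is not the paper's actual algorithm; the feature set, network sizes, and loss are assumptions.

```python
# Hedged sketch: learning a stroke-style reward from expert stroke data with a
# discriminator-style inverse-RL objective. NOT the paper's exact method;
# feature choices and network sizes are illustrative.
import torch
import torch.nn as nn

FEAT_DIM = 6  # assumed stroke features: curvature, speed, pressure, width change, length, angle

reward_net = nn.Sequential(
    nn.Linear(FEAT_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def irl_step(expert_feats: torch.Tensor, policy_feats: torch.Tensor) -> float:
    """One update: push expert strokes toward high reward, policy strokes toward low."""
    r_expert = reward_net(expert_feats)
    r_policy = reward_net(policy_feats)
    # Logistic (discriminator) loss: expert -> high score, policy samples -> low score.
    loss = nn.functional.softplus(-r_expert).mean() + nn.functional.softplus(r_policy).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in data; in practice expert features would come from
# video-captured strokes and policy features from the RL stroke-generation agent.
expert = torch.randn(256, FEAT_DIM) + 0.5
policy = torch.randn(256, FEAT_DIM)
for _ in range(100):
    irl_step(expert, policy)
# reward_net(...) can then serve as the reward signal inside the stroke-generation RL loop.
```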


Wassersplines for Stylized Neural Animation

arXiv.org Artificial Intelligence

Much of computer-generated animation is created by manipulating meshes with rigs. While this approach works well for animating articulated objects like animals, it has limited flexibility for animating less structured creatures such as the Druun in "Raya and the Last Dragon." We introduce Wassersplines, a novel trajectory inference method for animating unstructured densities based on recent advances in continuous normalizing flows and optimal transport. The key idea is to train a neurally-parameterized velocity field that represents the motion between keyframes. Trajectories are then computed by pushing keyframes through the velocity field. We solve an additional Wasserstein barycenter interpolation problem to guarantee strict adherence to keyframes. Our tool can stylize trajectories through a variety of PDE-based regularizers to create different visual effects. We demonstrate our tool on various keyframe interpolation problems to produce temporally-coherent animations without meshing or rigging.
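To make the core idea concrete, here is a minimal sketch of a neurally-parameterized velocity field that advects keyframe samples forward in time. It uses forward Euler integration and a simple Chamfer-style loss as a stand-in for the optimal-transport objectives in the paper; the network, loss, and toy keyframes are all assumptions for illustration.

```python
# Hedged sketch: train a velocity field v(x, t) so that pushing keyframe-0 samples
# through it lands on keyframe-1 samples. Chamfer loss is a stand-in for OT.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )
    def forward(self, x, t):
        t_col = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_col], dim=1))

def advect(v, x0, steps=20):
    """Push samples through the velocity field with forward Euler from t=0 to t=1."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        x = x + dt * v(x, i * dt)
    return x

def chamfer(a, b):
    d = torch.cdist(a, b)  # pairwise distances between the two point clouds
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Toy keyframes: a point cloud and the same cloud translated to the right.
key0 = torch.randn(512, 2)
key1 = key0 + torch.tensor([2.0, 0.0])

v = VelocityField()
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
for _ in range(300):
    loss = chamfer(advect(v, key0), key1)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Intermediate animation frames come from stopping the Euler integration early
# (i.e. integrating only up to some t between 0 and 1).
```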


RAMANMETRIX: a delightful way to analyze Raman spectra

arXiv.org Machine Learning

Although Raman spectroscopy is widely used for the investigation of biomedical samples and has a high potential for use in clinical applications, it is not common in clinical routines. One of the factors that obstruct the integration of Raman spectroscopic tools into clinical routines is the complexity of the data processing workflow. Software tools that simplify spectroscopic data handling may facilitate such integration by familiarizing clinical experts with the advantages of Raman spectroscopy. Here, RAMANMETRIX is introduced as user-friendly software with an intuitive web-based graphical user interface (GUI) that incorporates a complete workflow for chemometric analysis of Raman spectra, from raw data pretreatment to robust validation of machine learning models. The software can be used both for model training and for applying pretrained models to new data sets. Users have full control of the parameters during model training, but the testing data flow is frozen and does not require additional user input. RAMANMETRIX is available in two versions: as standalone software and as a web application. Due to its modern software architecture, the computational backend can be executed separately from the GUI and accessed through an application programming interface (API) for applying a preconstructed model to measured data. This opens up the possibility of using the software as a real-time data processing backend for measurement devices. Models preconstructed by more experienced users can be exported and reused for easy one-click data preprocessing and prediction, requiring minimal interaction between the user and the software. The results of such predictions and the graphical outputs of the different data processing steps can be exported and saved.
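The sketch below illustrates the kind of chemometric workflow described above, namely pretreatment of raw spectra, model training with validation, and later reuse of the trained model on new data. It uses scikit-learn purely for illustration; it is not the RAMANMETRIX API, and the preprocessing steps and toy data are assumptions.

```python
# Hedged sketch of a Raman chemometrics pipeline: pretreatment -> PCA -> classifier,
# cross-validated, then exported and reapplied to new spectra ("one-click" reuse).
import numpy as np
import joblib
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def vector_normalize(X):
    """Simple pretreatment stand-in: L2-normalize each spectrum."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

pipeline = Pipeline([
    ("normalize", FunctionTransformer(vector_normalize)),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy data: 100 spectra with 500 wavenumber channels and binary class labels.
X = np.random.rand(100, 500)
y = np.random.randint(0, 2, size=100)

print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())

# Train on all data, export the model, and apply the pretrained pipeline to new spectra.
pipeline.fit(X, y)
joblib.dump(pipeline, "raman_model.joblib")
pretrained = joblib.load("raman_model.joblib")
print(pretrained.predict(np.random.rand(3, 500)))
```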


Boosting machine learning workflows with GPU-accelerated libraries

#artificialintelligence

Abstract: In this article, we demonstrate how to use RAPIDS libraries to accelerate machine learning workflows built on CPU-based libraries such as pandas, scikit-learn and NetworkX. We use a recommendation case study, which executed 44x faster with the GPU-based library when running the PageRank algorithm and 39x faster for Personalized PageRank. Scikit-learn and pandas are part of most data scientists' toolbox because of their friendly APIs and wide range of useful resources -- from model implementations to data transformation methods. However, many of these libraries still rely on CPU processing and, as far as their roadmaps go, libraries like scikit-learn do not intend to scale up to GPU processing or scale out to cluster processing. To overcome this drawback, RAPIDS offers a suite of open-source Python libraries that take these widely used data science solutions and boost them with GPU-accelerated implementations while still providing a similar API.
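As a rough sketch of the comparison behind the quoted speedups, the snippet below runs PageRank once with NetworkX on the CPU and once with RAPIDS cuGraph on the GPU. The toy edge list and column names are illustrative, a CUDA-capable GPU with the cudf/cugraph packages is assumed, and exact function signatures can vary across RAPIDS versions.

```python
# Hedged sketch: CPU PageRank (NetworkX) vs. GPU PageRank (RAPIDS cuGraph).
import networkx as nx

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]

# CPU baseline with NetworkX.
G_cpu = nx.DiGraph(edges)
print(nx.pagerank(G_cpu, alpha=0.85))

# GPU version with cuGraph (requires a CUDA GPU and the RAPIDS libraries).
import cudf
import cugraph

edge_df = cudf.DataFrame({"src": [e[0] for e in edges],
                          "dst": [e[1] for e in edges]})
G_gpu = cugraph.Graph(directed=True)
G_gpu.from_cudf_edgelist(edge_df, source="src", destination="dst")
print(cugraph.pagerank(G_gpu, alpha=0.85))  # returns a cuDF DataFrame of vertex scores
```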


Nvidia's AI-powered scaling makes old games look better without a huge performance hit

#artificialintelligence

Nvidia's latest game-ready driver includes a tool that could let you improve the image quality of games that your graphics card can easily run, alongside optimizations for the new God of War PC port. The tech is called Deep Learning Dynamic Super Resolution, or DLDSR, and Nvidia says you can use it to make "most games" look sharper by running them at a higher resolution than your monitor natively supports. DLDSR builds on Nvidia's Dynamic Super Resolution tech, which has been around for years. Essentially, regular old DSR renders a game at a higher resolution than your monitor can handle and then downscales it to your monitor's native resolution. This leads to an image with better sharpness but usually comes with a dip in performance (you are asking your GPU to do more work, after all). So, for instance, if you had a graphics card capable of running a game at 4K but only had a 1440p monitor, you could use DSR to get a boost in clarity.
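To illustrate the basic supersampling idea behind DSR (render above native resolution, then downscale to the display), here is a toy stand-in using Pillow. It is not Nvidia's driver-level implementation, and DLDSR additionally applies a neural filter; the resolutions and filter choice are assumptions.

```python
# Hedged illustration of the supersample-then-downscale idea behind DSR.
from PIL import Image

NATIVE = (2560, 1440)   # assumed native monitor resolution (1440p)
RENDER = (3840, 2160)   # higher "render" resolution (4K), as in the example above

frame = Image.new("RGB", RENDER)                   # stand-in for a frame rendered at 4K
frame_down = frame.resize(NATIVE, Image.LANCZOS)   # downscale to native resolution
frame_down.save("dsr_like_output.png")
```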


Themis: Fair and Efficient GPU Cluster Scheduling

#artificialintelligence

GPU clusters are the mainstream infrastructure for executing distributed machine learning (ML) training workloads. However, when multiple of these workloads execute on a shared cluster, significant contention occurs. The authors of Themis [1] argue that existing cluster scheduling mechanisms are not a good fit for the unique characteristics of ML training workloads: they are usually long-running jobs that need to be gang-scheduled, and their performance is sensitive to the relative placement of their tasks. They propose Themis [1] as a new scheduling framework for ML training workloads.
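The toy sketch below illustrates the gang-scheduling constraint highlighted above: a training job only launches when all of its requested GPUs are free, otherwise it waits. This is an illustration of the constraint only, not Themis's actual fairness-based scheduling algorithm; job names and sizes are made up.

```python
# Hedged sketch of gang scheduling: a job starts only if its full GPU demand fits.
from collections import deque

def gang_schedule(total_gpus, jobs):
    """jobs: iterable of (job_name, gpus_needed). Returns (running, waiting)."""
    free = total_gpus
    queue = deque(jobs)
    running, waiting = [], []
    while queue:
        name, need = queue.popleft()
        if need <= free:        # the whole gang fits -> launch the job
            free -= need
            running.append(name)
        else:                   # cannot start with a partial allocation; job waits
            waiting.append(name)
    return running, waiting

running, waiting = gang_schedule(
    total_gpus=8,
    jobs=[("bert-pretrain", 4), ("resnet", 2), ("gpt-finetune", 4)],
)
print("running:", running)   # ['bert-pretrain', 'resnet']
print("waiting:", waiting)   # ['gpt-finetune'] needs 4 GPUs but only 2 remain free
```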


How does artificial intelligence learn? -- Design and Animation

#artificialintelligence

For this project, Champ mentioned he had a lot of fun art directing, designing and animating the piece. The result went beyond what he expected, and in the end Champ says, "I felt that I really pushed myself. I can say that I'm very proud of it."