If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Machine learning metrics for distributed, scalable PyTorch applications. TorchMetrics is a collection of 80 PyTorch metric implementations and an easy-to-use API for creating custom metrics. The module-based metrics hold internal metric states (similar to the parameters of a PyTorch module) that automate accumulation and synchronization across devices. Metrics can run on a CPU, a single GPU, or multiple GPUs, and module-based metric usage remains the same across multiple GPUs or multiple nodes.
Massive scale, both in terms of data availability and computation, enables significant breakthroughs in key application areas of deep learning such as natural language processing (NLP) and computer vision. There is emerging evidence that scale may be a key ingredient in scientific deep learning, but the importance of physical priors in scientific domains makes the strategies and benefits of scaling uncertain. Here, we investigate neural scaling behavior in large chemical models by varying model and dataset sizes over many orders of magnitude, studying models with over one billion parameters, pre-trained on datasets of up to ten million datapoints. We consider large language models for generative chemistry and graph neural networks for machine-learned interatomic potentials. To enable large-scale scientific deep learning studies under resource constraints, we develop the Training Performance Estimation (TPE) framework to reduce the costs of scalable hyperparameter optimization by up to 90%.
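The TPE framework itself is not reproduced here; as a generic illustration of the kind of neural scaling analysis the abstract describes, a power law L(n) = a * n^(-b) relating loss to dataset size can be fit by linear regression in log-log space. All numbers below are synthetic, not results from the paper.

```python
import numpy as np

# Hypothetical loss measurements at increasing dataset sizes
# (synthetic: generated from L(n) = 5.0 * n^-0.25).
n = np.array([1e4, 1e5, 1e6, 1e7])       # number of training datapoints
loss = 5.0 * n ** -0.25                  # pretend these were measured

# Fit log L = log a - b * log n with ordinary least squares.
slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(a, b)                              # recovers a ~ 5.0, b ~ 0.25
```

In a real study each loss value would come from a trained model, and the fitted exponent b summarizes how quickly returns diminish with scale.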
Gizmodo is 20 years old! To celebrate the anniversary, we're looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11, I felt spiritually and existentially lost. It's hard to believe now, but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber and I haven't set foot in a church since, aside from the occasional wedding or baptism. I didn't realize it at the time, but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didn't survive this mental reboot and return to form, but it did lead me to some very positive places, resulting in my adoption of secular Buddhism, meditation, and a decade-long stint with vegetarianism.
But given the large number of available languages and their varied features, it can be hard to choose one that is good for both purposes. To help with this process, we have compiled a list of programming languages that are popular and widely used around the world or by large companies and organizations. At the top of this list we have placed Python. It has consistently ranked among the top programming languages for many years, a popularity often attributed to its gentle learning curve and to how easily it can be combined with libraries such as TensorFlow, SciPy, or any other scientific library for machine learning and data analysis.
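As a toy illustration of that last claim, a few lines of NumPy and SciPy are enough for a basic data-analysis task such as a linear regression (the data below is made up and exactly linear):

```python
import numpy as np
from scipy import stats

# Hypothetical measurements (deliberately on a perfect line: w = h - 100).
heights = np.array([150.0, 160.0, 170.0, 180.0, 190.0])
weights = np.array([50.0, 60.0, 70.0, 80.0, 90.0])

# One call fits the regression and reports slope, intercept, and r-value.
result = stats.linregress(heights, weights)
print(result.slope, result.intercept)    # slope 1.0, intercept -100.0
```

The brevity of this kind of snippet, not any single feature, is what the "easy to combine with scientific libraries" argument usually rests on.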
Nocebo is the antipode of placebo and refers to adverse events a person experiences after receiving a placebo. Recent findings: in randomized trials of migraine prevention, meta-analyses revealed that eight out of 20 patients treated with placebo experienced an adverse event. More importantly, one out of 20 patients treated with placebo withdrew from treatment because of adverse events. The adverse events in placebo groups mirrored the adverse events expected of the active medication under study, confirming that pretrial suggestions induce the adverse events in placebo-treated patients. Nocebo rates were higher in preventive treatments than in symptomatic ones.
Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in the thread. It was a real tour de force. Because I'm currently finishing a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So I take seriously what he has to say on AI governance matters, and I learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they apply broadly to many other emerging technology sectors, and even to some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and that really do complicate the governance of computational systems.
In 2018, the Eye Filmmuseum in Amsterdam welcomed its first robot filmmaker: "Jan Bot" is an artificial intelligence tasked with producing experimental short films from the material in the museum's archives. Each day, Jan Bot scours the web for trending topics, then takes them as inspiration for abstract interpretations. Creators Pablo Núñez Palma and Bram Loogman have described it as "prolific, often misunderstood." Now, with more than 25,000 films in its catalog, the time has come to switch Jan Bot off. The project was conceived as a way to bring a physical archive into the internet age; its next phase will be the posthumous archiving of Jan Bot's oeuvre via NFT.
An artificial neuron that can both release and receive dopamine in connection with real rat cells could be used in future machine-human interfaces. Most brain-machine interfaces measure simple electrical signals in neurons to glean information about brain function. But much of the information in neural networks such as the brain is encoded in neurotransmitters like dopamine, chemicals that neurons use to send messages to one another. "The brain's native language is chemical, but current brain-machine interfaces all use an electrical language," says Benhui Hu at Nanjing Medical University in China. "So we devised an artificial neuron to duplicate the way a real neuron communicates."