The latest breakthrough is the transformer model. Before this, to capture temporal correlations we needed recurrent neural networks, which were much harder to train and much less successful. When they worked they worked wonders, but most of the time it was better to avoid such an architecture. Then the idea of attention was introduced, and a number of language models built on it began to change the way we think about neural networks. So much so that they were rebranded "foundation models" and are taken by one faction to be the future of AI, and by the other to be nothing but a magic trick with no substance.
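To make the contrast concrete: where a recurrent network processes tokens one step at a time, attention lets every position look at every other position in a single operation. Below is a minimal NumPy sketch of scaled dot-product attention, the core operation behind transformers; the shapes and random inputs are illustrative, not drawn from any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key at once, so long-range
    dependencies are captured in one step rather than recurrently."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights                # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, embedding dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
```

Each row of `weights` is a probability distribution over the input tokens, which is what makes the mechanism interpretable as "which tokens the model attends to."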
As the metaverse industry is expected to be an $800 billion market by 2024, we continue to learn new ways this immersive, virtual environment might better enable us to connect with each other from anywhere in the world. This comes at a time when many are already participating in and benefitting from virtual activities that otherwise would not be possible due to constraints of distance, time or cost. In enabling new opportunities for virtual rather than in-person instruction, the metaverse has the power to transform access to education and the way we learn. The types of education that the metaverse can accommodate are varied, from school-based interactive learning and workplace training to professional accreditation. In so many ways, the metaverse is offering new chances for people to learn what they want by mitigating obstacles of accessibility.
Author summary Interest in machine learning as applied to challenges in medicine has seen an exponential rise over the past decade. A key issue in developing machine learning models is the availability of sufficient high-quality data. A related issue is the requirement to validate a locally trained model on data from external sources. However, sharing sensitive biomedical and clinical data across different hospitals and research teams can be challenging due to concerns with data privacy and data stewardship. These issues have led to innovative new approaches for collaboratively training machine learning models without sharing raw data. One such method, termed ‘federated learning,’ enables investigators from different institutions to combine efforts by training a model locally on their own data, and sharing the parameters of the model with others to generate a central model. Here, we systematically review reports of successful deployments of federated learning applied to research problems involving biomedical data. We found that federated learning links research teams around the world and has been applied to modelling in fields such as oncology and radiology. Based on the trends observed in the studies we reviewed, we see opportunities to expand and improve this innovative approach so global teams can continue to produce and validate high-quality machine learning models.
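The train-locally-then-share-parameters loop described above can be sketched as federated averaging (FedAvg). This toy NumPy example uses linear regression and two simulated "hospitals" with made-up data; it is an illustration of the protocol, not the pipeline of any study reviewed here. Note that only model weights cross institutional boundaries, never the raw data.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: full-batch gradient descent
    for linear regression on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: size-weighted average of client models (FedAvg);
    only parameters are shared, never raw data."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two simulated "hospitals", each holding data that never leaves the site.
clients = []
for n in (50, 80):
    X = rng.standard_normal((n, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

After a few communication rounds the central model converges close to the weights a pooled-data model would find, even though no client ever saw another client's records.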
We are excited to announce the release of AI Builder's new home page. It is available at https://aka.ms/tryaibuilder. We heard your feedback and made a complete redesign so you can better understand which model fits your scenario. Try it out with a few clicks, get inspiration from customers' stories, and learn how to use it more easily. We have also added a video presenting AI Builder and how it can help enhance your business.
The other day, Brian reported on Sony's new LinkBuds headphones, including its partnership with "what if Brian Eno was a piece of computer software" app Endel. The company uses really fascinating AI technology to generate soundscapes and music tracks to help your brain do its best work: to focus more deeply, sleep more easily or simply relax. I spoke with one of Endel's founders to learn more about the tech and its deal with Sony. "Endel is first and foremost a technology that was built to help you focus, relax and sleep. And the way this technology works, it procedurally generates a soundscape in real time on the spot, on the device. It is personalized to you based on a number of inputs that we collect about you; things like the time of day, your heart rate, the weather, your movement and your circadian rhythms, like how much sleep you got last night," explains Oleg Stavitsky, CEO and co-founder at Endel.
Originally published on Towards AI, the World's Leading AI and Technology News and Media Company.
In the past five years, interest in applying artificial intelligence (AI) approaches in drug research and development (R&D) has surged. Driven by the expectation of accelerated timelines, reduced costs and the potential to reveal hidden insights from vast datasets, more than 150 companies with a focus on AI have raised funding in this period, based on an analysis of the field by Back Bay Life Science Advisors (Figure 1a), and both the number of financings and the average amount raised soared in 2021. At the forefront of this field are companies harnessing AI approaches such as machine learning (ML) in small-molecule drug discovery, which account for the majority of financings backed by venture capital (VC) in recent years (Figure 1b), as well as some initial public offerings (IPOs) for pioneers in the area (Table 1). Such companies have also attracted large pharma companies to establish multiple high-value partnerships (Table 2), and the first AI-based small-molecule drug candidates are now in clinical trials.
In this article, I shall argue that AI's likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather than of AI to the environment, (3) the increasing translation of difficult problems into complex problems, (4) the tension between regulative and constitutive rules underpinning areas of AI application, and (5) the push for synthetic data.
Researchers at Duke University have demonstrated that incorporating known physics into machine learning algorithms can help the inscrutable black boxes attain new levels of transparency and insight into material properties. In one of the first projects of its kind, researchers constructed a modern machine learning algorithm to determine the properties of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields. Because it first had to consider the metamaterial's known physical constraints, the program was essentially forced to show its work. Not only did the approach allow the algorithm to accurately predict the metamaterial's properties, it did so more efficiently than previous methods while providing new insights. The results appear online the week of May 9 in the journal Advanced Optical Materials.
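One common way to incorporate known physics into a learning algorithm is to penalize predictions that violate a governing equation. The toy NumPy sketch below is not the Duke group's method: it fits a free-form cubic to noisy decay data while a physics term penalizes deviations from an assumed law dy/dt = -k*y, with both the data and the decay constant made up for illustration. The physics residual acts as a regularizer that constrains the otherwise unconstrained fit.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 40)
k = 1.5                                    # assumed known decay constant
y_obs = np.exp(-k * t) + 0.02 * rng.standard_normal(t.size)

deg = 3
A = np.vander(t, deg + 1)                  # data design matrix: y(t_i)
# Derivative of the polynomial at each t_i, in the same coefficients.
D = np.zeros_like(A)
for j in range(deg):                       # d/dt of the t**(deg - j) term
    D[:, j] = (deg - j) * t ** (deg - j - 1)
B = D + k * A                              # residual of dy/dt + k*y = 0

# Minimize ||A c - y_obs||^2 + lam * ||B c||^2 in one least-squares solve
# by stacking the data block and the physics block.
lam = 1.0                                  # physics penalty weight
stacked = np.vstack([A, np.sqrt(lam) * B])
rhs = np.concatenate([y_obs, np.zeros_like(t)])
coeffs, *_ = np.linalg.lstsq(stacked, rhs, rcond=None)
y_fit = A @ coeffs
```

Because the physics term is part of the objective, a solution that fits the data but breaks the governing equation is explicitly penalized, which is one sense in which such a model is "forced to show its work."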