Machine Learning: AI-Alerts


Classifying galaxies with artificial intelligence

#artificialintelligence

Astronomers have applied artificial intelligence (AI) to ultra-wide field-of-view images of the distant Universe captured by the Subaru Telescope, and have achieved very high accuracy in finding and classifying spiral galaxies in those images. This technique, in combination with citizen science, is expected to yield further discoveries in the future. A research group, consisting mainly of astronomers from the National Astronomical Observatory of Japan (NAOJ), applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to the telescope's high sensitivity, as many as 560,000 galaxies were detected in the images. Visually inspecting and morphologically classifying that many galaxies one by one would be extremely difficult.
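
As a rough illustration of the approach, the sketch below shows a small convolutional network of the kind used for galaxy morphology classification. The architecture, the 64x64 cutout size, and the spiral/non-spiral class setup are illustrative assumptions, not the NAOJ team's actual model.

```python
# A minimal PyTorch sketch of a CNN morphology classifier (illustrative
# assumptions throughout; not the NAOJ team's actual architecture).
import torch
import torch.nn as nn

class GalaxyCNN(nn.Module):
    def __init__(self, n_classes=2):  # e.g. spiral vs. non-spiral
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = GalaxyCNN()
cutouts = torch.randn(16, 3, 64, 64)  # a batch of postage-stamp cutouts
logits = model(cutouts)               # one score per morphology class
```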


Machine learning methods provide new insights into organic-inorganic interfaces

#artificialintelligence

Oliver Hofmann and his research group at the Institute of Solid State Physics at TU Graz are working on the optimization of modern electronics. A key role in their research is played by the interface properties of hybrid materials consisting of organic and inorganic components, which are used, for example, in OLED displays and organic solar cells. The team simulates these interface properties with machine-learning-based methods, and the results feed into the development of new materials that improve the efficiency of electronic components. The researchers have now turned their attention to the phenomenon of long-range charge transfer.
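
To give a flavour of what a machine-learning-based simulation of interface properties can look like, here is a minimal sketch of a surrogate model mapping structural descriptors of an interface geometry to a target property such as the amount of charge transferred. The descriptors, kernel choice, and synthetic data are all hypothetical, not the TU Graz group's actual method.

```python
# Illustrative surrogate model: kernel ridge regression from structural
# descriptors of an interface geometry to a target property. Descriptors
# and data are synthetic placeholders, not the group's actual features.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 8 descriptors per geometry
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)  # target property

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
model.fit(X[:150], y[:150])                   # train on 150 geometries
print("held-out R^2:", model.score(X[150:], y[150:]))
```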


New machine learning method allows hospitals to share patient data -- privately

#artificialintelligence

PHILADELPHIA - To answer medical questions that can be applied to a wide patient population, machine learning models rely on large, diverse datasets from a variety of institutions. However, health systems and hospitals are often resistant to sharing patient data, due to legal, privacy, and cultural challenges. An emerging technique called federated learning is a solution to this dilemma, according to a study published Tuesday in the journal Scientific Reports, led by senior author Spyridon Bakas, PhD, an instructor of Radiology and Pathology & Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania. Federated learning -- an approach first deployed by Google for mobile keyboard prediction -- trains an algorithm across multiple decentralized devices or servers holding local data samples, without ever exchanging the data themselves. While the approach could potentially be used to answer many different medical questions, Penn Medicine researchers have shown it is successful specifically in the context of brain imaging: models trained this way can analyze magnetic resonance imaging (MRI) scans of brain tumor patients and distinguish healthy brain tissue from cancerous regions.
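
The core of federated learning can be illustrated with a minimal federated-averaging (FedAvg) loop: each site fits a model on its own private data, and only the model weights travel to a central server, which averages them. This is a toy sketch on synthetic data, not Penn's actual pipeline.

```python
# Toy federated averaging (FedAvg): each "hospital" trains locally and
# shares only model weights with the server, never its patient records.
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of least-squares gradient descent on one site's data."""
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

def make_site(n=100):  # one institution's private dataset
    X = rng.normal(size=(n, 5))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

sites = [make_site() for _ in range(3)]
w_global = np.zeros(5)
for _ in range(20):                              # communication rounds
    local = [local_sgd(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local, axis=0)            # server averages weights
print("distance to true model:", np.linalg.norm(w_global - w_true))
```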


Cheap, Easy Deepfakes Are Getting Closer to the Real Thing

WIRED

There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: They were made by machine learning algorithms, not a camera. Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: "People with not a lot of experience can take these machine learning models and do pretty powerful things with them," he says. Seen at full resolution, FireEye's fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor's face, like his brow furrows and green-gray eyes, which gaze coolly at the viewer.
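
The essence of the technique is to take a pretrained open-source generator and continue adversarial training on a small set of images of the target person. The toy networks, the checkpoint name, and the random stand-in "photos" below are hypothetical; the real demonstration used a large open-source face generator in the StyleGAN family.

```python
# Toy adversarial fine-tuning loop (hypothetical names and tiny networks;
# the real demo started from a large pretrained face generator).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 3 * 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
# G.load_state_dict(torch.load("pretrained_generator.pt"))  # hypothetical checkpoint

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
target_photos = torch.rand(32, 3 * 32 * 32) * 2 - 1  # stand-in for real images

for step in range(100):
    fake = G(torch.randn(32, 64))
    # Discriminator: separate the target person's photos from current fakes
    d_loss = (bce(D(target_photos), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: update weights so the fakes fool the discriminator
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```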


Radiant Earth Foundation releases benchmark land cover training data for Africa

#artificialintelligence

Radiant Earth Foundation has released "LandCoverNet," a human-labelled global land cover classification training dataset. This release contains data across Africa, which accounts for one fifth of the global dataset. Available for download on Radiant MLHub, the open geospatial library, LandCoverNet will enable accurate and regular land cover mapping for timely insights into natural and anthropogenic impacts on the Earth. Global land cover maps derived from Earth observations are not new, but the influx of open-access, high-spatial-resolution Earth observations, such as those from the European Space Agency's Sentinel missions, coupled with improved computing power, has encouraged the development of advanced algorithms. Machine learning models applied to high-resolution remotely sensed imagery can classify land cover more accurately and more quickly, given the availability of high-quality training data.
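
As an illustration of the kind of model such training data enables, the sketch below trains a pixel-wise land cover classifier on multispectral band values. The band count, the toy labelling rule, and the random forest choice are assumptions for illustration, not Radiant Earth's pipeline.

```python
# Pixel-wise land cover classification sketch on synthetic "imagery": each
# pixel is a vector of spectral band values. Bands and labels are toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 10            # e.g. ten Sentinel-2 bands per pixel
X = rng.random((n_pixels, n_bands))
y = (X[:, 7] > X[:, 3]).astype(int)     # toy rule: NIR > red -> "vegetation"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```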


Knowledge Graphs And AI: Interview With Chaitan Baru, University Of California San Diego (UCSD)

AITopics Custom Links

One of the challenges with modern machine learning systems is that they depend very heavily on large quantities of data to work well. This is especially the case with deep neural nets, where many layers mean many neural connections, which require large amounts of data and training before the system can deliver results at acceptable levels of accuracy and precision. Indeed, the ultimate implementation of this massive-data, massive-network vision is the currently much-vaunted OpenAI GPT-3, which is so large that it can predict and generate almost any text with seemingly magical fluency. However, in many ways, GPT-3 is still a big-data magic trick. Indeed, Professor Luis Perez-Breva makes this exact point when he says that what we call machine learning isn't really learning at all.


Introducing 'The AI & Machine Learning Imperative'

#artificialintelligence

Leading organizations recognize the potential for artificial intelligence and machine learning to transform work and society. The technologies offer companies strategic new opportunities and can be integrated into a range of business processes -- customer service, operations, prediction, and decision-making -- in scalable, adaptable ways. As with other major waves of technology, AI requires organizations and managers to shed old ways of thinking and develop new skills and capabilities. "The AI & Machine Learning Imperative," an Executive Guide from MIT SMR, offers new insights from leading academics and practitioners in data science and AI. The guide explores how managers and companies can overcome challenges and identify opportunities across three key pillars: talent, leadership, and organizational strategy.


So many stars, so little time: Machine learning helps astroboffins spot the most oxygen-starved galaxy yet

#artificialintelligence

Astronomers have spied a tiny galaxy with the lowest oxygen levels yet observed, a discovery made possible thanks to a machine-learning algorithm. The galaxy, dubbed HSC J1631+4426, has an oxygen abundance just 1.6 per cent that of our Sun – the lowest level yet seen, beating the previous record by just a smidgen. These extremely metal-poor galaxies are rare; they tend to be small, formless dwarf galaxies containing a smattering of stars. The lack of heavier elements such as oxygen is a sign that the galaxy is still in its primordial stage. Elements heavier than hydrogen and helium, from carbon and oxygen all the way up to iron, can only be created by successive generations of stars.
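
The machine-learning step can be pictured as a rare-object search: train a classifier on photometric colours from the survey catalogue, rank objects by their predicted probability of being extremely metal-poor, and follow up the top candidates spectroscopically. Everything in this sketch (the colour features, toy labels, and network size) is an illustrative assumption, not the team's actual selection pipeline.

```python
# Rare-object search sketch: score catalogue objects by predicted
# probability of being extremely metal-poor (EMP), then follow up the top
# candidates. Colours and labels below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
colours = rng.normal(size=(10000, 4))        # e.g. g-r, r-i, i-z, z-y colours
is_emp = (colours[:, 0] < -1.5).astype(int)  # toy label: rare blue outliers

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(colours[:8000], is_emp[:8000])
scores = clf.predict_proba(colours[8000:])[:, 1]
candidates = np.argsort(scores)[::-1][:20]   # top-ranked spectroscopy targets
print("follow-up candidate indices:", candidates)
```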


Machine-learning test may improve kidney failure prediction in patients with diabetes

#artificialintelligence

For patients with type 2 diabetes or the APOL1-HR genotype, a machine learning test integrating biomarkers and electronic health record data demonstrated improved prediction of kidney failure compared with commonly used clinical models. According to Kinsuk Chauhan, MD, MPH, of the Icahn School of Medicine at Mount Sinai, and colleagues, diabetic kidney disease from type 2 diabetes accounts for 44% of all patients with end-stage kidney disease (ESKD); APOL1 high-risk genotypes are likewise associated with increased risk of chronic kidney disease progression and eGFR decline that may ultimately result in kidney failure. "Even though these populations are on average higher risk than the general population, accurate prediction of who will have rapid kidney function decline (RKFD) and worse kidney outcomes is lacking," the researchers wrote, noting that the current standard, the kidney failure risk equation used to predict ESKD, has only been validated in patients who already have kidney disease, not in those with preserved kidney function at baseline. "Widespread electronic health records (EHR) usage provides the potential to leverage thousands of clinical features," the researchers added. "Standard statistical approaches are inadequate to leverage this data due to feature volume, unaligned nature of data and correlation structure."
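
The study's general recipe, many EHR-derived features feeding a machine-learning model that is benchmarked against a sparse clinical model, can be sketched as follows. The features, labels, and model choices here are synthetic placeholders, not the authors' actual test.

```python
# Sketch of the comparison: a model using many EHR-derived features vs. a
# sparse clinical baseline, evaluated by AUC. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
ehr = rng.normal(size=(n, 50))          # labs, vitals, meds, history, ...
risk = ehr[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n)
rkfd = (risk > 1.0).astype(int)         # rapid kidney function decline label

train, test = slice(0, 1500), slice(1500, n)
clinical = LogisticRegression().fit(ehr[train, :3], rkfd[train])  # few variables
ml = RandomForestClassifier(n_estimators=200, random_state=0)
ml.fit(ehr[train], rkfd[train])                                   # all features
for name, model, cols in [("clinical", clinical, slice(0, 3)),
                          ("ML", ml, slice(None))]:
    auc = roc_auc_score(rkfd[test], model.predict_proba(ehr[test, cols])[:, 1])
    print(name, "AUC:", round(auc, 3))
```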


Applying Linearly Scalable Transformers to Model Longer Protein Sequences

#artificialintelligence

In a bid to make transformer models even better for real-world applications, researchers from Google, the University of Cambridge, DeepMind and the Alan Turing Institute have proposed a new transformer architecture called "Performer," based on what they call fast attention via orthogonal random features (FAVOR). First proposed in 2017 and believed to be particularly well suited for language understanding tasks, the transformer is a neural network architecture based on a self-attention mechanism. To date, in addition to achieving SOTA performance in Natural Language Processing and Neural Machine Translation tasks, transformer models have also performed well across other machine learning (ML) tasks such as document generation/summarization, time series prediction, image generation, and analysis of biological sequences. Neural networks usually process language by generating fixed- or variable-length vector-space representations. A transformer, however, performs only a small, constant number of steps; in each step, it applies a self-attention mechanism that can directly model relationships between all words in a sentence, regardless of their positions. Because that mechanism compares every pair of positions, its cost grows quadratically with sequence length, which is the bottleneck FAVOR is designed to remove.
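
The key trick behind FAVOR can be shown in a few lines: replace the exact L x L softmax attention matrix with random feature maps of the queries and keys, so attention can be computed in time linear in the sequence length. This simplified sketch uses plain Gaussian random features rather than the orthogonal ones the paper prescribes, and omits causal masking.

```python
# NumPy sketch of FAVOR-style attention: random feature maps of queries and
# keys make attention cost scale linearly in sequence length L instead of
# quadratically (simplified: non-orthogonal features, no masking).
import numpy as np

def feature_map(x, W):
    # Positive random features whose inner products approximate exp(q.k)
    return np.exp(x @ W - np.sum(x**2, axis=-1, keepdims=True) / 2) / np.sqrt(W.shape[1])

def favor_attention(Q, K, V, m=256, seed=0):
    d = Q.shape[-1]
    W = np.random.default_rng(seed).normal(size=(d, m))
    q = feature_map(Q / d**0.25, W)   # scaling mimics the 1/sqrt(d) in softmax
    k = feature_map(K / d**0.25, W)
    kv = k.T @ V                      # (m, d_v): never forms the L x L matrix
    norm = q @ k.sum(axis=0)          # per-query normaliser, shape (L,)
    return (q @ kv) / norm[:, None]

L, d = 1024, 64
Q, K, V = (np.random.default_rng(i).normal(size=(L, d)) for i in range(3))
out = favor_attention(Q, K, V)        # approximates softmax(Q K^T / sqrt(d)) V
print(out.shape)                      # (1024, 64)
```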