Norway


Employees attribute AI project failure to poor data quality

#artificialintelligence

A clear majority of employees (87%) peg data quality issues as the reason their organizations failed to successfully implement AI and machine learning. That's according to Alation's latest quarterly State of Data Culture Report, produced in partnership with Wakefield Research, which also found that only 8% of data professionals believe AI is being used across their organizations. For the report, Wakefield conducted a quantitative study of 300 data and analytics leaders at enterprises with more than 2,500 employees in the U.S., U.K., Germany, Denmark, Sweden, and Norway. The leaders were polled about their organizations' progress in establishing a culture of data-driven decision-making and the challenges they continue to face. According to Alation, 87% of professionals also say inherent biases in the data used in their AI systems produce discriminatory results, creating compliance risks for their organizations.


A shapeshifting robot is figuring out how to stay upright in the real world

Mashable

Created by researchers at the University of Oslo, DyRET (the Dynamic Robot for Embodied Testing) is a four-legged, dog-like robot built to improve the versatility and efficacy of walking robots. DyRET can change the length of its limbs on the spot, helping it maintain balance on a variety of surfaces. While the robot is still slow and wobbly, its shape-shifting approach could play a significant role in future robotics.


Norway's first AI-powered robotic sorter for industrial waste using ZenRobotics technology

#artificialintelligence

Norwegian waste management frontrunner Bjorstaddalen has opened the country's first robotic sorting facility for construction and demolition (C&D) and commercial and industrial (C&I) waste in the municipality of Skien, Norway. The fully automated robotic sorting station, supplied by ZenRobotics, features robotic arms that perform up to 6,000 picks per hour. It is set up as a standalone waste sorting process connected to Bjorstaddalen's existing material recycling facility, which has a total capacity of 150,000 tons per year. By investing in AI and robot technologies, Bjorstaddalen aims to become a leader in material recycling in Norway. The robotic sorting station will substantially increase material recovery, reducing waste incineration and marking a major step toward the circular economy.


Watch a Shape-Shifting Robot Prowl the Big, Bad World

WIRED

Sure, evolution invented mammals that soar 200 feet through the air on giant flaps of skin and 3-foot-wide crabs that climb trees, but has it ever invented a four-legged animal with telescoping limbs? Meet the Dynamic Robot for Embodied Testing, aka DyRET, a machine that changes the length of its legs on the fly, not to creep out humans, but to help robots of all stripes not fall over so much. Writing today in the journal Nature Machine Intelligence, researchers in Norway and Australia describe how they got DyRET to learn to lengthen or shorten its limbs to tackle different kinds of terrain. Then, once they let the shape-shifting robot loose in the real world, it used that training to efficiently tread surfaces it had never seen before. "We can actually take the robot, bring it outside, and it will just start adapting," says computer scientist Tønnes Nygaard of the University of Oslo and the Norwegian Defence Research Establishment, the lead author on the paper.


AI tissue-section analysis system for diagnosing breast cancer

#artificialintelligence

A team at Charité – Universitätsmedizin Berlin, TU Berlin, and the University of Oslo has developed a system that, for the first time, integrates morphological, molecular, and histological data in a single analysis. The system also explains the AI decision process in the form of heatmaps, which show which visual information influenced the decision and to what extent. This enables doctors to understand and assess the plausibility of the results, an essential step toward the future use of AI systems in hospitals. The research has been published in Nature Machine Intelligence. The molecular characterisation of tumour tissue samples is becoming increasingly important for cancer treatment, with studies being conducted to determine changes to DNA as well as gene and protein expression in the samples.


International Women's Day

#artificialintelligence

As I prepare for the Amplifying Her Voice event on International Women's Day hosted by The State of Women Institute (it's a three-day event! Register here), I wish I could be more excited about celebrating the progress of women in my field. But given that my field is AI, machine learning, and data science, I am disappointingly unsatisfied with the state of gender parity in the world today. I can imagine a world where women's participation in AI fields is so normal that discussing gender parity is an afterthought. A world where there are so many examples of women (ALL women!) in technical and leadership roles that none of us feel like imposters. A world where we are celebrated and included in AI decisions, designs and policy improvements.


Making the role of AI in medicine explainable

#artificialintelligence

Researchers at Charité – Universitätsmedizin Berlin and TU Berlin, as well as the University of Oslo, have developed a new tissue-section analysis system for diagnosing breast cancer based on artificial intelligence (AI). Two features make this system unique: for the first time, morphological, molecular and histological data are integrated in a single analysis. Secondly, the system explains the AI decision process in the form of heatmaps. Pixel by pixel, these heatmaps show which visual information influenced the AI decision process and to what extent, thus enabling doctors to understand and assess the plausibility of the results of the AI analysis. This represents a decisive and essential step forward for the future routine use of AI systems in hospitals. The results of this research have now been published in Nature Machine Intelligence.


Machine learning offers fresh approach to tackling SQL injection vulnerabilities

#artificialintelligence

A new machine learning technique could make it easier for penetration testers to find SQL injection exploits in web applications. Introduced in a recently published paper by researchers at the University of Oslo, the method uses reinforcement learning to automate the process of exploiting a known SQL injection vulnerability. While the technique comes with quite a few caveats and assumptions, it offers a promising path toward machine learning models that can assist in penetration testing and security assessment tasks. Reinforcement learning is a branch of machine learning in which an AI model is given the possible actions and rewards of an environment and is left to find the best ways to apply those actions to maximize the reward. "It's inevitable that AI and machine learning are also applied in offensive security," Laszlo Erdodi, lead author of the paper and a postdoctoral fellow at the department of informatics at the University of Oslo, told The Daily Swig.
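The reinforcement learning setup described above (possible actions, rewards from an environment, and learning to maximize the reward) can be illustrated with a minimal tabular Q-learning loop on a toy environment. The actions and reward values below are hypothetical stand-ins, not the payload-selection setup from the paper:

```python
import random

# Toy environment: the agent chooses among 3 actions, and action 2 yields
# the highest reward. In a penetration-testing setting, actions might
# instead correspond to candidate queries, with rewards for progress.
ACTIONS = [0, 1, 2]
REWARDS = {0: 0.0, 1: 0.5, 2: 1.0}

q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

random.seed(0)
for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = REWARDS[a]
    q[a] += alpha * (r - q[a])  # move the estimate toward the observed reward

best = max(q, key=q.get)  # the agent converges on the highest-reward action
```

The epsilon-greedy rule is what lets the agent discover the best action without being told it in advance; after enough trials, `q` reflects the true reward structure and `best` is the highest-paying action.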


An introduction to distributed training of deep neural networks for segmentation tasks with large seismic datasets

arXiv.org Artificial Intelligence

Deep learning applications are progressing rapidly in seismic processing and interpretation tasks. However, the majority of approaches subsample data volumes and restrict model sizes to minimise computational requirements. Subsampling the data risks losing vital spatio-temporal information that could aid training, whilst restricting model sizes can impact model performance or, in extreme cases, render more complicated tasks such as segmentation impossible. This paper illustrates how to tackle the two main issues in training large neural networks: memory limitations and impracticably long training times. Typically, training data is preloaded into memory prior to training, a particular challenge for seismic applications where data is typically four times larger than that used for standard image processing tasks (float32 vs. uint8). Using a microseismic use case, we illustrate how over 750GB of data can be used to train a model via a data generator approach that stores in memory only the data required for each training batch. Furthermore, efficient training of large models is illustrated through the training of a 7-layer UNet with input data dimensions of 4096 × 4096 (approximately 7.8M parameters). Through a batch-splitting distributed training approach, training times are reduced by a factor of four. The combination of data generators and distributed training removes any necessity for data subsampling or restriction of neural network sizes, offering the opportunity to utilise larger networks, higher-resolution input data, or a move from 2D to 3D problem spaces.
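A data generator of the kind described above can be sketched as follows; the file layout (one `.npy` file per sample) and array shapes are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def batch_generator(data_paths, batch_size):
    """Yield shuffled batches, loading only the files needed for each batch
    from disk rather than preloading the full dataset into memory."""
    while True:  # loop indefinitely, as training frameworks typically expect
        order = np.random.permutation(len(data_paths))
        for start in range(0, len(order), batch_size):
            idx = order[start:start + batch_size]
            # Each sample is read on demand; peak memory is one batch, not
            # the whole dataset, which is what makes a 750GB corpus tractable.
            batch = np.stack([np.load(data_paths[i]) for i in idx])
            yield batch
```

A generator like this can be passed directly to a framework training loop (e.g. a Keras `model.fit` call); for very large individual samples, memory-mapped loading (`np.load(..., mmap_mode="r")`) would further reduce the resident footprint.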


Bayesian Neural Networks for Virtual Flow Metering: An Empirical Study

arXiv.org Machine Learning

Recent works have presented promising results from the application of machine learning (ML) to the modeling of flow rates in oil and gas wells. The encouraging results combined with advantageous properties of ML models, such as computationally cheap evaluation and ease of calibration to new data, have sparked optimism for the development of data-driven virtual flow meters (VFMs). We contribute to this development by presenting a probabilistic VFM based on a Bayesian neural network. We consider homoscedastic and heteroscedastic measurement noise, and show how to train the models using maximum a posteriori estimation and variational inference. We study the methods by modeling on a large and heterogeneous dataset, consisting of 60 wells across five different oil and gas assets. The predictive performance is analyzed on historical and future test data, where we achieve an average error of 5-6% and 9-13% for the 50% best performing models, respectively. Variational inference appears to provide more robust predictions than the reference approach on future data. The difference in prediction performance and uncertainty on historical and future data is explored in detail, and the findings motivate the development of alternative strategies for data-driven VFM.
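The homoscedastic versus heteroscedastic distinction can be made concrete with the Gaussian negative log-likelihood that such noise models minimise; this is a generic sketch of that objective, not the paper's exact training setup:

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Mean negative log-likelihood of targets y under N(mu, sigma^2).

    Homoscedastic case:   sigma is a single scalar shared by all points.
    Heteroscedastic case: sigma is an array with one value per point,
    typically predicted by the network alongside mu.
    """
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), np.shape(y))
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + 0.5 * ((y - mu) / sigma)**2)
```

Minimising this loss over the network weights, with an added log-prior penalty on the weights, corresponds to the maximum a posteriori estimation mentioned in the abstract; variational inference instead approximates the full posterior over the weights.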