LafargeHolcim will implement automation and robotics, artificial intelligence, predictive maintenance and digital twin technologies across its production process. The company is upgrading its production fleet for the future through its "Plants of Tomorrow" program, which will be rolled out over four years as LafargeHolcim modernises its technologies across the building materials industry. The company predicts that a "Plants of Tomorrow"-certified operation will show 15 to 20 percent operational efficiency gains compared with a conventional cement plant. Among the technologies being implemented are predictive operations that detect abnormal conditions and process anomalies in real time, with the aim of reducing maintenance costs by more than 10 percent and significantly lowering energy costs. Digital twins of plants will also be created to optimise training opportunities. Automation and robotics are another important element of the strategy: unmanned surveillance is being introduced for high-exposure jobs across the entire plant. Partnering with Swiss start-up Flyability, the company is using drones to inspect confined spaces, allowing the frequency of inspections to increase while simultaneously reducing cost and improving safety for employees. In addition, the new PACT (Performance and Collaboration) digital tool shifts operational decision making from experience-based to data-centric by combining data from various sources and enabling machine learning applications. LafargeHolcim is currently working on more than 30 pilot projects covering all regions where the company is active. The first integrated cement plant will be at LafargeHolcim's premises in Siggenthal, Switzerland, which will test all modules of the "Plants of Tomorrow" program.
LafargeHolcim Global Head of Cement Manufacturing, Solomon Baumgartner Aviles, said transforming the way the company produces cement is one of the focus areas of its digitalisation strategy, and that the "Plants of Tomorrow" initiative will turn Industry 4.0 into reality at its plants. "These innovative solutions make cement production safer, more efficient and environmentally fit," he said.
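The predictive-operations idea above, flagging abnormal process readings in real time, can be illustrated with a minimal rolling z-score detector. This is a generic sketch with made-up kiln-temperature values, not LafargeHolcim's actual system; the function name, window and threshold are all illustrative.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the previous `window` values."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical kiln-temperature stream with one abnormal spike at index 6.
temps = [1450, 1452, 1449, 1451, 1450, 1453, 1580, 1451, 1450]
print(detect_anomalies(temps))  # → [6]
```

In a real plant the threshold and window would be tuned per sensor, and the detector would feed a maintenance-scheduling system rather than a print statement.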
Artificial intelligence might not be as smart as we think. University and military researchers are studying how attackers could hack into AI systems by exploiting how these systems learn. It's known as "adversarial AI." In this encore episode, Dina Temple-Raston tells us that some of these experiments use seemingly simple techniques. For more, check out Dina's special series, I'll Be Seeing You.
Deep learning is a sub-field of machine learning and an aspect of artificial intelligence. To understand it more easily, think of it as an attempt to emulate the learning approach that humans use to acquire certain types of knowledge. This is somewhat different from classical machine learning, and people often confuse the two. Deep learning passes data through a sequence of stacked, non-linear layers, while many classical machine learning models learn a single, often linear, mapping from hand-crafted features. To see this more concretely, consider the example of a child learning what a flower is: shown examples, the child asks again and again, "Is this a flower?", refining the concept with each answer.
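The layered-versus-linear contrast can be made concrete with a classic toy case: no single linear map can compute XOR, but two stacked layers with a non-linearity can. The weights below are hand-chosen for illustration, not learned:

```python
def relu(x):
    # Rectified linear unit: the non-linearity between the two layers.
    return max(0.0, x)

def xor_two_layer(x1, x2):
    """Compute XOR with two stacked layers and a ReLU non-linearity.
    Hidden layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1).
    Output layer: y = h1 - 2*h2. No single linear map y = a*x1 + b*x2 + c
    can reproduce this truth table."""
    h1 = relu(x1 + x2)
    h2 = relu(x1 + x2 - 1)
    return h1 - 2 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))
```

A trained deep network discovers weights like these from data; stacking more layers lets it build progressively more abstract features, which is the sense in which it "learns a hierarchy" rather than a single linear rule.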
I want to talk about a misconception about the difference between inference and prediction. For a well-run, analytically oriented business, there may not be as many reasons to prefer inference over prediction as one may have heard. A common refrain is that data scientists err in centering so much on prediction, a mistake no true-Scotsman statistician would make. I've actually come to question this more and more: mere differences in practice between two fields don't immediately imply that either field is inferior or in error.
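To make the distinction concrete: inference asks what a fitted coefficient tells us about the underlying relationship, while prediction asks what output to expect for a new input. A minimal sketch with simulated data, where all numbers are illustrative:

```python
import random

random.seed(0)

# Simulate data from the true relationship y = 2*x + noise.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

# Fit simple least squares: slope = cov(x, y) / var(x).
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Inference: interpret the coefficient (how strongly does x affect y?).
print(f"estimated effect of x on y: {slope:.2f}")

# Prediction: score a new, unseen input.
x_new = 4.0
print(f"predicted y at x={x_new}: {intercept + slope * x_new:.2f}")
```

Both tasks use the same fitted line; they differ in what you read off it, so the interesting question is which answer the business actually needs, not which field's habit is "correct".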
In a survey conducted by Gurugram-based BML Munjal University (School of Law) in July 2020, it was found that about 42% of lawyers believed that in the next 3 to 5 years, as much as 20% of regular, day-to-day legal work could be performed with technologies such as artificial intelligence. The survey also found that about 94% of law practitioners favoured research and analytics as the most desirable skills in young lawyers. Earlier this year, Chief Justice of India SA Bobde, in no uncertain terms, underlined that the Indian judiciary must equip itself by incorporating artificial intelligence into its system, especially for document management and cases of a repetitive nature. With more industries and professional sectors embracing AI and data analytics, the legal industry, albeit in a limited way, is no exception. According to the 2020 report of the National Judicial Data Grid, over the last decade, 3.7 million cases were pending across various courts in India, including high courts, district and taluka courts.
Using those primitives, DeepMind generated a dataset known as Procedurally Generated Matrices (PGM) that consists of triplets [progression, shape, color]. The relationship between the attributes in a triplet represents an abstract challenge. For instance, if the first attribute is progression, the values of the other two attributes must progress along the rows or columns of the matrix. In order to show signs of abstract reasoning using PGM, a neural network must be able to explicitly compute relationships between the different matrix images and evaluate the viability of each potential answer in parallel. To address this challenge, the DeepMind team created a new neural network architecture called the Wild Relation Network (WReN), in recognition of Mary Wild, John Raven's wife and a contributor to the original IQ test. In the WReN architecture, a convolutional neural network (CNN) processes each context panel and an individual answer-choice panel independently to produce 9 vector embeddings. This set of embeddings is then passed to a Relation Network, whose output is a single sigmoid unit encoding the "score" for the associated answer-choice panel.
For the visualization of features, the authors use deconvolutional networks (deconvnets). Think of a deconvnet as the decoder part of an autoencoder: it does the reverse of a normal convolutional network, using unpooling and filtering to recover pixels from features. The only confusing part of this network is how it undoes the pooling, because when max pooling is applied with an NxN window, only one of the N² values survives. The discarded values cannot be recovered, and while the max value is still there, it is of no use if we don't know where it was located in the output of the convolutional layer. The deconvnet therefore records the position of each maximum during pooling and places the value back at that position when unpooling.
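One way to resolve this, used by deconvnets, is to record the position of each maximum (a "switch") during pooling and reuse it during unpooling. A minimal plain-Python sketch with an illustrative 4x4 image and 2x2 windows:

```python
def max_pool_with_switches(image, size=2):
    """Max pooling that also records the location ("switch") of each
    maximum, so unpooling can put the value back where it came from."""
    h, w = len(image), len(image[0])
    pooled, switches = [], []
    for i in range(0, h, size):
        row, srow = [], []
        for j in range(0, w, size):
            value, loc = max(
                (image[a][b], (a, b))
                for a in range(i, i + size)
                for b in range(j, j + size)
            )
            row.append(value)
            srow.append(loc)
        pooled.append(row)
        switches.append(srow)
    return pooled, switches

def unpool(pooled, switches, h, w):
    """Place each max back at its recorded location; everything else
    stays zero, since the discarded values are unrecoverable."""
    out = [[0] * w for _ in range(h)]
    for i, row in enumerate(pooled):
        for j, v in enumerate(row):
            a, b = switches[i][j]
            out[a][b] = v
    return out

img = [[1, 3, 2, 0],
       [4, 2, 0, 5],
       [6, 1, 2, 2],
       [0, 7, 3, 1]]
pooled, sw = max_pool_with_switches(img)
print(pooled)                  # → [[4, 5], [7, 3]]
print(unpool(pooled, sw, 4, 4))
```

The unpooled map is mostly zeros with each max restored to its original position, which is exactly the approximate inverse the visualization method needs.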