Expert Systems


Why mechanical engineers should learn A.I.

#artificialintelligence

AI is poised to bring a paradigm shift to several fields of mechanical engineering. In Computer-Aided Design (CAD), AI is generally applied through knowledge-based systems: design artefacts, rules, and problems are stored and later used to assist CAD designers. AI and CAD are merged through Model-Based Reasoning (MBR), and many new software releases now ship with knowledge-based systems.
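As a rough illustration of how such a knowledge-based system can store and apply design rules, the sketch below checks a proposed part against a small rule base; the rules, parameter names, and thresholds are hypothetical and not drawn from any particular CAD package.

```python
# Minimal sketch of a knowledge-based design check. The rules and part
# parameters below are hypothetical examples, not from a real CAD system.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DesignRule:
    name: str
    check: Callable[[dict], bool]   # returns True when the design satisfies the rule


# Hypothetical rules a knowledge base might store for a shaft design.
RULES: List[DesignRule] = [
    DesignRule("minimum wall thickness >= 2 mm",
               lambda p: p["wall_thickness_mm"] >= 2.0),
    DesignRule("length-to-diameter ratio <= 20",
               lambda p: p["length_mm"] / p["diameter_mm"] <= 20.0),
]


def review_design(params: dict) -> List[str]:
    """Return the names of the rules the proposed design violates."""
    return [rule.name for rule in RULES if not rule.check(params)]


if __name__ == "__main__":
    proposed = {"wall_thickness_mm": 1.5, "length_mm": 500.0, "diameter_mm": 20.0}
    print(review_design(proposed))   # both rules are violated by this design
```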


L.A. County sees another sharp rise in coronavirus cases as mask rules set to take effect

Los Angeles Times

Los Angeles County recorded more than 1,900 new coronavirus cases Friday, another major jump, as a mandatory mask requirement for indoor public places takes effect Saturday night. Over the last week, L.A. County has reported an average of more than 1,000 new coronavirus cases a day -- a tally that, though merely a fraction of the sky-high counts seen during previous surges, is still six times what the county was seeing in mid-June. Daily case numbers have jumped: 1,537 new cases were reported Thursday, and 1,902 more were added Friday. COVID-19 hospitalizations also doubled over that same period, from 223 on June 15 to 462 on Thursday. More than 8,000 coronavirus-positive patients were hospitalized countywide during the darkest days of the winter wave.


Most Covid rules set to be lifted in Wales on 7 August

BBC News

Cases of the virus have risen sharply since the Delta variant emerged six weeks ago but, thanks to our fantastic vaccination programme, we are not seeing these translate into large numbers of people falling seriously ill or needing hospital treatment.


Facebook Groups can now have dedicated topic 'experts'

Engadget

Facebook is working on a new way to highlight authoritative information within Groups. The platform is starting to roll out a new "expert" label for group members who have expertise in an area related to the group's interests. With the change, which Facebook says is available to "select" Groups, an admin can invite a group member to become a group "expert." If the person accepts, they'll get a badge next to their name, similar to the way group moderators and admins are identified. Notably, being a group "expert" doesn't grant extra control over group features or higher visibility within the group.


Nexyad and HERE improve vehicle safety with next generation, cognitive artificial intelligence

#artificialintelligence

Paris and Amsterdam – Nexyad, the embedded, real-time platform for aggregating on-board data, and HERE Technologies, the leading location data and technology platform, are now working together to apply cognitive AI to road safety. Nexyad uses cognitive AI to aggregate extensive data sources in a vehicle in real time and interprets them to assess whether a given driving behaviour is appropriate for the surrounding context. Nexyad's assessment, which can easily be delivered to a driver via a mobile phone, can be calculated from only four sets of data: the HERE map, the Global Navigation Satellite System, the electronic horizon, and acceleration. Nexyad's platform is also scalable and can aggregate data from Advanced Driver Assistance System (ADAS) sensors, including camera, radar and lidar, as well as weather (visibility and temperature) and traffic data. Nexyad's real-time data aggregation platform provides two output values 20 times every second: the driver's lack of caution and the maximum recommended speed given the road conditions – legal speed limit, road roughness, road topography, weather, and traffic.
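As a conceptual sketch only (the field names, formula, and update loop below are our assumptions, not Nexyad's actual algorithm or the HERE API), the following shows how four inputs could be combined into the two output values at 20 Hz:

```python
# Illustrative sketch of a 20 Hz aggregation loop producing two outputs:
# lack of caution and maximum recommended speed. All names and formulas
# are assumptions for illustration, not Nexyad's implementation.

import time
from dataclasses import dataclass


@dataclass
class Sample:
    legal_speed_kmh: float      # speed limit from the map
    curve_speed_kmh: float      # safe speed suggested by the electronic horizon
    gnss_speed_kmh: float       # measured vehicle speed
    longitudinal_accel: float   # m/s^2 from the accelerometer


def assess(sample: Sample) -> tuple[float, float]:
    """Return (lack_of_caution in [0, 1], max recommended speed in km/h)."""
    max_recommended = min(sample.legal_speed_kmh, sample.curve_speed_kmh)
    # Lack of caution grows as the measured speed exceeds the recommendation.
    excess = max(0.0, sample.gnss_speed_kmh - max_recommended)
    lack_of_caution = min(1.0, excess / max(max_recommended, 1.0))
    return lack_of_caution, max_recommended


def run_at_20hz(read_sensors, publish, period_s: float = 0.05):
    """Poll the sensors and publish the two output values 20 times per second."""
    while True:
        publish(assess(read_sensors()))
        time.sleep(period_s)
```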


Surfside building collapse: Multiple lawsuits seek to get answers, assign blame

FOX News

Even as the search continues over a week later for signs of life in the mangled debris of the fallen Champlain Towers South, the process of seeking answers about why it happened and who is to blame is already underway in Florida's legal system. Authorities have opened criminal and civil investigations into the collapse of the oceanfront condominium building, which left at least 28 confirmed dead and more than 117 unaccounted for. Miami-Dade State Attorney Katherine Fernandez Rundle pledged to bring the matter soon before grand jurors, who could recommend criminal charges or simply investigate the cause to suggest reforms. And at least five lawsuits have been filed on behalf of residents who survived or are feared dead. One lawyer involved in the litigation said the collapse raises widespread concerns about infrastructure issues and the trust we put in those responsible for them.



An Empirical Investigation into Deep and Shallow Rule Learning

arXiv.org Artificial Intelligence

Inductive rule learning is arguably among the most traditional paradigms in machine learning. Although we have seen considerable progress over the years in learning rule-based theories, all state-of-the-art learners still learn descriptions that directly relate the input features to the target concept. In the simplest case, concept learning, this is a disjunctive normal form (DNF) description of the positive class. While this is sufficient from a logical point of view, because every logical expression can be reduced to an equivalent DNF expression, it could nevertheless be the case that more structured representations, which build deep theories by introducing intermediate concepts, are easier to learn, in much the same way as deep neural networks are able to outperform shallow networks even though the latter are also universal function approximators. In this paper, we empirically compare deep and shallow rule learning with a uniform general algorithm that relies on greedy mini-batch-based optimization. Our experiments on both artificial and real-world benchmark data indicate that deep rule networks outperform shallow networks.
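As a toy illustration of the difference between a shallow DNF theory and a deep theory with an intermediate concept (our own example, not the rule-network learner evaluated in the paper):

```python
# Toy comparison of a shallow DNF theory and a deep theory with an
# intermediate concept, for the parity-like target y = (a XOR b).

def shallow_dnf(a: bool, b: bool) -> bool:
    # Direct DNF over the inputs: (a AND NOT b) OR (NOT a AND b)
    return (a and not b) or (not a and b)


def deep_theory(a: bool, b: bool) -> bool:
    # Intermediate concepts formed as auxiliary rules ...
    both = a and b                # concept "both features active"
    neither = not a and not b     # concept "no feature active"
    # ... reused by the output rule: XOR holds when neither concept does.
    return not (both or neither)


# Both theories describe the same target concept.
assert all(shallow_dnf(a, b) == deep_theory(a, b)
           for a in (False, True) for b in (False, True))
```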


Labelling Drifts in a Fault Detection System for Wind Turbine Maintenance

arXiv.org Artificial Intelligence

A failure detection system is the first step towards predictive maintenance strategies. A popular data-driven method to detect incipient failures and anomalies is to train normal-behaviour models with a machine learning technique such as feed-forward neural networks (FFNN) or extreme learning machines (ELM). However, the performance of any of these modelling techniques can deteriorate when unexpected non-stationarities arise in the dynamic environment in which industrial assets operate. This unpredictable statistical change in the measured variable is known as concept drift. In this article a wind turbine maintenance case is presented, where non-stationarities of various kinds can happen unexpectedly. The aim is to detect such concept drift events by means of statistical detectors and window-based approaches. However, in real complex systems, concept drifts are not as clear and evident as in artificially generated datasets. In order to evaluate the effectiveness of current drift detectors and to design an appropriate novel technique for this specific industrial application, it is essential to have a characterization of the existing drifts beforehand. Given the lack of information in this regard, a methodology for labelling concept drift events in the lifetime of wind turbines is proposed. This methodology will facilitate the creation of a drift database that will serve both as a training ground for concept drift detectors and as valuable information to enhance knowledge about the maintenance of complex systems.
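For intuition, a minimal window-based drift check on a monitored signal might look like the sketch below; the window sizes and threshold are illustrative assumptions, not the detectors evaluated in the article:

```python
# Minimal sketch of a window-based drift check on a monitored residual signal
# (e.g. a normal-behaviour model's prediction error). Window sizes and the
# threshold are illustrative choices only.

import numpy as np


def drift_detected(signal: np.ndarray,
                   reference_size: int = 500,
                   recent_size: int = 100,
                   threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean deviates from the reference mean by
    more than `threshold` reference standard deviations."""
    reference = signal[:reference_size]
    recent = signal[-recent_size:]
    ref_std = reference.std() + 1e-12          # avoid division by zero
    return abs(recent.mean() - reference.mean()) / ref_std > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stable = rng.normal(0.0, 1.0, 1000)
    drifted = np.concatenate([stable, rng.normal(4.0, 1.0, 200)])
    print(drift_detected(stable), drift_detected(drifted))   # False True
```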


Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction

arXiv.org Artificial Intelligence

Prediction of survival in patients diagnosed with a brain tumour is challenging because of heterogeneous tumour behaviours and responses to treatment. Better estimates of prognosis would support treatment planning and patient support. Advances in machine learning have informed the development of clinical predictive models, but their integration into clinical practice is almost non-existent. One reason for this is the lack of interpretability of the models. In this paper, we use a novel brain tumour dataset to compare two interpretable rule list models against popular machine learning approaches for brain tumour survival prediction. All models are quantitatively evaluated using standard performance metrics. The rule lists are also qualitatively assessed for their interpretability and clinical utility. The interpretability of the black-box machine learning models is evaluated using two post-hoc explanation techniques, LIME and SHAP. Our results show that the rule lists were only slightly outperformed by the black-box models. We demonstrate that rule list algorithms produced simple decision lists that align with clinical expertise. By comparison, post-hoc interpretability methods applied to black-box models may produce unreliable explanations of local model predictions. Model interpretability is essential for understanding differences in predictive performance and for integration into clinical practice.
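To illustrate why rule lists are directly interpretable, the sketch below applies a small ordered rule list to a patient record; the features, thresholds, and rules are hypothetical and not the clinical rule lists learned in the paper:

```python
# Minimal sketch of an ordered rule list producing an interpretable prediction.
# The features, thresholds, and outcomes are hypothetical illustrations.

from typing import Callable, List, Tuple

Rule = Tuple[str, Callable[[dict], bool], str]   # (description, condition, prediction)

RULE_LIST: List[Rule] = [
    ("age >= 65 and tumour_grade == 4",
     lambda p: p["age"] >= 65 and p["tumour_grade"] == 4, "short survival"),
    ("resection == 'complete'",
     lambda p: p["resection"] == "complete", "long survival"),
]
DEFAULT = "short survival"


def predict(patient: dict) -> Tuple[str, str]:
    """Return (prediction, the rule that fired) for one patient."""
    for description, condition, outcome in RULE_LIST:
        if condition(patient):
            return outcome, description
    return DEFAULT, "default rule"


print(predict({"age": 58, "tumour_grade": 3, "resection": "complete"}))
# -> ('long survival', "resection == 'complete'")
```

Because the prediction is traced to a single human-readable rule, a clinician can inspect exactly why the model made its call, which is the property the paper contrasts with post-hoc explanations of black-box models.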