Scientists are always hunting for materials with superior properties. They therefore continually synthesize, characterize and measure the properties of new materials using a range of experimental techniques. Computational modelling is also used to estimate the properties of materials. However, there is usually a trade-off between the cost of the experiments (or simulations) and the accuracy of the measurements (or estimates), which has limited the number of materials that can be tested rigorously. Writing in Nature Computational Science, Chen et al.1 report a machine-learning approach that combines measurement and simulation data from multiple sources, each with a different level of approximation, to learn and predict materials' properties. Their method allows the construction of a more general and accurate model of such properties than was previously possible, thereby facilitating the screening of promising candidate materials.
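The multi-fidelity idea behind such approaches can be illustrated with a minimal sketch (this is not the authors' actual model, which is described in the paper): pool many cheap, approximate estimates with a few accurate measurements, tag each point with a fidelity indicator, and let a single regression learn both the property and the systematic offset between fidelities. All functions and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth property (e.g. a band gap) as a function of
# a single composition descriptor x -- purely illustrative.
def f_high(x):   # "experiment": accurate but expensive, so few points
    return 1.5 * x + 0.4 * np.sin(6 * x)

def f_low(x):    # "cheap simulation": systematically biased, so many points
    return f_high(x) + 0.3 - 0.2 * x

x_hi = rng.uniform(0, 1, 8)      # only 8 accurate measurements
x_lo = rng.uniform(0, 1, 200)    # 200 approximate estimates

# Feature map with a fidelity indicator (1 = accurate, 0 = approximate)
# and a fidelity-by-composition interaction, so the model can learn the
# systematic offset between the two data sources.
def features(x, fid):
    return np.column_stack([np.ones_like(x), x, np.sin(6 * x),
                            fid * np.ones_like(x), fid * x])

X = np.vstack([features(x_hi, 1.0), features(x_lo, 0.0)])
y = np.concatenate([f_high(x_hi), f_low(x_lo)])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict at the *high* fidelity level on unseen compositions: the many
# approximate points pin down the shape, the few accurate points pin
# down the offset. (Noise is omitted for clarity.)
x_test = np.linspace(0, 1, 50)
y_pred = features(x_test, 1.0) @ coef
rmse = np.sqrt(np.mean((y_pred - f_high(x_test)) ** 2))
print(f"high-fidelity RMSE: {rmse:.4f}")
```

Because the pooled fit borrows the functional shape from the plentiful low-fidelity data, it generalizes far better than a model trained on the eight accurate points alone.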
In December, the University of Texas at Austin's computer science department announced that it would stop using a machine-learning system to evaluate applicants for its Ph.D. program due to concerns that encoded bias may exacerbate existing inequities in the program and in the field in general. This move toward more inclusive admissions practices is a rare (and welcome) exception to a worrying trend in education: Colleges, standardized test providers, consulting companies, and other educational service providers are increasingly adopting predatory, discriminatory, and outright exclusionary student data practices. Student data has long been used as a college recruiting and admissions tool. In 1972, College Board, the company that owns the PSAT, the SAT, and the AP Exams, created its Student Search Service and began licensing student names and data profiles to colleges (hence the college catalogs that fill the mailboxes of high school students who have taken the exams). Today, College Board licenses millions of student data profiles every year for 47 cents per examinee.
A growing number of IT workers are worried about what artificial intelligence (AI) and machine learning technologies mean for their future. Research from cybersecurity firm Trend Micro claims that nearly half of IT leaders think AI will render their roles redundant over the coming decade. Meanwhile, a 2020 report by security management platform Exabeam found that 53% of cybersecurity professionals aged 45 or under view AI and machine learning as threats to job security. Are IT professionals right to be concerned about the rise of AI technology, and how can they stay relevant in the years to come? There are many different reasons IT professionals are worried about the rise and advancement of AI in the technology workplace, according to Exabeam security specialist Sam Humphries.
Scientists from the Max Planck Institute of Psychiatry, led by Nikolaos Koutsouleris, combined psychiatric assessments with machine-learning models that analyze clinical and biological data. Although psychiatrists make very accurate predictions about positive disease outcomes, they might underestimate the frequency of adverse cases that lead to relapses. Algorithmic pattern recognition helps physicians to better predict the course of disease. The results of the study show that it is the combination of artificial and human intelligence that optimizes the prediction of mental illness. "This algorithm enables us to improve the prevention of psychosis, especially in young patients at high risk or with emerging depression, and to intervene in a more targeted and well-timed manner," explains Koutsouleris.
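The closing claim, that combining clinician judgement with algorithmic prediction works best, can be sketched generically. The weighting scheme and all numbers below are a hypothetical illustration, not the study's method: each patient gets a clinician risk score and a model risk score, and the two are blended into a single estimate.

```python
import numpy as np

# Hypothetical risk scores in [0, 1] for five patients: clinicians are
# assumed accurate on low-risk cases but to underestimate relapse risk,
# while the model flags more of the high-risk cases (mirroring the
# article's framing). All numbers are invented for illustration.
p_clinician = np.array([0.10, 0.15, 0.20, 0.30, 0.35])
p_model     = np.array([0.12, 0.18, 0.45, 0.70, 0.80])

def combined_risk(p_human, p_machine, w=0.5):
    """Convex combination of two probability estimates.

    w is the weight on the human estimate; in practice such a weight
    would be tuned on held-out cases rather than fixed at 0.5.
    """
    return w * p_human + (1 - w) * p_machine

p_combined = combined_risk(p_clinician, p_model)
print(p_combined)
```

A convex combination keeps the result a valid probability and lets the model pull up the risk estimate precisely where clinicians are assumed to underestimate it.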