Housebound by a pandemic, humanity slowed its emissions of greenhouse gases in 2020. But Earth paid little heed: Temperatures last year tied the modern record, climate scientists reported last week. Overall, the planet was about 1.25°C warmer than in preindustrial times, a trend that puts climate targets in jeopardy, according to jointly reported assessments from NASA, Berkeley Earth, the U.K. Met Office, and the National Oceanic and Atmospheric Administration. The annual update of global surface temperatures—an average of readings from thousands of weather stations and ocean probes—shows 2020 essentially tied records set in 2016. But the years were nothing alike. Temperatures in 2016 were boosted by a strong El Niño, a weather pattern that warms the globe by blocking the rise of cold deep waters in the eastern Pacific Ocean. Last year, however, the Pacific entered La Niña, which has a cooling effect. That La Niña didn't provide more relief is an unwelcome surprise, says Nerilie Abram, a climate scientist at Australian National University. “It makes me worried about how quickly the global warming trend is growing.” The past six years are the six warmest on record, but the warming of the atmosphere is unsteady because of its chaotic nature. The ocean, which absorbs more than 90% of the heat from global warming, displays a steadier trend, and here, too, 2020 was a record year. The upper levels of the ocean contained 20 zettajoules (10²¹ joules) more heat than in 2019, and the rise was double the typical annual increase, scientists reported last week in Advances in Atmospheric Sciences. The subtropical Atlantic Ocean was particularly hot, fueling a record outbreak of hurricanes, says Lijing Cheng, a climate scientist at the Chinese Academy of Sciences' Institute of Atmospheric Physics, who led the work. This heat, monitored down to 2000 meters by a fleet of 4000 robotic probes, is spreading deeper into the ocean while also migrating toward the poles.
An extreme heat wave struck the northern Pacific, killing marine life. For the first time, warm Atlantic waters were seen penetrating into the Arctic Ocean, melting sea ice from below and reducing its extent nearly to a record low (Science, 28 August 2020, p. 1043). The warming ocean and melting ice sheets are raising sea levels by 4.8 millimeters per year, and the rate is accelerating (Science, 20 November 2020, p. 901). On land, 2020 was even more relentless, with temperatures rising 1.96°C above preindustrial levels, a clear record, Berkeley Earth reported. It was the warmest year ever in Asia and Europe and tied for the warmest in South America. Russia was particularly hot, breaking its previous record by 1.2°C, while swaths of Siberia were 7°C warmer than in preindustrial times, leading to large-scale fires and thawing permafrost that caused buildings to founder and set off oil spills (Science, 7 August 2020, p. 612). “Siberia was crazy,” says Zeke Hausfather, a climate scientist at the Breakthrough Institute and co-author of the Berkeley Earth analysis. “That heat would effectively be impossible without the warming we've seen.” In Australia, record-setting heat and drought fueled catastrophic bushfires at the start of 2020. Fires torched nearly one-quarter of southeastern Australia's forests and destroyed 3000 homes. Climate change was to blame for the country's “Black Summer,” Abram and co-authors concluded in a study published this month in Communications Earth & Environment. Meanwhile, in the United States, unprecedented heat came to the desert Southwest, which is already warming faster than the rest of the country. Phoenix wilted under its hottest summer ever, averaging 36°C. Arizona's Maricopa County, home to Phoenix, is a leader in addressing heat exposure, yet its heat deaths have hit a new record each year since 2016.
In 2020, the number approached 300, a jump of some 50% over the previous year, says David Hondula, a climatologist who studies heat mortality at Arizona State University, Tempe. “It was just off the charts in terms of heat.”

[Figure: Turning up the heat. Graphic: N. Desai/Science; data: Met Office, NASA, Berkeley Earth, NOAA]

Although the global economic slowdown of the COVID-19 pandemic cut carbon dioxide (CO2) emissions by some 7%, atmospheric CO2 is long-lived, and warming from previous emissions is preordained. In any case, the drop in emissions is unlikely to last. Later this year, in May, before photosynthesis in the Northern Hemisphere draws down CO2, the U.K. Met Office predicts that levels of atmospheric CO2 will pass 417 parts per million for several weeks, 50% higher than preindustrial levels. Only dramatic action by the world's countries, far beyond existing efforts, can begin to halt this buildup, Cheng says. Should the current rate of warming continue, the world will breach the targets set in the Paris climate agreement—limiting warming to 1.5°C or 2°C—by 2035 and 2065, respectively. But Hausfather says it's quite possible that warming, which has largely held steady for the past few decades at 0.19°C per decade, will actually speed up. The rate of warming over the past 14 years is well above the long-term trend. The debate now, he says, is whether that is an omen of an even darker future.
The Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) taskforce on AI & COVID-19 supported the creation of a research group focused on AI-assisted diagnosis of COVID-19 pneumonia. The first results demonstrate the great potential of AI-assisted diagnostic imaging. The impact of the taskforce's work reaches further still, embracing the cross-fertilisation of artificial intelligence (AI) and high-performance computing (HPC): a partnership with enormous potential for many scientific domains. Through several initiatives aimed at improving knowledge of COVID-19, containing its diffusion, and limiting its effects, CLAIRE's COVID-19 taskforce was able to organise 150 volunteer scientists, divided into seven groups covering different aspects of how AI could be used to tackle the pandemic. Emanuela Girardi, the co-coordinator of the CLAIRE taskforce on AI & COVID-19, supported the setup of a novel European group to study the diagnosis of COVID-19 pneumonia assisted by artificial intelligence.
As a new mandate takes effect, researchers and institutions grapple with the trade-offs of making scientific publications free for all.

In 2018, a group of mostly European funders sent shock waves through the world of scientific publishing by proposing an unprecedented rule: The scientists they funded would be required to make journal articles developed with their support immediately free to read when published. The new requirement, which takes effect starting this month, seeks to upend decades of tradition in scientific publishing, whereby scientists publish their research in journals for free and publishers make money by charging universities and other institutions for subscriptions. Advocates of the new scheme, called Plan S (the “S” stands for the intended “shock” to the status quo), hope to destroy subscription paywalls and speed scientific progress by allowing findings to be shared more freely. It's part of a larger shift in scientific communication that began more than 20 years ago and has recently picked up steam. Scientists have several ways to comply with Plan S, including by paying publishers a fee to make an article freely available on a journal website, or depositing the article in a free public repository where anyone can download it. The mandate is the first by an international coalition of funders, which now includes 17 agencies and six foundations, including the Wellcome Trust and Howard Hughes Medical Institute, two of the world's largest funders of biomedical research. The group, which calls itself Coalition S, has fallen short of its initial aspiration to catalyze a truly international movement, however. Officials in three top producers of scientific papers—China, India, and the United States—have expressed general support for open access, but have not signed on to Plan S.
Its mandate for immediate open access will apply to authors who produced only about 6% of the world's papers in 2017, according to an estimate by the Clarivate analytics firm, publisher of the Web of Science database. Still, there's reason to think Coalition S will make an outsize impact, says Johan Rooryck, Coalition S's executive director and a linguist at Leiden University. In 2017, 35% of papers published in Nature and 31% of those in Science cited at least one coalition member as a funding source. “The people who get [Coalition S] funding are very prominent scientists who put out very visible papers,” Rooryck says. “We punch above our weight.” In a dramatic sign of that influence, the Nature and Cell Press families of journals—stables of high-profile publications—announced in recent weeks that they would allow authors to publish papers outside their paywall, for hefty fees. Other recent developments point to growing support for open access. In 2017, for the first time, the majority of new papers across all scholarly disciplines, most of them in the sciences, were published open access, according to the Curtin Open Knowledge Initiative. More recently, most major publishers removed paywalls from articles about COVID-19 last year in an attempt to speed development of vaccines and treatments. Despite these and other signs of momentum, some publishing specialists say Plan S and other open-access measures could be financially stressful and ultimately unsustainable for publishers and the research institutions and authors who foot the bill. As debate continues about just how far and fast the movement will go, Science offers this guide for authors readying to plunge in. Authors who make their work open access may reap benefits, but their magnitude depends partly on what you measure. One yardstick is a paper's impact. Some studies have reported up to triple the number of citations for open-access articles on average compared with paywalled ones. 
But authors may be likely to publish their best work open access, which might bring it more citations. A recent analysis that used statistical methods to control for this tendency found a far more modest citation advantage for open access—8%—and only for a minority of “superstar” papers. Mark McCabe of SKEMA Business School and Christopher Snyder of Dartmouth College studied how citations to articles changed when their journal volumes moved from behind paywalls to entirely open access, and compared them with citations for articles that remained paywalled. For each article in their sample of more than 200,000 papers in ecology and other fields, the researchers accounted for other characteristics that affect citations, such as a paper's age: Newly published papers usually receive a burst of citations at first but fewer later. The modest citation advantage from open access accrued only to high-quality papers, defined as having already garnered 11 or more citations during a 2-year period before the paper became open access, McCabe and Snyder reported in November 2020. Other studies have found that open-access articles have a larger reach by other measures, including the number of downloads and online views. They also have an edge in Altmetric scores, a composite of an article's mentions on social media and in news stories and policy documents. These nonscholarly mentions buttress reports that open access enables a broader audience, beyond the core scientific community, to read research findings. In November 2020, Springer Nature and partners released findings from a survey of 6000 visitors to its websites. They reported that an “astonishing” 28% were general users, including patients, teachers, and lawyers. Another 15% worked in industry or medical jobs that required them to read but not publish research. 
Even for faculty members who can read subscription-based journals through their institution's libraries, open access could allow quicker access to articles in journals to which the institution doesn't subscribe. Some 57% of academics surveyed said they “almost always” or “frequently” had trouble accessing the full content of Springer Nature's articles. Open access comes in different varieties, or colors, each with its own costs and benefits. In what's called gold open access, articles carry a license making them freely available on publication. Typically the publisher charges a fee to offset lost subscription revenue and cover the cost of publishing. In recent years, the median fee paid, after discounts, was about $2600, according to a 2020 study by Nina Schönfelder of Bielefeld University. More selective journals, such as The Lancet Global Health, have charged up to $5000. The Nature Research family of journals has set its top open-access fee at €9500 (about $11,600), and Cell Press will charge $9900 for its flagship, Cell. Some journals are entirely gold open access; other, “hybrid” journals offer authors a choice between free publication behind a paywall or open access for a fee. A growing number of universities and research institutions, especially in Europe, are striking deals in which they pay a publisher a single fee that covers open-access publishing by their authors and also lets people on their campuses read content that remains behind paywalls. The largest such agreement was reached in 2019 between Springer Nature and 700 German research institutions and libraries. Since the first such deal in 2015, the number grew to 137 in 2020, according to the ESAC Transformative Agreement Registry. However, the deals last year covered publication fees for only 3% of papers produced globally. A variant called green open access allows authors to avoid publication fees.
In this arrangement, authors publish in journals—even ones that use paywalls instead of charging authors—but also make their article freely available in an online repository. U.S. policy already requires the final, published versions of papers developed with federal funding to be deposited within 12 months in a repository such as the National Institutes of Health's PubMed Central, and many publishers do this automatically. Other authors can use online tools to find repositories. The Directory of Open Access Repositories lists more than 5500 of them. Publishers typically impose a 6- or 12-month embargo before authors can deposit the final, peer-reviewed version of a paywalled article, but this runs afoul of the Plan S requirement for immediate open access. (The embargo policies of thousands of journals globally are listed in a database called Sherpa/Romeo.) As a compromise, many publishers including the Science family of journals allow authors to immediately post a nearly final, peer-reviewed version of a paper in an institutional repository. Plan S accepts this form of green open access, but has added a controversial provision that these accepted manuscripts be licensed for free distribution. Some publishers have complained that this approach threatens their subscription revenues because it could widen free reading of these articles. Rooryck says Coalition S canvassed major publishers and found none was planning to routinely reject submitted manuscripts funded by Coalition S members because of the prospect that the authors would immediately post them when accepted. A spokesperson for publishing giant Elsevier told Science that all its journals will offer authors funded by Coalition S members the option to publish open access for a fee, allowing authors to comply with Plan S without violating embargoes. Where a researcher works strongly influences how much money is available for open-access fees. 
In Europe, institutions used dedicated internal funds to pay fees for 50% of articles their authors published in hybrid journals (those that publish both open-access and subscription content), but in the rest of the world, the figure was only 25%, according to a 2020 survey of authors by Springer Nature. Authors also tap funders and other sources, including their own personal funds. European scholars reported paying out of their own wallets for just 1% of the articles, compared with 16% in other countries. In Italy, the Nature group's new €9500 open-access fee has riled some researchers. That figure is “insane, there's no way on Earth to justify that,” says Manlio De Domenico, who leads a network science lab at the Bruno Kessler Foundation. The annual research budget for his 10-person lab recently included a total of €8000 for open-access fees for the entire year. “We can spend the money better another way,” he says—to pay Ph.D. students and, in normal times, fund travel to conferences and other labs. “To me, the trade-off is clear.” (The Nature group says the price reflects its costs to produce such highly selective journals; journals don't normally collect fees for papers they review but don't publish.) Nor do open-access publication fees hew closely to the laws of demand. One would expect fees to increase with the prestige of the journal, but a recent study by Schönfelder suggests that's not always true. She examined the relationship between fees paid by U.K. funders and the impact factor—a measure based on the average number of citations per article—of the journals where the papers appeared. She found a strong correlation in journals that published only open-access articles but a weaker one in hybrid journals. Hybrid journals tended to cost more than purely open-access journals, too. 
In a paper published last year, Schönfelder suggested her findings reflect the legacy of the subscription prices of large, traditional publishers such as Elsevier and Springer Nature, which publish many hybrid journals. These highly profitable companies with large shares of the publishing market have operated with limited competitive pressure. “If [their] pricing behavior wins through, the open-access transformation will come at a much higher cost than expected today,” Schönfelder wrote. A complete shift to open access could lead publishers to boost publishing fees even further, to try to make up for lost subscription revenues, says Claudio Aspesi, a publishing industry consultant based in Switzerland. Although just over 30% of all papers published in 2019 were paid open access, subscriptions still accounted for more than 90% of publishers' revenues that year, according to Delta Think, a consulting and marketing firm. Coalition S seeks to exert downward pressure on prices by increasing transparency. When a grantee's research is published, Plan S requires publishers to disclose to funders the basis for their prices, including the cost of services such as proofreading, copy editing, and organizing peer review. Rooryck says the coalition will share the information with authors and libraries, many of which help fund publishing fees. He expects the practice will increase price competition or provide “at a minimum, confidence that some of these prices are fair.” Despite wide acknowledgment by scientists, publishers, librarians, and policymakers of open access' potential benefits, many are reluctant to go all in. Even in Europe, where the movement for open access has been especially strong, Plan S is unusual. Of 60 funders surveyed in 2019, only 37 had an open-access policy, and only 23 monitored compliance, according to a report prepared for SPARC Europe, a nonprofit that advocates for open access. Some authors remain hesitant, too. 
In multiple surveys, authors have ranked open-access publishing below their need to publish in prestigious, high-impact journals to gain tenure and promotion. And they may be wary of a perception among some scientists that journals that carry only gold open-access articles lack rigor. (That view, researchers say, may reflect that such journals are relatively new, which lowers their impact factor.) A recent study also hints at inequities, finding that established, funded researchers at prestigious institutions are more likely to pay to publish their work open access. Anthony Olejniczak and Molly Wilson of the Academic Analytics Research Center, part of a data firm in Columbus, Ohio, examined the demographics and publishing patterns of more than 180,000 U.S. scholars. Overall, 84% of biological scientists and 66% in the physical and mathematical sciences had authored or co-authored at least one gold open-access paper between 2014 and 2018. Those authors were more likely to have advanced faculty rank and federal grants and to work at one of the 65 leading research universities that belong to the Association of American Universities, Olejniczak and Wilson report in an upcoming paper in Quantitative Science Studies. Olejniczak and Wilson hypothesize that scientists who choose to pay for open access not only need financial resources, but also the sense of job security that tenure confers. “This is a good news, bad news story,” Olejniczak says. “Open access is thriving, and it's growing.” But, he adds, publishers collecting the fees should consider ways to accommodate a wider diversity of authors.

[Figure: The many colors of open access. Graphic: V. Altounian/Science]

One tenet of the open-access movement has been that publishing fees can be funded by redirecting money university libraries currently spend on journal subscriptions—but that assumption faces questions.
Although the “transformative” agreements that cover both reading and publishing of articles have rapidly increased the percentage of articles published open access at some institutions, the details of these deals (like traditional, subscription-only ones) are often secret and have other features that make it difficult to compare bottom-line costs. Comparing costs across institutions is also challenging because these deals usually involve large packages of journals, with the exact lineup varying by institution. Still, it is clear that making most articles gold open access could wallop the library budgets of research-intensive universities whose scientists publish the most papers. Many institutions that publish little research would save money by dropping subscriptions and letting faculty members read articles for free, analysts say, and publishers would look to recoup the lost revenue through publishing fees. Pay It Forward, a report published by librarians at the University of California (UC) and colleagues in 2016, remains one of the most comprehensive analyses of the impact of these shifts on universities. They calculated what each of UC's 10 campuses and three comparison institutions would have paid to publish as gold open access all articles published between 2009 and 2013 that listed one of their faculty members as a corresponding author. A key finding: At most of the research-intensive institutions studied—such as the UC campuses in Los Angeles and San Francisco and Harvard University—simply redirecting funds from journal subscriptions wouldn't cover the open-access fees. Those institutions could charge the difference to federal grants, but they would still have to cover fees on papers from studies done without grant funding. Harvard, for example, might have to boost its total library spending by 71%, or nearly $6 million.
Rich universities like Harvard could potentially tap their huge endowments and copious research funding to cover these costs, but other universities could struggle. U.S. university library budgets have lagged the rate of inflation in higher education for years and now face cuts because of the coronavirus pandemic. Some researchers interviewed for UC's study said they were reluctant to spend grant money on open-access publishing fees because they would eat into funds for research. “But in practice, we found [faculty members] are independently spending millions of dollars” from grants on fees, says MacKenzie Smith, university librarian at UC Davis and one of the study's co-authors. UC is conducting an experiment that limits the universities' contribution to per-article publication fees in order to encourage faculty members to consider other funding sources and journals with lower fees. “We want to get authors more engaged in the cost aspect of publishing, or at least mindful of it,” Smith says. If paying for open-access publication becomes the default route for scientists, and publishers hike prices as expected, many analysts worry publishing will become a luxury that only better funded researchers can afford. That could create a self-reinforcing cycle in which well-funded researchers publish more, potentially attracting more attention—and more funding. If that comes to pass, it could be especially hard on early-career researchers and authors in the developing world who lack their own grants, and on those in disciplines that traditionally receive less funding, such as math. Although publishers offer fee waivers to some authors, the waivers often do not cover the entire publishing fee, and publishers do not always disclose what percentage of requests they grant.
Small, nonprofit societies that currently depend on subscription fees from their journals could also lose out in an open-access world, because the dynamics of the pay-to-publish model tend to favor publishers and journals that produce a high volume of articles, which affords economies of scale. “I am worried that in the zeal to go that last mile” to make a larger portion of articles open access, “we could end up really hurting the scientific enterprise,” says Sudip Parikh, CEO of AAAS, which publishes the Science family of journals. One of them, Science Advances, charges an open-access fee of $4500, whereas the rest operate on the traditional subscription-only model. Parikh says AAAS is considering other options to make papers free to read, but he wasn't ready to discuss them when Science went to press. “I don't pretend to know the answer yet,” he says. “But it feels like there are other possibilities” besides publishing fees. One model for sustaining open access without relying on per-article publishing fees comes from Latin America. Brazil and other countries have funded the creation of free open-access journals and article repositories, and the region in 2019 had the world's highest percentage of scholarly articles available open access, 61%, according to the Curtin Open Knowledge Initiative. Debate continues about how to control publishing costs. Many advocates for open access say making it more affordable will require a vast shift in the culture of science. In particular, tenure and promotion committees will need to lower their expectations that authors publish in prestigious, costly journals. But some argue that even if funders and institutions must cough up more money to help authors publish open access, the potential to accelerate scientific discovery would justify the added cost.
The journal publishing industry's annual revenues of about $10 billion represent less than 1% of total global spending on R&D—and, in this view, it's reasonable to divert more of the total to scholarly communications that are essential to making the entire enterprise run. It's unlikely, though, that all scientific articles will ever become open access, says Rick Anderson, university librarian at Brigham Young University, who has written extensively about business models for journal publishing. “It just seems to me like the barriers to universal open access are too great,” he says. “Every open-access model solves some problems and creates other problems.” “What I think is much more likely in the future, almost inevitable, is a fairly diverse landscape of open-access and subscription models,” Anderson adds. “I haven't yet seen anything that has convinced me that toll [subscription-based] access is going to go away entirely.”
Couples who meet through smartphone dating apps are more motivated to move in together and have children, according to a new study. Researchers found that online daters have stronger long-term relationship goals than peers who hook up in more traditional ways, such as at the office or pub. Tinder and rivals such as Bumble, Match and Plenty of Fish have been criticised for fuelling casual sex. But, contrary to popular belief, spreading the net wider increases the chances of settling down with 'Mr or Mrs Right', according to psychologists. An analysis of more than 3,000 over-18s in Switzerland showed couples who met on an app were more motivated by the idea of cohabiting.
Background: Misinformation spread through social media is a growing problem, and the emergence of COVID-19 has caused an explosion in new activity and renewed focus on the resulting threat to public health. Given this increased visibility, in-depth analysis of COVID-19 misinformation spread is critical to understanding the evolution of ideas with potential negative public health impact. Methods: Using a curated data set of COVID-19 tweets (N ~120 million tweets) spanning late January to early May 2020, we applied methods including regular expression filtering, supervised machine learning, sentiment analysis, geospatial analysis, and dynamic topic modeling to trace the spread of misinformation and to characterize novel features of COVID-19 conspiracy theories. Results: Random forest models for four major misinformation topics provided mixed results, with narrowly defined conspiracy theories achieving F1 scores of 0.804 and 0.857, while broader theories performed measurably worse, with scores of 0.654 and 0.347. Despite this, analysis using model-labeled data was beneficial for increasing the proportion of data matching misinformation indicators. We were able to identify distinct increases in negative sentiment, theory-specific trends in geospatial spread, and the evolution of conspiracy theory topics and subtopics over time. Conclusions: COVID-19 related conspiracy theories show that history frequently repeats itself, with the same conspiracy theories being recycled for new situations. We use a combination of supervised learning, unsupervised learning, and natural language processing techniques to trace the evolution of theories over the first four months of the COVID-19 outbreak, to examine how these theories intertwine, and to hypothesize about more effective public health messaging to combat misinformation in online spaces.
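The F1 scores reported above are the harmonic mean of precision and recall, the standard summary for imbalanced classification tasks like misinformation detection. A minimal sketch of the metric in plain Python; the four tweet labels are invented for illustration, not taken from the study's data:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels (1 = misinformation)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: 4 tweets, 2 truly misinformation, classifier flags 3.
# Precision = 2/3, recall = 1, so F1 = 0.8.
print(f1_score([1, 1, 0, 0], [1, 1, 1, 0]))
```

Because precision and recall are combined harmonically, a classifier cannot score well by over- or under-flagging, which is why the broadly defined theories (F1 = 0.347) are much harder than the narrow ones.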
The COVID-19 pandemic has caused international social tension and unrest. Beyond the crisis itself, there are growing signs of rising conflict potential in societies around the world. Indicators of global mood changes are hard to detect, and direct questionnaires suffer from social desirability biases. However, so-called implicit methods can reveal humans' intrinsic desires from, for example, social media texts. We present psychologically validated social unrest predictors and replicate scalable and automated predictions, setting a new state of the art on a recent German shared-task dataset. We employ this model to investigate a change of language towards social unrest during the COVID-19 pandemic by comparing established psychological predictors on samples of tweets from spring 2019 and spring 2020. The results show a significant increase in the conflict-indicating psychometrics. With this work, we demonstrate the applicability of automated NLP-based approaches to quantitative psychological research.
Respiratory diseases kill millions of people each year. Diagnosis of these pathologies is a manual, time-consuming process subject to inter- and intra-observer variability, delaying diagnosis and treatment. The recent COVID-19 pandemic has demonstrated the need to develop systems that automate the diagnosis of pneumonia, and Convolutional Neural Networks (CNNs) have proved to be an excellent option for the automatic classification of medical images. However, given the need for confident classification in this context, it is crucial to quantify the reliability of the model's predictions. In this work, we propose a multi-level ensemble classification system based on a Bayesian Deep Learning approach in order to maximize performance while quantifying the uncertainty of each classification decision. This tool combines the information extracted from different architectures by weighting their results according to the uncertainty of their predictions. Performance of the Bayesian network is evaluated in a real scenario, simultaneously differentiating among four pathologies: control vs bacterial pneumonia vs viral pneumonia vs COVID-19 pneumonia. A three-level decision tree is employed to divide the 4-class classification into three binary classifications, yielding an accuracy of 98.06% and surpassing the results reported in recent literature. The reduced preprocessing needed to obtain this high performance, together with the information provided about the reliability of the predictions, supports the applicability of the system as an aid for clinicians.
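The abstract does not spell out the exact weighting rule, but a common way to "weight results according to uncertainty" is to score each model by the entropy of its predictive distribution and down-weight the uncertain ones. A minimal sketch of that idea under this assumption; the two example probability vectors are invented, and real systems would obtain them from, e.g., Monte Carlo dropout passes of a Bayesian CNN:

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector (higher = more uncertain)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def uncertainty_weighted_vote(model_probs):
    """Combine per-model class probabilities, weighting each model by the
    inverse of its predictive entropy (a small epsilon avoids division by zero)."""
    eps = 1e-6
    weights = [1.0 / (entropy(p) + eps) for p in model_probs]
    total = sum(weights)
    n_classes = len(model_probs[0])
    return [sum(w * p[c] for w, p in zip(weights, model_probs)) / total
            for c in range(n_classes)]

# Two hypothetical models for a binary decision (e.g. viral vs COVID-19 pneumonia):
confident = [0.95, 0.05]   # low entropy, gets a large weight
uncertain = [0.55, 0.45]   # high entropy, gets a small weight
combined = uncertainty_weighted_vote([confident, uncertain])
print(combined)  # dominated by the confident model
```

The combined vector is still a valid probability distribution, so it can be passed up the three-level decision tree the abstract describes, with its own entropy serving as the reliability estimate for the final decision.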
COVID-19 was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. With half of the world's countries in lockdown as of April due to the pandemic, monitoring how the spread of the virus and infection rates relate to behavioural and societal parameters is crucial for effective policy making. This paper investigates the effectiveness of masks, social distancing, lockdown, and self-isolation for reducing the spread of SARS-CoV-2 infections. Our findings, based on agent-based simulation modelling, show that while enforcing a lockdown is widely believed to be the most efficient way to quickly reduce infection numbers, the practice of social distancing and the use of surgical masks can potentially be more effective. Our multivariate analysis of the simulation results using the Morris elementary effects method suggests that if a sufficient proportion of the population wore surgical masks and followed social distancing regulations, SARS-CoV-2 infections could be controlled without enforcing a lockdown.
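The mask effect in an agent-based model of this kind can be sketched with a toy well-mixed simulation step: each infected agent meets random contacts, and transmission probability is scaled down once per masked party. All parameter values and the `step` function below are illustrative assumptions, not the paper's calibrated model (which also includes distancing, isolation, and lockdown mechanics).

```python
import random

def step(states, p_transmit, mask_rate, mask_eff, contacts=5):
    """One synchronous time step of a toy well-mixed agent model.
    states: list of 'S' (susceptible) or 'I' (infected).
    Each masked party scales transmission by (1 - mask_eff)."""
    n = len(states)
    new = list(states)
    for s_i in states:
        if s_i != 'I':
            continue
        for _ in range(contacts):
            j = random.randrange(n)          # random contact
            if states[j] != 'S':
                continue
            p = p_transmit
            if random.random() < mask_rate:  # infector wears a mask
                p *= (1 - mask_eff)
            if random.random() < mask_rate:  # contact wears a mask
                p *= (1 - mask_eff)
            if random.random() < p:
                new[j] = 'I'
    return new
```

Running the same seed with `mask_rate=0.0` versus `mask_rate=1.0` shows the qualitative effect the paper quantifies: universal masking sharply slows the growth in infections.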
In the case of clustered data, an artificial neural network (ANN) with the logcosh loss function learns the bigger cluster rather than the mean of the two. Moreover, when used for regression of a set-valued function, the ANN learns a value close to one of the choices; in other words, it learns one branch of the set-valued function with high accuracy. This work proposes a method that uses ANNs with logcosh loss to find the branches of set-valued mappings in parameter-outcome sample sets and classifies the samples according to those branches. The method not only classifies the data by branch but also provides an accurate prediction for the majority cluster, effectively classifying the data based on an invisible feature. A neural network was successfully trained to predict the total number of cases, the logarithmic total number of cases, deaths, active cases, and other relevant coronavirus data for each German district from a number of input variables. As it has been speculated that the tuberculosis vaccine provides protection against the virus, and since East Germany was vaccinated before reunification, an attempt was made to classify the Eastern and Western German districts by treating the vaccine information as an invisible feature.
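The opening claim, that minimizing logcosh over clustered targets lands near the majority cluster rather than the mean, can be checked directly without training a network: for a constant predictor, minimize the loss by grid search. The setup below (an 80/20 split between clusters at 0 and 10) is an illustrative assumption, not data from the paper.

```python
import numpy as np

def logcosh_loss(pred, targets):
    # log(cosh(x)) ~ x^2/2 for small residuals and |x| - log 2 for large ones,
    # so it behaves like MSE near zero and like MAE for outliers.
    return np.mean(np.log(np.cosh(pred - targets)))

# Two clusters: majority (80 samples) at 0.0, minority (20 samples) at 10.0.
targets = np.concatenate([np.zeros(80), np.full(20, 10.0)])

# Best constant prediction under logcosh, found by grid search.
grid = np.linspace(-2.0, 12.0, 2801)
best = grid[np.argmin([logcosh_loss(g, targets) for g in grid])]
```

The minimizer lands near 0.26, close to the majority cluster, while the mean of the targets is 2.0, illustrating why a logcosh-trained ANN tracks one branch of a set-valued mapping instead of averaging the branches.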
The recent outbreak of COVID-19 has affected millions of individuals around the world and has posed a significant challenge to global healthcare. From the early days of the pandemic, it became clear that it is highly contagious and that human mobility contributes significantly to its spread. In this paper, we study the impact of population movement on the spread of COVID-19, and we capitalize on recent advances in the field of representation learning on graphs to capture the underlying dynamics. Specifically, we create a graph where nodes correspond to a country's regions and the edge weights denote human mobility from one region to another. Then, we employ graph neural networks to predict the number of future cases, encoding the underlying diffusion patterns that govern the spread into our learning model. Furthermore, to account for the limited amount of training data, we capitalize on the pandemic's asynchronous outbreaks across countries and use a model-agnostic meta-learning based method to transfer knowledge from one country's model to another's. We compare the proposed approach against simple baselines and more traditional forecasting techniques in three European countries. Experimental results demonstrate the superiority of our method, highlighting the usefulness of GNNs in epidemiological prediction. Transfer learning provides the best model, highlighting its potential to improve the accuracy of predictions in the case of secondary waves, if data from past or parallel outbreaks are utilized.
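The basic building block of such a model, a message-passing layer over a mobility-weighted region graph, can be sketched in a few lines of NumPy. The 4-region adjacency matrix, feature dimensions, and random weights below are all illustrative assumptions; they show one graph-convolution step (random-walk-normalized aggregation followed by a linear map and ReLU), not the paper's full architecture or its meta-learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mobility graph: 4 regions, edge weights = trips between regions.
A = np.array([[0., 50., 10.,  0.],
              [50., 0., 30.,  5.],
              [10., 30., 0., 20.],
              [ 0.,  5., 20., 0.]])
A_hat = A + np.eye(4)            # add self-loops so each region keeps its own signal
d = A_hat.sum(axis=1)
A_norm = A_hat / d[:, None]      # row-normalize (random-walk normalization)

X = rng.random((4, 3))           # per-region features, e.g. recent case counts
W = rng.random((3, 8))           # layer weights (random here; learned in practice)

# One message-passing layer: aggregate neighbours weighted by mobility, then ReLU.
H = np.maximum(A_norm @ X @ W, 0.0)
```

Stacking such layers and adding a regression head over `H` yields a per-region case forecaster of the general family the abstract describes.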