Coronavirus


Prompt Stability Scoring for Text Annotation with Large Language Models

Barrie, Christopher, Palaiologou, Elli, Törnberg, Petter

arXiv.org Artificial Intelligence

Researchers are increasingly using language models (LMs) for text annotation. These approaches rely only on a prompt telling the model to return a given output according to a set of instructions. The reproducibility of LM outputs may nonetheless be vulnerable to small changes in prompt design, which calls into question the replicability of classification routines. To tackle this problem, researchers have typically tested a variety of semantically similar prompts to determine what we call "prompt stability," but these approaches remain ad hoc and task-specific. In this article, we propose a general framework for diagnosing prompt stability by adapting traditional approaches to intra- and inter-coder reliability scoring. We call the resulting metric the Prompt Stability Score (PSS) and provide a Python package, PromptStability, for its estimation. Using six different datasets and twelve outcomes, we classify >150k rows of data to: a) diagnose when prompt stability is low; and b) demonstrate the functionality of the package. We conclude by providing best-practice recommendations for applied researchers.
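The core idea, scoring agreement between annotations produced by semantically similar prompts, can be sketched in a few lines. The paper's PSS adapts intra- and inter-coder reliability measures such as Krippendorff's alpha; the sketch below substitutes a simpler average pairwise percent agreement, and the prompt variants and labels are invented for illustration, not taken from the paper:

```python
from itertools import combinations

def prompt_stability_score(annotations):
    """Average pairwise percent agreement across prompt variants.

    `annotations` maps each prompt variant to the labels it produced for
    the same rows. A simplified stand-in for the alpha-based PSS.
    """
    variants = list(annotations.values())
    n_rows = len(variants[0])
    assert all(len(v) == n_rows for v in variants), "variants must label the same rows"
    agreements = [
        sum(x == y for x, y in zip(a, b)) / n_rows
        for a, b in combinations(variants, 2)
    ]
    return sum(agreements) / len(agreements)

# Hypothetical paraphrases of one sentiment-classification prompt.
labels = {
    "Classify the sentiment of this text as positive or negative.": ["pos", "neg", "pos", "pos"],
    "Is the sentiment of the following text positive or negative?": ["pos", "neg", "neg", "pos"],
    "Label this text's sentiment: positive/negative.": ["pos", "neg", "pos", "pos"],
}
print(round(prompt_stability_score(labels), 3))  # → 0.833
```

A score near 1.0 means paraphrased prompts yield nearly identical annotations; a low score flags exactly the replicability problem the abstract describes.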


Topic Modelling of Swedish Newspaper Articles about Coronavirus: a Case Study using Latent Dirichlet Allocation Method

Griciūtė, Bernadeta, Han, Lifeng, Nenadic, Goran

arXiv.org Artificial Intelligence

Topic Modelling (TM) is a research branch of natural language understanding (NLU) and natural language processing (NLP) that facilitates insightful analysis of large documents and datasets, such as summarisation of main topics and of topic changes. This kind of discovery is becoming more popular in real-life applications due to its impact on big data analytics. In this study, situated in the social-media and healthcare domains, we apply the popular Latent Dirichlet Allocation (LDA) method to model topic changes in Swedish newspaper articles about Coronavirus. We describe the corpus we created, comprising 6515 articles, the methods applied, and statistics on topic changes over a period of approximately 14 months, from 17 January 2020 to 13 March 2021. We hope this work can serve as a grounding for applications of topic modelling and can inspire similar case studies in an era of pandemics, to support socio-economic impact research as well as clinical and healthcare analytics. Our data and source code are openly available at https://github.com/poethan/Swed_Covid_TM
Keywords: Latent Dirichlet Allocation (LDA); Topic Modelling; Coronavirus; Pandemics; Natural Language Understanding; BERT-topic
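For readers unfamiliar with how LDA works, a minimal collapsed Gibbs sampler conveys the mechanics: every token gets a topic assignment, and assignments are repeatedly resampled from the full conditional until document-topic and topic-word counts stabilise. This is a toy sketch on an invented four-document corpus, not the authors' pipeline (which runs on 6515 Swedish articles with a production LDA implementation):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})          # vocabulary size
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]  # token topics
    ndk = [[0] * n_topics for _ in docs]            # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                             # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # remove token, resample its topic, add it back
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + beta * V)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Invented mini-corpus: two "vaccine" documents, two "school" documents.
docs = [
    "vaccine dose vaccine trial".split(),
    "dose vaccine trial vaccine".split(),
    "school closure school remote".split(),
    "closure school remote school".split(),
]
ndk, nkw = lda_gibbs(docs, n_topics=2)
print([max(range(2), key=lambda t: ndk[d][t]) for d in range(len(docs))])
```

Tracking how each document's dominant topic shifts over time, as the paper does month by month, is then a matter of aggregating these per-document topic counts by publication date.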


A Novel Implementation of Machine Learning for the Efficient, Explainable Diagnosis of COVID-19 from Chest CT

Liu, Justin

arXiv.org Artificial Intelligence

In a worldwide health crisis as exigent as COVID-19, there is a pressing need for rapid, reliable diagnostics. Currently, popular testing methods such as reverse transcription polymerase chain reaction (RT-PCR) can have high false negative rates. Consequently, COVID-19 patients are neither accurately identified nor treated quickly enough to prevent transmission of the virus. However, the recent rise of medical CT data has presented promising avenues, since CT manifestations contain key characteristics indicative of COVID-19. This study aimed to take a novel approach to the machine learning-based detection of COVID-19 from chest CT scans. First, the dataset utilized in this study was derived from three major sources, comprising a total of 17,698 chest CT slices across 923 patient cases. Image preprocessing algorithms were then developed to reduce noise by excluding irrelevant features. Transfer learning was also implemented with the EfficientNetB7 pre-trained model to provide a backbone architecture and save computational resources. Lastly, several explainability techniques were leveraged to qualitatively validate model performance by localizing infected regions and highlighting fine-grained pixel details. The proposed model attained an overall accuracy of 0.927 and a sensitivity of 0.958. Explainability measures showed that the model correctly distinguished between relevant, critical features pertaining to COVID-19 chest CT images and normal controls. Deep learning frameworks provide efficient, human-interpretable COVID-19 diagnostics that could complement radiologist decisions or serve as an alternative screening tool. Future endeavors may provide insight into infection severity, patient risk stratification, and prognosis.


Top 5 uses of AI to combat Covid-19

#artificialintelligence

Artificial Intelligence tools and applications have skillfully managed the analysis, diagnosis, tracing, and tracking of the pandemic in ways unthinkable with manpower alone. The greatest dilemma with this pandemic was that, at its outset, no one knew what it was or how it would behave. To make matters worse, Covid-19 has been rapidly mutating since its start, and researchers around the world still aren't quite prepared to deal with such a rapidly mutating virus that has claimed hundreds of thousands of lives and has essentially changed the course of history forever. This is where AI's prowess comes into play. With deep learning and the combined efforts of researchers from all around the world, Artificial Intelligence has helped us combat the pandemic in unimaginable ways. The foremost task of AI was to collect as much data as possible about the Coronavirus.


How Artificial Intelligence Is Preventing the Spread of Infectious Diseases

#artificialintelligence

Artificial intelligence can manage huge volumes of data and turn them into usable patterns for human interpretation and decision-making. It can process data across many domains, a task that is extremely tedious and time-consuming for people. This capacity of artificial intelligence to assimilate data, then digest and analyse it to anticipate future pandemics and disease spread, is fundamental. Technological change is shaped and structured by societal norms and relations, which are in turn influenced by technological changes. A wealth of new technologies is opening up, not only for rapid molecular identification of pathogens but also for more precise surveillance of infectious diseases.


Artificial Intelligence-Powered Research To Predict Next SARS-Like Virus

#artificialintelligence

When the Coronavirus first emerged in China, many scientists were unaware of what it was and what it could do to a community. This is probably the biggest pandemic most of us have seen in our lifetime. In this regard, it becomes important to understand that such threats can arise again in the future. For this reason, several international scientists are using artificial intelligence-aided programs to predict what the next big threat like Coronavirus will be. In this way, health systems across the world can be prepared with better vaccines to prevent further pandemics.


AI Writes About AI - Robot Writers AI

#artificialintelligence

Editors and writers curious about AI's ability to generate long-form writing will want to check out this piece by SEPGRA, an economic think tank. The group decided to give GPT-3, one of the world's most powerful AI text generators, a run for its money by inputting one simple phrase and asking GPT-3 to respond. The phrase: "Write an essay about text written by AI." The resulting 900-word essay published in this article is emblematic of the tech's current prowess. Essentially: the piece begins with an excellent focus on the specific topic, but becomes ever more generalized as the article unfolds. In fact, by the close of the essay, GPT-3 completely veers off into a discussion of AI's oft-reported ability to beat the world's greatest chess masters.


TBCOV: Two Billion Multilingual COVID-19 Tweets with Sentiment, Entity, Geo, and Gender Labels

Imran, Muhammad, Qazi, Umair, Ofli, Ferda

arXiv.org Artificial Intelligence

The widespread usage of social networks during mass convergence events, such as health emergencies and disease outbreaks, provides instant access to citizen-generated data that carry rich information about public opinions, sentiments, urgent needs, and situational reports. Such information can help authorities understand the emergent situation and react accordingly. Moreover, social media plays a vital role in tackling misinformation and disinformation. This work presents TBCOV, a large-scale Twitter dataset comprising more than two billion multilingual tweets related to the COVID-19 pandemic collected worldwide over a continuous period of more than one year. More importantly, several state-of-the-art deep learning models are used to enrich the data with important attributes, including sentiment labels, named-entities (e.g., mentions of persons, organizations, locations), user types, and gender information. Last but not least, a geotagging method is proposed to assign country, state, county, and city information to tweets, enabling a myriad of data analysis tasks to understand real-world issues. Our sentiment and trend analyses reveal interesting insights and confirm TBCOV's broad coverage of important topics.
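The country-level part of the geotagging step can be illustrated with a point-in-bounding-box lookup. The boxes below are rough, hypothetical coordinates chosen for illustration; a production pipeline like TBCOV's would resolve tweets against gazetteers and administrative polygons (down to state, county, and city) rather than rectangles:

```python
# Hypothetical bounding boxes: (min_lat, max_lat, min_lon, max_lon).
COUNTRY_BOXES = {
    "Qatar": (24.4, 26.2, 50.7, 51.7),
    "Iceland": (63.2, 66.6, -24.6, -13.1),
}

def geotag(lat, lon):
    """Return the first country whose box contains the point, else None."""
    for country, (lat0, lat1, lon0, lon1) in COUNTRY_BOXES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return country
    return None

print(geotag(25.3, 51.5))   # → Qatar (Doha falls inside Qatar's box)
print(geotag(0.0, 0.0))     # → None (no box contains the point)
```

Rectangles overlap for real country borders, which is one reason polygon matching is needed at scale; the sketch only shows the shape of the lookup.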


COVID-19: quality of life and artificial intelligence

#artificialintelligence

Bongs Lainjo Cybermatic International, Montréal, QC, Canada Correspondence: Bongs Lainjo Email [email protected] Abstract: The objective of the study is to conduct an exploratory review of the Covid-19 pandemic by focusing on the theme of Covid-19 pandemic morbidity and mortality, considering the dynamics of artificial intelligence and quality of life (QOL). The methods used in this research paper include a review of literature, anecdotal evidence, and reports on the morbidity of COVID-19, including the scope of its devastating effects in different countries and regions such as the US, the UK, China, Brazil, and Africa, among others. The findings of this study suggested that the devastating effects of the coronavirus are felt across different vulnerable populations. These include the elderly, front-line workers, marginalized communities, visible minorities, and more. The challenge in Africa is especially daunting because of inadequate infrastructure and financial and human resources, among other factors. Besides, AI technology is being successfully used by scientists to enhance the development process of vaccines and drugs. However, its usage in other stages of the pandemic has not been adequately explored. Ultimately, it has been concluded that the effects of Covid-19 are producing unprecedented and catastrophic outcomes in many countries. With a few exceptions, the common and current intervention approach is driven by many factors, including the compilation of relevant, reliable, and compelling data sets. On a positive note, the compelling trailblazing and catalytic contributions of AI towards the rapid discovery of COVID-19 vaccines are a good indication of future technological innovations and their effectiveness. History has a way of reminding us that while the good times are great, business as usual comes with many unforeseen risks and challenges. On a positive note, stress, anxiety, and other mental health issues have turned around many mindsets in certain groups.
There are now significant and unprecedented levels of compassion, empathy, and more, originating from many populations. One earlier instance in which significant challenges were posed to communities was the First World War. There was also the Spanish flu, then the Second World War, and for the last 60-plus years we have had to live in a world of misgivings, ranging from populism to political unrest and instability in several parts of the world, primarily the Middle East and parts of Asia.


Enzolytics Announces A Comprehensive Therapeutic Protocol For Production Of Monoclonal Antibodies To Address Ongoing And Future Pandemics

#artificialintelligence

COLLEGE STATION, TX / ACCESSWIRE / June 7, 2021 / Enzolytics, Inc. (OTC PINK:ENZC) (https://enzolytics.com/) has announced a coherent protocol that it intends to execute to meet the Company's objective of producing a therapeutic cure for HIV as well as a planned protocol to address existing and future pandemics. This protocol has been defined as a result of the Company's collaboration with Intel Corporation in the field of applying computer analysis (Artificial Intelligence - A.I.) to accelerate health care discoveries and development. This collaboration includes exploring the interaction of monoclonal antibodies with viruses in 3-dimensional matrices. This opens new innovative pathways for drug discovery and development. The Company is actively exploring biotech partnerships, in the same way the Company is working with Intel Corporation, to advance and provide effective therapies and cures for existing and new viral illnesses. In combination with the Company's patented and clinically tested anti-HIV peptide ITV-1, the Company proposes a collaboration to fully implement the following protocol for developing and deploying therapeutics to address existing and future pandemics.