Computing technology has become pervasive, and with it the expectation that it be readily available whenever needed, which in practice means at all times. Dependability is the set of techniques to build, configure, operate, and manage computer systems to ensure that they are reliable, available, safe, and secure.1 But alas, faults seem to be inherent to computer systems. Components can crash outright, produce incorrect output due to hardware or software bugs, or be compromised by intruders who orchestrate their behavior. Fault tolerance is the ability of a system as a whole to continue operating correctly, and with acceptable performance, even if some of its components are faulty.3 Fault tolerance is not new; von Neumann himself designed techniques for computers to survive faults.4
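As a concrete illustration, the following is a minimal sketch of one classic fault-tolerance technique, triple modular redundancy with majority voting, in the spirit of von Neumann's redundancy ideas cited above; the replica functions and the simulated bug are illustrative assumptions, not taken from the article.

    from collections import Counter

    def tmr(replicas, *args):
        # Run redundant implementations of the same component and return the
        # majority output, masking a single faulty replica (crash or bad value).
        outputs = []
        for replica in replicas:
            try:
                outputs.append(replica(*args))
            except Exception:
                pass  # a crashed replica simply contributes no vote
        if not outputs:
            raise RuntimeError("all replicas failed")
        value, votes = Counter(outputs).most_common(1)[0]
        if votes * 2 <= len(replicas):
            raise RuntimeError("no majority: too many faulty replicas")
        return value

    # One replica harbors a bug, yet the system as a whole answers correctly.
    add_ok = lambda a, b: a + b
    add_buggy = lambda a, b: a - b  # simulated software fault
    print(tmr([add_ok, add_buggy, add_ok], 2, 3))  # prints 5

Here the majority vote masks the faulty component, so the system continues to deliver correct results even though one of its parts does not.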
The healthcare system in Latin America (LATAM) has made significant improvements in the last few decades. Nevertheless, it still faces serious challenges, including poor access to healthcare services, insufficient resources, and health inequalities that may lead to decreased life expectancy, lower quality of life, and weaker economic growth. Digital Healthcare (DH) enables the convergence of innovative technology with recent advances in neuroscience, medicine, and public healthcare policy.a In this article, we discuss key DH efforts that can help address some of the challenges of the healthcare system in LATAM, focusing on two countries: Brazil and Mexico. We chose to study DH in the context of Brazil and Mexico because both countries are good representatives of the state of healthcare in LATAM and face challenges similar to those of other countries in the region. Brazil and Mexico have the largest economies in the region and account for approximately half of the population and geographic territory of LATAM.11
Large, expensive, computing-intensive research initiatives have historically promoted high-performance computing (HPC) in the wealthiest countries, most notably the U.S., Europe, Japan, and China. The explosive growth of the Internet and of artificial intelligence (AI) has pushed HPC to a new level, affecting economies and societies worldwide. Latin America was no exception. Nevertheless, the use of HPC in science has affected the countries of the region unevenly. Since the first edition of the TOP500 list of the world's most powerful supercomputing systems in 1993, only Mexico and Brazil (with 18 appearances each) have made the list with research-oriented supercomputers.
Theoretical computer science (TCS) is everywhere, for TCS is concerned with the foundations of computing, and computing is everywhere! In the last three decades, a vibrant Latin American TCS community has emerged; here, we describe and celebrate some of its many noteworthy achievements. Computer science became a distinct academic discipline in the 1950s and early 1960s. The first CS department in the U.S. was formed in 1962, and by the 1970s virtually every university in the U.S. had one. In contrast, by the late 1970s, just a handful of Latin American universities were actively conducting research in the area. Several CS departments were eventually established during the late 1980s, and theoreticians often played a decisive role in their foundation. A key catalyst for collaboration among the small but growing number of enthusiastic theoreticians active in the international academic arena was the founding of regional conferences.
We use the term imaging sciences to refer to the broad spectrum of scientific and technological contexts that involve digital images, including, among others, image and video processing, scientific visualization, computer graphics, animation in games and simulators, and remote sensing imagery, as well as the wide range of associated application areas that have become ubiquitous during the last decade in science, art, human-computer interaction, entertainment, social networks, and many others. As an area that combines mathematics, engineering, and computer science, the discipline arose in a few universities in Argentina, mostly as elective classes and small research projects in electrical engineering or computer science departments. Only in the mid-2000s did initiatives start to appear that aimed to generate joint activities and to give the discipline identity and visibility. In this short paper, we present a brief history of the three laboratories with the most relevant research and development (R&D) activities in the discipline in Argentina, namely the Imaging Sciences Laboratory of the Universidad Nacional del Sur, the PLADEMA Institute at the Universidad Nacional del Centro de la Provincia de Buenos Aires, and the Image Processing Laboratory at the Universidad Nacional de Mar del Plata. The Imaging Sciences Laboratorya of the Electrical and Computer Engineering Department of the Universidad Nacional del Sur in Bahía Blanca began its activities in the 1990s as a pioneer in Argentina and Latin America in research and teaching in computer graphics and visualization.
The Millennium Institute for Foundational Research on Dataa (IMFD) started its operations in June 2018, funded by the Millennium Science Initiative of the Chilean National Agency of Research and Development.b IMFD is a joint initiative led by Universidad de Chile and Universidad Católica de Chile, with the participation of five other Chilean universities: Universidad de Concepción, Universidad de Talca, Universidad Técnica Federico Santa María, Universidad Diego Portales, and Universidad Adolfo Ibáñez. IMFD aims to be a reference center in Latin America for state-of-the-art research on the foundational problems of data, as well as on its applications to issues ranging from scientific challenges to complex social problems. As tasks of this kind are interdisciplinary by nature, IMFD gathers a large number of researchers in areas that include traditional computer science topics such as data management, Web science, algorithms and data structures, privacy and verification, information retrieval, data mining, machine learning, and knowledge representation, as well as areas from other fields, including statistics, political science, and communication studies. IMFD currently hosts 36 researchers, seven postdoctoral fellows, and more than 100 students.
The evolution of artificial intelligence and related technologies has the potential to drastically increase the clinical importance of automated diagnosis tools. Putting these tools into use, however, is challenging, since algorithm outcomes will be used to make clinical decisions, and wrong predictions can prevent the most appropriate treatment from being provided to the patient. Models should not only provide accurate predictions, but also evidence that supports their outcomes, so they can be audited and their predictions double-checked. Some models are constructed in such a way that they are difficult to interpret, hence the name black-box models. While there are methods that generate explanations for generic black-box classifiers,9 these solutions are usually not tailored to the needs of physicians and do not take any medical background into consideration.
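To make the idea of explaining a generic black-box classifier concrete, here is a minimal sketch of a perturbation-based local explanation in the spirit of surrogate-model methods such as those cited above; the function names, the Gaussian perturbation scheme, and the kernel width are illustrative assumptions, not the method of any particular tool.

    import numpy as np

    def explain_instance(predict, x, n_samples=500, scale=0.1, seed=0):
        # Fit a proximity-weighted linear surrogate to the black box around x.
        # predict: callable mapping an (n, d) array to probabilities in [0, 1].
        # Returns one weight per feature; larger magnitude = stronger influence.
        rng = np.random.default_rng(seed)
        d = x.shape[0]
        X = x + rng.normal(0.0, scale, size=(n_samples, d))  # perturb around x
        y = predict(X)
        # Weight perturbed samples by proximity to x (Gaussian kernel).
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
        Xb = np.hstack([X - x, np.ones((n_samples, 1))])  # features + intercept
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
        return coef[:-1]  # drop the intercept; keep per-feature weights

    # Toy black box: risk depends only on feature 0, so the explanation
    # should assign nearly all weight to feature 0.
    black_box = lambda X: 1 / (1 + np.exp(-3 * X[:, 0]))
    print(explain_instance(black_box, np.array([0.5, 0.5])))

A clinician-facing tool would need to go further and map such weights back to medically meaningful features, which is precisely the gap the article points out.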
Technology evolution is no longer keeping pace with the growth of data, and we face problems storing and processing the huge amounts produced every day. People rely on data-intensive applications and new paradigms (for example, edge computing) that try to keep computation close to where data is produced and needed. Thus, the need to store and query data on devices whose capacity is surpassed by the data volume is routine today, from astronomy data processed by supercomputers to personal data processed by wearable sensors. The scale is different, yet the underlying problem is the same.
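One standard answer to this problem is to keep a small, approximate summary instead of the data itself. The count-min sketch below is a minimal illustration of the idea: it answers frequency queries over a stream far larger than the fixed memory it uses. The width, depth, and hashing scheme are illustrative choices, not prescriptions from the article.

    import hashlib

    class CountMinSketch:
        def __init__(self, width=1024, depth=4):
            # Memory is fixed at width * depth counters, independent of stream size.
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def _index(self, item, row):
            # Derive one hash function per row by salting a stable digest.
            h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
            return int.from_bytes(h.digest(), "big") % self.width

        def add(self, item, count=1):
            for row in range(self.depth):
                self.table[row][self._index(item, row)] += count

        def estimate(self, item):
            # The minimum across rows may overshoot due to hash collisions,
            # but it never undershoots the true count.
            return min(self.table[row][self._index(item, row)]
                       for row in range(self.depth))

    # For example, counting event types from a wearable sensor stream:
    cms = CountMinSketch()
    for event in ["heart_rate", "heart_rate", "spo2", "heart_rate"]:
        cms.add(event)
    print(cms.estimate("heart_rate"))  # approximately 3, in constant memory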
Societies and industries are rapidly changing due to the adoption of artificial intelligence (AI) and will face deep transformations in the upcoming years. In this scenario, it becomes critical for under-represented communities in technology, in particular in developing regions such as Latin America, to foster initiatives committed to developing tools for the local adoption of AI. Latin America, like many non-English-speaking regions, faces several problems in adopting AI technology, including the lack of diverse and representative resources for automated learning tasks. A highly problematic area in this regard is natural language processing (NLP), which is strongly dependent on labeled datasets for learning. However, most state-of-the-art NLP resources are devoted to English.
Latin America, with its rich and varied cultural heritage, is a region widely known for its diverse musical rhythms. Indeed, music and dance constitute an important part of Latin American cultural assets and identity.2 Some of these rhythms, although famous worldwide, belong to specific regions; for example, samba is from Brazil, tango from Argentina, merengue from the Dominican Republic, corrido from Mexico, and vallenato from Colombia, among many other examples. Most of them were created through cultural interaction among people of African, Native American, and European origin who shared their music and instruments. These heterogeneous cultural roots made these music styles appealing to an international audience.