Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL offloads computation from the server to the clients and does not require the clients to outsource their private data to the server. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch may leak information about the clients' private data. On the other hand, the model learnt by the server may be subject to attacks by malicious clients; these security attacks may poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey the solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
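As a concrete illustration of the training loop this abstract describes (a sketch of federated averaging, not the paper's own algorithm), the example below has hypothetical clients fit a shared linear model locally while the server only aggregates their weight updates; the data, learning rate, and round counts are all assumptions made for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on least squares.
    Only the resulting weights leave the client, never the data itself."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data):
    """Server-side step: aggregate client updates weighted by dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Hypothetical setup: three clients holding private samples of the same task
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # 20 communication rounds
    w = federated_averaging(w, clients)
```

In this toy run the aggregated model converges to the weights underlying all clients' data, which is what a malicious client could poison by submitting crafted updates.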
Since COVID-19 was first identified in December 2019, various public health interventions have been implemented across the world. As different measures were implemented in different countries at different times, we assess the relative effectiveness of the measures implemented in 18 countries and regions using data from 22/01/2020 to 02/04/2020. We identify the one or two measures that were most effective in the countries and regions studied during this period. Two explainable AI techniques, SHAP and ECPI, are used in our study: we construct machine learning models for predicting the instantaneous reproduction number ($R_t$), use the models as surrogates for the real world, and regard the inputs with the greatest influence on our models as the most effective measures. Across the board, city lockdown and contact tracing are the two most effective measures. For ensuring $R_t<1$, public wearing of face masks is also important. Mass testing alone is not among the most effective measures, although it can be effective when paired with other measures. Warm temperature also helps reduce transmission.
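The surrogate-model idea in this abstract (ranking interventions by their influence on a model of $R_t$) can be sketched with permutation importance, a simpler stand-in for the SHAP/ECPI analysis the study actually uses; the intervention effects below are invented for the example and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
# Hypothetical binary indicators per country-day: lockdown, tracing, masks
X = rng.integers(0, 2, size=(n, 3)).astype(float)
# Hypothetical R_t with assumed effect sizes (lockdown strongest, masks weakest)
rt = 2.5 - 1.0 * X[:, 0] - 0.8 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.05, n)

# Fit a least-squares surrogate model predicting R_t from the interventions
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, rt, rcond=None)
predict = lambda M: np.column_stack([np.ones(len(M)), M]) @ coef

# Permutation importance: how much the surrogate's error rises
# when one intervention indicator is shuffled (its signal destroyed)
base_err = np.mean((predict(X) - rt) ** 2)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((predict(Xp) - rt) ** 2) - base_err)
```

Under these assumed effects, lockdown and tracing come out as the most influential inputs, mirroring the way the study reads influence on the surrogate as effectiveness of the measure.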
The Spanish government is planning to test 80,000 people a day for coronavirus with the roll-out of robot testers. Technology will be used to speed up testing of people in Spain, one of the countries hardest hit by the Covid-19 outbreak, with more than 200 deaths so far. According to Bloomberg, Spanish authorities now plan to increase daily testing from about 20,000 a day to 80,000 by using four robots to apply artificial intelligence (AI) to testing. Speaking at a conference on Saturday 21 March, Raquel Yotti, head of Madrid's health institute, said: "A plan to automate tests through robots has already been designed and Spain has committed to buying four robots that will allow us to execute 80,000 tests per day." Because of the ease with which coronavirus spreads from person to person, testing has been identified as one of the best ways to control the disease.
Spain will unleash robots capable of testing 80,000 patients a day into the heart of its coronavirus fight. The Spanish government says it will deploy the machines to increase testing from its current daily figure of between 15,000 and 20,000. Raquel Yotti, head of Madrid's Health Institute Carlos III, said the plans to deploy the robots are already under way. She spoke as Spain's death toll surpassed 1,300 and the number of cases reached nearly 25,000. She said at a conference: "A plan to automate tests through robots has already been designed, and Spain has committed to buying four robots that will allow us to execute 80,000 tests per day."
Risk assessment is a major challenge for supply chain managers, as it potentially affects business factors such as service costs, supplier competition and customer expectations. The increasing interconnectivity between organisations has brought methods for supply chain cyber risk management into focus. We introduce a general approach to support this activity, taking into account the various techniques for attacking an organisation and its suppliers, as well as the impacts of such attacks. Since data is lacking in many respects, we use structured expert judgment methods to facilitate the approach's implementation. We couple it with a family of forecasting models to enrich risk monitoring. The approach may be used to set up risk alarms, negotiate service level agreements, rank suppliers and identify insurance needs, among other management possibilities.
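One of the uses the abstract names, setting up risk alarms from a forecasting model, can be sketched minimally: forecast the next period's incident count and raise an alarm when the forecast crosses a tolerance threshold. The smoothing method, the threshold, and the incident series below are all illustrative assumptions, not the paper's models.

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: one-step-ahead forecast of a series."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def risk_alarm(series, threshold):
    """Raise an alarm when the forecast exceeds the risk tolerance."""
    return ses_forecast(series) > threshold

# Hypothetical monthly counts of attempted attacks on a supplier
incidents = [2, 3, 2, 4, 3, 5, 6, 8]
```

With this rising series the one-step forecast lands between 5 and 6, so an alarm set at 5 incidents fires while one set at 6 does not; in practice the threshold would come from the expert-judgment-elicited impact assessment.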
The automatic classification of applications and services is an invaluable feature for new-generation mobile networks. Here, we propose and validate algorithms to perform this task at runtime, from the raw physical channel of an operational mobile network, without having to decode and/or decrypt the transmitted flows. To this end, we decode Downlink Control Information (DCI) messages carried within the LTE Physical Downlink Control CHannel (PDCCH). DCI messages are sent by the radio cell in clear text and, in this paper, are utilized to classify the applications and services executed on the connected mobile terminals. Two datasets are collected through a large measurement campaign: a labeled one, used to train the classification algorithms, and an unlabeled one, collected from four radio cells in the metropolitan area of Barcelona, Spain. Among the approaches considered, our Convolutional Neural Network (CNN) classifier achieves the highest classification accuracy, 99%. The CNN classifier is then augmented with the capability of rejecting sessions whose patterns do not conform to those learned during the training phase, and is subsequently used to obtain a fine-grained decomposition of the traffic of the four monitored radio cells, in an online and unsupervised fashion.
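The rejection capability described here can be illustrated with one common mechanism, thresholding the classifier's softmax confidence so that sessions unlike anything seen in training are labeled unknown instead of being forced into a class. This is a generic sketch, not the paper's rejection method, and the class names, logits, and threshold are invented.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_reject(logits, labels, threshold=0.9):
    """Return the predicted label, or 'unknown' when the top-class
    probability falls below the threshold (session rejected)."""
    p = softmax(logits)
    return labels[int(p.argmax())] if p.max() >= threshold else "unknown"

labels = ["video", "audio", "web"]  # hypothetical service classes
print(classify_with_reject(np.array([6.0, 0.5, 0.2]), labels))  # confident -> video
print(classify_with_reject(np.array([1.0, 0.9, 0.8]), labels))  # ambiguous -> unknown
```

Rejected sessions can then be set aside or clustered separately, which is what makes an online, unsupervised decomposition of live traffic feasible.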
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations across many application sectors. For this to occur, the entire community faces the barrier of explainability, an inherent problem of the sub-symbolic AI techniques (e.g. ensembles or Deep Neural Networks) that were not present in the previous wave of AI. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospective outlook on what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy built for Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material that stimulates future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without any prior bias against its lack of interpretability.
An employer in Spain may not be able to fire a worker caught on a surveillance camera doing something prohibited if the company hasn't informed workers about the video system and its purpose, according to a recent trial court decision. In a case involving an employee fired after a security camera captured him in a parking-lot fight after work hours, a Pamplona labor court ruled that the video evidence was inadmissible under the European Union's General Data Protection Regulation (GDPR) and case law from the European Court of Human Rights (ECHR). "The judgment is of great interest since it is the first ruling by a Spanish court on the validity that can be given to the evidence of video recordings after the publication of the new Spanish Data Protection Law and also an interpretation of the new European Data Protection Regulation," according to a blog post from Manuel Vargas of Barcelona's Marti & Associats law firm. Under Spain's own data-protection law, employers who record a worker doing something illegal are considered to have fulfilled their duty to inform so long as they have posted a sign identifying a video surveillance zone, Vargas wrote. He also noted that recent case law from the Spanish Supreme Court endorses the idea that employers aren't obligated to notify workers that they plan to use video cameras to monitor their activity for possible disciplinary purposes.
Spain's Prime Minister Pedro Sánchez closed the Spanish Strategy for R&D&I in Artificial Intelligence workshop, held in Granada. During his speech, Sánchez highlighted that technologies related to artificial intelligence are already one of the main factors of growth, and hence Spain and Europe have to make a joint effort to move forward on this important line for social and economic progress. Pedro Sánchez explained that the document presented on Monday is the first step in drawing up the National Strategy on Artificial Intelligence, which 11 ministerial departments will work on and which will be ready later this year. Sánchez stressed the importance of science, innovation and universities for the present and future of the country. In this regard, he highlighted the creation of a specific ministerial department for these fields, the approval of a fundamental Royal Decree-Law to make the functioning of scientific bodies more flexible, the strengthening of equal opportunities, and the approval, last Friday, of the Statute for Research Personnel in Training and the stabilisation of 1,500 temporary positions in public research bodies, which account for 10% of the total research workforce.
Executives from some of the largest companies in Europe and leading academics gathered for the first meeting of the European Commission's (EC) AI4EU project, which aims to drive adoption of artificial intelligence (AI) in a wide range of industries. The project's launch meeting in Barcelona, Spain, brought together partners from 21 EU countries, with 79 organisations involved. Delegates included telecoms operators, technology companies, other enterprises and academic institutions. The project is led by French aerospace and transport company Thales Group and features experts from Telenor and Orange among its representatives. AI4EU plans to conduct a number of pilots across different industries to define the benefits and issues related to AI technology.