Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges

arXiv.org Artificial Intelligence

When 5G began its commercialisation journey around 2020, discussion of the vision for 6G also surfaced. Researchers expect 6G to offer higher bandwidth, coverage, reliability, and energy efficiency, lower latency, and, more importantly, an integrated "human-centric" network system powered by artificial intelligence (AI). Such a 6G network will involve an enormous number of automated decisions made every second, ranging from network resource allocation to collision avoidance for self-driving cars. However, the risk of losing control over decision-making may grow as high-speed, data-intensive AI decisions move beyond designers' and users' comprehension. Promising explainable AI (XAI) methods can mitigate such risks by enhancing the transparency of the black-box AI decision-making process. This survey highlights the need for XAI across every aspect of the upcoming 6G age, including 6G technologies (e.g., intelligent radio, zero-touch network management) and 6G use cases (e.g., Industry 5.0). Moreover, we summarise the lessons learned from recent attempts and outline important research challenges in applying XAI to building 6G systems. This research aligns with goals 9, 11, 16, and 17 of the United Nations Sustainable Development Goals (UN-SDG): promoting innovation and building infrastructure, sustainable and inclusive human settlement, advancing justice and strong institutions, and fostering partnership at the global level.
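The survey positions XAI methods as a way to make opaque, black-box decisions inspectable. As an illustrative sketch only (the paper does not prescribe this method, and all names below are hypothetical), one widely used model-agnostic technique is permutation feature importance: shuffle one input feature at a time and measure how much the model's score degrades.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's importance by shuffling its column
    and measuring the drop in the model's score."""
    rng = np.random.default_rng(seed)
    base = metric(y, model(X))                 # score on unperturbed data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # break the feature/label link
            drops.append(base - metric(y, model(Xp)))
        importances[j] = np.mean(drops)        # mean score degradation
    return importances

# Toy "black box": a decision rule that depends on feature 0 only.
model = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y, p: np.mean(y == p)

X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y, accuracy)
# Shuffling feature 0 hurts accuracy; the unused features do not.
```

A per-feature attribution like this is one of the simpler transparency signals an operator could surface for an automated network-management decision.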


On the Opportunities and Risks of Foundation Models

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.


The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI

arXiv.org Artificial Intelligence

There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous peoples addressing the lack of reliable data, to smart city stakeholders, to students protesting academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on Earth.


Big Data Meet Cyber-Physical Systems: A Panoramic Survey

arXiv.org Machine Learning

The world is witnessing an unprecedented growth of cyber-physical systems (CPS), which are foreseen to revolutionize our world by creating new services and applications in a variety of sectors such as environmental monitoring, mobile-health systems, and intelligent transportation systems. The information and communication technology (ICT) sector is experiencing significant growth in data traffic, driven by the widespread usage of smartphones, tablets, and video streaming, along with the significant growth in sensor deployments anticipated in the near future, which is expected to markedly increase the growth rate of raw sensed data. In this paper, we present a CPS taxonomy by providing a broad overview of data collection, storage, access, processing, and analysis. Compared with other survey papers, this is the first panoramic survey on big data for CPS, where our objective is to provide a panoramic summary of different CPS aspects. Furthermore, CPS require cybersecurity to protect them against malicious attacks and unauthorized intrusion, which becomes a challenge with the enormous amount of data continuously being generated in the network. Thus, we also provide an overview of the different security solutions proposed for CPS big data storage, access, and analytics. We also discuss big data meeting green challenges in the context of CPS.


Machine learning capabilities aid healthcare cybersecurity

#artificialintelligence

As the new year draws near, healthcare organizations are thinking about where to focus their resources. Matt Mellen, security architect and healthcare solution lead at Palo Alto Networks, predicts that, in 2018, machine learning capabilities will not only enhance a healthcare organization's cybersec...


What to know before buying AI-based cybersecurity tools

#artificialintelligence

Some artificial intelligence and machine learning proponents present the technologies as if they were manna from heaven, tools capable of replacing humans. And it's not unusual for mere mention of the term "artificial intelligence" to evoke images of futuristic machines that can think for themselves. The truth is simpler than that. Artificial intelligence and machine learning are tools healthcare executives, technical staff, and clinicians can use to enhance operations and improve healthcare. Artificial intelligence is when computers replicate something that humans do; real AI is when the results are as good as or better than the best human results, said Dustin Rigg Hillard, vice president of engineering at Versive, which conducts machine learning and artificial intelligence hunting of cyber-adversaries and insider threats.


With phishing a primary threat, hospitals should invest in machine learning security

#artificialintelligence

On May 12, the largest ransomware outbreak in history took place, targeting 300,000 machines in 150 countries, with the U.K.'s National Health Service (NHS) taking the brunt of the attack. In fact, 48 hospital trusts in the U.K. were targeted by the NSA cyber weapon-powered WannaCry ransomware, in addition to an unknown number of hospitals in the United States. Further, the Health Information Trust Alliance (HITRUST) reported that not just hospital machines were infected, but also medical devices from both Bayer and Siemens. By shutting down systems, communication channels and equipment, cybercriminals locked healthcare professionals out of their EHRs, forced them to cancel appointments and even turned away emergency patients. Unfortunately, this is just another example of the healthcare industry being targeted by increasingly sophisticated and frequent ransomware attacks.


AI provides an urgent solution to evolving ransomware threats facing healthcare

#artificialintelligence

Artificial intelligence that can quickly identify patterns of risky behavior may be the only viable solution to protect health systems against an influx of ransomware attacks. The use of AI in the clinical environment has been well-documented as more health systems are turning to machine learning to improve oncology care, fight physician burnout, boost patient engagement and even reverse diabetes. But healthcare needs to use the power of machine learning to combat cybersecurity threats, according to a report (PDF) released by the Institute for Critical Infrastructure Technology. James Scott, a senior fellow at ICIT who authored the report, didn't mince words regarding the urgent need to protect patient information against cyberattacks, particularly ransomware, which has emerged as a critical threat to the industry over the past year. Scott noted that the healthcare industry "demonstrates lackadaisical cyber hygiene, finagled and Frankensteined networks, virtually unanimous absence of security operations teams and good ol' boys club bureaucratic board members flexing little more than smoke and mirror, cybersecurity theatrics as their organizational defense."
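The report's premise is AI that "can quickly identify patterns of risky behavior." As a minimal illustrative sketch only (not taken from the ICIT report; the metric and thresholds are assumptions), a baseline-deviation detector over per-minute file-write rates can flag the burst of encryption activity typical of ransomware:

```python
import statistics

def flag_anomalies(baseline, live, threshold=3.0):
    """Return indices of `live` time windows whose file-write rate
    exceeds the clean-traffic baseline by more than `threshold`
    standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0   # guard against zero variance
    return [i for i, rate in enumerate(live)
            if (rate - mean) / stdev > threshold]

# File modifications per minute: a quiet baseline learned from known-clean
# traffic, then a live trace containing a ransomware-like encryption burst.
baseline = [10, 12, 11, 9, 10, 11, 10, 9]
live = [11, 10, 480, 9]
print(flag_anomalies(baseline, live))   # [2] — the burst window
```

Production systems use far richer features (process lineage, entropy of written files, network beaconing), but the core idea is the same: model normal behavior, then alert on statistically large deviations fast enough to interrupt the encryption.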


HIMSS 2017 buzz ranges from patient engagement to AI, machine learning

#artificialintelligence

HIMSS 2017 buzz centered on health data cybersecurity, but that hot topic of recent years' gatherings of the health IT universe simmered alongside emerging trends such as patient engagement, artificial intelligence, and machine learning. The progression of healthcare IoT, or the Internet of Medical Things, is not without its challenges, including security, data overload, and regulation.


A Dose of AI Could Be the Cure for Hospital Data Center Cyberattacks in 2017

#artificialintelligence

I know how terrible healthcare records theft can be. I myself have been the victim of a data theft by hackers who stole my deceased father's medical files, running up more than $300,000 in false charges. I am still disputing ongoing bills that have been accruing for the last 15 years. This event set me on the path to finding a solution so that others would not suffer the consequences I continue to face, but hospitals and other healthcare providers must be willing to make the change. The writing is on the wall.