A multiagent based framework secured with layered SVM-based IDS for remote healthcare systems Artificial Intelligence

As the number of elderly people and patients in hospitals and healthcare centers grows, providing efficient remote healthcare services is increasingly important. Currently, most such systems benefit from the distribution and autonomy of multiagent systems and the structure of wireless sensor networks. On the one hand, securing the data of remote healthcare systems is one of the most significant concerns; in particular, recent research on the security of remote healthcare systems protects them from eavesdropping and data modification. On the other hand, because they are managed remotely over the Internet, existing remote healthcare systems remain vulnerable to other common attacks on healthcare networks, such as Denial of Service (DoS) and User to Root (U2R) attacks. Therefore, in this paper, we propose a secure framework for remote healthcare systems that consists of two phases. First, we design a healthcare system based on multiagent technology to collect data from a sensor network. Then, in the second phase, we apply a layered architecture of intrusion detection systems that uses a Support Vector Machine (SVM) to learn the behavior of network traffic. Based on our framework, we implement a secure remote healthcare system and evaluate it against frequent attacks on healthcare networks, such as Smurf, Buffer overflow, Neptune, and Pod attacks. Finally, the evaluation metrics of the layered intrusion detection architecture demonstrate the efficiency and correctness of the proposed framework.
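To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of training an SVM to label network-traffic records as normal or attack. The feature set and toy data below are invented for illustration; the paper's actual features and dataset are not specified here.

```python
# Illustrative sketch: an SVM learns the behavior of network traffic
# from labeled records, then classifies new flows as normal or attack.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy feature vectors: [duration_s, bytes_sent, bytes_received, packets_per_s]
X_train = [
    [0.10, 200, 300, 5],    # normal session
    [0.20, 180, 250, 4],    # normal session
    [0.00, 0, 0, 900],      # flood-style attack (e.g., Smurf)
    [0.00, 0, 0, 1200],     # flood-style attack (e.g., Neptune)
]
y_train = ["normal", "normal", "attack", "attack"]

# Scale features, then fit an RBF-kernel SVM on the labeled traffic
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

# Classify a new, suspicious flow
label = model.predict([[0.0, 0, 0, 1000]])[0]
```

In a layered architecture, one such classifier per layer could specialize in a class of attacks (e.g., DoS at one layer, U2R at another), passing only traffic it deems normal to the next layer.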

Why the Future of Healthcare is Federated AI - insideBIGDATA


In this special guest feature, Akshay Sharma, Executive Vice President of Artificial Intelligence (AI) at Sharecare, highlights the advancements and impact of federated AI and edge computing for the healthcare sector, as they ensure data privacy and expand the breadth of individual, organizational, and clinical knowledge. Sharma joined Sharecare in 2021 through its acquisition of a Silicon Valley-based company that accelerated digital transformation in healthcare. He previously held various leadership positions, including CTO and vice president of engineering, a role in which he developed several key technologies that power mobile-based privacy products in healthcare. In addition to his role at Sharecare, Sharma serves as CTO of TEDxSanFrancisco and is involved in initiatives to decentralize clinical trials. He holds bachelor's degrees in engineering and information science from Visvesvaraya Technological University.

Hackers breach thousands of security cameras, exposing Tesla, jails and hospitals

The Japan Times

A group of hackers say they breached a massive trove of security-camera data collected by Silicon Valley startup Verkada Inc., gaining access to live feeds of 150,000 surveillance cameras inside hospitals, companies, police departments, prisons and schools. Companies whose footage was exposed include carmaker Tesla Inc. and software provider Cloudflare Inc. In addition, hackers were able to view video from inside women's health clinics, psychiatric hospitals and the offices of Verkada itself. Some of the cameras, including in hospitals, use facial-recognition technology to identify and categorize people captured on the footage. The hackers say they also have access to the full video archive of all Verkada customers.

Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities Artificial Intelligence

Advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore queer concerns in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues underscore the need for new directions in fairness research that take into account a multiplicity of considerations, from privacy preservation, context sensitivity and process fairness, to an awareness of sociotechnical impact and the increasingly important role of inclusive and participatory research processes. Most current approaches for algorithmic fairness assume that the target characteristics for fairness--frequently, race and legal gender--can be observed or recorded. Sexual orientation and gender identity are prototypical instances of unobserved characteristics, which are frequently missing, unknown or fundamentally unmeasurable. This paper highlights the importance of developing new approaches for algorithmic fairness that break away from the prevailing assumption of observed characteristics.

How Might Artificial Intelligence Applications Impact Risk Management?


Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management.

AI and machine learning: a gift, and a curse, for cybersecurity


The Universal Health Services attack this past month has brought renewed attention to the threat of ransomware faced by health systems – and what hospitals can do to protect themselves against a similar incident. Security experts say that the attack, beyond being one of the most significant ransomware incidents in healthcare history, may also be emblematic of the ways machine learning and artificial intelligence are being leveraged by bad actors. With some kinds of "early worms," said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, "we saw [cybercriminals] performing these automated actions, and taking information from their environment and using it to spread and pivot automatically; identifying information of value; and using that to exfiltrate." The complexity of performing these actions in a new environment relies on "using AI and ML at its core," said Foss. And once access is gained to a system, he continued, much malware requires little user interaction.

How 5G Will Impact - Dramatically Change - Individuals, Industries, Governments


The potential for 5G in business leaves plenty of room for excitement, too, and organizations should start thinking about how 5G could improve their processes and production. The time to dream is now.

AI and IoT Power Self-Serve Health Clinics


Advances in China's standard of living provide more people with access to healthcare. Nonetheless, with life expectancies now averaging 76.5 years, medical costs are on the rise. And while the number of top-tier hospitals throughout the country has more than doubled, the annual number of outpatient visits increased almost fourfold during that same period. Improving patient outcomes now relies on the use of new technologies such as real-time analytics, facial recognition, and the IoT. Innovation enables more people to get better access to healthcare information and advice without going to a hospital or waiting to see a doctor. It can also reduce the strain on overburdened medical personnel and resources by automating collection, transmission, and storage of healthcare data used in patient records.

Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). Because health data contain sensitive private information, including the identities and medical conditions of patients and carers, proper care is required at all times. Leakage of this private information can affect personal life, leading to bullying, higher insurance premiums, or job loss due to one's medical history. Thus, the security and privacy of, and trust in, the information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of the security, privacy, ethical, and regulatory requirements of precision health data, finding the best methods and techniques for utilizing health data, and thus enabling precision health, is essential. In this regard, this paper first explores regulations and ethical guidelines around the world as well as domain-specific needs, then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for computing on precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy, together with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and development in the health domain.
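As one concrete example of the privacy-preserving techniques such a survey covers, here is a minimal sketch of the Laplace mechanism for differential privacy, applied to a count query over patient records. The record fields and toy data are illustrative, not drawn from the paper.

```python
# Sketch: release a count over sensitive health records with
# epsilon-differential privacy by adding calibrated Laplace noise.
import math
import random

def laplace_noise(scale):
    """Sample a Laplace(0, scale) variate via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Release the count of records matching `predicate`.
    A count query has sensitivity 1, so noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Toy patient records (illustrative only)
records = [
    {"age": 70, "diabetic": True},
    {"age": 65, "diabetic": False},
    {"age": 80, "diabetic": True},
]
noisy = private_count(records, lambda r: r["diabetic"], epsilon=0.5)
```

A smaller epsilon gives stronger privacy at the cost of a noisier answer; the true count is never released directly.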

Privacy Preserving Recalibration under Domain Shift Artificial Intelligence

Classifiers deployed in high-stakes real-world applications must output calibrated confidence scores, i.e. their predicted probabilities should reflect empirical frequencies. Recalibration algorithms can greatly improve a model's probability estimates; however, existing algorithms are not applicable in real-world situations where the test data follows a different distribution from the training data, and privacy preservation is paramount (e.g. protecting patient records). We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints. This framework allows us to adapt existing recalibration algorithms to satisfy differential privacy while remaining effective for domain-shift situations. Guided by our framework, we also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets. In an extensive empirical study, we find that our algorithm improves calibration on domain-shift benchmarks under the constraints of differential privacy. On the 15 highest severity perturbations of the ImageNet-C dataset, our method achieves a median ECE of 0.029, over 2x better than the next best recalibration method and almost 5x better than without recalibration.
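To ground the terms in the abstract, here is a sketch of the standard building blocks: temperature scaling (a common recalibration method) and Expected Calibration Error (ECE, the metric quoted). The paper's "accuracy temperature scaling" is a novel variant not reproduced here, and the logits below are illustrative.

```python
# Sketch: temperature scaling softens overconfident probabilities;
# ECE measures the gap between confidence and empirical accuracy.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence; ECE is the weighted average gap
    between mean confidence and accuracy within each bin."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        mask = (conf > lo) & (conf <= lo + 1.0 / n_bins)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(conf[mask].mean() - acc)
    return ece

# Overconfident toy logits: a temperature > 1 softens the probabilities
logits = np.array([[4.0, 0.0], [3.5, 0.5], [0.2, 0.3]])
labels = np.array([0, 1, 1])
ece_raw = expected_calibration_error(softmax(logits), labels)
ece_scaled = expected_calibration_error(softmax(logits, temperature=3.0), labels)
```

On this toy data the softened probabilities yield a lower ECE; in practice the temperature is fit on a held-out set, and under the paper's setting that fitting would additionally have to satisfy differential privacy.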