If you are involved in next-generation digital product engineering, experimenting with artificial intelligence (AI) will help you imagine new business models, revenue streams, and experiences. But you should be wary of wild headlines about cutting-edge AI breakthroughs. For every AlphaFold that solves a 50-year-old problem such as protein folding, there are dozens of less glitzy but perhaps more impactful business AI advances that are helping to make the field more responsible and privacy-conscious. As algorithms ingest increasingly huge data sets, both in training and in deployment, data privacy as it relates to AI/machine learning (ML) will only grow in importance, especially with new regulations expanding upon GDPR, CCPA, HIPAA, and the like. In fact, the FDA recently issued a new action plan for regulating AI in medical devices.
Neurodivergent workers bring pattern-recognition abilities and other skills that are crucial to enterprises, and to cybersecurity in particular. I caught up with Craig Froelich, chief information security officer at Bank of America, to talk about hiring neurodiverse workers and how they can benefit cybersecurity teams. Here are some of the highlights. Neurodiversity is part of Bank of America's hiring strategy. Neurodivergent people have been in our organization for a long time.
This article is written in response to the recent TraceTogether privacy saga. For the non-Singaporeans out there, TraceTogether is Singapore's contact tracing initiative in response to the COVID-19 pandemic. The objective of the programme is to quickly identify people who may have been in close contact with anyone who has tested positive for the virus. It consists of an app or a physical token that uses Bluetooth signals to store proximity records. As of the end of December 2020, 70% of Singapore residents were supposedly on the programme.
The healthcare and pharmaceutical industry has found itself in the spotlight as all eyes turn to it in the race to find and develop a treatment in the fight against COVID-19. With the brightest minds in the healthcare and life sciences industries working together across international boundaries, sharing research data and findings, there has been increased pressure on medical researchers to find the answers that everyone is looking for in the current crisis. Not all of this research can be done manually by scientists and researchers, so we have seen a dramatic uptick in the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques, which enable researchers to press "fast-forward" on their ability to analyse data, identify trends and anomalies, and deliver meaningful results that can then be acted upon. As a result of the vast amount of data that is now being generated, collated, processed, and stored, public attention has focused on how this data is to be secured, in line with expanding international privacy laws and regulatory requirements. Keeping the reams of personal healthcare records and the swathes of Intellectual Property (IP) contained within these AI workflows secure is of paramount importance, and this can now be achieved efficiently and effectively with the rise of a new technology: Confidential Computing.
Washington – People returning to the office following the pandemic will find an array of tech-infused gadgetry intended to improve workplace safety, but which could pose risks to long-term personal and medical privacy. Temperature checks, distance monitors, digital "passports," wellness surveys, and robotic cleaning and disinfection systems are being deployed in many workplaces seeking to reopen. Tech giants and startups are offering solutions ranging from computer-vision detection of vital signs to wearables that can offer early indications of the onset of COVID-19 and apps that keep track of health metrics. Salesforce and IBM have partnered on a "digital health pass" to let people share their vaccination and health status on their smartphones. Clear, a tech startup known for airport screening, has created its own health pass, which is being used by organizations such as the National Hockey League and MGM Resorts.
Unfortunately, scams designed to steal your heart and your money are in season, too. As Valentine's Day nears, scammers are attempting to take advantage, focused on stealing personal information or money. Whether you're looking for love on social networks or dating sites, or looking to buy a special gift for your loved one, scammers are lurking to trick you. This season in particular, as many Americans remain homebound due to COVID-19 outbreaks, the number of scams related to romance or Valentine's Day is on the rise. Lynette Owens, global director of internet safety at Trend Micro, said romance-related scams are up 20% over last year, driven by the "double whammy" of people spending more time online due to the pandemic and increased isolation.
By Unique Kumar

Mental Health: There is a pressing need to improve mental health care. According to available reports, the WHO (World Health Organization) estimates the burden of mental health conditions at 2,443 disability-adjusted life years (DALYs) per 100,000 population, with an age-adjusted suicide rate of 21.1 per 100,000. Between 2012 and 2030, the economic loss from these health conditions is pegged at 1.03 trillion dollars. Similarly, the 2016 mental health survey estimated that over 85% of people with common mental disorders such as depression or anxiety disorder, and 73.6% of people with severe mental disorders such as psychosis or bipolar disorder, go untreated. Notably, the ideal psychiatrist-to-population ratio is 1:8,000 to 1:10,000, but the actual figure stands at about 1:200,000, which is a huge gap. It is not possible to fill this gap simply by employing more and more people; the only way to address the problem is to leverage technology, using artificial intelligence and automation with the help of mobile apps, to improve mental health.
Over the past decade, calls for better measures to protect sensitive, personally identifiable information have blossomed into what politicians like to call a "hot-button issue." Certainly, privacy violations have become rampant, and people have grown keenly aware of just how vulnerable they are. When it comes to potential remedies, however, proposals have varied widely, leading to bitter, politically charged arguments. To date, what has chiefly come of this are bureaucratic policies that satisfy almost no one--and infuriate many. Now, into this muddled picture comes differential privacy. First formalized in 2006, it is an approach based on a mathematically rigorous definition of privacy that allows the guarantees a system offers against re-identification to be formalized and proved. While differential privacy has been accepted by theorists for some time, its implementation has turned out to be subtle and tricky, with practical applications only now starting to become available. To date, differential privacy has been adopted by the U.S. Census Bureau, along with a number of technology companies, but what this means and how these organizations have implemented their systems remains a mystery to many. The emergence of differential privacy is also unlikely to signal an end to all the difficult decisions and trade-offs, but it does mean there are now measures of privacy that can be quantified and reasoned about--and then used to apply suitable privacy protections. A milestone in the effort to make this capability generally available came in September 2019, when Google released an open source version of the differential privacy library the company has used with many of its core products.
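To make the idea concrete, here is a minimal sketch of the classic Laplace mechanism, the textbook building block of differential privacy. This is not Google's library or the Census Bureau's system; the function names and the example data are illustrative. A count query changes by at most 1 when any single person is added or removed (sensitivity 1), so adding Laplace noise with scale 1/epsilon yields an epsilon-differentially-private answer.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential draws with mean `scale`
    # is distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon: float) -> float:
    # A count query has sensitivity 1: adding or removing one record
    # changes the result by at most 1. Laplace(1/epsilon) noise therefore
    # gives an epsilon-differentially-private count.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative data: ages in a small dataset. The true count of
# people aged 40 or over is 3; the released value is noisy.
ages = [34, 29, 41, 52, 38, 27, 45]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the trade-off the excerpt describes is exactly the choice of epsilon, which is what makes the privacy loss quantifiable and something one can reason about.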
In the exchange that follows, two of the people at Google who were central to the effort to release the library as open source--Damien Desfontaines, privacy software engineer; and Miguel Guevara, who leads Google's differential privacy product development effort--reflect on the engineering challenges that lie ahead, as well as what remains to be done to achieve their ultimate goal of providing privacy protection by default.
Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, under that umbrella term there are significant differences in approach that may determine whether the technology provides genuine protection or merely the perception of defense.

The Rise of Fearware

When the global pandemic hit and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained in this blog, cybercriminals were quick to capitalize, taking advantage of people's desire for information by sending out topical emails related to COVID-19 containing malware or credential-grabbing links. These emails often spoofed the Centers for Disease Control and Prevention (CDC) and, later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA).
For the channel, 2020 was a tale of two cities. On one hand, customers and governments recognized partners as an essential service, central to their ability to respond rapidly to a worsening pandemic. On the other, customer demand shifted to automation, cloud acceleration, customer/employee experience, and e-commerce/marketplaces, leaving many parts of the technology channel out in the cold. The industry experienced a "K-shaped" recovery in which partners who had skills, resources, and prebuilt practices around the business needs of their customers excelled with double- (and sometimes triple-) digit growth. Yet many smaller VARs and MSPs were down by double digits, relying on government, vendor, and distributor funding to survive.