Opinion: Regulations and common sense must pace machine learning

#artificialintelligence

The first Industrial Revolution used steam and water to mechanize production. The second, the Technological Revolution, offered standardization and industrialization. The third capitalized on electronics and information technology to automate production. Now a fourth Industrial Revolution, our modern Digital Age, is building on the third; expanding exponentially, it is disrupting and transforming our lives, while evolving too fast for governance, ethics and management to keep pace. Most high school graduates have been exposed to information technology through personal computers, word processing software and their phones. Nonetheless, the digital divide separates the tech savvy from the tech illiterate, driven by disparities in access to technology for pre-K to 12 students based on where they live and socioeconomic realities.


Cybersecurity in Healthcare: How to Prevent Cybercrime

#artificialintelligence

Because COVID-19 made it difficult for consumers to venture out and run their usual errands, financial institutions (FIs) needed to find other ways to provide their services. The only way for them to really keep up with the speedy digitization was through the implementation of AI systems. To further discuss all things AI, PaymentsJournal sat down with Sudhir Jha, Mastercard SVP and head of Brighterion, and Tim Sloane, VP of Payments Innovation at Mercator Advisory Group. Jha believes that there were two fundamentally big changes that occurred in banking during the pandemic: the environment began constantly shifting, and person-to-person interactions were abruptly limited. "Every week, every month, there were different ways that we were trying to react to the pandemic," explained Jha.


Digital Identities Unlock Digital Transformation

#artificialintelligence

Digital identities are a key component in the development of digital economies, the digital transformation of government, and the delivery of digital operating technologies including the Internet of Things (IoT) and industrial automation. By identifying and authenticating people, software, hardware components, and digital services, new capabilities can be introduced rapidly and securely and integrated into existing ecosystems, with digital identities serving as the key point of integration.
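
The excerpt stays at a high level, so here is a minimal sketch of what "identifying and authenticating a hardware component" can look like in practice: issuing and verifying a signed identity token for a hypothetical IoT device. The device ID, secret, and claim fields are invented for illustration; real deployments would more likely rely on certificates or a standard token format such as JWT rather than a hand-rolled scheme.

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

# Hypothetical shared secret provisioned to the device at manufacture time.
DEVICE_SECRET = b"example-provisioning-secret"

def issue_token(device_id: str, secret: bytes, ttl: int = 3600) -> str:
    """Issue a signed identity token for a device (illustrative only)."""
    payload = json.dumps({"device_id": device_id, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload).decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(token: str, secret: bytes) -> Optional[dict]:
    """Return the claims if the signature is valid and unexpired, else None."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

token = issue_token("sensor-042", DEVICE_SECRET)
print(verify_token(token, DEVICE_SECRET))
```

The same pattern extends to people, services, and software components: each holds a verifiable credential that a relying party checks before integrating it into the ecosystem.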


AI consumes a lot of energy. Hackers could make it consume more.

#artificialintelligence

The news: A new type of attack could increase the energy consumption of AI systems. In the same way a denial-of-service attack on the internet seeks to clog up a network and make it unusable, the new attack forces a deep neural network to tie up more computational resources than necessary and slow down its "thinking" process. The target: In recent years, growing concern over the costly energy consumption of large AI models has led researchers to design more efficient neural networks. One category, known as input-adaptive multi-exit architectures, works by splitting up tasks according to how hard they are to solve. It then spends the minimum amount of computational resources needed to solve each. Say you have a picture of a lion looking straight at the camera with perfect lighting and a picture of a lion crouching in a complex landscape, partly hidden from view. The network can classify the first image at an early exit with minimal computation, while the second must pass through far more layers before it can be identified.
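
As a rough illustration of the input-adaptive multi-exit idea (not the specific architecture studied in the attack), the sketch below adds an early classifier head to a toy PyTorch network and skips the deeper layers whenever that head is already confident; the layer sizes and the 0.9 confidence threshold are arbitrary choices for the example. The attack described in the article works by crafting inputs that keep every exit's confidence low, forcing the network to run to full depth on each one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy input-adaptive multi-exit classifier (illustrative only)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # early exit for "easy" inputs
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)    # final exit for "hard" inputs

    def forward(self, x, threshold: float = 0.9):
        h1 = self.block1(x)
        logits1 = self.exit1(h1.flatten(1))
        conf1 = F.softmax(logits1, dim=1).max(dim=1).values
        # Spend no further compute if the early exit is already confident enough.
        if bool((conf1 >= threshold).all()):
            return logits1, "exit1"
        h2 = self.block2(h1)
        return self.exit2(h2.flatten(1)), "exit2"

model = MultiExitNet().eval()
with torch.no_grad():
    logits, used_exit = model(torch.randn(1, 3, 32, 32))
print(used_exit)
```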


Budget 2021: Digital economy strategy gets nearly AU$1 billion

ZDNet

The federal government has delivered a new digital economy strategy, which it has described as an investment into the settings, infrastructure, and incentives to grow Australia's digital economy. In the strategy on a page [PDF], the government declares the digital economy is key to securing Australia's economic future and recovery from COVID-19. "The Digital Economy Strategy targets investments that will underpin improvements in jobs, productivity and make Australia's economy more resilient," it says. Despite many arguing the nation is already behind its peers, the government believes Australia's place in the world will be defined by how it adapts to digital technologies and modernises its economy. "The next 10 years will determine whether we lead or fall behind," it claims.


How AI will Revolutionize Cybersecurity

#artificialintelligence

Cybersecurity experts say that machine learning and artificial intelligence have affected cybersecurity both positively and negatively. Although relatively new, AI security tools are often used to distinguish "good" from "bad" by comparing the behavior of entities across the environment with that of peers operating in similar environments. AI algorithms are trained on data so they can respond to different situations, and artificial intelligence is helping cybersecurity accelerate its technological progress. Security experts, including CISOs, are being offered products purporting to use artificial intelligence to dramatically improve the accuracy and speed of both threat detection and response.
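
Here is a minimal sketch of the behavioral-baselining idea the excerpt alludes to, assuming scikit-learn is available: fit an unsupervised model on the normal activity of peer entities and flag an entity whose behavior falls outside that baseline. The feature names and numbers are invented for illustration; commercial tools draw on far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-entity behavior features: [logins/day, MB transferred, distinct hosts contacted]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[20, 150, 5], scale=[5, 40, 2], size=(500, 3))  # "normal" peer activity
suspect = np.array([[22, 2400, 60]])                                       # unusual transfer pattern

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
print(model.predict(suspect))  # -1 flags the entity as anomalous relative to its peers
```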


Mitigating Emerging Cyber Security Threats Using Artificial Intelligence

#artificialintelligence

Last week, I taught a cybersecurity course at the University of Oxford. I felt this was significant because the problem domain of AI and cybersecurity is typically framed as an anomaly-detection or signature-detection problem. Also, most of the time cybersecurity professionals rely on specific tools such as Splunk or Darktrace (which we cover in our course), but these threats and their mitigations are very new, so they need to be explored from first principles and current research. This lets us cover newer threats such as adversarial attacks (making modifications to input data to force machine-learning algorithms to behave in ways they're not supposed to).
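
As a concrete, deliberately simplified example of such an adversarial attack, the sketch below applies the Fast Gradient Sign Method to a stand-in PyTorch classifier: it nudges each input value slightly in the direction that increases the model's loss, which is often enough to flip the prediction while leaving the input looking essentially unchanged. The toy model and epsilon value are illustrative only, not taken from the course material.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb each input element along the gradient sign of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Tiny stand-in classifier; a real attack would target the deployed model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_perturb(model, x, label)
print((x_adv - x).abs().max())  # perturbation magnitude bounded by eps
```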


Pinaki Laskar on LinkedIn: #Cybersecurity #technology #autonomousvehicles

#artificialintelligence

Applied to vehicles, cybersecurity takes on an important role: systems and components that govern safety must be protected from harmful attacks, unauthorised access, damage or anything else that might interfere with safety functions. Increasingly, today's vehicles feature driver assistance #technology, such as forward collision warning, automatic emergency braking and vehicle safety communications. In the future, the deployment of driver assistance technologies may result in avoiding crashes altogether, particularly crashes attributed to human drivers' choices. A multi-layered approach to vehicle cybersecurity reduces the possibility of a successful vehicle cyber-attack and mitigates the potential consequences of a successful intrusion.


Artificial Intelligence ban slammed for failing to address "vast abuse potential" - Malwarebytes Labs

#artificialintelligence

A written proposal to ban several uses of artificial intelligence (AI) and to place new oversight on other "high-risk" AI applications--published by the European Commission this week--met fierce opposition from several digital rights advocates in Europe. Portrayed as a missed opportunity by privacy experts, the EU Commission's proposal bans four broad applications of AI, but it includes several loopholes that could lead to abuse, and it fails to include a mechanism to add other AI applications to the ban list. It deems certain types of AI applications as "high-risk"--meaning their developers will need to abide by certain restrictions--but some of those same applications were specifically called out by many digital rights groups earlier this year as "incompatible with a democratic society." It creates new government authorities, but the responsibilities of those authorities may overlap with separate authorities devoted to overall data protection. Most upsetting to digital rights experts, it appears, is that the 107-page document (not including the necessary annexes) offers only glancing restrictions on biometric surveillance, like facial recognition software.


US army develops new tool to detect deepfakes threatening national security

The Independent - Tech

US Army scientists have developed a novel tool that can help soldiers detect deepfakes that pose a threat to national security. The advance could lead to mobile software that warns people when fake videos are played on their phones. Deepfakes are hyper-realistic video content made using artificial intelligence tools that falsely depicts individuals saying or doing something, explained Suya You and Shuowen (Sean) Hu from the Army Research Laboratory in the US. The growing number of these fake videos in circulation can be harmful to society – from the creation of non-consensual explicit content to doctored media used by foreign adversaries in disinformation campaigns. According to the scientists, while there were close to 8,000 of these deepfake video clips online at the beginning of 2019, in just about nine months this number nearly doubled to about 15,000.