Trade Marks in the Metaverse

#artificialintelligence

The lines between physical and virtual reality are getting set for an overhaul as the Metaverse continues its gradual entry into our everyday lives. Whilst the Metaverse is still only in its infancy, leading developers in the space have already made it possible for users to work, learn, socialise, play and do business in this exciting, albeit slightly mystifying, new digital world. Whether or not you have plans to blend your business activities with NFTs and/or the Metaverse, there is value in being aware of the threats and opportunities for brand promotion and protection that come with this evolving digital arena. You may have heard that Nike Inc recently applied to register a number of trade marks globally, including claims in class 9 for "downloadable virtual goods" and in class 35 for "retail store services featuring virtual goods". Since then, Nike Inc has become far less lonely in its exploration of NFTs (non-fungible tokens) and the opportunities that exist in the Metaverse, with many other well-known companies having shown interest in joining this new digital space.


Activation Functions (updated) – The Code-It List

#artificialintelligence

This entry has been updated for TensorFlow v2.10.0 and PyTorch 1.12.1. An activation function is a function applied to a neuron in a neural network to help it learn complex patterns in data, deciding what should be transmitted to the next neuron in the network. A perceptron is a neural network unit that feeds the data to be learned into a neuron and processes it according to its activation function. The perceptron is a simple algorithm that, given an input vector x of m values (x_1, x_2, ..., x_m), outputs a 1 or a 0 (step function), and its function is defined as follows: f(x) = 1 if ωx + b > 0, and 0 otherwise. Here, ω is a vector of weights, ωx is the dot product, and b is the bias. The equation ωx + b = 0 defines a hyperplane; if x lies on the positive side of it, the output is 1; otherwise, it is 0.
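As a minimal sketch of the idea (using NumPy rather than the TensorFlow/PyTorch examples from the full post; the AND-gate weights below are purely illustrative), a step-activation perceptron can be written as:

import numpy as np

def perceptron(x, w, b):
    # Step activation: output 1 if the weighted sum w.x + b is positive, else 0.
    return int(np.dot(w, x) + b > 0)

# Illustrative weights and bias that make the perceptron compute logical AND
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))  # only (1, 1) yields 1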


What is Semantic Role Labeling

#artificialintelligence

In natural language processing for machine learning models, semantic role labeling is associated with the predicate, where the action of the sentence is depicted. SRL, or semantic role labeling, does the crucial task of determining how different constituents of a sentence are related to the primary predicate. Semantic role labeling, also referred to as thematic role labeling, systematically interprets the syntactic structure of a sentence, ideally with the help of a parse tree. Semantic role labeling is appropriate for NLP tasks that involve extracting the multiple meanings expressed in a language, and it depends largely on the structure or scheme of the parse trees applied. The method is also used in image captioning for deep learning and computer vision tasks; here, SRL is utilized to extract the relation between an image and its background.
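To make the predicate-argument idea concrete, here is a schematic example of the kind of structure an SRL system produces, using PropBank-style role labels (not tied to any particular library; the sentence and data layout are invented for illustration):

# Hypothetical SRL output for the sentence "Mary sold the book to John".
# The predicate anchors a set of labeled argument spans.
srl_annotation = {
    "predicate": "sold",
    "arguments": {
        "ARG0": "Mary",      # agent: the seller
        "ARG1": "the book",  # theme: the thing being sold
        "ARG2": "John",      # recipient: the buyer
    },
}

for role, span in srl_annotation["arguments"].items():
    print(role, "->", span)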


6 Reasons Why Artificial Intelligence Can't Replace Humans at Work

#artificialintelligence

When faced with the rapid growth of AI technology in today's labor market, employers probably think of automated processes that make work easier, faster, and more efficient. Employees, on the other hand, probably fear losing their jobs and being replaced by a machine. While artificial intelligence is designed to replace manual labor with a quicker, more effective way of working, it cannot override the need for human input at work. In this article, you will see why humans are still immensely valuable in the workplace and cannot be fully replaced by artificial intelligence. Emotional intelligence is one distinguishing factor that makes humans forever relevant in the workplace.


The AI Revolution In Banking

#artificialintelligence

We have identified four steps the most successful banks in AI deployment are following. First and foremost, they are seeking to develop AI everywhere -- not just in client-facing applications but across their enterprises. Second, they are emulating the very tech companies they are trying to defeat: Instead of thinking like banks deploying technology, they are thinking like tech companies getting into banking. Third, they are committing full bore to improving and modernizing their data practices -- not only by moving to the cloud but also by bolstering governance and data stewardship. Finally, the trailblazers are creating AI factories -- internal centers of excellence brimming with talented workers -- that they then can deploy across the enterprise.


Is artificial intelligence the pill that health care needs?

#artificialintelligence

Scheduling nurses in the emergency department of St. Michael's Hospital used to be a painful four-hour-a-day job. Now it's done in 15 minutes, thanks to an automated program built by data scientists at Unity Health, where a team of more than 25 employees is harnessing artificial intelligence and machine learning to improve care. Unity Health includes St. Mike's, St. Joseph's Health Centre and Providence Healthcare. The team has also created an early warning system that alerts doctors and nurses if a patient is at risk of going to the ICU or dying. These programs are just two of more than 40 that have gone live since 2019, when the analytics department was founded, thanks largely to Dr. Tim Rutledge, Unity's CEO, who believes the technology can dramatically change health care.


How To Normalize Satellite Images For Deep Learning

#artificialintelligence

Normalization of input data for deep learning (DL) applications is an important step that impacts network convergence and final results. In the case of long-tailed satellite signals, proper normalization can be quite a challenge. We were tired of trying to understand why the models we trained on one location didn't always transfer to another location as well as we thought they should, so we set out to explore which normalization schemes are best suited for the task. Deep-learning-based automatic field delineation from satellite images is becoming an important tool in large-scale evaluation and monitoring of land cover and crop production. One of the steps in the workflow is normalization of the band values, which impacts network performance and the quality of the results. The aim of this study is to investigate and quantify the effects of several normalization methods on the performance of our existing field delineation algorithm.
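As an illustrative sketch of the trade-off being studied (not the authors' exact pipeline; the 1st/99th percentile cut-offs and the synthetic band are assumptions for demonstration), compare naive min-max scaling with percentile clipping on a long-tailed band:

import numpy as np

def minmax_normalize(band):
    # Scale to [0, 1] using the full value range; a single extreme
    # pixel compresses all other values toward 0.
    return (band - band.min()) / (band.max() - band.min())

def percentile_normalize(band, lo=1, hi=99):
    # Clip to the [lo, hi] percentiles before scaling to [0, 1],
    # which is more robust to the long tail of extreme values.
    p_lo, p_hi = np.percentile(band, [lo, hi])
    clipped = np.clip(band, p_lo, p_hi)
    return (clipped - p_lo) / (p_hi - p_lo)

# Synthetic long-tailed band: mostly moderate values plus a few extremes
rng = np.random.default_rng(0)
band = np.concatenate([rng.gamma(2.0, 500.0, 10_000), [20_000.0, 40_000.0]])
print(minmax_normalize(band).mean())      # squashed toward 0 by the outliers
print(percentile_normalize(band).mean())  # spreads values across [0, 1]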


A safe space to learn about sexual, reproductive health

#artificialintelligence

An innovative chatbot designed for sharing critical information about sexual and reproductive health (SRH) with young people in India is demonstrating how artificial intelligence (AI) applications can engage vulnerable and hard-to-reach population segments. Working with the Population Foundation of India (PFI), Helen Wang, an associate professor of communication in the College of Arts and Sciences, examined the user-centered design and engagement of SnehAI, the first Hinglish (Hindi and English) chatbot purposefully developed for social and behavioral change. "Many AI technologies today are motivated by profit, but we must also be aware that AI can be leveraged in ways that facilitate social and behavior change," says Wang, who specializes in entertainment-education and storytelling as instruments for health promotion. "SnehAI is a powerful testimonial of the vital potential that lies in AI for good." The findings from Wang's instrumental case study appear in the Journal of Medical Internet Research.


Darth Vader Now Voiced by Artificial Intelligence

#artificialintelligence

Darth Vader, the villain of the Star Wars franchise, is now voiced by artificial intelligence following the retirement of actor James Earl Jones. Jones, who is 91 years old, has voiced the helmeted menace since 1977's Star Wars: Episode IV – A New Hope (originally titled Star Wars). His voice as Darth Vader was last heard in the 2019 film The Rise of Skywalker. The space opera will now use an AI replication of Jones's voice, created by Ukrainian start-up Respeecher. The voice was first heard in the show Obi-Wan Kenobi.


Even smartest AI can't match human eye

#artificialintelligence

A common family of artificial intelligence models known as deep convolutional neural networks (DCNNs) does not see objects the way humans do, and that could be dangerous in real-world AI applications. That is the conclusion of Professor James Elder, co-author of a recently published York University study, which finds that AI models lack "configural shape perception", a standard feature of human vision for recognising shapes. Published in the Cell Press journal iScience, the paper, "Deep learning models fail to capture the configural nature of human shape perception", is a collaboration between Elder, who holds the York Research Chair in Human and Computer Vision and co-directs York's Centre for AI & Society, and Nicholas Baker, an assistant professor of psychology at Loyola College in Chicago and a former postdoctoral fellow at York. The study employed novel visual stimuli called "Frankensteins" to explore how the human brain and DCNNs process holistic, configural object properties. "Frankensteins are simply objects that have been taken apart and put back together the wrong way around," says Elder. "As a result, they have all the right local features, but in the wrong places."