Artificial Intelligence Creates Better Art Than You (Sometimes)

#artificialintelligence

In late October 2018, a distinctly odd painting appeared at the fine art auction house Christie's. From a distance, it looks like a 19th-century portrait of an austere gentleman dressed in black. Set in a gilt frame, the portly gentleman appears middle-aged; his white collar suggests he is a man of the church. The painting seems unassuming, the sort of thing expected at an auction house that sells billions of dollars' worth of paintings each year. Upon closer inspection, however, things get a bit odd.


ULTRA-SWARM: Creating digital twins of UAV swarms for firefighting and aid delivery

Robohub

For my PhD, I'm studying how global problems such as wildfires and aid delivery in remote areas can benefit from innovative technologies such as UAV (unmanned aerial vehicle) swarms. Every year, vast areas of forest are destroyed by wildfires. Wildfires occur more frequently as climate change induces extreme weather conditions, and as a result they are often larger and more intense. Over the past five years, countries around the globe have witnessed unprecedented wildfire impacts.


Interlinking Artificial Intelligence with Human Brain through Cognition

#artificialintelligence

For a very long time, humans have been trying to design a machine with complex capabilities like those of the human brain. When artificial intelligence first came into existence, people thought that building a model that imitates humans would be easy, but it took scientists more than five decades to turn the concept into reality. Today, we are pursuing machines that carry the cognitive capabilities of the human brain. Why is designing a mechanism similar to the human brain so complex?


Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness

#artificialintelligence

Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration of potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. We believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform design, conduct, and reporting; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit.
The potential uses include improving diagnostic accuracy,1 more reliably predicting prognosis,2 targeting treatments,3 and increasing the operational efficiency of health systems.4 Examples of potentially disruptive technologies with early clinical promise include image-based diagnostic applications of ML/AI (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians5), and natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.2 Although we are only just …


How to sharpen machine learning with smarter management of edge cases

#artificialintelligence

Machine learning (ML) applications are transforming business strategy, popping up in every vertical and niche to convert huge datasets into valuable predictions that guide executives to make better business decisions, seize opportunities, and spot and mitigate risks. While ML models are rife with potential, it's quality data that allows them to become accurate and effective. Today's enterprises are handling huge floods of data, including unstructured data, all of which needs to be annotated before ML models can produce dependable predictions. Data processing is often under-scrutinised, but it's crucial for accurate and relevant forecasts. If data is mislabeled or annotated incorrectly, every prediction will be built on misconceptions, making it fundamentally untrustworthy.
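One common way to catch the mislabeling problem described above is to compare labels from multiple annotators and flag disagreements before training. A minimal sketch, assuming a hypothetical annotation log of (item_id, label) pairs:

```python
from collections import defaultdict

def find_label_conflicts(annotations):
    """Group labels by item and flag items whose annotators
    disagree -- candidates for re-review before training."""
    labels = defaultdict(set)
    for item_id, label in annotations:
        labels[item_id].add(label)
    return sorted(item for item, seen in labels.items() if len(seen) > 1)

# Hypothetical annotation log: (item_id, label)
log = [("img1", "cat"), ("img1", "cat"),
       ("img2", "dog"), ("img2", "cat"),
       ("img3", "dog")]
print(find_label_conflicts(log))  # ['img2']
```

Routing only the conflicting items to a human reviewer keeps the re-annotation cost proportional to the disagreement rate rather than the dataset size.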


Top Artificial Intelligence Jobs to Apply in April 2021

#artificialintelligence

Artificial intelligence is considered the technology of the future, and it is already making a significant impact on modern enterprises. AI has sparked several innovations and brought digital disruption to diverse industries. It has also changed the job market forever. As more and more companies leverage this technology, AI has become one of the most in-demand skills for landing a job in an organization, regardless of industry.


Artificial Intelligence In 2021 - The Developments So Far

#artificialintelligence

Artificial intelligence is one of the most disruptive technologies to make a massive impact on the modern world. It is a concept that continues to reach a wider audience through regular developments and research by the scientists, engineers, and entrepreneurs working to advance the field. Before the pandemic wreaked havoc in 2020, machine learning, a branch of artificial intelligence, was already causing disruptions across industries. But during the COVID-19 pandemic, it became evident that self-teaching algorithms and smart machines will play a big role in the ongoing fight against the viral outbreak and serve our society in the future too. Artificial intelligence remains a key trend in both our working and personal worlds.


Is AI there yet?

#artificialintelligence

It's a cold winter day in Detroit, but the sun is shining bright. Robert Williams decided to spend some quality time playing on his house's front lawn with his two daughters. Suddenly, police officers appeared from nowhere and brought a perfect family day to an abrupt halt. Robert was ripped from the arms of his crying daughters without an explanation, cold handcuffs gripping his wrists. The police took him away in no time. His family was left shaken, in disbelief at the scene that had unfolded in front of their eyes. What followed for Robert were 30 long hours in police custody.


World's most advanced AI system installed

#artificialintelligence

New Zealand's most powerful supercomputer for artificial intelligence applications has been installed at the University of Waikato as part of its commitment to positioning New Zealand as a world leader in AI research and development. The NVIDIA DGX A100 is the first computer of its kind in New Zealand and the world's most advanced system for powering universal AI workloads. The machine has been referred to as the Ferrari of computing because of how rapidly and efficiently it can process massive amounts of data, allowing students and researchers at the University to work at lightning-fast speeds, enabling machine learning and artificial intelligence that can tackle problems from addressing climate change to managing our biodiversity. Machine learning uses algorithms to explore huge data sets and create models that provide answers or outcomes mirroring human decision making. Models can be trained to recognise things like patterns, facial expressions, and spoken words – or they can find anomalies like credit card fraud.
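As a toy illustration of the anomaly-finding idea mentioned above: one of the simplest approaches flags values that sit far from the mean in standard-deviation terms. The transaction amounts and threshold here are invented for the sketch, not drawn from any real fraud system:

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold --
    a toy stand-in for the fraud-detection models described above."""
    mean = statistics.fmean(amounts)
    sd = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / sd > threshold]

# Hypothetical card transactions in dollars; one is wildly out of range
txns = [12.5, 9.9, 14.2, 11.0, 10.7, 13.1, 950.0, 12.0, 10.4, 11.8]
print(flag_anomalies(txns))  # [6]
```

Real fraud detection uses far richer features and learned models, but the underlying question is the same: which observations deviate most from the learned notion of "normal"?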


Deep Learning Tutorial for Beginners: A [Step-by-Step] Guide

#artificialintelligence

Deep learning is a subdivision of machine learning that imitates the working of a human brain with the help of artificial neural networks. It is useful in processing big data and can surface important patterns that provide valuable insight for decision making. The manual labeling of unsupervised data is time-consuming and expensive; deep learning helps overcome this with highly sophisticated algorithms that provide essential insights by analysing and aggregating the data. Deep learning leverages the different layers of neural networks that enable learning, unlearning, and relearning.
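The "different layers" idea above can be sketched as a forward pass through a small stack of fully connected layers, where each layer re-represents the data before the next one sees it. This is a minimal illustration with hypothetical, untrained random weights, not a production network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers; without it the stack
    # would collapse into a single linear transformation.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers,
    applying ReLU between all but the final linear layer."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b

# Hypothetical 2 -> 16 -> 1 network with random (untrained) weights
layers = [(rng.normal(size=(2, 16)), np.zeros(16)),
          (rng.normal(size=(16, 1)), np.zeros(1))]
out = forward(np.array([[0.0, 1.0]]), layers)
print(out.shape)  # (1, 1)
```

Training consists of adjusting each layer's weights and biases by backpropagating an error signal; the forward pass above is the part that stays the same at inference time.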