What is Artificial Intelligence and How Does it Work?

#artificialintelligence

Artificial Intelligence (AI) is a vast branch of computer science that deals with the development of smart machines capable of executing tasks that usually require human intelligence. AI is an interdisciplinary science with many approaches, and advances in machine learning and deep learning are creating a paradigm shift in nearly every sector, from education to the software industry. How does artificial intelligence work? Can machines think? Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"


How Artificial Intelligence is Changing Content Marketing

#artificialintelligence

In more than one way, Artificial Intelligence, or AI, has already made its presence felt in digital marketing. As a segment of the digital marketing plan, content marketing is also benefiting from the influence of artificial intelligence (AI) technology. So far, the impact has been limited to just a few aspects, like content marketing strategy and content recommendations based on age, demographic location, personal preferences, etc. Many more applications are expected within a few years as the collaboration between AI-powered content marketing tools and content marketers grows. Today, we're going to take a forward leap to see how artificial intelligence is transforming content marketing as well.


The future of AI depends on 9 companies. If they fail, we're doomed.

#artificialintelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. If artificial intelligence ever destroys humanity, it probably won't be through killer robots and the singularity--it will be through a thousand paper cuts. In the shadow of the immense benefits of advances in technology, the dark effects of AI algorithms are slowly creeping into different aspects of our lives, sowing division, unintentionally marginalizing groups of people, stealing our attention, and widening the gap between the wealthy and the poor. While we're already seeing and discussing many of the negative aspects of AI, not enough is being done to address them. And the reason is that we're looking in the wrong place, as futurist Amy Webb discusses in her book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Many are quick to blame large tech companies for the problems caused by artificial intelligence.


Collaborating with AI to create Bach-like compositions in AWS DeepComposer

#artificialintelligence

AWS DeepComposer provides a creative and hands-on experience for learning generative AI and machine learning (ML). We recently launched the Edit melody feature, which allows you to add, remove, or edit specific notes, giving you full control of the pitch, length, and timing for each note. In this post, you can learn to use the Edit melody feature to collaborate with the autoregressive convolutional neural network (AR-CNN) algorithm and create interesting Bach-style compositions. Through human-AI collaboration, we can surpass what humans and AI systems can create independently. For example, you can seek inspiration from AI to create art or music outside your own area of expertise, or offload the more routine tasks, like creating variations on a melody, and focus on the more interesting and creative tasks.
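Per-note editing of the kind described here can be pictured as operations on a list of note records. The sketch below is purely illustrative -- the Note class and the helper functions are hypothetical and not the AWS DeepComposer API:

```python
from dataclasses import dataclass

# Hypothetical model of a melody as a list of notes, each with a pitch,
# start time, and length -- the three attributes the Edit melody feature
# lets you control. Not the actual AWS DeepComposer data structures.
@dataclass
class Note:
    pitch: int      # MIDI pitch number
    start: float    # onset time in beats
    length: float   # duration in beats

def transpose(melody, semitones):
    """Return a copy of the melody with every pitch shifted."""
    return [Note(n.pitch + semitones, n.start, n.length) for n in melody]

def edit_note(melody, index, pitch=None, start=None, length=None):
    """Replace one note's attributes, leaving the rest unchanged."""
    edited = list(melody)
    n = edited[index]
    edited[index] = Note(pitch if pitch is not None else n.pitch,
                         start if start is not None else n.start,
                         length if length is not None else n.length)
    return edited

melody = [Note(60, 0.0, 1.0), Note(62, 1.0, 1.0), Note(64, 2.0, 2.0)]
melody = edit_note(melody, 1, pitch=65)        # raise the second note
print([n.pitch for n in transpose(melody, 12)])  # shift up an octave
```

Keeping the edit helpers pure (returning copies) mirrors the workflow in the post: you can generate a variation, tweak individual notes, and still keep the original melody around for comparison.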


'Reasonable Explainability' for Regulating AI in Health

#artificialintelligence

Emerging technology is slowly finding a place in developing countries for its potential to plug gaps in ailing public service systems, such as healthcare. At the same time, cases of bias and discrimination that overlap with the complexity of algorithms have created a trust problem with technology. Promoting transparency in algorithmic decision-making through explainability can be pivotal in addressing the lack of trust in medical artificial intelligence (AI), but this comes with challenges for providers and regulators. In generating explainability, AI providers need to prioritise their accountability to patient safety, given that even the most accurate algorithms are still opaque. There are also additional costs involved. Regulators looking to facilitate the entry of innovation while prioritising patient safety will need to ascertain a reasonable level of explainability, considering risk factors and the context of use, and adopt adaptive and experimental means of regulation. Artificial intelligence (AI) models across the globe have come under the scanner over ethical issues; for instance, Amazon's hiring algorithm reportedly discriminates against women,[1] and there is evidence of racial bias in the facial recognition software used by law enforcement in the United States (US).[2] While biased AI has various implications, concerns around the use of AI in ethically sensitive industries, such as healthcare, justifiably require closer examination. Medical AI models have become more commonplace in clinical and healthcare settings due to their higher accuracy and lower turnaround time and cost in comparison to non-AI techniques.


The Future of AI Part 1

#artificialintelligence

It was reported that Venture Capital investment into AI-related startups increased significantly in 2018, jumping by 72% compared to 2017, even as the number of startups funded fell to 466 from 533 in 2017. PwC's MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32%, from 23%. There will be increasing international rivalry over the global leadership of AI. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world". Billionaire Mark Cuban was reported in CNBC as stating that "the world's first trillionaire would be an AI entrepreneur".


The AI workplace and ArcGIS Deep Learning Workflow

#artificialintelligence

Welcome to part 4 of my AI and GeoAI Series, which will cover the more technical aspects of GeoAI and ArcGIS. Previously, part 1 of this series covered the Future Impacts of AI on Mapping and Modernization, which introduced the concept of GeoAI and why you should care about having an AI as a future coworker. Part 2 of the series, GIS, Artificial Intelligence, and Automation in the Workplace, covered specific geospatial professions that will be drastically affected by the introduction of GeoAI technology in the workplace. Part 3, Teaming with the Machine - AI in the Workplace, addressed the emergence of the new geospatial working relationship between information, humans, and artificial intelligence needed to succeed in an organization's mission. For part 4, we will address three specific GeoAI areas in ArcGIS that will help you with your journey to developing your Deep Learning workflows.


About predicting the future - how AI will transform our lives

#artificialintelligence

While some forecasts will probably get at least something right, others will likely be useful only as demonstrations of how hard it is to predict, and many don't make much sense. What we would like to achieve is for you to be able to look at these and other forecasts, and be able to critically evaluate them. The political scientist Philip E. Tetlock, author of Superforecasting: The Art and Science of Prediction, classifies people into two categories: those who have one big idea ("hedgehogs"), and those who have many small ideas ("foxes"). Tetlock carried out an experiment between 1984 and 2003 to study factors that could help us identify which predictions are likely to be accurate and which are not. One of the significant findings was that foxes tend to be clearly better at prediction than hedgehogs, especially when it comes to long-term forecasting.


Future of AI Part 2

#artificialintelligence

This part of the series looks at the future of AI with much of the focus on the period after 2025. The leading AI researcher Geoff Hinton stated that it is very hard to predict what advances AI will bring beyond five years, noting that exponential progress makes the uncertainty too great. This article will therefore consider both the opportunities and the challenges that we will face along the way across different sectors of the economy. It is not intended to be exhaustive.

AI deals with the area of developing computing systems which are capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment. Some of the classical approaches to AI include (non-exhaustive list) search algorithms such as Breadth-First Search, Depth-First Search, Iterative Deepening Search, and the A* algorithm, and the field of Logic, including Predicate Calculus and Propositional Calculus. Local Search approaches were also developed, for example Simulated Annealing, Hill Climbing (see also Greedy Search), Beam Search, and Genetic Algorithms (see below).

Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal. The term was introduced by Arthur Samuel in 1959. A non-exhaustive list of example techniques includes Linear Regression, Logistic Regression, K-Means, k-Nearest Neighbours (kNN), Naive Bayes, Support Vector Machines (SVM), Decision Trees, Random Forests, XGBoost, Light Gradient Boosting Machine (LightGBM), and CatBoost. Deep Learning refers to the field of Neural Networks with several hidden layers; such a neural network is often referred to as a deep neural network. Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion.
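To make the classical search family concrete, here is a minimal Breadth-First Search sketch; the graph and goal are illustrative, not from the article:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: explore nodes level by level and return the
    shortest path (by edge count) from start to goal, or None if there
    is no route."""
    queue = deque([[start]])   # frontier of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Small example graph as an adjacency list
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # -> ['A', 'B', 'D', 'E']
```

Depth-First Search is the same skeleton with a stack instead of a queue, and Iterative Deepening repeats depth-limited DFS with growing limits; A* additionally orders the frontier by path cost plus a heuristic.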


The tensions between explainable AI and good public policy

#artificialintelligence

There are two reasons why. First, with machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better. Unfortunately, for complex situations with many interacting influences--which is true of many key areas of policy--machine learning will often be more useful the more of a black box it is. As a result, holding such systems accountable will almost always be a matter of post hoc monitoring and evaluation.
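The performance-explainability trade-off can be seen even on toy data. In this sketch (synthetic data and both models are illustrative, not from the article), a one-line threshold rule is perfectly explainable but misses an interaction effect, while a k-nearest-neighbour model captures it at the cost of having no compact explanation beyond "similar past cases voted this way":

```python
import random

random.seed(0)

# Synthetic data: the label depends on an interaction between two
# features (an XOR-style pattern), which no single-feature rule can fit.
data = [((x, y), int((x > 0.5) != (y > 0.5)))
        for x, y in ((random.random(), random.random()) for _ in range(400))]
train, test = data[:300], data[300:]

def rule_model(point):
    # Interpretable model: one human-readable threshold on feature 0.
    return int(point[0] > 0.5)

def knn_model(point, k=5):
    # More complex model: majority vote of the k nearest training points.
    # Accurate here, but its "reasoning" is the whole dataset, not a rule.
    nearest = sorted(train, key=lambda d: (d[0][0] - point[0]) ** 2
                                        + (d[0][1] - point[1]) ** 2)[:k]
    return int(sum(label for _, label in nearest) > k / 2)

def accuracy(model):
    return sum(model(p) == label for p, label in test) / len(test)

print(f"threshold rule accuracy: {accuracy(rule_model):.2f}")  # near chance
print(f"5-NN accuracy:           {accuracy(knn_model):.2f}")   # much higher
```

The rule can be stated in a sentence; the k-NN model's behaviour can only be audited after the fact, which is exactly the post hoc monitoring the excerpt describes.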