Artificial intelligence (AI) is no longer the stuff of science fiction; it exists all around us, automating simple tasks and dramatically improving our lives. But as AI and automation become increasingly capable, how will this alternative labor source affect your future workforce? In this article, we'll look at both optimistic and pessimistic views of the future of our jobs amid increasing AI capabilities. A two-year study from the McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world's current human labor. McKinsey suggests that, in terms of scale, the automation revolution could rival the move away from agricultural labor during the 1900s in the United States and Europe, and, more recently, the explosion of the Chinese labor economy.
Welcome to part 4 of my AI and GeoAI series, which covers the more technical aspects of GeoAI and ArcGIS. Part 1 of this series, Future Impacts of AI on Mapping and Modernization, introduced the concept of GeoAI and why you should care about having an AI as a future coworker. Part 2, GIS, Artificial Intelligence, and Automation in the Workplace, covered specific geospatial professions that will be drastically affected by the introduction of GeoAI technology in the workplace. Part 3, Teaming with the Machine - AI in the Workplace, addressed the emergence of the new geospatial working relationship among information, humans, and artificial intelligence needed to succeed in an organization's mission. In part 4, we will address three specific GeoAI areas in ArcGIS that will help you on your journey to developing your deep learning workflows.
Artificial intelligence (AI) in fashion is no longer a secret; it has been widely used, mostly to help businesses streamline processes and increase sales. But the skill sets of fashion designers and computer scientists are miles apart, so it is only recently that the creative applications of AI in this industry have been explored. "Initial uses of artificial intelligence have focused on quantifiable business needs, which has allowed for start-ups to offer a service to brands," Matthew Drinkwater, head of the Fashion Innovation Agency (FIA) at London College of Fashion (LCF), told Forbes. "Creativity is much more difficult to quantify and therefore more likely to follow behind." Seeing the opportunity for AI to play a bigger role in the creative process, LCF has launched an AI course aiming to develop creative fashion solutions and experiences that challenge current approaches to fashion design.
Artificial Intelligence is transforming the world at a rapid and accelerating pace, offering huge potential, but also posing social and economic challenges. Human beings are naturally fearful of machines – this is a constant. Technological advancements tend to outpace cultural shifts. It has taken the shock of a global pandemic to accelerate the uptake of many technologies that have been around for at least a decade. Unsurprisingly, much of the public discussion on AI has focused on recent controversies around facial recognition, automated decision-making and exam algorithms.
Artificial Intelligence (AI) is often regarded as “Great and Powerful;” it can add tremendous value by transforming business workflows with faster, smarter decisions. At the same time, AI can be mysterious and even scary. In order to build trust, AI needs to be transparent and explainable: “out from behind the curtain,” so to speak. As IBM’s recent study on AI Ethics found, corporate boards are looking to Data and Technology leaders to make that happen, and I couldn’t agree more. CDOs and CTOs can be instrumental in bringing forth both human value and human values in enterprise AI.

Putting the human first

To build trust in business AI, we must always put the value of the human first. This should happen at both the data-provider level and the decision-maker level. At the provider level, building trust starts with data governance to ensure that the data itself can be trusted. In our organization, embedded within this is the IBM…
While some forecasts will probably get at least something right, others will likely be useful only as demonstrations of how hard prediction is, and many don't make much sense. What we would like is for you to be able to look at these and other forecasts and critically evaluate them. The political scientist Philip E. Tetlock, author of Superforecasting: The Art and Science of Prediction, classifies forecasters into two categories: those who have one big idea ("hedgehogs") and those who have many small ideas ("foxes"). Tetlock carried out an experiment between 1984 and 2003 to study the factors that help identify which predictions are likely to be accurate and which are not. One of the significant findings was that foxes tend to be clearly better at prediction than hedgehogs, especially when it comes to long-term forecasting.
We invite applications for a tenure-track position in computer science, focused on explainable artificial intelligence and on collaboration with the social sciences. DKE research lines include human-centered aspects of recommender systems as well as a strong applied mathematics component, including dynamic game theory (differential, evolutionary, spatial, and stochastic game theory). The position is supported by the large and growing Explainable and Reliable Artificial Intelligence (ERAI) group at DKE. The group consists of associate and assistant professors, postdoctoral researchers, PhD candidates, and master's and bachelor's students. The ERAI group works together closely on a day-to-day basis to exchange knowledge, ideas, and research advancements.
In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
EU countries are already strong in digital industry and business-to-business applications. With high-quality digital infrastructure and a regulatory framework that protects privacy and freedom of speech, the EU could become a global leader in the data economy and its applications. AI could offer people improved health care, safer cars and other transport systems, and tailored, cheaper, and longer-lasting products and services. It can also facilitate access to information, education, and training; the need for distance learning grew during the Covid-19 pandemic. AI can also make workplaces safer, as robots can take on the dangerous parts of jobs, and it can open new job positions as AI-driven industries grow and change.
In today's world, AI systems are used to decide who gets hired, the quality of medical treatment we receive, and whether we become a suspect in a police investigation. While these tools show great promise, they can also harm vulnerable and marginalized people, and threaten civil rights. Unchecked, unregulated and, at times, unwanted, AI systems can amplify racism, sexism, ableism, and other forms of discrimination. The Algorithmic Justice League's mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policy makers, and industry practitioners to mitigate AI harms and biases. We're building a movement to shift the AI ecosystem towards equitable and accountable AI.