negative impact


Fig. 1: Summary of positive and negative impact of AI on the various SDGs.

#artificialintelligence

The numbers inside the colored squares identify the individual SDGs (see Supplementary Data 1). The percentages at the top indicate the proportion of all targets potentially affected by AI, while those in the inner circle of the figure correspond to proportions within each SDG. Results for the three main groups, namely Society, Economy, and Environment, are shown in the outer circle of the figure. The results obtained when the type of evidence is taken into account are shown by the inner shaded area and the values in brackets.


Deep Learning Training with Simulated Approximate Multipliers

arXiv.org Machine Learning

This paper shows, through simulation, how approximate multipliers can be used to improve the training performance of convolutional neural networks (CNNs). Approximate multipliers offer significantly better speed, power, and area than exact multipliers, but they introduce an inaccuracy characterized by the Mean Relative Error (MRE). To assess their applicability to CNN training, the paper simulates the impact of approximate-multiplier error on the training process and demonstrates that training with approximate multipliers can substantially improve speed, power, and area at the cost of a small loss in achieved accuracy. The paper additionally proposes a hybrid training method that mitigates this accuracy loss: training starts with approximate multipliers and switches to exact multipliers for the last few epochs. In this way, the speed, power, and area benefits of approximate multipliers are retained for most of the training stage, while the use of exact multipliers in the final epochs diminishes the negative impact on accuracy.
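The hybrid schedule is easy to prototype in software. The sketch below is a rough illustration rather than the authors' implementation: it models an approximate multiplier by perturbing exact products with multiplicative noise bounded by a target MRE, and switches the noise off for the final epochs. The function names (`approx_matmul`, `run_hybrid_training`, `train_one_epoch`), the 3% MRE, and the epoch counts are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def approx_matmul(a, b, mre=0.03, rng=None):
    # Crude model of an approximate multiplier: perturb the exact product
    # with uniform multiplicative noise bounded by the target Mean Relative
    # Error (MRE). Real approximate-multiplier designs have structured,
    # input-dependent error; this is only a simulation stand-in.
    rng = np.random.default_rng() if rng is None else rng
    exact = a @ b
    return exact * (1.0 + rng.uniform(-mre, mre, size=exact.shape))

def run_hybrid_training(train_one_epoch, total_epochs=30, exact_epochs=3, mre=0.03):
    # Hybrid schedule: simulated approximate multiplication for most of
    # training, exact multiplication for the last `exact_epochs` epochs.
    for epoch in range(total_epochs):
        use_approx = epoch < total_epochs - exact_epochs
        matmul = (lambda a, b: approx_matmul(a, b, mre)) if use_approx else (lambda a, b: a @ b)
        train_one_epoch(epoch, matmul=matmul)  # user-supplied training step
```

Here `train_one_epoch` stands in for whatever per-epoch training routine is used; the only requirement is that it performs its matrix products through the injected `matmul` callback so the error model can be toggled.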


Why Don't We Trust Machines when We Obviously Should?

#artificialintelligence

This is what makes them dangerous. Previously, I talked about how AI could change the human-machine relationship. Who should make the decisions in an autonomous car? Should humans always be able to overrule robot decisions? What if you only have a split second to react?


AI Ethics for Systemic Issues: A Structural Approach

arXiv.org Artificial Intelligence

The debate on AI ethics largely focuses on technical improvements and stronger regulation to prevent accidents or misuse of AI, with solutions relying on holding individual actors accountable for responsible AI development. While useful and necessary, we argue that this "agency" approach disregards more indirect and complex risks resulting from AI's interaction with the socio-economic and political context. This paper calls for a "structural" approach to assessing AI's effects in order to understand and prevent such systemic risks where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure 'AI for social good', agency-focused policies must be complemented by policies informed by a structural approach.


6 things to keep in mind when deploying artificial intelligence at scale

#artificialintelligence

Every business leader today understands the potential and promise of artificial intelligence (AI). Many of them -- especially those ahead of the curve in technology adoption -- are at a stage where they can leverage AI to pioneer exciting use cases. However, charging ahead with the technology has its challenges, including inviting scrutiny from regulators, causing concern among customers, and fuelling fear among employees. Facebook, for example, which is at the forefront of innovation with several cutting-edge AI use cases in its labs and on its platforms, seems to have gotten regulators (unnecessarily) concerned. As a result, CEO Mark Zuckerberg and COO Sheryl Sandberg are now looking for ways to assuage fears while charging ahead with implementing and scaling their AI projects.


How Tech Can Help Curb Emissions by Planting 500 Billion New Trees

#artificialintelligence

Trees are a low-tech, high-efficiency way to offset much of humankind's negative impact on the climate. Even better, we have plenty of room for a lot more of them. A new study by researchers at Switzerland's ETH Zürich, published in Science, details how Earth could support almost a billion additional hectares of trees without the new forests pushing into existing urban or agricultural areas. Once the trees grow to maturity, they could store more than 200 billion metric tons of carbon. Great news indeed, but it still leaves us with some huge unanswered questions.
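As a rough sanity check on those figures, dividing the storage estimate by the extra canopy area implies a little over 200 tonnes of carbon per hectare at maturity. The numbers below are assumptions read off the article's rounded figures (0.9 billion hectares standing in for "almost an additional billion"), not values from the study itself.

```python
# Back-of-envelope check of the article's rounded figures (assumed values).
total_carbon_t = 200e9   # "more than 200 billion metric tons" of carbon at maturity
extra_area_ha = 0.9e9    # "almost an additional billion hectares" of new canopy

carbon_per_hectare = total_carbon_t / extra_area_ha
print(f"~{carbon_per_hectare:.0f} t of carbon per hectare")  # ~222 t/ha
```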


The 7 Most Dangerous Technology Trends In 2020 Everyone Should Know About

#artificialintelligence

As we enter new frontiers with the latest technology trends and enjoy the many positive impacts and benefits they can have on the way we work, play, and live, we must always be mindful of, and prepare for, possible negative impacts and potential misuse of the technology. The British, Chinese, and United States armed forces are testing how interconnected, cooperative drones could be used in military operations. Inspired by swarms of insects working together, drone swarms could revolutionize future conflicts, whether by overwhelming enemy sensors with their numbers or by covering a large area effectively in search-and-rescue missions. The difference between swarms and the way the military uses drones today is that a swarm could organize itself based on the situation, through interactions among its members, to accomplish a goal. While this technology is still in the experimentation stage, a swarm smart enough to coordinate its own behavior is moving closer to reality.


New Research Shows How AI Will Impact the Workforce

#artificialintelligence

Across age groups, U.S. employees believe that paralegals (4%), insurance underwriters (5%), and pharmacists (7%) have the best chance of surviving automation. More part-time employees (25%) fear that AI will take their jobs within 10 years than full-time workers (18%), although there is no significant difference in attitudes about which specific jobs they think are likely to disappear. Employees at the largest companies (those with more than 20,000 staff) are slightly less afraid (17%) than the overall group (19%) about the effect of AI/bots on their jobs, possibly because they have already experienced its negative impact (10%) and see a more stable future.


What Are The Negative Impacts Of Artificial Intelligence (AI)?

#artificialintelligence

Artificial intelligence (AI) is doing a lot of good and will continue to provide many benefits for our modern world, but along with the good there will inevitably be negative consequences. The sooner we begin to contemplate what those might be, the better equipped we will be to mitigate and manage the dangers. As Stephen Hawking warned, "Success in creating effective AI could be the biggest event in the history of our civilisation. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it." The first step in preparing for the negative impacts of artificial intelligence is to consider what some of those negative impacts might be.