When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit 20 times higher than his wife's, despite the fact that she had a higher credit score, it may have been the first major headline about algorithmic bias to reach your everyday life. It was not the first such story -- there have been major reports about potential algorithmic bias in child care and insurance -- and it won't be the last. Heinemeier Hansson, chief technology officer of project management software firm Basecamp, was not the only tech figure speaking out about algorithmic bias and the Apple Card. In fact, Apple's own co-founder Steve Wozniak reported a similar experience. Presidential candidate Elizabeth Warren even got in on the action, bashing Apple and Goldman Sachs, and regulators said they are launching a probe.
These days, the words artificial intelligence (AI) and China are almost synonymous. In fact, any media or business circle discussions regarding AI would seem incomplete without a mention of China, and it's no secret that the Chinese government and Chinese tech companies continue to invest heavily in building AI-related capabilities as part of their goal to make China a global AI leader. China is undeniably well on its way to becoming a world leader of the AI age. However, in the midst of their excitement over China, many global leaders are underestimating the potential for AI adoption that the rest of the Asia Pacific has to offer. In my recent report, I noted that almost every country and every industry in the Asia Pacific region is interested in becoming AI-first.
China is selling its most advanced "fully autonomous" military drones amid fears that they could lead to a bloodbath in the Middle East. The Asian superpower is reportedly selling AI-enhanced combat drones to the region, with potentially disastrous consequences. Prof Toby Walsh, of the University of NSW in Australia, said: "They would be impossible to defend yourself against. Once the shooting starts, every human on the battlefield will be dead." US Defence Secretary Mark Esper has said that China is selling drones programmed to decide for themselves who lives or dies. He told a conference on artificial intelligence: "As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East, as it prepares to export its next-generation stealth UAVs when those come online."
AntWorks, a global provider of artificial intelligence and intelligent automation solutions powered by fractal science, today announced an exclusive partnership with the SEED Group, a member of The Private Office of Sheikh Saeed bin Ahmed Al Maktoum. The partnership will support the expansion of intelligent automation within the Middle East (ME), a region where AI is expected to become a US$320 billion industry by 2030. The SEED Group establishes groundbreaking companies with a strong presence in the Gulf Cooperation Council (GCC) and will work with AntWorks to offer ethical AI solutions for GCC companies with ANTstein™ SQUARE, the world's first and only Integrated Automation Platform (IAP) powered by fractal science. AntWorks seeks to replicate its success across Asia, the UK and US, where the organisation has automated entire business processes end-to-end for many clients across the BFSI (Banking, Financial Services and Insurance), transportation, logistics and public sectors, among others. With successful adoption of AntWorks' IAP solution, businesses stand to save millions and realise increased performance and efficiency by automating and processing business data, including unstructured data, which is expected to make up 80% of the world's data by 2025.
Dan Jacobson, a research and development staff member in the Biosciences Division at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL), has a few ideas. For the past five years, Jacobson and his team have studied plants to understand the genetic variables and patterns that make them adaptable to changing environments and climates. As a computational biologist, Jacobson uses some of the world's most powerful supercomputers for his work -- including the recently decommissioned Cray XK7 Titan and the world's most powerful and smartest supercomputer for open science, the IBM AC922 Summit supercomputer, both located at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at ORNL. Last year, Jacobson and his team won an Association for Computing Machinery Gordon Bell Prize after using a special computing technique known as "mixed precision" on Summit to become the first group to reach exascale speed -- approximately a quintillion calculations per second. Jacobson's team is currently working on numerous projects that form an integrated roadmap for the future of AI in plant breeding and bioenergy.
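The mixed-precision idea mentioned above can be sketched in a few lines. This is an illustrative toy example only -- not the ORNL team's actual code -- showing the core trade-off: data stored and processed in low precision is faster and smaller, but naive low-precision accumulation loses accuracy, so results are accumulated in higher precision.

```python
import numpy as np

# Illustrative sketch of mixed precision (not the ORNL team's code):
# keep the data in low precision, but accumulate in high precision.
values = np.full(10_000, 0.1, dtype=np.float16)  # low-precision data

# Naive: accumulate in float16. Once the running sum grows large,
# each small addend rounds away and the sum stalls far below the truth.
naive = np.float16(0.0)
for v in values:
    naive = np.float16(naive + v)

# Mixed: the same float16 inputs, but a float64 accumulator.
mixed = 0.0
for v in values:
    mixed += float(v)

exact = 10_000 * float(np.float16(0.1))  # true sum of the fp16 data
print(naive, mixed, exact)
```

Run it and the float16 accumulator stalls well short of the true total, while the mixed-precision sum matches it almost exactly. Hardware like Summit's GPU tensor cores exploits the same principle, multiplying in half precision while accumulating in higher precision.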
In the 1980s, Bloom County was arguably the most popular comic strip on the planet. In one sequence, the penguin Opus is running for elected office, and the local computer nerd, Oliver Wendell Jones, uses AI to analyze polling data and determines that the ideal image voters want in a candidate is "chocolate éclair." The root problem here is AI explainability. Had Opus demanded an explanation of why the AI suggested chocolate éclair, the error would have been discovered. But Opus didn't, for he trusted the system, and the system was wrong.
The issue of ethical development and deployment of applications using artificial intelligence (AI) technologies is rife with nuance and complexity. Because humans are diverse -- different genders, races, values and cultural norms -- AI algorithms and automated processes won't work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition. There are many AI success stories with positive outcomes in fields from healthcare to education to transportation. But there have also been unexpected problems with several AI applications, including facial recognition, and unintended bias in numerous others.
Mr Musk said: "I think there are a lot, a tremendous amount of investment going on in AI. Where there is a lack of investment is in AI safety, and there should be, in my view, a government agency that oversees anything related to AI to confirm that it is – does not represent a public safety risk. Just as there is a regulatory authority for, like the Food and Drug Administration, there's NHTSA for automotive safety, there's the FAA for aircraft safety. We've generally come to the conclusion that it is important to have a government referee or a referee that is serving the public interest in ensuring that things are safe when there's a potential danger to the public."
As fears about AI's disruptive potential have grown, AI ethics has come to the fore in recent years. Concerns around privacy, transparency and the ability of algorithms to warp social and political discourse in unexpected ways have resulted in a flurry of pronouncements from companies, governments, and even supranational organizations on how to conduct ethical AI development. The majority have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy and paste won't work.