The fear of robots coming for your job is one of the many challenges confronting 21st-century workers, but the machines aren't ready to take over every industry just yet. Bridgewater Associates, the massive hedge fund founded by legendary investor Ray Dalio, just released a report on the changing relationship between labour and capital in the US. One of the big factors the Bridgewater authors highlighted was the ongoing rise in automation across industries, which they noted could support corporate profits in the years to come as more efficient robots and software potentially replace slower and more error-prone human labour. Bridgewater cited a 2016 report from consulting firm McKinsey & Company that looked at which industries in the US were most susceptible to automation. The McKinsey report used data from the US Department of Labor to estimate how much time workers in various industry sectors spent doing different types of tasks, and which of those tasks could, theoretically, be automated using present technology.
Just a week after it was announced, Google's new AI ethics board is already in trouble. The board, founded to guide "responsible development of AI" at Google, was to have eight members and meet four times over the course of 2019 to consider concerns about Google's AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. Of the eight people listed in Google's initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won't serve, and two others are the subject of petitions calling for their removal -- Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed the petition calling for James's removal.
Beacon is unlike any other member of staff at Staffordshire University. It is available 24/7 to answer students' questions, and deals with a number of queries every day – mostly the same ones over and over again – but always stays incredibly patient. That patience is perhaps what gives it away: Beacon is an artificial intelligence (AI) education tool, and the first digital assistant of its kind to be operating at a UK university. Staffordshire developed Beacon with cloud service provider ANS and launched it in January this year. The chatbot, available as a mobile app, enhances the student experience by answering timetable questions and suggesting societies to join.
AI-powered loan and credit approval processes have been marred by unforeseen bias. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners. Unfortunately, there's no industry-standard, best-practices handbook on AI ethics for companies to follow--at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks. A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.
Artificial intelligence (AI) could displace millions of jobs in the future, damaging growth in developing regions such as Africa, says Ian Goldin, professor of globalisation and development at Oxford University. I have spent my career in international development, and in recent years have established a research group at Oxford University looking at the impact of disruptive technologies on developing economies. Perhaps the most important question we have looked at is whether AI will pose a threat - or provide new opportunities - for developing regions such as Africa. Optimists say that such places could use rapidly advancing AI systems to boost productivity and leapfrog ahead. But I am becoming increasingly concerned that AI will, in fact, block the traditional growth path by replacing low-wage jobs with robots.
Last Wednesday, US lawmakers introduced a new bill that represents one of the country's first major efforts to regulate AI. There are likely to be more to come. It hints at a dramatic shift in Washington's stance toward one of this century's most powerful technologies. Only a few years ago, policymakers had little inclination to regulate AI. Now, as the consequences of not doing so grow increasingly tangible, a small contingent in Congress is advancing a broader strategy to rein the technology in.
This week, the European Union published a set of ethical guidelines detailing how businesses and governments can achieve trustworthy artificial intelligence (AI)--that is, AI that is lawful, ethical, and socially and technologically robust. While these guidelines are not laws, they set out a framework for lawmakers and companies to achieve trustworthy AI. "The EU's new Ethics guidelines for trustworthy AI are a considered and constructive step toward addressing the impact of trustworthy AI on humankind, and toward laying the groundwork for necessary further discussion between key stakeholders in the private, public and governmental sectors," Juan Miguel de Joya, a consultant at the International Telecommunication Union and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic. The EU's new guidelines should start conversations among businesses worldwide that may not have the resources to independently assess the impact of the technology, de Joya said. "Perhaps most fundamentally and significantly, release of the new guidelines is an opportunity for government, business, computing professionals and other stakeholders--particularly in the United States--to capture and channel the momentum of these discussions into real understanding of AI's potential and pitfalls," de Joya said. These guidelines are "a welcome, solid and significant step forward," Lorraine Kisselburgh, a visiting fellow in the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University, and a member of the Association for Computing Machinery's US Technology Policy Committee, told TechRepublic.
Eric Horvitz is a technical fellow and director at Microsoft Research Labs. A recipient of the Feigenbaum Prize and the Allen Newell Award for contributions to artificial intelligence (AI), he also serves on the US President's Council of Advisors on Science and Technology and on advisory boards of the Defense Advanced Research Projects Agency and the Allen Institute for Artificial Intelligence. He is also part of the standing committee of Stanford University's One Hundred Year Study on Artificial Intelligence. Horvitz, who comes at least once a year to the country to interact with the India labs team, spoke about his work at Microsoft Research. He also shared his thoughts on the benefits and fears of AI, and attempts to address bias in algorithms.
When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that let web users compose music along with a virtual Johann Sebastian Bach by entering notes into a program that generates Bach-like harmonies to match them. Run by Google, the app drew great praise for being groundbreaking and fun to play with. It also attracted criticism, and raised concerns about AI's dangers. My study of how emerging technologies affect people's lives has taught me that the problems go beyond the admittedly large concern about whether algorithms can really create music or art in general.
High-end fashion chain LK Bennett has been bought out of administration, saving 325 jobs. However, 15 of the retailer's stores are not included in the deal and will close, leading to the loss of 110 jobs. LK Bennett has been bought by Byland UK which was set up by Rebecca Feng, who runs the company's Chinese franchises. The sale includes the company's headquarters, 21 stores and all of its concessions. The amount paid has not been disclosed.