The challenges of making the technology industry a more welcoming place for women are numerous, especially in the booming field of artificial intelligence. To get a sense of just how monumental a task the tech community faces, look no further than the marquee gathering for AI's top scientists. Preparations for this year's event drew controversy not only over the shortage of female speakers and study authors; the biggest debate was over the conference's name. The annual Conference on Neural Information Processing Systems, long abbreviated NIPS, had become a punchline symbolizing just how bad the gender imbalance in artificial intelligence is.
BEIJING – China's policy of "reform and opening up" has brought monumental changes to the world's most populous country since its launch 40 years ago under leader Deng Xiaoping. Next week, China will mark the anniversary of the shift, agreed to at a Communist Party gathering on Dec. 18, 1978. Ou Banlan, 52, is a retired garment factory worker in Shenzhen, a former fishing village that was the testing ground for the reforms and morphed into a major manufacturing and high-tech hub. "My life is much better than that of my parents' generation," said the diminutive woman with short black hair, standing in front of the factory where she once toiled. She was born and raised in a village outside Shenzhen.
Several contemporary artists tackling the social implications of technology have been banned by censors from China's upcoming Guangzhou Triennial. One of them was Heather Dewey-Hagborg, whose works often critique biotechnology, notably including portraits derived from the DNA of Chelsea Manning. She woke up on December 8th to an email from one of the show's three curators, Angelique Spaninks, explaining that her piece T3511 was being pulled at the last minute. The triennial, titled "As We May Think, Feedforward," explores the links between humanity and technology and opens on December 21st. Spaninks told Dewey-Hagborg that her work had been censored by the government and, while given no official justification, speculated that authorities were sensitive to bioethics issues.
Allocating repetitive and low-grade work to robots will allow solicitors to focus on more complicated tasks, the profession's regulator claimed yesterday. In a bid to calm nerves over the growing use of artificial intelligence in the legal profession, the Solicitors Regulation Authority said that robots would help lawyers deal with increased competition in the legal services market from non-traditional providers. However, it warned that there were serious ethical issues around the use of artificial intelligence by lawyers. The authority, which regulates 140,000 practising solicitors in England and Wales, said that law firms must "be able to explain the assumptions and reasoning behind some automated decisions". That would not necessarily be easy, it said.
You're probably used to the presence of facial recognition cameras at airports and other transport hubs, but what about at concerts? That's the step Taylor Swift's team took at her May 18th show at the Rose Bowl, in a bid to identify her stalkers. According to Rolling Stone, the camera was hidden inside a display kiosk at the event, and sent images of anyone who stopped to look at the display to a "command post" in Nashville, where they were cross-referenced with photos of the star's known stalkers. As the target of numerous death and rape threats, Swift arguably has a valid motivation for leveraging such technology. However, it's unclear who owns the photos of her concertgoers, or how long they will remain on file.
Last year, at the University of St. Thomas in Minneapolis, Minnesota, a graduate student set up an artificial intelligence system based on Microsoft's facial-recognition tools, which he used to monitor students in class and predict their emotional state. The idea, the university explained at a national education conference in February 2018, would be to allow teachers to gather real-time information on how their lessons were being received, in order to maximize "student engagement." Although the program was only tested as a short experiment, and was never adopted by the university, such a system has the potential to be rife with flaws and primed for abuse, warn experts at the research group AI Now. The St. Thomas proof-of-concept system is just one of a number of data points that AI Now, a group composed of tech employees from companies including Microsoft and Google and affiliated with New York University, says exemplify the need for stricter regulation of artificial intelligence. The group's report, published Thursday, underscores the inherent dangers in using AI to do things like amplify surveillance in fields including finance and policing, and argues that accountability and oversight are necessities where this type of nascent technology is concerned.
The Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, alongside IAG and the University of Sydney, has created a new artificial intelligence (AI)-focused institute, aimed at exploring the ethics of the emerging technology. The Gradient Institute, Data61 explained, is an independent non-profit charged with researching the ethics of AI, as well as developing ethical AI-based systems, with the stated goal of creating a "world where all systems behave ethically". "By embedding ethics into AI, we believe we will be able to choose ways to avoid the mistakes of the past by creating better outcomes through ethically-aware machine learning," Institute CEO Bill Simpson-Young said. "For example, in recruitment, when automated systems use historical data to guide decision making they can bias against subgroups who have historically been underrepresented in certain occupations. "By embedding ethics in the creation of AI we can mitigate these biases which are evident today in industries like retail, telecommunications, and financial services." In addition to research, the new institute is also expected to explore the ethics of AI through practice, policy advocacy, public awareness, and training, specifically where the ethical development and use of AI is concerned. The institute will use research findings to create open source ethical AI tools that can be adopted and adapted by business and government, Data61 said in a statement Thursday. "As AI becomes more widely adopted, it's critical to ensure technologies are developed with ethical considerations in mind," Data61 CEO Adrian Turner added. "We need to get this right as a country, to reap the benefits of AI, from productivity gains to new-to-the-world value."
Speaking with ZDNet during Data61's annual conference this year in Brisbane, Hilary Cinis, acting director of Engineering and Design at Data61, said ethics is all about the reduction of harm. One way around ingrained ethical bias, she said, was to ensure that the teams building the algorithms are diverse. She said a "cultural rethink" around development needs to happen. Similarly, Salesforce user research architect Kathy Baxter said at the Human Rights & Technology Conference in Sydney earlier this year that one main problem is that bias can be difficult to see in data. Equally complex, she said, is the question of what it means to be fair. "If you follow the headlines, you'll see that AI is sexist, racist, and full of systematic biases," she said. "AI is based on probability and statistics," she continued. "If an AI is using any of these factors -- race, religion, gender, age, sexual orientation -- it is going to disenfranchise a segment of the population unfairly, and even if you are not explicitly using these factors in the algorithm, there are proxies for them that you may not even be aware of."
It is a relatively mild scene in a documentary about the sexual predator who helped transform American politics. Back when he ran Fox News, Roger Ailes bought up his hometown paper, and in Divide and Conquer, now in theaters, the Putnam County News and Recorder's former copy editor describes what happened after she eventually quit the job. In the days that followed, people she had messaged privately about Ailes on Facebook began finding out that he was looking into them. One even received a phone call: "This is Roger Ailes, and I hear you've been making threats about me." Ailes then quoted their private Facebook conversation, verbatim.
Two mental health chatbot apps have required updates after struggling to handle reports of child sexual abuse. In tests, neither Wysa nor Woebot told an apparent victim to seek emergency help. The BBC also found the apps had problems dealing with eating disorders and drug use. The Children's Commissioner for England said the flaws meant the chatbots were not currently "fit for purpose" for use by youngsters. "They should be able to recognise and flag for human intervention a clear breach of law or safeguarding of children," said Anne Longfield.
In this Oct. 31, 2018, photo, a screen displays a computer-generated image of a Watrix employee walking during a demonstration of the firm's gait recognition software at the company's offices in Beijing. A Chinese technology startup hopes to begin selling software that recognizes people by their body shape and how they walk, enabling identification when faces are hidden from cameras. Already used by police on the streets of Beijing and Shanghai, "gait recognition" is part of a major push to develop artificial-intelligence and data-driven surveillance across China, raising concern about how far the technology will go. As many businesses prepare for the coming year, one of their key priorities is determining the best use cases and strategic implementation of artificial intelligence as it applies to the company's core competencies. This is a challenging area on a variety of levels.