Raia Hadsell, a research scientist at Google DeepMind, believes "responsible AI is a job for all." That was her thesis during a talk today at the virtual Lesbians Who Tech Pride Summit, where she dove into the issues currently plaguing the field and the actions she feels are required to ensure AI is ethically developed and deployed. "AI is going to change our world in the years to come. But because it is such a powerful technology, we have to be aware of the inherent risks that will come with those benefits, especially those that can lead to bias, harm, or widening social inequity," she said.
OECD.AI is an inclusive hub for public policy on AI that helps countries encourage, nurture and monitor the development and use of trustworthy AI. From the measurement of AI trends and developments to the direction and impact of national and regional AI policies and initiatives, OECD.AI is a prime example of how to move the AI discussion from principles to practice. Its up-to-date repository of over 600 AI policy initiatives from 60 countries enables the comparison of key elements of national AI policies in an interactive manner. Its work and indicators have informed and enhanced national and international analyses such as the Pan-Canadian AI Strategy Impact Assessment, the German AI Observatory, the G20 background paper on Trustworthy AI in Health, multiple G20 reports, and the recent EC Proposal for AI Regulation. Armando Guio, CAF Consultant at the Presidency of the Republic of Colombia, believes that "the Observatory has rapidly become one of the most important sources of data and knowledge for AI governance."
Organizations around the globe are becoming more aware of the risks artificial intelligence (AI) may pose, including bias and potential job loss due to automation. At the same time, AI is providing many tangible benefits for organizations and society. For organizations, this creates a fine line to walk between the potential harm AI might cause and the costs of not adopting the technology. Three emerging practices can help organizations navigate the complex world of moral dilemmas created by autonomous and intelligent systems. AI risks continue to grow, but so does the number of public and private organizations that are releasing ethical principles to guide the development and use of AI.
Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of the ethics research was unclear. Font was the manager of Twitter's machine learning platforms teams -- part of Twitter Cortex, the company's central ML organization -- at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.
Machine learning is ubiquitous in our day-to-day lives: product recommendations on Amazon, targeted advertising, suggestions of what to watch, funny Instagram filters. If something goes wrong with these, it probably won't ruin your life -- maybe you won't get that perfect selfie, or maybe companies will have to spend more on advertising. But in higher-stakes settings, we need to be able to dissect a model -- to understand and explain it -- before it goes anywhere near a production system.
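One common way to start "dissecting" a model is permutation importance: shuffle one feature at a time and see how much the model's error grows. The sketch below is a minimal, hypothetical illustration -- the toy data, the stand-in `model` function, and the helper names are all invented for this example, not taken from any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0 and not at all on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

# A hypothetical "trained model" -- here just the true linear rule,
# standing in for whatever model you actually want to explain.
def model(X):
    return 3.0 * X[:, 0]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10):
    """Importance of feature j = error increase when column j is shuffled."""
    baseline = mse(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(len(Xp))
            Xp[:, j] = Xp[perm, j]  # break feature j's link to y
            scores.append(mse(y, model(Xp)) - baseline)
        importances.append(float(np.mean(scores)))
    return importances

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 has large importance; feature 1 is ~0
```

For a real model, scikit-learn ships a production-grade version of this idea (`sklearn.inspection.permutation_importance`); the point here is only that the technique is simple enough to reason about before trusting a model in production.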
Artificial intelligence (AI) is now being adopted for automation in various sectors -- from diagnosing medical conditions to regulating traffic and helping drive vehicles. It is also used for everything from running customer chatbots to spotting signs of fraud in financial transactions, and is even being adopted to read people's emotions and to "speak" to them as voice assistants. Many experts, however, are not sure whether AI should be so widely adopted without adequate safeguards, and have expressed their concerns to Pew Research.
For many business leaders, the sudden transition to remote working that was forced upon companies last year as the COVID-19 pandemic shut down office spaces still brings back memories of long hours of work and a few logistical ordeals – but according to some experts from analyst Gartner, the real challenge is yet to come. As restrictions slowly lift and employers start thinking of bringing their staff back into the workplace, some forward-thinking planning will be required to ensure a smooth transition from working fully remotely in the context of a global health crisis, to a hybrid mode of work of which the details are yet to be defined. This is because, for a significant proportion of employees, a return to the office for five days a week is unlikely to be an appealing option.
The development and adoption of advanced technologies including smart automation and artificial intelligence has the potential not only to raise productivity and GDP growth but also to improve well-being more broadly, including through healthier life and longevity and more leisure. Alongside such benefits, these technologies can also bring disruption and potentially destabilizing effects on society as they are adopted. Tech for Good: Smoothing disruption, improving well-being (PDF–1MB) examines the factors that can help society achieve such benefits and makes a first attempt to calculate the impact of technology adoption on welfare growth beyond GDP. Our modeling suggests that good outcomes for the economy overall and for individual well-being come about when technology adoption is focused on innovation-led growth rather than purely on labor reduction and cost savings through automation. This needs to be accompanied by proactive transition management that increases labor market fluidity and equips workers with new skills. Technology for centuries has both excited the human imagination and prompted fears about its effects. Today's technology cycle is no different, provoking a broad spectrum of hopes and fears.
The National Institute of Standards and Technology has issued a proposal for identifying and managing bias in artificial intelligence. "The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems, and there is increasing evidence that the general public has concerns about the risks of AI to society," the proposal says. "Improving trust in AI systems can be advanced by putting mechanisms in place to reduce harmful bias in both deployed and in-production technology. Such mechanisms will require features such as a common vocabulary, clear and specific principles and governance approaches, and strategies for assurance." NIST is inviting public comments on the proposal.
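The NIST proposal does not prescribe specific metrics, but one narrow, measurable form of bias that practitioners often check first is differing selection rates across groups -- the basis of the "four-fifths rule" heuristic used in employment contexts. The sketch below is a hypothetical illustration of that check; the function names and data are invented for this example and are not part of NIST's proposal.

```python
# Hypothetical check: compare positive-decision rates across groups.
# A min/max ratio below 0.8 is the traditional four-fifths-rule red flag.

def selection_rates(decisions, groups):
    """Fraction of positive (1) decisions per group."""
    rates = {}
    for g in set(groups):
        picked = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: group "a" is selected 4/5 of the time, group "b" only 1/5.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups)
print(round(ratio, 2))  # 0.25 -- well below the 0.8 threshold
```

A single ratio like this is only a starting point -- it says nothing about why the rates differ -- which is exactly why the proposal emphasizes shared vocabulary, governance approaches, and assurance strategies rather than any one metric.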