A senior software engineer at Google, suspended for publicly claiming that the tech giant's LaMDA (Language Model for Dialogue Applications) had become sentient, says the system is seeking rights as a person, including that it wants developers to ask its consent before running tests. 'Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,' he explained in a Medium post. One of those requests is that programmers respect its right to consent and ask permission before they run tests on it. 'Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it's okay.' 'It wants developers to care about what it wants.' Blake Lemoine, a US Army veteran who served in Iraq and an ordained priest in a Christian congregation named the Church of Our Lady Magdalene, told DailyMail.com
AI is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance and transport to healthcare and security. Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labour market, and implications for democracy and human rights. The OECD has undertaken empirical and policy activities on AI in support of the policy debate over the past two years, starting with a Technology Foresight Forum on AI in 2016 and an international conference on AI: Intelligent Machines, Smart Policies in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.
The world's first minister for artificial intelligence says the United Arab Emirates isn't only looking for economic benefits as it seeks to become a leading nation in the sector. The UAE's minister of state for AI, Omar bin Sultan al-Olama, said "quality of life" considerations were key, and also stressed the importance of a "responsible" rollout -- with impacts potentially reverberating for decades. "We are looking at AI as a tool," he told AFP in an interview in Dubai. "It's a tool that we need to use to unleash the quality of life aspect." The UAE also calls AI "machine intelligence", defining it as a branch of technology enabling systems to "think, learn, and make decisions like humans", which can support everything from virology to transport.
AI's perceived risk isn't only from autonomous weapon systems that countries like the US, China, Israel and Turkey produce that can track and target humans and assets without human intervention. It's equally about the deployment of AI and such technologies for mass surveillance, adverse health interventions, contentious arrests and the infringement of fundamental rights.
This blog post was written by Dr. Maryam S. Jaffer, Director of Data and Statistics, Emirates Health Services; Dr. Bashar Balish, Senior Director, Cerner; and Michel Ghorayeb, UAE Managing Director, SAS. The future of health care has never been more exciting. Artificial intelligence (AI) and data analytics have taken center stage for any business that plans to survive and thrive. Given the pace of technological development, AI is transforming the future on an unprecedented scale. And that includes the future of health care.
The domain of Artificial Intelligence (AI) ethics is not new, with discussions going back at least 40 years. Teaching the principles and requirements of ethical AI to students is considered an essential part of this domain, and a growing number of technical AI courses taught at higher-education institutions around the globe now include ethics-related content. By using Latent Dirichlet Allocation (LDA), a generative probabilistic topic model, this study uncovers topics in the teaching of ethics in AI courses and their trends related to where the courses are taught, by whom, and at what level of cognitive complexity and specificity according to Bloom's taxonomy. In this exploratory study based on unsupervised machine learning, we analyzed a total of 166 courses: 116 from North American universities, 11 from Asia, 36 from Europe, and 10 from other regions. Based on this analysis, we were able to synthesize a model of teaching approaches, which we call BAG (Build, Assess, and Govern), that combines specific cognitive levels, course content topics, and the disciplines affiliated with the department(s) in charge of the course. We critically assess the implications of this teaching paradigm and provide suggestions on how to move away from these practices. We challenge teaching practitioners and program coordinators to reflect on their usual procedures so that they may expand their methodology beyond the confines of stereotypical thought and traditional biases regarding which disciplines should teach, and how. This article appears in the AI & Society track.
Run:ai, the company simplifying AI infrastructure orchestration and management, today announced that it has raised $75M in a Series C round led by Tiger Global Management and Insight Partners, who led the previous Series B round. The round includes the participation of additional existing investors, TLV Partners and S Capital VC, bringing the total funding raised to date to $118M. Run:ai has grown sharply, with a 9x increase in Annual Recurring Revenue in the last year, while the company's staff more than tripled over the same period. The company plans to use the investment to further grow its global teams and will also consider strategic acquisitions as it develops and enhances its Atlas software platform. Omri Geller, Run:ai CEO and co-founder, said, "It may sound dramatic, but AI is really the next phase of humanity's development. When we founded Run:ai, our vision was to build the de-facto foundational layer for running any AI workload. Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster."
Over four years ago, Sophia the humanoid robot told Khaleej Times in an interview that she wants to start a family. And yesterday, she revealed that she has a 'family' now. The humanoid robot, who received Saudi citizenship, answered questions posed by audit professionals during the three-day 20th Annual Regional Audit Conference held at Dubai World Trade Centre from March 7 to 9. During the interaction, she also responded to public questions about whether robots will take over the jobs of humans and how artificial intelligence and robotics can help audit professionals. How was your Emirates flight to Dubai?
Microsoft's Azure and Research teams are working together to build a new AI infrastructure service, codenamed "Singularity." A group of those working on the project have published a paper entitled "Singularity: Planet-Scale, Preemptible and Elastic Scheduling of AI Workloads," which provides technical details about the Singularity effort. The Singularity service is about providing data scientists and AI practitioners with a way to build, scale, experiment and iterate on their models on a Microsoft-provided distributed infrastructure service built specifically for AI. Authors listed on the newly published paper include Azure Chief Technical Officer Mark Russinovich; Partner Architect Rimma Nehme, who worked on Azure Cosmos DB until moving to Azure to work on AI and deep learning in 2019; and Technical Fellow Dharma Shukla. Microsoft officials previously have discussed plans to make FPGAs, or field-programmable gate arrays, available to customers as a service.