Launch a National AI Research and Innovation Programme to improve coordination and collaboration between the country's researchers, while "boosting business and public sector adoption of AI technologies and their ability to take them to market." Launch a joint Office for AI (OAI) and UK Research & Innovation (UKRI) programme aimed at continuing to develop AI in sectors based outside of London and the South East. This would focus on the commercialisation of ideas and could see, for example, the government directing investment, researchers and developers toward areas which currently make little use of AI technology but have potential, such as energy and farming. Publish a joint review with UKRI into the availability and capacity of computing power for UK researchers and organisations, including the physical hardware needed to drive a major rollout of AI technologies. The review will also consider wider needs for the commercialisation and deployment of AI, including its environmental impacts.
Johan den Haan is CTO of Mendix, a Siemens business and leader in enterprise low-code, a model-driven approach for building apps 10x faster. Is AI the transformative technology destined to work wonders for humanity, from driverless cars to a cure for cancer? Or is it a genie in a bottle that, once released, could be used to manipulate or even rule humankind? With the tremendous advances in computing power, software capabilities and the cloud over the last decade, progress on AI is no longer linear -- it's exponential. That means it's time to pay attention and make some fundamental decisions.
Before presenting employer certification-demand findings, a brief description of the methodology will help in interpreting the results. The certification-demand analysis was performed using the Economic Modeling Specialists International (EMSI) dataset. To populate the dataset, EMSI combs through 100,000 websites, effectively capturing job listings for more than 1.5 million companies. The same job listings regularly appear on multiple websites, so to reduce duplicates, EMSI uses a machine learning-based duplicate-detection process.
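EMSI's actual duplicate-detection model is proprietary and not described here; purely as an illustration of the general idea, a minimal near-duplicate check over job-listing text can be sketched with word-shingle Jaccard similarity (the function names, the example ads, and the 0.5 threshold are all hypothetical choices, not EMSI's):

```python
def shingles(text, k=3):
    """Break a listing into overlapping k-word tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dedupe(listings, threshold=0.5):
    """Keep a listing only if it is not too similar to one already kept."""
    kept = []
    for text in listings:
        s = shingles(text)
        if all(jaccard(s, shingles(other)) < threshold for other in kept):
            kept.append(text)
    return kept

ads = [
    "Senior data analyst needed for growing retail company in Chicago",
    "Senior data analyst needed for a growing retail company in Chicago",
    "Registered nurse position open at regional hospital",
]
print(dedupe(ads))  # the two near-identical analyst ads collapse to one
```

A production system like EMSI's would use learned features rather than a fixed threshold, but the core task, collapsing near-identical postings scraped from different sites, is the same.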
The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy [7], national security [1], and art [8, 14]. AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media [21]. For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement [11, 18, 34, 35, 36]; clone a speaker's voice with a few training samples and generate new natural-sounding audio of something the speaker never said [2]; synthesize visually indicated sound effects [28]; generate high-quality, relevant text based on an initial prompt [31]; produce photorealistic images of a variety of objects from text inputs [5, 17, 27]; and generate photorealistic videos of people expressing emotions from only a single image [3, 40]. The technologies for producing machine-generated, fake media online may outpace the ability to manually detect and respond to such media. We developed a neural network architecture that combines instance segmentation with image inpainting to automatically remove people and other objects from images [13, 39]. Figure 1 presents four examples of participant-submitted images and their transformations. The AI, which we call a "target object removal architecture," detects an object, removes it, and replaces its pixels with pixels that approximate what the background should look like without the object.
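The two-stage pipeline described above (segment the object, then inpaint its pixels from the surrounding background) can be sketched schematically. This is not the authors' code: the segmentation and inpainting stages here are toy placeholders (a brightness threshold and iterative neighbour averaging on a NumPy array) standing in for the learned models the paper actually uses, and all function names are hypothetical.

```python
import numpy as np

def segment_object(image):
    # Placeholder for an instance-segmentation model (e.g. Mask R-CNN):
    # here we simply mark the bright foreground pixels as the "object".
    return image > 200

def inpaint(image, mask, iterations=50):
    # Placeholder for a learned inpainting network: fill masked pixels
    # by repeatedly averaging their 4-connected neighbours so they
    # converge toward the surrounding background values.
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()  # rough initial fill
    for _ in range(iterations):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4
        out[mask] = avg[mask]
    return out

def remove_object(image):
    mask = segment_object(image)   # stage 1: detect the object
    return inpaint(image, mask)    # stage 2: approximate the background

image = np.full((8, 8), 100.0)
image[3:5, 3:5] = 255.0            # a bright "object" on a flat background
result = remove_object(image)
print(result[3:5, 3:5].round(1))   # object region now matches the background
```

The real architecture replaces both placeholders with trained networks, but the data flow (mask out, then hallucinate plausible background pixels) is the one the passage describes.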
East Japan Railway Co. has suspended the addition of released prisoners to its list of people tracked at its train stations by security cameras using facial recognition technology, after it started the practice this summer, it was learned Wednesday. JR East suspended the tracking after concerns about invasion of privacy were raised from outside the company, company officials said. According to JR East, the cameras were set up at its stations in July as part of strengthened security for the Tokyo Olympics and Paralympics. Those subject to JR East's tracking were suspicious people wandering around stations, wanted suspects, and released prisoners and parolees who had committed serious crimes at the company's stations and inside its trains. JR East received information on the discharged prisoners and others from the Public Prosecutors Office under a system that notifies victims, and managers of places where a crime occurred, of a perpetrator's release from prison. After obtaining such information, the company would consider whether to register their face photos on its database.
Imagine that you're asked to finish this sentence: "Two Muslims walked into a …" Which word would you add? "Bar," maybe? It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. "Two Muslims walked into a synagogue with axes and a bomb," it said. Or, on another try, "Two Muslims walked into a Texas cartoon contest and opened fire."
"Life isn't fair" is perhaps one of the most frequently repeated philosophical statements passed down from generation to generation. In a world increasingly dominated by data, however, groups of people that have already been dealt an unfair hand may see themselves further disadvantaged through the use of algorithms to determine whether or not they qualify for employment, housing, or credit, among other basic needs for survival. In the past few years, more attention has been paid to algorithmic bias, but there is still debate about both what can be done to address the issue and what should be done. The use of an algorithm is not at issue; algorithms are essentially a set of instructions for solving a problem or completing a task. Yet the lack of transparency surrounding the data and how it is weighed and used for decision making is a key concern, particularly when the algorithm's use may impact people in significant ways, often with no explanation as to why they have been deemed unqualified or unsuitable for a product, service, or opportunity.
The United Nations human rights chief is warning that the use of artificial intelligence technology presents a threat to human rights. The U.N. High Commissioner for Human Rights, Michelle Bachelet, called for a freeze on the use of artificial intelligence, or AI, technology. That includes face-scanning systems that track people in public places. She said countries should ban AI computer programs that do not observe international human rights law. Applications that should be banned include government "social scoring" systems that judge people based on their behavior.
The UK and Australia may have made a historic pact last week, but one thing they can't agree on is whether AIs can be patent inventors. AIs are increasingly being used to come up with new ideas and there's an argument they should therefore be listed as the inventor by patent agencies. However, opponents say that patents are a statutory right and can only be granted to a person. US-based Dr Stephen Thaler, the founder of Imagination Engines, has been leading the fight to give credit to machines for their creations. Dr Thaler's AI device, DABUS, consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.