The British are taking their obsession with the weather to new heights. Today, the UK announced it is advancing its project to build the world's most powerful climate and weather supercomputer with the help of Microsoft. The country's weather service, the Met Office, has struck a multimillion-pound agreement with the tech company on the project, which was previously earmarked to receive £1.2 billion ($1.6 billion) of government funding. While the UK already boasts a weather supercomputer -- which can perform 16,000 trillion calculations a second -- the new machine will be twice as powerful. By gaining access to more detailed climate modeling, the UK is hoping to future-proof its city and transport infrastructure to protect them against extreme weather events.
Artificial intelligence-based algorithms can influence people to prefer one political candidate – or a would-be partner – over another, according to researchers. "We are worried that everyone is using recommendation algorithms all the time, but there was no information on how effective those recommendation algorithms are," says Helena Matute at the University of Deusto in Spain. Her work with her colleague Ujué Agudo, also at the University of Deusto, was designed to investigate the issue. The researchers carried out a series of four experiments in which participants were told they were interacting with an algorithm that would judge their personality. The 'algorithm' did not actually do this: it was a mock algorithm that responded in the same way regardless of the information participants gave it.
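The sham-algorithm design is simple to picture. A minimal sketch (all names and wording here are hypothetical illustrations, not the researchers' actual materials): the "personality judgment" is canned and the participant's answers are deliberately ignored.

```python
def mock_personality_algorithm(answers):
    """A sham 'algorithm' of the kind described: it returns the same
    canned judgment no matter what the participant answered."""
    # The participant's input is deliberately ignored.
    return "Our analysis suggests you would be more compatible with candidate A."

# Two participants giving completely different answers receive identical feedback.
print(mock_personality_algorithm({"q1": "yes", "q2": 3}))
print(mock_personality_algorithm({"q1": "no", "q2": 7}))
```

Because the output is fixed, any measured shift in participants' preferences can be attributed to the framing of the "algorithmic" recommendation itself rather than to any real analysis.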
Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration for potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. However, we believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting of their work; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit. Machine learning (ML), artificial intelligence (AI), and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. 
The potential uses include improving diagnostic accuracy [1], more reliably predicting prognosis [2], targeting treatments [3], and increasing the operational efficiency of health systems [4]. Examples of potentially disruptive technology with early promise include image based diagnostic applications of ML/AI, which have shown the most early clinical promise (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians [5]), or natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records [2]. Although we are only just …
Bésame Cosmetics founder and makeup historian Gabriela Hernandez delivers insights into the billion-dollar cosmetic industry. Learn how makeup was deeply impacted by society's perception of women. A make-up artist has become an internet sensation after transforming herself into popular celebrities -- even fooling her friends and phone. Liss Lacao, 29, has recreated the recognizable features of celebrities such as Gordon Ramsay, Dolly Parton, the Queen and British Prime Minister Boris Johnson. She's so good, she's even fooled her iPhone -- which has facial recognition -- and her friends into thinking she was one of the A-listers.
Poppy Gustafsson runs a cutting-edge and gender-diverse cybersecurity firm on the brink of a £3bn stock market debut, but she is happy to reference pop culture classic the Terminator to help describe what Darktrace actually does. Launched in Cambridge eight years ago by an unlikely alliance of mathematicians, former spies from GCHQ and the US and artificial intelligence (AI) experts, Darktrace provides protection, enabling businesses to stay one step ahead of ever smarter and more dangerous hackers and viruses. Darktrace markets its products as the digital equivalent of the human body's ability to fight illness: its AI security works as an "enterprise immune system", can "self-learn and self-heal" and has an "autonomous response capability" to tackle threats without instruction as they are detected. "It really does feel like we're in this new era of cybersecurity," says Gustafsson, the chief executive of Darktrace. "The arms race will absolutely continue. I really don't think it's very long until this [AI] innovation gets into the hands of attackers, and we will see these very highly targeted and specific attacks that humans won't necessarily be able to spot and defend themselves from. It's not going to be these futuristic Terminator-style robots out shooting each other, it's going to be all these little pieces of code fighting in the background of our businesses."
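The "immune system" metaphor maps onto a standard anomaly-detection pattern: learn a baseline of what normal activity looks like, then flag deviations from it. A minimal illustrative sketch of that general idea (this is not Darktrace's actual method; the function, baseline data, and threshold are invented for illustration):

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it lies more than `threshold` standard
    deviations from the mean of previously observed activity."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# Baseline: typical outbound traffic (arbitrary units) for one device.
baseline = [100, 110, 95, 105, 98, 102, 107, 99]
print(is_anomalous(baseline, 104))   # ordinary fluctuation, not flagged
print(is_anomalous(baseline, 5000))  # sudden spike, flagged
```

The appeal of this family of techniques is that nothing needs to be known about the attack in advance: only the deviation from learned normal behaviour triggers a response.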
Having recently announced the launch of the new UK Cyber Security Council, the UK government has followed up by announcing its plans to publish a new National Artificial Intelligence Strategy (the AI Strategy) later this year. The aim of the AI Strategy is to build on the United Kingdom's position as a global center for the development, commercialization, and adoption of responsible AI. Digital Secretary Oliver Dowden announced the strategy, commenting, "Unleashing the power of AI is a top priority in our plan to be the most pro-tech government ever. The UK is already a world leader in this revolutionary technology and the new AI Strategy will help us seize its full potential--from creating new jobs and improving productivity to tackling climate change and delivering better public services." The intention is for the AI Strategy to align with the UK government's overall plans to support jobs and economic growth through increased investment in infrastructure, skills, and innovation.
To unleash the potential of AI safely, however, issues such as accuracy, human control, transparency, bias and privacy need to be addressed. So governments should be role-modelling the ethical use of AI, and educating their people on AI and how to be ready for the opportunities and challenges. One way countries could do this would be through setting up a body that is a visible focus for AI: a centre of excellence. Our project recommends this as a way to increase ethical AI use in a country and build public support for it across the economy and society. The centre could draw staff from industry, government, academia and civil society, using a multidisciplinary and collaborative approach to provide advice on AI and algorithm use for government operations. The centre would start to raise awareness on AI itself and encourage conversations about people's level of comfort with using it in different situations.
The AI Strategy will focus on growth of the economy through the widespread use of AI technologies. Through this strategy, the government will nurture the UK's AI pioneers to accelerate bringing new technologies to market.
Between 2015 and 2020 people applying for visas to enter the United Kingdom to work, study or visit loved ones would fill in the paperwork in the usual way, and that data would then be handed over to an algorithm to assess. It would give them a rating: red, amber or green. Of those assessed as green, 96.3 per cent were waved through. Those marked as red – the 'riskiest' category – weren't automatically rejected, but were subject to further checks, with senior staff being brought in to check the data and make a final decision. This partially automated process, run by the Home Office, ultimately approved 48 per cent of red applications. Those using it trusted its decisions.
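The triage flow described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration only: the Home Office's actual streaming tool was not public, and the score thresholds and routing labels here are invented.

```python
def stream_application(risk_score):
    """Toy traffic-light streaming tool: map a numeric risk score
    to a rating and a routing decision for a caseworker."""
    if risk_score < 0.3:
        rating, action = "green", "fast-track review"
    elif risk_score < 0.7:
        rating, action = "amber", "standard review"
    else:
        # Red is not an automatic rejection: it routes the case
        # to senior staff for further checks and a final decision.
        rating, action = "red", "escalate to senior staff"
    return rating, action

print(stream_application(0.1))  # ('green', 'fast-track review')
print(stream_application(0.9))  # ('red', 'escalate to senior staff')
```

The figures in the article show why such designs attract scrutiny: even though red only triggered extra checks, the rating shaped how much friction an applicant faced, and those using the tool "trusted its decisions".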
24 March 2020 will be remembered by some for the news that Prince Charles tested positive for Covid and was isolating in Scotland. In Athens it was memorable as the day the traffic went silent. Twenty-four hours into a hard lockdown, Greeks were acclimatising to a new reality in which they had to send an SMS to the government in order to leave the house. As well as millions of text messages, the Greek government faced extraordinary dilemmas. The European Union's most vulnerable economy, with its oldest population (alongside Italy's) and one of its weakest health systems, faced the first wave of a pandemic that overwhelmed richer countries with fewer pensioners and stronger health provision. One Greek who did go into the office that day was Kyriakos Pierrakakis, the minister for digital transformation, whose signature was inked in blue on an agreement with the US technology company Palantir. The deal, which would not be revealed to the public for another nine months, gave one of the world's most controversial tech companies access to vast amounts of personal data while offering its software to help Greece weather the Covid storm. The zero-cost agreement was not registered on the public procurement system, nor did the Greek government carry out a data impact assessment – the mandated check to see whether an agreement might violate privacy laws. The questions that emerge in pandemic Greece echo those from across Europe during Covid and show Palantir extending into sectors from health to policing, aviation to commerce and even academia.