Any business in its right mind should be painfully aware of how much money it could bleed via skillful Business Email Compromise (BEC) scams, where fraudsters convincingly forge emails, invoices, contracts and letters to socially engineer the people who hold the purse strings. And any human in their right mind should be at least a little freaked out by how easy it now is to churn out convincing deepfake videos – including, say, of you, cast in an adult movie, or of your CEO saying things that… well, they would simply never say. Well, welcome to a hybrid version of those hoodwinks: deepfake audio, which was recently used in what's considered the first known case of fraudsters using an AI-generated voice of a CEO to bilk a UK-based energy firm out of €220,000 (USD $243,000). The Wall Street Journal reports that sometime in March, the British CEO thought he had gotten a call from the CEO of his business's parent company, which is based in Germany. Whoever placed the call sounded legitimate.
Artificial Intelligence is now being used to detect cancer in a pioneering procedure. A new blood test uses AI to quickly scan for brain tumours with 90 per cent accuracy. Scientists hope that this new diagnostic tool could be used by the NHS and hospitals worldwide. Brain tumours are hard to detect and cause symptoms that can be confused with other maladies. These ambiguous symptoms include headaches, memory loss and vision problems, with a scan being the only way to detect the cancerous cells.
A Supreme Court justice has added his voice to calls for the regulation of computer algorithms handling crucial decisions about people's lives. An 'expert commission' could help ensure that automated decision-making processes have 'a capacity for mercy', Lord Sales (Philip Sales QC) said last night. Presenting the British and Irish Legal Information Institute's Sir Henry Brooke Lecture, Lord Sales said the growing role of algorithms and artificial intelligence poses significant legal problems, in particular around the fundamental concept of agency. Existing prejudices could be embedded in hidden rules that are impossible to challenge, he said. 'AI may get to the stage where it will understand the rules of equity and how to recognise hard cases, but we are not there yet.'
The Chancellor of the High Court has urged commercial lawyers to prepare for the disruptive impact of technology on the law, the legal system and the legal profession before others "steal a march" on them. Sir Geoffrey Vos said the profession needed "to turn its incredible intellectual fire-power towards the development of the English common law, so that it can effectively tackle the problems thrown up by the use of big data, cryptoassets, on-chain smart contracts, and artificial intelligence". Expressing confidence that the English common law could adapt to these challenges, he added: "My plea is that you do not leave it too late, because there are many other brilliant lawyers in other jurisdictions who are motivated to steal a march on their common law colleagues in the UK." Giving the Commercial Bar Association's annual lecture this week, Sir Geoffrey warned commercial lawyers that it was too late to hope to retire before any of this became a reality. "It is already reality," he said. Rather, he encouraged lawyers "to think imaginatively about the world in which the commercial legal services of the future will be required".
The Lloyds Bank National Business Awards is the flagship awards programme that recognises and rewards excellence across all sectors in the UK, celebrating businesses that combine creativity and innovation with results and set new standards of excellence within their industries.
As fears about AI's disruptive potential have grown, AI ethics has come to the fore in recent years. Concerns around privacy, transparency and the ability of algorithms to warp social and political discourse in unexpected ways have resulted in a flurry of pronouncements from companies, governments, and even supranational organizations on how to conduct ethical AI development. The majority have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy and paste won't work.
Although AI is entering new areas every day, a handful of laboratories still focused on artificial general intelligence are consuming large amounts of cash while making little progress towards that goal. According to documents submitted to the UK Companies Registry in August, the Alphabet-owned AGI lab DeepMind lost $570 million in 2018 alone. Another lab, OpenAI, which also aims to create AGI, had to abandon its non-profit structure in order to find investors for its expensive research. Both labs have achieved extraordinary successes, including creating systems that can play complex board games and video games. But they are still far from creating artificial general intelligence.
A terminally-ill British scientist dying from a muscle-wasting disease says he has fully completed his transition into the world's first full CYBORG -- called 'Peter 2.0'. Peter Scott-Morgan, 61, decided to challenge what it meant to be human when he refused to accept his fate following a diagnosis of motor neurone disease in 2017. He said he wanted to push the boundaries of what science can achieve, and so decided to extend his life and become fully robotic. And this week the world-renowned roboticist returned to his home in Torquay, Devon, after 24 days in intensive care, with all medical procedures now complete and able to begin his re-booted life. But the evolution of his machine-like existence doesn't end there -- and he joked he had more upgrades scheduled than Microsoft.
Despite scepticism over the cost-effectiveness of new technologies, and worries surrounding the potential difficulty of their installation, many companies in the UK are deciding to use emerging tech such as artificial intelligence (AI). This is according to a new report by Genesys, which claims three in five (60 per cent) UK firms are either using AI already, or planning to do so within a year from now. More than a third (37 per cent) are already using such tech to drive business objectives, increase efficiency and cut costs, while 42 per cent expect to see a positive impact within 12 months. But scepticism and worry remain. A significant portion of UK employers believe implementation will be too complex, and a quarter suspects the tech may be over-hyped.
"I say this to everyone in the media world who I talk to," says Darren Atkins, wrapping up our phone interview: "Please, absolutely do not portray this as a hidden agenda to get rid of staff." Atkins is the Chief Technology Officer for AI automation at East Suffolk and North Essex NHS Foundation Trust – a group of hospitals employing more than 10,000 staff, who serve a quarter of a million people in the South East of England. "If this technology is applied in the wrong way, it can be very threatening," Atkins says. "Our main priority is to free up time for staff to do the work that they should be doing, rather than the work that has no value." Just over a year ago, Atkins led the deployment of virtual workers across his group of NHS hospitals – and according to him, it's been an unqualified success. Patients are missing fewer appointments and staff are happier.