A new artificial intelligence sleep app has been developed that might be able to replace sleeping pills for insomnia sufferers. Sleepio uses an AI algorithm to provide individuals with tailored cognitive behavioural therapy for insomnia (CBT-I). The National Institute for Health and Care Excellence (Nice) said it would save the NHS money as well as reduce prescriptions of medicines such as zolpidem and zopiclone, which can be dependency-forming. Its economic analysis found healthcare costs were lower after one year of using Sleepio, mostly because of fewer GP appointments and fewer sleeping-pill prescriptions. The app provides a digital six-week self-help programme involving a sleep test, weekly interactive CBT-I sessions and a diary of sleeping patterns.
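Sleepio's algorithm itself is proprietary, but the diary-driven loop described above can be illustrated with a standard CBT-I metric: sleep efficiency, the share of time in bed actually spent asleep. The sketch below is purely illustrative and assumes the commonly cited 85% and 90% guideline thresholds; the function names and data are invented, not Sleepio's actual logic.

```python
def sleep_efficiency(minutes_asleep: int, minutes_in_bed: int) -> float:
    # Core CBT-I diary metric: percentage of time in bed spent asleep.
    return 100 * minutes_asleep / minutes_in_bed

def adjust_sleep_window(diary: list[tuple[int, int]]) -> str:
    # Toy version of the sleep-restriction rule used in CBT-I: average the
    # week's efficiency, then widen or narrow the allowed time in bed.
    # Thresholds are the commonly cited 85%/90% guidelines, used purely
    # for illustration.
    avg = sum(sleep_efficiency(a, b) for a, b in diary) / len(diary)
    if avg >= 90:
        return "extend time in bed by 15 minutes"
    if avg < 85:
        return "restrict time in bed by 15 minutes"
    return "keep current sleep window"
```

A weekly diary of (minutes asleep, minutes in bed) pairs such as `[(420, 440), (430, 450)]` averages above 90% efficiency, so the toy rule would extend the sleep window.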
Financial services compliance is a big area. Prajit Nanu, CEO of B2B payments platform Nium, says it is in everybody's interest that payment transactions are as frictionless as possible, but many commonly used payment systems carry unnecessary layers of complexity, including those added to satisfy regulatory and compliance requirements. He says automation can help to resolve the lags arising from risk and compliance checks, which can be time-consuming and labour-intensive, particularly for firms dealing with cross-region, cross-border checks. An automated payment platform appropriately integrated with other business software can perform these checks far more seamlessly. Nanu says: "Digital tools, such as individualised transaction profiles, coupled with the output of machine learning processes, will be able to offer real-time solutions which significantly reduce the time required for risk and compliance checks, while still allowing effective identity verification and fraud detection checks."
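The "individualised transaction profiles" Nanu mentions can be sketched as a per-customer history that lets routine payments clear instantly while routing outliers to review. This is a minimal, hypothetical illustration of the pattern; the class names, thresholds and screening outcomes are invented, not Nium's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TransactionProfile:
    # Hypothetical per-customer payment history used to spot outliers.
    customer_id: str
    amounts: list = field(default_factory=list)

    def record(self, amount: float) -> None:
        self.amounts.append(amount)

    def is_anomalous(self, amount: float, factor: float = 3.0) -> bool:
        # Flag a payment far above this customer's historical average.
        if not self.amounts:
            return True  # no history yet: always review
        mean = sum(self.amounts) / len(self.amounts)
        return amount > factor * mean

def screen_payment(profile: TransactionProfile, amount: float,
                   sanctioned: set, beneficiary: str) -> str:
    # Hard rule first: sanctions screening can never be skipped.
    if beneficiary in sanctioned:
        return "blocked"
    if profile.is_anomalous(amount):
        return "manual_review"
    profile.record(amount)
    return "cleared"
```

In this sketch a customer with a history of payments around 1,000 clears another 1,000 payment automatically, while a 50,000 payment is held for manual review; only the anomalous fraction of traffic incurs the slow, labour-intensive checks.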
Neuromorphic chips have received a boost from research showing they are far more energy efficient than non-neuromorphic hardware at running large deep learning networks. This may become important as AI adoption increases. The study was carried out by the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) in Austria using Intel's Loihi 2 silicon, a second-generation experimental neuromorphic chip announced by Intel Labs last year that has about a million artificial neurons. The research paper, "A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware", published in Nature Machine Intelligence, claims the Intel chips are up to 16 times more energy efficient on deep learning tasks than non-neuromorphic hardware performing the same work. The hardware tested consisted of 32 Loihi chips.
"Data is the new oil." Originally coined in 2006 by the British mathematician Clive Humby, this phrase is arguably more apt today than it was then, as smartphones rival automobiles for relevance and the technology giants know more about us than we would like to admit. Just as it does for the financial services industry, the hyper-digitization of the economy presents both opportunity and potential peril for financial regulators. On the upside, reams of information are newly within their reach, filled with signals about financial system risks that regulators spend their days trying to understand. The explosion of data sheds light on global money movement, economic trends, customer onboarding decisions, quality of loan underwriting, noncompliance with regulations, financial institutions' efforts to reach the underserved, and much more. Importantly, it also contains the answers to regulators' questions about the risks of new technology itself. Digitization of finance generates novel kinds of hazards and accelerates their development. Problems can flare up between scheduled regulatory examinations and can accumulate imperceptibly beneath the surface of information reflected in traditional reports. Thanks to digitization, regulators today have a chance to gather and analyze much more data and to see much of it in something close to real time. The potential for peril arises from the concern that the regulators' current technology framework lacks the capacity to synthesize the data. The irony is that this flood of information is too much for them to handle.
In the last decade, in-game events like Veteruns, store bundles and esports competitions have all been used as vehicles to raise money for charity. Between March 20 and April 3, all "Fortnite" proceeds were donated to four humanitarian relief funds to aid those affected by the war in Ukraine. Awesome Games Done Quick, in which players speedrun hundreds of titles such as "Deathloop," "Sekiro: Shadows Die Twice" and "Super Mario 3D Land" for charity, raised over $3 million for the Prevent Cancer Foundation. And "League of Legends" players sent $6 million to Riot Games' Social Impact Fund through the purchase of the game's 1,000th skin.
Join the Applied AI Conference (online in 2022), where we focus on the real-world impact of AI. This year's topic: Sales & Marketing. All of our participants and speakers can arrange virtual 1:1 meetings with each other. We connect AI solution developers with potential users, and we make sure you walk away with a bag of leads. Ever wondered how you could make AI work for your business? AAIC is the right place for you.
Today is a big day for AI announcements from Microsoft, both from this week's Build conference and beyond. But one common theme bubbles over consistently: for AI to become more useful for business applications, it needs to be easier, simpler, more explainable, more accessible and, most of all, responsible. Responsible AI is actually at the heart of a lot of today's Build news, John Montgomery, corporate vice president of Azure AI, told VentureBeat. Most notable is Azure Machine Learning's preview of a responsible AI dashboard, which brings together capabilities in use over the past 18 months, such as the data explorer, model interpretability, error analysis, and counterfactual and causal inference analysis, into a single view.
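The "error analysis" capability named above refers to surfacing how a model's mistakes concentrate in particular data cohorts rather than reporting one aggregate accuracy number. A minimal pure-Python sketch of that idea follows; it is not Microsoft's Azure ML API, and the cohort names and predictions are invented for illustration.

```python
def error_rate_by_cohort(records):
    # records: iterable of (cohort, y_true, y_pred) triples.
    # Returns the per-cohort misclassification rate.
    stats = {}
    for cohort, y_true, y_pred in records:
        errors, total = stats.get(cohort, (0, 0))
        stats[cohort] = (errors + (y_true != y_pred), total + 1)
    return {c: errors / total for c, (errors, total) in stats.items()}

# Invented predictions: overall accuracy is 5/6, but the errors all
# fall in one cohort, which an aggregate metric would hide.
preds = [
    ("under-30", 1, 1), ("under-30", 0, 1), ("under-30", 1, 1),
    ("over-30", 0, 0), ("over-30", 1, 1), ("over-30", 0, 0),
]
```

Here `error_rate_by_cohort(preds)` reports a 33% error rate for the "under-30" cohort against 0% for "over-30", the kind of disparity a responsible AI dashboard is meant to make visible.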
Dyson has signalled it is placing a "big bet" on producing robots capable of household chores by 2030, as it looks to move beyond the vacuum cleaners, fans and dryers that made its founder one of the wealthiest British businessmen. The company, founded by billionaire Sir James Dyson, on Wednesday published photographs of robot arms at work in household settings: one cleaning furniture, a claw picking up plates, and a hand-like machine picking up a teddy bear. While those may not sound like major achievements, robots still struggle with many actions that are simple tasks for humans, such as grasping fragile objects or dealing with unfamiliar obstacles. Solving those and other problems could create new markets for the company. Dyson wants to build the UK's largest robotics research centre at its Hullavington Airfield site, close to its design centre in Malmesbury, Wiltshire.
Algorithmic systems – often referred to by the buzzword Artificial Intelligence (AI) – increasingly pervade our daily lives. They are used to detect social benefits fraud, to surveil people at the workplace, or to predict parolees' risk of reoffending. Often, these systems not only rest on shaky scientific grounds but can also be used in ways that infringe people's basic rights – such as the rights to non-discrimination, freedom of expression, privacy, and access to justice – and can undermine foundational democratic principles; through their non-transparent nature and the lack of accountability mechanisms, they can also be in tension with the rule of law. Against this background, and in light of its mandate, the Council of Europe has recognized the need for states to govern the development and use of AI systems. The Council of Europe is an international organization founded in 1949 with the task of upholding human rights, democracy, and the rule of law in Europe.