
Collaborating Authors: Clifford


Trainability of Quantum Models Beyond Known Classical Simulability

Meyer, Sabri, Scala, Francesco, Tacchino, Francesco, Lucchi, Aurelien

arXiv.org Artificial Intelligence

Variational Quantum Algorithms (VQAs) are promising candidates for near-term quantum computing, yet they face scalability challenges due to barren plateaus, where gradients vanish exponentially in the system size. Recent conjectures suggest that avoiding barren plateaus might inherently lead to classical simulability, thus limiting the opportunities for quantum advantage. In this work, we advance the theoretical understanding of the relationship between the trainability and computational complexity of VQAs, thus directly addressing the conjecture. We introduce the Linear Clifford Encoder (LCE), a novel technique that ensures constant-scaling gradient statistics on optimization landscape regions that are close to Clifford circuits. Additionally, we leverage classical Taylor surrogates to reveal computational complexity phase transitions from polynomial to super-polynomial as the initialization region size increases. Combining these results, we reveal a deeper link between trainability and computational complexity, and analytically prove that barren plateaus can be avoided in regions for which no classical surrogate is known to exist. Furthermore, numerical experiments on LCE transformed landscapes confirm in practice the existence of a super-polynomially complex "transition zone" where gradients decay polynomially. These findings indicate a plausible path to practically relevant, barren plateau-free variational models with potential for quantum advantage.
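The barren plateau phenomenon the abstract refers to can be observed numerically. The sketch below is not the paper's LCE; it is a generic illustration, assuming a hardware-efficient RY+CZ ansatz and the global cost C(θ) = |⟨0…0|U(θ)|0…0⟩|², with the gradient estimated by the parameter-shift rule. The variance of the gradient over random initializations shrinks rapidly as the qubit count grows.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    # Apply a single-qubit gate to qubit q of an n-qubit state vector.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    # Controlled-Z: flip the sign of amplitudes where both qubits are |1>.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def cost(thetas, n, layers):
    # Global cost C = |<0...0|U(theta)|0...0>|^2, hardware-efficient ansatz.
    psi = np.zeros(2 ** n); psi[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            psi = apply_1q(psi, ry(next(t)), q, n)
        for q in range(n - 1):
            psi = apply_cz(psi, q, q + 1, n)
    return abs(psi[0]) ** 2

def grad_variance(n, layers=4, samples=200, seed=0):
    # Variance over random initializations of dC/dtheta_0 (parameter-shift rule).
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, n * layers)
        plus, minus = th.copy(), th.copy()
        plus[0] += np.pi / 2
        minus[0] -= np.pi / 2
        grads.append(0.5 * (cost(plus, n, layers) - cost(minus, n, layers)))
    return np.var(grads)

for n in (2, 4, 6, 8):
    print(n, grad_variance(n))  # variance decays as n grows
```

The paper's contribution is precisely to construct initialization regions (near Clifford circuits) where this variance stays constant in n rather than decaying.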


Auto-FEDUS: Autoregressive Generative Modeling of Doppler Ultrasound Signals from Fetal Electrocardiograms

Rafiei, Alireza, Clifford, Gari D., Katebi, Nasim

arXiv.org Artificial Intelligence

Fetal health monitoring through one-dimensional Doppler ultrasound (DUS) signals offers a cost-effective and accessible approach that is increasingly gaining interest. Despite its potential, the development of machine learning-based techniques to assess the health condition of mothers and fetuses using DUS signals remains limited. This scarcity is primarily due to the lack of extensive DUS datasets with a reliable reference for interpretation and data imbalance across different gestational ages. In response, we introduce a novel autoregressive generative model designed to map fetal electrocardiogram (FECG) signals to corresponding DUS waveforms (Auto-FEDUS). By leveraging a neural temporal network based on dilated causal convolutions that operate directly on the waveform level, the model effectively captures both short- and long-range dependencies within the signals, preserving the integrity of generated data. Cross-subject experiments demonstrate that Auto-FEDUS outperforms conventional generative architectures across both time and frequency domain evaluations, producing DUS signals that closely resemble the morphology of their real counterparts. The realism of these synthesized signals was further gauged using a quality assessment model, which classified all of them as good quality, and a heart rate estimation model, which produced comparable results for generated and real data, with a Bland-Altman limit of 4.5 beats per minute. This advancement offers a promising solution for mitigating limited data availability and enhancing the training of DUS-based fetal models, making them more effective and generalizable.
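The dilated causal convolutions the abstract mentions are the WaveNet-style building block for waveform-level autoregressive models. The minimal numpy sketch below (not the Auto-FEDUS implementation; kernel weights and layer dilations here are illustrative) shows the two properties that matter: the output at time t depends only on inputs at times ≤ t, and stacking layers with dilations 1, 2, 4, 8 grows the receptive field exponentially with depth.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution: y[t] = sum_j w[j] * x[t - j*dilation].
    Left zero-padding keeps the output the same length as the input, and
    guarantees y[t] depends only on x at times <= t."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# An impulse at t=0 pushed through kernel-2 layers with dilations 1, 2, 4, 8
# spreads over a receptive field of 1 + 1 + 2 + 4 + 8 = 16 samples.
y = np.zeros(16); y[0] = 1.0
for d in (1, 2, 4, 8):
    y = causal_conv1d(y, np.array([1.0, 1.0]), dilation=d)
print(np.nonzero(y)[0])  # impulse response covers positions 0..15
```

This is why such stacks can capture both short- and long-range dependencies in a waveform with only a handful of layers.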


Edge AI for Real-time Fetal Assessment in Rural Guatemala

Katebi, Nasim, Ahmad, Mohammad, Motie-Shirazi, Mohsen, Phan, Daniel, Kolesnikova, Ellen, Nikookar, Sepideh, Rafiei, Alireza, Korikana, Murali K., Hall-Clifford, Rachel, Castro, Esteban, Sut, Rosibely, Coyote, Enma, Strader, Anahi Venzor, Ramos, Edlyn, Rohloff, Peter, Sameni, Reza, Clifford, Gari D.

arXiv.org Artificial Intelligence

Perinatal complications, defined as conditions that arise during pregnancy, childbirth, and the immediate postpartum period, represent a significant burden on maternal and neonatal health worldwide. Factors contributing to disparities in these outcomes include limited access to quality healthcare, socioeconomic inequalities, and variations in healthcare infrastructure. Addressing these issues is crucial for improving health outcomes for mothers and newborns, particularly in underserved communities. To mitigate these challenges, we have developed an AI-enabled smartphone application designed to provide decision support at the point-of-care. This tool aims to enhance health monitoring during pregnancy by leveraging machine learning (ML) techniques. The intended use of this application is to assist midwives during routine home visits by offering real-time analysis and providing feedback based on collected data. The application integrates TensorFlow Lite (TFLite) and other Python-based algorithms within a Kotlin framework to process data in real-time. It is designed for use in low-resource settings, where traditional healthcare infrastructure may be lacking. The intended patient population includes pregnant women and new mothers in underserved areas, and the developed system was piloted in rural Guatemala. This ML-based solution addresses the critical need for accessible and quality perinatal care by empowering healthcare providers with decision support tools to improve maternal and neonatal health outcomes.


The arrogant ex-soldier who turned into a triple killer

BBC News

Former soldier Kyle Clifford raped and murdered Louise Hunt, and killed her sister Hannah and mother Carol in attacks described by police as "barbaric". What happened and what has emerged since? Days before the attacks, Louise had ended an 18-month relationship with Clifford. She told Clifford, who she had met through a dating app, it was "sucking the life out of me". They did not like the way Clifford treated Louise, finding him disrespectful, arrogant, rude and "odd". He had hidden relationships with other women from Louise, and went on a dating site moments after receiving the message ending theirs.


Cringing before the tech giants is no way to make Britain an AI superpower | John Naughton

The Guardian

But last Monday he broke the habit of a lifetime in a speech delivered at University College London. It was about AI, which he sees as "the defining opportunity of our generation". The UK, he declared, "is the nation of Babbage, Lovelace and Turing", not to mention the country "that gave birth to the modern computer and the world wide web. So mark my words – Britain will be one of the great AI superpowers." Within days of taking office, the PM had invited Matt Clifford, a smart tech bro from central casting, to think about "how we seize the opportunities of AI".


Practical and efficient quantum circuit synthesis and transpiling with Reinforcement Learning

Kremer, David, Villar, Victor, Paik, Hanhee, Duran, Ivan, Faro, Ismael, Cruz-Benito, Juan

arXiv.org Artificial Intelligence

This paper demonstrates the integration of Reinforcement Learning (RL) into quantum transpiling workflows, significantly enhancing the synthesis and routing of quantum circuits. By employing RL, we achieve near-optimal synthesis of Linear Function, Clifford, and Permutation circuits, up to 9, 11 and 65 qubits respectively, while being compatible with native device instruction sets and connectivity constraints, and orders of magnitude faster than optimization methods such as SAT solvers. We also achieve significant reductions in two-qubit gate depth and count for circuit routing up to 133 qubits with respect to other routing heuristics such as SABRE. We find the method to be efficient enough to be useful in practice in typical quantum transpiling pipelines. Our results set the stage for further AI-powered enhancements of quantum computing workflows.
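For context on what "Linear Function circuit" synthesis means here: a linear function over n qubits is an invertible binary matrix A, realizable entirely with CNOT gates. The sketch below is not the paper's RL method but the classical Gaussian-elimination baseline it is benchmarked against in spirit: it always succeeds, with O(n²) gates, while RL-based synthesis searches for shorter, connectivity-compatible sequences.

```python
import numpy as np

def cnot_synthesis(A):
    """Synthesize a CNOT circuit for an invertible binary matrix A over GF(2).
    Returns a list of (control, target) pairs; applied in order, the circuit
    maps x -> A x (mod 2). Gaussian-elimination baseline, not gate-optimal."""
    M = A.copy() % 2
    n = M.shape[0]
    ops = []
    for col in range(n):
        if M[col, col] == 0:  # pivot: pull up a lower row with a 1 in this column
            r = next(r for r in range(col + 1, n) if M[r, col])
            M[col] ^= M[r]
            ops.append((r, col))
        for r in range(n):  # clear every other 1 in this column
            if r != col and M[r, col]:
                M[r] ^= M[col]
                ops.append((col, r))
    # Elimination gives E_k ... E_1 A = I with self-inverse row ops E_i,
    # hence A = E_1 ... E_k: the circuit applies the recorded CNOTs in reverse.
    return ops[::-1]

def apply_cnots(cnots, n):
    # Reconstruct the GF(2) matrix implemented by a CNOT list (circuit order):
    # CNOT(c, t) adds row c to row t, i.e. x_t -> x_t XOR x_c.
    M = np.eye(n, dtype=int)
    for c, t in cnots:
        M[t] ^= M[c]
    return M
```

A round trip (`apply_cnots(cnot_synthesis(A), n) == A`) verifies the construction; RL and SAT-based methods aim to beat the gate count of this baseline under device connectivity constraints.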


Downing Street trying to agree statement about AI risks with world leaders

The Guardian

Rishi Sunak's advisers are trying to thrash out an agreement among world leaders on a statement warning about the risks of artificial intelligence as they finalise the agenda for the AI safety summit next month. Downing Street officials have been touring the world talking to their counterparts from China to the EU and the US as they work to agree on words to be used in a communique at the two-day conference. But they are unlikely to agree a new international organisation to scrutinise cutting-edge AI, despite interest from the UK in giving the government's AI taskforce a global role. Sunak's AI summit will produce a communique on the risks of AI models, provide an update on White House-brokered safety guidelines and end with "like-minded" countries debating how national security agencies can scrutinise the most dangerous versions of the technology. The possibility of some form of international cooperation on cutting-edge AI that can pose a threat to human life will also be discussed on the final day of the summit on 1 and 2 November at Bletchley Park, according to a draft agenda seen by the Guardian.


AI should require license like medical, nuclear work on advanced tools: Britain's Labour Party

FOX News

Center for A.I. Safety Director Dan Hendrycks explains concerns about how the rapid growth of artificial intelligence could impact society. The United Kingdom should prohibit technology developers from working on advanced artificial intelligence tools unless they have a license to do so, according to the British Labour Party. Lucy Powell, a spokesperson for Britain's main left-wing political party, told the Guardian this week that much stricter rules should be imposed on companies regarding the training of their AI products on large datasets similar to those used by OpenAI to build ChatGPT. "My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that's governing how they are built, how they are managed or how they are controlled," said Powell, who suggested AI should be licensed similarly to both the medical field and nuclear power. Both fields are tightly regulated by British government bodies.


AI should be licensed like medicines or nuclear power, Labour suggests

The Guardian

The UK should bar technology developers from working on advanced artificial intelligence tools unless they have a licence to do so, Labour has said. Ministers should introduce much stricter rules around companies training their AI products on vast datasets of the kind used by OpenAI to build ChatGPT, Lucy Powell, Labour's digital spokesperson, told the Guardian. Her comments come amid a rethink at the top of government over how to regulate the fast-moving world of AI, with the prime minister, Rishi Sunak, acknowledging it could pose an "existential" threat to humanity. One of the government's advisers on artificial intelligence also said on Monday that humanity could have only two years before AI is able to outwit people, the latest in a series of stark warnings about the threat posed by the fast-developing technology. Powell said: "My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that's governing how they are built, how they are managed or how they are controlled."


ChatGPT is a robot con artist, and we're suckers for trusting it

#artificialintelligence

A few days after Google and Microsoft announced they'd be delivering search results generated by chatbots -- artificially intelligent software capable of producing uncannily human-sounding prose -- I fretted that our new AI helpers are not to be trusted. After all, Google's own AI researchers had warned the company that chatbots would be "stochastic parrots" (likely to squawk things that are wrong, stupid, or offensive) and "prone to hallucinating" (liable to just make stuff up). The bots, drawing on what are known as large language models, "are trained to predict the likelihood of utterances," a team from DeepMind, the Alphabet-owned AI company, wrote last year in a presentation on the risks of LLMs. "Yet, whether or not a sentence is likely does not reliably indicate whether the sentence is also correct." These chatbots, in other words, are not actually intelligent.