Jobs in video games dried up, so we made our own

BBC News

Despite struggling to find a full-time job in the industry, Holly has just released her first commercial game alongside uni course mate Harvey Hayman. Morgan: Metal Detective - a "cosy" slow-paced adventure game set in Cornwall - began life as their end-of-year project. Players take on the role of Morgan, a young girl who uses her late grandfather's metal detector to find lost items and return them to residents on a small island. Holly and Harvey tell BBC Newsbeat it's a personal project for both of them, inspired by childhood holidays in south-west England, but one they've had to work hard to get over the line. Funding for new video games has also declined in the past two years, so the project has been largely self-financed.


ECG Latent Feature Extraction with Autoencoders for Downstream Prediction Tasks

Harvey, Christopher, Shomaji, Sumaiya, Yao, Zijun, Noheria, Amit

arXiv.org Artificial Intelligence

The electrocardiogram (ECG) is an inexpensive and widely available tool for cardiac assessment. Despite its standardized format and small file size, the high complexity and inter-individual variability of ECG signals (typically a 60,000-size vector with 12 leads at 500 Hz) make it challenging to use in deep learning models, especially when only small training datasets are available. This study addresses these challenges by exploring feature generation methods from representative-beat ECGs, focusing on Principal Component Analysis (PCA) and autoencoders to reduce data complexity. We introduce three novel Variational Autoencoder (VAE) variants: the Stochastic Autoencoder (SAE), Annealed β-VAE (Aβ-VAE), and Cyclical β-VAE (Cβ-VAE), and compare their effectiveness in maintaining signal fidelity and enhancing downstream prediction tasks using a Light Gradient Boosting Machine (LGBM). The Aβ-VAE achieved superior signal reconstruction, reducing the mean absolute error (MAE) to 15.7 ± 3.2 μV, which is at the level of signal noise. Moreover, the SAE encodings, when combined with traditional ECG summary features, improved the prediction of reduced Left Ventricular Ejection Fraction (LVEF), achieving a holdout test set area under the receiver operating characteristic curve (AUROC) of 0.901 with an LGBM classifier. This performance nearly matches the 0.909 AUROC of a state-of-the-art CNN model but requires significantly fewer computational resources. Further, the ECG feature extraction-LGBM pipeline avoids overfitting and retains predictive performance when trained with less data. Our findings demonstrate that these VAE encodings are not only effective in simplifying ECG data but also provide a practical solution for applying deep learning in contexts with limited-scale labeled training data.
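The Annealed and Cyclical β-VAE variants named in the abstract differ in how the KL-divergence weight β is scheduled during training (the VAE loss being reconstruction error plus β times the KL term). The abstract does not spell out the exact schedules, so the following is a minimal illustrative sketch of the two schedule shapes the names conventionally suggest; the function names and ramp shapes are assumptions, not the paper's implementation.

```python
def annealed_beta(step, total_steps, beta_max=1.0):
    """Annealed schedule (assumed shape): linearly ramp beta from 0 to
    beta_max once over training, then hold it there."""
    return beta_max * min(step / total_steps, 1.0)


def cyclical_beta(step, cycle_len, beta_max=1.0):
    """Cyclical schedule (assumed shape): within each cycle, ramp beta
    from 0 to beta_max during the first half, hold during the second
    half, then reset at the start of the next cycle."""
    pos = (step % cycle_len) / cycle_len  # position within current cycle, in [0, 1)
    return beta_max * min(2.0 * pos, 1.0)


# Either schedule would be consulted once per training step, e.g.:
#   loss = reconstruction_loss + annealed_beta(step, total_steps) * kl_loss
```

Annealing lets the model first learn to reconstruct faithfully (β near 0) before the latent-space regularization pressure is applied; the cyclical variant periodically relaxes that pressure again, which is often motivated as a way to escape poor latent codes.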


Scarlett Johansson warns of AI dangers, says 'there's no boundary here'

FOX News

AI expert Marva Bailer explains how, even though there are currently laws in place, the average person has more access than ever to create deepfakes of celebrities. Scarlett Johansson has taken a vocal stand on artificial intelligence, after having her likeness and voice used without permission. Last year, Johansson said she had been asked to voice OpenAI's chatbot by CEO Sam Altman, but turned down the job, only for people to notice that the feature, named "Sky," sounded almost exactly like the actress. "It was like: If that can happen to me, how are we going to protect ourselves from this? There's no boundary here; we're setting ourselves up to be taken advantage of," the 40-year-old told InStyle Magazine earlier this month. In a statement to NPR following the release of "Sky," Johansson said, "When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."


Developing a system for real-time sensing of flooded roads

AIHub

Roadway-related incidents are a leading cause of flood fatalities nationwide, but limited flood-reporting tools make it difficult to evaluate road conditions in real time. Existing tools -- traffic cameras, water-level sensors and even social media data -- can provide observations of flooding, but they are often not primarily designed for sensing flood conditions on roads and do not work in conjunction. A network of sensors could improve situational flood-level awareness; however, such networks are expensive to operate at scale. Engineers at Rice University have developed a possible solution to this problem: an automated data fusion framework called OpenSafe Fusion. Short for Open Source Situational Awareness Framework for Mobility using Data Fusion, OpenSafe Fusion leverages existing individual reporting mechanisms and public data sources to sense quickly evolving road conditions during urban flooding events, which are becoming increasingly frequent.


The Fall of Babylon Is a Warning for AI Unicorns

WIRED

In late 2016, Hugh Harvey was working as a consultant doctor in the UK's National Health Service. Harvey had dabbled in machine learning while doing a research degree, and had seen the potential for artificial intelligence to revolutionize health care. But he felt strongly that the introduction of AI into medicine was not going to come from within the NHS--it was going to come from industry. So when an opportunity opened up at a buzzy new health-tech startup, Babylon Health, he applied. Founded in London in 2013 by Ali Parsa, a British-Iranian ex-banker, Babylon had a lofty goal: It wanted to do with health care what Google did with information; that is, make it freely and easily available to everyone.


PwC's 4,000 legal staffers get AI assistant as law chatbots gain steam

#artificialintelligence

PwC said it partnered with AI startup Harvey for an initial 12-month contract, which the accounting and consulting firm said will help lawyers with contract analysis, regulatory compliance work, due diligence and other legal advisory and consulting services. PwC said it will also determine ways for tax professionals to use the technology. It said its access to Harvey's technology is exclusive among the Big Four professional services firms. Harvey is built on technology from OpenAI, the Microsoft Corp-backed startup that on Tuesday released an upgraded version of its AI sensation ChatGPT. Harvey received a $5 million investment last year in a funding round led by the OpenAI Startup Fund.


Will artificial intelligence replace your lawyer–and will its name be Harvey?

#artificialintelligence

Enter Harvey, today's golden child that lives at the intersection of technology and law. Harvey is an A.I. platform that can help lawyers perform legal tasks in areas such as due diligence, litigation, and compliance. Described as "the innovative artificial intelligence platform built on a version of OpenAI's latest models enhanced for legal work," legaltech startup Harvey, the self-styled "generative A.I. for elite law firms," is about to play in the big leagues. Harvey is being rolled out for use by 3,500 lawyers in 43 offices of Allen & Overy, the seventh largest law firm in the world and part of London's "Magic Circle." I've watched legaltech evolve from the inside for decades.


Generative AI Is Coming For the Lawyers

WIRED

David Wakeling, head of London-based law firm Allen & Overy's markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system's developer, to run a small experiment. A handful of his firm's lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients. The trial started small, Wakeling says, but soon ballooned. Around 3,500 workers across the company's 43 offices ended up using the tool, asking it around 40,000 queries in total.


Generative Legal AI + 'The Last Human Mile' – Artificial Lawyer

#artificialintelligence

There has been a surge of interest in what generative AI can do. But what does this technology really mean for the legal sector? To find out we must navigate a path between 'Death of the Lawyer 2.0' hysteria and those who dismiss the whole thing as a gimmick. Artificial Lawyer looks at what this tech can really do. Generative AI (gen AI), working via Large Language Models such as OpenAI's GPT-3, can do some amazing things.


Harvey, which uses AI to answer legal questions, lands cash from OpenAI

#artificialintelligence

Harvey, a startup building what it describes as a "copilot for lawyers," today emerged from stealth with $5 million in funding led by the OpenAI Startup Fund, the tranche through which OpenAI and its partners are investing in early-stage AI companies tackling major problems. Also participating in the round was Jeff Dean, the lead of Google AI, Google's AI research division. Harvey was founded by Winston Weinberg, a former securities and antitrust litigator at law firm O'Melveny & Myers, and Gabriel Pereyra, previously a research scientist at DeepMind, Google Brain (another of Google's AI groups) and Meta AI. Weinberg and Pereyra are roommates -- Pereyra showed Weinberg OpenAI's GPT-3 text-generating system and Weinberg realized that it could be used to improve legal workflows. "Our product provides lawyers with a natural language interface for their existing legal workflows," Pereyra told TechCrunch in an email interview. "Instead of manually editing legal documents or performing legal research, Harvey enables lawyers to describe the task they wish to accomplish in simple instructions and receive the generated result."