In the last two years, large enterprise organizations have been scaling up their artificial intelligence and machine learning efforts. To apply models to hundreds of use cases, organizations need to operationalize their machine learning models across the enterprise. At the center of this scaling effort is ModelOp, a company that builds solutions to scale the processes that take models from the data science lab into production. Even before its recent $6 million Series A funding, led by Valley Capital Partners with participation from Silicon Valley Data Capital, ModelOp was already the leading provider of ModelOps solutions to Fortune 1000 companies. ModelOps is a capability that focuses on getting models into 24/7 production.
Back in 2008, theoretical physicist Stephen Hawking used a speech synthesizer program on an Apple II computer to "talk." He had to use hand controls to work the system, which became problematic as his case of Lou Gehrig's disease progressed. When he upgraded to a new device, called a "cheek switch," it detected when Hawking tensed the muscle in his cheek, helping him speak, write emails, or surf the Web. Now, neuroscientists at the University of California, San Francisco have come up with a far more advanced technology--an artificial intelligence program that can turn thoughts into text. In time, it has the potential to help millions of people with speech disabilities communicate with ease.
Stress can lead to poor decision-making, and people hunting for George Clooney's face could help us understand why. Thackery Brown at Stanford University, California, and his colleagues asked 38 people, with an average age of 23, to navigate looping paths around 12 different virtual towns in a simulated environment. Each town had just a few streets and took about a minute to navigate. The researchers also placed the face of a celebrity – George Clooney, for example – at a point along the route. The team then asked the participants to navigate the simulation again while lying inside a functional magnetic resonance imaging (fMRI) machine.
Hyunsoo Kim, a 29-year-old entrepreneur in South Korea, is on a mission to democratize artificial intelligence to enable more companies, both large and small, to utilize the emerging technology. So it's only fitting that Kim, cofounder of Superb AI, has been selected as the featured honoree for the Enterprise Technology category of this year's Forbes 30 Under 30 Asia list, leading a pack of several fellow honorees who founded startups based on AI. Since launching Superb AI in April 2018 with four cofounders, Kim has grown his startup to $2 million in revenues last year and 21 employees, fueled by increasing demand for AI. Profits are still in the future, but Superb AI also managed last year to join Y Combinator, a prominent Silicon Valley startup accelerator. So far, it has raised $2 million in funding from Y Combinator, Duke University and VC firms in Silicon Valley, Seoul and Dubai, giving it a valuation of $12 million as of March 2019.
Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real-time from a set of between 30 and 50 sentences. The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across a repeated set was as low as 3%.
In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients -- and identify patients who will need intensive care before their condition rapidly deteriorates. The challenge is not to build the algorithm -- the Stanford team simply picked an off-the-shelf tool already on the market -- but rather to determine how to carefully integrate it into already-frenzied clinical operations. "The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.
Researchers from the University of California, San Francisco, have developed a brain implant which uses deep-learning artificial intelligence to transform thoughts into complete sentences. The technology could one day be used to help restore speech in patients who are unable to speak due to paralysis. "The algorithm is a special kind of artificial neural network, inspired by work in machine translation," Joseph Makin, one of the researchers involved in the project, told Digital Trends. "Their problem, like ours, is to transform a sequence of arbitrary length into a sequence of arbitrary length." The neural net, Makin explained, consists of two stages.
The world is only just getting used to the power and sophistication of virtual assistants made by companies like Amazon and Google, which can decode our speech with eerie precision compared to what the technology was capable of only a few short years ago. In truth, however, a far more impressive and mind-boggling milestone may be just around the corner, making speech recognition seem almost like child's play: artificial intelligence (AI) systems that can translate our brain activity into fully formed text, without hearing a single word uttered. Brain-machine interfaces have evolved in leaps and bounds over recent decades, proceeding from animal models to human participants, and are, in fact, already attempting this very kind of thing. Just not with much accuracy yet, researchers from the University of California San Francisco explain in a new study. To see if they could improve upon that, a team led by neurosurgeon Edward Chang of UCSF's Chang Lab used a new method to decode the electrocorticogram: the record of electrical impulses that occur during cortical activity, picked up by electrodes implanted in the brain.
This article is made possible by Intel's GameDev BOOST program -- dedicated to helping indie game developers everywhere achieve their dreams. Throughout Kevin He's career in the tech and gaming industries, something has always bothered him. While rendering and other technologies evolved tremendously over time, animation woefully lagged behind. As an engineer, He wanted to find an efficient way to create more lifelike animations, and he was convinced that advanced physics simulation and AI could solve that problem. So in 2014, He struck out on his own to start DeepMotion. The goal of the San Mateo, California-based company is to provide developers with powerful software development kits (SDKs) that'll allow them to create realistic animations for games and other applications.
Have you ever wondered about this concept called mind reading? It might sound like a myth. But a team of scientists led by neurosurgeon Edward Chang of the University of California, San Francisco (UCSF) has developed an artificial intelligence (AI) system that can convert someone's thoughts into text. Their study was published in Nature Neuroscience. In the study, in which four patients with epilepsy wore the implants to monitor seizures caused by their medical condition, the UCSF team ran a side experiment: having the participants read and repeat a number of set sentences aloud, while the electrodes recorded their brain activity during the exercise.