
Neptune.ai Named to the 2022 CB Insights AI 100 List of Most Promising AI Startups - neptune.ai

#artificialintelligence

InstaDeep is an EMEA leader in delivering decision-making AI products. Leveraging their extensive know-how in GPU-accelerated computing, deep learning, and reinforcement learning, they have built products, such as the novel DeepChain platform, to tackle the most complex challenges across a range of industries. InstaDeep has also developed collaborations with global leaders in the AI ecosystem, such as Google DeepMind, NVIDIA, and Intel. They are part of Intel's AI Builders program and are one of only 2 NVIDIA Elite Service Delivery Partners across EMEA. The InstaDeep team is made up of approximately 155 people working across its network of offices in London, Paris, Tunis, Lagos, Dubai, and Cape Town, and is growing fast.


Doctors Are Very Worried About Medical AI That Predicts Race

#artificialintelligence

To conclude, our study showed that medical AI systems can easily learn to recognise self-reported racial identity from medical images, and that this capability is extremely difficult to isolate.


Google's DeepMind says it is close to achieving 'human-level' artificial intelligence

Daily Mail - Science & tech

DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI). Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI). AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training. According to de Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI. Earlier this week, DeepMind unveiled a new AI 'agent' called Gato that can complete 604 different tasks 'across a wide range of environments'. Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.


Using deep learning to predict physical interactions of protein complexes

AIHub

A 3D rendering of a protein complex structure predicted from protein sequences by AF2Complex. From the muscle fibers that move us to the enzymes that replicate our DNA, proteins are the molecular machinery that makes life possible. A protein's function depends heavily on its three-dimensional structure, and researchers around the world have long endeavored to answer a seemingly simple inquiry to bridge function and form: if you know the building blocks of these molecular machines, can you predict how they are assembled into their functional shape? This question is not so easy to answer. With complex structures dependent on intricate physical interactions, researchers have turned to artificial neural network models – mathematical frameworks that convert complex patterns into numerical representations – to predict and "see" the shape of proteins in 3D.


Research Engineer - Scalable Alignment

#artificialintelligence

At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know. At DeepMind, we've built a unique culture and work environment where long-term ambitious research can flourish. Our special interdisciplinary team combines the best techniques from deep learning, reinforcement learning and systems neuroscience to build general-purpose learning algorithms.


AI recognition of patient race in medical imaging: a modelling study

#artificialintelligence

Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we first quantified the performance of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
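
The study's models and several of its datasets are not publicly runnable, but the evaluation pattern the abstract describes, scoring a fixed classifier separately on strata of a hypothesised confounder and on progressively corrupted images, can be sketched as follows. Everything here (the model, the data, the corruption function) is a synthetic placeholder, not the authors' code.

# Hypothetical sketch of the evaluation pattern described above:
# score a fixed binary classifier separately on dataset strata and
# on images degraded by increasing corruption. All names and data
# are placeholders, not the study's actual code or datasets.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def predict_proba(images):
    """Placeholder for a trained model's predicted probabilities."""
    return 1 / (1 + np.exp(-images.mean(axis=(1, 2))))

def corrupt(images, severity):
    """Toy corruption: additive Gaussian noise scaled by severity."""
    return images + rng.normal(0, severity, images.shape)

# Synthetic stand-ins: 1,000 "images", binary labels, and one
# hypothesised confounder stratum per image.
images = rng.normal(0, 1, (1000, 32, 32))
labels = rng.integers(0, 2, 1000)
images += labels[:, None, None] * 0.1          # inject a weak signal
strata = rng.integers(0, 3, 1000)              # 3 confounder bins

# 1) Performance within each stratum of the hypothesised confounder.
for s in np.unique(strata):
    mask = strata == s
    auc = roc_auc_score(labels[mask], predict_proba(images[mask]))
    print(f"stratum {s}: AUC = {auc:.3f}")

# 2) Performance as image corruption increases.
for severity in (0.0, 0.5, 1.0, 2.0):
    auc = roc_auc_score(labels, predict_proba(corrupt(images, severity)))
    print(f"noise severity {severity}: AUC = {auc:.3f}")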


What is Deep Learning?

#artificialintelligence

What do you achieve with deep learning? Deep learning is a part of our daily life. For example, when you upload a photo to Facebook, deep learning helps by automatically tagging your friends. If you use digital assistants like Siri, Cortana or Alexa, they serve you with the help of natural language processing and speech recognition. When you meet with overseas customers on Skype, it translates your conversation in real time.


La veille de la cybersécurité

#artificialintelligence

Machines don't always understand what we want from them. Can new language models teach them to read between the lines? If artificial intelligence is intended to resemble a brain, with networks of artificial neurons substituting for real cells, then what would happen if you compared the activities in deep learning algorithms to those in a human brain? Last week, researchers from Meta AI announced that they would be partnering with neuroimaging center Neurospin (CEA) and INRIA to try to do just that. Through this collaboration, they're planning to analyze human brain activity and deep learning algorithms trained on language or speech tasks in response to the same written or spoken texts.
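
No analysis code accompanies the announcement, so the sketch below shows just one common way to compare two representations of the same stimuli, representational similarity analysis, using synthetic stand-ins for model activations and brain recordings. It is an illustrative assumption, not the collaboration's actual pipeline.

# Hypothetical sketch of representational similarity analysis (RSA):
# compare how similarly a model and a brain recording "arrange" the
# same stimuli. All data here are synthetic placeholders; the actual
# Meta AI / NeuroSpin / INRIA analysis is not reproduced here.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 50                                   # e.g. 50 spoken sentences
model_acts = rng.normal(size=(n_stimuli, 768))   # model embeddings
brain_resps = rng.normal(size=(n_stimuli, 200))  # recorded responses

# Representational dissimilarity structure: pairwise distances
# between stimuli in each space (condensed form).
rdm_model = pdist(model_acts, metric="correlation")
rdm_brain = pdist(brain_resps, metric="correlation")

# Rank-correlate the two structures: a higher rho means the model and
# the brain separate the same stimuli in similar ways.
rho, p = spearmanr(rdm_model, rdm_brain)
print(f"RSA similarity: Spearman rho = {rho:.3f} (p = {p:.3g})")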


How Is Artificial Intelligence Being Used for Advance Research on Cancer?

#artificialintelligence

AI for Cancer Treatment: Cancer is one of the most dangerous diseases in the world, and every day people are looking for ways to cure it. AI and its various applications are reshaping the way scientists and researchers approach cancer treatment. Tumors are very complex: their behavior is difficult to study, which makes treatment all the more difficult.


Deep Learning: Types and Applications in Healthcare

#artificialintelligence

Deep learning (DL), also known as deep structured learning or hierarchical learning, is a subset of machine learning. It is loosely based on the way neurons connect to each other to process information in animal brains. To mimic these connections, DL uses a layered algorithmic architecture known as artificial neural networks (ANNs) to analyze the data. By analyzing how data is filtered through the layers of the ANN and how the layers interact with each other, a DL algorithm can 'learn' to make correlations and connections in the data. These capabilities make DL algorithms an innovative tool with the potential to transform healthcare.
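
To make the 'layered algorithmic architecture' concrete, here is a minimal two-layer feedforward network in NumPy; it is a generic sketch of how one layer's output feeds the next, not healthcare-specific or production code.

# Minimal illustration of the layered ANN architecture described above:
# each layer applies a weighted sum followed by a nonlinearity, and the
# output of one layer feeds the next. Generic sketch only.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny 2-layer network: 8 input features -> 16 hidden units -> 1 output.
W1 = rng.normal(0, 0.1, (8, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1))
b2 = np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)        # first layer transforms the raw features
    return sigmoid(hidden @ W2 + b2)  # second layer combines hidden features

# One forward pass over a batch of 4 synthetic records.
batch = rng.normal(size=(4, 8))
print(forward(batch))                 # 4 outputs between 0 and 1

A real DL system differs mainly in scale and training: many more layers and units, with weights learned from data by backpropagation rather than drawn at random.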