Collaborating Authors: Hartung


Scientists unveil plan to create biocomputers powered by human brain cells: interview with Prof Thomas Hartung (senior author of the paper)

Robohub

Despite AI's impressive track record, its computational power pales in comparison with that of the human brain. Scientists unveil a revolutionary path to drive computing forward: organoid intelligence (OI), where lab-grown brain organoids serve as biological hardware. "This new field of biocomputing promises unprecedented advances in computing speed, processing power, data efficiency, and storage capabilities – all with lower energy needs," say the authors in an article published in Frontiers in Science. Artificial intelligence (AI) has long been inspired by the human brain. This approach proved highly successful: AI boasts impressive achievements – from diagnosing medical conditions to composing poetry.


From silicon to brain cells: How biology may hold the future of computers

#artificialintelligence

As artificial intelligence software and advanced computers revolutionize modern technology, some researchers see a future where computing leaps from silicon to organic molecules. Scientists at Johns Hopkins University are investigating the possibility of "biocomputers" – systems built from organic molecules such as human DNA or proteins – unlocking new insights into human biology and advancing the processing power of future tech. Much of this anticipation derives from "organoids," lab-grown tissues that resemble fully grown organs and share a similar biological complexity with the tissue of kidneys, lungs and the brain. Organoids, which have become more prominent in labs over the last two decades, currently offer scientists a more ethical alternative to animal or human testing, mimicking basic cell functions and advancing scientific understanding of how those cells operate. Most recently, scientists at Johns Hopkins have been assessing the nature of "brain organoids" – orbs the size of a pen dot that mirror the basic neural functions of learning and remembering in the human brain, according to a news release.


Move over, artificial intelligence. Scientists announce a new 'organoid intelligence' field

#artificialintelligence

Computers powered by human brain cells may sound like science fiction, but a team of researchers in the United States believes such machines, part of a new field called "organoid intelligence," could shape the future -- and now they have a plan to get there. Organoids are lab-grown tissues that resemble organs. These three-dimensional structures, usually derived from stem cells, have been used in labs for nearly two decades, where scientists have been able to avoid harmful human or animal testing by experimenting on the stand-ins for kidneys, lungs and other organs. Brain organoids don't actually resemble tiny versions of the human brain, but the pen dot-size cell cultures contain neurons that are capable of brainlike functions, forming a multitude of connections. Scientists call the phenomenon "intelligence in a dish."


Machine Learning of Toxicological Big Data Enables Read-Across Structure Activity Relationships (RASAR) Outperforming Animal Test Reproducibility – Toxicological Sciences, Oxford Academic

#artificialintelligence

Earlier we created a chemical hazard database via natural language processing of dossiers submitted to the European Chemicals Agency, covering approximately 10 000 chemicals. We identified repeat OECD guideline tests to establish the reproducibility of acute oral and dermal toxicity, eye and skin irritation, mutagenicity and skin sensitization. Based on 350–700 chemicals each, the probability that an OECD guideline animal test would output the same result in a repeat test was 78%–96% (sensitivity 50%–87%). An expanded database with more than 866 000 chemical properties/hazards was used as training data and to model health hazards and chemical properties. The constructed models automate and extend the read-across method of chemical classification. The novel models, called RASARs (read-across structure-activity relationships), use binary fingerprints and Jaccard distance to define chemical similarity. A large chemical similarity adjacency matrix is constructed from this similarity metric and is used ...
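The similarity machinery the abstract describes (binary fingerprints compared by Jaccard distance, assembled into a similarity adjacency matrix) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the fingerprint length and bit patterns below are invented:

```python
import numpy as np

def jaccard_distance(fp_a, fp_b):
    """Jaccard distance between two binary fingerprints:
    1 - |intersection| / |union| of the set bits."""
    a = np.asarray(fp_a, dtype=bool)
    b = np.asarray(fp_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # two all-zero fingerprints: treat as identical
    inter = np.logical_and(a, b).sum()
    return 1.0 - inter / union

def similarity_matrix(fingerprints):
    """Pairwise similarity (1 - Jaccard distance) adjacency matrix
    over a list of binary fingerprints."""
    n = len(fingerprints)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = 1.0 - jaccard_distance(fingerprints[i], fingerprints[j])
    return sim

# Three hypothetical chemicals encoded as 4-bit structural fingerprints
fps = [[1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 1, 0]]
sim = similarity_matrix(fps)
```

Chemicals 0 and 2 share an identical fingerprint, so their pairwise similarity is 1.0; a real RASAR pipeline would feed such an adjacency matrix into downstream hazard models rather than read it off directly.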


AI may soon save a ton of cute (and ugly) animals from drug testing

#artificialintelligence

As cold-blooded and inhuman as it may sound, animal tests are an integral part of modern-day drug and chemical compound development and approval procedures. Scientists still can't reliably predict the properties of new chemicals, let alone how these compounds might interact with living cells. But a new paper published in the research journal Toxicological Sciences shows that it is possible to predict the attributes of new compounds using the data we already have about past tests and experiments. The artificially intelligent system was trained to predict the toxicity of tens of thousands of unknown chemicals, based on previous animal tests, and the results are, in some cases, more accurate and reliable than real animal tests. Using AI in the drug development process is nothing new.


New Artificial Intelligence System Could End Animal Testing Forever

#artificialintelligence

A new computer system could spell the end for animal testing. The new system offers more accurate results than animal testing, predicting the toxicity of a substance almost immediately. It is also less expensive, less time-consuming, and poses much less of an ethical dilemma. "These results are a real eye-opener; they suggest that we can replace many animal tests with computer-based prediction and get more reliable results," Professor Thomas Hartung, the lead designer of the system, told the Financial Times. Hartung and his team of researchers used artificial intelligence to analyze the results of 800,000 tests on 10,000 different chemicals, held on a database.


AI is getting closer to replacing animal testing

#artificialintelligence

Scientists test new chemical compounds on animals because we still don't completely understand the world around us. But a study published in the research journal Toxicological Sciences shows that an artificial intelligence system might be able to automate some tests using the knowledge about chemical interactions we already have. The AI was trained to predict how toxic tens of thousands of unknown chemicals could be, based on previous animal tests, and the algorithm's results were shown to be as accurate as live animal tests. The algorithm can predict results for nine different tests, from skin corrosion to eye irritation, which the authors say comprised 57% of all animal testing done in the EU in 2011. This isn't the first computer system to try to predict whether a chemical will be toxic, but the scale of data that the system is able to use is novel.


Software beats animal tests at predicting toxicity of chemicals

#artificialintelligence

Computer programs can, in some cases, predict chemical toxicity as well as tests done on rats and other animals. Credit: Coneyl Jay/SPL

Machine-learning software trained on masses of chemical-safety data is so good at predicting some kinds of toxicity that it now rivals -- and sometimes outperforms -- expensive animal studies, researchers report. Computer models could replace some standard safety studies conducted on millions of animals each year, such as dropping compounds into rabbits' eyes to check if they are irritants, or feeding chemicals to rats to work out lethal doses, says Thomas Hartung, a toxicologist at Johns Hopkins University in Baltimore, Maryland. "The power of big data means we can produce a tool more predictive than many animal tests." In a paper published in Toxicological Sciences on 11 July, Hartung's team reports that its algorithm can accurately predict toxicity for tens of thousands of chemicals -- a range much broader than other published models achieve -- across nine kinds of test, from inhalation damage to harm to aquatic ecosystems. The paper "draws attention to the new possibilities of big data", says Bennard van Ravenzwaay, a toxicologist at the chemicals firm BASF in Ludwigshafen, Germany.


Big data beats animal testing for finding toxic chemicals - Futurity

#artificialintelligence

Scientists may be able to better predict the toxicity of new chemicals through data analysis than with standard tests on animals, according to a new study. The researchers say they developed a large database of known chemicals and then used it to map the toxic properties of different chemical structures. They then showed they could predict the toxic properties of a new chemical compound with structures similar to a known chemical, and do it more accurately than with an animal test. "A new pesticide, for example, might require 30 separate animal tests, costing the sponsoring company about $20 million…" The most advanced toxicity-prediction tool the team developed was on average about 87 percent accurate in reproducing consensus animal-test-based results across nine common tests, which account for 57 percent of the world's animal toxicology testing.
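The read-across idea described in this snippet, predicting a new chemical's hazard from already-tested chemicals with similar structures, can be illustrated with a toy nearest-neighbour vote. All fingerprints, labels, and the choice of k below are invented for illustration; the study's actual RASAR models are far more elaborate:

```python
import numpy as np

def jaccard_similarity(fp_a, fp_b):
    """Similarity = |intersection| / |union| of set bits (1 - Jaccard distance)."""
    a = np.asarray(fp_a, dtype=bool)
    b = np.asarray(fp_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

def read_across_predict(query_fp, known_fps, known_labels, k=3):
    """Majority vote over the k known chemicals most similar to the query."""
    sims = [jaccard_similarity(query_fp, fp) for fp in known_fps]
    nearest = np.argsort(sims)[::-1][:k]  # indices of the k highest similarities
    votes = [known_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical tested chemicals: binary fingerprint -> hazard label (1 = toxic)
known_fps = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
known_labels = [1, 1, 0, 0]
prediction = read_across_predict([1, 1, 0], known_fps, known_labels, k=3)
```

Here the query fingerprint is closest to the two toxic chemicals, so the vote predicts toxic; a real system would weight neighbours by similarity and calibrate against the repeat-test reproducibility figures reported above.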